HPE Reference Architecture for VMware Horizon on HPE Hyper Converged 380





Contents

Executive summary
Introduction
Solution overview
Solution components
  Hardware
    Graphics hardware
    Storage
  Software
    Management software layer
    Application software
    Storage layer
    End User Computing
Best practices and configuration guidance for the solution
  Physical cabling
  Solution network, VLANs, and configuration
Capacity and sizing
  About Login VSI
  Testing strategy
  Boot tests
  Analysis and recommendations
Summary
Appendix A: Bill of materials
Appendix B: Configuration adjustments
  VDI specific configuration steps with NVIDIA GRID graphics accelerators
Appendix C: Scaling the HPE HC 380 for VDI
Resources and additional links


Executive summary

In today's on-demand world, end-user expectations have reached new heights. End users want seamless, functional access to all of their personal and corporate applications and data, regardless of their endpoint device. With this level of expectation now the status quo, IT requirements have become complex and implementations have become expensive. Now that high performance and high availability must be balanced with simple, secure access across devices, many enterprises struggle to find IT resources with the right expertise, and cost-efficient solutions to justify such investments.

Enter the era of hyper convergence (HC). By breaking down the physical infrastructure of traditional data centers and consolidating previously disparate components, HC has removed layers of technical complexity. Running in parallel to this hardware consolidation trend is the emergence of virtualized, software-defined solutions (mostly storage and networking) that help simplify day-to-day management.

With IT spending trends often coming under the watchful eye of CxOs, hyper convergence is serving as a technical catalyst for enterprise infrastructures, allowing them to leverage the benefits delivered by HC systems. Customers are beginning to see that deploying remote virtual desktops and applications (VDI, RDSH, session/app virtualization) is simpler than ever, requiring fewer dedicated technical support personnel with less training and expertise. By reducing complexity and high costs, the hyper converged model is opening up previously implausible end-user computing scenarios to new segments of the market, primarily the mid-market and SMB.

Separate from, but equally as important as, the obstacles of management complexity and unpredictable capital costs was the challenge of widespread user acceptance. End users want simple, wide-ranging access with performance comparable to their desktop PCs. To accommodate this, Hewlett Packard Enterprise needed an HC solution that yielded faster processing speeds and adequate capacity with predictable scaling, while also offering increased system flexibility and configurability.

This document presents a solution with VMware® Horizon® virtual desktops on an HPE next-generation data center-in-a-box, with emphasis on simplifying the deployment of virtual desktops on an appliance that integrates storage, networking, and computing into each node. HPE tested the virtual desktops on an HPE Hyper Converged 380 (HC 380) appliance, leveraging HPE best-in-class technologies: the HPE StoreVirtual Virtual SAN Appliance (VSA) and the HPE OneView InstantOn (OVIO) startup and expansion wizard. With Horizon running on ESXi 6.0, NVIDIA® GRID vGPU technology can now be integrated to deliver powerful graphics rendering capabilities across multiple desktops.

This solution demonstrates a truly integrated end-user computing solution that can:

• Decrease the cost of infrastructure without compromising upon service level agreements.

• Provide tested user density figures as sizing samples and to enable price-per-user modeling exercises.

• Expedite virtual and software layer provisioning by providing a sample configuration with guidance for similar deployments.

• Deliver acceptable user experience to satisfy demands of remote users and improve user acceptance.

• Demonstrate design principles that support a multitude of virtual workloads, including persistent and non-persistent desktops and applications and GPU-enabled desktops, and deliver combinations of these workloads simultaneously from the same individual clusters.

• Accelerate and streamline desktop deployments with integrated components designed for end-user computing.

• Reduce risk with pre-tested server, storage and network configurations on systems that are workload-optimized for Virtual Desktop Infrastructure (VDI).

• Streamline operations by automating configuration tasks for administrative staff using HPE OVIO and intuitive wizards.

• Provide powerful virtualized 3D graphics to multiple desktops at once without having to redesign your solution.

Target audience: This document is intended for IT professionals who use, program, manage, or administer VDI implementations that require high availability and high performance while meeting a cost-effective price point. Specifically, this information is intended for those who evaluate, recommend, design, or implement new IT architectures for virtualization. Additionally, anybody who wants to understand the Hewlett Packard Enterprise approach to end-user computing workloads on hyper converged platforms, or the specific value HPE provides with its validated hyper converged HC 380 solution, would be well served by reading this document. The reader should have a solid understanding of end-user computing, familiarity with the VMware Horizon suite and VMware vSphere® products, and an understanding of user sizing/characterization concepts and associated limitations within end-user computing environments.


Document purpose: The purpose of this document is to provide an overview of the validation testing completed using the HPE HC 380 solution as a platform for running VMware Horizon 6.2.2, and to document recommendations for a successful implementation.

HPE HC 380 and VMware Horizon installations are standard configurations except where explicitly stated in the reference architecture.

This white paper describes testing performed in February and March of 2016.

Disclaimer: Products sold prior to the separation of Hewlett-Packard Company into Hewlett Packard Enterprise Company and HP Inc. on November 1, 2015 may have a product name and model number that differ from current models.

Introduction

There are a variety of business drivers for implementing hyper convergence as a resolution to the many technical shortcomings of traditional remote desktop and application delivery methods.

Two of the most prominent business-focused drivers are: 1) end-user productivity and 2) corporate data security. For many years these market dynamics worked in contradiction to each other; one demanded mass distribution of corporate data, while the other impressed the importance of user authentication and security best practices. Finding the perfect harmony was easier conceptually than in reality, and it became even more challenging when budgetary concerns were also evaluated. Finding a solution that accomplished all three objectives (increased productivity, with uncompromised security, at an acceptable cost) was often complex and undermined by the exorbitant associated operational costs, often requiring increased IT staffing, training, and time.

With recent technical advances in both hardware and software, the relationship between IT and the end user has become far more harmonious and, more importantly, cost-effective. Taking this one step further, the HC 380 provides not only the all-inclusive hardware components within each system to deliver a productivity-stimulating user experience across many devices, but also built-in high availability to deliver the security assurances many organizations require.

With a foundation designed on the principles of "Intuitive, Affordable, Smart, and Integrated", the HC 380 can serve as the Swiss Army knife of the IT department, capable of satisfying requirements for multiple user segments and delivery models. The HC 380 enables IT to deliver the required resources to the correct users with the proper balance of security and availability, while maintaining a cost-per-user model that satisfies the line-of-business owners approving the purchase orders.

Built upon the HPE ProLiant DL380, the virtualized HC 380 and all of its hardware and software components can be procured from a single provider: HPE. By leveraging HPE servers, storage, and networking with hypervisor and broker solutions from VMware, the HC 380 appliance offers a highly available virtualized server and storage infrastructure that can be configured in minutes. By delivering a workload-optimized platform with pre-integrated software, customers can quickly procure, deploy, and run virtual desktops and/or applications in a cost-effective and simple-to-manage fashion. This allows a single IT generalist (versus various specialists) to tackle day-to-day management and deliver an end-user experience that satisfies end users and IT staff alike, and it frees up IT leaders to focus on innovation rather than simply "keeping the lights on".

Included in the testing of VMware Horizon 6.2 on the HC 380, HPE utilized VMware App Volumes® for simple, rapid delivery of applications to non-persistent desktops. App Volumes simplifies the isolation, delivery, and management of business-critical applications, independent of many common OS or system compatibility issues. Administrators can quickly package their applications within vSphere and deploy them in minutes to their users using the App Volumes Manager. This ability to deploy and update applications quickly empowers administrators to provide specific applications to users in real time, reducing the time to deploy applications from hours to seconds. Additionally, App Volumes integration with VMware Horizon View® ensures the right application access is assigned to the right end user, via role-based authentication.

The validation testing in this reference architecture (RA) focused on demonstrating the viability and potential user density of most user segments as defined by Login VSI (Task, Office, Knowledge, Power) running on HC 380 nodes. To showcase the potential capabilities of the solution, HPE tested not only a wide range of user segments but also multiple deployment models: persistent fully provisioned desktops, non-persistent linked clones, and even GPU-driven power-user workloads. To showcase the latest advancements in HPE hyper converged infrastructure solutions and virtualization technologies, HPE simultaneously tested a mix of all Login VSI worker types. The result is delivery of the first "mixed-user validation" for a hyper converged solution running VDI. HPE feels this test model better reflects real-world implementations of end-user computing.


For client virtualization, desktop and application delivery can vary based on use case requirements that range from Task workers to Workstation users. Figure 1 below shows a client virtualization technology landscape as it exists today.

Figure 1. The client virtualization technology landscape as it exists today

As represented above, the virtualization landscape today spans multiple use cases and delivery methods, ranging from simple application access and session-based desktops all the way up to hardware-based GPU-accelerated VDI environments that serve the needs of the most demanding graphic designers and engineers. VMware Horizon supports every flavor of desktop and application virtualization delivery model to meet the needs and user experience for every user in the organization.

Solution overview

The HPE Hyper Converged 380 used for this validation testing was a four (4) node hyper converged appliance that offers highly available server and storage infrastructure and can be deployed and configured, including the HPE HC 380 specific management pieces, in about 1 hour with a minimal number of clicks. The HPE Hyper Converged 380 comes complete with all server, storage, networking, and management tools needed to begin a deployment, and can be configured to support from two (2) to sixteen (16) nodes per management group. Because many environments require several clusters or management groups, the HC 380 utilizes the HPE OneView InstantOn software to enable rapid expansion of appliances and facilitation of simple, cost-effective, and linear growth per management group. From a software perspective, the HC 380 is pre-configured for vSphere 6.0 and includes API integration via the HPE OneView for VMware vCenter® plug-in to facilitate simplistic platform management and solution deployment. With VMware Horizon licensing applied, the HPE HC 380 platform serves as an ideal solution for customers looking for the rapid deployment and expansion of a wide range of end-user computing solutions.

HPE validated the use of VMware Horizon 6.2 on the HPE Hyper Converged 380 using Login VSI 4.1. Figure 2 below presents a high-level overview of the architecture used in this solution testing. Items within the purple box were fully validated. HPE tested a LAN-based solution, but the solution supports external access, as noted in the diagram, using VMware Horizon View Security Servers and load balancing available from a variety of HPE and VMware partners. Virus protection and security measures are available from a variety of partners in the market. HPE does not make a recommendation on which solution to use; examine the available options and pick the one that best fits your business objectives.


Figure 2. Solution components built on HPE HC 380

The specific solution components tested are listed in the next section, including software and firmware versions for all major solution pieces.


Solution components

Hardware

For this reference architecture, an HPE HC 380 configured for VDI with 16-disk hybrid storage and optional host-cache SSDs was deployed. Figures 3 and 4 show the physical layout and components of the HPE HC 380 solution.

Figure 3. HPE HC 380 server front views, with and without the bezel

Figure 4. Rear view of the HPE HC 380 as configured for this reference architecture


Detailed specifications for the HPE HC 380 environment utilized during testing can be found in Appendix A: Bill of materials. The high-level details are listed below in Table 1.

Table 1. HPE HC 380 hardware components (quantities are per node)

Component | Description
CPU | 2 – Intel Xeon E5-2690v3 (2.6GHz/12-core/30MB/135W) Processors
Memory | 16 – HPE 32GB (1x32GB) Dual Rank x4 DDR4-2133 CAS-15-15-15 Memory
1Gb Networking | 1 – HPE Ethernet 1Gb 4-port 331FLR Adapter
10Gb Networking | 1 – HPE Ethernet 10Gb 2P 560FLR-SFP+ Adapter
1.2TB HDD Storage | 12 – 1.2TB 12G SAS 10K rpm SFF (2.5-inch) Hard Drive disk packs
800GB Write Intensive SSD Storage | 4 – 800GB 6G SATA Write Intensive-2 SFF 2.5-in SC 3yr Wty Solid State Drive
240GB SSD* (Optional) | 2 – 240GB 6G SATA Read Intensive-2 SFF 2.5-in Solid State Drive

*The 240GB SSDs are not used by the VSA. They are an optional component for customers wishing to utilize a flash layer for host caching in VMware.
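To put the per-node quantities in Table 1 into aggregate terms for a whole appliance, the short Python sketch below simply multiplies them out; the constants come from Table 1, while the function and dictionary names are purely illustrative:

```python
# Aggregate resources for an HPE HC 380 appliance, computed from the
# per-node quantities in Table 1. Function and dict names are illustrative.
NODE = {
    "cores": 2 * 12,       # 2x Intel Xeon E5-2690v3, 12 cores each
    "memory_gb": 16 * 32,  # 16x 32GB DDR4-2133 DIMMs
    "hdd_gb": 12 * 1200,   # 12x 1.2TB 10K SAS HDDs
    "ssd_gb": 4 * 800,     # 4x 800GB write-intensive SSDs
}

def appliance_totals(nodes: int) -> dict:
    """Raw (pre-replication) totals; the HC 380 supports 2-16 nodes per
    management group."""
    if not 2 <= nodes <= 16:
        raise ValueError("HC 380 management groups span 2 to 16 nodes")
    return {k: v * nodes for k, v in NODE.items()}

print(appliance_totals(4))
# -> {'cores': 96, 'memory_gb': 2048, 'hdd_gb': 57600, 'ssd_gb': 12800}
```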

Additionally, the test environment utilized network, rack, and power components – external to the HPE HC 380 appliance – as listed below in Table 2.

Table 2. HPE switching, rack, and power components used for the tests

Hardware | Quantity
HPE 5900AF-48XG-4QSFP+ Switch (JC772A) | 2
HPE 5900AF-48G-4XG-2QSFP+ Switch (JG510A) | 2
HPE 42U 600x1200mm Enterprise Shock Rack | 1
HPE 14.4kVA 208V 50A 3Ph NA/JP ma PDU | 1


Figure 5 shows the layout of the test environment and the relationship of the components used. Note that the rack, power, and switching components are independent of the solution, but are recommended by HPE.

Figure 5. Front and rear view of HPE HC 380, including front and rear view of the HPE 5900AF-48XG-4QSFP+ Switches


Graphics hardware

The HPE HC 380 allows for the inclusion of an NVIDIA GRID K1 or NVIDIA GRID K2 graphics accelerator. These systems are shipped pre-installed with the proper firmware in place but require driver installation. Table 3 below shows the software and firmware versions required for the NVIDIA cards. Extended testing for this solution included validation of the NVIDIA GRID K2 graphics accelerator in vGPU mode.

Table 3. NVIDIA GRID driver and firmware stacks for the HPE HC 380 solution

Component | Version | Notes
NVIDIA GRID K1 VBIOS | 80.07.DC.00.05-08 | Installed prior to shipment
NVIDIA GRID K1 PLX | F0.47.2A.00.C0 | Installed prior to shipment
NVIDIA GRID drivers for ESXi 6.0 | 352.70 | Can be downloaded from nvidia.com
NVIDIA GRID K2 RAF VBIOS | 80.04.F5.00.03/04 | Installed prior to shipment
NVIDIA GRID K2 RAF PLX | F0.47.2E.00.C0 | Installed prior to shipment

Note
See Appendix B: Configuration adjustments for the VDI configuration steps to perform prior to running OVIO, including specific instructions on installing the NVIDIA drivers.

Storage

Storage is one of the most critical components in any VDI solution, requiring a balance of cost, capacity, performance, and efficiency. The HPE HC 380 addresses these issues with storage clustering. HPE StoreVirtual storage delivers the performance advantages of a purpose-built, flash-optimized architecture without compromising data resiliency, efficiency, or data mobility.

The HPE StoreVirtual VSA is the basis for the storage layer in the HPE HC 380, allowing a customer to consolidate multiple independent nodes into virtual pools of highly available, shared storage. All available capacity and performance is aggregated and becomes available to every system within the cluster, as well as being exportable via iSCSI to external systems. As storage needs increase, the HPE HC 380 can scale performance and capacity completely online. Each time new nodes or systems are added to an HPE HC 380 StoreVirtual environment, the capacity, performance, and redundancy of the entire storage solution increases in a linear fashion.
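As a rough illustration of how that aggregation interacts with replication, the sketch below estimates usable capacity for a cluster whose volumes use Network RAID-10 (two copies of every block across nodes), as the tested volumes later in this document do. It ignores VSA overhead and metadata, so treat it as a planning approximation only:

```python
# Rough usable-capacity estimate for a StoreVirtual VSA cluster whose
# volumes use Network RAID-10 (two copies of every block across nodes).
# Ignores VSA overhead/metadata; a planning approximation only.
def usable_tb(nodes: int, hdd_tb: float = 12 * 1.2, ssd_tb: float = 4 * 0.8) -> float:
    raw = nodes * (hdd_tb + ssd_tb)  # per-node raw capacity from Table 1
    return raw / 2                   # Network RAID-10 halves raw capacity

print(f"{usable_tb(4):.1f} TB usable")  # -> 35.2 TB for the 4-node test rig
```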

Figure 6 below shows the configuration of the storage infrastructure as demonstrated in the mixed workload test case later in this document. In this diagram, the VSA is shown as a virtual machine with control of disks on the physical servers. The disks are owned by the VSA. Volumes are created and data is replicated to other nodes as it is written to each volume based on customer selected Network RAID levels.

Figure 6. HPE HC 380 storage allocation for the mixed workload test


Software

Management software layer

The HPE HC 380 is a solution defined by its software stack. As such, software plays a crucial role in the scalability and performance of the overall system. Table 4 below highlights the versions of HPE software included with the tested solution.

Table 4. HPE software specifications

Software | Version
HPE OneView InstantOn (OVIO) | 1.3.1
HPE StoreVirtual Centralized Management Console (CMC) | 12.5
HPE StoreVirtual Virtual SAN Appliance (VSA) | 12.5
HPE OneView for VMware vCenter | 7.8.2
HPE HC 380 Management UI | 1.0

As seen in Table 4, the HC 380 solution includes centralized solution management software as part of the core platform to simplify and expedite deployment, day-to-day administration, and troubleshooting of HPE solutions. OVIO is designed to speed customers' time-to-value by automating the installation and expansion processes and reducing the clicks they require. CMC is included to provide remote, granular control, updating, and optimization of the virtual storage environment across multiple sites. CMC leverages the StoreVirtual VSA to deliver software-defined storage capabilities onboard the HC 380 (and most any other x86 system in the environment), allowing IT to use internal HDDs/SSDs to deliver data services on par with an enterprise-class SAN, from 99.9% HA and DR to sub-volume tiering automation.

Also designed to integrate with VMware, HPE OneView for vCenter is a plug-in for VMware vCenter that serves as an intelligent bridge between vCenter, HPE infrastructure, and HPE OneView. HPE OneView for vCenter provides server visibility directly from VMware vCenter, so administrators can use the familiar VMware management tool to provision, monitor, update, and scale HPE compute, storage, and network resources without leaving the vCenter console. By delivering the capabilities of HPE OneView into vCenter, administrators can monitor health, configurations, and capacity, and even view a visual mapping of virtualized workloads to physical resources, making it possible to troubleshoot network problems in seconds instead of hours. Rather than viewing physical and virtual infrastructure as two distinct entities, the HC 380 solution stack uses HPE OneView for VMware vCenter to manage both environments as one. By providing detailed insight into the relationship between your physical and virtual infrastructures, HPE OneView for VMware vCenter automates tracking, enhances management productivity, and helps you proactively manage change, leading to a higher overall quality of service.

Application software

For this reference architecture, the solution software layer consists of the software directly applicable to the End User Computing (EUC) functions of the solution; it does not include the Microsoft® Windows® operating system versions, which are referenced in Tables 6 and 9 in this document. Table 5 below highlights the VMware software used to complete the testing and validation of this solution.

Table 5. Software specifications

Software | Version
VMware Horizon Connection Server | 6.2.2
VMware Horizon View Composer | 6.2.2
VMware App Volumes Manager | 2.10
VMware vSphere 6.0 | 6.0 U1
VMware vCenter Server 6.0 | 6.0 U1

A variety of virtual machines related to End User Computing and running the software defined in Table 5 were deployed in the creation of this reference architecture.


Table 6 details the configuration of each VM. Desktop VM counts varied based on the test conducted. Management VMs were protected by the highly available configuration of the platform. In larger scale implementations, it is expected that the design will follow VMware best practices for deploying redundant management infrastructure, which can be found in documentation on the VMware website at vmware.com/files/pdf/techpaper/VMware-PerfBest-Practices-vSphere6-0.pdf.

Table 6. Virtual machine specifications

Virtual Machine (VM) | vCPU | Memory | VHD | Networks | Number of VMs | Operating System (OS)
VMware Horizon Connection Server | 4 | 16GB | 60GB | Production | 2 | Windows Server® 2012 R2 Standard
VMware Horizon View Composer | 4 | 12GB | 60GB | mgmtVMNetwork, Production | 1 | Windows Server 2012 R2 Standard
HPE HC 380 Management VM | 4 | 16GB | 70GB | mgmtVMprivate, mgmtVMNetwork, VSAeth0, VM Network | 1 | Windows Server 2012 (provided as part of the solution)
VMware Horizon View Composer Database (Microsoft SQL Server 2014 Standard)1 | 4 | 12GB | 100GB | mgmtVMNetwork | 1 | Windows Server 2012 R2 Standard
VMware App Volumes (Microsoft SQL Server 2014 Standard)2 | 2 | 8GB | 100GB | Production | 1 | Windows Server 2012 R2 Standard
VMware App Volumes Management Server | 4 | 8GB | 50GB | mgmtVMNetwork, Production | 1 | Windows Server 2012 R2 Standard
Win7 image templates | 1-2 | 1.5-2GB | 38GB | mgmtVMNetwork | 3 | Windows 7 Enterprise x64

Storage layer

The storage layer includes the HPE StoreVirtual Virtual SAN Appliance (VSA), the HPE StoreVirtual Centralized Management Console (CMC), and HPE OneView for VMware vCenter. HPE OVIO was used to create a highly redundant cluster of shared storage, utilizing the local SSDs and HDDs from the servers to create a single storage pool. Adaptive Optimization technology is enabled by default on this storage to facilitate tiering between the SSD and rotating-media layers, which optimizes performance based on the frequency with which blocks are accessed. In end-user computing environments, this translates to faster boot and recovery times as well as a greatly enhanced user experience.
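The idea behind such frequency-based tiering can be pictured with a toy model: the most frequently accessed blocks are promoted to the SSD tier and the remainder stay on rotating media. The snippet below is a conceptual sketch only, not HPE's Adaptive Optimization algorithm:

```python
# Toy model of sub-volume tiering: the most frequently accessed blocks are
# promoted to SSD, the rest stay on HDD. This is NOT HPE's Adaptive
# Optimization algorithm, only an illustration of the idea behind it.
from collections import Counter

def place_blocks(access_log: list, ssd_capacity_blocks: int) -> dict:
    freq = Counter(access_log)  # access count per block id
    hot = {block for block, _ in freq.most_common(ssd_capacity_blocks)}
    return {"ssd": hot, "hdd": set(access_log) - hot}

print(place_blocks([1, 1, 1, 2, 3, 3, 4], ssd_capacity_blocks=2))
# -> {'ssd': {1, 3}, 'hdd': {2, 4}}
```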

The HPE HC 380 is also VMware certified for multi-site disaster recovery (DR), delivering business continuity with failover that is transparent to users and applications. The multi-site configuration maintains data availability beyond a single physical or logical site, and validates full compatibility with VMware high availability (HA) and fault tolerance (FT) features. Administrators can add capacity, increase performance, and grow and migrate volumes between HPE HC 380 clusters on the fly with no application downtime. Refer to the HPE HC 380 User Guide at hpe.com/us/en/integrated-systems/hyper-converged for more in-depth information.

End User Computing

End User Computing (EUC) based on VMware Horizon 6 delivers hosted virtual desktops and applications to end users through a single platform. These desktop and application services, including a mix of Virtual Desktop Infrastructure (VDI), hosted applications, and cloud-based application delivery, can all be accessed from one unified workspace to provide end users with all of the resources they want, at the speed they expect, across devices, locations, media, and connections. For this reference architecture, HPE focused on a number of methodologies for achieving the goal of delivering an end-user workspace leveraging VMware Horizon.

1 While a local SQL Server VM was included for testing purposes, it is highly recommended that you take advantage of existing SQL Server clusters that follow Microsoft SQL Server best practices for security and availability to host the necessary databases for this solution. These practices can be found in documentation on VMware's website at vmware.com/files/pdf/solutions/SQL_Server_on_VMware-Best_Practices_Guide.pdf.

2 See footnote 1 above.


These methodologies are highlighted in Table 7 below.

Table 7. EUC methodologies

Method | Description | How HPE Implemented
Linked Clones | A copy of a virtual machine that shares virtual disks with the parent virtual machine in an ongoing manner. | HPE implemented Linked Clones as a use case for Login VSI Office workers.
App Volumes | Real-time application delivery and lifecycle management tool that ensures applications are centrally managed and delivered to desktops through virtual disks. | HPE implemented App Volumes along with Linked Clones to create desktops that provide many of the benefits of persistent VMs to users while giving IT administrators the benefits that come with single image management.
Hosted Desktop | An instance of a desktop operating system that runs on a centralized server. Access and control is provided to the user by a client device connected over a network. Multiple host-based virtual machines can run on a single server. | HPE used VMware Horizon View to broker sessions to Microsoft Remote Desktop Session Host hosted desktops for Login VSI Task workers.
Fully Provisioned VMs | A copy of a virtual machine that runs on unique, individual virtual disks, and is entirely separate from the parent virtual machine. | HPE used fully provisioned and dedicated virtual machines to assign compute resources to Login VSI Power workers.
Graphics Enabled Users | Users connected to a virtualized environment that provides rich graphics, via NVIDIA GPU cards enabled for this purpose. | HPE validated the functionality of vGPU-enabled graphics users with fully provisioned VMs utilizing NVIDIA GRID K2 graphics accelerators and Login VSI Power workers.

The following Horizon components are leveraged as part of the architecture:

View Connection Server: End users connect through View Connection Servers to securely and easily access their personalized virtual desktops. The View Connection Server acts as a broker for client connections by authenticating and directing incoming user desktop requests.

View Security Server: A View Security Server is an instance of View Connection Server that adds an additional layer of security between the Internet and your internal network. Outside the corporate firewall, in the DMZ, you can install and configure View Connection Server as a View Security Server. Security servers in the DMZ communicate with View Connection Servers inside the corporate firewall. Security Servers ensure that the only remote desktop traffic that can enter the corporate data center is traffic on behalf of a strongly authenticated user. Users can access only the desktop resources for which they are authorized.

View Composer Server: View Composer Server is an optional service that enables you to manage pools of “like” desktops, called linked clone desktops, by creating master images that share a common virtual disk. Linked-clone desktops are one or more copies of a master image that share the virtual disks of the parent, but which operate as individual virtual machines. Linked-clone desktops can optimize your use of storage space and facilitate updates. You can make changes to a single master image through the vSphere Client. These changes trigger View Composer Server to apply the updates to all cloned user desktops that are linked to that master image, without affecting users’ settings or personal data.

View Agent: The View Agent service communicates between virtual machines and Horizon Client. You must install the View Agent service on all virtual machines managed by vCenter Server so that the View Connection Server can communicate with them. View Agent also provides features such as connection monitoring, virtual printing, persona management, and access to locally connected USB devices. View Agent is installed on the guest operating system of the virtual machine in the data center.

App Volumes Manager: A Windows Server system used as the Web Console for administration and configuration of App Volumes and assignment of AppStacks and writable volumes. App Volumes Manager is also used as a broker for the App Volumes agents, for automated assignment of applications and writable volumes during desktop startup and/or user login.

App Volumes Database: A Microsoft SQL Server (production) or SQL Server Express (non-production) database that contains configuration information.

App Volumes Agent: Software installed on all Windows desktops where users receive AppStack volumes and writable volume assignment. The agent runs as a service and utilizes a filter driver to handle application calls and file system redirects to AppStack and writable volume VMDKs. Windows desktops do not have to be members of the domain on which the App Volumes Manager server resides.

AppStack Volume: A read-only volume containing any number of Windows applications. Multiple AppStacks can be mapped to an individual system or user. An individual AppStack can also be mapped to more than one system or user.
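The many-to-many relationship between AppStacks and users described above can be pictured as a simple assignment table. The following toy model (all names are hypothetical) shows how "which AppStacks attach at this user's login" can be resolved from such a table:

```python
# Toy model of the AppStack-to-user mapping described above: each AppStack
# (a read-only volume) can be assigned to many users, and each user can
# receive many AppStacks. All names are hypothetical.
ASSIGNMENTS = {
    "office_2013": {"alice", "bob"},
    "engineering_tools": {"bob"},
}

def appstacks_for(user: str) -> list:
    """AppStacks the App Volumes agent would attach at this user's login."""
    return sorted(stack for stack, users in ASSIGNMENTS.items() if user in users)

print(appstacks_for("bob"))  # -> ['engineering_tools', 'office_2013']
```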


Best practices and configuration guidance for the solution

This section outlines the cabling and network design of the solution stack.

Physical cabling

HPE HC 380 and HPE 5900 switches

Figure 7 below shows the physical cabling of the HPE HC 380 as tested for this reference architecture. Network function and speed are outlined within the diagram.

Figure 7. Compute switches and cabling


Table 8 below highlights the networks carried on the various cables/NICs and how they are configured.

Table 8. Networks as implemented within this reference architecture

Connection | Port Configuration | Functional Description
10Gb to ToR switching | Solution Management Network, PVID; iSCSI Storage Network, Tagged; vMotion Network, Tagged; Production VLAN, Tagged | The 10Gb connection carries all functional networks. Optionally, the vMotion network can be migrated to a new vSwitch utilizing vmnic 2 and 3 (see Optional 1Gb vMotion Network below). Note that the production VLAN is added by the customer.
iLO 1Gb | Solution Management Network, Access Port | HPE deployed iLOs on the solution management network using untagged, dedicated network ports.
OVIO connection | No VLAN or configuration | This connection is from a laptop to vmnic 1 on ONE (1) host during the deployment of a cluster. No switching is involved. This connection may be made to any system in the cluster.
Optional 1Gb vMotion Network | vMotion Network, Access Port, customer-selectable function | For customers seeking dedicated cabling for vMotion and Fault Tolerance, it is recommended that a new vSwitch be created with vmnic 2 and vmnic 3 assigned, and the VMkernel port groups for these functions migrated to this vSwitch. Note that prior to a system reset or reconfiguration these networks must be migrated back to vSwitch1.

Solution network, VLANs, and configuration

Several VLANs were defined to segment and isolate traffic throughout the solution. Most of these networks are defined as part of the HPE HC 380 solution. An Active/Active configuration was used for the network connections, including iSCSI traffic. Brief descriptions of the VLANs follow.

Production network (VLAN 104)
The Production network can be considered a client or customer network (or networks) through which clients connect to their desktops, file shares, user data, and applications. In the HPE test environment, VLAN 104 is a domain network that supports full infrastructure services as well as Login VSI for up to 3,000 users.

Solution management vmMgmt (VLAN 21)
This is a completely independent network that connects all physical components and hypervisor interfaces within the compute and management stacks, and hosts inter-host communication from the management VM.

iSCSI storage network (VLAN 22)
This network supports iSCSI traffic as well as connectivity to the iSCSI management components throughout the solution.

Storage vMotion migration (VLAN 23)
This is the vMotion and HA network for the solution. By default it is located on the 2 x 10Gb network adapters but may optionally be migrated to dedicated 1Gb links.
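For readers scripting their switch or vSwitch configuration, the VLAN plan above can be captured as data and sanity-checked before deployment. A minimal sketch, with tagging taken from Table 8 (the solution management network rides the 10Gb uplink as the untagged PVID); the structure itself is illustrative:

```python
# The solution VLAN plan from this section, expressed as data so it can be
# sanity-checked or fed into configuration tooling. Structure illustrative;
# tagging follows Table 8.
VLANS = {
    104: {"name": "Production", "tagged": True},
    21:  {"name": "Solution management (vmMgmt)", "tagged": False},
    22:  {"name": "iSCSI storage", "tagged": True},
    23:  {"name": "vMotion / HA", "tagged": True},
}

# No duplicate VLAN IDs or names in the plan.
assert len({net["name"] for net in VLANS.values()}) == len(VLANS)

for vid, net in sorted(VLANS.items()):
    kind = "tagged" if net["tagged"] else "untagged (PVID)"
    print(f"VLAN {vid:>3}: {net['name']} [{kind}]")
```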

Capacity and sizing

HPE set out to validate the HPE HC 380 solution in a variety of ways. In addition to its standard test image, HPE moved to test images with newer versions of Microsoft Office. HPE also wanted to demonstrate that a variety of different workloads could run concurrently on an HC 380 solution while achieving an excellent user experience across user types.

About Login VSI

Login VSI 4.1 is a load-generating test tool designed to test remote computing solutions via a variety of different protocols. The Login VSI environment was hosted outside the HPE HC 380 environment. Login VSI works by starting a series of launchers, which are best thought of as end-user access devices. These launchers connect to the EUC infrastructure under test via a connection protocol, and a series of scripts executed on the compute resources then simulates the load of actual end users. The test suite utilized a series of desktop applications running via automated scripts within the context of the VMware Horizon virtual desktop environment.


A standardized set of applications is installed within every virtual machine and actions are taken against the installed applications. The set of applications HPE tested against is listed in Table 9 below, with versions shown where applicable.

Table 9. Login VSI software specifications

Software | Version
Microsoft Windows 7³ | 7 Enterprise, x64
Adobe® Acrobat® | 9.1
Adobe Flash Player | 11
Adobe Shockwave Player | 11
Bullzip PDF Printer |
FreeMind |
7-Zip |
Microsoft Office Professional x64 | 2010 Professional and 2013 Professional
Microsoft Internet Explorer | Various

Response times are measured for a variety of actions within each session. When average response times climb above a certain level, the test is finalized and a score, called VSImax, is created. VSImax represents the number of users at or below the average response time threshold. A detailed explanation can be found on the Login VSI website at loginvsi.com/documentation/index.php?title=Login_VSI_VSImax.
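As a simplified illustration of the VSImax idea (not Login VSI's exact algorithm, which is documented at the link above), the following sketch finds the highest session count whose average response time stays at or below a chosen threshold:

```python
# Simplified illustration of the VSImax concept: as sessions ramp up, find
# the highest session count whose average response time is still at or
# below a threshold. NOT Login VSI's exact algorithm; see their docs.
def vsimax(avg_response_ms: list, threshold_ms: float) -> int:
    """avg_response_ms[i] = average response time with i+1 active sessions."""
    passing = [n + 1 for n, rt in enumerate(avg_response_ms) if rt <= threshold_ms]
    return max(passing, default=0)

ramp = [620, 640, 700, 820, 1050, 1600, 2400]  # made-up ramp-up data (ms)
print(vsimax(ramp, threshold_ms=1000))          # -> 4
```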

Login VSI workload

Table 10 below shows the various Login VSI 4.1 workloads available for testing and the recommended resource availability. These benchmarks can be found on the Login VSI web page at loginvsi.com/documentation. The Knowledge worker workload is the base workload HPE uses to compare systems across generations and configurations. HPE adjusted the Knowledge worker workload to a more real-world configuration, which is specified later in this document.

Table 10. This table represents the standard Login VSI 4.1 workloads. Workloads used by HPE are discussed later in this document.

Workload | VSI Version | Workload Weight | vCPU | Memory | Apps Open | Video | CPU Usage | Disk Reads | Disk Writes | Estimated IOPS
Task worker | 4.1 | Light | 1 vCPU | 1GB | 2-3 | None | 70% | 79% | 77% | 6
Office worker | 4.1 | Medium | 1 vCPU | 1.5GB | 4-6 | 240p | 82% | 90% | 101% | 8
Knowledge worker | 4.1 | Medium | 2 vCPU | 1.5GB | 4-7 | 360p | 100% | 100% | 100% | 8
Power worker | 4.1 | Heavy | 2-4 vCPU+ | 2GB | 5-9 | 720p | 119% | 133% | 123% | 10

3 Some tests were undertaken with Microsoft Windows 10. Those results will be disseminated in other locations as this document focuses on Microsoft Windows 7.


Testing strategy

The use cases tested focused on validating the broadest swath of Horizon VDI users. Basic tests of Knowledge workers were run against a linked clone implementation, first with Office 2010 and then with Office 2013. The reasons for testing Office 2010 are twofold. First, it is still in common use in VDI environments under extended support due to its demonstrated use of fewer resources. Second, Office 2010 has been a piece of the standard image HPE has tested since the introduction of Login VSI 4.1. This means that results in this document using this image are generally comparable to those HPE has published on other platforms since that introduction. The word "generally" is used because there are nuances between hyper converged platforms and platforms built from independent components. Most of the tested solutions involving individual components host the required management pieces on separate infrastructure. With hyper converged systems, user VMs are collocated with management infrastructure such as Horizon View Connection servers, Microsoft SQL Server database servers, and vCenter servers, among others. This generally results in lower user counts on hyper converged platforms.

Note
In addition to the Knowledge worker case, a test of the Power worker workload against fully provisioned virtual machines was also executed on the HPE HC 380 platform. This test does not have comparable references on other tested HPE platforms.

One value of the HPE HC 380 is the ability to run any End User Computing workload. To demonstrate this capability, Login VSI provided HPE with a hotfix that allowed testing of different use cases against different user types (including graphics users) concurrently. The results of these tests are included in this section of the RA.

Benchmarks versus field implementation

Login VSI presents a relatively replicable set of tests that can be used to compare platforms and solutions within a fairly close range. The test uses a standardized set of workloads to create those comparison points. In the real world, it is highly unlikely that a customer will be running the exact set of applications featured in the test. As with most benchmarking tools, Login VSI results should be used in conjunction with actual system performance data from the field or from Proof-of-Concept (POC) implementations. Login VSI presents response times from various tasks and applications that can be used as a primitive baseline in a controlled environment with limited applications and resource assignments. Although these metrics are useful when comparing systems with similar resource attributes, they can be misleading when used to extrapolate to real-world implementations. As a result, the numbers in this document are guidelines only.

Historically, HPE has recommended sizing solutions at 60-65% of Login VSI numbers. This recommendation, however, depends on a resource allocation similar to that used in the test results presented. Hence, HPE strongly recommends a complete analysis of the specific user requirements prior to any VDI implementation, rather than relying solely on benchmark results. Customers new to or inexperienced in VDI should undergo a deeper assessment of their environment prior to implementing VDI, to make sure they attain the results they desire. If such an assessment interests you, please engage with your HPE account team for further information on HPE Mobility and Workplace Services, hpe.com/us/en/services/consulting/mobility-workplace.html.
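Applied to the VSImax scores reported later in this document (Table 11), the 60-65% guideline produces sizing bands like the following. This is a back-of-the-envelope sketch, not a substitute for the assessment recommended above:

```python
import math

# HPE's historical guidance: size production deployments at 60-65% of the
# Login VSI VSImax result, assuming comparable per-user resource allocation.
def sizing_band(vsimax_score: int, low: float = 0.60, high: float = 0.65) -> tuple:
    return math.floor(vsimax_score * low), math.floor(vsimax_score * high)

# VSImax scores from Table 11 later in this document:
for workload, score in [("Knowledge worker, Office 2010", 832),
                        ("Knowledge worker, Office 2013", 559),
                        ("Power worker, Office 2013", 513)]:
    print(workload, sizing_band(score))
# -> (499, 540), (335, 363), (307, 333) recommended users, respectively
```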


Table 11 below shows the three (3) dedicated use cases that were tested on the HC 380 in a four (4) node configuration. In all cases a 38GB base image was used to construct the test VM. Specifications for memory and CPU are determined by user type, as shown in the table of Login VSI user types in the Login VSI workload section of this document. The table below also summarizes VSImax scores for the platforms. Results are discussed further in the sections that follow.

Note
All volumes deployed were thinly provisioned with Network RAID-10 for replication. HPE advises using this configuration for all end-user VMs to optimize performance and availability as well as space efficiency.

Table 11. Test results with storage volume configuration

User type | VM type | Office version | Windows version | Number of users | Volumes | Volume size | vCPU | Memory
Knowledge worker | Linked Clone | 2010 | 7 | 832 | 6 | 2.5TB | 2 | 2GB
Knowledge worker | Linked Clone | 2013 | 7 | 559 | 6 | 2.5TB | 2 | 2GB
Power worker | Full VM | 2013 | 7 | 513 | 5 | 4.5TB | 4 | 2GB

An additional test was performed using a mixed workload. Table 12 below reflects the worker types and distribution used in this test. Results are discussed further in the sections that follow. Note that RDSH servers were hosted on the solution management volume and are thus not called out as having a dedicated volume or volume size. All VMs tested ran Microsoft Windows 7 Enterprise x64 and the 64-bit version of Microsoft Office 2013 Professional.

Table 12. Mixed load allocation

User type | VM type | Number of users | Volumes | Volume size
Power worker | Full | 100 | 2 | 3TB
Power worker with vGPU | Full | 24 | 2 | 3TB
Knowledge worker | Linked Clone with App Volumes | 120 | 2 | 2TB
Office worker | Linked Clone | 160 | 2 | 2.5TB
Task worker | Hosted Desktop | 100 | NA | NA
Test totals | | 504 | 8 | 10.5TB
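Combining the user mix in Table 12 with the per-user estimated IOPS from the standard workloads in Table 10 gives a rough steady-state IO figure for the mixed test. This is a planning sketch only, not a measured result:

```python
# Rough steady-state IOPS estimate for the Table 12 mixed workload, using
# the per-user "Estimated IOPS" column from Table 10. A planning sketch,
# not a measured result.
IOPS_PER_USER = {"task": 6, "office": 8, "knowledge": 8, "power": 10}
MIX = {
    "power": 100 + 24,  # full VMs, including the 24 vGPU Power users
    "knowledge": 120,
    "office": 160,
    "task": 100,
}

total = sum(count * IOPS_PER_USER[worker] for worker, count in MIX.items())
print(total)  # -> 4080 estimated IOPS across the 504 mixed-workload users
```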


Linked clones for Knowledge worker

HPE validated the hosting of 832 Knowledge worker user sessions on individual linked clone virtual desktops on the HPE HC 380 in a four (4) node configuration. These were Windows 7 desktops with Office 2010. Figure 8 shows the output of this test. The HPE HC 380 solution will support 832 Knowledge workers in this configuration while running the Horizon and vSphere infrastructure pieces on the same platform. The base response time of 618ms is very good compared to results in other documents published at loginvsi.com.

Figure 8. Login VSI results testing 832 Windows 7 virtual desktops with Microsoft Office 2010 on the HPE HC 380


As mentioned, HPE is moving to a newer version of Office for the base test image. As such, the results reported in the prior paragraph are comparable to platforms tested since the release of Login VSI 4.1. The results below, which feature an identical workload but with an image running Microsoft Office 2013, are comparable only to the results in the prior paragraph. As expected, and as reported widely in the industry, there was a substantial loss in density with the change to Microsoft Office 2013. The VSImax score dropped from 832 to 559 users. Figure 9 shows the output of this test.

Figure 9. Login VSI results hosting 559 Windows 7 virtual desktops with Microsoft Office 2013 on the HPE HC 380

HPE did not do a substantial amount of tuning to Office or the image. Methods reported on a variety of websites should improve these numbers, but the suggested tunings should be considered carefully and tested thoroughly before implementation. Even with tunings in place, the environment is not expected to reach the density of an environment running Microsoft Office 2010.

Even with the loss of density, the baseline user response time was still a very low 623ms.


Fully provisioned VMs for Power worker

HPE validated the hosting of 520 Power worker user sessions on fully provisioned cloned virtual desktops on the HPE HC 380 in a four (4) node configuration using Windows 7 and Office 2013. A VSImax score of 513 user sessions was recorded. Figure 10 shows the output of this test.

Figure 10. Login VSI results hosting 513 Windows 7 virtual Power worker desktops with Microsoft Office 2013 on the HPE HC 380

Mixed workload testing


Table 12 earlier in this section shows the number and type of VMs deployed as well as the use cases run against them. It should be noted that five (5) separate pools were created in VMware Horizon View to support this test case. The addition of Microsoft RDSH VMs as well as additional Power users with vGPU-assigned graphics access placed extra pressure on the system. Even with the extra sustained load, the HPE HC 380 produced a mixed workload VSImax score of 481 users with a sub-700ms baseline response time. Figure 11 below shows these results.

Figure 11. Login VSI VSImax result of 481 Windows 7 virtual desktops on the HPE HC 380 hosting a mixed workload.


Boot tests

A common metric sought in VDI environments is the amount of time it takes to boot a given number of VMs, from power-on until users can begin logging on and receiving a good user experience. The general idea is to reproduce a scenario where a catastrophic loss has been experienced and the goal is to get users online quickly. The tests can be run and measured in different ways. For this document, HPE looked at a raw power-on. This is essentially a panic-button push that powers on all VMs with the single press of a button (or issuance of a command). The test can also be run as a broker-initiated power-on, where the connection broker instructs vCenter to power on the VMs.

The major challenge to getting users online and working quickly is recovering from the IO storm that accompanies the VMs coming online. HPE used the HPE StoreVirtual Centralized Management Console to record this IO storm and time the event from inception to the point where users could start coming online comfortably. Figure 12 shows that IO pattern and suggests that the time to return to logging on users free of IO obstructions is just over 4 minutes.

Figure 12. Boot time as reflected by SAN IO

Note that in Figure 12 the larger red dots denote the start and end times of the boot storm, with the end time marking the point where end users would no longer run into performance conflicts from boot IO.
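One way to derive such a boot-storm duration from sampled SAN IOPS (as recorded by the CMC for Figure 12) is to find the window during which IO stays above an idle threshold. The sketch below uses made-up samples and an illustrative threshold, not the recorded values:

```python
# One way to time a boot storm from sampled SAN IOPS (as in Figure 12):
# take the first and last samples where IO exceeds an idle threshold.
# Data points and threshold here are illustrative, not the recorded values.
def storm_duration_s(samples: list, idle_iops: float) -> float:
    """samples = (seconds, iops) pairs; returns boot-storm duration in seconds."""
    busy = [t for t, iops in samples if iops > idle_iops]
    return max(busy) - min(busy) if busy else 0.0

data = [(0, 50), (30, 9000), (60, 14000), (120, 11000), (240, 2500), (270, 400)]
print(storm_duration_s(data, idle_iops=1000) / 60, "minutes")  # -> 3.5 minutes
```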

Analysis and recommendations

The data presented in the prior section suggests that the HPE HC 380 is what it was designed to be: a flexible platform capable of handling all end-user computing use cases, not only individually but also concurrently. The platform offers exceptional performance, simplified management, and cost-effective, linear scaling at a far more granular level than traditional end-user computing solutions.

HPE did very little tuning on the HPE HC 380 to achieve the results in this document. The two primary recommendations for consideration are as follows.

Utilize the VMware OS Optimization Tool, available from VMware Labs at https://labs.vmware.com/flings/vmware-os-optimization-tool, to optimize and tune the base images used in your environment. The tool offers a flexible way to both analyze and optimize images in test prior to deploying en masse.


The HPE HC 380 ships with a custom power and performance scheme that allows OS control of the power scheme, so consider altering the power scheme within the RBSU to high performance mode. This can be set in vCenter (and in fact should be, even if you opt to tune in the RBSU), but forcing high performance mode within the RBSU will maximize user counts compared to utilizing power-saving modes. It is important to consider the tradeoff: while you should see an increase in user density, power consumption will also increase and systems will not drop into more optimal power states during times of low utilization.

As with all end-user computing reference architectures, HPE recommends following VMware best practices for its software at every layer. These can be found in documentation on the VMware website at vmware.com/files/pdf/techpaper/VMware-PerfBest-Practices-vSphere6-0.pdf. Doing so facilitates supportability for the solution and ensures that you are taking advantage of VMware's years of experience in the end-user computing space.

One final recommendation is to perform a deep assessment of your environment prior to implementation. The tests in this document are repeatable and show clearly that the HPE HC 380 is a high-performance end-user computing platform, but translating test results obtained with a standardized image to your environment is neither easy nor direct. The best overall end-user experience and optimal return on investment come from a deep understanding of how to implement this solution in your environment.

Summary

In today's rapidly changing IT world, IT organizations must constantly rethink how best to deliver an optimal desktop experience to end users whose work behavior changes as quickly as their devices, applications, and work locations. This RA builds on the strength and versatility of the existing VMware portfolio and leverages years of HPE innovation in delivering end-user computing solutions. Unique improvements in HPE server, storage and networking technologies make this newest architecture the highest-performing, lowest-cost and easiest-to-manage solution that HPE has ever developed. It is ideally suited to the performance and scalability requirements of a VMware Horizon 6.2.2 deployment that requires architectural flexibility, extreme performance and rapid yet simple scaling to meet IT and line-of-business needs.

Key findings:

• The HPE HC 380 solution provides the basis for scalable, high-performance VDI workloads running on VMware Horizon 6.2.2.

• HPE StoreVirtual VSA with Adaptive Optimization enabled creates a highly redundant cluster of shared storage, providing uncompromising performance for end-user computing workloads.

Appendix A: Bill of materials

The following bill of materials (BOM) contains electronic license to use (E-LTU) parts. Electronic software license delivery is now available in most countries. HPE recommends purchasing electronic products over physical products (when available) for faster delivery and for the convenience of not tracking and managing confidential paper licenses. For more information, please contact your reseller or an HPE representative.

Note Part numbers are current as of the time of testing and subject to change. The bill of materials does not include complete support options or other rack and power requirements. If you have questions regarding ordering, please consult with your HPE Reseller or HPE Sales Representative for more details: hpe.com/us/en/services/consulting.html.

Table 13. Bill of materials

Qty Part Number Description

1 BW908A HPE 42U 600x1200mm Enterprise Shock Rack

4 P9D74A HPE Hyper Converged 380 Cluster Appliance (Node)

4 719044-L21 HPE DL380 Gen9 Intel Xeon E5-2690v3 (2.6GHz/12-core/30MB/135W) FIO Processor Kit

4 719044-B21 HPE DL380 Gen9 Intel Xeon E5-2690v3 (2.6GHz/12-core/30MB/135W) Processor Kit

64 728629-B21 HPE 32GB (1x32GB) Dual Rank x4 DDR4-2133 CAS-15-15-15 Registered Memory Kit

4 724864-B21 HPE DL380 Gen9 2SFF Front/Rear SAS/SATA Kit

8 804587-B21 HPE 240GB 6G SATA Read Intensive-2 SFF 2.5-in SC 3yr Wty Solid State Drive

16 804671-B21 HPE 800GB 6G SATA Write Intensive-2 SFF 2.5-in SC 3yr Wty Solid State Drive

48 781518-B21 HPE 1.2TB 12G SAS 10K rpm SFF (2.5-inch) SC Enterprise 3yr Warranty Hard Drive

4 719073-B21 HPE DL380 Gen9 Secondary 3 Slot GPU Ready Riser Kit

4 719076-B21 HPE DL380 Gen9 Primary 2 Slot GPU Ready Riser Kit

4 665243-B21 HPE Ethernet 10Gb 2-port 560FLR-SFP+ Adapter

4 749974-B21 HPE Smart Array P440ar/2GB FBWC 12Gb 2-ports Int FIO SAS Controller

4 726821-B21 HPE Smart Array P440/4GB FBWC 12Gb 1-port Int SAS Controller

4 726897-B21 HPE Smart Array P840/4GB FBWC 12Gb 2-ports Int SAS Controller

12 783009-B21 HPE DL380 Gen9 8SFF SAS Cable Kit

4 785989-B21 HPE DL380 Gen9 2SFF x8 Front Cable Kit

4 786092-B21 HPE DL380 Gen9 8SFF H240 Cable Kit

8 753958-B21 NVIDIA GRID K2 RAF PCIe GPU Kit

4 733660-B21 HPE 2U Small Form Factor Easy Install Rail Kit

4 719082-B21 HPE DL380 Gen9 Graphics Enablement Kit

8 720620-B21 HPE 1400W Flex Slot Platinum Plus Hot Plug Power Supply Kit

4 JG505A HPE 59xx CTO Switch Solution

2 JG510A HPE 5900AF-48G-4XG-2QSFP+ Switch

2 JC772A HPE 5900AF-48XG-4QSFP+ Switch

12 JD096C HPE X240 10G SFP+ SFP+ 1.2m DAC Cable

8 JD097C HPE X240 10G SFP+ SFP+ 3m DAC Cable

2 JG326A HPE X240 40G QSFP+ QSFP+ 1m DAC Cable

9 C7535A HPE Ethernet 7ft CAT5e RJ45 M/M Cable

8 JC680A HPE A58x0AF 650W AC Power Supply

1 H8B55A HPE 14.4kVA 208V 50A 3Ph NA/JP ma PDU

8 JC682A HPE A58x0AF Back (power side) to Front (port side) Airflow Fan Tray

4 P9D85A HPE ConvergedSystem 380-HC StoreVirtual Software LTU

1 HA124A1 XW4 HPE 3Y 4 hour 24x7 Proactive Care SVC

1 H1K92A3 HPE 3Y 4 hr 24x7 Proactive Care SVC

1 H1K92A3 YMW HPE Hyper Converged 380 System Support

4 H1K92A3 YMX HPE Hyper Converged 380 Node Support

4 H1K92A3 YMY HPE Hyper Converged 380 SW LTU Support

1 HA114A1 HPE Installation and Startup Service

1 HA114A1 5WG HPE 300 Series HC StoreVirtual Strtup Svc

43-84 M7K12AAE VMware Horizon Enterprise 10 Pack 1yr Concurrent Users E-LTU

Appendix B: Configuration adjustments

VDI-specific configuration steps with NVIDIA GRID graphics accelerators

It is advisable that you read this entire section carefully prior to taking action.

The HPE HC 380 allows the NVIDIA GRID K1 and GRID K2 graphics accelerators to be ordered with the platform and delivered with the proper, supported firmware in place. Supported drivers must be installed on the host platform (as well as within the VM) to support these cards in either a direct pass-through or vGPU use case. There are two methods to add the platform-level drivers, both discussed in this appendix. VM drivers should be installed following the instructions provided by NVIDIA for the operating system in use. The driver tested for this document was NVIDIA-vGPU-kepler-VMware_ESXi_6.0_Host_Driver_352.70-1OEM.600.0.0.2494585.vib, obtained from NVIDIA.

Recommended method

The simplest way to install the drivers is prior to running OVIO. This method requires access to the iLO, as well as DHCP service on the solution management network.

• On the host you will run OVIO on, connect a laptop to the OVIO port (the second 1Gb NIC) and log on to the management VM via RDP.

– IP 192.168.42.100 (your laptop should be on this subnet with a direct cable link to the NIC)

– User: Administrator

– Default Password: HyperConv!234

• Shut down the management VM gracefully including closing the OVIO screen. It will relaunch and be ready to run OVIO again.

• Shift to the solution management network and, starting with the management host, connect to the iLO remote console for each host in the cluster. Record the DHCP IP address of each host. You will use this address to SSH into the host as well as to copy the driver file.

• Using an SCP client, copy the NVIDIA-vGPU-kepler-VMware_ESXi_6.0_Host_Driver_352.70-1OEM.600.0.0.2494585.vib file you downloaded to the /tmp directory on each host.

• Log in to each host as root via either the iLO remote console (use Alt+F1 to reach the console login) or an SSH utility such as PuTTY.

• For each host (and it is recommended that you start with the host you will run the OVIO deployment from), type the following:

%> esxcli software vib install -v file:/tmp/NVIDIA-vGPU-kepler-VMware_ESXi_6.0_Host_Driver_352.70-1OEM.600.0.0.2494585.vib --no-sig-check --maintenance-mode --force

• Press Enter when done and validate that the driver installs successfully (a copy and verification sketch follows this list).

• When the command prompt returns, type reboot and press Enter; then move to the next host and repeat the process.
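
As an illustration of the copy and verification steps above, the commands below sketch the flow. The host IP address is a hypothetical example; substitute the DHCP address you recorded for each host:

# From the admin workstation: copy the VIB to the host (hypothetical IP)
%> scp NVIDIA-vGPU-kepler-VMware_ESXi_6.0_Host_Driver_352.70-1OEM.600.0.0.2494585.vib root@192.168.42.15:/tmp/

# On the host, after the install and reboot: confirm the VIB and the GPU
%> esxcli software vib list | grep -i NVIDIA
%> nvidia-smi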

Post-OVIO method (not recommended)

If you have already deployed the platform by running OVIO and begun to implement a solution stack, you will likely need to use this method. It does not require a host reboot, but it does require manual manipulation of the state of the VSA storage VMs. For this reason, it is recommended that this method be used only by administrators who completely understand the steps required to maintain storage availability.

Note If you are running a 2-node HPE HC 380, do not attempt to install the drivers using this method. While this may be possible, the method was not tested for this document and the steps described below may result in system instability and/or data loss.

Note It is strongly recommended that you perform these steps during a timeframe of very low utilization.

Note Read and comprehend these instructions thoroughly prior to attempting to update the drivers on a running system. If you have any concerns, utilize your support channels.

Processes

Log on to the management VM. Log in to vCenter and the HPE StoreVirtual CMC. The credentials for accessing the StoreVirtual CMC were assigned during the OVIO process.

• In the CMC, note which systems are running a manager, which system(s) are not, and which system houses the LeftHand OS connection. Write this down.

• Using the notes you just made, identify which host each of the VSA VMs is running on within the vCenter cluster. Write this down.

• Temporarily disable DRS within the cluster

For each host, you will perform the following actions, starting with the host whose VSA is not running a manager:

• From within vCenter, use vMotion to migrate all VMs off of the host. Change only the host; do not change the datastore.

• You cannot migrate the VSA VM because it resides on local storage. Instead, after all other VMs have been migrated, power off the VSA from the CMC using the tools provided.

• Return to vCenter and place the host in Maintenance Mode, choosing not to migrate VMs. The migration should already have been done, leaving the VSA as the only VM.

• Copy the NVIDIA-vGPU-kepler-VMware_ESXi_6.0_Host_Driver_352.70-1OEM.600.0.0.2494585.vib file into the /tmp directory of the host in maintenance mode.

• SSH into the host as in the recommended method and run the following command:

%> esxcli software vib install -v file:/tmp/NVIDIA-vGPU-kepler-VMware_ESXi_6.0_Host_Driver_352.70-1OEM.600.0.0.2494585.vib --no-sig-check --maintenance-mode --force

• Once you receive a success message you can log out of the SSH session and return to vCenter to disable maintenance mode on the host.

• From vCenter, power on the VSA VM and wait for it to show as active and healthy in the StoreVirtual CMC. Do not proceed further until all volumes and VSAs show a healthy status.

Repeat the above steps on all hosts not running a manager.

• Once all hosts not running a manager are complete, move to a host that is running a manager and stop that manager. Then start a manager on a host that is not running one. See the notes below.

You will now repeat these steps on each remaining host in the cluster. The following are important notes.

• Perform the action on only one host at a time, and that host should have its manager stopped. When you stop a manager on a VSA, be sure to start one on a VSA that is not running one, so that you maintain the same number of managers that were running when the OVIO process completed.

• Do not take down more than one system at a time, and validate that all VSAs and volumes within the management group are healthy prior to moving on to the next node.

• For each remaining host, stop the manager prior to beginning the steps. When you do this on the host with the LeftHand OS connection, you will be temporarily disconnected from the CMC. This is expected. If after a few minutes you see no signs of being reconnected, log back into the Management Group; the LeftHand OS connection will have migrated to another VSA. It is generally recommended that you perform the steps above on the VSA with the LeftHand OS connection last, so as not to shift the connection more than once.

• When done, your cluster should have one host running the LeftHand OS connection and the same number of hosts running managers as before you started the process.

• Once all VSAs are healthy and all hosts are out of maintenance mode, re-enable DRS in the cluster and choose Run DRS from the DRS tab of the cluster to redistribute virtual machines.

VMware Horizon View environment-specific configuration steps

The following steps should be carried out once the system has been deployed. All tasks are easily completed from within VMware vCenter.

1. On each host, add your production networks to vSwitch1. vSwitch1 is served by the 10Gb network adapters in the solution. Whatever networks you add should be properly configured at the switch port to provide the expected functionality (see the host-shell sketch after step 3).

2. On each host, set the time properly and configure at least two NTP server sources to ensure accurate time. Many of the VMs within the solution depend on the hosts for time.

3. If you did not choose to alter the power and performance settings within the RBSU, you will need to select a Power Management Policy for each host. In VDI environments, it is recommended to choose High Performance to ensure the best end-user experience.
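
The sketch below illustrates one host-shell way to accomplish step 1; the port group name and VLAN ID are hypothetical placeholders for your production networks:

# Add a production port group to vSwitch1 and tag it with a VLAN
# ("Prod-Desktops" and VLAN 100 are hypothetical examples)
%> esxcli network vswitch standard portgroup add -p "Prod-Desktops" -v vSwitch1
%> esxcli network vswitch standard portgroup set -p "Prod-Desktops" --vlan-id 100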

Optional configuration steps

4. If you ordered the rear SSDs for your HPE HC 380 appliance, you will need to configure them as host caching devices within vCenter. It is recommended that you choose the full capacity of the SSD for this purpose.

5. You may choose to migrate the VMkernel port used for vMotion traffic to a new vSwitch. The new vSwitch should be created using vmnic2 and vmnic3, the unused 1Gb adapters on the 4-port 1Gb network card in each host (see the sketch below). Note that if you migrate these networks to the extra 1Gb adapters, they must be migrated back in the event of a recovery or expansion operation.
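
A host-shell sketch of the vSwitch creation in step 5 follows; the vSwitch name is a hypothetical example, and migrating the vMotion VMkernel port itself is most easily completed in the vSphere client:

# Create a new standard vSwitch and attach the unused 1Gb uplinks
%> esxcli network vswitch standard add -v vSwitch2
%> esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch2
%> esxcli network vswitch standard uplink add -u vmnic3 -v vSwitch2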

Appendix C: Scaling the HPE HC 380 for VDI

Hyper converged solutions have become a very good fit for end-user computing. One of the chief reasons is the granular scalability these systems offer. With an HPE HC 380 solution for VDI, you can start as small as two (2) nodes while still gaining all of the benefits of manageability, high availability and performance, and scale into the tens of thousands of users a node at a time if need be. This flexibility brings with it an expansion model where the additional cost to bring more users online is considerably lower than in traditional solutions.

To validate the scalability of the solution, HPE ran a series of Login VSI Knowledge worker tests with linked clones against 2-, 3-, 4-, and 5-node HC 380 configurations, retesting each time a node was added. The results are shown in Table 14. Scaling is exceptionally linear, even after bringing on an additional Horizon View Connection Server with the fifth node.

Table 14. Scaling of the solution

Nodes   VSImax for Knowledge worker   Additional users
2       259                           NA
3       404                           145
4       550                           146
5       691                           141

Figures 13 through 16 below report the Login VSI VSImax scores for each configuration.

Figure 13. Login VSI VSImax result of 259 Knowledge workers across a 2-node platform

Figure 14. Login VSI VSImax result of 404 Knowledge workers across a 3-node platform

Figure 15. Login VSI VSImax result of 550 Knowledge workers across a 4-node platform

Figure 16. Login VSI VSImax result of 691 Knowledge workers across a 5-node platform

© Copyright 2016-2018 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. VMware, Horizon, vSphere, App Volumes, vCenter and View Composer are registered trademarks or trademarks of VMware, Inc. Adobe and Acrobat are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. NVIDIA is a trademark and/or registered trademark of NVIDIA Corporation in the U.S. and other countries.

4AA6-5204ENW, June 2018, Rev. 3

Resources and additional links

HPE Reference Architectures, hpe.com/info/ra

HPE Servers, hpe.com/servers

HPE Storage, hpe.com/storage

HPE Networking, hpe.com/networking

HPE Technology Consulting Services, hpe.com/us/en/services/consulting.html

Best practices to install or upgrade to VMware ESXi 6.0, https://kb.vmware.com/kb/2109712

NVIDIA GRID resources, nvidia.com/object/grid-enterprise-resources.html

To help us improve our documents, please provide feedback at hpe.com/contact/feedback.