HPE Reference Configuration for NVIDIA GPUs with Citrix XenDesktop on HPE Synergy Facilitating multiple use cases on HPE Composable Infrastructure
Reference Architecture
Contents
Executive summary
Solution overview
Solution components
  Hardware
  Software
  Hardware
  Scripting tools
  Design principles
User scenarios
  Logical design and workflow
Configuration guidance for the solution
  HPE Synergy hardware configuration
  Creating HPE OneView server profiles
  Creating golden images
  HPE Synergy Image Streamer plan scripts
  Creating HPE Synergy Image Streamer OS build plans
  HPE Synergy Image Streamer deployment plans
  Automating for composability
Capacity and sizing
Summary
Implementing a proof-of-concept
Appendix A: HPE Synergy Image Streamer plan scripts for ESXi
Appendix B: NVIDIA CUDA Toolkit 8.0 and driver installation on RHEL 7
Resources and additional links
Executive summary
Graphics accelerators, such as NVIDIA® GRID technology, within a client virtualization environment have become a common solution component for high-end user workloads such as 3D engineering and design. Because serving this class of end user can represent a significant infrastructure investment, many organizations ask how best to maximize utilization and obtain the full value of their hardware. For example, once an engineering team completes a typical 8-hour workday, the equipment sits idle and unused overnight. Is it possible to leverage the infrastructure during these off hours for another workload, such as data analytics? The answer is yes. One of the key HPE Synergy solution capabilities is a fast and easy means to repurpose hardware as needed to address both visualization and analytics workloads. By utilizing HPE Synergy Composer and HPE Synergy Image Streamer technologies, it is possible to quickly repurpose HPE Synergy Compute Modules from design to simulation and back again.
In this Hewlett Packard Enterprise Reference Configuration we examine end users who require workstation-class compute and graphics rendering, and describe a proof of concept that addresses their needs through remote access to HPE Synergy 480 Compute Modules, HPE Synergy D3940 Storage Modules, an HPE 3PAR StoreServ 8450 all-flash array, and Citrix® XenDesktop® with HDX 3D. For our analysis, we standardized on two types of engineering end users: a CAD designer who requires the highest performance graphics for 3D modeling, and a “junior” engineer who requires enhanced graphics to review blueprints and schematics.
What makes this Reference Configuration particularly interesting, however, is utilizing HPE Synergy composability capabilities to address multiple disparate workloads on the same infrastructure. For example, within an automotive manufacturer, design teams are remotely accessing HPE Synergy Compute Modules during normal work hours to run CAD applications. And due to the centralized data center architecture, they are able to more effectively collaborate and share files while keeping the IP safe within the data center. In off hours (overnight or on weekends) the HPE Synergy resources can quickly and easily be recomposed to run simulation workloads such as air flow modeling or crash test analysis; thus ensuring that the value of the infrastructure investment is maximized.
Key findings from our analysis:
• Switching between CAD engineering and GPU compute workloads via HPE Synergy Composer and HPE Synergy Image Streamer took less than 5 minutes to complete
• HPE Synergy offers unmatched NVIDIA GPU density per HPE Synergy 480 Compute Module via HPE Synergy 480 Multi MXM Expansion Module with up to 6 x NVIDIA GPUs
• Infrastructure transformation via automation was easily accomplished with HPE Synergy Composer powered by HPE OneView, a unified API, Windows® PowerShell, and VMware® PowerCLI, without need for extensive coding expertise
Target audience: This document is intended for IT decision makers as well as architects and implementation personnel who want to understand the HPE Composable Infrastructure capabilities offered by the HPE Synergy platform. The reader should have a solid understanding of end-user and high-performance computing, familiarity with the NVIDIA GRID technologies, VMware vSphere and Citrix XenDesktop products, and an understanding of sizing/characterization concepts and limitations in client virtualization environments.
This white paper describes a project developed by Hewlett Packard Enterprise in May 2017.
Disclaimer: Products sold prior to the separation of Hewlett-Packard Company into Hewlett Packard Enterprise Company and HP Inc. on November 1, 2015 may have a product name and model number that differ from current models.
Solution overview
HPE Synergy lets IT administrators and developers use infrastructure as code to deploy and manage their data center environments. Developers and ISVs can programmatically control a Composable Infrastructure through a single, open API that is native in HPE Synergy powered by HPE OneView. HPE Synergy Image Streamer adds the ability to manage physical servers like virtual machines. This approach to Composable Infrastructure combines true stateless computing with rapid deployment and updates, a differentiating factor for high-end graphics VDI deployments, which can become expensive.

Modern operating systems and applications demand an unprecedented level of graphics performance to meet the requirements of today’s enterprise users. Accelerated graphics is becoming a necessity for most office use cases and is a requirement for others, such as virtualized CAD engineering. Organizations are also seeking to use NVIDIA GPUs both as graphics accelerators and as GPGPUs (general-purpose computing on graphics processing units). To address these demands, HPE Synergy seeks to transform the economics and performance curve of GPU-accelerated VDI by allowing “stateless” hardware to be repurposed on demand within minutes, reducing the total cost of ownership (TCO). This dual-mode capability of NVIDIA GPUs is central to the composability scenarios discussed in this Reference Configuration. In a nutshell, this Reference Configuration demonstrates switching between a high-end graphics use case and a GPGPU use case such as analytics.
This Reference Configuration is built upon the following composability concepts and capabilities of the HPE Synergy platform.
Fluid resource pools
HPE Synergy allows the transformation of traditionally rigid physical systems into flexible virtual resource pools. HPE Synergy creates resource pools of “stateless” compute, storage, and fabric capacity that can be configured almost instantly to rapidly provision infrastructure for a broad range of applications.
Software-defined intelligence
The software-defined intelligence in HPE Synergy reduces operational complexity and enables IT organizations to make needed programmatic changes quickly and confidently, with minimal human intervention. HPE Synergy abstracts operational details and replaces them with high-level, automated operations. HPE Synergy uses templates to automatically implement change operations such as updating firmware, adding storage to a service, or modifying network connectivity.
Unified API
HPE Synergy delivers automation through a unified API that provides a single interface to discover, inventory, configure, provision, update, and diagnose the Composable Infrastructure in a heterogeneous environment. This fully programmable interface integrates into dozens of popular management tools such as Microsoft® System Center and VMware vCenter, and open source automation and DevOps tools such as Chef, Docker, and OpenStack. This Reference Configuration uses simpler and widely used tools, such as Windows PowerShell and VMware PowerCLI, to demonstrate the on-demand composability feature of the HPE Synergy platform.
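The unified API can be driven from any tool that can issue HTTP requests, not only PowerShell or PowerCLI. As an illustrative sketch (not code from the tested solution), the Python snippet below assembles the login-session request that authenticates all subsequent REST calls. The /rest/login-sessions endpoint and X-API-Version header follow published HPE OneView REST conventions; the appliance address and credentials are placeholders.

```python
import json

# Placeholder address of an HPE Synergy Composer (HPE OneView) appliance.
APPLIANCE = "composer.example.com"

def login_request(username, password, api_version=300):
    """Assemble the URL, headers, and JSON body of an HPE OneView
    login-session request. POSTing this to the appliance returns a
    session token used to authenticate later REST calls."""
    url = "https://{}/rest/login-sessions".format(APPLIANCE)
    headers = {
        "Content-Type": "application/json",
        "X-API-Version": str(api_version),  # required by the OneView API
    }
    body = json.dumps({"userName": username, "password": password})
    return url, headers, body

url, headers, body = login_request("administrator", "changeme")
```

The returned token is then passed in an `Auth` header on every subsequent request, which is how both the PowerShell library and raw REST clients interact with the appliance.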
Accelerated graphics
HPE Synergy offers HPE Synergy 480 Compute Modules with NVIDIA Tesla M6 graphics cards, in a single-GPU configuration or with up to six GPUs using the HPE Synergy 480 Multi MXM Expansion Module. This flexible solution supports the entire spectrum of accelerated graphics use cases, from knowledge workers to high-end designers, at user densities that meet customer requirements.
Flexible storage options
HPE Synergy offers affordable and flexible storage that scales to accommodate growth and adapts automatically when the unexpected happens to keep you up and running. From direct-attached storage with the HPE Synergy D3940 Storage Module and software-defined storage (HPE StoreVirtual VSA) to HPE 3PAR StoreServ all-flash arrays, HPE Synergy spans all storage use cases. This Reference Configuration presents flexible storage options that cater to changing customer requirements without major changes to the underlying solution design. Figure 1 below depicts the hardware stack tested for this Reference Configuration.
Figure 1. Three HPE Synergy 12000 Frame Reference Configuration
[Figure 1 is a rack diagram of three HPE Synergy 12000 Frames. Its callouts identify:
• HPE StoreVirtual VSA 2-node cluster using two HPE Synergy 480 Compute Modules and two HPE Synergy D3940 Storage Modules
• HPE Synergy 480 Compute Modules with a single graphics mezzanine adapter
• HPE Synergy 480 Compute Modules with the HPE Synergy 480 Multi MXM Expansion Module
• HPE Synergy D3940 Storage Modules
• HPE Synergy Composers
• HPE Synergy Image Streamers
• Optional: HPE SN6000B 16Gb SAN switches
• Optional: HPE 3PAR StoreServ 8450 all-flash array and service processor
• Optional: HPE ProLiant DL360 Gen9 servers holding the domain, solution management, Citrix XenDesktop, and HPE StoreVirtual FOM (Failover Manager)]
Solution components
Hardware
HPE Synergy Composer
HPE Synergy Composer provides enterprise-level management to compose and deploy system resources to your application needs. This management appliance uses software-defined intelligence with embedded HPE OneView to aggregate compute, storage and fabric resources in a manner that scales to your application needs, instead of being restricted to the fixed ratios of traditional resource offerings. HPE OneView server profiles and profile templates capture the entire server configuration in one place, enabling administrators to replicate new server profiles and to modify them as needed to reflect changes in the data center. With the HPE OneView REST API and automation tools, the entire process of server personality definition and configuration can be automated. For this Reference Configuration the HPE OneView REST API and PowerShell library were used to automate server profile application to “stateless” servers.
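To illustrate what automated server profile application looks like at the API level, the sketch below builds the kind of JSON body that is POSTed to the HPE OneView /rest/server-profiles endpoint to stamp a profile out of a template. The tested solution drove this through the HPE OneView PowerShell library; the Python here is only a stand-in, and the type string and URIs are illustrative assumptions rather than values from the tested configuration.

```python
def profile_from_template(name, template_uri, hardware_uri):
    """Sketch of a server-profile creation body for the HPE OneView REST
    API. Field names follow common OneView REST usage; the exact "type"
    version string varies by OneView release, and the URIs passed in are
    placeholders, not values from the tested solution."""
    return {
        "type": "ServerProfileV6",              # release-dependent version string
        "name": name,
        "serverProfileTemplateUri": template_uri,  # template that defines the personality
        "serverHardwareUri": hardware_uri,         # the "stateless" compute module to target
    }

payload = profile_from_template(
    "gpu-node-01",
    "/rest/server-profile-templates/example-template",
    "/rest/server-hardware/example-bay",
)
```

Because the profile carries the full server personality (connections, BIOS settings, boot order, and OS deployment settings), re-applying a different profile is all it takes to repurpose the underlying hardware.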
HPE Synergy Image Streamer
The HPE Synergy Image Streamer management appliance works with HPE Synergy Composer for fast, software-defined control over physical compute modules with operating system and application provisioning. HPE Synergy Composer powered by HPE OneView captures the physical state of the server in a server profile. HPE Synergy Image Streamer enhances this server profile by capturing the “golden image” as the “deployed software state” in the form of bootable image volumes. These bootable images are stored on redundant HPE Synergy Image Streamer appliances, and they are available for fast deployment to multiple compute modules. This enables bare-metal compute modules to boot directly into a running OS with applications in a desired state.
HPE Synergy Image Streamer uses scripts to deploy and capture OS images as part of the server provisioning process. Below are some of the terms and concepts used in this document to discuss server provisioning via HPE Synergy Image Streamer.
• Plan script: A script used by OS build plans to personalize OS volumes based upon the values of custom attributes.
• OS build plan: A set of plan scripts used to modify the configuration of an OS volume during the deployment or capture process.
• Golden image: A generic format of an application and operating system image that can be customized for multiple deployments.
• Deployment plan: A combination of an OS build plan and golden image that is used by a server profile for the deployment of a server.
• Custom attributes: Custom attributes are used to provide server-specific configuration information.
• Artifacts: Artifacts are entities that combine to either perform deployment of servers or capture operating system images. In HPE Synergy Image Streamer, artifacts include Plan Script (PS), OS Build Plan (BP), Golden Image (GI), and Deployment Plan (DP).
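Putting several of these terms together: a server profile references a deployment plan (golden image plus OS build plan) and supplies custom attributes that the plan scripts consume to personalize the OS volume. The sketch below is a hedged illustration, not output from the tested solution; the field names follow common HPE OneView/Image Streamer REST usage, and the URI and attribute names are placeholders.

```python
def os_deployment_settings(plan_uri, attributes):
    """Sketch of the osDeploymentSettings stanza of a server profile,
    tying an HPE Synergy Image Streamer deployment plan to the profile
    and passing per-server custom attributes to its plan scripts.
    URI and attribute names here are illustrative placeholders."""
    return {
        "osDeploymentPlanUri": plan_uri,
        "osCustomAttributes": [
            {"name": name, "value": value} for name, value in attributes.items()
        ],
    }

settings = os_deployment_settings(
    "/rest/os-deployment-plans/example-esxi-plan",
    {"Hostname": "esx-gpu-01", "ManagementNIC.ipaddress": "192.0.2.10"},
)
```

At deployment time the build plan's plan scripts read these name/value pairs and write them into the cloned OS volume, which is how one golden image serves many personalized servers.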
Note The HPE Synergy Image Streamer team has predefined artifacts that can be installed once the HPE Synergy Image Streamer is up and running. These artifacts contain foundation build plans used to capture and deploy images. These artifacts can be downloaded from https://github.com/HewlettPackard
Figure 2 shows the use of server profiles to deploy software state on a stateless compute module.
Figure 2. HPE Synergy Composer server templates and HPE Synergy Image Streamer deployment
NVIDIA Tesla M6 GPU
The NVIDIA Tesla M6 GPU is part of the next generation of GPUs NVIDIA introduced with its GRID 2.0 release. The NVIDIA Tesla M6 GPU is designed for high-density, small form factor servers, such as the HPE Synergy 480 Compute Module. It can be used for accelerated graphics, as well as a GPGPU (general-purpose graphics processing unit) for high-performance computing and analytics applications. This Reference Configuration uses NVIDIA Tesla M6 GPUs and demonstrates this dual-mode capability.
Note A utility (gpumodeswitch), provided by NVIDIA, was used to switch between the two GPU modes. An evaluation version of the utility as well as NVIDIA GRID software may be found at nvidia.com/object/vgpu-evaluation.html. For information on how to use the utility, the NVIDIA GRID 2.0 licensing model, and how to install and use NVIDIA GPU on various hypervisors, refer to NVIDIA GRID 2.0 Enterprise Software Guide.
HPE Synergy 12000 Frame
The HPE Synergy 12000 Frame is a key element of HPE Synergy, providing the base for an intelligent infrastructure with embedded management and scalable links for expansion as business demand requires. The HPE Synergy 12000 Frame is the base infrastructure that pools resources of compute, storage, fabric, cooling, power and scalability. With an embedded management solution combining the HPE Synergy Composer and HPE Synergy Frame Link Modules, IT can manage, assemble and scale resources on demand. The HPE Synergy 12000 Frame is designed for needs now and in the future with expanded compute and fabric bandwidths. HPE Synergy 12000 Frame specifications can be found here. For more information on HPE Synergy architecture and components, visit the HPE Synergy website.
HPE Synergy 480 Compute Module
The HPE Synergy 480 Compute Module delivers superior capacity, efficiency, and flexibility in a two-socket, half-height, single-wide form factor to support demanding workloads. Powered by the latest Intel® Xeon® E5-2600 v4 processors, HPE DDR4 SmartMemory supporting up to 1.5 TB, flexible storage controller options, three I/O connectors, and designed to create a pool of flexible compute capacity within a composable infrastructure, the HPE Synergy 480 Compute Module is an ideal platform for general-purpose enterprise workload performance now and in the future. More information on HPE Synergy 480 Compute Modules can be found at the HPE Synergy Compute Module website.
HPE Synergy D3940 Storage Module
The HPE Synergy D3940 Storage Module holds up to 40 Small Form Factor (SFF) hard drives or SSDs and is designed for use in HPE Synergy 12000 Frames. Through the HPE Synergy 12Gb SAS Connection Module the HPE Synergy D3940 Storage Module provides composable direct attached storage for up to 10 HPE Synergy 480 Compute Modules in a single HPE Synergy 12000 Frame. The HPE Synergy D3940 Storage Module is optimized for use as either a direct attached storage array or as software-defined storage using HPE StoreVirtual VSA or similar solutions. Visit the HPE Synergy Storage website for more information.
HPE 3PAR StoreServ 8450 all-flash array
HPE 3PAR StoreServ 8000 series storage, with one of the lowest all-flash starting prices in the market, delivers the performance advantages of a purpose-built, flash-optimized architecture without compromising resiliency, data services, or data mobility. Unlike other purpose-built flash arrays, HPE 3PAR StoreServ 8450 doesn’t require adopting an entirely new architecture to achieve flash-optimized performance. Options support true convergence of block and file protocols, application-managed data protection, and simplified fabric zoning along with SAN diagnostics. Best practices for HPE 3PAR StoreServ were followed in building this solution; they are outlined in the HPE 3PAR StoreServ best practices guide.
HPE Virtual Connect SE 40Gb F8 Module for Synergy
The HPE Virtual Connect SE 40Gb F8 Module, the master module in a composable fabric, is designed for Composable Infrastructure. Its disaggregated, rack-scale design uses a Master/Satellite architecture to consolidate data center network connections, reduce hardware, and scale network bandwidth across multiple HPE Synergy 12000 Frames. The HPE Virtual Connect SE 40Gb F8 Module for Synergy eliminates network sprawl at the edge with one device that converges traffic inside the HPE Synergy 12000 Frames and connects directly to external LANs.
HPE Synergy 20Gb Interconnect Link Module
The HPE Synergy 20Gb Interconnect Link Module (satellite module) is designed for Composable Infrastructure. Based on a disaggregated, rack-scale design, it uses a Master/Satellite architecture to consolidate data center network connections, reduce hardware and scale network bandwidth across multiple HPE Synergy 12000 Frames.
Note HPE Synergy supports both traditional single-frame based networking as well as a multi-frame, single switch architecture. The multi-frame architecture is referred to as a Master/Satellite fabric. Please consult the Synergy hardware configuration section later in this document.
Software
Table 1 lists the software used during testing of this Reference Configuration.

Table 1. Software list

Component                                                                       Version
NVIDIA
  gpumodeswitch utility                                                         1.23.0
  NVIDIA GRID 2.0 host driver package                                           367.64-369.71
  NVIDIA CUDA Toolkit                                                           8.0
Citrix
  XenDesktop                                                                    7.12
VMware
  vSphere                                                                       6.0 U2
Microsoft
  Office                                                                        2013
Autodesk
  AutoCAD                                                                       2018
Open Source
  R for Linux (from the R Project for Statistical Computing at r-project.org)   3.3.3
  Red Hat® Enterprise Linux® (RHEL)                                             7.3
HPE
  HPE StoreVirtual VSA                                                          StoreVirtual OS 12.5
Hardware
Table 2 lists the hardware used during testing of this Reference Configuration.

Table 2. HPE Synergy components including storage options in this Reference Configuration

Component                                             Description
HPE 3PAR StoreServ 8450 all-flash array               2-node, with 12 x 1.92TB SSDs in RAID 10 and physical SP
HPE Synergy D3940 Storage Module                      2x, with 8 x HPE 800GB 6G SATA drives per storage module
HPE StoreVirtual VSA                                  2 VSA nodes + FOM (Failover Manager)
HPE Synergy 480 Compute Module                        6x (2 modules hosting HPE StoreVirtual VSA nodes and 4 modules with NVIDIA Tesla M6 GPU)
HPE Synergy 480 Compute Module + HPE Synergy 480
Multi MXM Expansion Module with 6 x NVIDIA GPUs       1 per frame
HPE Synergy 12Gb SAS Connection Module                4 (redundant)
HPE Virtual Connect SE 16Gb FC Module for Synergy     6 (redundant)
HPE Virtual Connect SE 40Gb F8 Module for Synergy     2 (redundant)
HPE Synergy 20Gb Interconnect Link Module             4 (redundant)
HPE Synergy 12000 Frame                               3 (minimum for configurations with HPE Synergy Image Streamer)
HPE Synergy Composer                                  2 (redundant)
HPE Synergy Image Streamer                            2 (redundant)
Scripting tools
Table 3 shows the scripting tools used to develop automation scripts for this Reference Configuration.

Table 3. Automation and scripting tools

Component                                   Version
HPE OneView PowerShell Libraries            3.0
HPE Synergy Image Streamer Artifacts        Foundation 1.0
VMware PowerCLI                             6.5
Windows PowerShell                          4.1
Citrix PowerShell SDK                       7.12.0.13
Design principles
This solution was designed with the following goals in mind:
• Solution must provide a flexible architecture to support a spectrum of accelerated graphics use cases and densities.
• Solution must provide flexible storage options from low-cost direct attached storage (DAS) and software defined storage to enterprise class shared flash storage.
• Solution must be “Composable” with capability to switch use cases dynamically on the same underlying hardware with minimal admin intervention.
The solution design that evolved from the above-mentioned goals is based on the following core HPE Synergy capabilities and design principles.
• Every solution design has storage as a central piece of its architecture, and this design must be flexible enough to satisfy a range of use cases, workloads and performance requirements. The HPE Synergy D3940 Storage Module is configurable as ultra-dense direct-attached zoned storage, or as flexible scale-out software-defined shared storage when virtualized and provisioned by HPE StoreVirtual VSA. For this Reference Configuration a two-node HPE StoreVirtual VSA cluster was configured using disks carved out of the HPE Synergy D3940 Storage Modules. The two HPE StoreVirtual VSA nodes resided on HPE Synergy 480 Compute Modules in separate HPE Synergy 12000 Frames, which also held the HPE Synergy D3940 Storage Modules. The HPE StoreVirtual Failover Manager (FOM) was placed on the HPE ProLiant DL360 rack servers, providing in-rack high availability. The virtual machine files were placed on the shared HPE StoreVirtual VSA volumes. The optional HPE 3PAR StoreServ all-flash array provided extended storage features (such as deduplication) for virtual machines and large design and data files that require higher storage capacity and IOPS. More information on the HPE Synergy D3940 Storage Module can be found here. For a complete array of HPE storage options, visit the HPE storage website.
• HPE Synergy 480 Compute Modules support NVIDIA Tesla M6 GPUs in a single-wide form factor with a single GPU and in a double-wide form factor with the HPE Synergy 480 Multi MXM Expansion Module that can hold up to 6 NVIDIA Tesla M6 GPUs. The three-frame HPE Synergy configuration used for this Reference Configuration provides up to 30 single-wide HPE Synergy 480 Compute Modules, or up to 14 double-wide HPE Synergy 480 Compute Modules, plus two HPE StoreVirtual VSA nodes. The NVIDIA Tesla M6 GPU in graphics mode can be used as a bare-metal pass-through or in a vGPU configuration. With this design, customers have the ability to determine a user density that is most suited for their workloads and grow their environment as needed. For more information on HPE Synergy 480 Compute Modules, visit the HPE website at: HPE Synergy 480 Compute Modules.
• Organizations are challenged with providing a secure and high-performance work environment to their local and remote users while keeping costs down. Virtualized GPUs (vGPUs) address this challenge effectively. Citrix XenDesktop and XenApp have supported GPU sharing for high-end graphics for a number of years. This Reference Configuration uses Citrix XenDesktop to deliver high-end graphics in a VDI environment. With NVIDIA Tesla M6 GPUs and Citrix XenDesktop and XenApp, organizations can provide their high-end users with the performance that they expect and need for engineering, design, and video applications, while centrally securing and managing valuable sensitive information and intellectual property. For this Reference Configuration, Windows 7 (64-bit) and Windows 10 virtual machines were created with shared vGPU profiles using Citrix XenDesktop Machine Creation Services (MCS). This configuration was chosen for simplicity and quick setup rather than a technical preference.
• NVIDIA developed a parallel computing model named CUDA (Compute Unified Device Architecture) that allows programming languages direct access to the parallel computing elements of the GPU, enabling the execution of the compute kernel routines. This dual capability of the NVIDIA GPUs provides a unique opportunity for organizations that run a mix of high-end graphics and compute workloads to reduce TCO (total cost of ownership) by running the workloads on the same hardware. This Reference Configuration demonstrates the use of HPE Synergy to orchestrate the two use cases on the same underlying hardware. For more information on NVIDIA GPUs, visit NVIDIA’s website at: nvidia.com
• Automation and orchestration are native to HPE Synergy Composer powered by HPE OneView and are core to its composability story. HPE Synergy delivers automation through a unified API that provides a single interface to discover, inventory, configure, provision, update, and diagnose the Composable Infrastructure in a heterogeneous environment. This Reference Configuration uses HPE Synergy Composer powered by HPE OneView and HPE Synergy Image Streamer along with simpler and widely used tools, such as Windows PowerShell and VMware PowerCLI, to demonstrate the on-demand composability feature of the HPE Synergy platform.
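Because the unified API is exposed as REST over HTTPS, any HTTP client can drive composition, not only the PowerShell library. The sketch below is illustrative only: the appliance address and credentials are placeholders, and the API version header should match what your appliance reports; the live calls are left commented out, so consult the HPE OneView REST API reference before running them.

```shell
# Hedged sketch: driving the HPE OneView unified API with plain curl.
# The appliance address, credentials, and API version are placeholders.
OV="https://oneview.example.com"     # hypothetical appliance address
API_VER="X-Api-Version: 300"         # match your appliance's reported version

login_url="$OV/rest/login-sessions"
profiles_url="$OV/rest/server-profiles"
login_body='{"userName":"administrator","password":"<password>"}'

# Against a live appliance (not executed in this sketch):
#   TOKEN=$(curl -skX POST "$login_url" -H "$API_VER" \
#     -H "Content-Type: application/json" -d "$login_body" \
#     | sed -n 's/.*"sessionID":"\([^"]*\)".*/\1/p')
#   curl -sk "$profiles_url" -H "$API_VER" -H "Auth: $TOKEN"
echo "would query: $profiles_url"
```

The same two endpoints are what the HPE OneView PowerShell library wraps; scripting against the raw API is only needed when a library for your tooling does not exist.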
User scenarios

A proof-point for composability
This Reference Configuration aims to demonstrate the use of fluid resource pools to effortlessly compose and recompose a high-end graphics VDI environment into a data analytics environment, and vice versa, on a single block of disaggregated compute, storage, and fabric infrastructure. It serves as a proof point for dynamically switching workloads from one use case to workloads in an entirely different use case within minutes by using simple and widely used tools, such as Windows PowerShell and VMware PowerCLI, in conjunction with the unified API and the HPE OneView PowerShell library, to demonstrate the on-demand composability feature of the HPE Synergy platform.
Reference Architecture Page 11
NVIDIA M6 GPU operating modes
This Reference Configuration uses the NVIDIA Tesla M6 GPU's two operating modes to demonstrate on-demand switching between the two use cases, using the same hardware, with minimal user intervention. A brief description of the two modes follows.

• Compute mode: Used when the GPU's graphical capabilities are not needed. In this mode the GPU is optimized for high-performance computing (HPC) applications.
• Graphics mode: Used when the GPU serves as a primary display device and high-end graphics applications are the primary use case. In this mode, desktop virtualization technologies, such as Citrix XenDesktop, can share the GPU's resources across virtual machines.
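A switch between the two modes is only needed when the GPU is not already in the desired mode, and it always implies a host reboot. The helper below sketches that decision; the sample string stands in for real `gpumodeswitch --listgpumodes` output (the same utility used by the plan scripts in Appendix A), so treat it as illustrative rather than exact NVIDIA output.

```shell
# Hedged sketch: decide whether a GPU mode switch (and reboot) is required.
# needs_switch DESIRED_MODE LISTGPUMODES_OUTPUT
needs_switch() {
  case "$2" in
    *"$1"*) return 1 ;;   # already in the desired mode - nothing to do
    *)      return 0 ;;   # mode switch and reboot required
  esac
}

# Illustrative text; on a host this would come from `gpumodeswitch --listgpumodes`
sample="Tesla M6 is currently configured in graphics mode"

if needs_switch "compute" "$sample"; then
  echo "switch required: gpumodeswitch --gpumode compute --auto, then reboot"
fi
```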
Figure 3. Composable Infrastructure running multiple use cases on same hardware
As a primary use case, a three-frame HPE Synergy starter configuration was used to house single-wide form factor HPE Synergy 480 Compute Modules, each with a single NVIDIA Tesla M6 GPU adapter. A double-wide form factor consisted of an HPE Synergy 480 Compute Module along with an HPE Synergy 480 Multi MXM Expansion Module that housed 6 x NVIDIA Tesla M6 GPUs. Citrix XenDesktop MCS virtual machines ran the trial version of Autodesk CAD and simulations that were used for demo purposes.
Note Performance testing was not a goal for this demonstration and none was performed. HPE recommends that for best results, customers consult with HPE Services or an HPE reseller to conduct a deeper assessment of their environment and users. This will yield the best overall end-user experience and optimal return on investment.
One way to reduce total cost of ownership (TCO) and maximize return on investment (ROI) is to utilize purchased or leased hardware to its maximum capacity across its availability lifecycle. In most cases hardware sits idle outside of office hours when no design work is being performed. Hardware use could be maximized if there were a way to repurpose the hardware temporarily during this idle period and revert to its primary use case during office hours. With HPE Composable Infrastructure and the NVIDIA GPU dual-mode capability, this is now possible. Based
on this premise, an analytics compute environment was set up to run sample data modeling and predictive analytics using R language. This environment used NVIDIA GPU’s compute mode to run various data sampling models. This workload can run during hours when GPU’s graphics capabilities are not needed. HPE Synergy Image Streamer and HPE OneView API were used to repurpose the hardware as needed by the two use cases in an automated fashion.
Note
R is a programming language and environment for statistical computing and data analysis. This Reference Configuration runs sample R routines as a simple and easy-to-implement example. Sample R routines and the R software can be downloaded from r-project.org.
Figure 4 shows a snapshot of the R GUI running sample routines.
Figure 4. R GUI running sample survey sampling routines
Note The installation and configuration instructions for the Citrix XenDesktop and NVIDIA GPU software, and the analytics workload setups are not within the scope of this document. References are provided where appropriate, to guide the reader to set up these workload environments when needed. For information on setting up NVIDIA GPUs on Citrix XenDesktop, visit the NVIDIA GRID Citrix XenDesktop page.
Logical design and workflow
Figure 5 below depicts the workflow to repurpose the hardware.
Figure 5. HPE Synergy logical workflow
Configuration guidance for the solution
Figure 6 below shows a high-level workflow of the setup for deploying the two use cases on the HPE Synergy platform.
Figure 6. High-level workflow
Figure 6 depicts, for each use case, the same sequence of steps once the HPE Synergy hardware is ready for automation:

• Graphics environment: install VMware ESXi plus the GPUMODESWITCH utility, capture the ESXi golden image, create the Image Streamer plan scripts, the OS build plan, and the deployment plan, then create the ESXi HPE OneView profile.
• Compute environment: install RHEL plus the R services, capture the RHEL golden image, create the Image Streamer plan scripts, the OS build plan, and the deployment plan, then create the RHEL HPE OneView profile.
HPE Synergy hardware configuration
HPE Synergy hardware for this Reference Configuration was set up using the HPE Synergy Configuration and Compatibility Guide. This section describes the setup of components specific to this Reference Configuration. HPE Synergy Image Streamer requires a minimum of three HPE Synergy 12000 Frames in a production environment. For this Reference Configuration three HPE Synergy 12000 Frames were configured, with redundant HPE Synergy Composers and HPE Synergy Image Streamers. Two HPE Synergy 480 Compute Modules with HPE Smart Array P542D Controllers were used to host the HPE StoreVirtual VSA nodes. These compute modules resided in the HPE Synergy 12000 Frames with HPE Synergy D3940 Storage Modules, with logical volumes mapped from disks in the HPE Synergy D3940 Storage Modules. These volumes served as datastores on which the HPE StoreVirtual VSA volumes were created. Four HPE Synergy 480 half-height, single-wide form factor compute modules, each fitted with a single NVIDIA Tesla M6 GPU adapter, were also part of the solution. Each HPE Synergy 12000 Frame had one HPE Synergy 480 half-height, double-wide form factor compute module that included an HPE Synergy 480 Multi MXM Expansion Module with 6 NVIDIA Tesla M6 GPUs. Figure 7 below depicts this configuration.
Figure 7. HPE Synergy 480 Compute Modules and HPE StoreVirtual VSA configuration. Legend: HPE Synergy 480 Compute Module with 1x NVIDIA Tesla M6 GPU; HPE Synergy 480 Compute Module with HPE Smart Array P542D Controller holding VSA volumes; HPE Synergy 480 Compute Module + HPE Synergy 480 Multi MXM Expansion Module with 6x NVIDIA Tesla M6 GPUs

The HPE StoreVirtual VSA Failover Manager (FOM) was installed on the HPE ProLiant DL360 Gen9 management servers, providing in-rack high availability. These two HPE ProLiant DL360 Gen9 management servers held the domain and solution management VMs, and the Citrix XenDesktop infrastructure. The two HPE ProLiant DL360 Gen9 servers are optional if the required infrastructure already exists or can be installed external to
the HPE Synergy solution stack. This Reference Configuration uses HPE StoreVirtual VSA with HPE Synergy D3940 Storage Modules. The HPE Synergy D3940 can also be used with other software-defined storage solutions.
Note HPE StoreVirtual Failover Manager (FOM) is designed to provide automated and transparent failover capability in a 2-node cluster configuration. For fault tolerance in a single-site configuration, the FOM runs as a virtual appliance in either a VMware vSphere, Microsoft Hyper-V Server, or Linux KVM environment, and must be installed on storage that is not provided by the HPE StoreVirtual installation it is protecting. Information on HPE StoreVirtual setup is available in the HPE StoreVirtual Architecture guide.
Table 4 shows HPE Synergy 480 Compute Module configurations based on usage.
Table 4. Typical HPE Synergy 480 Compute Module specifications
With single NVIDIA Tesla M6 GPU
• Processors: Intel Xeon CPU E5-2697A v4 @ 2.60GHz
• Memory: HPE SmartMemory 2133 MHz (256GB)
• Network: HPE Synergy 3820C 10/20Gb CNA
• Graphics: HPE Synergy 480 NVIDIA Tesla M6 FIO Mezzanine Card *
• Fibre Channel: HPE Synergy 3830C 16G FC HBA

With 6 x NVIDIA Tesla M6 GPUs
• Processors: Intel Xeon CPU E5-2697A v4 @ 2.60GHz
• Memory: HPE SmartMemory 2133 MHz (256GB)
• Network: HPE Synergy 3820C 10/20Gb CNA
• Graphics: NVIDIA Tesla M6 2X GPU Kit for HPE Synergy 480 with MXM GPU Expansion
• Fibre Channel: HPE Synergy 3830C 16G FC HBA
• Graphics expansion: HPE Synergy 480 Multi MXM Expansion Module

As HPE StoreVirtual VSA nodes
• Processors: Intel Xeon CPU E5-2697A v4 @ 2.60GHz
• Memory: HPE SmartMemory 2133 MHz (256GB)
• Network: HPE Synergy 3820C 10/20Gb CNA
• SAS storage controller: HPE Smart Array P542D Controller
Note
* Must be installed in Mezz 1. Due to heatsink size, no other card may be installed in Mezz 2, and the HPE Smart Array P542D/2GB FBWC 12Gb Mezzanine SAS Controller, which provides connectivity to direct-attached storage, cannot be installed in the same server because of the same size constraints.
HPE Synergy supports both traditional single-frame networking and a multi-frame, single-switch architecture. The multi-frame architecture is referred to as a Master/Satellite fabric. In a Master/Satellite fabric configuration, there is a single fabric switch or HPE Virtual Connect module (Master) whose ports span multiple HPE Synergy 12000 Frames through (Satellite) interconnect modules. Logically, all compute modules in a multi-frame Master/Satellite configuration are directly connected to the master switch or master HPE Virtual Connect module. The three-frame Reference Configuration was in a Master/Satellite configuration. The master module contains intelligent networking capabilities that extend connectivity to satellite frames through Interconnect Link Modules. This eliminates the need for top-of-rack (ToR) switches, because new satellite frames connect to the master module rather than to a ToR switch, and substantially reduces
cost. The reduction in components also simplifies fabric management at scale while consuming fewer ports at the data center aggregation layer. Figure 8 below depicts this Master/Satellite fabric connectivity configuration.
Figure 8. HPE Synergy Master/Satellite frame connectivity
Creating HPE OneView server profiles
For both use cases, separate HPE OneView profiles were created for the HPE Synergy 480 Compute Modules. Deploying images using HPE Synergy Image Streamer is a two-step process:
Step 1: Create an empty volume and deploy the profile. For initial configuration, the OS deployment plan creates an empty volume on the HPE Synergy Image Streamer local storage. This volume will be mapped to the HPE Synergy 480 Compute Module as an iSCSI volume when the server boots up, and this is where VMware ESXi will be installed. A 10GB volume was created and HPE iLO was used to install VMware ESXi. Figure 9 shows a snapshot of Step 1.
Step 2: Deploy an ESXi image using an ESXi deployment plan.
Figure 9 shows a snapshot of a VMware ESXi profile created for this Reference Configuration.
Figure 9. HPE Synergy 480 VMware ESXi server profile
A separate server profile was created for the Linux OS and RHEL 7.3 was installed via iLO.
Server profile BIOS settings
The default BIOS of the HPE Synergy Compute Modules is already optimized for most use cases. For the high-end graphics VDI configuration, the BIOS settings were tweaked to maximize performance and user experience. Figure 10 captures these BIOS settings.
Figure 10. HPE Synergy 480 VMware ESXi server profile BIOS settings
Creating golden images
Two golden images were captured, one for each use case. Creating golden images for VMware ESXi and RHEL for capture and deployment involves the following tasks.
VMware ESXi 6.0U2
Assign a profile to the HPE Synergy 480 Compute Module and power on the server. Figure 11 shows a snapshot of a profile creation screen.
Figure 11. HPE Synergy Composer server profile creation
VMware ESXi needs to be installed on an HPE Synergy Image Streamer OS volume using iLO. The VMware ESXi installer will detect the HPE Synergy Image Streamer OS volume as an iSCSI disk. Figure 12 below shows a snapshot of this process.
Figure 12. VMware ESXi installation on HPE Synergy Image Streamer iSCSI volume
A shared datastore needs to be created and mounted to VMware ESXi. For this Reference Configuration, a 10GB shared HPE StoreVirtual VSA volume was configured to hold the datastore used for storing drivers and other software components. The NVIDIA host driver for VMware ESXi was uploaded to the datastore. At this point, note the path to the driver package, as it will be used to automate driver installation. The gpumodeswitch utility (downloaded from the NVIDIA license portal) is installed via the .vib file provided specifically for VMware ESXi.
A golden image of the HPE Synergy Image Streamer OS volume on which VMware ESXi is installed needs to be created. Two options are available for capturing the golden image: capture the image of the OS volume as is, or "generalize" the image by removing installation-specific attributes. For this Reference Configuration, the image was captured as is. If the generalize option is used, custom attributes such as IP addresses, usernames, passwords, and hostnames need to be defined in the server profile templates. Figure 13 below shows a snapshot of the HPE Synergy Image Streamer capture screen showing the two options.
Figure 13. HPE Synergy Image Streamer Golden Image creation
The volume the OS is installed on can be determined by looking at the HPE OneView server profile's deployment settings, as shown in Figure 14 below.
Figure 14. Identify OS Volume
RHEL 7.3
The process of creating the golden image for RHEL is similar to creating the ESXi image and involves the following steps.
1. Create an HPE OneView server profile with an OS deployment plan that creates an empty OS volume of size 40 GB and assign the profile to a compute node.
2. Install RHEL 7.3 on the empty volume with default settings or any modification required to your specific environment.
3. Upload and install gpumodeswitch on RHEL 7.
4. Change the mode if the GPU is not already in compute mode.
5. Disable the default nouveau driver by adding "options nouveau modeset=0" to "/etc/modprobe.d/nouveau.conf".
6. Install required kernel headers, dev packages, CUDA Toolkit, and NVIDIA drivers. Follow the installation instructions in Appendix B.
7. Install R and its dependencies:
$ su -c 'rpm -Uvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm'
$ sudo yum update
$ su -c 'yum install R R-core R-core-devel R-devel'
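Step 5 in the list above can be sketched as follows. The `ROOT` prefix is only there so the snippet can be exercised outside a real RHEL host; on the target system it would be empty, writing to /etc directly, and the initramfs rebuild is shown only as a comment because it is host-specific.

```shell
# Hedged sketch of step 5: disable the default nouveau driver so the NVIDIA
# driver can bind to the GPU. ROOT is a test prefix; leave it empty on a host.
ROOT="${ROOT:-$(mktemp -d)}"
mkdir -p "$ROOT/etc/modprobe.d"
echo "options nouveau modeset=0" > "$ROOT/etc/modprobe.d/nouveau.conf"

# On the host, rebuild the initramfs and reboot so the change takes effect:
#   dracut --force && reboot
cat "$ROOT/etc/modprobe.d/nouveau.conf"
```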
HPE Synergy Image Streamer plan scripts
HPE Synergy Image Streamer plan scripts are key to the composability aspect of this HPE Synergy solution. Descriptions of the plan scripts specific to this Reference Configuration are listed below. The actual scripts used can be found in Appendix A.
• HPE-ESXi6.0u2-MountBootPartition-4-1-2017
Mounts ESXi partition 5 (/bootbank) so that ESXi configuration files can be accessed for personalization.
• HPE-ESXi6.0u2-UnpackState-4-1-2017
VMware ESXi boots from a partition called "bootbank". This partition stores its configuration files in a compressed tar archive format. This build step downloads and extracts the VMware ESXi configuration files from the bootbank partition and keeps them ready for personalization.
• HPE-ESXi6.0u2-SwitchGPUMode-4-1-2017
This script uses the NVIDIA gpumodeswitch utility to check the GPU operating mode. If the mode is compute, it is changed to graphics to be used by the virtual machines. The script also installs the NVIDIA host driver for VMware ESXi, which is stored on the shared HPE StoreVirtual VSA datastore. The HPE Synergy 480 Compute Module will reboot to apply settings for GPU mode and driver installation.
• HPE-ESXi6.0u2-RepackState-4-1-2017
Updates configuration files, including local.sh, which contains the personalization commands, and uploads them to the VMware ESXi bootbank area.
• HPE-ESXi6.0u2-UmountBootPartition-4-1-2017
Unmounts VMware ESXi bootbank partition.
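The bootbank state layout that these plan scripts manipulate can be reproduced locally. This sketch builds a miniature state.tgz with the same nesting (local.tgz inside state.tgz, holding etc/rc.local.d/local.sh) and unpacks it the way the esxi_unpack helper in Appendix A does; the file contents here are placeholders, not real ESXi state.

```shell
# Hedged sketch: reproduce the nested ESXi state archive the plan scripts
# unpack and repack (state.tgz -> local.tgz -> etc/rc.local.d/local.sh).
work=$(mktemp -d); cd "$work"

# Build a miniature state archive with placeholder content
mkdir -p build/etc/rc.local.d
printf '#!/bin/sh\n# personalization commands go here\n' > build/etc/rc.local.d/local.sh
tar -C build -czf local.tgz etc
mkdir state_src && mv local.tgz state_src/
tar -C state_src -czf state.tgz local.tgz

# Unpack it the way esxi_unpack does: state.tgz first, then the inner local.tgz
mkdir esxi_state esxi_local
tar -C esxi_state -xzf state.tgz
tar -C esxi_local -xzf esxi_state/local.tgz
cat esxi_local/etc/rc.local.d/local.sh
```

Repacking is the mirror image: recreate local.tgz from the edited tree, then recreate state.tgz around it, which is exactly what the RepackState plan script does.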
Creating HPE Synergy Image Streamer OS build plans
OS build plans are created using the HPE Synergy Image Streamer UI. OS build plans can be of type "Deploy" to deploy images or of type "Capture" to capture images. Each plan script is assigned a sequence number when the build plan is created, and this number determines the order in which the plan scripts are executed. Figure 15 shows a snapshot of the OS build plan creation screen.
Figure 15. HPE Synergy Image Streamer OS build plan
HPE Synergy Image Streamer deployment plans
HPE Synergy Image Streamer deployment plans determine the OS image that will be deployed and the OS build plan used to deploy the image. The deployment plan is later used by HPE OneView server profiles to deploy the image in the sequence defined by the build plan. Figure 16 shows a sample screenshot of the deployment plan creation screen.
Figure 16. HPE Synergy Image Streamer deployment plan
Automating for composability
Automation and orchestration are native to HPE Synergy and are core to its composability story. In HPE Synergy Composer, the fully programmable HPE OneView interface integrates with over a dozen popular management tools, such as Microsoft System Center, Chef, Docker, Puppet, Ansible, PowerShell, Python, and VMware vCenter. This Reference Configuration uses simple and widely used tools, such as Windows PowerShell and VMware PowerCLI, to demonstrate the on-demand composability feature of the HPE Synergy platform. The following components were installed on the automation systems.
• HPE OneView 3.0 PowerShell Library
• VMware PowerCLI 6.5 or later
• Windows PowerShell 3.0 or later
• Citrix XenDesktop PowerShell SDK (Citrix XenApp and XenDesktop Remote PowerShell SDK)
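The actual orchestration was written in Windows PowerShell using these components. Purely to illustrate the sequencing shown in Figure 17, the sketch below models the graphics-to-compute path with placeholder shell functions; the function names are invented stand-ins, not real cmdlets or library calls.

```shell
# Hedged sketch of the graphics-to-compute sequence (Figure 17). Each
# placeholder function stands in for a PowerCLI, Citrix, or HPE OneView call.
stop_cad_vms()        { echo "stop CAD VMs"; }
enter_maintenance()   { echo "ESXi host and Citrix DDC into maintenance mode"; }
shutdown_host()       { echo "shut down ESXi host"; }
swap_profile()        { echo "unassign $1 profile, assign $2 profile"; }
boot_compute_image()  { echo "boot RHEL image (GPU switches to compute mode)"; }
run_analytics()       { echo "run R analytics workload"; }

compose_compute_environment() {
  stop_cad_vms
  enter_maintenance
  shutdown_host
  swap_profile graphics compute
  boot_compute_image
  run_analytics
}

compose_compute_environment
```

The compute-to-graphics path is the same skeleton with the steps reversed: shut down the analytics OS, swap the compute profile for the graphics profile, boot the ESXi image, exit maintenance mode, and restart the CAD VMs.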
Figure 17 below outlines the automation workflow generated by the scripts.
Figure 17. Automation workflow. The graphics-to-compute path stops the CAD VMs, places the ESXi host and Citrix DDC in maintenance mode, shuts down the host, un-assigns the graphics profile and assigns the compute profile, changes the GPU mode, boots the RHEL image, and runs the analytics workload; the compute-to-graphics path reverses these steps.
Capacity and sizing
Sizing an accelerated graphics solution depends on user performance requirements and the applications being used. Graphics workloads are highly application-centric, and sizing and performance metrics vary widely depending on the type and number of applications in use. Newer applications and operating systems are more resource hungry than ever. For example, Windows 10 is much more graphics-intensive than its predecessors. As organizations move from Windows 7 to Windows 10 environments, it is important for them to reevaluate the requirements of their virtual desktops prior to the transition. This Reference Configuration provides flexible storage and compute options and can be tailored to fit customer requirements. It should only be used as a reference or a building block for designing production environments. Performance and environment sizing is unique to each deployment and must be performed on an individual basis.
Graphics workloads vary depending on many factors including:
• Number of applications
• Types of applications
• File sizes
• Number of monitors and their resolution
Since frame buffer is a dedicated resource in accelerated graphics deployments, it can help determine the overall density of the environment. To understand the impact of some of these factors on frame buffer usage, organizations must conduct real user testing to get specific scalability numbers for their environment. Table 5 lists the NVIDIA Tesla M6 vGPU profiles and their typical use cases.
Table 5. NVIDIA GRID Tesla M6 vGPU profiles
Card | vGPU profile name | Use case | Frame buffer (MB) | Max. vGPUs/VMs per GPU | Max. vGPUs/VMs per card
GRID M6 | GRID M6-8Q | High-end designer | 8,192 | 1 | 1
GRID M6 | GRID M6-4Q | Typical designer | 4,096 | 2 | 2
GRID M6 | GRID M6-2Q | Entry-level designer | 2,048 | 4 | 4
GRID M6 | GRID M6-1Q | Power user, designer | 1,024 | 8 | 8
GRID M6 | GRID M6-0Q | Power user, designer | 512 | 16 | 16
Note NVIDIA GRID M6-8Q and GRID M6-4Q profiles were used for accelerated graphics VM creation. Performance testing was out-of-scope for this Reference Configuration and none was performed. NVIDIA has performed extensive performance testing and provides guidance on most industry standard applications. Table 5, above, lists workload sizing recommendations based on testing done by NVIDIA on their Tesla M6 GPUs. Please visit the NVIDIA website for more information. For a complete list of NVIDIA GPU profiles, consult NVIDIA GRID GPU User Guide.
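Since each vGPU profile reserves a fixed slice of the Tesla M6's 8 GB frame buffer, the per-GPU densities in Table 5 follow directly from integer division, as this small sketch shows (the 8,192 MB total is taken from the M6-8Q row above):

```shell
# Derive per-GPU vGPU density from frame buffer size (matches Table 5).
card_fb_mb=8192   # NVIDIA Tesla M6: 8 GB frame buffer per GPU

max_vgpus_per_gpu() {   # $1 = profile frame buffer in MB
  echo $(( card_fb_mb / $1 ))
}

for profile_mb in 8192 4096 2048 1024 512; do
  echo "profile ${profile_mb} MB -> $(max_vgpus_per_gpu "$profile_mb") vGPUs per GPU"
done
```

This only bounds density by frame buffer; real user density also depends on CPU, memory, and the application mix, which is why NVIDIA's application-specific guidance should still be consulted.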
Summary
This Reference Configuration demonstrates the use of HPE Synergy's composability capabilities to effortlessly compose and recompose an NVIDIA GRID 2.0 high-end graphics VDI environment into a data compute environment on a single block of disaggregated compute, storage, and fabric infrastructure. It serves as a proof point for dynamically switching workloads from one use case to workloads in an entirely different use case within minutes by using simple and widely used tools. More specifically, it presents the following facts.
• The infrastructure-as-code capability of HPE Synergy accelerates transformation to a hybrid infrastructure and provides on-demand creation and delivery of applications and services with consistent governance, compliance, and integration. The unified API powered by HPE OneView creates, aggregates, and hosts internal IT resources so automation tools can provision on-demand and programmatically, without needing a detailed understanding of the underlying physical elements.
• HPE Synergy server profiles and templates are a powerful new way to quickly and reliably update and maintain existing infrastructure. HPE Synergy Composer uses templates to simplify one-to-many updates and manage HPE Synergy Compute Module profiles. These templates allow changes to be implemented automatically, significantly reducing manual interactions and errors.
• HPE Synergy Image Streamer enables HPE Synergy to quickly deploy new compute modules or update existing ones by booting them directly into their desired running OS in minutes.
• HPE Synergy offers unmatched NVIDIA GPU density per HPE Synergy 480 Compute Module via HPE Synergy 480 Multi MXM Expansion Module with up to 6 x NVIDIA Tesla M6 GPUs.
• Testing demonstrated that composing (i.e., switching between high-end graphics and data analytics use cases) took less than 5 minutes versus hours in a traditional data center environment.
• Infrastructure transformation via automation is made trivial by HPE Synergy's unified API and simple tools such as Windows PowerShell, VMware PowerCLI, etc. No in-depth coding expertise is required.
Implementing a proof-of-concept
As a matter of best practice for all deployments, HPE recommends implementing a proof-of-concept using a test environment that matches the planned production environment as closely as possible. In this way, appropriate performance and scalability characterizations can be obtained. For help with a proof-of-concept, contact an HPE Services representative (hpe.com/us/en/services/consulting.html) or your HPE partner.
Appendix A: HPE Synergy Image Streamer plan scripts for ESXi
Note The scripts provided in this document are to be used subject to compliance with the license and usage statement below. “Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the license. You may obtain a copy of the License at apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the license is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.”
HPE-ESXi6.0u2-MountBootPartition-4-1-2017

##############################################################################
# 1-HPE-ESXi6.0u2-MountBootPartition-4-1-2017
# - Script mounts ESXi boot partition
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
##############################################################################
# Script mounts the /bootbank area for ESXi 6.x
# Typical partition layout is:
#   1 - UEFI ESP
#   5 - /bootbank  <= holds ESXi host state to be configured
#   6 - /altbootbank
# Tip: If local.sh does not appear to be modified but the script ran
# correctly, check your bootbank and altbootbank for being "lopsided". One
# easy way is to check the size of the two bootbanks. A reboot should bring
# them back in sync.

echo "########################################"
echo "Mount ESXi /bootbank"
echo "########################################"

# List storage layout found in ESXi Golden Image / OS Volume
echo "Devices:"
-list-devices
echo
echo "Partitions:"
-list-partitions
echo
echo "File systems:"
-list-filesystems
echo
echo "Mount file systems:"
echo "/dev/sda5 is assumed to hold ESXi host state configuration"
echo "mount /dev/sda5"
mount /dev/sda5 /
echo "File system details for /dev/sda5:"
-statvfs /
echo

HPE-ESXi6.0u2-UnpackState-4-1-2017

##############################################################################
# 2-HPE-ESXi6.0u2-UnpackState-4-1-2017
# - Script unpacks the host state on the OS volume
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
##############################################################################
echo "########################################"
echo "Copy and unpack ESXi host state"
echo "########################################"
echo "Create ImageStreamer temp directory"
-mkdir /ImageStreamer
echo
echo "Extract ESXi host configuration from Golden Image"
echo "Copy out boot.cfg"
download /boot.cfg boot.cfg
echo "Copy out state.tgz if present"
-download /state.tgz state.tgz
echo "Copy out onetime.tgz if present"
-download /onetime.tgz onetime.tgz
echo
echo "Build esxi_unpack ESXi host state unpack script"
upload -<<END /ImageStreamer/esxi_unpack
#! /bin/bash
DIR=`pwd`
echo "Finding ESXi host state configuration archive in Golden Image"
STATE=`grep -c onetime.tgz $DIR/boot.cfg`
if [ "$STATE" -eq "0" ]; then
  echo
  echo "Unpack state.tgz"
  mkdir $DIR/esxi_state
  cd $DIR/esxi_state
  tar xvpzf $DIR/state.tgz
  echo
  echo "Unpack local.tgz"
  mkdir $DIR/esxi_local
  cd $DIR/esxi_local
  tar xvpzf $DIR/esxi_state/local.tgz
  echo
else
  echo
  echo "Unpack onetime.tgz"
  mkdir $DIR/esxi_onetime
  cd $DIR/esxi_onetime
  tar xvpzf $DIR/onetime.tgz
  echo
fi
if [ -e etc/rc.local.d/local.sh ]; then
  cp etc/rc.local.d/local.sh $DIR/local.sh
fi
echo "Unpacking ESXi host state complete."
exit 0
END
download /ImageStreamer/esxi_unpack ./esxi_unpack
echo
echo "Build esxi_repack ESXi host state repack script"
upload -<<END /ImageStreamer/esxi_repack
#! /bin/bash
DIR=`pwd`
echo "---------------------------------------------------------------"
echo "Final ESXi host local.sh content for configuration at first boot:"
cat $DIR/local.sh
echo "---------------------------------------------------------------"
echo
echo "Finding ESXi host state configuration archive"
STATE=`grep -c onetime.tgz $DIR/boot.cfg`
if [ "$STATE" -eq "0" ]; then
  echo
  echo "Repack local.tgz"
  cd $DIR/esxi_local
  mkdir -p etc/rc.local.d
  cp $DIR/local.sh etc/rc.local.d/local.sh
  chmod 777 etc/rc.local.d/local.sh
  tar cvpzf $DIR/esxi_state/local.tgz *
  echo
  echo "Repack state.tgz"
  cd $DIR/esxi_state
  tar cvpzf $DIR/state.tgz *
  echo
else
  echo
  echo "Repack onetime.tgz"
  cd $DIR/esxi_onetime
  mkdir -p etc/rc.local.d
  cp $DIR/local.sh etc/rc.local.d/local.sh
  chmod 777 etc/rc.local.d/local.sh
  tar cvpzf $DIR/onetime.tgz *
  touch $DIR/state.tgz
  echo
fi
echo "Repacking ESXi host state complete."
exit 0
END
download /ImageStreamer/esxi_repack ./esxi_repack
echo
echo "Run esxi_unpack ESXi host state unpack script"
!source ./esxi_unpack
echo

HPE-ESXi6.0u2-SwitchGPUMode-4-1-2017

##############################################################################
# 3-HPE-ESXi6.0u2-SwitchGPUMode-4-1-2017
# - Script modifies the GPU mode to graphics and installs the NVIDIA driver
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
##############################################################################
echo "########################################"
echo "Change GPU mode to graphics"
echo "########################################"
upload -<<END /ImageStreamer/gpu_mode_graphics
#! /bin/bash
echo "change GPU mode to graphics"
cat <<"EOF" >>local.sh
# Check if the NVIDIA graphics driver is installed. If it is, exit the script.
esxcli software vib list | grep -i nvidia > /etc/rc.local.d/DriverInstalled.txt
if grep -Fxq "NVIDIA-vGPU-VMware_ESXi_6.0" /etc/rc.local.d/DriverInstalled.txt
then
  echo Found graphics > /etc/rc.local.d/graphicsdriverinstalled.txt
  exit 0
fi
# Check which mode the GPU is in
gpumodeswitch --listgpumodes > /etc/rc.local.d/mode.txt
# If the mode is graphics, install the NVIDIA driver. If not, change it to graphics.
if grep -Fxq "Graphics" /etc/rc.local.d/mode.txt
then
  echo Found graphics mode > /etc/rc.local.d/graphicsmodefound.txt
  esxcli software vib install -v /vmfs/volumes/VSA/NVIDIA-vGPU-VMware_ESXi_6.0_Host_Driver_367.64-1OEM.600.0.0.2494585.vib > /etc/rc.local.d/DRVinstallResult.txt
  reboot
else
  gpumodeswitch --gpumode graphics --auto
  echo Graphics Mode Changed > /etc/rc.local.d/modechangedgraphics.txt
  reboot
fi
EOF
exit 0
END
download /ImageStreamer/gpu_mode_graphics ./gpu_mode_graphics
echo "Run gpu_mode_graphics"
!source ./gpu_mode_graphics
echo "Configure Graphics Mode complete"

HPE-ESXi6.0u2-RepackState-4-1-2017

##############################################################################
# 4-HPE-ESXi6.0u2-RepackState-4-1-2017
# - Script packs ESXi host state which resides on the OS volume
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
##############################################################################
echo "Run esxi_repack ESXi host state repack script"
!source ./esxi_repack
echo "Copy in state.tgz if present"
-upload state.tgz /state.tgz
echo "Copy in onetime.tgz if present"
-upload onetime.tgz /onetime.tgz

HPE-ESXi6.0u2-UmountBootPartition-4-1-2017

##############################################################################
# 5-HPE-ESXi6.0u2-UmountBootPartition-4-1-2017
# - This HPE Synergy Image Streamer plan script unmounts the ESXi file system
# VERSION 1.0
# (C) Copyright 2016-2017 Hewlett Packard Enterprise Development LP
##############################################################################
echo "Remove ImageStreamer temp directory"
rm -rf /ImageStreamer
echo "Unmount file systems"
umount /
echo
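The unpack and repack scripts above round-trip ESXi's nested state archives: state.tgz wraps local.tgz, which wraps the /etc tree containing rc.local.d/local.sh. The sketch below reproduces that round trip with plain tar on any Linux host, so the mechanism can be studied outside Image Streamer; all file and directory names here are illustrative stand-ins, not the Image Streamer plan-script environment.

```shell
# Build a mock state.tgz with the same nesting ESXi uses:
#   state.tgz -> local.tgz -> etc/rc.local.d/local.sh
WORK=$(mktemp -d)
cd "$WORK"
mkdir -p etc/rc.local.d
echo '#!/bin/sh' > etc/rc.local.d/local.sh
tar czf local.tgz etc
tar czf state.tgz local.tgz

# Unpack, as esxi_unpack does: extract state.tgz, then local.tgz inside it
mkdir esxi_state esxi_local
tar xzf state.tgz -C esxi_state
tar xzf esxi_state/local.tgz -C esxi_local

# Edit the host's first-boot script, then repack in the reverse order,
# as esxi_repack does: inner archive first, then the outer one
echo 'echo "configured at first boot"' >> esxi_local/etc/rc.local.d/local.sh
tar czf esxi_state/local.tgz -C esxi_local etc
tar czf state.tgz -C esxi_state local.tgz
tar tzf state.tgz   # lists local.tgz: the outer archive is rebuilt
```

Because the inner local.tgz must be rebuilt before the outer state.tgz, any edit to local.sh that skips one of the two repack steps is silently lost at the next boot, which is what the plan scripts guard against.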
Note
Scripts in this document are specific to HPE Synergy Image Streamer. End-to-end automation scripts (PowerShell/PowerCLI) that use the HPE OneView API are not listed in this document.
Appendix B: NVIDIA CUDA Toolkit 8.0 and driver installation on RHEL 7
1. Install kernel headers and development packages:
$ sudo yum install kernel-devel-$(uname -r) kernel-headers-$(uname -r)
2. Download the CUDA Toolkit for Linux from:
https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda-repo-rhel7-8-0-local-ga2-8.0.61-1.x86_64-rpm
3. Download the Toolkit patch, which contains various bug fixes and enhancements, from:
https://developer.nvidia.com/compute/cuda/8.0/Prod2/patches/2/cuda-repo-rhel7-8-0-local-cublas-performance-update-8.0.61-1.x86_64-rpm
4. The NVIDIA Toolkit has DKMS dependencies that can be installed from the EPEL repository:
$ sudo yum install epel-release
$ sudo yum install dkms
5. Install repository metadata:
$ sudo rpm --install cuda-repo-<distro>-<version>.<architecture>.rpm
6. Clean YUM repository cache:
$ sudo yum clean expire-cache
7. Install CUDA:
$ sudo yum install cuda
8. Add the path to the Toolkit to the PATH variable:
$ export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
9. Download NVIDIA RHEL 7 Driver from:
http://www.nvidia.com/download/driverResults.aspx/118959/en-us
10. Install NVIDIA RHEL 7 drivers:
$ sudo rpm -i nvidia-diag-driver-local-repo-rhel7-375.66-1.x86_64.rpm
$ sudo yum clean all
$ sudo yum install cuda-drivers
$ sudo reboot
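The export in step 8 only affects the current shell, so it is common to persist the CUDA paths in a shell profile and then confirm the tools resolve after the reboot in step 10. The sketch below is one way to do both; the PROFILE variable, the appended LD_LIBRARY_PATH line, and the tool list are illustrative assumptions, not part of the NVIDIA procedure above.

```shell
#!/bin/sh
# Persist the CUDA 8.0 paths across logins. PROFILE is a stand-in for the
# user's real ~/.bashrc; the guard avoids appending duplicate entries.
PROFILE="${PROFILE:-$HOME/.bashrc}"
if ! grep -q 'cuda-8.0/bin' "$PROFILE" 2>/dev/null; then
  cat >> "$PROFILE" <<'EOF'
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
EOF
fi

# Report which CUDA components are visible on PATH. On a correctly
# configured host both should be found after the reboot; "NOT FOUND"
# points at the step to revisit (toolkit install or driver install).
for tool in nvcc nvidia-smi; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND"
  fi
done
```

Running `nvidia-smi` by hand after the reboot is the usual final check, since it confirms the kernel module loaded and enumerates the GPUs.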
© Copyright 2017-2018 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. NVIDIA is a trademark and/or registered trademark of NVIDIA Corporation in the U.S. and other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. Citrix and XenDesktop are trademarks of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries.
a00009296enw, April 2018, Rev. 2
Resources and additional links
HPE Reference Architectures hpe.com/info/ra
HPE GitHub site github.com/hewlettpackard
HPE Synergy hpe.com/synergy
HPE Storage hpe.com/storage
HPE Networking hpe.com/networking
Advisory and Transformation Services hpe.com/us/en/services/consulting.html
To help us improve our documents, please provide feedback at hpe.com/contact/feedback.