
Technical white paper

Red Hat Enterprise Virtualization on HP BladeSystem

Table of contents

Executive summary
Introduction
Overview
Solution components
HP BladeSystem
HP OneView
HP 3PAR StoreServ storage
Capacity and sizing
Analysis and recommendations
Configuration guidance
Getting started
Network switches
SAN switches
Intelligent PDUs
HP BladeSystem c7000
HP Virtual Connect FlexFabric
HP ProLiant DL360p Gen8
Configure the 3PAR array
Configure the SAN zoning
Build the cluster
Create the clustered VM environment
Install Red Hat Enterprise Virtualization Manager
Installing HP OneView for Red Hat Enterprise Virtualization
Deploying the Red Hat Enterprise Virtualization Hypervisor
Install Red Hat Enterprise Virtualization Hypervisor
Update the Virtual Connect profiles
Configure the Red Hat Enterprise Virtualization data center
Why use a VM?
Bill of materials
Summary
Implementing a proof-of-concept

Click here to verify the latest version of this document

Appendix: IP space
Appendix: SAN zoning
Glossary
For more information

Executive summary

HP delivers the most agile, reliable converged infrastructure platform, purpose-built for enterprise workloads such as virtualization and cloud, using HP BladeSystem and HP OneView. Together, HP BladeSystem and HP OneView deliver a single infrastructure and a single management platform with automation for rapid service delivery and rock-solid reliability with federated intelligence. HP BladeSystem is a modular infrastructure platform that converges compute, storage, fabric, management, and virtualization to accelerate operations and speed delivery of applications and services running in physical, virtual, and cloud-computing environments.

The Reference Configuration (RC) design described in this paper combines HP BladeSystem and Red Hat® Enterprise Virtualization, the open source choice for virtualizing workloads. At its core, the RC provides a foundation for building a high-performance Red Hat Enterprise Virtualization platform that has been optimized to consolidate and provision hundreds to thousands of workloads while providing extremely high availability at all levels – from the underlying network and storage fabrics up to the virtual machine (VM) layer.

With HP BladeSystem, solutions can be sized and scaled in a modular fashion, simplifying scaling up and out as additional resources are required. Additionally, the HP BladeSystem architecture helps to not only reduce the footprint of the solution but also reduce the environmental requirements through advanced power and thermal capabilities. HP Virtual Connect FlexFabric provides a converged fabric and the ability to specifically allocate network ports and associated bandwidth based on the needs of the solution. These technologies, along with the HP 3PAR StoreServ storage arrays, provide an extremely dense platform for the deployment of virtualized environments that require high levels of performance.

Target audience: This document is intended for technical decision-makers and solution architects.

Introduction

Most of today’s IT spending is consumed by operating and maintaining the existing infrastructure, leaving little room in the budget for new investments that can add value. While it may seem obvious that these ongoing costs need to be reduced, the inefficiency and inflexibility of your infrastructure may provide few opportunities to do so.

Key areas that may need to be addressed in your infrastructure include the following:

• Server utilization: Servers are often dedicated to individual applications, leading to massive underutilization of resources. Such servers can typically be virtualized and consolidated.

• Manageability: Giving each new application its own system configuration creates a significant management burden. When patches and upgrades have to be applied individually it becomes difficult to automate such management tasks; thus, in order to simplify the environment, it is important to minimize the number of hardware platforms, operating systems, and system configurations.

• Inflexibility: With a proprietary or monolithic infrastructure it can be difficult to scale applications up or down or provide any level of fault tolerance. The inherent inflexibility of such an infrastructure also makes it almost impossible to implement the updates required to accommodate changing business needs.

However, with virtualization, a single HP ProLiant server can run multiple application instances in isolation as virtual machines (VMs). Automated tools can allocate resources (CPU, storage, and networking) to individual VMs, allowing them to scale up and down in tune with the demands of the particular workload. Moreover, since each operating system and application environment is stored on a virtual disk, you can easily copy this disk and create one or many clones of the virtual machine.

VMs are also highly portable and can easily be migrated to a new physical server (host) to support maintenance activities on the original host or to better balance the workload between hosts. This portability also allows you to quickly restart an application on a new host if the original host were to fail.

There are a number of benefits that this design provides for organizations looking to deploy a virtualized environment. These benefits help to drive down both acquisition and operational costs and reduce your total cost of ownership (TCO).

Time to value

This architecture design helps to establish the basics needed to create a virtualized environment on HP BladeSystem, thus reducing the time-consuming tasks associated with designing and deploying the solution on-site.

Optimized and balanced architecture

When designing virtualized environments with varying workloads, it can be a challenge to ensure there is sufficient I/O capacity in the design to meet the requirements of the workloads while also efficiently utilizing the other server and network resources. One of the significant driving factors for customers pursuing virtualization initiatives is that a high percentage of servers in the data center often run at very low CPU utilization, consuming energy and floor space while performing very little work.

Enhanced efficiency and high availability

By standardizing platforms and system configurations, you can enhance IT efficiency and enable automation. The portability of VMs can enhance disaster recovery and business continuity.

Flexible

Based on the HP BladeSystem, the solution provides a phased growth design, allowing easy expansion of storage and/or compute nodes to improve I/O and processing power as needed. With HP Virtual Connect FlexFabric technology, the RC utilizes a single fabric that can be configured to meet the specific requirements for virtualization. It provides the flexibility to define individual networks and allocate bandwidth to those networks according to the utilization and availability requirements, while dramatically reducing the cabling and wiring complexity and associated fabric costs.

The RC provides the flexibility to build this solution using your own in-house IT staff or engage with experienced HP consultants to customize and tailor the design to meet the demands of your business.

Simplified management

The solution is built on HP Converged Infrastructure components and provides the infrastructure layer for deploying a private cloud. By deploying Red Hat Enterprise Virtualization Manager, customers can manage the entire lifecycle of both the virtual machines and the workloads running on them, while HP OneView allows customers to get deeper insight and monitoring control of the hardware.

Overview

This document provides specific details about the hardware and software configuration used. This reference configuration accommodates both I/O-intensive and CPU-intensive workloads and can be easily modified to increase storage or compute capabilities.

Figure 1 provides a graphical representation of the hardware configuration used in this reference configuration. This is only one of many possible configurations.

Figure 1: Hardware setup - Front view

The solution discussed in this paper is intended to operate as a fully self-contained virtual hosting solution. All management and storage is contained within the solution, with the ability to connect multiple virtual (VLAN) networks out to the local data center via trunked high-speed links. For the purposes of this paper, VLAN 120 is used to isolate all management traffic. VLAN 1 (default) is allowed to leave the rack to provide connectivity to the data center; additional VLANs can be added to isolate virtual machines as needed.

Solution components

HP BladeSystem

HP BladeSystem is a modular infrastructure platform that converges compute, storage, fabric, management, and virtualization to accelerate operations and speed delivery of applications and services running in physical, virtual, and cloud-computing environments. The unique design of the HP BladeSystem c-Class helps reduce cost and complexity while delivering better, more effective IT services to end users and customers.

HP BladeSystem with HP OneView delivers the Power of One – one infrastructure, one management platform. Only the Power of One provides leading infrastructure convergence, the security of federation, and agility through data center automation to transform business economics by accelerating service delivery while reducing data center costs. As a single software-defined platform, HP OneView transforms how you manage your infrastructure across servers, storage, and networking in both physical and virtual environments.

HP BladeSystem c7000 Enclosure

The HP BladeSystem c7000 Enclosure represents an evolution of the entire rack-mounted infrastructure, consolidating and repackaging featured infrastructure elements – computing, storage, networking, and power – into a single infrastructure-in-a-box that accelerates data center integration and optimization.

The BladeSystem enclosure infrastructure is adaptive and scalable. It transitions with your IT environment and includes modular server, interconnect, and storage components. The enclosure is 10U high and holds full-height and/or half-height server blades that may be mixed with storage blades, plus redundant network and storage interconnect modules. The enclosure includes a shared high-speed NonStop passive midplane with aggregate bandwidth for wire-once connectivity of
server blades to network and shared storage. Power is delivered through a passive pooled-power backplane that enables the full capacity of the power supplies to be available to the server blades for improved flexibility and redundancy. Power input is provided with a very wide selection of AC and DC power subsystems for flexibility in connecting to data center power.

You can populate a BladeSystem c7000 Enclosure with these components:

• Server, storage, or other optional blades

• Interconnect modules (four redundant fabrics) featuring a variety of industry standards including:

– Ethernet

– Fibre Channel

– Fibre Channel over Ethernet (FCoE)

– InfiniBand

– iSCSI

– Serial Attached SCSI (SAS)

• Hot-plug power supplies supporting N+1 and N+N redundancy

• BladeSystem Onboard Administrator (OA) management module

Figure 2: HP BladeSystem c7000 enclosure

HP ProLiant BL460c Gen9 Server Blade

Designed for a wide range of configuration and deployment options, the HP ProLiant BL460c Gen9 Server Blade provides the flexibility to optimize your core IT applications with right-sized storage for the right workload – resulting in lower total cost of ownership (TCO). This performance workhorse adapts to any demanding blades environment, including virtualization, IT and web infrastructure, collaborative systems, cloud, and high-performance computing. HP OneView, the converged management platform, accelerates IT service delivery through a software-defined approach to manage it all.

Figure 3: HP ProLiant BL460c Gen9 server blade

Performance

The HP ProLiant BL460c Gen9 Server Blade delivers performance with the Intel® Xeon® E5-2600 v3 processors and the enhanced HP DDR4 SmartMemory at a speed of 2133 MHz.

Flexibility

The flexible internal storage controller options strike the right balance between performance and price, helping to lower overall TCO.

Storage options

With the BL460c Gen9 Server Blade, you have standard internal USB 3.0 as well as future support for redundant Micro-SD and optional M.2 support for a variety of system boot alternatives.

HP Virtual Connect FlexFabric

HP Virtual Connect FlexFabric technology creates a dynamically scalable internal network architecture for virtualized deployments. Each c7000 enclosure includes redundant HP Virtual Connect FlexFabric 10Gb/24-port Modules that converge data and storage networks to blade servers over high-speed 10Gb connections. A single device thus eliminates network sprawl at the server edge, converging traffic inside the enclosure and connecting directly to external LANs and SANs.

Each FlexFabric module connects to a dual-port 10Gb FlexFabric adapter in each server. Each adapter has four FlexNICs on each of its two ports. Each FlexNIC can support guaranteed bandwidth for the storage, management, and production networks.

Virtual Connect (VC) FlexFabric modules and adapters aggregate traffic from multiple networks into a 10Gb link. Flex-10 technology partitions the 10Gb data stream into multiple (up to four) adjustable bandwidths, preserving routing information for all data classes. For network traffic leaving the enclosure, multiple 10Gb links are combined using 802.3ad trunking to the top-of-rack switches. These and other features of the VC FlexFabric modules make them an excellent choice for virtualized environments.

Figure 4: HP Virtual Connect FlexFabric 10Gb/24-port module

Alternatively, the c7000 supports the HP Virtual Connect FlexFabric 20Gb/40-port F8 Module. Based on open standards, with 40GbE uplinks and 20GbE downlinks, it addresses growing bandwidth needs in private and public cloud environments in a cost-effective manner. Using Flex-10 and Flex-20 technology with Fibre Channel over Ethernet and accelerated iSCSI, these modules converge traffic over high-speed 10Gb/20Gb connections to servers with HP FlexFabric Adapters. Each redundant pair of Virtual Connect FlexFabric modules provides eight adjustable downlink connections (six Ethernet and two Fibre Channel, six Ethernet and two iSCSI, or eight Ethernet) to dual-port 10Gb/20Gb FlexFabric Adapters on each server. Up to twelve uplinks, with eight Flexport and four QSFP+ interfaces, are available without splitter cables for connection to upstream Ethernet and Fibre Channel switches; with splitter cables, up to 24 uplinks are available. VC FlexFabric-20/40 F8 modules avoid the confusion of traditional and other converged network solutions by eliminating the need for multiple Ethernet and Fibre Channel switches, extension modules, cables, and software licenses.

HP OneView

The HP OneView single management platform is designed for the way people work, rather than for how devices are managed. HP OneView unifies processes, user interfaces (UIs), and application programming interfaces (APIs) across server, storage, and networking resources. The innovative HP OneView architecture is designed for converged management across servers, storage, and networks. The unified workspace allows your entire IT team to leverage the “one model, one data, one view” approach. This streamlines activities and communications for consistent productivity. Converged management provides you with a variety of powerful, easy-to-use tools in a single interface that’s designed for the way you think and work:

• Map View allows you to visualize the relationships between your devices, up to the highest levels of your data center infrastructure.

• Dashboard provides capacity and health information at your fingertips. Custom views of alerts, health, and configuration information can also be displayed for detailed scrutiny.

• Smart Search instantly gets you the information you want for increased productivity, with search support for all the elements in your inventory (for example, to search for alerts).

• Activity View allows you to display and filter all system tasks and alerts.

• Mobile access using a scalable, modern user interface based on HTML5.

Figure 5: HP OneView dashboard

HP 3PAR StoreServ storage

HP 3PAR StoreServ storage offers high performance to meet peak demands even during boot storms, login storms, and virus scans. This architectural advantage is particularly valuable in virtualized environments, where a single array must reliably support a wide mix of application types while delivering consistently high performance.

The HP 3PAR StoreServ architecture features mixed workload support that enables a single HP 3PAR StoreServ array to support thousands of virtual clients and to house both server and client virtualization deployments simultaneously, without compromising the user experience. Mixed workload support enables different types of applications (both transaction-based and throughput-intensive workloads) to run without contention on a single HP 3PAR StoreServ array.

Capacity and sizing

Sizing any environment requires knowledge of the applied workloads and the hardware resources. However, this is especially difficult in virtualized environments, as multiple different workloads are often applied, and resources support more applications than in the past.

HP BladeSystem allows compute and storage configurations to be defined and updated independently. If additional compute resources are required, additional blades can be added. If additional storage performance is needed, additional spindles can be added or the type of device changed. HP 3PAR StoreServ storage supports up to three (3) tiers of storage: Nearline, Fast Class and SSD depending on the need.

Depending on the number of processor cores you need, you can configure additional server blades to function as workload servers. Multiple processor options allow for the selection of various core densities and power consumption based on the design requirements. Memory options include a variety of configuration choices ranging from high performance to low voltage solutions.

Sizing a virtualization solution for servers requires a detailed understanding of the intended workloads. For example, a Business Intelligence workload has a very different profile from an OLTP workload, while desktop workloads are more consistent. Red Hat has developed a guide for sizing a desktop virtualization solution; the “Red Hat Enterprise Virtualization Sizing Guide for Desktops” is available from Red Hat Network (RHN) at https://access.redhat.com/site/articles/234833.

Analysis and recommendations

This document discusses the concepts and commands to build the management infrastructure and hypervisor nodes for a generic Red Hat Enterprise Virtualization solution. The concepts introduced here can be used to scale the HP BladeSystem up or down based on workload.

Configuration guidance

The following configuration guidance is based on one possible wiring configuration. At a high level, the documented solution consists of:

• 2 x HP ProLiant DL360p Gen8 management servers

• 2 x BladeSystem c7000 Platinum enclosures

• 8 x BL460c Gen9 server blades per enclosure

• 2 x FlexFabric Virtual Connect modules per enclosure

• 6 x 8Gb Fibre Channel uplinks per enclosure (3 per VC module)

• 4 x 10GbE uplinks per enclosure (2 per VC module)

• Virtual Connect stacking between enclosures

• HP 3PAR StoreServ 7400 4-node

In addition to the items above, the following are required to complete the configuration:

• 2 x 1m CAT5e or CAT6 network cables (crossover connections between ProLiant DL360p servers)

• Computer with DB9 serial port, RJ45 network port and terminal software supporting serial and SSH connections

• DB9 NULL (F/F) modem cable or USB to DB9 F cable1

• CAT5 network cable

Getting started

For installation, it is assumed Red Hat® Enterprise Linux® physical media is available. An ISO can also be attached using the virtual media capability of iLO 4 from an HTTP/FTP server attached to the local network on port 23 of the HP 5120 switch installed at U37. This port is reserved for support access to the environment and should be disabled when not in use.

Upon completion, a management network (10.251.0.0/22) will contain all internal management traffic within the rack. To avoid network collisions, this management traffic is confined to VLAN 120, which is configured not to leave the top-of-rack switches. Each system will have a connection on VLAN 1 which is routed through the top-of-rack HP 5920 switches to the data center. Additional VLANs can be configured based on need.

Network switches

Connect the DB9/RJ45 console cable (a blue, round cable with RJ45 and DB9 connectors) to the console port (Figure 6) on the HP 5120 switch in U37. Configure your preferred terminal program to connect on the serial port using 9600-8N1 and VT100 emulation.

Figure 6: HP 5120-24G EI Switch console port

1 If not available, the DL360p iLO 4 can be configured using a local keyboard and monitor.

Once connected, define the management VLAN and add the ports to it, as shown below.

system-view

lldp global enable

vlan 120

description Mgmt VLAN

name Mgmt_VLan

interface vlan 1

undo ip address dhcp-alloc

interface vlan 120

ip address 10.251.0.3 22

interface Bridge-Aggregation1

port trunk permit vlan 120

port trunk pvid vlan 120

undo port trunk permit vlan 1

interface range GigabitEthernet1/0/3 GigabitEthernet1/0/4 GigabitEthernet1/0/7 to GigabitEthernet1/0/10 GigabitEthernet1/0/12 to GigabitEthernet1/0/16

port access vlan 120

interface range GigabitEthernet1/0/21 to GigabitEthernet1/0/24

port access vlan 120

save

After saving the configuration, move the serial connection to the 5120 switch in U36 and repeat the process, using 10.251.0.4 for the IP address. After configuring the HP 5120 switches, move the console cable to the console port (Figure 7) on the HP 5920AF switch in U39. Your terminal should still be configured for 9600 8N1.

Figure 7: HP 5920AF-24XG Switch console port

After connecting to the HP 5920AF switch, we need to configure the management VLAN and enable management traffic to pass from the HP 5120 switches to the HP 5920 switches.

system-view

vlan 120

description Mgmt VLAN

name Mgmt_VLan

interface vlan 1

undo ip address dhcp-alloc

interface vlan 120

ip address 10.251.0.2 22

interface Bridge-Aggregation1

port trunk permit vlan 120

port trunk pvid vlan 120

undo port trunk permit vlan 1

interface Bridge-Aggregation2

port trunk permit vlan 120

port trunk pvid vlan 120

undo port trunk permit vlan 1

Connections to the DL360p management servers use LACP dynamic trunks, one port on each of the local IRF members, to provide a highly available 20GbE network connection. We need to add these trunks, one per management station, to the switch configuration.

interface Bridge-Aggregation6

description Trunk to CR1-Mgmt1

interface Ten-GigabitEthernet1/0/19

port link-aggregation group 6

interface Ten-GigabitEthernet2/0/19

port link-aggregation group 6

interface Bridge-Aggregation6

port link-type trunk

port trunk permit vlan 120

port trunk pvid vlan 120

link-aggregation mode dynamic

interface Bridge-Aggregation7

description Trunk to CR1-Mgmt2

interface Ten-GigabitEthernet1/0/20

port link-aggregation group 7

interface Ten-GigabitEthernet2/0/20

port link-aggregation group 7

interface Bridge-Aggregation7

port link-type trunk

port trunk permit vlan 120

port trunk pvid vlan 120

link-aggregation mode dynamic

Unlike the connections to the HP 5120 switches, we do not remove the ability to pass VLAN 1 on these connections. This ensures that the servers can connect (if desired) to the data center over these links.

The last set of connections we need to define on the network switches is the links to the BladeSystem c7000 enclosures; one (1) LACP trunk is needed for each Virtual Connect FlexFabric module. Create the trunks for the first enclosure.

interface Bridge-Aggregation18

description Trunk to R1 Enc-1 IC1

interface Ten-GigabitEthernet1/0/2

port link-aggregation group 18

interface Ten-GigabitEthernet2/0/2

port link-aggregation group 18

interface Bridge-Aggregation18

port link-type trunk

port trunk permit vlan 120

port trunk pvid vlan 120

link-aggregation mode dynamic

interface Bridge-Aggregation19

description Trunk to R1 Enc-1 IC2

interface Ten-GigabitEthernet1/0/4

port link-aggregation group 19

interface Ten-GigabitEthernet2/0/4

port link-aggregation group 19

interface Bridge-Aggregation19

port link-type trunk

port trunk permit vlan 120

port trunk pvid vlan 120

link-aggregation mode dynamic

Repeat the process for the second enclosure and save the configuration.

interface Bridge-Aggregation20

description Trunk to R1 Enc-2 IC1

interface Ten-GigabitEthernet1/0/1

port link-aggregation group 20

interface Ten-GigabitEthernet2/0/1

port link-aggregation group 20

interface Bridge-Aggregation20

port link-type trunk

port trunk permit vlan 120

port trunk pvid vlan 120

link-aggregation mode dynamic

interface Bridge-Aggregation21

description Trunk to R1 Enc-2 IC2

interface Ten-GigabitEthernet1/0/3

port link-aggregation group 21

interface Ten-GigabitEthernet2/0/3

port link-aggregation group 21

interface Bridge-Aggregation21

port link-type trunk

port trunk permit vlan 120

port trunk pvid vlan 120

link-aggregation mode dynamic

save

As with the DL360p trunk, we default traffic to the management VLAN (120) but allow tagged traffic from VLAN 1 to pass.

HP strongly recommends changing the switch passwords

If you have not already done so, HP strongly recommends changing the passwords on these switches and enabling password authentication for the console. Depending on your company security policies, you may also wish to disable Telnet connectivity on these switches.
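As a minimal sketch only (not taken from this paper, and assuming a Comware 5-style CLI; the console line may be named aux 0 or console 0 depending on the model, and the password value is a placeholder), console authentication and Telnet could be adjusted along these lines; verify the exact syntax against the switch documentation referenced below before use:

system-view

user-interface aux 0

authentication-mode password

set authentication password cipher <console-password>

quit

undo telnet server enable

save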

After saving your changes, the serial cable can be removed. The switch configuration for this solution is now complete; additional configuration commands can be found in the switch documentation: http://pro-networking-h17007.external.hp.com/us/en/support/converter/index.aspx?productNum=JG296A

SAN switches

Configuring the name and network information on the SAN switches requires a DB9-RJ45 console cable to connect to the serial port; this cable is included with the switch and is a grey, flat cable with RJ45 and DB9 connectors. Configure your terminal to connect on the serial port using 9600-8N1 and VT100 emulation.

HP strongly recommends changing the switch passwords

The default username and password for the B-Series SAN switches are admin/password. The B-Series switch has four levels of passwords: admin, user, factory, and root. If these have not been set yet, you will be prompted to set them when you log in.

Connect the serial cable to the console port on the SN6000B switch in U42. Once you have connected, log in as admin and set the IP address.

ipaddrset -ipv4 -add -host SANTOP -ethip 10.251.0.5 -ethmask 255.255.252.0 -dhcp off

dnsconfig --add -domain private-network.net -serverip1 10.251.2.7 -serverip2 10.251.2.8

Move the serial cable to the switch in U41 and repeat the process, replacing the host and ethip entries with “SANBOT” and 10.251.0.6, respectively (a sketch of the repeated commands follows). Once we have configured the rest of the environment and have collected WWIDs, we will need to reconnect to these switches, via serial cable or SSH, to complete the zoning configuration.
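For reference, a minimal sketch of the repeated commands on the second SN6000B switch, assuming the same subnet mask and DNS servers as above:

ipaddrset -ipv4 -add -host SANBOT -ethip 10.251.0.6 -ethmask 255.255.252.0 -dhcp off

dnsconfig --add -domain private-network.net -serverip1 10.251.2.7 -serverip2 10.251.2.8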

Intelligent PDUs

Configuring the Intelligent PDUs (iPDUs) requires several steps. To configure the initial networking, a serial connection is used. Using the included DB9-DB9 console cable (P/N 580655-002), connect your serial port to the “PC” (#8) port on the iPDU management module (Figure 8) for iPDU1.

Figure 8: iPDU Management

Configure your terminal for 115200 8N1 and press the reset button (#10); the reset button restarts only the management module and does not impact the PDU itself. While the controller is rebooting, you will be prompted to press a key to enter the Service menu. Once in the Service menu, follow the prompts to set the network information.

• 1. Module Configuration

• 1. Network Configuration

• 1. IPV4 Network Settings

• 1. IPV4 Static Address

– Enter New IP Address: 10.251.0.65

• 2. IPV4 Static Subnet Mask

– Enter Subnet Mask: 255.255.252.0

• 3. IPV4 Static Gateway

– Enter New Default Gateway: 10.251.0.65

• 0. Previous Menu

• 0. Previous Menu

• 2. System Configuration

• 1. Date/Time Configuration

• 1. Network Time Protocol (NTP)

• 1. Primary NTP Server

– Enter Primary NTP Server: 10.251.2.7

• 2. Secondary NTP Server

– Enter Secondary NTP Server: 10.251.2.8

• 5. NTP Client

– Enter 1 to enable, 2 to disable (1-2): 1

• 6. Accept Changes

• 0. Previous Menu

• 0. Previous Menu

• 0. Previous Menu

• s. Save New Changes and Restart

Because there is no gateway on the management network, both the IP Address and the default gateway are set to the same value. Wait for the restart to complete and verify the settings.

HP strongly recommends changing the password

The iPDUs are shipped from the factory with a single default user; the username and password are both “admin”. HP strongly recommends changing the password and adding users with non-administrative privileges as needed.

Move the serial cable to the next iPDU and repeat the process; there are up to 8 iPDUs in the compute rack and up to 4 in the 3PAR StoreServ 7000 rack. Table 1 contains the IP information for each iPDU.

Table 1: iPDU IP Assignments

iPDU          IP address
CR1-iPDU1     10.251.0.65
CR1-iPDU2     10.251.0.66
CR1-iPDU3     10.251.0.67
CR1-iPDU4     10.251.0.68
CR1-iPDU5     10.251.0.69
CR1-iPDU6     10.251.0.70
CR1-iPDU7     10.251.0.71
CR1-iPDU8     10.251.0.72
SRA-iPDU1     10.251.0.145
SRA-iPDU2     10.251.0.146
SRA-iPDU3     10.251.0.147
SRA-iPDU4     10.251.0.148

Additional information on iPDU management and configuration can be found at hp.com/go/ipdu.

HP BladeSystem c7000

The HP BladeSystem c7000 enclosure can be configured either from the front display or via a serial connection. If you configure via the front display, set only the Onboard Administrator (OA) IP address there; the remaining configuration can then be completed by connecting via SSH and using the same commands as would be performed over the serial connection.

To connect to the enclosure over the serial console, a DB9 Null Modem cable (F/F) is required. Connect the serial cable to the primary OA and configure your connection as 9600 8N1. Once connected, authenticate using the “Administrator” account and the password printed on the OA tag.

To connect to the enclosure via SSH, connect your system with a CAT5 cable to port 23 on the HP 5120 switch in U37. Obtain or set the OA IP from the front display panel and configure your system with an available IP (10.251.1.1/22). Using your SSH client connect to the IP of the primary OA as “Administrator” using the password printed on the OA tag.

Once connected, create a new user with administrative privileges. This user will be the same regardless of which of the two OA modules you connect to.

ADD USER "admin" "Password1234"

SET USER ACCESS "admin" ADMINISTRATOR

ASSIGN SERVER ALL "admin"

ASSIGN INTERCONNECT ALL "admin"

ASSIGN OA "admin"

ENABLE USER "admin"

After creating the new admin user, we need to configure the rack and enclosure information in the Onboard Administrator. Make sure to change the enclosure name for each new enclosure you are configuring.

SET RACK NAME HP_CR1

SET ENCLOSURE NAME CR1_Enc1

SET NTP PRIMARY 10.251.2.7

SET NTP SECONDARY 10.251.2.8

SET NTP POLL 720

ENABLE NTP

If you have not already done so, the next step is to configure the IP addresses for the Onboard Administrator modules. If you are connected to the OA over an SSH connection and are not using the new IP address you will be disconnected; reconnect and continue with the next set of commands to configure the blade server iLO and interconnect bays. Make sure to change
the name and IP address to reflect the enclosure you are currently connected to. The second c7000 enclosure uses 10.251.0.39 and 10.251.0.40 for OA #1 and #2 respectively.

SET OA NAME 2 CR1-Enc1-OA2

SET IPCONFIG STATIC 2 10.251.0.14 255.255.252.0 0.0.0.0 10.251.2.7 10.251.2.8

SET OA NAME 1 CR1-Enc1-OA1

SET IPCONFIG STATIC 1 10.251.0.13 255.255.252.0 0.0.0.0 10.251.2.7 10.251.2.8
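For the second enclosure, a minimal sketch of the corresponding commands, using the 10.251.0.39 and 10.251.0.40 addresses noted above and an assumed (not mandated) CR1-Enc2 naming convention:

SET OA NAME 2 CR1-Enc2-OA2

SET IPCONFIG STATIC 2 10.251.0.40 255.255.252.0 0.0.0.0 10.251.2.7 10.251.2.8

SET OA NAME 1 CR1-Enc2-OA1

SET IPCONFIG STATIC 1 10.251.0.39 255.255.252.0 0.0.0.0 10.251.2.7 10.251.2.8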

HP recommends the use of Enclosure Based IP Addressing (EBIPA) to assign the address to the server and interconnect bays. This allows for easy configuration when adding additional servers or interconnects. Each c7000 enclosure in this environment uses 8 IPs for interconnects and 16 IPs for servers. These addresses are contiguous with the IP address assigned to the Onboard Administrator modules. The first enclosure begins addressing the interconnect bays at 10.251.0.15 and server bays at 10.251.0.23. The second enclosure starts the interconnect bays at 10.251.0.41 and server bays at 10.251.0.49. By assigning in groups and providing the starting IP address the Onboard Administrator will automatically increment the IP address for each bay.

SET SCRIPT MODE ON

DISABLE EBIPA INTERCONNECT ALL

SET EBIPA INTERCONNECT DOMAIN "private-network.net" ALL

SET EBIPA INTERCONNECT GATEWAY NONE ALL

SET EBIPA INTERCONNECT NTP PRIMARY 10.251.2.7 ALL

SET EBIPA INTERCONNECT NTP SECONDARY 10.251.2.8 ALL

ADD EBIPA INTERCONNECT DNS 10.251.2.7 ALL

ADD EBIPA INTERCONNECT DNS 10.251.2.8 ALL

SET EBIPA INTERCONNECT 10.251.0.15 255.255.252.0 1-8

ENABLE EBIPA INTERCONNECT 1-8

DISABLE EBIPA SERVER ALL

SET EBIPA SERVER DOMAIN "private-network.net" 1-16

ADD EBIPA SERVER DNS 10.251.2.7 1-16

ADD EBIPA SERVER DNS 10.251.2.8 1-16

SET EBIPA SERVER 10.251.0.23 255.255.252.0 1-16

ENABLE EBIPA SERVER 1-16

SAVE EBIPA

SET SCRIPT MODE OFF
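When repeating this on the second enclosure, and assuming the same domain, DNS, and NTP settings, only the starting addresses change (10.251.0.41 for interconnect bays and 10.251.0.49 for server bays, as noted above). A minimal sketch of the two lines that differ:

SET EBIPA INTERCONNECT 10.251.0.41 255.255.252.0 1-8

SET EBIPA SERVER 10.251.0.49 255.255.252.0 1-16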

With the OA configured and the IP addresses for the OA and iLO 4 set, we need to create some iLO users for ease of management and later use by Red Hat Enterprise Virtualization Manager to perform power control operations on the virtualization hosts. This can all be done from the OA connection using the HPONCFG utility.

HPONCFG ALL << end_marker

<RIBCL VERSION="2.0">

<LOGIN USER_LOGIN="dummy_value" PASSWORD="UsingAutologin">

<USER_INFO MODE="write">

<ADD_USER

USER_NAME="admin"

USER_LOGIN="admin"

PASSWORD="Password1234">

<ADMIN_PRIV value ="Y"/>

<REMOTE_CONS_PRIV value ="Y"/>

<RESET_SERVER_PRIV value ="Y"/>

<VIRTUAL_MEDIA_PRIV value ="Y"/>

<CONFIG_ILO_PRIV value="Y"/>

</ADD_USER>

<ADD_USER

USER_NAME="fence"

USER_LOGIN="fence"

PASSWORD="F3nc3M3N0w">

<ADMIN_PRIV value ="Y"/>

<REMOTE_CONS_PRIV value ="Y"/>

<RESET_SERVER_PRIV value ="Y"/>

<VIRTUAL_MEDIA_PRIV value ="Y"/>

<CONFIG_ILO_PRIV value="N"/>

</ADD_USER>

</USER_INFO>

<RIB_INFO MODE="write">

<MOD_NETWORK_SETTINGS>

<REG_DDNS_SERVER value="no"/>

<IPV6_REG_DDNS_SERVER VALUE="no"/>

</MOD_NETWORK_SETTINGS>

</RIB_INFO>

</LOGIN>

</RIBCL>

end_marker

Move your connection to the primary OA of the next enclosure and repeat the process, updating the names and IP addresses as needed. After configuring all the enclosures, reconnect to the OA on the first enclosure to begin configuring the Virtual Connect environment.

HP Virtual Connect FlexFabric

Using your existing connection to the Onboard Administrator in the first (bottom) c7000 enclosure, connect to the first interconnect using the internal connection. Log in to the Virtual Connect interconnect manager using the “Administrator” account and password found on the interconnect tag.

R1-Enc1-OA1> connect interconnect 1

NOTICE: This pass-thru connection to the integrated I/O console

is provided for convenience and does not supply additional access

control. For security reasons, use the password features of the

integrated switch.

Connecting to integrated switch 1 at 115200,N81...

Escape character is '<Ctrl>_' (Control + Shift + Underscore)

Press [Enter] to display the switch console:

VCEFX3C4249017R login: Administrator

Password: <<SEE TAG>>

Before we can configure anything in the Virtual Connect environment, we need to inform the Virtual Connect Manager of the enclosures it will manage. To do this we need the username and password we just created on the OA, as well as the IP address of the remote enclosure. Once we have imported the enclosures, we need to create an administrative account, which will be shared on all interconnects in the environment.

import enclosure UserName=admin Password=Password1234

import enclosure 10.251.0.39 -quiet UserName=admin Password=Password1234

add user admin Password=Password1234 Enabled=true Privileges=*

set interconnect enc0:1 -quiet Hostname="cr1-enc1-ic1"

set interconnect enc0:2 -quiet Hostname="cr1-enc1-ic2"

set interconnect enc1:1 -quiet Hostname="cr1-enc2-ic1"

set interconnect enc1:2 -quiet Hostname="cr1-enc2-ic2"

Before we can configure any more of the domain, a few decisions need to be made. Virtual Connect allows you to use the burned-in addresses, select from a list of pre-defined pools, or define your own. HP recommends using the Virtual Connect-assigned addresses for maximum flexibility and resilience in the solution. Whichever method you choose, the addresses must be unique in the environment. For more information on designing your Virtual Connect environments, please visit hp.com/go/virtualconnect. For larger environments, HP OneView is recommended; it allows you to manage multiple Virtual Connect domains from a single pane of glass and ensures duplication does not occur.

For our configuration, Virtual Connect defined pool 32 was chosen for MAC, WWID and serial numbers. The MAC addresses and WWID values will be important later when configuring our SAN zones and DHCP environment.

set domain Name=CR1_VC_Domain

set domain MacType=VC-Defined MacPool=32

set domain WwnType=VC-Defined WwnPool=32

set serverid Type=VC-Defined PoolId=32

set enet-vlan VlanCapacity=Expanded

set snmp enet ReadCommunity=public

set snmp fc ReadCommunity=public SmisEnabled=true

With the Virtual Connect domain defined, we can begin to define our connections to the top-of-rack switches. To match our LACP trunks, we configure one network (10GbE) uplink set per interconnect module. Fibre Channel fabrics are defined per Virtual Connect domain; we define two, left and right, more commonly referred to as red and blue.

add uplinkset CR1_E1_IC1_SUS

add uplinkport enc0:1:X5 UplinkSet=CR1_E1_IC1_SUS

add uplinkport enc0:1:X6 UplinkSet=CR1_E1_IC1_SUS

add uplinkset CR1_E1_IC2_SUS

add uplinkport enc0:2:X5 UplinkSet=CR1_E1_IC2_SUS

add uplinkport enc0:2:X6 UplinkSet=CR1_E1_IC2_SUS

add uplinkset CR1_E2_IC1_SUS

add uplinkport enc1:1:X5 UplinkSet=CR1_E2_IC1_SUS

add uplinkport enc1:1:X6 UplinkSet=CR1_E2_IC1_SUS

add uplinkset CR1_E2_IC2_SUS

add uplinkport enc1:2:X5 UplinkSet=CR1_E2_IC2_SUS

add uplinkport enc1:2:X6 UplinkSet=CR1_E2_IC2_SUS

Define the FC Uplinks

add fabric CR1_IC1 Bay=1 Ports=1,2,3

add fabric CR1_IC2 Bay=2 Ports=1,2,3

With the uplinks defined, the next step is to define the actual networks that will travel over these uplinks. These networks define default and maximum bandwidths which can be modified later based on needs. For our purposes, we assume that the majority of the traffic will not be internal management and define the management network as a 1Gb network on VLAN 120. Our data center uplink (VLAN 1), which is where we expect the virtual machines to live, is given the rest of the connection.

add network CR1_E1_IC1_Mgmt UplinkSet=CR1_E1_IC1_SUS VLANID=120

set network CR1_E1_IC1_Mgmt NativeVLAN=Enabled SmartLink=Enabled PrefSpeedType=Custom PrefSpeed=1000 MaxSpeedType=Custom MaxSpeed=1000 Color=orange

add network CR1_E1_IC2_Mgmt UplinkSet=CR1_E1_IC2_SUS VLANID=120

set network CR1_E1_IC2_Mgmt NativeVLAN=Enabled SmartLink=Enabled PrefSpeedType=Custom PrefSpeed=1000 MaxSpeedType=Custom MaxSpeed=1000 Color=orange

add network CR1_E2_IC1_Mgmt UplinkSet=CR1_E2_IC1_SUS VLANID=120

set network CR1_E2_IC1_Mgmt NativeVLAN=Enabled SmartLink=Enabled PrefSpeedType=Custom PrefSpeed=1000 MaxSpeedType=Custom MaxSpeed=1000 Color=orange

add network CR1_E2_IC2_Mgmt UplinkSet=CR1_E2_IC2_SUS VLANID=120

set network CR1_E2_IC2_Mgmt NativeVLAN=Enabled SmartLink=Enabled PrefSpeedType=Custom PrefSpeed=1000 MaxSpeedType=Custom MaxSpeed=1000 Color=orange

add network CR1_E1_IC1_DC UplinkSet=CR1_E1_IC1_SUS VLANID=1

set network CR1_E1_IC1_DC SmartLink=Enabled Color=blue

add network CR1_E1_IC2_DC UplinkSet=CR1_E1_IC2_SUS VLANID=1

set network CR1_E1_IC2_DC SmartLink=Enabled Color=blue

add network CR1_E2_IC1_DC UplinkSet=CR1_E2_IC1_SUS VLANID=1

set network CR1_E2_IC1_DC SmartLink=Enabled Color=blue

add network CR1_E2_IC2_DC UplinkSet=CR1_E2_IC2_SUS VLANID=1

set network CR1_E2_IC2_DC SmartLink=Enabled Color=blue

With all the networks and fabrics defined, we can begin defining our server profiles. The MAC/WWID for each connection is defined at this time. HP recommends defining each port in the server, even if they are not currently being used. This ensures that the MAC/WWID values in the profile are contiguous if they need to be defined later. The HP ProLiant BL460c server has 2 FlexFabric ports on board, each partitionable into 4 connections. We only use six (6) connections per server today but will define all eight (8).

add profile Compute_01 -NoDefaultEnetConn -NoDefaultFcConn -NoDefaultFcoeConn NAG=Default

add enet-connection Compute_01 Network=CR1_E1_IC1_Mgmt PXE=UseBIOS

add enet-connection Compute_01 Network=CR1_E1_IC2_Mgmt PXE=UseBIOS

add enet-connection Compute_01 Network=Unassigned PXE=UseBIOS

add enet-connection Compute_01 Network=Unassigned PXE=UseBIOS

add enet-connection Compute_01 Network=Unassigned PXE=UseBIOS

add enet-connection Compute_01 Network=Unassigned PXE=UseBIOS

add server-port-map Compute_01:3 CR1_E1_IC1_DC VLanID=1 Untagged=true

add server-port-map Compute_01:4 CR1_E1_IC2_DC VLanID=1 Untagged=true

add fcoe-connection Compute_01 Fabric="" SpeedType=4Gb

set fcoe-connection Compute_01:1 BootPriority=BIOS

add fcoe-connection Compute_01 Fabric="" SpeedType=4Gb

set fcoe-connection Compute_01:2 BootPriority=BIOS

We defined the Fibre Channel over Ethernet (FCoE) connections but did not attach them to the SAN fabrics. This is because we need the WWID defined but do not want to see the SAN storage when installing the Red Hat Enterprise Virtualization Hypervisor instances later. We will assign the connections after installing Red Hat Enterprise Virtualization Hypervisor onto the blade servers.

Copy the new profile fifteen (15) times, one for each remaining bay in the c7000 enclosure. Then assign Compute_01 through Compute_16 to enc0:1 through enc0:16.

assign profile Compute_01 enc0:1

copy profile Compute_01 Compute_02

assign profile Compute_02 enc0:2
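The remaining copy and assign commands follow the same pattern. As an illustration only, a short shell loop run on a workstation can generate the commands for bays 3 through 16 so they can be pasted into the Virtual Connect CLI session; the only Virtual Connect syntax assumed is the two commands shown above.

# Generate the copy/assign command pairs for bays 3 through 16 of enclosure 1;
# paste the resulting output into the Virtual Connect Manager CLI.
for i in $(seq 3 16); do
  printf "copy profile Compute_01 Compute_%02d\n" "$i"
  printf "assign profile Compute_%02d enc0:%d\n" "$i" "$i"
done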

To create Compute_17, follow the process used for Compute_01, replacing the enclosure 1 (*E1*) connections with those in enclosure 2 (*E2*). Copy Compute_17 fifteen (15) times, then assign Compute_17 through Compute_32 to enc1:1 through enc1:16.

HP strongly recommends backing up your Virtual Connect configuration. The configuration can be backed up from the HTTPS interface; alternatively, it can be saved to an FTP or TFTP server from the command line.

Before disconnecting from the Virtual Connect Manager, record the assigned WWPN and MAC addresses for all profiles using “show profile” commands.

Ethernet Network Connections
=============================================================================
Port  Network Name      Status  PXE      MAC Address        Allocated Speed
=============================================================================
1     CR1_E1_IC1_Mgmt   OK      UseBIOS  00-17-A4-77-7C-00  1Gb
2     CR1_E1_IC2_Mgmt   OK      UseBIOS  00-17-A4-77-7C-02  1Gb
3     Multiple Network  OK      UseBIOS  00-17-A4-77-7C-04  5Gb
4     Multiple Network  OK      UseBIOS  00-17-A4-77-7C-06  5Gb
5     <Unassigned>      OK      UseBIOS  00-17-A4-77-7C-08  -- --
6     <Unassigned>      OK      UseBIOS  00-17-A4-77-7C-0A  -- --

FCoE Connections
==============================================================================================
Port  Connected To Bay  Fabric Name  Status  Allocated Speed  WWPN                     MAC Address
==============================================================================================
1     1                 CR1_IC1      OK      4Gb              50:06:0B:00:00:C2:DE:00  00-17-A4-77-7C-0C
2     2                 CR1_IC2      OK      4Gb              50:06:0B:00:00:C2:DE:02  00-17-A4-77-7C-0D
...

At this point you can disconnect from the Virtual Connect Manager (exit) and the internal connection (Ctrl-Shift-_) and finally the Onboard Administrator (exit).


HP ProLiant DL360p Gen8

The DL360p servers provide the management infrastructure for the solution. Before continuing, ports 1 and 3 on the PCI 1GbE card should be connected between the two servers. These connections will be bonded using mode 1 (active-backup) and a Media Independent Interface (MII) monitor value of 100. This bonded connection will provide the cluster communications link between the two servers.

Configuring the iLO

Configuring the IP of iLO 4 on the DL360p can be achieved in one of two ways. If you have a serial null modem cable, connect to the serial port on the back of the server in U33 and configure your terminal as 9600 8N1. Press “Esc-(” to open the connection and log in with the Administrator account information from the server tag. Once connected, configure the two new users and set the IP. Repeat for the server in U32 using IP address 10.251.0.8.

create /map1/accounts1 username=admin password=Password1234 group=admin

create /map1/accounts1 username=fence password=F3nc3M3N0w group=oemhp_rc,oemhp_power,oemhp_vm

set /map1/ manual_ilo_reset=yes

set /map1/dhcpendpt1 EnabledState=false

set /map1/dnsendpt1 Hostname=CR1-Mgmt1-ilo DomainName=private-network.net

set /map1/enetport1/lanendpt1/ipendpt1 IPv4Address=10.251.0.7 SubnetMask=255.255.252.0

set /map1/dnsserver1 AccessInfo=10.251.2.7

set /map1/dnsserver2 AccessInfo=10.251.2.8

set /map1/ manual_ilo_reset=no

reset /map1

If you do not have a null modem cable, connect a local keyboard and monitor to the system in U33 and power it on. During boot you will be offered an opportunity to configure the iLO; when prompted (Figure 9), press F8.

Figure 9: iLO 4 Boot Prompt


Configure the IP as 10.251.0.7/22 and set the DNS servers, Figure 10. The users can be added at this point or by establishing an SSH connection to the iLO. Repeat for the server in U32 using IP address 10.251.0.8.

Figure 10: Console iLO 4 config

Install Red Hat Enterprise Linux

At this point we can put away all the serial cables and configure the remaining components from the network. Connect your local system to port 23 on the HP 5120 switch in U37 and assign it an IP of 10.251.1.1/22. The installation of Red Hat Enterprise Linux described here requires the graphical installer, so use a Java- or ActiveX-enabled web browser to connect to the iLO for the DL360p in U33, https://10.251.0.7.

Open the Remote Console and power on the server. When prompted (Figure 11), press F8 to launch the Smart Array ROM configuration utility.

Figure 11: Smart Array boot prompt

Delete any existing drive configurations, Figure 12, and then create a new drive configuration.

Figure 12: Smart Array delete logical drive

To create a new logical drive, select “Create Logical Drive” and select all available disks. HP recommends at least 2 disks in a RAID 1 configuration be used for the system disks of the DL360p cluster.


For this reference configuration, we selected six (6) 300 GB drives in a RAID 50 configuration, Figure 13.

Figure 13: Smart Array new config

The new configuration will not be finalized until you save it by pressing F8.

Before exiting the Smart Array configuration utility, insert your Red Hat Enterprise Linux media into the physical DVD drive or attach an ISO using the Virtual Media interface in the iLO web interface.

Install Red Hat Enterprise Linux using the “Virtualization Host” configuration. Add the “High Availability” and “Scalable File System” repositories. Using “Customize Now”, ensure that the following are selected.

• Base System – Storage Availability Tools

• Desktops – X Window System

• Desktops – Remote Desktop Clients, select all optional clients

• Applications – Internet Browser

Configure networking

Configure network interfaces eth0 and eth1 into bond0 using mode 4 (802.3ad). The cluster communications interface, bond1, should be built using eth3 (port 1) and eth5 (port 3) in mode 1 (active-backup) with a MII monitor value of 100. Define a VLAN subinterface on bond0 for VLAN 1. Finally, create two bridges, one attached to bond0 for access to the management network and the other attached to bond0.1 for access to the data center.
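As an illustration only, the corresponding interface files on mgmt1 might resemble the following sketch. Stock RHEL 6 network-scripts are assumed, the bond1 netmask is a placeholder, and the remaining interfaces (eth1, eth3, eth5, br1) follow the same patterns.

# /etc/sysconfig/network-scripts/ifcfg-eth0 (eth1 is identical apart from DEVICE)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-bond0 (LACP bond carrying the bridges)
DEVICE=bond0
BONDING_OPTS="mode=4 miimon=100"
BRIDGE=br0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-bond0.1 (VLAN 1 subinterface, attached to br1)
DEVICE=bond0.1
VLAN=yes
BRIDGE=br1
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-br0 (management bridge on mgmt1)
DEVICE=br0
TYPE=Bridge
IPADDR=10.251.2.7
NETMASK=255.255.252.0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-bond1 (cluster interconnect, active-backup)
# The /24 mask below is illustrative only.
DEVICE=bond1
BONDING_OPTS="mode=1 miimon=100"
IPADDR=172.23.1.7
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no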

Repeat the process for the server in U32. Table 2 has the network names and IPs for the various interfaces for both servers.

Table 2: DL360p Network Interfaces

Server  Interface                  Network name                IP address
U33     br0 (attached to bond0)    mgmt1.private-network.net   10.251.2.7
U33     br1 (attached to bond0.1)  Defined by Data Center      Defined by Data Center
U33     bond1                      mgmt-node1                  172.23.1.7
U32     br0 (attached to bond0)    mgmt2.private-network.net   10.251.2.8
U32     br1 (attached to bond0.1)  Defined by Data Center      Defined by Data Center
U32     bond1                      mgmt-node2                  172.23.1.8

The bridge attached to bond0.1 should be configured based on your data center policies. If DHCP is used for this interface, the configuration must set PEERDNS=no. DNS resolvers for the data center should be manually added to /etc/resolv.conf.

For proper operation of the virtual machine networks and the cluster, the NetworkManager service must be disabled or uninstalled.

service NetworkManager stop ; chkconfig NetworkManager off


Update the /etc/hosts file to reflect the IPs of the iLO 4 interfaces and each of the network names for the management nodes.

cat >> /etc/hosts << EOF

10.251.0.7 CR1-Mgmt1-ilo mgmt1-ilo

10.251.2.7 mgmt1.private-network.net mgmt1

172.23.1.7 mgmt-node1

10.251.0.8 CR1-Mgmt2-ilo mgmt2-ilo

10.251.2.8 mgmt2.private-network.net mgmt2

172.23.1.8 mgmt-node2

EOF

Register against the Red Hat Network

As part of the first boot configuration process you should have registered the system against the Red Hat Network or your local satellite server. If you did not, please register both systems now and perform an upgrade to the latest patch level and reboot to ensure all patches are active.

Enable serial and text console access

By default the GRUB installation and the Red Hat boot entry use graphics mode displays. To ensure access to the console over the iLO text console (textcons) or virtual serial port (vsp) we need to remove the graphics settings and enable serial support.

sed -i -r 's/ (quiet|rhgb) / /g;' /boot/grub/grub.conf

sed -i -r '/^(splash|hidden)/d;' /boot/grub/grub.conf

sed -i '/^timeout/a serial --unit 1 --speed=115200\nterminal serial,console' /boot/grub/grub.conf

To enable access to the system over the virtual serial port we need to add that as a valid console device to the boot line.

sed -i '/\tkernel / s/$/ console=tty0 console=ttyS1,115200/' /boot/grub/grub.conf

Now when you boot, the GRUB menu will be displayed to both the serial and the text console, as will the system login prompt.

Establish an SSH trust

Establishing an SSH trust is required for VM migration and many of the cluster operations later on. To establish this trust, generate an RSA key for the root user on both systems; HP recommends a minimum of 2048 bits for the key. Using ssh-keyscan, collect the host keys for all the network interfaces in the cluster into the .ssh/known_hosts file. Finally, enter the public SSH keys for both root users into the .ssh/authorized_keys file on both nodes.

ssh-keygen -t rsa -b 2048 -N "" -f /root/.ssh/id_rsa

ssh-keyscan -t rsa,dsa mgmt1,10.251.2.7,mgmt-node1,172.23.1.7 mgmt2,10.251.2.8,mgmt-node2,172.23.1.8 > .ssh/known_hosts

cp -p .ssh/id_rsa.pub .ssh/authorized_keys

ssh mgmt1 cat .ssh/id_rsa.pub >> .ssh/authorized_keys

scp -p .ssh/{known_hosts,authorized_keys} mgmt1:.ssh

With the SSH trust established between the hosts, we should now be able to run commands on either host from the other without constant password challenges.

Configure the firewall

Before we modify the firewall, we want to make sure that we are the only ones doing so. The libvirtd process will manipulate the firewall when configuring the default KVM virtual network. To prevent this we need to disable and remove the default network on the management servers.

ssh mgmt1 "virsh net-destroy default; virsh net-undefine default"

ssh mgmt2 "virsh net-destroy default; virsh net-undefine default"


The default firewall configuration allows SSH and established traffic, which is a good starting point to customize from. Because we will have a variety of rules depending on the interface, we create a new chain for the management network and, to start, authorize SNMP discovery into the system from that network only.

iptables -N Mgmt_Network

iptables -I INPUT 5 -i br0 -j Mgmt_Network

iptables -A Mgmt_Network -p udp --dport 161 -m comment --comment "SNMP" -j ACCEPT

service iptables save

As we add additional services later on, additional rules and chains will be created.

Configure NTP services

Time synchronization is important for both the cluster and keeping the event logs from all the components in sync. The management servers will provide several network services to the internal network, amongst these will be NTP. However, before they can begin serving NTP data we need them to sync against an external time source from the data center. The default NTP configuration file includes servers from the Red Hat pool; replace these with your data center servers. After adding your data center servers, update the firewall to allow NTP query from the management network and enable the NTP service.
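For example, the stock pool entries in /etc/ntp.conf can be commented out and replaced as in the following sketch; the server names are placeholders for your data center time sources.

# Comment out the default Red Hat pool servers and append the data center
# sources (ntp1/ntp2.example.com are placeholders).
sed -i 's/^server .*rhel\.pool\.ntp\.org.*/#&/' /etc/ntp.conf
cat >> /etc/ntp.conf <<EOF
server ntp1.example.com iburst
server ntp2.example.com iburst
EOF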

iptables -A Mgmt_Network -p udp --dport 123 -m comment --comment "NTP" -j ACCEPT

iptables -A Mgmt_Network -p tcp --dport 123 -m comment --comment "NTP" -j ACCEPT

service iptables save

chkconfig ntpd on ; service ntpd restart

If your environment utilizes NTP broadcasts the local firewall will need to be opened on the data center interface to accept these packets.

Configure DNS services

The last service we need to configure on the management servers is DNS. The solution uses the domain name “private-network.net” for its internal network; this is not expected to be resolvable outside of the solution. These DNS servers will listen only on the internal network and will not be accessible from outside the solution. The public interfaces, which may or may not have IP addresses, will depend on your data center DNS for resolution.

Chapter 14 of the Red Hat Enterprise Linux 6 Deployment Guide (https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/ch-DNS_Servers.html) provides information on configuring a DNS server. It is recommended that in addition to the local domain the server be configured as a caching DNS, forwarding requests to your data center DNS servers. This will allow them to resolve external IPs if needed to connect to the Red Hat Network or your satellite server.
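A minimal sketch of the forwarding portion of the options statement in /etc/named.conf is shown below; the forwarder addresses are placeholders for your data center DNS servers and must be merged into the existing options block.

options {
        // ...existing options...
        // Forward queries we are not authoritative for to the data center DNS
        forwarders { 192.0.2.53; 192.0.2.54; };
        forward only;
};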

Install BIND on both servers and enable DNS resolution through the firewall on the management network.

yum install -y bind-chroot

iptables -A Mgmt_Network -p udp --dport 53 -m comment --comment "DNS" -j ACCEPT

iptables -A Mgmt_Network -p tcp --dport 53 -m comment --comment "DNS" -j ACCEPT

service iptables save


Both forward and reverse lookups are needed for the environment. The final reverse lookup files should look something like:

$TTL 604800 ; 1 week

@ IN SOA mgmt1.private-network.net. root.mgmt1.private-network.net. (

2013071801 ; serial

604800 ; refresh (1 week)

86400 ; retry (1 day)

2419200 ; expire (4 weeks)

604800 ; minimum (1 week)

)

NS mgmt1.private-network.net.

NS mgmt2.private-network.net.

2 IN PTR mgmt.private-network.net.

3 IN PTR rhevm.private-network.net.

...

While the forward lookup files will look slightly different.

$TTL 604800 ; 1 week

@ IN SOA mgmt1 root.mgmt1 (

2013071801 ; serial

604800 ; refresh (1 week)

86400 ; retry (1 day)

2419200 ; expire (4 weeks)

604800 ; minimum (1 week)

)

NS CR1-mgmt1

NS CR1-mgmt2

CR-5920AF A 10.251.0.2

mgmt A 10.251.2.2

rhevm A 10.251.2.3

...


Appendix: IP space contains a complete list of the IP addresses and names used in this reference configuration. Once you have the forward and reverse lookup files completed on mgmt1, enable mgmt1 as the primary server for the domain.

cat<<EOF>>/etc/named/bladesystem.zones

acl bladesystem_slaves { 10.251.2.8; };

acl bladesystem_nets { localhost; 10.251.0.0/22;};

zone "private-network.net" IN {

file "net.private-network.db";

type master;

allow-transfer { bladesystem_slaves; };

allow-query { bladesystem_nets; };

};

zone "0.251.10.in-addr.arpa" IN {

file "0.251.10.in-addr.arpa";

type master;

allow-transfer { bladesystem_slaves; };

allow-query { bladesystem_nets; };

};

zone "2.251.10.in-addr.arpa" IN {

file "2.251.10.in-addr.arpa";

type master;

allow-transfer { bladesystem_slaves; };

allow-query { bladesystem_nets; };

};

EOF

sed 's/ \(localhost\|::1\|127\.0\.0\.1\); / any; /; $a include "/etc/named/bladesystem.zones";' -i /etc/named.conf

chkconfig named on ; service named start

The other management server, mgmt2, will be configured as a slave server and will obtain the DNS records from the master.

cat<<EOF>>/etc/named/bladesystem.zones

masters bladesystem_masters { 10.251.2.7; };

acl bladesystem_nets { localhost; 10.251.0.0/22; };

zone "private-network.net" IN {

file "slaves/net.private-network.db";

type slave;

masters { bladesystem_masters; };

allow-query { bladesystem_nets; };

};

zone "0.251.10.in-addr.arpa" IN {

file "slaves/0.251.10.in-addr.arpa";

type slave;

masters { bladesystem_masters; };

allow-query { bladesystem_nets; };

};

zone "2.251.10.in-addr.arpa" IN {

file "slaves/2.251.10.in-addr.arpa";

type slave;

masters { bladesystem_masters; };

allow-query { bladesystem_nets; };

};

EOF

sed 's/ \(localhost\|::1\|127\.0\.0\.1\); / any; /; $a include "/etc/named/bladesystem.zones";' -i /etc/named.conf

chkconfig named on ; service named start


Verify the zone transfer has completed by checking for the zone files in /var/named/slaves on mgmt2.
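If bind-utils is installed, the slave can also be checked by querying it directly; this is an optional sanity check, not part of the documented procedure.

# Confirm the slave answers for the internal zones.
ls /var/named/slaves/
dig @10.251.2.8 rhevm.private-network.net +short
dig @10.251.2.8 -x 10.251.2.3 +short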

Install 3PAR MC/CLI

While the 3PAR array can be managed directly over SSH, in some cases the graphical interface is preferred. The local installation of the CLI will allow for direct scripting of complex operations on the array. These tools can be installed on both management nodes for maximum availability. On both management servers make sure the 32-bit core runtime and X11 libraries are installed; these libraries are prerequisites for installing the graphical 3PAR Management Console (MC) as well as the CLI interface.

yum install -y glibc.i686 libXrender.i686 libXmu.i686 libX11.i686 libXext.i686 \

libSM.i686 libICE.i686 libXt.i686 libuuid.i686 libXp.i686 libXtst.i686 \

libXi.i686 zlib.i686 libgcc.i686 fontconfig.i686 expat.i686 freetype.i686

Insert the 3PAR Management Console media into the DVD drive or perform a loop mount against an ISO image. Once the media is mounted, run linux/setup.bin and follow the prompts. After the Management Console installation has completed, replace the media with the 3PAR CLI and SNMP media. To install the CLI, run the cli/linux/setup.bin installer program and follow the prompts. Both programs will be installed under /opt/3PAR by default.

Configure the 3PAR array

Before we can configure the 3PAR array, we need to obtain a complete list of the WWIDs for the environment. You should have the complete list for the blades from the Virtual Connect profile report. To obtain the values from the management servers we need to examine the /sys file system. As these are not managed by Virtual Connect they will be unique in every server.

ssh mgmt1 cat /sys/class/fc_host/host*/port_name

ssh mgmt2 cat /sys/class/fc_host/host*/port_name

Now that we have the complete list of port WWIDs from the servers we can connect to the 3PAR array and begin to customize it for this environment. For documentation purposes, the direct SSH connection will be used rather than the CLI or graphical management console. Connect to the 3PAR storage system using the IP and username/password that were configured during the installation.

Once connected to the array, HP recommends configuring aliases for the host ports to make them easier to identify; the same names can be used in the SAN zoning aliases later.

controlport label N0_S1_P1 -f 0:1:1

controlport label N0_S1_P2 -f 0:1:2

controlport label N0_S2_P1 -f 0:2:1

controlport label N0_S2_P2 -f 0:2:2

...

After labeling all the host ports on the 3PAR array, collect the port WWNs from the array using the showport command. These will be needed to complete the SAN zoning.

The next step in configuring the array is to define the host persona for each host that will have storage presented to it. Performing a showhost at this point will display the known hosts as well as any unassigned WWIDs the array has seen. At this point we should see the two (2) management servers and nothing else; the blades are not currently attached to the SAN. In addition to creating the host persona for each host, a host set for the management cluster and for each enclosure should be created to make presenting storage easier.

Linux environments use the default persona, persona 1. Detailed information on configuring and managing a 3PAR environment in a Red Hat solution can be found in the “HP 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide” on the HP 3PAR Operating System Software support page: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&contentType=SupportManual&prodTypeId=18964&prodSeriesId=5044394


createhost -persona 1 CR1-MGT1 500143802422D006 500143802422D004

createhost -persona 1 CR1-MGT2 5001438024225A82 5001438024225A80

createhostset -comment "Mgmt cluster" MgmtClu CR1-MGT*

createhost -persona 1 "CR1_E1_B1" "50060B0000C2DE00" "50060B0000C2DE02"

createhost -persona 1 "CR1_E1_B2" "50060B0000C2DE04" "50060B0000C2DE06"

createhost -persona 1 "CR1_E1_B3" "50060B0000C2DE08" "50060B0000C2DE0A"

...

createhost -persona 1 "CR1_E2_B16" "50060B0000C2DE7C" "50060B0000C2DE7E"

createhostset CR1_E1 CR1_E1_*

createhostset CR1_E2 CR1_E2_*

createhostset HPBlades CR1_E*

With the hosts configured we can create the new Common Provisioning Groups (CPGs) we will use. The groups will have threshold warnings as well as more descriptive names than the default groups. We can also create the LUNs we need for the cluster: a lock LUN and a data LUN. The size of the data LUN depends on how large your collection of virtual machines and ISO images on the management servers will be.

createcpg -f -aw 75 -t r5 -p -devtype FC CPG_Mgt

createcpg -f -aw 75 -t r5 -p -devtype FC CPG_Data

createcpg -f -aw 75 -t r5 -p -devtype SSD CPG_SSD

createvv CPG_Mgt cluLock 1g

createvlun cluLock 1 set:MgmtClu

createvv -tpvv CPG_Mgt cluShared 1T

createvlun cluShared 2 set:MgmtClu

createvv -tpvv -usr_aw 75 CPG_Data vmGuests 16T

createvlun vmGuests 1 set:HPBlades

Even though we don’t need it yet, we have also provisioned and presented a 16TB LUN for use by the blade servers (Red Hat Enterprise Virtualization Hypervisor hosts) later on. Both of the data LUNs are thin provisioned, which means we can reserve 16TB even if the array does not currently have that much physical capacity. The cluster lock LUN is not thin provisioned.

With the LUNs now presented and the WWIDs collected, disconnect from the 3PAR array.

Configure the SAN zoning

HP 3PAR storage arrays support a variety of zoning styles. HP recommends that zoning be done per initiator port per target zone. Detailed zoning discussions can be found in the “HP 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide” mentioned earlier as well as the “HP SAN Design Reference Guide” available from the HP Business Support Center: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&locale=en_US&docIndexId=179911&taskId=101&prodTypeId=12169&prodSeriesId=406734

For this reference configuration, we zoned by HBA port to multiple targets. This ensures multiple access paths as well as preventing an unbalanced load on any one target port. While similar, the zoning is unique per SAN switch. 3PAR best practices are to connect each HBA on the array to multiple fabrics; in this case the odd ports are attached to one switch and the even ports to the other.

Connect to each SAN switch via SSH using the IPs we assigned earlier, and define aliases for each port on the 3PAR array to match the assigned labels.

alicreate "N0_S1_P1", "20:11:00:02:AC:00:60:4F"

alicreate "N0_S1_P2", "20:12:00:02:AC:00:60:4F"

...

alicreate "N3_S2_P3", "23:23:00:02:AC:00:60:4F"

alicreate "N3_S2_P4", "23:24:00:02:AC:00:60:4F"


Once you have defined aliases for each of the 3PAR ports continue adding aliases for each of the management server and blade server ports.

alicreate CR1_Mgt1_P1, "50:01:43:80:24:22:d0:04"

alicreate CR1_Mgt2_P1, "50:01:43:80:24:22:5a:80"

alicreate "CR1_E1_B01_FlexHBA_P1", "50:06:0B:00:00:C2:DE:00"

...

alicreate "CR1_E2_B16_FlexHBA_P1", "50:06:0B:00:00:C2:DE:7C"

Once we have all the aliases created we can create the zones themselves and the switch configuration. The zone configurations used in this reference configuration are contained in Appendix: SAN zoning.
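As an illustration of the style only (the zone and configuration names here are hypothetical; the actual definitions used are listed in Appendix: SAN zoning), a single-initiator zone and its configuration on these Brocade-based switches could be created like this:

zonecreate "CR1_Mgt1_P1_Z", "CR1_Mgt1_P1; N0_S1_P1; N1_S1_P1"
cfgcreate "CR1_SAN_A_CFG", "CR1_Mgt1_P1_Z"
cfgsave
cfgenable "CR1_SAN_A_CFG"

Additional zones are added to the configuration with zoneadd and cfgadd before saving.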

While we are connected to the switches we should probably update them to refer to the new NTP servers now running on the management nodes.

tsclockserver "10.251.2.7; 10.251.2.8"

Save and enable the switch configurations. When enabling the configuration, the switch will validate that all zone members resolve to the defined aliases; if it detects problems, the configuration will not be applied. Once the configuration is successfully enabled, disconnect from the switch.

Build the cluster

With the network configured, the LUNs exported from the array and the SAN zoning in place we can finally build the management cluster. Enable multipathd on the management servers.

ssh mgmt1 "/sbin/mpathconf --enable ; service multipathd start"

ssh mgmt2 "/sbin/mpathconf --enable ; service multipathd start"


Using multipath -l, determine the WWID for each of the LUNs presented from the array. If your zones are working correctly you should see four (4) paths to each LUN. Define the 3PAR settings and aliases for the WWIDs in the multipath config file on both servers and restart the multipath service.

cat > /etc/multipath.conf <<EOF

defaults {

user_friendly_names yes

polling_interval 10

max_fds 8192

}

device {

vendor "3PARdata"

product "VV"

no_path_retry 18

features "0"

hardware_handler "0"

path_grouping_policy multibus

getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"

path_selector "round-robin 0"

rr_weight uniform

rr_min_io_rq 1

path_checker tur

failback immediate

}

blacklist {

}

multipaths {

multipath {

wwid 360002ac000000000000000030000604f

alias qdisk

}

multipath {

wwid 360002ac000000000000000040000604f

alias vmGuests

}

}

EOF

On the first management server, configure the 1GB LUN as the qdisk voting device.

mkqdisk -d -c /dev/mapper/qdisk -l qdisk


On both mgmt1 and mgmt2, configure the firewalls to accept cluster communications.

iptables -N RHCS

iptables -A RHCS -p udp --dst 224.0.0.0/4 -j ACCEPT

iptables -A RHCS -p igmp -j ACCEPT

iptables -A RHCS -p udp -m state --state NEW -m multiport --dports 5404,5405 -j ACCEPT -m comment --comment corosync

iptables -A RHCS -p tcp -m state --state NEW --dport 11111 -j ACCEPT -m comment --comment ricci

iptables -A RHCS -p tcp -m state --state NEW --dport 16851 -j ACCEPT -m comment --comment modcluster

iptables -A RHCS -p tcp -m state --state NEW --dport 8084 -j ACCEPT -m comment --comment luci

iptables -A RHCS -p tcp -m state --state NEW --dport 21064 -j ACCEPT -m comment --comment dlm

iptables -N RHEV

iptables -A RHEV -p tcp --dport 54321 -m comment --comment "vdsm" -j ACCEPT

iptables -A RHEV -p tcp --dport 16514 -m comment --comment "libvirt tls" -j ACCEPT

iptables -A RHEV -p tcp -m multiport --dports 5634:6166 -m comment --comment "consoles" -j ACCEPT

iptables -A RHEV -p tcp -m multiport --dports 49152:49216 -m comment --comment "migration" -j ACCEPT

iptables -A Mgmt_Network -j RHCS

iptables -N HB_Network

iptables -A HB_Network -j RHCS

iptables -A HB_Network -j RHEV

iptables -I INPUT 5 -i bond1 -j HB_Network

service iptables save

Disable acpid and configure and enable ricci on both nodes.

service acpid stop; chkconfig acpid off

echo Password1234 | passwd --stdin ricci; chkconfig ricci on; service ricci start

With the firewall opened and ricci enabled we can create the base cluster. Unless otherwise stated, the remaining steps to build the cluster should all be performed on a single system.

ccs -f /etc/cluster/cluster.conf --createcluster MgmtClu

ccs -f /etc/cluster/cluster.conf --setcman quorum_dev_poll=59000

ccs -f /etc/cluster/cluster.conf --settotem consensus=4800 join=60 token_retransmits_before_loss_const=20 token=59000

ccs -f /etc/cluster/cluster.conf --setquorumd tko=8 tko_up=2 interval=4 label=qdisk master_wins=1 reboot=0 votes=1

Using the account we defined earlier on the management server iLOs, add the fence devices for the cluster. At least IPMI “operator” privileges are required to control the power of the server; this corresponds to membership in the three (3) iLO groups oemhp_rc, oemhp_power, and oemhp_vm. Unless specified, the fencing process assumes “admin” privileges and will fail if the account used to connect does not have them.

ccs -f /etc/cluster/cluster.conf --addfencedev mgmt1-ilo agent="fence_ipmilan" ipaddr="mgmt1-ilo" lanplus="1" login="fence" passwd=F3nc3M3N0w power_wait=4 delay=2 privlvl=operator

ccs -f /etc/cluster/cluster.conf --addfencedev mgmt2-ilo agent="fence_ipmilan" ipaddr="mgmt2-ilo" lanplus="1" login="fence" passwd=F3nc3M3N0w power_wait=4 delay=2 privlvl=operator


To get the cluster communications to travel over the dedicated cluster network we need to add the nodes using the names assigned to those interfaces. Add the nodes and assign the fence devices to them.

ccs -f /etc/cluster/cluster.conf --addnode mgmt-node1

ccs -f /etc/cluster/cluster.conf --addmethod ilo mgmt-node1

ccs -f /etc/cluster/cluster.conf --addfenceinst mgmt1-ilo mgmt-node1 ilo

ccs -f /etc/cluster/cluster.conf --addnode mgmt-node2

ccs -f /etc/cluster/cluster.conf --addmethod ilo mgmt-node2

ccs -f /etc/cluster/cluster.conf --addfenceinst mgmt2-ilo mgmt-node2 ilo

In order to control the preferred location of cluster services we need to create some failover domains.

ccs -f /etc/cluster/cluster.conf --addfailoverdomain only_mgmt1 ordered restricted

ccs -f /etc/cluster/cluster.conf --addfailoverdomainnode only_mgmt1 mgmt-node1 1

ccs -f /etc/cluster/cluster.conf --addfailoverdomain only_mgmt2 ordered restricted

ccs -f /etc/cluster/cluster.conf --addfailoverdomainnode only_mgmt2 mgmt-node2 1

ccs -f /etc/cluster/cluster.conf --addfailoverdomain prefer_mgmt1 ordered restricted

ccs -f /etc/cluster/cluster.conf --addfailoverdomainnode prefer_mgmt1 mgmt-node1 1

ccs -f /etc/cluster/cluster.conf --addfailoverdomainnode prefer_mgmt1 mgmt-node2 2

ccs -f /etc/cluster/cluster.conf --addfailoverdomain prefer_mgmt2 ordered restricted

ccs -f /etc/cluster/cluster.conf --addfailoverdomainnode prefer_mgmt2 mgmt-node2 1

ccs -f /etc/cluster/cluster.conf --addfailoverdomainnode prefer_mgmt2 mgmt-node1 2

Validate the configuration then distribute the configuration and start the cluster.

# ccs_config_validate

Configuration validates

# ccs -f /etc/cluster/cluster.conf -h mgmt2 --setconf

mgmt2 password: <RICCI Password>

# ccs -f /etc/cluster/cluster.conf --startall

mgmt-node1 password: <RICCI Password>

Started mgmt-node2

Started mgmt-node1

# clustat

Cluster Status for MgmtClu @ Thu Jul 25 18:42:56 2013

Member Status: Quorate

Member Name ID Status

------ ---- ---- ------

mgmt-node1 1 Online, Local

mgmt-node2 2 Online

/dev/block/253:4 0 Online, Quorum Disk

Create the clustered VM environment

A clustered VM environment requires shared storage and a shared file system to maintain the current VM definitions. We need to enable Clustered Logical Volume Manager (CLVM) and Global File System (GFS2) support before we can create shared virtual machines. The first step is to enable cluster locking in CLVM; this requires changing the locking type to 3 and restarting the clvmd service.


sed -i -e 's/^\(\s*locking_type\s*=\s*\)1/\13/' /etc/lvm/lvm.conf

service clvmd restart

We want to make sure that the data is correctly aligned on the disk. 3PAR arrays support the use of VPD pages to obtain recommended and maximum transfer sizes. Detailed information can be found in the “HP 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide” discussed earlier. The parted utility will correctly identify and align the data when creating the partition table; use GB units when creating the partitions.

parted -a optimal /dev/mapper/vmGuests "mklabel gpt"

parted -a optimal /dev/mapper/vmGuests "mkpart primary 0gb -0gb"

With the disk partitioned, create a volume group and a logical volume for the shared file system.

pvcreate /dev/mapper/vmGuestsp1

vgcreate -c y Mgmt_VM_VG /dev/mapper/vmGuestsp1

lvcreate -n sharedFS -L10G Mgmt_VM_VG

Verify that the volume was created correctly and is shared by CLVM.

service clvmd status

clvmd (pid 5740) is running...

Clustered Volume Groups: Mgmt_VM_VG

Active clustered Logical Volumes: sharedFS

Create the shared GFS2 file system; the name of the lock table must agree with the name of the cluster. The warning about a symbolic link can safely be ignored; it is caused by using the /dev/mapper device name.

mkfs.gfs2 -p lock_dlm -j 4 -t MgmtClu:sharedFS /dev/mapper/Mgmt_VM_VG-sharedFS

This will destroy any data on /dev/mapper/Mgmt_VM_VG-sharedFS.

It appears to contain: symbolic link to '../dm-8'

Are you sure you want to proceed? [y/n] y

Device: /dev/mapper/Mgmt_VM_VG-sharedFS

Blocksize: 4096

Device Size 10.00 GB (2621440 blocks)

Filesystem Size: 10.00 GB (2621438 blocks)

Journals: 4

Resource Groups: 40

Locking Protocol: "lock_dlm"

Lock Table: "MgmtClu:sharedFS"

UUID: 3e5ecd76-3d89-fb27-1ff8-c4570e45342a

Using the UUID provided in the output, create a /shared mount point on both servers and add an entry to the /etc/fstab for the UUID.

ssh mgmt1 "mkdir -p /shared; echo UUID=3e5ecd76-3d89-fb27-1ff8-c4570e45342a

/shared gfs2 defaults,noatime,nodiratime 0 0 >> /etc/fstab; service gfs2 start"

Mounting GFS2 filesystem (/shared): [ OK ]

ssh mgmt2 "mkdir -p /shared; echo UUID=3e5ecd76-3d89-fb27-1ff8-c4570e45342a

/shared gfs2 defaults,noatime,nodiratime 0 0 >> /etc/fstab; service gfs2 start"

Mounting GFS2 filesystem (/shared): [ OK ]

From mgmt1, with the file system mounted, update the SELinux context for the file system to support the virtual machines.


chcon system_u:object_r:virt_content_t:s0 /shared

mkdir -p /shared/{defs,iso}

ls -laZ /shared

drwxr-xr-x. root root system_u:object_r:virt_content_t:s0 .

dr-xr-xr-x. root root system_u:object_r:root_t:s0 ..

drwxr-xr-x. root root unconfined_u:object_r:virt_content_t:s0 defs

drwxr-xr-x. root root unconfined_u:object_r:virt_content_t:s0 iso

And with that we have all we need to create our first clustered virtual machine and install Red Hat Enterprise Virtualization Manager into it.

Install Red Hat Enterprise Virtualization Manager

Before we can install the operating system and Red Hat Enterprise Virtualization Manager we need to create the disks and networks for the virtual machine. Defining a network with virsh requires an XML definition file; create the definition files on the shared file system and then define the networks on each management server.

cat > /shared/defs/mgmt-net.xml <<EOF

<network ipv6='no'>

<name>mgmt-net</name>

<bridge name="br0" delay="0"/>

<forward mode="bridge"/>

</network>

EOF

cat > /shared/defs/public-net.xml <<EOF

<network ipv6='no'>

<name>public-net</name>

<bridge name="br1" delay="0"/>

<forward mode="bridge"/>

</network>

EOF

virsh net-define /shared/defs/mgmt-net.xml

virsh net-autostart mgmt-net

virsh net-start mgmt-net

virsh net-define /shared/defs/public-net.xml

virsh net-autostart public-net

virsh net-start public-net

virsh net-list

Name State Autostart Persistent

--------------------------------------------------

mgmt-net active yes yes

public-net active yes yes

Create two logical volumes in our shared volume group, one for the operating system, the other to house the local NFS exported storage domain.

lvcreate -n RHEVM_OS -L36G /dev/Mgmt_VM_VG

lvcreate -n RHEVM_Data -L200G /dev/Mgmt_VM_VG

Use virt-install to perform the basic system creation and installation. If you desire a graphical display for this system, change the graphics definition accordingly; this will require you to have an active X11 DISPLAY defined before launching the installer.

virt-install -n RHEVM -r 32768 --cpu host --vcpus 4 --graphics none \

--disk=/dev/mapper/Mgmt_VM_VG-RHEVM_OS,bus=virtio \

--disk=/dev/mapper/Mgmt_VM_VG-RHEVM_Data,bus=virtio \

--os-type=linux --os-variant=rhel6 \

--network network=mgmt-net,model=virtio \

--network network=public-net,model=virtio \

--cdrom=/shared/iso/RHEL6.4-20130130.0-Server-x86_64-DVD1.iso


Because we are using the text installer over a virtual serial console we need to change the options at the installer boot prompt. When prompted, press TAB to edit the command line options and enter “linux console=ttyS0”.

+----------------------------------------------------------+

| Welcome to Red Hat Enterprise Linux 6.4! |

|----------------------------------------------------------|

| Install or upgrade an existing system |

| Install system with basic video driver |

| Rescue installed system |

| Boot from local drive |

| Memory test |

| |

| |

| |

| |

| |

| |

| |

+----------------------------------------------------------+

Press [Tab] to edit options

Automatic boot in 44 seconds...

boot:

linux vesa rescue local memtest86

boot: linux console=ttyS0

Installing over the serial console results in a much simpler installation process; only the system partitioning and root password will be set during the install. A basic system will be installed by default; additional packages and network configuration will need to be performed after the initial install. Install the operating system to the 36GB disk (vda).

After installation, configure the hostname and network connections. The primary hostname needs to be set to the name used on the management network, rhevm.private-network.net. Configure the IP for eth0, the management network, as 10.251.2.3 using the management nodes as the DNS resolvers. Configure eth1, the data center network, according to your local policies; this is the primary connection you will use to manage the environment. If DHCP is used, make sure to set PEERDNS=no. It is recommended to populate /etc/hosts with the hostname as well as the information for the two management servers.
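For reference, a static definition of eth0 inside the RHEVM virtual machine could look like the following sketch (standard RHEL 6 network-scripts assumed).

# /etc/sysconfig/network-scripts/ifcfg-eth0 (management network inside the RHEVM guest)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.251.2.3
NETMASK=255.255.252.0
DNS1=10.251.2.7
DNS2=10.251.2.8
NM_CONTROLLED=no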

If you choose to install with a graphics connection defined, update the GRUB configuration to enable the serial console. This is covered earlier in the Enable serial and text console access section, replacing serial device 1 with serial device 0. This step is not necessary if you installed using the serial interface.

Configure the second virtual disk (vdb) with an ext4 file system and mount it as /domains; this will be used to house the local ISO domain later when we install Red Hat Enterprise Virtualization Manager. With the basics configured, shut down the system to end the install process.
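One way to prepare the second disk, sketched here under the assumption that the whole of /dev/vdb is used without a partition table, is:

mkfs.ext4 -L domains /dev/vdb
mkdir -p /domains
echo "LABEL=domains /domains ext4 defaults 0 0" >> /etc/fstab
mount /domains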

To convert the local virtual machine to a clustered version we need to place the XML definition into the shared file system and remove the local definition. After removing the local definition add the virtual machine as a cluster service and update the cluster. We must also disable the libvirt-guests service on all nodes to prevent the VM from restarting outside the control of the cluster.

ssh mgmt1 chkconfig libvirt-guests off

ssh mgmt2 chkconfig libvirt-guests off

virsh dumpxml RHEVM > /shared/defs/RHEVM.xml

virsh undefine RHEVM

ccs -h mgmt-node1 --addvm RHEVM path=/shared/defs exclusive=0 recovery=restart max_restarts=2 restart_expire_time=600 domain=prefer_mgmt1

cman_tool version -r

Once you have verified that the virtual machine has started and that it can be migrated between the cluster nodes, connect to the Red Hat Enterprise Virtualization Manager virtual machine over the console or via SSH.


The process for installing Red Hat Enterprise Virtualization Manager is well documented in the Red Hat Enterprise Virtualization 3.2 Installation Guide. Because this environment has multiple networks defined, the firewall rules are slightly different than normal; much of the traffic should be authorized only from the management network.

iptables -N Mgmt_Net

iptables -N DC_Net

iptables -I INPUT 5 -i eth0 -j Mgmt_Net

iptables -I INPUT 6 -i eth1 -j DC_Net

iptables -A Mgmt_Net -p tcp --dport 2049 -m comment --comment "NFS" -j ACCEPT

iptables -A Mgmt_Net -p tcp --dport 111 -m comment --comment "NFS/portmap" -j ACCEPT

iptables -A Mgmt_Net -p tcp --dport 892 -m comment --comment "NFS/mountd" -j ACCEPT

iptables -A Mgmt_Net -p tcp --dport 875 -m comment --comment "NFS/quotad" -j ACCEPT

iptables -A Mgmt_Net -p tcp --dport 662 -m comment --comment "NFS/statd" -j ACCEPT

iptables -A Mgmt_Net -p tcp --dport 32803 -m comment --comment "NFS/lockd" -j ACCEPT

iptables -A Mgmt_Net -p tcp --dport 80 -m comment --comment "HTTP" -j ACCEPT

iptables -A Mgmt_Net -p tcp --dport 443 -m comment --comment "HTTPS" -j ACCEPT

iptables -A DC_Net -p tcp --dport 80 -m comment --comment "HTTP" -j ACCEPT

iptables -A DC_Net -p tcp --dport 443 -m comment --comment "HTTPS" -j ACCEPT

service iptables save

The ports shown above are the TCP defaults for NFS v3, additional ports can be opened if UDP connections are desired. The NFS ports used for mountd, statd, and lockd are configurable in the /etc/sysconfig/nfs file.
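For example, pinning the helper daemons to the ports opened above uses the variables already present (commented out) in /etc/sysconfig/nfs on RHEL 6; restart the nfslock and nfs services after editing.

# /etc/sysconfig/nfs -- pin the NFS v3 helper daemons to fixed ports
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
MOUNTD_PORT=892
STATD_PORT=662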

With the NFS and HTTP(S) ports enabled in the firewall, register the system against the Red Hat Network and install Red Hat Enterprise Virtualization Manager as per the install guide. If you wish to install the additional reports or data warehouse packages, these should be installed at the same time; do not configure any of them yet. Once all the packages have been installed run the rhevm-setup command and configure using these settings:

override-httpd-config: yes

http-port: 80

https-port: 443

host-fqdn: rhevm.private-network.net

org-name: private-network.net

default-dc-type: FC

db-remote-install: local

nfs-mp: /domains/ISO

config-nfs: yes

override-firewall: None

At this point Red Hat Enterprise Virtualization Manager is installed and should be accessible from either network over HTTPS. Before proceeding, if you have installed the data warehousing or reports packages configure them as defined in the installation guide.

Installing HP OneView for Red Hat Enterprise Virtualization

The HP OneView integration for Red Hat Enterprise Virtualization (RHEV) is a user interface plug-in that seamlessly integrates the manageability features of HP ProLiant, HP BladeSystem, and HP Virtual Connect within the RHEV management console. Installation of HP OneView for Red Hat Enterprise Virtualization is accomplished using YUM and a post-install configuration tool. HP recommends the installation of the full net-snmp package for more complete integration; at a minimum the net-snmp-libs package is required and will be installed as a dependency. HP OneView for RHEV integrates with the RHEV Management Console (RHEV-M) GUI to provide a single point from which to manage both the virtualization and HP ProLiant hardware environments. The “HP OneView for Red Hat Enterprise Virtualization” integration extensions (previously known as “HP Insight Control for Red Hat Enterprise Virtualization”) can be downloaded at hp.com/go/ovrhev.

After installation and initial configuration, each server needs to be discovered; this can be accomplished from the web interface. Establishing a Single Sign On trust is also done from the web interface and requires an administrator level account. Detailed instructions on discovering the hardware can be found in the HP OneView for Red Hat Enterprise Virtualization documentation and online help.


Deploying the Red Hat Enterprise Virtualization Hypervisor

The Red Hat Enterprise Linux 6 Hypervisor Deployment Guide provides detailed instructions on the various means of deploying the hypervisor image. For this solution we chose to use Preboot Execution Environment (PXE) based deployments, with the deployment solution hosted on the manager virtual machine. The Red Hat Enterprise Virtualization Hypervisor packages, DHCP and TFTP servers, should be installed at this time.

yum -y install rhev-hypervisor syslinux-3.86-1.1.el6 livecd-tools dhcp tftp-server tftp

Version of syslinux

At the time of writing a specific version of syslinux was required to resolve dependencies needed to install the livecd-tools. It is recommended that the latest version be used if possible.

Open the firewall to accept incoming DHCP and TFTP requests from the management network. For security reasons these ports should not be opened on the data center network.

iptables -A Mgmt_Net -p udp -m udp --sport 68 --dport 67 -m comment --comment "DHCPD" -j ACCEPT

iptables -A Mgmt_Net -p udp -m udp --dport 69 -m comment --comment "TFTP" -j ACCEPT

iptables -A Mgmt_Net -p tcp -m tcp --dport 69 -m comment --comment "TFTP" -j ACCEPT

The DHCP server configuration documented in the deployment guide assumes that the DHCP server will only be used for deployment purposes. To allow for standard DHCP requests as well as PXE boot some additional configuration is required. Using groups we can define which systems are allowed to DHCP, which are allowed to PXE boot and which are allowed to boot the Hypervisor installer. Define the global settings for the network as well as the additional values needed if the client is a PXE boot request.

subnet 10.251.0.0 netmask 255.255.252.0 {

authoritative;

allow bootp;

allow booting;

ddns-update-style none;

default-lease-time 86400;

max-lease-time 604800;

option subnet-mask 255.255.252.0;

option broadcast-address 10.251.3.255;

option domain-name "private-network.net";

option domain-name-servers 10.251.2.7,10.251.2.8;

option ntp-servers 10.251.2.7,10.251.2.8;

class "pxeclients" {

match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";

option tftp-server-name "10.251.2.3";

next-server 10.251.2.3;

filename "pxelinux.0";

default-lease-time 300;

max-lease-time 1800;

}


A portion of the 10.251.1.0 network has been reserved for use by DHCP/PXE clients that are not part of the hypervisor group. Add this range and configure it so that it cannot be used by the hypervisors themselves.

pool {

range 10.251.1.0 10.251.1.128;

deny known-clients;

}

Finally, as in the deployment guide, define a host entry, as part of our known group, for each of the systems that will deploy the hypervisor. The MAC address from the first network interface defined in the VC profile will be used to boot. The second network interface could be used to boot as well and can optionally be added.

group {

if substring(option vendor-class-identifier, 0, 9) = "PXEClient" {

filename "rhevh/pxelinux.0";

}

host cr1-enc1-b1-N1 { hardware ethernet 00:17:A4:77:7C:00; fixed-address cr1-enc1-b1.private-network.net; }

host cr1-enc1-b1-N2 { hardware ethernet 00:17:A4:77:7C:02; fixed-address cr1-enc1-b1.private-network.net; }

...

}

The host group defines all the IP addresses for our Red Hat Enterprise Virtualization Hypervisor systems; it also modifies the default PXE response directing them to an alternate configuration file.

Follow the deployment guide instructions on creating the PXE boot images from the hypervisor ISO image and place them into /var/lib/tftpboot/rhevh. If you wish to maintain multiple versions of the hypervisor, separate directories will be required for each one. With the files in place, enable the TFTP service and configure it to start on boot.
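On RHEL 6 the tftp-server package runs under xinetd, so enabling it typically amounts to the following commands.

chkconfig xinetd on
chkconfig tftp on
service xinetd restart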

Before we start deploying the hypervisor we need to customize the boot options. The default configuration file does not support the use of the virtual serial port during PXE. To enable this we need to add several lines to the start of the pxelinux.cfg/default file. To allow the installer to find the correct installation network we need to add “IPAPPEND 2” to the file as well.

The deployment guide covers the rest of the parameters and how to define them to complete an automated installation. In addition to those settings HP recommends adding some kernel tunings to the boot line for the fibre channel adapter. These are documented in the “HP 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide”. In the end your configuration file should appear similar to the following.


DEFAULT install
CONSOLE 1
SERIAL 0 115200
TIMEOUT 60
PROMPT 1
IPAPPEND 2

LABEL manual
  KERNEL vmlinuz0
  APPEND rootflags=loop initrd=initrd0.img root=live:/rhev-hypervisor.iso rootfstype=auto ro liveimg check rootflags=ro crashkernel=128M elevator=deadline rd_NO_LVM max_loop=256 rd_NO_LUKS rd_NO_MD rd_NO_DM

LABEL install
  KERNEL vmlinuz0
  APPEND console=tty0 console=ttyS0,115200 rootflags=loop initrd=initrd0.img root=live:/rhev-hypervisor.iso rootfstype=auto ro liveimg nocheck rootflags=ro crashkernel=128M elevator=deadline rd_NO_LVM max_loop=256 rd_NO_LUKS rd_NO_MD rd_NO_DM install storage_init=scsi management_server=rhevm.private-network.net syslog=rhevm.private-network.net adminpw=$1$RtH1bDhc$BSx5oeUCWvyt4.gJkgzXO0 cim_enabled cim_password=$1$RtH1bDhc$BSx5oeUCWvyt4.gJkgzXO0 snmp_password=$1$RtH1bDhc$BSx5oeUCWvyt4.gJkgzXO0 ip=dhcp lpfc.lpfc_devloss_tmo=14 lpfc.lpfc_lun_queue_depth=16 lpfc.lpfc_discovery_threads=32
  ONERROR LOCALBOOT 0

LABEL uninstall
  KERNEL vmlinuz0
  APPEND rootflags=loop initrd=initrd0.img root=live:/rhev-hypervisor.iso rootfstype=auto ro liveimg check rootflags=ro crashkernel=128M elevator=deadline uninstall rd_NO_LVM max_loop=256 rd_NO_LUKS rd_NO_MD rd_NO_DM

Install Red Hat Enterprise Virtualization Hypervisor

Connect to the console for each blade and configure the internal storage on the Smart Array as was done for the DL360p servers. After configuring the local array allow the system to continue booting, it should automatically PXE boot and begin deploying the Red Hat Enterprise Virtualization Hypervisor.

While the hosts are deploying, configure a new network on your cluster for the data center network, DCUplink. It should be configured as a VM network on VLAN 1.

Once deployed, the hypervisor will appear in your list to approve on the manager interface. Approve the host but do not configure power control until all the hosts have been approved. Edit the host network configuration to create the bonded interfaces and attach the data center network. The rhevm (management) network should be on a type 5 bond using eth0 and eth1. The data center connection, DCUplink, is also a type 5 bond using eth2 and eth3. Save the network configuration and activate the host.
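On the hypervisor itself these are standard Linux bonds (type 5 is balance-tlb). As a point of reference, the generated configuration is similar to the sketch below; the interface and network names follow this paper, but the miimon value and exact file contents are assumptions, so verify against the files the Manager writes out.

# /etc/sysconfig/network-scripts/ifcfg-bond0 (carries the rhevm management bridge)
DEVICE=bond0
BONDING_OPTS="mode=5 miimon=100"
BRIDGE=rhevm
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (eth1 is identical apart from DEVICE)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes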

Once you have two (2) active hosts you can begin to configure power control for them. Each host has an iLO with a special user, the fence user, created earlier for this purpose. Use the iLO IP address rather than its name; to keep things simple, the iLO address for a host is the same as the host address with the third octet set to 0 rather than 2. For example, host 10.251.2.32 has an iLO address of 10.251.0.32.

Configure the power control interface as an ipmilan device with the iLO IP. The fence user has a password of “F3nc3M3N0w”. Additional options of “lanplus=1,privlvl=operator” are required. Test the configuration and save it.
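If the test fails, it can be useful to verify the fencing credentials directly against the iLO from one of the management servers. A quick check with ipmitool (assuming the ipmitool package is installed) would look like the following for the example host above.

ipmitool -I lanplus -H 10.251.0.32 -U fence -P F3nc3M3N0w -L OPERATOR chassis power status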

Once all the systems are activated we need to enable access to the SAN storage.


Update the Virtual Connect profiles

For each profile that is in use in the Virtual Connect domain, we can now issue an update connecting it to the SAN fabrics. One of the benefits of Virtual Connect is the ability to do this from your office rather than having to contact a SAN administrator or walk down to the server room yourself to connect the cables.

Connect to the Virtual Connect Manager (VCM) via SSH; the VCM should currently be running on 10.251.0.15. If it is not, connecting to a non-master node will inform you which VCM node to connect to. Once connected, set the FCoE connections to the appropriate fabric for each profile that is in use.

set fcoe-connection Compute_01:1 Fabric=CR1_IC1

set fcoe-connection Compute_01:2 Fabric=CR1_IC2

...
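The same change can be scripted for all of the compute profiles rather than typed one at a time. The loop below is a minimal sketch; it assumes the profiles are named Compute_01 through Compute_16 and that the Administrator account is used for the VCM SSH session.

for i in $(seq -w 1 16); do
    ssh Administrator@10.251.0.15 "set fcoe-connection Compute_${i}:1 Fabric=CR1_IC1"
    ssh Administrator@10.251.0.15 "set fcoe-connection Compute_${i}:2 Fabric=CR1_IC2"
done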

Configure the Red Hat Enterprise Virtualization data center

With all the profiles updated, the Red Hat Enterprise Virtualization Hypervisor nodes should now see the 16TB LUN we created on the array. In the Red Hat Enterprise Virtualization Manager, open your data center and create a new Fibre Channel (FC) storage domain named “vmGuests” using the 16TB LUN presented from the 3PAR array.
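If the LUN does not appear when creating the domain, you can confirm visibility directly from one of the hypervisor hosts. The quick check below assumes a console or SSH session on the host; it rescans the Fibre Channel HBAs and lists the multipath devices, where the 16TB 3PAR volume should be shown.

# Rescan all SCSI hosts, then list multipath devices
for host in /sys/class/scsi_host/host*; do echo "- - -" > $host/scan; done
multipath -ll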

Once the domain is active, attach the ISO domain to the data center as well. This will provide access to your uploaded ISO images to create your virtual machines.
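ISO images can be uploaded to the ISO domain from the Red Hat Enterprise Virtualization Manager system with the rhevm-iso-uploader tool. The command below is a sketch; the ISO domain name and image file are examples, so substitute your own.

rhevm-iso-uploader --iso-domain=ISO_Domain upload rhel-server-6.6-x86_64-dvd.iso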

Why use a VM?

We chose to use a virtual machine to host the Red Hat Enterprise Virtualization Manager for a number of reasons, among them security and ease of management. Red Hat has published several articles on how to create a highly available Red Hat Enterprise Virtualization Manager as a cluster service (https://access.redhat.com/site/articles/216973). However, if you wish to provide shell access to the Red Hat Enterprise Virtualization Manager instance, you must then provide access to the full cluster. Running it as a virtual machine allows you to grant access to only that server rather than the entire cluster.

Running the manager as a virtual machine also makes it easier to add management services by simply adding virtual machines as needed. To add HP OneView for RHEV to monitor your hardware and power/thermal data, simply create additional shared LUNs, define another cluster service for the new virtual machine, and then install the software as normal; there is no need to deal with complex cluster service rules. The virt-install command below shows how the additional virtual machine (HPIC) was created on the management cluster.

virt-install -n HPIC -r 8192 --vcpus sockets=2 --graphics spice \

--disk=/dev/mapper/Mgmt_VM_VG-HPIC_OS,bus=virtio \

--os-type=windows --os-variant=win7 \

--network network=mgmt-net,model=virtio \

--network network=public-net,model=virtio \

--disk path=/usr/share/virtio-win/virtio-win_amd64.vfd,device=floppy,perms=ro \

--cdrom=/shared/iso/Windows_Server_2008_R2_SP1.iso

Bill of materials

This reference configuration illustrates one possible configuration for a Red Hat Enterprise Virtualization solution. Alternative configurations with more management servers, more (or fewer) workload servers, or different memory/CPU configurations are possible. The number of Fibre Channel (FC) and 10GbE uplinks per enclosure is also customizable; for this solution 6 FC uplinks and 4 10GbE uplinks per enclosure were chosen. Virtual Connect stacking was used to simplify the management of the Virtual Connect profiles and environment.

The following BOMs contain electronic license to use (E-LTU) parts. Electronic software license delivery is now available in most countries. HP recommends purchasing electronic products over physical products (when available) for faster delivery and for the convenience of not having to track and manage confidential paper licenses. For more information, please contact your reseller or an HP representative.


Red Hat Entitlements

With the Red Hat Enterprise Linux unlimited VM entitlement (G3J24AAE, RHEL Vrtl DC 2 Sckt 3yr 24x7), users have the right to run an unlimited number of desktop or server VMs on each entitled Red Hat Enterprise Virtualization Hypervisor node. As each environment is unique and may not run Red Hat Enterprise Linux exclusively, this entitlement is not included in the parts list for the reference configuration. Licensing of the virtual machines is the responsibility of the customer.

The configuration used in this reference implementation is:

• 2 x DL360p Gen8 Management Servers each containing:

– 128GB of RAM

– 6 x 300GB 15k rpm SAS hard drives

– Optional HP Ethernet 1Gb 4-port 331T Adapter

• 2 x BladeSystem c7000 Platinum enclosures each containing:

– 2 x Virtual Connect FlexFabric Interconnects

– 6 x 8Gb Fibre Channel uplinks per enclosure (3 per VC module)

– 4 x 10GbE uplinks per enclosure (2 per VC module)

– Virtual Connect stacking between enclosures

– 8 x BL460c Gen9 blades each containing:

• 2 x Intel Xeon processors

• 128GB of RAM

• 2 x 300GB 15k rpm SAS hard drives

The 3PAR storage configuration documented here was selected based on the number of workload servers in this design. The storage configuration is entirely customizable; this configuration includes a number of solid state disks (SSDs) intended to provide extreme performance for specific workloads.

Note

Part numbers are at time of publication and subject to change. The bill of materials does not include complete support options or other rack and power requirements. If you have questions regarding ordering, please consult with your HP Reseller or HP Sales Representative for more details. http://www8.hp.com/us/en/business-services/it-services/it-services.html

Table 3: Bill of Materials

Qty HP Part Number Description

DL360p Gen8

2 654081-B21 HP DL360p Gen8 8-SFF CTO Server

2 712771-L21 HP DL360p Gen8 E5-2695v2SDHS FIO Kit

2 712771-B21 HP DL360p Gen8 E5-2695v2SDHS Kit

32 713983-B21 HP 8GB 2Rx4 PC3L-12800R-11 Kit

12 652611-B21 HP 300GB 6G SAS 15K 2.5in SC ENT HDD

2 647594-B21 HP Ethernet 1Gb 4-port 331T Adapter

2 684208-B21 HP Ethernet 1GbE 4P 331FLR FIO Adptr

2 734807-B21 HP 1U SFF Easy Install Rail Kit

4 656363-B21 HP 750W CS Plat PL Ht Plg Pwr Supply Kit


BL460c

16 727021-B21 HP BL460c Gen9 10Gb/20Gb FLB CTO Blade

16 726992-L21 HP BL460c Gen9 E5-2640v3 FIO Kit

16 726992-B21 HP BL460c Gen9 E5-2640v3 Kit

128 726719-B21 HP 16GB 2Rx4 PC4-2133P-R Kit

32 759208-B21 HP 300GB 12G SAS 15K 2.5in SC ENT HDD

16 766491-B21 HP FlexFabric 10Gb 2P 536FLB FIO Adptr

16 761871-B21 HP Smart Array P244br/1G FIO Controller

BladeSystem Enclosure

2 681844-B21 HP BLc7000 CTO 3 IN LCD Plat Enclosure

4 571956-B21 HP BLc VC FlexFabric 10Gb/24-port Opt

2 456204-B21 HP BLc7000 DDR2 Encl Mgmt Option

8 453154-B21 HP BLc VC 1G SFP RJ45 Transceiver

8 AJ718A HP 8Gb Short Wave FC SFP+ 1 Pack

8 AJ716B HP 8Gb Short Wave B-Series SFP+ 1 Pack

16 455883-B21 HP BLc 10G SFP+ SR Transceiver

8 733459-B21 HP 2650W Plat Ht Plg Pwr Supply Kit

12 412140-B21 HP BLc Encl Single Fan Option

2 677595-B21 HP BLc 1PH Intelligent Power Mod FIO Opt

Infrastructure

1 BW904A HP 642 1075mm Shock Intelligent Rack

2 QK753B HP SN6000B 16Gb 48/24 FC Switch

2 QK753B 05Y 2.4m Jumper (IEC320 C13/C14, M/F CEE 22)

48 QK724A HP B-series 16Gb SFP+SW XCVR

2 JG505A HP 59xx CTO Switch Solution

2 JG296A HP 5920AF-24XG Switch

8 JD092B HP X130 10G SFP+ LC SR Transceiver

2 JC680A HP A58x0AF 650W AC Power Supply

2 JC680A B2B JmpCbl-NA/JP/TW

4 JG297A HP 5920AF-24XG Bk(pwr)-Frt(prt) Fn Tray

2 JE067A HP 5120-48G EI Switch

4 JD098B HP X120 1G SFP LC BX 10-U Transceiver


3PAR StoreServ

1 QR516B HP 3PAR 7000 Service Processor

10 QR490A HP M6710 2.5in 2U SAS Drive Enclosure

240 QR492A HP M6710 300GB 6G SAS 15K 2.5in HDD

1 QR485A HP 3PAR StoreServ 7400 4-N Storage Base

4 QR486A HP 3PAR 7000 4-pt 8Gb/s FC Adapter

16 QR503A HP M6710 200GB 6G SAS 2.5in SLC SSD

1 BC795A HP 3PAR 7400 Reporting Suite LTU

1 BC773A HP 3PAR 7400 OS Suite Base LTU

168 BC774A HP 3PAR 7400 OS Suite Drive LTU

1 BC781A HP 3PAR 7400 Virtual Copy Base LTU

168 BC782A HP 3PAR 7400 Virtual Copy Drive LTU

1 BC787A HP 3PAR 7400 Adaptive Opt Base LTU

168 BC788A HP 3PAR 7400 Adaptive Opt Drive LTU

1 BC785A HP 3PAR 7400 Dynamic Opt Base LTU

168 BC786A HP 3PAR 7400 Dynamic Opt Drive LTU

1 BW946A HP 42U Location Discovery Kit

1 BW932A HP 600mm Rack Stabilizer Kit

1 BW906A HP 42U 1075mm Side Panel Kit

24 AJ837A HP 15m Multi-mode OM3 LC/LC FC Cable

5 AF547A HP 5xC13 Intlgnt PDU Ext Bars G2 Kit

8 C7537A HP Ethernet 25ft CAT5e RJ45 M/M Cable

In addition to the items above, the following are required.

Table 4: Additional Items BOM

Qty HP Part Number Description

2 C7533A HP Ethernet 4ft CAT5e RJ45 M/M Cable

One per compute node J1U50AAE RHEV 2 Sckt 3yr 24x7

One per management server G3J30AAE RHEL Svr 2 Sckt/2 Gst 3yr 24x7

One per management server G3J35AAE RH HA 2 Sckt/2 Gst 3yr

One per management server G3J37AAE RH RS 2 Sckt/2 Gst 3yr

1 BC353A RHEL 6 media kit only

1 J1U56A (Optional) RHEL 7 media kit


Summary

The reference configuration design combines HP BladeSystem and Red Hat Enterprise Virtualization, the open source choice for virtualizing workloads. The configuration guidelines presented provide a foundation for building a high-performance Red Hat Enterprise Virtualization platform optimized to consolidate and provision varying workloads while providing extremely high availability at all levels – from the underlying network and storage fabrics up to the virtual machine (VM) layer.

HP BladeSystem can be sized and scaled in a modular fashion, simplifying scaling up and out as additional resources are required. Additionally, the HP BladeSystem architecture helps not only to reduce the footprint of the solution but also to reduce its environmental requirements through advanced power and thermal capabilities.

HP Virtual Connect FlexFabric provides a converged fabric and the ability to specifically allocate network ports and associated bandwidth based on the needs of the solution. Virtual Connect allows you to reduce the cable count and costs associated with traditional networking and SAN designs. Coupling these technologies with HP 3PAR StoreServ storage arrays results in an extremely dense platform for the deployment of virtualized environments that require high levels of storage performance.

Implementing a proof-of-concept

As a matter of best practice for all deployments, HP recommends implementing a proof-of-concept using a test environment that matches as closely as possible the planned production environment. In this way, appropriate performance and scalability characterizations can be obtained. For help in implementing a proof-of-concept, contact an HP Services representative (http://www8.hp.com/us/en/business-services/it-services/it-services.html) or your HP partner.

DISCLAIMER OF WARRANTY. This document may contain the following HP or other software: XML, CLI statements, scripts, parameter files. These are provided as a courtesy, free of charge, “AS-IS” by Hewlett-Packard Company (“HP”). HP shall have no obligation to maintain or support this software. HP MAKES NO EXPRESS OR IMPLIED WARRANTY OF ANY KIND REGARDING THIS SOFTWARE INCLUDING ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR NON-INFRINGEMENT. HP SHALL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES, WHETHER BASED ON CONTRACT, TORT OR ANY OTHER LEGAL THEORY, IN CONNECTION WITH OR ARISING OUT OF THE FURNISHING, PERFORMANCE OR USE OF THIS SOFTWARE.


Appendix: IP space

CR-5920AF 10.251.0.2 CR1-Enc1-b10-ilo 10.251.0.32 CR1-Enc2-b9-ilo 10.251.0.57

HPIC 10.251.2.2 CR1-Enc1-b10 10.251.2.32 CR1-Enc2-b9 10.251.2.57

rhevm 10.251.2.3 CR1-Enc1-b11-ilo 10.251.0.33 CR1-Enc2-b10-ilo 10.251.0.58

CR1-5120-24G-1 10.251.0.3 CR1-Enc1-b11 10.251.2.33 CR1-Enc2-b10 10.251.2.58

CR1-5120-24G-2 10.251.0.4 CR1-Enc1-b12-ilo 10.251.0.34 CR1-Enc2-b11-ilo 10.251.0.59

CR1-SANSwitch-1 10.251.0.5 CR1-Enc1-b12 10.251.2.34 CR1-Enc2-b11 10.251.2.59

CR1-SANSwitch-2 10.251.0.6 CR1-Enc1-b13-ilo 10.251.0.35 CR1-Enc2-b12-ilo 10.251.0.60

CR1-mgmt1-ilo 10.251.0.7 CR1-Enc1-b13 10.251.2.35 CR1-Enc2-b12 10.251.2.60

CR1-mgmt1 10.251.2.7 CR1-Enc1-b14-ilo 10.251.0.36 CR1-Enc2-b13-ilo 10.251.0.61

mgmt1 10.251.2.7 CR1-Enc1-b14 10.251.2.36 CR1-Enc2-b13 10.251.2.61

CR1-mgmt2-ilo 10.251.0.8 CR1-Enc1-b15-ilo 10.251.0.37 CR1-Enc2-b14-ilo 10.251.0.62

CR1-mgmt2 10.251.2.8 CR1-Enc1-b15 10.251.2.37 CR1-Enc2-b14 10.251.2.62

mgmt2 10.251.2.8 CR1-Enc1-b16-ilo 10.251.0.38 CR1-Enc2-b15-ilo 10.251.0.63

CR1-Enc1-oa1 10.251.0.13 CR1-Enc1-b16 10.251.2.38 CR1-Enc2-b15 10.251.2.63

CR1-Enc1-oa2 10.251.0.14 CR1-Enc2-oa1 10.251.0.39 CR1-Enc2-b16-ilo 10.251.0.64

CR1-Enc1-ic1 10.251.0.15 CR1-Enc2-oa2 10.251.0.40 CR1-iPDU1 10.251.0.65

CR1-Enc1-ic2 10.251.0.16 CR1-Enc2-ic1 10.251.0.41 CR1-iPDU2 10.251.0.66

CR1-Enc1-b1-ilo 10.251.0.23 CR1-Enc2-ic2 10.251.0.42 CR1-iPDU3 10.251.0.67

CR1-Enc1-b1 10.251.2.23 CR1-Enc2-b1-ilo 10.251.0.49 CR1-iPDU4 10.251.0.68

CR1-Enc1-b2-ilo 10.251.0.24 CR1-Enc2-b1 10.251.2.49 CR1-iPDU5 10.251.0.69

CR1-Enc1-b2 10.251.2.24 CR1-Enc2-b2-ilo 10.251.0.50 CR1-iPDU6 10.251.0.70

CR1-Enc1-b3-ilo 10.251.0.25 CR1-Enc2-b2 10.251.2.50 CR1-iPDU7 10.251.0.71

CR1-Enc1-b3 10.251.2.25 CR1-Enc2-b3-ilo 10.251.0.51 CR1-iPDU8 10.251.0.72

CR1-Enc1-b4-ilo 10.251.0.26 CR1-Enc2-b3 10.251.2.51 SRA-Node 10.251.0.143

CR1-Enc1-b4 10.251.2.26 CR1-Enc2-b4-ilo 10.251.0.52 SRA-SP 10.251.0.144

CR1-Enc1-b5-ilo 10.251.0.27 CR1-Enc2-b4 10.251.2.52 SRA-iPDU1 10.251.0.145

CR1-Enc1-b5 10.251.2.27 CR1-Enc2-b5-ilo 10.251.0.53 SRA-iPDU2 10.251.0.146

CR1-Enc1-b6-ilo 10.251.0.28 CR1-Enc2-b5 10.251.2.53 SRA-iPDU3 10.251.0.147

CR1-Enc1-b6 10.251.2.28 CR1-Enc2-b6-ilo 10.251.0.54 SRA-iPDU4 10.251.0.148

CR1-Enc1-b7-ilo 10.251.0.29 CR1-Enc2-b6 10.251.2.54

CR1-Enc1-b7 10.251.2.29 CR1-Enc2-b7-ilo 10.251.0.55

CR1-Enc1-b8-ilo 10.251.0.30 CR1-Enc2-b7 10.251.2.55

CR1-Enc1-b8 10.251.2.30 CR1-Enc2-b8-ilo 10.251.0.56

CR1-Enc1-b9-ilo 10.251.0.31 CR1-Enc2-b8 10.251.2.56

CR1-Enc1-b9 10.251.2.31


Appendix: SAN zoning

SANTOP zone configuration

zonecreate CR1_Mgt1_P1_to_3PAR, "CR1_Mgt1_P1;N1_S1_P1;N3_S1_P1"
zonecreate CR1_Mgt2_P1_to_3PAR, "CR1_Mgt2_P1;N1_S1_P1;N3_S1_P1"
cfgcreate "SANTOP", "CR1_Mgt1_P1_to_3PAR; CR1_Mgt2_P1_to_3PAR"
zonecreate CR1_E1_B01_FlexHBA_P1_to_3PAR, "CR1_E1_B01_FlexHBA_P1;N1_S1_P1;N3_S1_P1"
zonecreate CR1_E1_B02_FlexHBA_P1_to_3PAR, "CR1_E1_B02_FlexHBA_P1;N1_S2_P1;N3_S2_P1"
zonecreate CR1_E1_B03_FlexHBA_P1_to_3PAR, "CR1_E1_B03_FlexHBA_P1;N1_S2_P3;N3_S2_P3"
zonecreate CR1_E1_B04_FlexHBA_P1_to_3PAR, "CR1_E1_B04_FlexHBA_P1;N0_S1_P1;N2_S1_P1"
zonecreate CR1_E1_B05_FlexHBA_P1_to_3PAR, "CR1_E1_B05_FlexHBA_P1;N0_S2_P1;N2_S2_P1"
zonecreate CR1_E1_B06_FlexHBA_P1_to_3PAR, "CR1_E1_B06_FlexHBA_P1;N0_S2_P3;N2_S2_P3"
zonecreate CR1_E1_B07_FlexHBA_P1_to_3PAR, "CR1_E1_B07_FlexHBA_P1;N1_S1_P1;N3_S1_P1"
zonecreate CR1_E1_B08_FlexHBA_P1_to_3PAR, "CR1_E1_B08_FlexHBA_P1;N1_S2_P1;N3_S2_P1"
zonecreate CR1_E1_B09_FlexHBA_P1_to_3PAR, "CR1_E1_B09_FlexHBA_P1;N0_S1_P1;N2_S1_P1"
zonecreate CR1_E1_B10_FlexHBA_P1_to_3PAR, "CR1_E1_B10_FlexHBA_P1;N0_S2_P1;N2_S2_P1"
zonecreate CR1_E1_B11_FlexHBA_P1_to_3PAR, "CR1_E1_B11_FlexHBA_P1;N0_S2_P3;N2_S2_P3"
zonecreate CR1_E1_B12_FlexHBA_P1_to_3PAR, "CR1_E1_B12_FlexHBA_P1;N1_S1_P1;N3_S1_P1"
zonecreate CR1_E1_B13_FlexHBA_P1_to_3PAR, "CR1_E1_B13_FlexHBA_P1;N1_S2_P1;N3_S2_P1"
zonecreate CR1_E1_B14_FlexHBA_P1_to_3PAR, "CR1_E1_B14_FlexHBA_P1;N1_S2_P3;N3_S2_P3"
zonecreate CR1_E1_B15_FlexHBA_P1_to_3PAR, "CR1_E1_B15_FlexHBA_P1;N0_S1_P1;N2_S1_P1"
zonecreate CR1_E1_B16_FlexHBA_P1_to_3PAR, "CR1_E1_B16_FlexHBA_P1;N0_S2_P1;N2_S2_P1"
cfgadd "SANTOP", "CR1_E1_B01_FlexHBA_P1_to_3PAR; CR1_E1_B02_FlexHBA_P1_to_3PAR; CR1_E1_B03_FlexHBA_P1_to_3PAR; CR1_E1_B04_FlexHBA_P1_to_3PAR; CR1_E1_B05_FlexHBA_P1_to_3PAR; CR1_E1_B06_FlexHBA_P1_to_3PAR; CR1_E1_B07_FlexHBA_P1_to_3PAR; CR1_E1_B08_FlexHBA_P1_to_3PAR; CR1_E1_B09_FlexHBA_P1_to_3PAR; CR1_E1_B10_FlexHBA_P1_to_3PAR; CR1_E1_B11_FlexHBA_P1_to_3PAR; CR1_E1_B12_FlexHBA_P1_to_3PAR; CR1_E1_B13_FlexHBA_P1_to_3PAR; CR1_E1_B14_FlexHBA_P1_to_3PAR; CR1_E1_B15_FlexHBA_P1_to_3PAR; CR1_E1_B16_FlexHBA_P1_to_3PAR"
zonecreate CR1_E2_B01_FlexHBA_P1_to_3PAR, "CR1_E2_B01_FlexHBA_P1;N1_S1_P1;N3_S1_P1"
zonecreate CR1_E2_B02_FlexHBA_P1_to_3PAR, "CR1_E2_B02_FlexHBA_P1;N1_S2_P1;N3_S2_P1"
zonecreate CR1_E2_B03_FlexHBA_P1_to_3PAR, "CR1_E2_B03_FlexHBA_P1;N1_S2_P3;N3_S2_P3"
zonecreate CR1_E2_B04_FlexHBA_P1_to_3PAR, "CR1_E2_B04_FlexHBA_P1;N0_S1_P1;N2_S1_P1"
zonecreate CR1_E2_B05_FlexHBA_P1_to_3PAR, "CR1_E2_B05_FlexHBA_P1;N0_S2_P1;N2_S2_P1"
zonecreate CR1_E2_B06_FlexHBA_P1_to_3PAR, "CR1_E2_B06_FlexHBA_P1;N0_S2_P3;N2_S2_P3"
zonecreate CR1_E2_B07_FlexHBA_P1_to_3PAR, "CR1_E2_B07_FlexHBA_P1;N1_S1_P1;N3_S1_P1"
zonecreate CR1_E2_B08_FlexHBA_P1_to_3PAR, "CR1_E2_B08_FlexHBA_P1;N1_S2_P1;N3_S2_P1"
zonecreate CR1_E2_B09_FlexHBA_P1_to_3PAR, "CR1_E2_B09_FlexHBA_P1;N0_S1_P1;N2_S1_P1"
zonecreate CR1_E2_B10_FlexHBA_P1_to_3PAR, "CR1_E2_B10_FlexHBA_P1;N0_S2_P1;N2_S2_P1"
zonecreate CR1_E2_B11_FlexHBA_P1_to_3PAR, "CR1_E2_B11_FlexHBA_P1;N0_S2_P3;N2_S2_P3"
zonecreate CR1_E2_B12_FlexHBA_P1_to_3PAR, "CR1_E2_B12_FlexHBA_P1;N1_S1_P1;N3_S1_P1"
zonecreate CR1_E2_B13_FlexHBA_P1_to_3PAR, "CR1_E2_B13_FlexHBA_P1;N1_S2_P1;N3_S2_P1"
zonecreate CR1_E2_B14_FlexHBA_P1_to_3PAR, "CR1_E2_B14_FlexHBA_P1;N1_S2_P3;N3_S2_P3"
zonecreate CR1_E2_B15_FlexHBA_P1_to_3PAR, "CR1_E2_B15_FlexHBA_P1;N0_S1_P1;N2_S1_P1"
zonecreate CR1_E2_B16_FlexHBA_P1_to_3PAR, "CR1_E2_B16_FlexHBA_P1;N0_S2_P1;N2_S2_P1"
cfgadd "SANTOP", "CR1_E2_B01_FlexHBA_P1_to_3PAR; CR1_E2_B02_FlexHBA_P1_to_3PAR; CR1_E2_B03_FlexHBA_P1_to_3PAR; CR1_E2_B04_FlexHBA_P1_to_3PAR; CR1_E2_B05_FlexHBA_P1_to_3PAR; CR1_E2_B06_FlexHBA_P1_to_3PAR; CR1_E2_B07_FlexHBA_P1_to_3PAR; CR1_E2_B08_FlexHBA_P1_to_3PAR; CR1_E2_B09_FlexHBA_P1_to_3PAR; CR1_E2_B10_FlexHBA_P1_to_3PAR; CR1_E2_B11_FlexHBA_P1_to_3PAR; CR1_E2_B12_FlexHBA_P1_to_3PAR; CR1_E2_B13_FlexHBA_P1_to_3PAR; CR1_E2_B14_FlexHBA_P1_to_3PAR; CR1_E2_B15_FlexHBA_P1_to_3PAR; CR1_E2_B16_FlexHBA_P1_to_3PAR"

SANBOT zone configuration

zonecreate CR1_Mgt1_P2_to_3PAR, "CR1_Mgt1_P2;N0_S1_P2;N2_S1_P2"
zonecreate CR1_Mgt2_P2_to_3PAR, "CR1_Mgt2_P2;N0_S1_P2;N2_S1_P2"
cfgcreate "SANBOT", "CR1_Mgt1_P2_to_3PAR; CR1_Mgt2_P2_to_3PAR"
zonecreate CR1_E1_B01_FlexHBA_P2_to_3PAR, "CR1_E1_B01_FlexHBA_P2;N0_S1_P2;N2_S1_P2"
zonecreate CR1_E1_B02_FlexHBA_P2_to_3PAR, "CR1_E1_B02_FlexHBA_P2;N0_S2_P2;N2_S2_P2"
zonecreate CR1_E1_B03_FlexHBA_P2_to_3PAR, "CR1_E1_B03_FlexHBA_P2;N0_S2_P4;N2_S2_P4"
zonecreate CR1_E1_B04_FlexHBA_P2_to_3PAR, "CR1_E1_B04_FlexHBA_P2;N1_S1_P2;N3_S1_P2"
zonecreate CR1_E1_B05_FlexHBA_P2_to_3PAR, "CR1_E1_B05_FlexHBA_P2;N1_S2_P2;N3_S2_P2"
zonecreate CR1_E1_B06_FlexHBA_P2_to_3PAR, "CR1_E1_B06_FlexHBA_P2;N1_S2_P4;N3_S2_P4"
zonecreate CR1_E1_B07_FlexHBA_P2_to_3PAR, "CR1_E1_B07_FlexHBA_P2;N0_S1_P2;N2_S1_P2"
zonecreate CR1_E1_B08_FlexHBA_P2_to_3PAR, "CR1_E1_B08_FlexHBA_P2;N0_S2_P2;N2_S2_P2"
zonecreate CR1_E1_B09_FlexHBA_P2_to_3PAR, "CR1_E1_B09_FlexHBA_P2;N1_S1_P2;N3_S1_P2"
zonecreate CR1_E1_B10_FlexHBA_P2_to_3PAR, "CR1_E1_B10_FlexHBA_P2;N1_S2_P2;N3_S2_P2"
zonecreate CR1_E1_B11_FlexHBA_P2_to_3PAR, "CR1_E1_B11_FlexHBA_P2;N1_S2_P4;N3_S2_P4"
zonecreate CR1_E1_B12_FlexHBA_P2_to_3PAR, "CR1_E1_B12_FlexHBA_P2;N0_S1_P2;N2_S1_P2"
zonecreate CR1_E1_B13_FlexHBA_P2_to_3PAR, "CR1_E1_B13_FlexHBA_P2;N0_S2_P2;N2_S2_P2"
zonecreate CR1_E1_B14_FlexHBA_P2_to_3PAR, "CR1_E1_B14_FlexHBA_P2;N0_S2_P4;N2_S2_P4"
zonecreate CR1_E1_B15_FlexHBA_P2_to_3PAR, "CR1_E1_B15_FlexHBA_P2;N1_S1_P2;N3_S1_P2"
zonecreate CR1_E1_B16_FlexHBA_P2_to_3PAR, "CR1_E1_B16_FlexHBA_P2;N1_S2_P2;N3_S2_P2"
cfgadd "SANBOT", "CR1_E1_B01_FlexHBA_P2_to_3PAR; CR1_E1_B02_FlexHBA_P2_to_3PAR; CR1_E1_B03_FlexHBA_P2_to_3PAR; CR1_E1_B04_FlexHBA_P2_to_3PAR; CR1_E1_B05_FlexHBA_P2_to_3PAR; CR1_E1_B06_FlexHBA_P2_to_3PAR; CR1_E1_B07_FlexHBA_P2_to_3PAR; CR1_E1_B08_FlexHBA_P2_to_3PAR; CR1_E1_B09_FlexHBA_P2_to_3PAR; CR1_E1_B10_FlexHBA_P2_to_3PAR; CR1_E1_B11_FlexHBA_P2_to_3PAR; CR1_E1_B12_FlexHBA_P2_to_3PAR; CR1_E1_B13_FlexHBA_P2_to_3PAR; CR1_E1_B14_FlexHBA_P2_to_3PAR; CR1_E1_B15_FlexHBA_P2_to_3PAR; CR1_E1_B16_FlexHBA_P2_to_3PAR"
zonecreate CR1_E2_B01_FlexHBA_P2_to_3PAR, "CR1_E2_B01_FlexHBA_P2;N0_S1_P2;N2_S1_P2"
zonecreate CR1_E2_B02_FlexHBA_P2_to_3PAR, "CR1_E2_B02_FlexHBA_P2;N0_S2_P2;N2_S2_P2"
zonecreate CR1_E2_B03_FlexHBA_P2_to_3PAR, "CR1_E2_B03_FlexHBA_P2;N0_S2_P4;N2_S2_P4"
zonecreate CR1_E2_B04_FlexHBA_P2_to_3PAR, "CR1_E2_B04_FlexHBA_P2;N1_S1_P2;N3_S1_P2"
zonecreate CR1_E2_B05_FlexHBA_P2_to_3PAR, "CR1_E2_B05_FlexHBA_P2;N1_S2_P2;N3_S2_P2"
zonecreate CR1_E2_B06_FlexHBA_P2_to_3PAR, "CR1_E2_B06_FlexHBA_P2;N1_S2_P4;N3_S2_P4"
zonecreate CR1_E2_B07_FlexHBA_P2_to_3PAR, "CR1_E2_B07_FlexHBA_P2;N0_S1_P2;N2_S1_P2"
zonecreate CR1_E2_B08_FlexHBA_P2_to_3PAR, "CR1_E2_B08_FlexHBA_P2;N0_S2_P2;N2_S2_P2"
zonecreate CR1_E2_B09_FlexHBA_P2_to_3PAR, "CR1_E2_B09_FlexHBA_P2;N1_S1_P2;N3_S1_P2"
zonecreate CR1_E2_B10_FlexHBA_P2_to_3PAR, "CR1_E2_B10_FlexHBA_P2;N1_S2_P2;N3_S2_P2"
zonecreate CR1_E2_B11_FlexHBA_P2_to_3PAR, "CR1_E2_B11_FlexHBA_P2;N1_S2_P4;N3_S2_P4"
zonecreate CR1_E2_B12_FlexHBA_P2_to_3PAR, "CR1_E2_B12_FlexHBA_P2;N0_S1_P2;N2_S1_P2"
zonecreate CR1_E2_B13_FlexHBA_P2_to_3PAR, "CR1_E2_B13_FlexHBA_P2;N0_S2_P2;N2_S2_P2"
zonecreate CR1_E2_B14_FlexHBA_P2_to_3PAR, "CR1_E2_B14_FlexHBA_P2;N0_S2_P4;N2_S2_P4"
zonecreate CR1_E2_B15_FlexHBA_P2_to_3PAR, "CR1_E2_B15_FlexHBA_P2;N1_S1_P2;N3_S1_P2"
zonecreate CR1_E2_B16_FlexHBA_P2_to_3PAR, "CR1_E2_B16_FlexHBA_P2;N1_S2_P2;N3_S2_P2"
cfgadd "SANBOT", "CR1_E2_B01_FlexHBA_P2_to_3PAR; CR1_E2_B02_FlexHBA_P2_to_3PAR; CR1_E2_B03_FlexHBA_P2_to_3PAR; CR1_E2_B04_FlexHBA_P2_to_3PAR; CR1_E2_B05_FlexHBA_P2_to_3PAR; CR1_E2_B06_FlexHBA_P2_to_3PAR; CR1_E2_B07_FlexHBA_P2_to_3PAR; CR1_E2_B08_FlexHBA_P2_to_3PAR; CR1_E2_B09_FlexHBA_P2_to_3PAR; CR1_E2_B10_FlexHBA_P2_to_3PAR; CR1_E2_B11_FlexHBA_P2_to_3PAR; CR1_E2_B12_FlexHBA_P2_to_3PAR; CR1_E2_B13_FlexHBA_P2_to_3PAR; CR1_E2_B14_FlexHBA_P2_to_3PAR; CR1_E2_B15_FlexHBA_P2_to_3PAR; CR1_E2_B16_FlexHBA_P2_to_3PAR"

Glossary

CLI Command-line interface. An interface comprised of various commands which are used to control operating system responses.

CNA Converged network adapter

CPG Common provisioning group

DHCP Dynamic host configuration protocol

DNS Domain name server

EBIPA Enclosure-based IP Addressing

FCoE Fibre Channel over Ethernet

GRUB GRand Unified Bootloader


HA High availability

iLO Integrated Lights-Out

IOPS Input/output operations per second

iPDU HP Intelligent Power Distribution Unit

IRF Intelligent resilient framework

KVM Kernel-based Virtual Machine. Virtual machine framework built into the Linux kernel.

LUN Logical Unit Number. A number used to identify a SCSI/iSCSI/Fibre Channel/FCoE device.

MAC address Media access control address. A unique identifier attached to most forms of networking equipment, which is part of the Ethernet specification.

MII Media Independent Interface

NTP Network time protocol

PXE Preboot Execution Environment

RBSU ROM Based System Utility

RHN Red Hat Network

RSA RSA is an algorithm for public-key cryptography.

SELinux Security Enhanced Linux. Linux kernel feature that provides the mechanism for supporting access control security policies, including United States Department of Defense-style mandatory access controls.

SCSI Small Computer System Interface

SFP+ Enhanced small form-factor pluggable transceiver

SNMP Simple Network Management Protocol

SSD Solid State Disk

SSH Secure shell

STP Spanning tree protocol

TPVV Thin Provisioned Virtual Volume

VCEM Virtual Connect Enterprise Manager

VLAN Virtual local area network

VM Virtual Machine

VV Virtual Volume

WWID World Wide ID. A unique identifier assigned to a Fibre Channel device

WWPN World Wide Port Name. World Wide ID for a specific device port


Sign up for updates

hp.com/go/getupdated

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for

HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as

constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Intel Xeon are trademarks of Intel Corporation in the U.S. and other countries. Oracle and Java are registered trademarks of Oracle and/or its

affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the

U.S. and other countries.

4AA5-9028ENW, June 2015

For more information

HP BladeSystem, hp.com/go/bladesystem

HP ProLiant Servers, hp.com/go/proliant

HP OneView, hp.com/go/oneview

HP OneView for Red Hat Enterprise Virtualization, hp.com/go/ovrhev

HP Networking, hp.com/go/networking

HP 3PAR Storage, hp.com/go/3par

HP & Red Hat, hp.com/go/redhat

Red Hat Enterprise Virtualization, redhat.com/en/technologies/virtualization

To help us improve our documents, please provide feedback at hp.com/solutions/feedback.