
Deploying Red Hat® Enterprise Linux® (RHEL) 5 Virtualization
Volume 2: Cluster

Version 2.0

November 2008


Deploying Red Hat® Enterprise Linux® 5 Virtualization Volume 2: Cluster

Copyright © 2008 by Red Hat, Inc.

1801 Varsity Drive
Raleigh NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
PO Box 13588
Research Triangle Park NC 27709 USA

"Red Hat," Red Hat Linux, the Red Hat "Shadowman" logo, and the products listed are trademarks or registered trademarks of Red Hat, Inc. in the United States and other countries. Linux is a registered trademark of Linus Torvalds.

All other trademarks referenced herein are the property of their respective owners.

© 2008 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, V1.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/).

The information contained herein is subject to change without notice. Red Hat, Inc. shall not be liable for technical or editorial errors or omissions contained herein.

Distribution of substantively modified versions of this document is prohibited without the explicit permission of the copyright holder.

Distribution of the work or derivative of the work in any standard (paper) book form for commercial purposes is prohibited unless prior permission is obtained from the copyright holder.

The GPG fingerprint of the security@redhat.com key is:
CA 20 86 86 2B D6 9D FC 65 F6 EC C4 21 91 80 CD DB 42 A6 0E


Table of Contents

1 About this document
  1.1 Audience
  1.2 References
  1.3 Document Conventions
  1.4 Terms and Acronyms
2 Introduction
  2.1 What is Virtualization?
  2.2 What is a Cluster?
  2.3 Why combine Virtualization and Clusters?
  2.4 What to cluster?
3 Environment
  3.1 Hardware
  3.2 Connectivity
  3.3 Conceptual diagram
4 Configuration
  4.1 Hosts
  4.2 Storage
  4.3 Network
    4.3.1 Bonding
    4.3.2 Network Bridge
    4.3.3 Interconnect switch
    4.3.4 /etc/hosts
  4.4 Luci server
  4.5 Forming the Cluster
    4.5.1 Fencing
    4.5.2 Quorum Disk - optional
    4.5.3 GFS2
  4.6 Install VM
  4.7 Virtual Service Creation
    4.7.1 Evaluate


Appendix A: Alternate Configuration
  A.1 Network
  A.2 Cluster Software
  A.3 Software Update
  A.4 luci server
  A.5 Configure the Guest Cluster
  A.6 Fencing
  A.7 Cluster Resources
    A.7.1 IP Address
    A.7.2 Web content
      A.7.2.1 NFS
        A.7.2.1.1 NFS server
        A.7.2.1.2 NFS mount resource
      A.7.2.2 GFS/GFS2
      A.7.2.3 Apache controller
    A.7.3 Service
    A.7.4 Evaluate
      A.7.4.1 Relocation
      A.7.4.2 Transition and failures
      A.7.4.3 Cluster transitions
    A.7.5 qdisk
    A.7.6 Web Content not as a resource
  A.8 Clustered hosts with clustered guest
    A.8.1 Forming host cluster
    A.8.2 XVM Fencing
    A.8.3 Evaluate
Appendix B: Firewall
  B.1 Clusters
  B.2 Luci
  B.3 Migration
  B.4 xvmd fencing
  B.5 NFS server
Appendix C: SELinux
  C.1 Loadable modules
  C.2 Setting context on new areas
Appendix D: Issue Tracking


1 About this document

This document is the second of a planned series detailing the use of server virtualization on Red Hat Enterprise Linux 5. This volume demonstrates the use of Red Hat Cluster Suite with Xen virtual guests. Volume 1 provided details for installing and managing standalone Xen guests.

The recommended solution of clustering the hosts and creating a VM service is demonstrated in the body of this paper. Appendix A: Alternate Configuration demonstrates an alternative approach of clustering guest VMs that run an application as a service.

1.1 Audience

The target audience is a computer professional with varying degrees of familiarity with Red Hat Enterprise Linux, virtualization, and clustering.

The reader should be familiar with the content of Volume 1 of this series and both Volumes 1 and 2 of Deploying a Highly Available Web Server on Red Hat Enterprise Linux 5.

1.2 References

Configuring and Managing a Red Hat Cluster

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Administration/index.html

Installation Guide

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Installation_Guide/index.html

Virtualization Guide

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Virtualization/index.html

Deployment Guide

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Deployment_Guide/index.html

Deploying Red Hat® Enterprise Linux® 5 Virtualization Volume 1: Single System

Deploying a Highly Available Web Server on Red Hat® Enterprise Linux® 5 Volume 1: NFS Web Content

Deploying a Highly Available Web Server on Red Hat® Enterprise Linux® 5 Volume 2: GFS & Shared Storage


1.3 Document Conventions

As in most documents with procedural citations, certain paragraphs in this manual are represented in different fonts, typefaces, sizes, and colors. This highlighting helps distinguish command line user input and output from text file content. Information represented in this manner includes the following:

➢ command names
Linux commands like iptables or yum will be differentiated by font.

➢ user input
User entered commands and their respective output will be displayed as seen below.

# echo "This is an example of command line input and output"
This is an example of command line input and output
# 

➢ file content
Listing the content or partial content of a Linux ASCII file will be displayed as seen below.

! This is the appearance of text contained within a file

1.4 Terms and Acronyms

CLVMD Cluster Logical Volume Manager Daemon

DLM Distributed Lock Manager

dom0, host Virtualization host running the Xen kernel

domU, guest, VM Virtualized guest machine

GFS Global File System

HBA Host Bus Adapter

2 Introduction

2.1 What is Virtualization?

Virtualization allows multiple operating system instances to run concurrently on a single computer; it is a means of separating hardware from a single operating system. Each virtual machine or "guest" OS is managed by a Virtual Machine Monitor (VMM), also known as a hypervisor. Because the virtualization system sits between the guest and the hardware, it can control the guests' use of CPU, memory, and storage, even allowing a guest OS to migrate from one machine to another.


2.2 What is a Cluster?

A cluster is a group of two or more computers working together which, from an end user's perspective, appear as one server. The high availability aspect of a cluster means that its services are configured such that a monitored failure on any cluster member will not prevent the continued availability of the service itself.

2.3 Why combine Virtualization and Clusters?

A customer can benefit by using a cluster infrastructure with virtualization by:

● increased uptime of applications through both planned and unplanned outages
● isolation and security benefits of separating applications to separate virtual machines
● cost savings derived from using virtualization to consolidate existing servers

2.4 What to cluster?

A cluster configuration with a guest as a service to clustered hosts provides the ability to migrate the guest from host to host. A migration preserves the state of the virtual machine as it is stopped on one host, then restores that state on the new host. The option to perform a Live Migration keeps an active guest domain running while its memory image is duplicated to the target host, providing minimal down time and maintaining application state. An operator can choose any service or group of services they desire the guest to make available. An alternate solution has clustered guests running on hosts that are either clustered or not clustered. While the flexibility of heterogeneous hosts is gained, active state is not automatically maintained when a service is moved from host to host. The alternate solution is covered in Appendix A.
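For reference, once migration has been enabled between the hosts (covered later in this volume), a live migration of a running guest is a single xm command. This is only a sketch: the guest name ra-vm1 and the target host degas match names used elsewhere in this paper, and xend relocation must already be permitted on the target host.

# xm migrate --live ra-vm1 degas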

For the configurations in this volume, the web pages were provided via the Apache web server. The recommended configuration will have a VM as a service that is configured to serve the web pages. The alternate configuration will use a service which can be located on any of the clustered VMs.

3 Environment

3.1 Hardware

The following table lists the major components of the testbed.

System - Monet
  HP DL580 G5
  Red Hat Enterprise Linux 5.2 (2.6.18-92.1.13.el5xen)
  Quad Socket, Quad Core (Total of 16 cores)
  Intel Xeon X7350 @ 2.93GHz
  64 GB RAM
  PCIe options:
    2 x Intel-based 82572EI Gigabit Ethernet Controller
    2 x QLogic ISP2432-based 4G FC HBA

System - Degas
  HP DL580 G5
  Red Hat Enterprise Linux 5.2 (2.6.18-92.1.13.el5xen)
  Quad Socket, Quad Core (Total of 16 cores)
  Intel Xeon X7350 @ 2.93GHz
  64 GB RAM
  PCIe options:
    2 x Intel 82572EI Gigabit Ethernet Controller
    2 x QLogic ISP2432-based 4G FC HBA

System - Renoir
  HP DL585 G2
  Red Hat Enterprise Linux 5.2 (2.6.18-92.1.13.el5xen)
  Quad Socket, Dual Core (Total of 8 cores)
  AMD Opteron 8222 SE @ 3.0 GHz
  72 GB RAM
  PCIe options:
    2 x Intel-based 82572EI Gigabit Ethernet Controller
    2 x QLogic ISP2432-based 4G FC HBA

Storage Array
  MSA2212fc
  24 x 146GB 15K drives

Storage Interconnect
  HP StorageWorks 4/16 SAN Switch

Private Interconnect
  Cisco 3560G


3.2 Connectivity

The diagram below shows the connectivity of the components.


3.3 Conceptual diagram


4 Configuration

The following list summarizes the actions that were performed to establish this configuration.

1. Install and configure hosts
   a. storage
   b. networks
      1. private interconnects
      2. bonding
2. Establish a luci server
3. Configure cluster
   a. form
   b. fencing
   c. quorum disk - optional
   d. GFS2
4. Install and configure VM
   a. create guest
   b. configure web server
   c. enable migration
5. Create Virtual service
6. Evaluate

4.1 Hosts

The host machines monet and renoir that were used in Volume 1 of this series were used for the alternate configuration in Appendix A. A server, degas, which is a duplicate of monet, was used for the recommended configuration.

The steps for configuring a host were demonstrated in Volume 1, Appendix A: Host Configuration. Review of that document will provide the details.

4.2 Storage

The storage configuration was very similar to the configuration used in Volume 1 (see Volume 1, Appendix A.4.5), with the exception that a second fibre connection was made between each of the hosts and the switch. Adding the second fibre path did not require any changes on the hosts; the previously configured device mapper multipath handled the changes.

In Volume 1, the storage array presented a 250 GB LUN to each host, which was used as the only Physical Volume for a new LVM Volume Group on each node. A 50 GB volume was allocated with a file system and mounted for the Xen images. Several of the VMs also used 10 GB volumes for their OS images.

The changes for this volume relate to shared storage. Two new LUNs were presented to both cluster nodes. A 50 MB LUN was created to be used as a quorum disk. A 128 GB LUN was used for a clustered LVM Physical Volume used to make shared Global File Systems.


The non-blank and uncommented lines of /etc/multipath.conf were set to the following on each of the hosts. While all host specific data appeared on each node, the storage array only presented the LUNs to the proper nodes, so the other nodes' data was ignored.

defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid  3600c0ff000d5567dfc07764801000000
        alias monet_virt
    }
    multipath {
        wwid  3600c0ff000d5567d720e1b4901000000
        alias degas_virt
    }
    multipath {
        wwid  3600c0ff000d556079b448f4801000000
        alias renoir_virt
    }
    multipath {
        wwid  3600c0ff000d5560776ab054901000000
        alias hostShare
    }
    multipath {
        wwid  3600c0ff000d5567d09ab054901000000
        alias hostQdisk
    }
}
devices {
    device {
        vendor               "HP"
        product              "MSA2[02]*"
        path_grouping_policy multibus
        getuid_callout       "/sbin/scsi_id -g -u -s /block/%n"
        path_selector        "round-robin 0"
        rr_weight            uniform
        prio_callout         "/bin/true"
        path_checker         tur
        hardware_handler     "0"
        failback             immediate
        no_path_retry        12
        rr_min_io            100
    }
}

4.3 Network

Since the intent of cluster configurations is to provide high availability, bonding multiple interfaces for availability is recommended for both the public and private networks. The bonding will be configured on the host machines. Since virtual machines will bridge to the host network, they will inherit the availability benefits from their hosts.

Be aware that alias settings in /etc/modprobe.conf and name persistence settings in /etc/sysconfig/network-scripts/ifcfg-eth* or the appropriate file in /etc/udev/rules.d/ could influence the interface names. To avoid this influence, the aliases were commented out and the ifcfg files were temporarily moved. The systems were then rebooted. The following command was used to determine the correct configuration of the adapters and their manufacturers, and the aliases and ifcfg files were adjusted accordingly.

# kudzu -p -c NETWORK

4.3.1 Bonding

Each system's internal Gigabit Ethernet network ports (Broadcom based) will be used for the public network. These are known as eth0 and eth1 on all the systems. The private interconnect on each system uses the Intel-based Gigabit Ethernet PCIe adapters. These varied on the systems as either eth2 and eth3 or eth3 and eth4.

The kernel will need to know that bonding will be used. By default, the maximum number of bonds that the kernel will implement is set to one. Since this configuration will be using two, one each for the public and private interfaces, a kernel parameter will set the maximum to two. This is accomplished by adding the following lines to the /etc/modprobe.conf file.

alias bond0 bonding
options bonding max_bonds=2
alias bond1 bonding

The interface names for the bonds will be 'bond0' and 'bond1'; bond0 will be used for the public network and bond1 for the private interconnect. An ifcfg-bond# file will be constructed for each bond. The public network on which these servers reside associates a specific MAC address to the host machine IP address using DHCP. This MAC address is specified as the MAC address to use for the public bond. The bonding option mode=1 configures active backup, which provides fault tolerance: only one slave is active at a time and another becomes active upon the failure of the currently active adapter. The miimon option specifies the frequency in milliseconds at which MII (Media Independent Interface) link monitoring occurs. The contents of /etc/sysconfig/network-scripts/ifcfg-bond0 should be similar to the following.

DEVICE=bond0
ONBOOT=yes
BOOTPROTO=dhcp
MACADDR=00:1E:0B:CE:42:78
BONDING_OPTS="mode=1 miimon=100"

The ifcfg-bond1 specifies the network address and mask statically.

DEVICE=bond1
IPADDR=10.10.10.96
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=static
BONDING_OPTS="mode=1 miimon=100"


The interfaces that will be enslaved to each bond will need an /etc/sysconfig/network-scripts/ifcfg-eth# file. Each file should be similar to the following but specify the appropriate values for DEVICE (the interface name) and MASTER (the name of the bond). By specifying a HWADDR, an adapter with the specified MAC will be associated with the interface regardless of the load order.

DEVICE=eth1
BOOTPROTO=none
MASTER=bond1
SLAVE=yes
ONBOOT=yes
HWADDR=00:18:71:eb:9a:8e

Upon a reboot, the bond interfaces should be active.
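To confirm that each bond came up in active-backup mode with the expected slave interfaces, the bonding driver's status files can be inspected. This quick check is not part of the original procedure; it only relies on the standard files exposed by the bonding driver.

# cat /proc/net/bonding/bond0
# cat /proc/net/bonding/bond1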

4.3.2 Network Bridge

A network bridge is used to connect the host network(s) to the virtual machine network(s). Modify /etc/xen/xend-config.sxp to specify the bond for the public interface.

(network-script 'network-bridge netdev=bond0')

After rebooting the node, verify the bridge is functioning.

# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.000000000000       yes
xenbr0          8000.feffffffffff       no              pbond0
                                                        vif0.0
# 

4.3.3 Interconnect switch

To prevent the cluster interconnect traffic from congesting, or being congested by, any other network traffic, the ports used for the interconnects were placed in a VLAN (virtual LAN) using configuration settings on the switch. A more robust configuration would use multiple switches and connect each port of the private bond to a separate switch.

The switch configuration for the ports originally included port security. The concept behind port security is that only a single or specified group of MAC addresses is allowed to send traffic through a port; access by another, non-secure address causes a violation which can be handled in several configurable ways, including the shutdown of the port. Since the VMs will each have their own MAC address in addition to the host's, this feature was disabled. The list of MACs could have been provided, however the experimental nature of this configuration was too dynamic to maintain it easily.

The cluster software uses IP multicast protocols to communicate. Not all switches support multicast, and others may require the feature to be enabled or configured.


4.3.4 /etc/hosts

While DHCP maintains the IP addresses for the public interfaces, the private interface addresses will need to be managed by the operator. Typically, addresses from one of the private ranges specified by RFC 1918 (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) that do not conflict with the scheme used for the public interface will be chosen. The interconnects in this volume used 10.10.10.0/24 as the subnet and took the last byte from the public interface. The entries were appended to the hosts file on each host in the configurations. The name for each interconnect is the public name with '-ic' appended.

10.10.10.96   degas-ic.lab.bos.redhat.com   degas-ic
10.10.10.100  monet-ic.lab.bos.redhat.com   monet-ic
10.10.10.102  renoir-ic.lab.bos.redhat.com  renoir-ic

4.4 Luci server

Conga was designed as an HTML graphical interface for creating and managing clusters built using Red Hat Cluster Suite software. It is built on an agent/server (ricci/luci) architecture for remote administration of systems, as well as a method for managing sophisticated storage configurations. The luci server is accessed via a secure web page and will need to be on the same private network as the systems it will manage. One luci server can manage multiple clusters.

The diagram below illustrates Conga’s architecture.

Since renoir will not be encumbered as part of the cluster, it is an ideal choice to become the luci server. The cluster software should already be installed; the following command is used to confirm that luci is installed.

# yum list luci
Loading "rhnplugin" plugin
Loading "security" plugin
rhel-x86_64-server-5      100% |=========================| 1.4 kB    00:00
rhel-x86_64-server-cluste 100% |=========================| 1.4 kB    00:00
rhel-x86_64-server-cluste 100% |=========================| 1.4 kB    00:00
rhel-x86_64-server-vt-5   100% |=========================| 1.4 kB    00:00
rhn-tools-rhel-x86_64-ser 100% |=========================| 1.2 kB    00:00
Installed Packages
luci.x86_64                              0.12.0-7.el5           installed
# 

An administrative password must be set for luci using luci_admin before the service can be started.

# luci_admin init
Initializing the luci server

Creating the 'admin' user

Enter password: <enter password>
Confirm password: <re-enter password>

Please wait...
The admin password has been successfully set.
Generating SSL certificates...
The luci server has been successfully initialized

You must restart the luci server for changes to take effect.

Run "service luci restart" to do so
# 

luci_admin password can be run at any time to change the luci administrative password initially set above.

As the output from the luci_admin command stated, the luci service must be restarted. Also, make sure it will be started on subsequent boots.

# service luci restart
Shutting down luci:                                        [  OK  ]
Starting luci: Generating https SSL certificates...  done  [  OK  ]

Point your web browser to https://<luci_servername>:8084 to access luci
# chkconfig luci on
# 


The first time luci is accessed via a web browser at https://<luci_servername>:8084, the user will need to accept two SSL certificates before being directed to the login page. Enter the login name and chosen password to view the luci homebase page.

4.5 Forming the Cluster

The creation of the cluster follows several of the steps detailed in Deploying a Highly Available Web Server on Red Hat Enterprise Linux 5 Volume 1: NFS Web Content.


Start by connecting to luci using https://<server address>:8084. This screen prompts for a Login Name and Password. Use 'admin' for the Login Name and supply the Password that was entered during the luci initialization and select the Log in button.


This will present the homebase page. Select the second tab which is the cluster tab.


The cluster tab shows any existing clusters that can be managed; none have been configured at this point. Select Create a New Cluster from the choices in the menu on the left side of the page.


Prior to supplying all the information needed for the cluster, it is suggested to verify that luci is able to talk to the potential cluster members. Enter the fully qualified domain names for the interconnects of the hosts that will be clustered, and select the View SSL cert fingerprints button.


A status box will appear at the bottom of the page, reporting the results of luci's attempted communication with the ricci servers on the nodes provided. If an error is displayed, suggested debugging steps are:

● verify the node names
● ping the nodes using the names provided
● log in to the nodes and verify ricci is running ( pgrep -l ricci )
● check the SELinux and firewall settings (see Appendix B: Firewall and Appendix C: SELinux)
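A minimal command line sequence for the checks listed above might look like the following; run the ping from the luci server and the remaining commands on the candidate node. The node name is illustrative.

# ping -c 3 degas-ic.lab.bos.redhat.com
# pgrep -l ricci
# service ricci status
# chkconfig ricci on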


After verifying communication with the nodes, some additional information will need to be specified to form the cluster. This example demonstrates supplying a Cluster Name and selecting the options to Use locally installed packages and Enable Shared Storage Support. Press Submit.

A confirmation dialog will be presented; select OK.


The luci page will present the progress.


After completing the cluster formation, the cluster properties page should be displayed.


Selecting the Cluster List item from the menu on the left side of the page will present a window similar to the following, where the user should confirm that the cluster and nodes are green, indicating that they are in a good state.

4.5.1 Fencing

A cluster is a group of computers that are communicating to provide compute power to the users of the cluster. If the members of the cluster have a breakdown in communication, there is potential that the members may attempt to perform activities that other members do not know about. The concept of quorum, needing a specified number of votes to have an active cluster, is a mechanism that has been implemented to help. The current settings, as seen above, show the cluster has two votes (one per node) and a minimum of one is required. This quorum configuration is a special 2-node setting; typically a majority is needed. With this configuration, if the nodes have a failure in communicating with each other, each node has the one vote it needs to form a cluster. This is referred to as a split-brain cluster. If each node were to form a cluster, they could perform operations that could cause data inconsistencies.


Fencing is a mechanism that isolates the activities that a node can perform. Some mechanisms limit the I/O a cluster member can perform. Other methods control the operation of a member by controlling the member's running state.

Primary and secondary methods of fencing are recommended. In this configuration, the primary method will isolate the shared storage using Brocade fabric switch fencing. The secondary method will power cycle the cluster member using the HP Integrated Lights Out processor.

The default delay before fencing members that have not joined the cluster is set to three seconds. This may cause unnecessary fencing, therefore the delay was increased as follows. Selecting the cluster name link from the cluster list presents the Configure cluster properties page. Select the Fence tab, set the Post Join Delay value to an appropriate value, and press the Apply button.
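For reference, the Post Join Delay value set through luci is stored as the post_join_delay attribute of the fence_daemon tag in /etc/cluster/cluster.conf. A sketch with an illustrative value of 60 seconds:

<fence_daemon post_fail_delay="0" post_join_delay="60"/>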


To configure the fencing devices, from the cluster list page select one of the node name links, or from the cluster properties page select Nodes => <node name> in the menu on the left side of the page. This will present the node's properties page.


Select the Add a fence device to this level link in the Main Fencing Method portion of the page; this changes the link to a pull-down listing the possible fence devices.


In the pull-down, select Brocade Fabric Switch. Several fields will appear; supply the needed values. Since the configuration has two ports per node, both must be specified. Once all values have been entered, select Update main fence properties.

After a confirmation window and a Please be patient page, the cluster list page will be displayed.


Return to the node's properties page and select the Add a fence device to this level link in the Backup Fencing Method area. In the pull-down, HP iLO was selected and the appropriate values were entered. Select the Update backup fence properties button.

The ACPI daemon (acpid) can intercept power button presses and perform an orderly shutdown instead. It is better for the fence to be immediate, so this daemon should be disabled.

# service acpid stop
# chkconfig acpid off
# 

Similar actions need to be performed on the other members. The existing Brocade fence can be selected and used on the additional nodes, specifying the correct ports. A new HP iLO device will need to be added.


Verify that the fences are functioning correctly. At the Nodes page, select Fence this node in the Choose a Task... pull-down for one of the nodes, then press the Go button.

A confirmation pop-up and a Please be patient page will be displayed.


Using multipath we can see that all paths have failed, as expected.

# multipath -l
hostQdisk (3600c0ff000d5567d09ab054901000000) dm-2 HP,MSA2212fc
[size=48M][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:3 sda 8:0   [failed][undef]
 \_ 0:0:1:3 sdc 8:32  [failed][undef]
 \_ 1:0:0:3 sdg 8:96  [failed][undef]
 \_ 1:0:1:3 sdi 8:128 [failed][undef]
hostShare (3600c0ff000d5560776ab054901000000) dm-4 HP,MSA2212fc
[size=122G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:2:5 sde 8:64  [failed][undef]
 \_ 0:0:3:5 sdf 8:80  [failed][undef]
 \_ 1:0:2:5 sdk 8:160 [failed][undef]
 \_ 1:0:3:5 sdl 8:176 [failed][undef]
degas_virt (3600c0ff000d5567d720e1b4901000000) dm-3 HP,MSA2212fc
[size=244G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:8 sdb 8:16  [failed][undef]
 \_ 0:0:1:8 sdd 8:48  [failed][undef]
 \_ 1:0:0:8 sdh 8:112 [failed][undef]
 \_ 1:0:1:8 sdj 8:144 [failed][undef]
# 

User intervention is needed to recover from the fencing. The operator should first understand why the node was fenced before re-establishing the storage connections. The user can interact directly with the switch (telnet, web) to enable the ports, or use the fence command to enable the ports. Shortly after enabling the ports, multipath will report the paths as active.

# fence_brocade -a ra-switch1 -l admin -p <password> -n 1 -o enable
success: portenable 1
# fence_brocade -a ra-switch1 -l admin -p <password> -n 5 -o enable
success: portenable 5
# multipath -l
hostQdisk (3600c0ff000d5567d09ab054901000000) dm-2 HP,MSA2212fc
[size=48M][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:3 sda 8:0   [active][undef]
 \_ 0:0:1:3 sdc 8:32  [active][undef]
 \_ 1:0:0:3 sdg 8:96  [active][undef]
 \_ 1:0:1:3 sdi 8:128 [active][undef]
hostShare (3600c0ff000d5560776ab054901000000) dm-4 HP,MSA2212fc
[size=122G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:2:5 sde 8:64  [active][undef]
 \_ 0:0:3:5 sdf 8:80  [active][undef]
 \_ 1:0:2:5 sdk 8:160 [active][undef]
 \_ 1:0:3:5 sdl 8:176 [active][undef]
degas_virt (3600c0ff000d5567d720e1b4901000000) dm-3 HP,MSA2212fc
[size=244G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:8 sdb 8:16  [active][undef]
 \_ 0:0:1:8 sdd 8:48  [active][undef]
 \_ 1:0:0:8 sdh 8:112 [active][undef]
 \_ 1:0:1:8 sdj 8:144 [active][undef]
# 

To verify the secondary fencing device, the primary must fail. This was forced by temporarily changing the password on the switch. The node was then fenced again.

It was expected that degas would be power cycled; however, it was not. Browsing the /var/log/messages file, the following related messages were found.

Nov 14 11:20:38 monet fence_node[15726]: agent "fence_brocade" reports: pattern match timed-out at /sbin/fence_brocade line 146
Nov 14 11:20:38 monet fence_node[15726]: agent "fence_ilo" reports: Net::SSL.pm or Net::SSLeay::Handle.pm not found. Please install the perl-Crypt-SSLeay package from RHN (http://rhn.redhat.com) or Net::SSLeay from CPAN (http://www.cpan.org)


Nov 14 11:20:38 monet fence_node[15726]: Fence of "degas-ic.lab.bos.redhat.com" was unsuccessful

Apparently a package, perl-Crypt-SSLeay, is needed that is not installed with the cluster software. Install the package on the hosts.

# yum -y install perl-Crypt-SSLeay
Loading "rhnplugin" plugin
Loading "security" plugin
rhel-x86_64-server-cluste 100% |=========================| 1.4 kB    00:00
rhel-x86_64-server-vt-5   100% |=========================| 1.4 kB    00:00
rhel-x86_64-server-cluste 100% |=========================| 1.4 kB    00:00
rhel-x86_64-server-5      100% |=========================| 1.4 kB    00:00
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
--> Running transaction check
---> Package perl-Crypt-SSLeay.x86_64 0:0.51-11.el5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================
 Package                 Arch       Version          Repository        Size
=============================================================================
Installing:
 perl-Crypt-SSLeay       x86_64     0.51-11.el5      rhel-x86_64-server-5   45 k

Transaction Summary
=============================================================================
Install      1 Package(s)
Update       0 Package(s)
Remove       0 Package(s)

Total download size: 45 k
Downloading Packages:
(1/1): perl-Crypt-SSLeay- 100% |=========================|  45 kB    00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing: perl-Crypt-SSLeay            ######################### [1/1]

Installed: perl-Crypt-SSLeay.x86_64 0:0.51-11.el5
Complete!
# 

Re-trying to fence degas, the node power cycles and rejoins the cluster. Fencing monet also led to a successful power cycle. The password of the FC switch was then returned to its original value and the primary fence was also verified on monet.
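As an aside, a fence can also be triggered from a cluster member on the command line rather than through luci; the log messages above show that the fence was ultimately carried out by fence_node. A sketch, using a node name from this configuration:

# fence_node degas-ic.lab.bos.redhat.com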


4.5.2 Quorum Disk - optional

As stated earlier in the Fencing section, the initial quorum configuration is a special two-node case. Each member has one vote and it only takes one vote to establish quorum for a cluster. This can be used; however, another option is to add a quorum disk (qdisk) which will also provide a vote. With this configuration three votes will be available and two will be needed for quorum.

The previous storage configuration already set up multipath to manage hostQdisk. A minimum of a 10 MB partition is needed; here a 50 MB partition was used.

After verifying that the members of the cluster can see the device that will be used for the quorum disk, it is initialized. A label of hostQdisk is used. The command only needs to be issued on one node.

# mkqdisk -c /dev/mapper/hostQdisk -l hostQdisk
mkqdisk v0.5.2
Writing new quorum disk label 'hostQdisk' to /dev/mapper/hostQdisk.
WARNING: About to destroy all data on /dev/mapper/hostQdisk; proceed [N/y] ? y
Initializing status block for node 1...
Initializing status block for node 2...
Initializing status block for node 3...
Initializing status block for node 4...
Initializing status block for node 5...
Initializing status block for node 6...
Initializing status block for node 7...
Initializing status block for node 8...
Initializing status block for node 9...
Initializing status block for node 10...
Initializing status block for node 11...
Initializing status block for node 12...
Initializing status block for node 13...
Initializing status block for node 14...
Initializing status block for node 15...
Initializing status block for node 16...
# 

Using the '-L' option, information can be retrieved from the qdisk. All members should return similar information.

# mkqdisk -L
mkqdisk v0.5.2
/dev/sda:
    Magic:                eb7a62c2
    Label:                hostQdisk
    Created:              Fri Nov 14 14:36:15 2008
    Host:                 degas.lab.bos.redhat.com
    Kernel Sector Size:   512
    Recorded Sector Size: 512

/dev/sdc:
    Magic:                eb7a62c2
    Label:                hostQdisk
    Created:              Fri Nov 14 14:36:15 2008
    Host:                 degas.lab.bos.redhat.com
    Kernel Sector Size:   512
    Recorded Sector Size: 512

/dev/sdg:
    Magic:                eb7a62c2
    Label:                hostQdisk
    Created:              Fri Nov 14 14:36:15 2008
    Host:                 degas.lab.bos.redhat.com
    Kernel Sector Size:   512
    Recorded Sector Size: 512

/dev/sdi:
    Magic:                eb7a62c2
    Label:                hostQdisk
    Created:              Fri Nov 14 14:36:15 2008
    Host:                 degas.lab.bos.redhat.com
    Kernel Sector Size:   512
    Recorded Sector Size: 512

/dev/dm-2:
    Magic:                eb7a62c2
    Label:                hostQdisk
    Created:              Fri Nov 14 14:36:15 2008
    Host:                 degas.lab.bos.redhat.com
    Kernel Sector Size:   512
    Recorded Sector Size: 512
# 


At the Configure cluster properties page, select the Quorum Partition tab. Select Use a Quorum Partition and enter the appropriate values. The values used specify a 5 second Interval between qdisk updates, 1 for the Votes the qdisk supplies, and a TKO value of 15, which is the number of missed intervals before a member is declared dead. A node will be declared alive if its heuristics total at least the Minimum Score; while not strictly needed when no heuristics are specified, luci required a value. The device can be specified with either or both the Device and Label, which should match the values specified when the disk was initialized. Select Apply.
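The settings entered on the Quorum Partition tab correspond to a quorumd tag in /etc/cluster/cluster.conf similar to the following sketch, which uses the values described above; attribute names follow the standard cluster.conf schema.

<quorumd interval="5" votes="1" tko="15" min_score="1" device="/dev/mapper/hostQdisk" label="hostQdisk"/>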

After a confirmation window and a Please be patient page, the general tab of the Configure cluster properties page will be displayed.


However, when the cluster information is displayed from the cluster list page, it still shows the quorum values from the special two-node case.

When /etc/cluster/cluster.conf is reviewed, the cman values still show the original settings. Red Hat Bugzilla #467464 corresponds to this issue.

<cman expected_votes="1" two_node="1"/>

Edit the cman tag line, which should be similar to the above, to the following, and increment the config_version attribute in the cluster tag.

<cman expected_votes="3"/>

The commands that follow update and activate the changed cluster configuration file.


# ccs_tool update /etc/cluster/cluster.conf
Config file updated from version 6 to 7

Update complete.
# cman_tool version -r 7
# 

For this change to take effect in cman, the cluster should be restarted. While restart is an option, stopping and then starting the cluster has proved more dependable. After the cluster reforms, the new quorum configuration can be seen.
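A sketch of the stop and start order commonly used when cycling the cluster by hand, assuming the standard Cluster Suite init scripts (run the stops on every node before starting any node; include qdiskd if it was enabled as a separate service):

# service rgmanager stop
# service gfs2 stop
# service clvmd stop
# service cman stop

# service cman start
# service clvmd start
# service gfs2 start
# service rgmanager start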

[root@monet ~]# clustat
Cluster Status for hostCluster @ Fri Nov 14 15:20:19 2008
Member Status: Quorate

 Member Name                                        ID   Status
 ------ ----                                        ---- ------
 degas-ic.lab.bos.redhat.com                            1 Online
 monet-ic.lab.bos.redhat.com                            2 Online, Local
 /dev/mapper/hostQdisk                                  0 Online, Quorum Disk

[root@monet ~]# 

4.5.3 GFS2

The second generation Global File System allows shared access to files using shared storage. GFS2 will be used for the dynamic Xen area, /var/lib/xen/images, and for a common area to share configuration files.

GFS and GFS2 file systems are created on cluster logical volume manager (CLVM) volumes. Changing the following entries in /etc/lvm/lvm.conf will generally allow the display of the multipath alias devices.

preferred_names = [ "^/dev/mapper/" ]
filter = [ "r|/dev/dm-.*|", "r|/dev/sd.*|", "a|/dev/mapper/.*|", "a/.*/" ]
types = [ "device-mapper", 16 ]
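Selecting Enable Shared Storage Support when the cluster was created normally takes care of clustered locking for LVM. Done by hand, the equivalent is roughly the following sketch, assuming the lvm2-cluster package is installed; lvmconf --enable-cluster sets locking_type to 3 in /etc/lvm/lvm.conf.

# lvmconf --enable-cluster
# service clvmd start
# chkconfig clvmd on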


luci has options to configure storage. Select the storage tab which will list the nodes available to manage. Select a node in the menu on the left side of the page.


This page will display Hard Drives, Partition Tables, and Volume Groups. Select the link for any device.


While this page provides more details on the device, the only action available is to Reprobe Storage.

Disks used for CLVM need to be initialized.

# pvcreate /dev/mapper/hostShare
  Physical volume "/dev/mapper/hostShare" successfully created
# 


To start the creation of a volume group that will be used for the shared storage, select Volume Groups => New Volume Group. The Volume Group Name was filled in, the default Extent Size was used, Clustered was set to true, the available Physical Volume was selected, then the Create button was selected.


After a confirmation window, a Committing Changes page is displayed.


The Volume Group information will be displayed. Select New Logical Volume.


Specify a Logical Volume Name and the Size in GB. Select GFS2 - Global FS v.2 in the Content pull-down. Once the pull-down selection is made, additional fields will be displayed and will need to be filled in. The defaults were left selected except that a Unique GFS Name was provided, /var/lib/xen/images was specified as the Mountpoint, both Mount and List in /etc/fstab were set to true, and a sufficient Number of Journals was input. The minimum Number of Journals is the number of machines that will mount the file system; extras were specified for growth. Once values are input, press the Create button.


After a confirmation window, a Committing Changes page was displayed. Initially, several SELinux alerts were logged and, while the file system was created, it was not mounted and the entry was not put in /etc/fstab. Bugzilla #473168 was entered to track the SELinux denials.

The operator should verify that the action completed. Use df to check the mount.

# df /var/lib/xen/images
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/hostShareVG-xenImagesLV
                      62907776  15695192  47212584  25% /var/lib/xen/images
# 

Verify that /etc/fstab has an entry corresponding to the just-created logical volume.

/dev/mapper/hostShareVG-xenImagesLV /var/lib/xen/images gfs2 defaults 1 2

If the result is not as expected, the operator can edit /etc/fstab to add or correct the entry. Once the entry is in place, the following command can be used to perform the mount.

# mount /var/lib/xen/images
#


luci should display the details for the Logical Volume. The mount and /etc/fstab entry were only performed on the selected node. In luci, select System List => <other node> => hostShareVG. In the blue bar, select the region that corresponds to the logical volume that was created. While details are provided, the Mountpoint and the /etc/fstab entry are blank. Fill in these fields to match the other node. Press Apply.


A second volume will be created and used for sharing configuration files. Unlike the previous file system, the mount point used does not exist. It must be created.

# mkdir /etc/sharedConfig
#

Back at the hostShareVG page (clicking on the left side of the blue bar will de-select the volume information), select the New Logical Volume button.


Specify a Logical Volume Name and a Size. Select GFS2 in the Content pull-down. A Unique GFS Name was specified. The directory that was created was supplied as the Mountpoint. Again Mount and List in /etc/fstab were set to true, and Number of Journals was set large enough to allow for growth. Leaving Journal Size and Clustered as the defaults, the Create button was pressed.

The usual confirmation window and the Committing Changes page will be displayed.


4.6 Install VM

Volume 1 demonstrated in detail the installation of a VM, so only the highlights of the VM that will be used to provide the Web server will be presented in this volume. The web server software, including the web content, will be part of the VM. The OS image for the VM will be on the GFS2 share /var/lib/xen/images.

Start virt-manager, select the dom0, then press the New button. Progress through the dialogs specifying a System Name (ra-vm1 was used), a virtualization method (paravirtualized), and the preferred installation media. The storage space is a Simple File located under /var/lib/xen/images on the GFS2 share.


With the current network support, virt-manager will not allow the choice of the previously configured bridged bond for a Shared physical device. This will be addressed after the installation; for now, select the default for the Virtual network.


Displayed is the Summary prior to starting the installation.


The installation of the Operating System was similar to that demonstrated in Volume 1. Since the VM will be serving web pages, confirm that the Web server software is selected.


During the first boot, the configuration again followed the example set forth in Volume 1. At the firewall configuration, select WWW and Secure WWW as Trusted services.

After the installation and first boot are complete, shut down the VM. To use the bond-based bridge, edit the vif line in /etc/xen/ra-vm1 from:

vif = [ "mac=00:16:3e:1e:0a:09,bridge=virbr0" ]

to:

vif = [ "mac=00:16:3e:1e:0a:09,bridge=xenbr0" ]

Start the VM, and the network should be operational. To ensure the latest versions of the software are on the VM, either update through the Package Updater GUI or yum command line.

# yum -y update


Populate /var/www/html with the desired content. Issue the following commands to start the Web server.

# chkconfig httpd on
# service httpd start
Starting httpd:                                            [  OK  ]
#

Using the address of the VM, the web pages are accessible.


To prepare the VM for migration, the same VM configuration file needs to be accessible to the cluster members. Currently it exists only on the machine on which the VM was installed. On this machine, move the file to the shared configuration directory. The SELinux file context should be configured for the directory and any files in it.

# mv /etc/xen/ra-vm1 /etc/sharedConfig/
#
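One way to set the SELinux context mentioned above is to copy the labels used for /etc/xen onto the shared directory; this is only a sketch, and the appropriate context ultimately depends on the local policy:

# chcon -R --reference=/etc/xen /etc/sharedConfig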

On each of the cluster members, a link will be created from the location where Xen looks for the file to the shared file. This link will allow libvirt-based tools to access and manipulate the VM. Once the VM has been placed under control of the cluster, the operator should use the cluster, and not other tools, to control the starting and stopping of the VM.

# ln -s /etc/sharedConfig/ra-vm1 /etc/xen/ra-vm1
#

The VM should now be able to start on either cluster member, and the web page should be accessible.
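For example, the VM can be started by name on either host, since the configuration in /etc/xen now resolves to the shared copy through the symlink (shown as a quick sketch; virsh could be used instead):

# xm create ra-vm1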

To enable migration, the /etc/xen/xend-config.sxp will need to be edited. The following lines were uncommented and/or modified on each system.

(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$ ^monet{-ic}$ ^degas{-ic}$ ^monet{-ic}\\.*\\.redhat\\.com$ ^degas{-ic}\\.*\\.redhat\\.com$')
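As noted next, the relocation port must also be opened in the firewall. One way to do this (a sketch, assuming the firewall is managed directly with iptables rather than through the firewall GUI) is:

# iptables -I INPUT -p tcp --dport 8002 -j ACCEPT
# service iptables save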

The relocation port was opened in the firewall, then each node was rebooted. Some intermediate changes to the xend-relocation-hosts-allow were not valid and the following error was seen.

# virsh list
libvir: Xen Daemon error : internal error failed to connect to xend
libvir: Xen Daemon error : internal error failed to connect to xend
error: failed to connect to the hypervisor
#

Once the nodes have been rebooted with the migration settings in place, test migration and live migration across the cluster nodes.

virsh # migrate ra-vm1 xen+ssh://[email protected]

virsh # migrate --live ra-vm1 xen+ssh://[email protected]

virsh #


4.7 Virtual Service Creation

Since the virtual machine has been verified to run and migrate on all cluster hosts, management of the virtual machine will be placed under control of the cluster. The next commands will stop the hosts from controlling the state of guests during host boot and shutdown. The commands will also stop any running guest. The commands should be issued on each cluster member.

# chkconfig xendomains off
# service xendomains stop
Shutting down Xen domains:[done]                           [  OK  ]
#

Selecting Services => Add a Virtual Service on the menu on the left side of the page will present the Create a Virtual Machine Service page. The user will need to supply input to the fields:

● Virtual machine name – as used in the VM creation
● Path to VM configuration files – the directory where the VM configuration file resides
● Automatically start this service
● Run exclusive – start the service only on hosts where no other service is running
● Failover Domain – select an existing domain or None
● Recovery policy – select from among the following options after a service failure
    ○ Disable – do not attempt to restart
    ○ Restart – first attempt to restart on the same member; if that is not possible, relocate the service before starting
    ○ Relocate – relocate to another member
● Migration type – select either of the choices
    ○ Live – uses Xen live migration
    ○ Pause – stop the VM saving its memory image, then start it applying the saved memory image


Once all the desired values have been input, select Create Virtual Machine Service.


After a Please be patient page, the services page was displayed.

The status is reported as Running, and after a check the pages were verified as being served.

There are several tasks available to be performed on the service. Some are available when the service is active:

● configure – allows changing any of the values specified when the service was created
● restart
● stop
● relocate to <node> – the VM is stopped on the current node then started on the target node
● migrate to <node> – the VM's state is maintained across the hosts


Other tasks are available when the service has been stopped:
● configure
● enable
● delete
● start on <node>

4.7.1 Evaluate

Access to the web page was verified in steady state. However, access was also checked while the configuration was placed under various maintenance and failure activities. The following table shows how the web server was affected and what recovery was seen for each of the listed activities.

Activity                                  Web Server                                 Recovery
migrate - pause                           little down time, state maintained         moved to other node
migrate - live                            very little down time, state maintained    moved to other node
relocate                                  little down time, state not maintained     instance stopped, new instance on other node
reboot guest                              little down time, state not maintained     stayed on node
reboot hosts                              little down time, state not maintained     moved to other node
guest public net fail - ifdown eth0       activity ceased                            no recovery
public net fail - cables pulled           activity ceased                            no recovery
host public net fail - ifdown bond0       very little down time, state maintained    moved to other node
host private net fail - ifdown bond1      little down time, state not maintained     node fences, moved to other node
storage                                   down time                                  initial node fenced, second node usually fenced, CLVM was not clean on boot


Appendix A: Alternate Configuration

The alternate configuration will form a cluster from VMs on separate hosts. In this solution the virtual cluster will provide a web service that can fail over from member to member. The service will relocate while the VMs remain on the nodes on which they were created. Since the option to migrate a VM is not part of this configuration, this configuration supports heterogeneous hosts.

The diagram below provides a conceptual overview of the configuration.

The following list is the actions needed to establish this configuration.
1. Install and configure hosts
2. Install and configure guests
3. Update networks
    a. private interconnects
    b. bonding
4. Create VM to be luci server
5. Update channel subscriptions for VM to include clustering and cluster storage
6. Install newly subscribed software on guests
    a. configure web server
7. Update hosts and guests software
8. Configure cluster
    a. form
    b. quorum disk - optional
    c. resource
    d. fencing
    e. service
9. Evaluate
10. Cluster hosts
11. Update Fencing
12. Evaluate

The installation of the hosts and guests was demonstrated in Volume 1, and the same hosts and a subset of the guests will be used for this configuration. The virtual machines that were used are Red Hat Enterprise Linux 5.2 paravirtualized guests. These VMs were created with 10G of storage, 4 vCPUs, and 4G of memory.

A.1 Network

A network bridge is used to connect the host network(s) to the virtual machine network(s). The default Xen configuration scripts support a single bridge; however, the alternate configuration will use one bridge for the public interface and another bridge for the private interface. This is accomplished by creating a wrapper script that calls the network-bridge script for each of the bridges. Create a file in the /etc/xen/scripts directory with contents similar to the following; in this example it was named network-bridge-monet.

#!/bin/bash
dir=$(dirname "$0")
"$dir/network-bridge" "$@" vifnum=0 netdev=bond0 bridge=xenbr0
"$dir/network-bridge" "$@" vifnum=1 netdev=bond1 bridge=xenbr1

The script will need execute permissions:

# chmod +x /etc/xen/scripts/network-bridge-monet

This new script will need to be referenced in the /etc/xen/xend-config.sxp. Verify that there is only one uncommented line that defines the network-script.


(network-script network-bridge-monet)

A reboot of the host will establish the network bridges.

The guests that were configured in Volume 1 did not use bonded NICs. The public interface was a defined bridge, so while the configuration on the host side changed, the guests will still work since the bridge name is still xenbr0 and vifnum is also still 0. The vif line in the virtual machine configuration file in the /etc/xen directory should look similar to the following.

vif = [ "mac=00:16:3e:69:70:0a,bridge=xenbr0" ]

Before adding the interconnect, a MAC address will need to be generated.

# echo 'import virtinst.util ; print virtinst.util.randomMAC()' | python
00:16:3e:31:8a:31
#

To add the interconnect, a second interface needs to be added to the vif line.

vif = [ "mac=00:16:3e:69:70:0a,bridge=xenbr0",
        "mac=00:16:3e:31:8a:31,bridge=xenbr1" ]

Upon boot of the guest, the system-config-network application was used to activate the network.

The addresses for the private interconnects will need to appear in /etc/hosts.

10.10.10.100 monet-ic.lab.bos.redhat.com monet-ic
10.10.10.102 renoir-ic.lab.bos.redhat.com renoir-ic
10.10.10.118 ra-vm2-ic.lab.bos.redhat.com ra-vm2-ic
10.10.10.119 ra-vm3-ic.lab.bos.redhat.com ra-vm3-ic
10.10.10.123 ra-vm7-ic.lab.bos.redhat.com ra-vm7-ic

A.2 Cluster Software

The "Cluster" and "Cluster Storage" channel subscriptions were added to the guests, then the corresponding software was installed on each guest. This procedure was demonstrated in detail in Volume 1 for a host system, so only the highlights will be presented here.

To add the needed entitlements to the guest, log into your RHN account. Select a host machine, then the Virtualization link. Select the link for the virtual machine which will be involved in the cluster. On this Software Channels page, select the box in front of the RHEL Cluster-Storage and RHEL Clustering options. Then, after making any other desired selections, select the Change Subscriptions button at the bottom of the page.

The additional channels will show up as available groups for install. The clustering groups and the Web Server were all installed using the yum command. Package Manager could have been used instead.


# yum -y groupinstall "Cluster Storage" Clustering "Web Server"

The other VM that will be clustered will need to have a similar update applied.

A.3 Software Update

A customer's subscription to Red Hat Network allows them access to the latest security updates, bug fixes, and enhancements. A user can use the following command to see if any updates are available, and the extent of the updates.

# yum check-update

All the available updates were installed using the following command.

# yum -y update

The update was performed on all the hosts and guests that will be used for the cluster configuration.

A.4 luci server

A VM that will not be used in the cluster was configured to be a luci server. The VM will need access to the private interconnect. The steps in Section 4.4 can be followed to complete the luci configuration.

A.5 Configure the Guest Cluster

Forming the guest cluster follows steps quite similar to those in Section 4.5, so cross-reference that section to view any screen shots.

Start by connecting to the luci server. At the cluster tab, select Create a New Cluster from the choices in the menu on the left side of the page. Verify that luci is able to talk to the cluster members. Continue the cluster creation by filling in the Cluster Name and Root Password fields, select Use locally installed packages, select Enable Shared Storage Support, then press the Submit button.

Selecting the Cluster List item from the menu on the left side of the page will present a page where the user should confirm that the cluster and nodes are green, indicating that they are in a good state.

A.6 Fencing

The current configuration has limited options for fencing. The storage is not shared, and is local to the guest's host. While a Xen virtual machine fencing daemon exists, it requires that the hosts are clustered. This leaves the option that controls the power of the hosts. Each of the hosts has an Integrated Lights Out (iLO) processor, and the support to use them for fencing is part of the cluster software installed. Note that this is not ideal, because the entire host will be fenced (power cycled) if the guest cluster has issues, but it does provide the safety required.

The default delay to start fencing joining nodes is set to zero. Select the Configure cluster properties page, then the Fence tab, set the Post Join Delay value to an appropriate value, and press the Apply button.
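The resulting setting is stored in /etc/cluster/cluster.conf; the entry should look similar to the following sketch, where the delay of 20 seconds is only an illustrative value:

<fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="20"/>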

To configure the fence, from the cluster list page select one of the nodes, from the cluster properties page select Nodes => <node name> in the menu on the left side of the page. This will present the node properties page. Since there is only one option for fencing, select the Add a fence device to this level link in the Main Fencing Method portion of the page.

This will change the link to a pull down of possible fence devices. Select HP iLO. Once the fence device has been chosen input fields are displayed. Provide a Name, Hostname (the iLO's address), Login, and Password then select the Update main fence properties button.

Similar actions need to be performed on the other members.

The ACPI daemon (acpid) can intercept power button pushes and perform an orderly shutdown instead. It is better for the fence to be immediate so this daemon should be disabled.

# service acpid stop
# chkconfig acpid off
#

A.7 Cluster Resources

Cluster configurations can use and provide a variety of resources that can be used to provide a service to the users of the cluster. Since the application that is used in this paper is a web service, three resources will be needed:

● IP address
● Web Content
● Apache controller


A.7.1 IP Address

Since the application can move from node to node, a separate address that can be relocated from member to member is needed. From the cluster properties page, select Resources => Add a Resource in the menu on the left side of the page.


In the Select a Resource Type pull-down, select IP address. The fields for IP Address Resource Configuration will be displayed. Enter an available IP address, select Monitor link, then press Submit.


After a confirmation pop-up and a Please be patient temporary page, resources for the cluster will be displayed.

A.7.2 Web content

Initially the web content will be supplied by an NFS server, followed by switching to GFS2.

 A.7.2.1 NFS

There are two sides to configuring NFS: first the machine which is serving the content must be configured, then the cluster must be configured to use the content.

A.7.2.1.1 NFS server

The NFS server configuration will use the same NFS server that was used for providing the media in Volume 1. The information that follows was mostly reproduced from Volume 1, with the specifics changed for serving the web content instead of installation media.

The /webcontent directory was made and should be populated with the web files that will be served.

# mkdir /webcontent
#

Check to see if NFS has been started on the server, milo in this case.

# pgrep -l nfs
#

Since nothing was returned from the above command, there are no NFS daemons running, so start them for this boot and configure them so that they will start on subsequent boots.

# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
# chkconfig nfs on
# pgrep -l nfs
13283 nfsd4
13284 nfsd
13285 nfsd
13286 nfsd
13287 nfsd
13288 nfsd
13289 nfsd
13290 nfsd
13291 nfsd
#

Specify which systems can have access to the NFS share by adding entries to /etc/exports for the guests that will be members of the cluster. Only the systems listed will have access and this access will be readonly.

/webcontent ra-vm2.lab.bos.redhat.com(ro)
/webcontent ra-vm3.lab.bos.redhat.com(ro)

Next, actually re-export the directories and show what the server is exporting.

# exportfs -r
# showmount -e
Export list for milo.lab.bos.redhat.com:
/webcontent                          ra-vm3.lab.bos.redhat.com,ra-vm2.lab.bos.redhat.com
/usr/kits/rhel-5.2-server-x86_64-dvd ra-vm8.lab.bos.redhat.com,ra-vm7.lab.bos.redhat.com,ra-vm6.lab.bos.redhat.com,ra-vm5.lab.bos.redhat.com,ra-vm4.lab.bos.redhat.com,ra-vm3.lab.bos.redhat.com,ra-vm2.lab.bos.redhat.com,ra-vm1.lab.bos.redhat.com,monet.lab.bos.redhat.com,renoir.lab.bos.redhat.com
#

A.7.2.1.2 NFS mount resource

Before adding the resource the mount point will need to be created on each member. Intentionally the mount point name is not the same as the server exported name.

# mkdir /web-content
#

Confirm that there are no issues mounting or un-mounting the share on each member.

# mount milo.lab.bos.redhat.com:/webcontent /web-content
# ls /web-content
RELEASE-NOTES-en.html  RELEASE-NOTES-U1-en.html  RELEASE-NOTES-U2-en.html  test.html
# umount /web-content


At the Add a Resource page, select the NFS mount in the Select a Resource Type pull-down. The user will be presented with the option fields for the NFS mount resource. Enter the appropriate values then press Submit.

● Name – the name the resource will be known by
● Mount point – the directory where the share will be mounted on the member
● Host – the NFS server
● Export path – the directory that the server is exporting to be mounted
● NFS version – older servers may only support NFS3
● Options – several choices are available, see man nfs and man mount
    ○ ro – read only
    ○ soft – on a major timeout report an I/O error versus retrying indefinitely
    ○ context=system_u:object_r:httpd_sys_content_t – if SELinux is active, this option will set the given file label on all the contents of the mounted directory
● Force unmount – kill all processes using the mount point when it tries to unmount


After another confirmation dialog and temporary Please be patient page, the updated resources page will be displayed.

 A.7.2.2 GFS/GFS2

This section provides the steps to serve the web content from shared storage. Most of the screens are similar to those in section 4.5.3 GFS.

A shared LUN will need to be presented to the hosts. A 64GB LUN was created and presented to all hosts from the FC array. After a reboot, multipath is used to display the LUN.

# multipath -ll
mpath2 (3600c0ff000d5567de4a9b24801000000) dm-2 HP,MSA2212fc
[size=61G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:4  sda 8:0   [active][ready]
 \_ 0:0:1:4  sdb 8:16  [active][ready]
 \_ 1:0:0:4  sdg 8:96  [active][ready]
 \_ 1:0:1:4  sdh 8:112 [active][ready]
msa10_vol2 (3600c0ff000d556079b448f4801000000) dm-3 HP,MSA2212fc
[size=238G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:2:2  sdc 8:32  [active][ready]
 \_ 0:0:3:2  sde 8:64  [active][ready]
 \_ 1:0:2:2  sdi 8:128 [active][ready]
 \_ 1:0:3:2  sdk 8:160 [active][ready]
qdisk (3600c0ff000d55607e0bbf74801000000) dm-4 HP,MSA2212fc
[size=48M][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:2:10 sdd 8:48  [active][ready]
 \_ 0:0:3:10 sdf 8:80  [active][ready]
 \_ 1:0:2:10 sdj 8:144 [active][ready]
 \_ 1:0:3:10 sdl 8:176 [active][ready]
# vi /etc/multipath.conf

Using the WWID from the multipath, add an entry similar to the following to /etc/multipath.conf on each of the hosts. This will make a device named /dev/mapper/guestShare.

multipath {
    wwid 3600c0ff000d5567de4a9b24801000000
    alias guestShare
}

Restart multipath, and verify the new device name.

# service multipathd restart
Stopping multipathd daemon:                                [  OK  ]
Starting multipathd daemon:                                [  OK  ]
# multipath -ll
msa10_vol2 (3600c0ff000d556079b448f4801000000) dm-3 HP,MSA2212fc
[size=238G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:2:2  sdc 8:32  [active][ready]
 \_ 0:0:3:2  sde 8:64  [active][ready]
 \_ 1:0:2:2  sdi 8:128 [active][ready]
 \_ 1:0:3:2  sdk 8:160 [active][ready]
qdisk (3600c0ff000d55607e0bbf74801000000) dm-4 HP,MSA2212fc
[size=48M][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:2:10 sdd 8:48  [active][ready]
 \_ 0:0:3:10 sdf 8:80  [active][ready]
 \_ 1:0:2:10 sdj 8:144 [active][ready]
 \_ 1:0:3:10 sdl 8:176 [active][ready]
guestShare (3600c0ff000d5567de4a9b24801000000) dm-2 HP,MSA2212fc
[size=61G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:0:4  sda 8:0   [active][ready]
 \_ 0:0:1:4  sdb 8:16  [active][ready]
 \_ 1:0:0:4  sdg 8:96  [active][ready]
 \_ 1:0:1:4  sdh 8:112 [active][ready]
#

The device will need to be added to the guest configuration files in /etc/xen. In this example, it will be known as xvdd on the guests.

disk = [ "tap:aio:/var/lib/xen/images/ra-vm2.img,xvda,w", "phy:/dev/mapper/qdisk,xvdb,w!", "phy:/dev/mapper/guestShare,xvdd,w!" ]

luci has options to configure storage. Select the storage tab, which will list the nodes available to manage. Select a node in the menu on the left side of the page. The page will display Hard Drives, Partition Tables, and Volume Groups. Select the link for the xvdd device. While this page provides more details on the device, the only action available is to Reprobe Storage.

GFS file systems are made on CLVM volumes. The disk will need to be initialized for CLVM.

# pvcreate /dev/xvdd
  Physical volume "/dev/xvdd" successfully created
#

Now that the device has been initialized, reprobe the storage. Select Volume Groups from the menu on the left side of the page. To start the creation of a volume group that will be used for the shared storage, select New Volume Group. The Volume Group Name should be filled in, the default Extent Size was used, Clustered was set to true, the available Physical Volume was selected, then the Create button was selected. After a confirmation window, a Committing Changes page is displayed. When complete, the Volume Group information will be displayed.

Select New Logical Volume. Specify a Logical Volume Name and the Size in GBs. Select GFS2 - Global FS v.2 in the Content pull-down. Once the pull-down is selected, additional fields will be displayed and will need to be filled in. The defaults were left selected except that a Unique GFS Name was provided and a sufficient Number of Journals was input. The minimum Number of Journals is the number of machines that will mount the file system. Once the values are input, press the Create button. After a confirmation window and the Committing Changes page, information about the applied configuration will be displayed.

Now that the file system has been created, create a resource that can be used to provide this content.

Before adding the resource, the mount point will need to be created on each member, if not previously created. The mountpoint name is the same as used for the NFS resource.

# mkdir /web-content
#


Confirm that there are no issues mounting or un-mounting the file system on each member.

# mount /dev/mapper/guestShareVG-guestWebLV /web-content
# umount /web-content
#

# ls -Z /web-content/
-rw-r--r--  root root root:object_r:file_t             RELEASE-NOTES-en.html
-rw-r--r--  root root root:object_r:file_t             RELEASE-NOTES-U1-en.html
-rw-r--r--  root root root:object_r:file_t             RELEASE-NOTES-U2-en.html
-rwxr-xr-x  root root root:object_r:file_t             test.html
# chcon -R system_u:object_r:httpd_sys_content_t /web-content
# ls -Z /web-content/
-rw-r--r--  root root system_u:object_r:httpd_sys_content_t RELEASE-NOTES-en.html
-rw-r--r--  root root system_u:object_r:httpd_sys_content_t RELEASE-NOTES-U1-en.html
-rw-r--r--  root root system_u:object_r:httpd_sys_content_t RELEASE-NOTES-U2-en.html
-rwxr-xr-x  root root system_u:object_r:httpd_sys_content_t test.html
#


To add the resource, select the Resources then Add a Resource. In the Select a Resource Type pull-down, select GFS file system. The fields available to configure the GFS resource will be displayed. A unique resource Name was provided. The previously created Mount point and the Device path were specified. Press Submit.


After a confirmation window and a Please be patient page, the resources page will be displayed. This resource was actually added last.

 A.7.2.3 Apache controller

The Apache controller resource will start and stop the Apache daemon. However, each member should first be configured so that Apache will not be started automatically, leaving it controlled by the soon-to-be-added script resource.

# chkconfig --del httpd
#

At the Add a Resource page, select Script in the Select a Resource Type pull-down. While an Apache type exists as a resource type, several issues were seen in an attempt to use it. There will be two fields for the script resource. The first is the name the resource will be known by. The second is the path to the script. The path to the supplied script, /etc/init.d/httpd, which usually starts and stops the daemon on boots and shutdowns, is supplied. When the data is entered, select Submit.


After the usual confirmation dialog and Please be patient temporary page, the updated resource page is once again displayed.

The Apache configuration file should be checked for your environment. Two changes are needed to support the configuration as presented, made by editing the /etc/httpd/conf/httpd.conf file. The first is changing the address that the daemon listens on to be the address of the IP Address resource.

Listen 10.16.40.166:80

The other edit is to the directive which specifies the directory where the served files are located.

DocumentRoot "/web-content"
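After editing, the configuration can be checked for syntax errors before the service is placed under cluster control (a simple sanity check, run on each member):

# httpd -t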


A.7.3 Service

Now that the resources have been configured, a service will be created using them. Select Services => Add a Service in the menu on the left side of the page. The following fields need to be filled in:

● Service name – identifier for the service
● Automatically start this service – start the service once the cluster starts
● Run exclusive – if selected, the service will only start on a member with no other service
● Failover domain – select between None and a configured domain
● Recovery policy – specify options to recover from a service failure
    ○ Restart — restart the service on the node where the service is currently located. If the service cannot be restarted on the current node, the service is relocated.
    ○ Relocate — relocate the service before restarting. Do not restart on the node where the service is currently located.
    ○ Disable — do not restart the service at all.
● Add a resource to this service – select this button


In the Use an existing global resource pull-down select the IP address resource. Select the Add a resource to this service button and add the Apache-resource.


At the time of the writing of this paper, issues existed with both web content resources, which will be addressed below. Press the Submit button.

A confirmation dialog will be presented. After a Please be patient temporary page, the services page will be presented.

As stated above, at the time of the writing of this paper an issue existed, tracked by Red Hat Bugzilla #444381, in which the NFS mount resource was not written correctly. To compensate, the following steps will need to be performed:

● Edit the cluster resource in the cluster configuration file
● Update the cluster configuration file on all members
● Have the cluster use the new cluster configuration version

When the /etc/cluster/cluster.conf is edited on a cluster member, the version will need to be incremented, the netfs resource will need to have the exportpath modifier changed to export, and a netfs resource will need to be added to the service.

The following are snippets of the original file:

<?xml version="1.0"?>
<cluster alias="guestCluster" config_version="13" name="guestCluster">
[...]
        <resources>
                <ip address="10.16.40.166" monitor_link="1"/>
                <netfs exportpath="/webcontent" force_unmount="1" host="milo.lab.bos.redhat.com" mountpoint="/web-content" name="content-resource" nfstype="nfs4" options="ro,soft,context=&quot;system_u:object_r:httpd_sys_content_t:s0&quot;"/>
                <script file="/etc/init.d/httpd" name="apache-resource"/>
        </resources>
        <service autostart="1" exclusive="0" name="apache-service" recovery="restart">
                <ip ref="10.16.40.166"/>
                <script ref="apache-resource"/>
        </service>
[...]

The edited version should look like the following:

<?xml version="1.0"?>
<cluster alias="guestCluster" config_version="14" name="guestCluster">
[...]
        <resources>
                <ip address="10.16.40.166" monitor_link="1"/>
                <netfs export="/webcontent" force_unmount="1" host="milo.lab.bos.redhat.com" mountpoint="/web-content" name="content-resource" nfstype="nfs4" options="ro,soft,context=&quot;system_u:object_r:httpd_sys_content_t:s0&quot;"/>
                <script file="/etc/init.d/httpd" name="apache-resource"/>
        </resources>
        <service autostart="1" exclusive="0" name="apache-service" recovery="restart">
                <ip ref="10.16.40.166"/>
                <netfs ref="content-resource"/>
                <script ref="apache-resource"/>
        </service>
[...]

The GFS resource type is for GFS, not GFS2; however, it can easily be modified to support GFS2. The fstype must be edited from 'gfs' to 'gfs2'. After this change, luci could be used to add the resource to the service; in this case, it is added during this edit.

<?xml version="1.0"?>
<cluster alias="guestCluster" config_version="13" name="guestCluster">
[...]
        <resources>
                <ip address="10.16.40.166" monitor_link="1"/>
                <script file="/etc/init.d/httpd" name="apache-resource"/>
                <clusterfs device="/dev/guestShareVG/guestWebLV" force_unmount="0" fsid="22105" fstype="gfs" mountpoint="/web-content" name="guestWebGFS" self_fence="0"/>
        </resources>
        <service autostart="1" exclusive="0" name="apache-service" recovery="restart">
                <ip ref="10.16.40.166"/>
                <script ref="apache-resource"/>
        </service>
[...]

The config_version will need to be changed, along with the clusterfs resource's fstype, then add the clusterfs resource to the service.

<?xml version="1.0"?>
<cluster alias="guestCluster" config_version="14" name="guestCluster">
[...]
        <resources>
                <ip address="10.16.40.166" monitor_link="1"/>
                <script file="/etc/init.d/httpd" name="apache-resource"/>
                <clusterfs device="/dev/guestShareVG/guestWebLV" force_unmount="0" fsid="22105" fstype="gfs2" mountpoint="/web-content" name="guestWebGFS" self_fence="0"/>
        </resources>
        <service autostart="1" exclusive="0" name="apache-service" recovery="restart">
                <ip ref="10.16.40.166"/>
                <script ref="apache-resource"/>
                <clusterfs ref="guestWebGFS"/>
        </service>
[...]
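Before distributing the edited file, it can be checked for well-formed XML. This is only a basic sanity check (it does not validate against the cluster schema), using xmllint from the libxml2 package:

# xmllint --noout /etc/cluster/cluster.conf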

To update the configuration file on all the cluster members, issue the following command on the member on which the file was edited.

# ccs_tool update /etc/cluster/cluster.conf
Config file updated from version 13 to 14

Update complete.
#

Now, have the cluster manager use this updated version and confirm that it is used.

# cman_tool version -r 14
# cman_tool status | grep -i "Config version"
Config Version: 14
#


The node page provides a quick view of the status of the nodes and services.

A.7.4 Evaluate

At this point the service is functional. The initial evaluation will make sure the web pages are accessible, then further evaluation will verify access after relocation, node transitions, and induced failures.

Some additional operations are also demonstrated.


 A.7.4.1 Relocation

At the services page, the Choose a Task... pull-down has an option to relocate the service to a member that it is not currently running on. After selecting the option, press the Go button.


This will present a confirmation dialog. After the temporary Please be patient page, the services page will be displayed. The status of the service is displayed as stopped, which may be a temporary condition, therefore reload the web page.


After the reload, the status shows the service running on the other member. The web page was checked and confirmed to be available.

There are command line options for displaying the status and relocating a service, which are demonstrated below.

# clustat
Cluster Status for guestCluster @ Fri Oct 17 17:01:50 2008
Member Status: Quorate

 Member Name                                   ID   Status
 ------ ----                                   ---- ------
 ra-vm2-ic.lab.bos.redhat.com                     1 Online, Local, rgmanager
 ra-vm3-ic.lab.bos.redhat.com                     2 Online, rgmanager

 Service Name                Owner (Last)                        State
 ------- ----                ----- ------                        -----
 service:apache-service      ra-vm3-ic.lab.bos.redhat.com        started
# clusvcadm -r apache-service
Trying to relocate service:apache-service...Success
service:apache-service is now running on ra-vm2-ic.lab.bos.redhat.com
# clustat
Cluster Status for guestCluster @ Fri Oct 17 17:03:04 2008
Member Status: Quorate

 Member Name                                   ID   Status
 ------ ----                                   ---- ------
 ra-vm2-ic.lab.bos.redhat.com                     1 Online, Local, rgmanager
 ra-vm3-ic.lab.bos.redhat.com                     2 Online, rgmanager

 Service Name                Owner (Last)                        State
 ------- ----                ----- ------                        -----
 service:apache-service      ra-vm2-ic.lab.bos.redhat.com        started
#

 A.7.4.2 Transition and failures

The configuration was tested with the following transitions and failures on the node actively serving the service:

● guest reboot
● host reboot
● virsh destroy of guest
● power cycle host
● disable/disconnect a FC port
● disable/disconnect all the FC ports from one node
● disable/disconnect a public network port
● disable/disconnect all the public network ports from one node
● ifdown guest public network
● disable/disconnect a private interconnect port
● disable/disconnect all the private interconnect ports from one node
● ifdown guest private network

The results from the node transitions were as expected, where the service relocated to the node not undergoing the transition. The guest destroy did cause the host to be fenced.


The FC cable pulls did not disrupt the guest or the service. The web pages were served from an NFS mount and therefore would not be affected by the detachment of unrelated storage. However, the guest images were served from FC-based disks and did not appear to be affected. Querying multipath did show the failed paths.

# multipath -ll
sda: checker msg is "tur checker reports path is down"
sdc: checker msg is "tur checker reports path is down"
sde: checker msg is "tur checker reports path is down"
sdg: checker msg is "tur checker reports path is down"
msa10_vol2 (3600c0ff000d556079b448f4801000000) dm-2 HP,MSA2212fc
[size=238G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][enabled]
 \_ 0:0:2:2  sda 8:0   [failed][faulty]
 \_ 0:0:3:2  sdc 8:32  [failed][faulty]
 \_ 1:0:2:2  sde 8:64  [failed][faulty]
 \_ 1:0:3:2  sdg 8:96  [failed][faulty]
#

Causing the public bond to switch the active NIC caused a very short pause in accessing the web page. Pulling both cables did not start a relocation, and the web page was inaccessible. The cables were pulled on the host, which does stop both host and guest traffic from leaving the machine; however, the bridge remains active, which masks the cable pulls from the guest. The ifdown caused the service to fail. After re-establishing the network with ifup, the service remained in a failed state. An attempt to enable the service failed; it first had to be disabled then enabled.

# clusvcadm -d apache-service
Local machine disabling service:apache-service...Success
# clusvcadm -e apache-service
Local machine trying to enable service:apache-service...Success
service:apache-service is now running on ra-vm3-ic.lab.bos.redhat.com
#

The effects of pulling the active cable of the private interconnect bond were negligible. Either the pulling of both cables or ifdown of the interface started a fencing situation. On one occasion both nodes fenced.

 A.7.4.3 Cluster transitions

luci provides a means to perform a few cluster transitions. For these transitions, the cluster software is transitioned while the nodes remain operational. While a cluster restart is an available option, it did not consistently perform correctly, so it is recommended to stop then start the cluster instead.


To stop the cluster, select Stop this cluster in the pull-down in the cluster list page. Then press Go. This will present a confirmation window then a Please be patient page. The next displayed page is the nodes of the just stopped cluster.

From the cluster list page, select Start this cluster then Go to start the cluster. After the confirmation window and Please be patient page, the nodes page for the cluster will be displayed. The initial status indicates that the service is not running on either node. After a short wait and a page refresh, the service status changed to Running.

A.7.5 qdisk

As stated earlier, the current quorum configuration is a special two-node case. Each member has one vote and it only takes one vote to establish quorum for the cluster. In keeping with the two nodes, another option is to add a quorum disk (qdisk) which will also provide a vote. With this configuration three votes will be available and two will be needed for quorum.

The disk device that will be used for quorum will need to be accessible to all members of the cluster. In this configuration, the hosts will need to access a shared disk then present this disk to the guests using the same device name.

The configuration of the storage will need to present the LUN to all the hosts. A minimum of a 10MB partition is needed; here a 50MB partition was used. Initially multipath shows the device as mpath1.

# multipath -ll
msa10_vol2 (3600c0ff000d556079b448f4801000000) dm-2 HP,MSA2212fc
[size=238G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:2:2  sda 8:0   [active][ready]
 \_ 0:0:3:2  sdc 8:32  [active][ready]
 \_ 1:0:2:2  sde 8:64  [active][ready]
 \_ 1:0:3:2  sdg 8:96  [active][ready]
mpath1 (3600c0ff000d55607e0bbf74801000000) dm-3 HP,MSA2212fc
[size=48M][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 0:0:2:10 sdb 8:16  [active][ready]
 \_ 0:0:3:10 sdd 8:48  [active][ready]
 \_ 1:0:2:10 sdf 8:80  [active][ready]
 \_ 1:0:3:10 sdh 8:112 [active][ready]
#

Using the WWID information provided by multipath, the /etc/multipath.conf is edited to add the new device. An alias is established, calling the device qdisk. This will be done on all the hosts whose guests are members of the cluster.

multipath {
    wwid 3600c0ff000d55607e0bbf74801000000
    alias qdisk
}


Now the configuration files for the guests in /etc/xen were edited to add an entry for the disk. The disk will be known as xvdb, and the 'w!' states that it will be read/write in a shared mode.

disk = [ "tap:aio:/var/lib/xen/images/ra-vm2.img,xvda,w", "phy:/dev/mapper/qdisk,xvdb,w!" ]

Once the members of the cluster can see the device that will be used for the quorum disk, it needs to be initialized. A label of guest_qdisk is used.

# mkqdisk -c /dev/xvdb -l guest_qdisk
mkqdisk v0.5.2
Writing new quorum disk label 'guest_qdisk' to /dev/xvdb.
WARNING: About to destroy all data on /dev/xvdb; proceed [N/y] ? y
Initializing status block for node 1...
Initializing status block for node 2...
Initializing status block for node 3...
Initializing status block for node 4...
Initializing status block for node 5...
Initializing status block for node 6...
Initializing status block for node 7...
Initializing status block for node 8...
Initializing status block for node 9...
Initializing status block for node 10...
Initializing status block for node 11...
Initializing status block for node 12...
Initializing status block for node 13...
Initializing status block for node 14...
Initializing status block for node 15...
Initializing status block for node 16...
#

Using the '-L' option, information can be retrieved from the qdisk. All members should return the same information.

# mkqdisk -L
mkqdisk v0.5.2
/dev/xvdb:
        Magic:                eb7a62c2
        Label:                guest_qdisk
        Created:              Tue Oct 21 14:55:01 2008
        Host:                 ra-vm2.lab.bos.redhat.com
        Kernel Sector Size:   512
        Recorded Sector Size: 512


At the Configure cluster properties page, select the Quorum Partition tab. Select Use a Quorum Partition and enter the appropriate values. The values used specify a 5 second Interval to update the qdisk, the qdisk supplies 1 Vote, and the TKO value of 15 indicates the number of missed intervals before a member is declared dead. A node will be declared alive if the heuristics total at least the Minimum Score. While not needed if no heuristics are specified, luci required a value of at least 1. The device can be specified with either or both the Device and Label, which should match the values specified when the disk was initialized. A confirmation window will be displayed.
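The resulting entry in /etc/cluster/cluster.conf should look similar to the following sketch, based on the values described above:

<quorumd interval="5" label="guest_qdisk" min_score="1" tko="15" votes="1"/>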

Just as in the recommended solution, when the cluster information is displayed from the cluster list page, the quorum values are from the special two-node case. When the /etc/cluster/cluster.conf is reviewed, the cman values show the original settings.

<cman expected_votes="1" two_node="1"/>

Edit the above line to be similar to the following, and increment the version.

<cman expected_votes="3"/>

Update and activate the changed cluster configuration file.
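As shown earlier for the resource changes, the update and activation can be done with commands similar to the following, where the version number is whatever config_version was set to in the edit:

# ccs_tool update /etc/cluster/cluster.conf
# cman_tool version -r <new config_version>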

With this change to cman, the cluster should be restarted. As stated above, stopping and starting the cluster was a more dependable method. After the cluster reforms, the new quorum configuration is active.

A.7.6 Web Content not as a resource

In the examples above, web content was part of the service as either an NFS mount or a GFS resource. A modification of this configuration is to not have a web content resource. For this, the NFS or GFS mount will be put in /etc/fstab and be accessible on each cluster member, whether or not the service is located on the member.

Add a line similar to the following for the NFS mount:

milo.lab.bos.redhat.com:/webcontent /web-content nfs ro,soft,context="system_u:object_r:httpd_sys_content_t:s0" 0 0

The line for the GFS2 mount should be similar to:

/dev/guestShareVG/guestWebLV /web-content gfs2 rw 1 3
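After adding the appropriate line on each member, the entry can be tested with a simple mount against the fstab entry (a quick check, matching the mount point used above):

# mount /web-content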


A.8 Clustered hosts with clustered guests

This evolution of the configuration will cluster the hosts of the existing VM cluster. The following list is the actions that were performed.

1. Configure cluster
    a. form
    b. fencing
2. Change guest fencing
3. Evaluate

The diagram below provides a conceptual picture of this configuration.

A.8.1 Forming host cluster

The cluster formation steps were similar to the others mentioned in this paper. Start the process by selecting Create a New Cluster from the cluster tab. Provide a Cluster Name, and the Node Hostnames and Root Passwords for the hosts of the guest cluster. Ensure that the nodes are able to communicate. Since GFS2 will be used, verify that Enable Shared Storage Support is selected; in this case the choice to Use locally installed packages was also selected. Select Submit to form the cluster. After pressing Submit, a confirmation window will display. After selecting OK, a Please be patient window will display the progress as the cluster is formed. After the cluster has formed, the cluster properties page will be displayed. Select the Fence tab, and increase the Post Join Delay. XVM fencing will be addressed later. Select Apply. After a confirmation window and a Please be patient page, the cluster properties page will be displayed. The Configuration Version should have been updated.

Fence devices need to be added to each of the nodes. Select Nodes in the menu on the left side of the page. Select the Manage Fencing for this Node link for one of the members.

In the Main Fencing Method area of the page, select the Add a fence device to this level link. In the pulldown that appears, select the device that will be used, in this case HP iLO. Several additional fields will appear. Supply the appropriate values for the Name, Hostname, Login, and Password. Select Update main fence properties.


After a confirmation window and a Please be patient page, the nodes properties page will be displayed. Bugzilla 469874 relates to an issue where cluster instability is seen. By disabling rgmanager, increased stability was seen. For this configuration, the resource group manager is not needed, since no resources or services will be configured. In the Cluster daemons running on this node area of the page, on the rgmanager line, remove the check in the Enabled at start-up box. Select Update node daemon properties.


After a confirmation and Please be patient page, the node properties page will be redisplayed. The rgmanager daemon will still be running. If the user does not want to wait until a reboot, the following command will stop rgmanager:

# service rgmanager stop
Shutting down Cluster Service Manager...
   Waiting for services to stop:                           [  OK  ]
Cluster Service Manager is stopped.
#

After issuing the command, a refresh of the nodes properties page will show the correct status of rgmanager.

A fence device and the rgmanager daemon configuration should be duplicated on the other cluster members.
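
If the rgmanager change is made on the remaining members from the command line instead of through luci, a minimal sketch using the standard RHEL 5 service tools is:

# service rgmanager stop
# chkconfig rgmanager off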

A.8.2 XVM Fencing

XVM fencing will become the primary fencing method for the guests, so the entire host will not be affected when a guest needs to be fenced. The iLO will serve as a backup fencing method.

For the fencing, additional firewall ports will need to be opened as specified in Appendix B.


First, the fence daemon will need to be configured on the host cluster. Select the Fence tab of the Configure cluster properties page. Enter a host and a guest in the appropriate fields, then press the Retrieve cluster nodes button.


Once the nodes of both clusters are retrieved, select the Create and distribute keys button.


A Please be patient page will display the progress. Afterwards, the Fence tab of the Configure cluster properties page is displayed. Select the Run XVM fence daemon box, then press Apply.


After a confirmation window and Please be patient page, the General tab will be displayed.

The XVM fence daemon will not yet have been started on the hosts. Either reboot the hosts or restart the cluster.
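
A minimal sketch of restarting the cluster stack on a host from the command line is shown below; it assumes no other cluster services (such as clvmd or gfs2) need to be stopped first. The cman init script should start fence_xvmd once it has been enabled in the cluster configuration.

# service cman stop
# service cman start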

The current main fence on the guest will need to be removed. On a nodes properties page, select Remove this device for the HP iLO. While this will remove the fence from being used, the device still exists in the configuration file.


Select the Add a fence device to this level link in the Main Fencing Method area. In the pulldown, select Virtual Machine Fencing. Specify a Name and the Domain, then press Update main fencing properties.
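
In the guest cluster's /etc/cluster/cluster.conf, the result should look roughly like the sketch below; the fence device name, node name, and domain name are illustrative.

<!-- shared fence device entry -->
<fencedevice agent="fence_xvm" name="xvmfence"/>
<!-- per-node fence method referencing the guest's domain name on the host -->
<clusternode name="guest1.lab.bos.redhat.com" nodeid="1" votes="1">
        <fence>
                <method name="1">
                        <device name="xvmfence" domain="guest1"/>
                </method>
        </fence>
</clusternode>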


After a confirmation window and a Please be patient page, the nodes properties page will be displayed. Bugzilla 469965 describes a problem adding an existing fence device. Also, a user is not allowed to add a fence device with the same name as an existing fence device. To remove a fence device from the configuration, select Shared Fence Devices on the menu on the left side of the page.


In the menu that is displayed, select the link with the name of the fence device. This will display the corresponding Configure a fence device page. Select Delete this fence device.

After a confirmation window and Please be patient page, the shared fence devices will be displayed.


Back at the nodes properties page, the backup fence device can be added. Select the Add a fence device to this level link in the Backup Fencing Method area. In the pulldown, select HP iLO, enter the appropriate values, then select Update backup fence properties.


To test the new XVM fence, select Nodes in the menu on the left side of the page. In one of the pull-downs, select Fence this node, then press Go.

After a confirmation window and the Please be patient page, the nodes page will be displayed. The testing showed that one guest was successfully brought down and then restarted; however, the other guest was only brought down and needed to be started by hand. The guest that did not restart had a file-based boot disk along with additional physical disks. This issue relates to Bugzilla 462727.

A.8.3 Evaluate

A similar series of faults was performed on the clusters. The most noticeable difference was that during a private interconnect failure, only the guest was fenced, yielding less down time for the application and limiting the impact on the host.


Appendix B: Firewall

Firewall protection is provided using iptables. The RH-Firewall-1-INPUT chain holds the rules related to the firewall. The settings used for the hosts were presented in Volume 1. The settings in this volume relate to clusters, luci, fencing, and migration.

The following command will list the active settings, with all ports listed numerically.

# iptables -nL

Once the active firewall has been configured as desired, issuing the following command will save the rules so they persist across reboots.

# service iptables save

B.1 Clusters

The commands to instruct the firewall to accept traffic needed for the cluster are listed below, per cluster daemon.

openais [5404, 5405]:

# iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p udp --dports 5404,5405 -j ACCEPT

rgmanager [41966, 41967, 41968, 41969]:

# iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 41966,41967,41968,41969 -j ACCEPT

ricci [11111]:

# iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 11111 -j ACCEPT

dlm [21064]:

# iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 21064 -j ACCEPT

ccsd [50006, 50007, 50008, 50009]:

# iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 50006,50008,50009 -j ACCEPT

# iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p udp --dports 50007 -j ACCEPT


B.2 Luci

If the luci server has firewall rules enforced, its specific IP port (8084) will require enabling:

# iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 8084 -j ACCEPT

B.3 Migration

The ability to migrate VMs requires the use of a communications port. The specific port is configurable; however, the steps in this volume used the default, 8002.

# iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 8002 -j ACCEPT
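
For reference, the relocation port is set in the Xen daemon configuration, /etc/xen/xend-config.sxp, on each host; the lines below show the defaults assumed in this volume.

(xend-relocation-server yes)
(xend-relocation-port 8002)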

B.4 xvmd fencing

The alternate configuration used fence_xvmd. Ports on the hosts and guests will need to be opened; the port number is the same, but the protocols differ.

Host:

# iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p udp --dports 1229 -j ACCEPT

Guest:

# iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 1229 -j ACCEPT

B.5 NFS server

The alternate solution demonstrated using an NFS server to provide the web content. As seen in Volume 1, several of the daemons used by an NFS server are not tied to specific ports. To work with a firewall on the NFS server so that clients will be able to connect to the NFS share, the daemons need to be configured to use specific ports.

The ports that some of the daemons used by NFS listen on typically vary. These will be set to specific ports by adding the following lines to /etc/sysconfig/nfs.

STATD_PORT=10002
STATD_OUTGOING_PORT=10003
MOUNTD_PORT=10004
RQUOTAD_PORT=10005
LOCKD_UDPPORT=30001
LOCKD_TCPPORT=30001

Rebooting the node will make sure that the node uses these new ports.
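
If a full reboot is not desired, restarting the related services should also pick up the new port assignments; a sketch using the standard RHEL 5 service names follows.

# service portmap restart
# service nfslock restart
# service nfs restart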

The ports for the portmapper and nfs daemons are well known and fixed. Verify that the remaining daemons are using the ports that were defined:

# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp  10005  rquotad
    100011    2   udp  10005  rquotad
    100011    1   tcp  10005  rquotad
    100011    2   tcp  10005  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100021    1   udp  30001  nlockmgr
    100021    3   udp  30001  nlockmgr
    100021    4   udp  30001  nlockmgr
    100021    1   tcp  30001  nlockmgr
    100021    3   tcp  30001  nlockmgr
    100021    4   tcp  30001  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  10004  mountd
    100005    1   tcp  10004  mountd
    100005    2   udp  10004  mountd
    100005    2   tcp  10004  mountd
    100005    3   udp  10004  mountd
    100005    3   tcp  10004  mountd
    100024    1   udp  10002  status
    100024    1   tcp  10002  status
#

The operator will need to open the ports that the daemons are using so that the systems that need the NFS mount can connect through the firewall. The commands below open all the used ports and save the changes; however, some ports may have been opened in previous configuration steps, so the user may not need to issue all the commands listed.

# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 111 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 2049 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 10002 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 10003 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 10004 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 10005 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 30001 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 111 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 10002 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 10003 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 10004 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 10005 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 30001 -j ACCEPT
# service iptables save
Saving firewall rules to /etc/sysconfig/iptables:          [  OK  ]
#


Appendix C: SELinux

Several denials were seen while developing this paper. Many were already known and have been addressed. RHN was used to look for a policy more current than the one on the systems. First, the current version the machines are running needs to be known.

# rpm -qa | grep selinux-policy
selinux-policy-2.4.6-137.1.el5_2
selinux-policy-targeted-2.4.6-137.1.el5_2
#


After logging into RHN, “selinux-policy” was entered into the search bar with the pull-down set to Packages, then the Search button was pressed.


In the Package Search page that was displayed, What to search was changed to Name Only and Where to search was changed to Channels relevant to your systems. Press the Search button.

After updating, a similar page will be presented; select the link for selinux-policy at the bottom of the page.


The first choice found was a Beta of a later policy, selinux-policy-2.4.6-170.el5.noarch. Select this link.


At the bottom of this page, select the Download Package link.

A confirmation dialog will be presented.


Perform a similar search for “selinux-policy-targeted”. A matching version is found, although it is not the first choice. Select this link.


Download the package.

Using rpm, upgrade both packages on all OSes.

# rpm -Uvh selinux-policy-2.4.6-170.el5.noarch.rpm selinux-policy-targeted-2.4.6-170.el5.noarch.rpm
warning: selinux-policy-2.4.6-170.el5.noarch.rpm: Header V3 DSA signature: NOKEY, key ID 897da07a
Preparing...                ########################################### [100%]
   1:selinux-policy         ########################################### [ 50%]
   2:selinux-policy-targeted########################################### [100%]
/sbin/restorecon reset /etc/httpd/run context system_u:object_r:etc_t:s0->system_u:object_r:httpd_config_t:s0
[...]
/sbin/restorecon reset /var/run/audispd_events context system_u:object_r:auditd_var_run_t:s0->system_u:object_r:audisp_var_run_t:s0
#


After upgrading each node, the current alerts were deleted using the SELinux troubleshooter, then the system was rebooted.

C.1 Loadable modules

After loading the downloaded policies, several denials were still encountered. Each time, a loadable policy module was generated and loaded, then copied to the other node and loaded there. The following commands were used to generate and load a new policy module. After the <policy name>.pp was created, it was scp'd to the other host and applied.

# cat /var/log/audit/audit.log | audit2allow -l -M <policy name>
# semodule -i <policy name>.pp
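
A sketch of copying and applying a generated module on the second host is shown below; the host name is illustrative.

# scp <policy name>.pp node2.lab.bos.redhat.com:/tmp/
# ssh node2.lab.bos.redhat.com semodule -i /tmp/<policy name>.pp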

The following are listings of the text versions of the loadable policies that were generated during this paper. Bugzilla #473168 was entered to track them.

module host 1.0;

require {
        type ricci_modstorage_t;
        type mount_exec_t;
        type consoletype_exec_t;
        type var_lib_t;
        class dir search;
        class file { execute getattr };

}

#============= ricci_modstorage_t ==============
allow ricci_modstorage_t consoletype_exec_t:file { execute getattr };
allow ricci_modstorage_t mount_exec_t:file execute;
allow ricci_modstorage_t var_lib_t:dir search;

module host2 1.0;

require {
        type ricci_modstorage_t;
        type consoletype_exec_t;
        class file { read execute_no_trans };

}

#============= ricci_modstorage_t ==============
allow ricci_modstorage_t consoletype_exec_t:file { read execute_no_trans };

module host3 1.0;

require {
        type debugfs_t;
        type ricci_modstorage_t;
        type fs_t;
        type file_t;
        type default_t;
        type mount_exec_t;
        type initrc_t;
        type etc_runtime_t;
        class capability { setuid setgid };
        class unix_stream_socket connectto;
        class file { write read execute_no_trans append };
        class dir { search getattr mounton };
        class filesystem mount;

}

#============= ricci_modstorage_t ==============
allow ricci_modstorage_t debugfs_t:dir search;
allow ricci_modstorage_t default_t:dir { getattr mounton };
allow ricci_modstorage_t etc_runtime_t:dir search;
allow ricci_modstorage_t etc_runtime_t:file { write append };
allow ricci_modstorage_t file_t:dir getattr;
allow ricci_modstorage_t fs_t:filesystem mount;
allow ricci_modstorage_t initrc_t:unix_stream_socket connectto;
allow ricci_modstorage_t mount_exec_t:file { read execute_no_trans };
allow ricci_modstorage_t self:capability { setuid setgid };

module host4 1.0;

require {
        type ricci_modstorage_t;
        class capability sys_admin;

}

#============= ricci_modstorage_t ==============
allow ricci_modstorage_t self:capability sys_admin;

C.2 Setting context on new areas

Two GFS2 file systems were created and mounted. The /var/lib/xen/images area has a defined context. The /etc/sharedConfig area does not have a related context, so one is defined. Then the context is restored on both areas.

# semanage fcontext -a -t etc_t "/etc/sharedConfig(/.*)?"
# restorecon /etc/sharedConfig
# restorecon /var/lib/xen/images
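
The applied contexts can be spot checked with ls; /etc/sharedConfig should now show the etc_t type assigned above.

# ls -dZ /etc/sharedConfig /var/lib/xen/images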


Appendix D: Issue Tracking

Red Hat Bugzilla #467464 adding qdisk to existing cluster fails to update cman entry in cluster.conf

Red Hat Bugzilla #473168 SELinux policy preventing the mount of GFS2 while making lv via luci

Red Hat Bugzilla #444381 conga writes 'exportpath', 'nfstype' instead of 'export', 'fstype' attributes for netfs

Red Hat Bugzilla #469874 Openais appears to fail, causing cluster member to fence

Red Hat Bugzilla #469965 Adding existing fence device fails

Red Hat Bugzilla #462727 Problems occur when executing 'xm reboot' on the second time
