
IBM Platform HPC, Version 4.2
Installation Guide

SC27-6107-02



Note: Before using this information and the product it supports, read the information in “Notices” on page 71.

First edition

This edition applies to version 4, release 2, of IBM Platform HPC (product number 5725-K71) and to all subsequent releases and modifications until otherwise indicated in new editions.

© Copyright IBM Corporation 1994, 2014.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.


Contents

Chapter 1. Installation planning  1
  Preinstallation roadmap  2
  Installation roadmap  3

Chapter 2. Planning  5
  Planning your system configuration  5
  Planning a high availability environment  7

Chapter 3. Preparing to install PHPC  9
  PHPC requirements  9
    High availability requirements  10
    Prepare a shared file system  12
  Configure and test switches  12
  Plan your network configuration  13
  Installing and configuring the operating system on the management node  13
    Red Hat Enterprise Linux prerequisites  15
    SUSE Linux Enterprise Server (SLES) 11.x prerequisites  16

Chapter 4. Performing an installation  17
  Comparing installation methods  17
  Quick installation roadmap  19
  Quick installation  20
  Custom installation roadmap  23
  Custom installation  24

Chapter 5. Performing a silent installation  29
  Response file for silent installation  29

Chapter 6. Verifying the installation  35

Chapter 7. Taking the first steps after installation  37

Chapter 8. Troubleshooting installation problems  39
  Configuring your browser  40

Chapter 9. Setting up a high availability environment  41
  Preparing high availability  41
  Enable a high availability environment  43
  Completing the high availability enablement  44
    Configure IPMI as a fencing device  44
    Create a failover notification  45
      Setting up SMTP mail settings  45
  Verifying a high availability environment  46
  Troubleshooting a high availability environment enablement  47

Chapter 10. Upgrading IBM Platform HPC  49
  Upgrading to Platform HPC Version 4.2  49
    Upgrade planning  49
      Upgrading checklist  49
      Upgrading roadmap  50
    Upgrading to Platform HPC 4.2 without OS reinstall  50
      Preparing to upgrade  50
      Backing up Platform HPC  51
      Performing the Platform HPC upgrade  52
      Completing the upgrade  53
      Verifying the upgrade  55
    Upgrading to Platform HPC 4.2 with OS reinstall  55
      Preparing to upgrade  55
      Backing up Platform HPC  57
      Performing the Platform HPC upgrade  57
      Completing the upgrade  58
      Verifying the upgrade  60
    Troubleshooting upgrade problems  60
    Rollback to Platform HPC 4.1.1.1  61
  Upgrading entitlement  63
    Upgrading LSF entitlement  63
    Upgrading PAC entitlement  63

Chapter 11. Applying fixes  65

Chapter 12. References  67
  Configuration files  67
    High availability definition file  67
  Commands  68
    pcmhatool  68

Notices  71
  Trademarks  73
  Privacy policy considerations  73


Chapter 1. Installation planning

Installing and configuring IBM® Platform HPC involves several steps that you must complete in the appropriate sequence. Review the preinstallation and installation roadmaps before you begin the installation process.

The Installation Guide contains information to help you prepare for your Platform HPC installation, and includes steps for installing Platform HPC.

As part of the IBM Platform HPC installation, the following components are installed:
- IBM Platform LSF®
- IBM Platform MPI

Workload management with IBM Platform LSF

IBM Platform HPC includes a workload management component for load balancing and resource allocation.

Platform HPC includes a Platform LSF workload management component. IBM Platform LSF is enterprise-class software that distributes work across existing heterogeneous IT resources to create a shared, scalable, and fault-tolerant infrastructure that delivers faster, more reliable workload performance. LSF balances load and allocates resources, while providing access to those resources. LSF provides a resource management framework that takes your job requirements, finds the best resources to run the job, and monitors its progress. Jobs always run according to host load and site policies.
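As a quick illustration (not taken from this guide; queue names and output vary by site), submitting and monitoring a job uses the standard LSF commands:

# Submit a simple one-slot job and monitor it with standard LSF commands
bsub -n 1 sleep 60    # submit a job that sleeps for 60 seconds
bjobs                 # list your pending and running jobs
bhosts                # show batch host status and available job slots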

This LSF workload management component is installed as part of the Platform HPC installation, and the workload management master daemon is configured to run on the same node as the Platform HPC management node.

For more information on IBM Platform LSF, refer to the IBM Platform LSF administration guide. You can find the IBM Platform LSF documentation here: http://public-IP-address/install/kits/kit-phpc-4.2/docs/lsf/, where public-IP-address is the public IP address of your Platform HPC management node.

To upgrade your product entitlement for LSF, refer to “Upgrading LSF entitlement” on page 63.

IBM Platform MPI

By default, IBM Platform MPI is installed with IBM Platform HPC. For building MPI applications, you must have one of the supported compilers installed. Refer to the IBM Platform MPI release notes for a list of supported compilers. The IBM Platform MPI release notes are in the /opt/ibm/platform_mpi/doc/ directory.

For more information on submitting and compiling MPI jobs, see the IBM Platform MPI User's Guide 9.1 (SC27-5319-00).
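For example, a minimal compile-and-submit sequence might look like the following sketch; the MPI_ROOT path matches the default installation directory mentioned above, while the source file name and job size are illustrative:

# Compile an MPI program with the Platform MPI wrapper compiler
export MPI_ROOT=/opt/ibm/platform_mpi
$MPI_ROOT/bin/mpicc -o hello hello.c    # hello.c is an example source file

# Submit a 4-way MPI job through LSF
bsub -n 4 $MPI_ROOT/bin/mpirun -np 4 ./hello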


Preinstallation roadmap

Before you begin your installation, ensure that the preinstallation tasks are completed.

There are two cases to consider before installing Platform HPC:
- Installing Platform HPC on a bare metal management node.
- Installing Platform HPC on a management node that already has an operating system installed.

If you are installing Platform HPC on a management node that already has an operating system installed, you can omit preinstallation actions 5 and 6.

Table 1. Preinstallation roadmap

1. Plan your cluster. Review and plan your cluster setup. Refer to “Planning your system configuration” on page 5.
2. Review Platform HPC requirements. Make sure that the minimum hardware and software requirements are met. Refer to “PHPC requirements” on page 9.
3. Configure and test switches. Ensure that the necessary switches are configured to work with Platform HPC. Refer to “Configure and test switches” on page 12.
4. Plan your network configuration. Before proceeding with the installation, plan your network configuration, including the provision network, public network, and BMC network information. Refer to “Plan your network configuration” on page 13.
5. Obtain a copy of your operating system. If the operating system is not installed, you must obtain a copy of your operating system and install it.
6. Install and configure your operating system. Ensure that you configure your operating system: decide on a partitioning layout, and meet the Red Hat Enterprise Linux 6.x prerequisites. Refer to “Installing and configuring the operating system on the management node” on page 13.
7. Obtain a copy of IBM Platform HPC. If you do not have a copy of IBM Platform HPC, you can download it from IBM Passport Advantage®.


Installation roadmap

This roadmap helps you navigate your way through the PHPC installation.

Table 2. Installation roadmap

1. Select an installation method. Choose an installation method from the following:
   - Installing PHPC by using the installer, which offers a quick installation or a custom installation
   - Installing PHPC by using silent mode
   Refer to Chapter 1, “Installation planning,” on page 1.
2. Perform the installation. Follow your installation method to complete the PHPC installation.
3. Verify your installation. Ensure that PHPC is successfully installed. Refer to Chapter 6, “Verifying the installation,” on page 35.
4. Troubleshoot problems that occurred during installation. If an error occurs during installation, you can troubleshoot the error. Refer to Chapter 8, “Troubleshooting installation problems,” on page 39.
5. (Optional) Upgrade product entitlement. Optionally, you can update your product entitlement for LSF. Refer to “Upgrading LSF entitlement” on page 63.
6. (Optional) Apply PHPC fixes. After you install PHPC, you can check whether any fixes are available through IBM Fix Central. Refer to Chapter 11, “Applying fixes,” on page 65.


Chapter 2. Planning

Before you install IBM Platform HPC and deploy the system, you must decide on your network topology and system configuration.

Planning your system configuration

Understand the role of the management node and plan your system settings and configurations accordingly. IBM Platform HPC software is installed on the management node after the management node meets all requirements.

The management node is responsible for the following functions:
- Administration, management, and monitoring of the system
- Installation of compute nodes
- Operating system distribution management and updates
- System configuration management
- Kit management
- Provisioning templates
- Stateless and stateful management
- User logon, compilation, and submission of jobs to the system
- Acting as a firewall to shield the system from external nodes and networks
- Acting as a server for many important services, such as DHCP, NFS, DNS, NTP, and HTTP

The management node connects to both a provision and a public network. In the figure below, the management node connects to the provision network through the Ethernet interface that is mapped to eth1, and to the public network through the Ethernet interface that is mapped to eth0. The public network refers to the main network in your company or organization. A network switch connects the installation and compute nodes together to form a provision network.
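For reference, a minimal static configuration for the two interfaces on a RHEL management node might look like the following sketch, matching the mapping described above; all addresses are illustrative placeholders:

# /etc/sysconfig/network-scripts/ifcfg-eth0 (public network; example values)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.0.3
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 (provision network; example values)
DEVICE=eth1
BOOTPROTO=static
IPADDR=172.20.7.3
NETMASK=255.255.255.0
ONBOOT=yes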


Each compute node can be connected to the provision network and the BMC network. Multiple compute nodes are responsible for calculations. They are also responsible for running batch or parallel jobs.

For networks where compute nodes have the same port for an Ethernet and BMC connection, the provision and BMC networks can be the same. The following is an example of a system where compute nodes share a provisioning port.

Figure 1. System with a BMC network


Note: For IPMI using a BMC network, you must use eth0 in order for the BMC network to use the provision network.

Although other system configurations are possible, the two-Ethernet-interface configuration is the most common. By default, eth0 is connected to the provision interface and eth1 is connected to the public interface. Alternatively, eth0 can be the public interface and eth1 the provision interface.

Note: You can also connect compute nodes to an InfiniBand network after the installation.

The provision network, which connects the management node and compute nodes, is typically a Gigabit or 100-Mbps Ethernet network. In this simple setup, the provision network serves three purposes:
- System administration
- System monitoring
- Message passing

It is common practice, however, to perform message passing over a much faster network using a high-speed interconnect such as InfiniBand. A fast interconnect provides benefits such as higher throughput and lower latency. For more information about a particular interconnect, contact the appropriate interconnect vendor.

Planning a high availability environment

A high availability environment includes two locally installed PHPC management nodes with the same software and network configuration (except for the host name and IP address). High availability is configured on both management nodes to control key services.

Figure 2. System with a combined provision and BMC network


Chapter 3. Preparing to install PHPC

Before installing PHPC, steps must be taken to ensure that all prerequisites are met.

Before installing PHPC, you must complete the following steps:
- Check the PHPC requirements. You must make sure that the minimum hardware and software requirements are met.
- Configure and test switches.
- Plan your network configuration.
- Obtain a copy of the operating system. Refer to the PHPC requirements for a list of supported operating systems.
- Install an operating system for the management node.
- Obtain a copy of the product.

PHPC requirements

You must make sure that the minimum hardware and software requirements are met.

Hardware requirements

Before you install PHPC, you must make sure that the minimum hardware requirements are met.

Minimum hardware requirements for the management node:
- 100 GB free disk space
- 4 GB of physical memory (RAM)
- At least one Ethernet interface configured with a static address

Note: For IBM PureFlex™ systems, the management node must be a node that is not in the IBM Flex Chassis.

Minimum requirements for compute nodes for stateful package-based installations:
- 1 GB of physical memory (RAM)
- 40 GB of free disk space
- One static Ethernet interface

Minimum requirements for compute nodes for stateless image-based installations:
- 4 GB of physical memory (RAM)
- One static Ethernet interface

Optional hardware can be configured before the installation:
- Additional Ethernet interfaces for connecting to other networks
- Additional BMC interfaces
- Additional interconnects for high-performance message passing, such as InfiniBand

Note: Platform HPC installation on an NFS server is not supported.


Software requirements

One of the following operating systems is required:
- Red Hat Enterprise Linux (RHEL) 6.5 x86 (64-bit)
- SUSE Linux Enterprise Server (SLES) 11.3 x86 (64-bit)

High availability requirements

You must make sure that these requirements are met before you set up high availability.

Management node requirements

Requirements for the primary management node and the secondary management node in a high availability environment:
- The management nodes must have the same or similar hardware requirements.
- The management nodes must have the same partition layout.

After you prepare the secondary management node, ensure that the secondary node uses the same partition schema as the primary management node. Use df -h and fdisk -l to check the partition layout (see the example commands after this list). If the secondary node has a different partition layout, reinstall the operating system with the same partition layout.

- The management nodes must use the same network settings.
- The management nodes must use the same network interfaces to connect to the provision and public networks. Ensure that the same network interfaces are defined for the primary and secondary management nodes. On each management node, issue the ifconfig command to check that the network settings are the same. Additionally, ensure that the IP address of the same network interface is in the same subnet. If not, reconfigure the network interfaces on the secondary management node according to your network plan.

- The management nodes must be configured with the same time, time zone, and current date.
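For example, the following commands, run on each management node, cover these checks; the disk device name is an example:

# Compare the partition layout on the primary and secondary management nodes
df -h
fdisk -l /dev/sda    # /dev/sda is an example disk device

# Compare the network interface settings on both nodes
ifconfig eth0
ifconfig eth1

# Compare the time, time zone, and current date on both nodes
date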

Virtual network requirements

Virtual network information is needed to configure and enable high availability. Collect the following high availability information:
- Virtual management node name
- Virtual IP address for the public network
- Virtual IP address for the provision network
- Shared directory for user home
- Shared directory for system work data

Note: In a high availability environment, all IP addresses (the management node IP addresses and the virtual IP addresses) must be in the IP address range of your network. To ensure that all IP addresses are in the IP address range of your network, you can use sequential IP addresses. Sequential IP addresses can help avoid any issues. For example:


Table 3. Example: Sequential IP addresses

- public network: IP address range 192.168.0.3-192.168.0.200; primary management node 192.168.0.3; secondary management node 192.168.0.4; virtual IP address 192.168.0.5
- provision network: IP address range 172.20.7.3-172.20.7.200; primary management node 172.20.7.3; secondary management node 172.20.7.4; virtual IP address 172.20.7.5

Shared file system requirements

Shared file systems are required to set up a high availability environment in Platform HPC. By default, two shared directories are required in a high availability environment: one to store user data and one to store system work data. In a high availability environment, all shared file systems must be accessible by the provision network for both the management nodes and the compute nodes.

The following shared file systems must already be created on your shared storage server before you set up and enable a high availability environment:

Shared directory for system work data
- The minimum available shared disk space that is required is 40 GB. Required disk space varies based on the cluster usage.
- The read, write, and execute permissions must be enabled for the operating system root user and the Platform HPC administrator. By default, the Platform HPC administrator is phpcadmin.

Shared directory for user data (/home)
- Ensure that there is enough disk space for your data in your /home directory. The minimum available shared disk space that is required is 4 GB, and it varies based on the disk space requirements for each user and the total number of users. If not provided, the user data is stored together with the system work data.
- The read and write permissions must be enabled for all users.

Additionally, the following shared file system requirements must be met:
- The shared file systems cannot be hosted on one of the management nodes.
- The shared file systems should be specific to, and used only for, the high availability environment. This ensures that no single point of failure (SPOF) errors occur.
- If the IP address of the shared storage server is in the network IP address range that is managed by Platform HPC, it must be added as an unmanaged device to the cluster to avoid any IP address errors. Refer to Unmanaged devices.
- If you use an external NAS or NFS server to host the shared directories that are needed for high availability, the following parameters must be specified in the exports entries:
  rw,sync,no_root_squash,fsid=num
  where num is an integer and should be different for each shared directory.

For example, to create a shared data directory and a shared home directory on an external NFS server, use the following commands:


mkdir -p /export/data
mkdir -p /export/home

Next, modify the /etc/exports file on the external NFS server:
/export/ 172.20.7.0/24(rw,sync,no_root_squash,fsid=0)

Note: If you are using two different file systems to create the directories, ensure that the fsid parameter is set for each export entry. For example:
/export/data 172.20.7.0/24(rw,sync,no_root_squash,fsid=3)
/export/home 172.20.7.0/24(rw,sync,no_root_squash,fsid=4)

Prepare a shared file system

Before you enable high availability, prepare a shared file system. A shared file system is used in high availability to store shared work and user settings.

Procedure
1. Confirm that the NFS server can be used for the high availability configuration and that it is accessible from the Platform HPC management nodes. Run the following command on both management nodes to ping the NFS server from the provision network.
# ping -c 2 -I eth1 192.168.1.1
PING 192.168.1.1 (192.168.1.1) from 192.168.1.3 eth1: 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.036 ms

2. View the list of all NFS shared directories available on the NFS server.
# showmount -e 192.168.1.1
Export list for 192.168.1.1:
/export/data 192.168.1.0/255.255.255.0
/export/home 192.168.1.0/255.255.255.0

3. Add the NFS server as an unmanaged device to the Platform HPC system. This prevents the IP address of the NFS server from being allocated to a compute node and ensures that the NFS server name can be resolved consistently across the cluster. On the primary management node, run the following commands.
# nodeaddunmged hostname=nfsserver ip=192.168.1.1
Created unmanaged node.
# plcclient.sh -p pcmnodeloader
Loaders startup successfully.

Configure and test switches

Before installing IBM Platform HPC, ensure that your Ethernet switches are configured properly.

Some installation issues can be caused by misconfigured network switches. These issues include nodes that cannot PXE boot, nodes that cannot download a kickstart file, and nodes that cannot go into interactive startup. To ensure that the Ethernet switches are configured correctly, complete the following steps:
1. Disable the Spanning Tree Protocol on switched networks.
2. If currently disabled, enable PortFast on the switch. Different switch manufacturers may use different names for PortFast; it is the forwarding scheme that the switch uses. For best installation performance, the switch begins forwarding packets as it begins receiving them, which speeds up the PXE booting process. Enabling PortFast, if it is supported by the switch, is recommended.


3. If currently disabled, enable multicasting on the switch. Certain switches might need to be configured to allow multicast traffic on the private network.
4. Run diagnostics on the switch to ensure that the switch is connected properly and that there are no bad ports or cables in the configuration. (A vendor-specific sketch of steps 1 and 2 follows this list.)

Plan your network configuration

Before installing Platform HPC, ensure that you know the details of your network configuration, including whether you are setting up a high availability environment.

Information about your network is required during installation, including information about the management nodes and network details.

Note: If you are setting up a high availability environment, collect the information for both management nodes: the primary management node and the secondary management node.

The following information is needed to set up and configure your network.

Plan your network details, including:
- Provision network information:
  – Network subnet
  – Network domain name
  – Static IP address range
- Public network information:
  – Network subnet
  – Network domain name
  – Static IP address range
- BMC network information:
  – Network subnet
  – Network domain name
  – Static IP address range
- Management node information:
  – Node name (use a fully qualified domain name with a public domain suffix, for example: management.domain.com)
  – Static IP address and subnet mask for the public network
  – Static IP address and subnet mask for the provision network
  – Default gateway address
  – External DNS server IP address

Note: For a high availability environment, the management node information is required for both the primary management node and the secondary management node.

Installing and configuring the operating system on the management node

Before you can create the PHPC management node, you must install an operating system on the management node.


Complete the following steps to install the operating system on the management node:
1. Obtain a copy of the operating system.
2. Install and configure the operating system.

Before you install the operating system on the management node, ensure that the following conditions are met:
- Decide on a partitioning layout. The suggested partitioning layout is as follows:
  – Ensure that the /opt partition has at least 4 GB
  – Ensure that the /var partition has at least 40 GB
  – Ensure that the /install partition has at least 40 GB

Note: After you install Platform HPC, you can customize the disk partitioning on compute nodes by creating a custom script to configure Logical Volume Manager (LVM) partitioning.

- Configure at least one static network interface.
- Use a fully qualified domain name (FQDN) for the management node.
- The /home directory must be writable.

If the /home directory is mounted by autofs, you must first disable the autofs configuration:
# chkconfig autofs off
# service autofs stop

To make the /home directory writable, run the following commands as root:
# chmod u+w /home
# ls -al / | grep home

- The openais-devel package must be removed manually if it is already installed.
- Before you install PHPC on the management node, make sure that shadow password authentication is enabled. Run setup and make sure that Use Shadow Passwords is checked.

- Ensure that IPv6 is enabled for remote power and console management. Do not disable IPv6 during the operating system installation. To enable IPv6, do the following (see the example after this list):
  For RHEL: If the disable-ipv6.conf file exists in the /etc/modprobe.d directory, comment out the line install ipv6 /bin/true, which disables IPv6.
  For SLES: If the 50-ipv6.conf file exists in the /etc/modprobe.d directory, comment out the line install ipv6 /bin/true, which disables IPv6.

Note: After you install the operating system, ensure that the operating system time is set to the current real time. Use the date command to check the date on the operating system, and the date -s command to set the date. For example:
date -s "20131017 04:57:00"

Important:

The management node does not support installing on an operating system that is upgraded through yum or zypper update. Do not run a yum update (RHEL) or zypper update (SLES) before installing PHPC. You can update the management node's operating system after installation. If you do upgrade your operating system through yum or zypper, you must roll back your changes before proceeding with the PHPC installation.
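On RHEL, one way to locate and roll back such an update (a sketch; verify the transaction ID before undoing anything) is through the yum transaction history:

# List recent yum transactions, then undo the update transaction
yum history list
yum history undo 42    # 42 is an example transaction ID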


If you are installing the Red Hat Enterprise Linux (RHEL) 6.x operating system, see the additional RHEL prerequisites.

If you are installing the SUSE Linux Enterprise Server (SLES) 11.x operating system, see the additional SLES prerequisites.

After all the conditions and prerequisites are met, install the operating system. Refer to the operating system documentation for how to install the operating system.

Red Hat Enterprise Linux prerequisites

Before you install Platform HPC on Red Hat Enterprise Linux (RHEL) 6.x, you must ensure the following:
1. The 70-persistent-net.rules file is created under /etc/udev/rules.d/ to make the network interface names persistent across reboots.
2. Before installing PHPC, you must stop the NetworkManager service. To stop the NetworkManager service, run the following command:
/etc/init.d/NetworkManager stop

3. Disable SELinux. (A verification example follows this list.)
a. On the management node, edit the /etc/selinux/config file to set SELINUX=disabled.
b. Reboot the management node.

4. Ensure that the traditional naming scheme ethN is used. If you have a system that does not use the traditional naming scheme ethN, you must revert to it:
a. Rename all ifcfg-emN and ifcfg-p* configuration files and modify the contents of the files accordingly. The content of these files is distribution-specific (see /usr/share/doc/initscripts-version for details). For example, ifcfg-ethN files in RHEL 6.x contain a DEVICE= field which is assigned the emN name. Modify it to suit the new naming scheme, such as DEVICE=eth0.
b. Comment out the HWADDR variable in the ifcfg-eth* files, if present, because it is not possible to predict which of the network devices is named eth0, eth1, and so on.
c. Reboot the system.
d. Log in to see the ethN names.

5. Check whether the net-snmp-perl package is installed on the management node. If not, you must install it manually from your RHEL distribution media.

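After the reboot, a quick way to confirm that SELinux is disabled (standard RHEL commands):

# Verify that SELinux is disabled after the reboot
getenforce                             # expected output: Disabled
grep '^SELINUX=' /etc/selinux/config   # expected: SELINUX=disabled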


SUSE Linux Enterprise Server (SLES) 11.x prerequisites

Before you install Platform HPC on SUSE Linux Enterprise Server (SLES), you must complete the following steps.
1. Disable AppArmor. To disable AppArmor, complete the following steps:
a. Start the YaST configuration and setup tool.
b. From the System menu, select the System Services (Runlevel) option.
c. Select the Expert Mode option.
d. Select the boot.apparmor service, go to the Set/Reset menu, and select Disable the service.
e. To save the options, click OK.
f. Exit the YaST configuration and setup tool by clicking OK.

2. If the createrepo and perl-DBD-Pg packages are not installed, complete the following steps:
a. To install the packages, prepare the following ISO images:
   - Installation ISO image: SLES-11-SP3-DVD-x86_64-GM-DVD1.iso
   - SDK ISO image: SLE-11-SP3-SDK-DVD-x86_64-GM-DVD1.iso
b. Create a software repository for each ISO image by using the YaST configuration and setup tool. You must create a software repository for both the installation ISO image and the SDK ISO image. To create a software repository, complete the following steps:
   1) Start the YaST configuration and setup tool in a terminal.
   2) From the Software menu, select the Software Repositories option and click Add.
   3) Select the Local ISO Image option and click Next.
   4) Enter the Repository Name and select a Path to ISO Image. Click Next.
   5) Click OK to save the options and exit the YaST configuration and setup tool.
c. To install the createrepo and perl-DBD-Pg packages, run the following command:
   zypper install createrepo perl-DBD-Pg

3. Reboot the management node.


Chapter 4. Performing an installation

Install PHPC by using the installer. The installer enables you to specify your installation options.

After the installation starts, the installer automatically checks the hardware and software configurations. The installer displays one of the following results for each checked item:
- OK: no problems are found for the checked item.
- WARNING: the configuration of an item does not match the requirements; the installation continues despite the warnings.
- FAILED: the installer cannot recover from an error, and the installation quits.

The installer (phpc-installer) displays the corresponding error message for problems that are detected and automatically ends the installation. If there are errors, you must resolve the identified problems and then rerun the phpc-installer until all installation requirements are met.

Usage notes
- Do not use an NFS partition or a local /home partition for the depot (/install) mount point.
- In the quick installation, the default values are used for values that are not specified during installation.
- A valid installation path must be used for the installer. The installation path cannot include special characters such as a colon (:), exclamation point (!), or space, and the installation cannot begin until a valid path is used.

Comparing installation methods

IBM Platform HPC can be installed by using an interactive installer with one of two methods: the quick installation method or the custom installation method. The quick installation method quickly sets up basic options with default values. The custom installation method provides additional installation options and enables the administrator to specify additional system configurations.

The following is a complete comparison of the two installation methods and the default values that are provided by the installer.

Table 4. Installer option comparison

The table lists each installer option, its default value, and whether the option is included in the quick installation and in the custom installation.

- Select a mount point for the depot (/install) directory. Default: /. Quick: Yes. Custom: Yes.
- Select the location that you want to install the operating system from. Default: CD/DVD drive. Quick: Yes. Custom: Yes.
- Specify a provision network interface. Default: eth0. Quick: Yes. Custom: Yes.
- Specify a public network interface. Default: eth1. Quick: Yes. Custom: Yes.
- Do you want to enable a public network connection? Default: Yes. Quick: Yes. Custom: Yes.
- Do you want to enable the public interface firewall? Default: Yes. Quick: No. Custom: Yes.
- Do you want to enable NAT forwarding on the management node? Default: Yes. Quick: No. Custom: Yes.
- Enable a BMC network that uses the default provisioning template? Default: No. Quick: Yes. Custom: Yes.
- Select a BMC network (options: create a new network, public network, provision network). Default: Create a new network. Quick: Yes. Custom: Yes.
- If creating a new BMC network, specify a subnet for the BMC network. Default: N/A. Quick: Yes. Custom: Yes.
- If creating a new BMC network, specify a subnet mask for the BMC network. Default: 255.255.255.0. Quick: Yes. Custom: Yes.
- If creating a new BMC network, specify a gateway IP address for the BMC network. Default: N/A. Quick: No. Custom: Yes.
- If creating a new BMC network, specify an IP address range for the BMC network. Default: 192.168.1.3-192.168.1.254. Quick: Yes. Custom: Yes.
- Specify the hardware profile used by your BMC network (options: IPMI, IBM_Flex_System_x, IBM_System_x_M4, IBM_iDataPlex_M4, IBM_NeXtScale_M4). Default: IBM_System_x_M4. Quick: Yes. Custom: Yes.
- Set the domain name for the provision network. Default: private.dns.zone. Quick: Yes. Custom: Yes.
- Set the domain name for the public network. Default: public.com. Quick: Yes. Custom: Yes.
- Specify the provisioning compute node IP address range (generated based on the management node interface). Default: 10.10.0.3-10.10.0.200. Quick: No. Custom: Yes.
- Do you want to provision compute nodes with the node discovery method? Default: Yes. Quick: No. Custom: Yes.
- Specify the node discovery IP address range (generated based on the management node interface). Default: 10.10.0.201-10.10.0.254. Quick: No. Custom: Yes.
- Set the IP addresses of the name servers. Default: 192.168.1.40,192.168.1.50. Quick: No. Custom: Yes.
- Specify the NTP server. Default: pool.ntp.org. Quick: No. Custom: Yes.
- Do you want to export the /home directory? Default: Yes. Quick: No. Custom: Yes.
- Set the database administrator password. Default: pcmdbpass. Quick: No. Custom: Yes.
- Set the default root password for compute nodes. Default: PASSW0RD. Quick: No. Custom: Yes.

Quick installation roadmap

Before you begin your quick installation, use the following roadmap to prepare your values for each installation option. You can choose to use the default example values for some or all of the options.

Table 5. Preparing for PHPC quick installation

For each option, an example value is shown; record your own values before you begin.

1. Select a mount point for the depot (/install) directory. Example: /
2. Select the location that you want to install the operating system from. Example: CD/DVD drive
3. Specify a provision network interface. Example: eth0
4. Specify a public network interface. Example: eth1
5. Enable a BMC network that uses the default provisioning template? Example: Yes
6. Select a BMC network (options: create a new network, public network, provision network). Example: Create a new network
7. If creating a new BMC network, specify a subnet for the BMC network. Example: 192.168.1.0
8. If creating a new BMC network, specify a subnet mask for the BMC network. Example: 255.255.255.0
9. Specify the hardware profile used by your BMC network (options: IPMI, IBM_Flex_System_x, IBM_System_x_M4, IBM_iDataPlex_M4, IBM_NeXtScale_M4). Example: IBM_System_x_M4
10. Set the provision network domain name. Example: private.dns.zone
11. Set a domain name for the public network? (Yes/No) Example: Yes
12. Set the public domain name. Example: public.com or FQDN

Quick installation

You can configure the management node by using the quick installation option.

Before you begin

PHPC installation supports the Bash shell only.
- Before you start the PHPC installation, you must boot into the base kernel. The Xen kernel is not supported.
- User accounts that are created before PHPC is installed are automatically synchronized across compute nodes during node provisioning. User accounts that are created after PHPC is installed are automatically synchronized across compute nodes when the compute nodes are updated.
- You must be a root user to install.
- Installing PHPC requires you to provide the OS media. If you want to use the DVD drive, ensure that no applications are actively using the drive (including any command shell). If you started the PHPC installation in the DVD directory, you can suspend the installation (Ctrl-z), change to another directory (cd ~), and then resume the installation (fg). Alternatively, you can start the installation from another directory (for example: cd ~; python mount_point/phpc-installer).
- The /home mount point must have writable permission. Ensure that you have the correct permissions to add new users to the /home mount point.

About this task

The installer completes pre-checking processes and prompts you to answer questions to complete the management node configuration. The following steps summarize the installation of PHPC on your management node:
1. License agreement
2. Management node pre-check
3. Specify installation settings
4. Installation

Complete the following installation steps:

Procedure
1. Choose one of the following installation methods:
- Download the PHPC ISO to the management node.
- Insert the PHPC DVD into the management node.

2. Mount the PHPC installation media:
- If you install PHPC from an ISO file, mount the ISO on a directory such as /mnt. For example:
  # mount -o loop phpc-4.2.x64.iso /mnt
- If you install PHPC from DVD media, mount the DVD on a directory such as /mnt.
  Tip: Normally, the DVD media is automatically mounted to /media/PHPC-program_number. To start the installer, run: /media/PHPC-program_number/phpc-installer. If the DVD is mounted without execute permission, you must add python in front of the command (python /media/PHPC-program_number/phpc-installer).

3. Start the PHPC installer by issuing the following command:
# /mnt/phpc-installer

4. Accept the license agreement and continue.
5. Management node pre-checking starts automatically.
6. Choose the Quick Installation option as your installation method.
7. Select a mount point for the depot (/install) directory. The depot (/install) directory stores installation files for PHPC. The PHPC management node checks for the required disk space.
8. Select the location that you want to install the operating system from. The operating system version that you select must be the same as the operating system version on the management node.

OS Distribution installation from the DVD drive:
Insert the correct OS DVD disk into the DVD drive. The disk is verified and added to the depot (/install) directory after you confirm the installation. If the PHPC disk is already inserted, make sure to insert the OS disk after you copy the PHPC core packages.

OS Distribution installation from an ISO image or mount point:
Enter the path of the OS distribution or mount point, for example: /iso/rhel/6.x/x86_64/rhel-server-6.x-x86_64-dvd.iso. The PHPC management node verifies that the operating system is a supported distribution, architecture, and version.


Note: If the OS distribution is found on more than one ISO image, use the first ISO image during the installation. After the PHPC installation is completed, you can add the next ISO image from the Web Portal.

If you choose to install from an ISO image or mount point, you must enter the ISO image or mount point path.

9. Select a network interface for the provisioning network.
10. Select how the management node is connected to the public network. If the management node is not connected to the public network, select: It is not connected to the public network.

11. Enable a BMC network that uses the default provisioning template. If you choose to enable a BMC network, you must specify the following options:
    a. Select a BMC network. Options include:
       - Public network
       - Provision network
       - Create a new network. If you create a new BMC network, specify the following options:
         – A subnet for the BMC network.
         – A subnet mask for the BMC network.
    b. Select a hardware profile for the BMC network.
12. Enter a domain name for the provisioning network.
13. Choose whether to set a domain name for the public network.
14. Enter a domain name for the public network.
15. A summary of your selected installation settings is displayed. To change any of these settings, press '99' to reselect the settings, or press '1' to begin the installation.

Results

You successfully completed the PHPC installation. You can find the installation log here: /opt/pcm/log/phpc-installer.log.

To configure PHPC environment variables, run the following command: source /opt/pcm/bin/pcmenv.sh. Configuration is not required for new login sessions.

What to do next

After you complete the installation, verify that your PHPC environment is set up correctly.

To get started with PHPC, you can access the Web Portal with your web browser at http://hostname:8080 or http://IPaddress:8080. Log in on the management node with the user account root and the default password Cluster.


Custom installation roadmap

Before you begin your custom installation, use the following roadmap to prepare your values for each installation option. You can choose to use the default example values for some or all of the options.

Table 6. Preparing for PHPC custom installation

For each option, an example value is shown; record your own values before you begin.

1. Select a mount point for the depot (/install) directory. Example: /
2. Select the location that you want to install the operating system from. Example: CD/DVD drive
3. Specify a provision network interface. Example: eth0
4. Specify a public network interface. Example: eth1
5. Do you want to enable the public interface firewall? (Yes/No) Example: Yes
6. Do you want to enable NAT forwarding on the management node? (Yes/No) Example: Yes
7. Enable a BMC network that uses the default provisioning template? Example: Yes
8. Select one of the following options for creating your BMC network:
   a. Create a new network (example: Yes) and specify the following options:
      i. Subnet. Example: 192.168.1.0
      ii. Subnet mask. Example: 255.255.255.0
      iii. Gateway IP address. Example: 192.168.1.1
      iv. IP address range. Example: 192.168.1.3-192.168.1.254
   b. Use the public network. Example: N/A
   c. Use the provision network. Example: N/A
9. Specify the hardware profile used by your BMC network (options: IPMI, IBM_Flex_System_x, IBM_System_x_M4, IBM_iDataPlex_M4, IBM_NeXtScale_M4). Example: IBM_System_x_M4
10. Set the provision network domain name. Example: private.dns.zone
11. Set a domain name for the public network? (Yes/No) Example: Yes
12. Set the public domain name. Example: public.com or FQDN
13. Specify the provisioning compute node IP address range (generated based on the management node interface). Example: 10.10.0.3-10.10.0.200
14. Do you want to provision compute nodes with the node discovery method? (Yes/No) Example: Yes
15. Specify the node discovery IP address range (generated based on the management node interface). Example: 10.10.0.201-10.10.0.254
16. Set the IP addresses of the name servers. Example: 192.168.1.40,192.168.1.50
17. Specify the NTP server. Example: pool.ntp.org
18. Do you want to export the /home directory? (Yes/No) Example: Yes
19. Set the database administrator password. Example: pcmdbadm
20. Set the default root password for compute nodes. Example: Cluster

Custom installation

You can configure the management node by using the custom installation option.

Before you begin

Note: PHPC installation supports the Bash shell only.
- Before you start the PHPC installation, you must boot into the base kernel. The Xen kernel is not supported.
- User accounts that are created before PHPC is installed are automatically synchronized across compute nodes during node provisioning. User accounts that are created after PHPC is installed are automatically synchronized across compute nodes when the compute nodes are updated.
- You must be a root user to install.
- Installing PHPC requires you to provide the OS media. If you want to use the DVD drive, ensure that no applications are actively using the drive (including any command shell). If you started the PHPC installation in the DVD directory, you can suspend the installation (Ctrl-z), change to another directory (cd ~), and then resume the installation (fg). Alternatively, you can start the installation from another directory (for example: cd ~; python mount_point/phpc-installer).
- The /home mount point must have writable permission. Ensure that you have the correct permissions to add new users to the /home mount point.

About this task

The installer completes pre-checking processes and prompts you to answer questions to complete the management node configuration. The following steps summarize the installation of PHPC on your management node:
1. License agreement
2. Management node pre-check
3. Specify installation settings
4. Installation

Complete the following installation steps:


Procedure
1. Choose one of the following installation methods:
- Download the PHPC ISO to the management node.
- Insert the PHPC DVD into the management node.

2. Mount the PHPC installation media:
- If you install PHPC from an ISO file, mount the ISO on a directory such as /mnt. For example:
  # mount -o loop phpc-4.2.x64.iso /mnt
- If you install PHPC from DVD media, mount the DVD on a directory such as /mnt.
  Tip: Normally, the DVD media is automatically mounted to /media/PHPC-program_number. To start the installer, run: /media/PHPC-program_number/phpc-installer. If the DVD is mounted without execute permission, you must add python in front of the command (python /media/PHPC-program_number/phpc-installer).

3. Start the PHPC installer by issuing the following command:
# /mnt/phpc-installer

4. Accept the license agreement and continue.
5. Management node pre-checking starts automatically.
6. Select the Custom Installation option.
7. Select a mount point for the depot (/install) directory. The depot (/install) directory stores installation files for PHPC. The PHPC management node checks for the required disk space.

8. Select the location that you want to install the operating system from. The operating system version that you select must be the same as the operating system version on the management node.

OS Distribution installation from the DVD drive:
Insert the correct OS DVD disk into the DVD drive. The disk is verified and added to the depot (/install) directory after you confirm the installation. If the PHPC disk is already inserted, make sure to insert the OS disk after you copy the PHPC core packages.

OS Distribution installation from an ISO image or mount point:
Enter the path of the OS distribution or mount point, for example: /iso/rhel/6.x/x86_64/rhel-server-6.x-x86_64-dvd.iso. The PHPC management node verifies that the operating system is a supported distribution, architecture, and version.

Note: If the OS distribution is found on more than one ISO image, use the first ISO image during the installation. After the PHPC installation is completed, you can add the next ISO image from the Web Portal.

If you choose to install from an ISO image or mount point, you must enter the ISO image or mount point path.

9. Select a network interface for the provisioning network.
10. Enter the IP address range that is used for provisioning compute nodes.
11. Choose whether to provision compute nodes automatically with the node discovery method.
12. Enter a node discovery IP address range to be used for provisioning compute nodes by node discovery. The node discovery IP address range is a temporary IP address range that is used to automatically provision nodes by using the auto node discovery method. This range cannot overlap the range that is specified for provisioning compute nodes.

13. Select how the management node is connected to the public network. If the management node is not connected to the public network, select: It is not connected to the public network. If your management node is connected to a public network, you can optionally enable the following settings:
    a. Enable PHPC-specific rules for the management node firewall that is connected to the public interface.
    b. Enable NAT forwarding on the management node for all compute nodes.

14. Enable a BMC network that uses the default provisioning template. If you choose to enable a BMC network, you must specify the following options:
    a. Select a BMC network. Options include:
       - Public network
       - Provision network
       - Create a new network. If you create a new BMC network, specify the following options:
         – A subnet for the BMC network.
         – A subnet mask for the BMC network.
         – A gateway IP address for the BMC network.
         – An IP address range for the BMC network.
    b. Specify a hardware profile for the BMC network.

Table 7. Available hardware profiles based on hardware type

- Any IPMI-based hardware: IPMI
- IBM Flex System® x220, x240, and x440: IBM_Flex_System_x
- IBM System x3550 M4, x3650 M4, x3750 M4: IBM_System_x_M4
- IBM System dx360 M4: IBM_iDataPlex_M4
- IBM NeXtScale nx360 M4: IBM_NeXtScale_M4

15. Enter a domain name for the provisioning network.
16. Choose whether to set a domain name for the public network.
17. Enter a domain name for the public network.
18. Enter the IP addresses of your name servers, separated by commas.
19. Set the NTP server.
20. Export the home directory on the management node and use it for all compute nodes.
21. Enter the PHPC database administrator password.
22. Enter the root account password for all compute nodes.
23. A summary of your selected installation settings is displayed. To change any of these settings, press '99' to reselect the settings, or press '1' to begin the installation.

What to do next

After you complete the installation, verify that your PHPC environment is set up correctly.


To get started with PHPC, you can access the Web Portal with your web browser at http://hostname:8080 or http://IPaddress:8080. Log in on the management node with the root user account and password.


Chapter 5. Performing a silent installation

A silent installation installs the IBM Platform HPC software by using a silent response file. You can specify all of your installation options in the silent response file before installation.

Before you complete the installation by using silent mode, complete the following actions:
- Install the operating system on the management node.
- Ensure that you have the correct permissions to add new users to the /home mount point.

To complete the silent installation, complete the following steps:
1. Mount the PHPC installation media:
- If you install PHPC from an ISO file, mount the ISO on a directory such as /mnt. For example:
  # mount -o loop phpc-4.2.x64.iso /mnt
- If you install PHPC from DVD media, mount the DVD on a directory such as /mnt.
  Tip: Normally, the DVD media is automatically mounted to /media/PHPC-program_number. To start the installer, run: /media/PHPC-program_number/phpc-installer. If the DVD is mounted without execute permission, you must add python in front of the command (python /media/PHPC-program_number/phpc-installer).

2. Prepare the response file with installation options. The silent response file phpc-autoinstall.conf.example is located in the /docs directory in the Platform HPC ISO.

Note: If the OS distribution is found on more than one ISO image, use the first ISO image during the installation. After the PHPC installation is completed, you can add the next ISO image from the Web Portal.

3. Run the silent installation:
mnt/phpc-installer -f path_to_phpc-autoinstall.conf
where mnt is your mount point and path_to_phpc-autoinstall.conf is the location of your silent install file.
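For example, assuming the ISO is mounted at /mnt as shown in step 1, you can copy the sample file out of the ISO, edit it, and point the installer at your copy (the /root target path is illustrative):

cp /mnt/docs/phpc-autoinstall.conf.example /root/phpc-autoinstall.conf
vi /root/phpc-autoinstall.conf    # edit the file to set your installation options
/mnt/phpc-installer -f /root/phpc-autoinstall.conf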

Usage notes
- A valid installation path must be used. The installation path cannot include special characters such as a colon (:), exclamation point (!), or space, and the installation cannot begin until a valid path is used.

Response file for silent installation

Response file for IBM Platform HPC silent installation.

# IBM Platform HPC 4.2 Silent Installation Response File
# The silent installation response file includes all of the options that can
# be set during a Platform HPC silent installation.

# ******************************************************************** #
# NOTE: For any duplicated options, only the last value is used        #
# by the silent installation.                                          #


# NOTE: Configuration options cannot start with a space or tab.        #
# ******************************************************************** #

[General]

## depot_path
#
# The depot_path option sets the path of the Platform HPC depot (/install) directory.
#
# Usage notes:
#
# 1. The Platform HPC installation requires a minimum available disk space of 40 GB.
#
# 2. If you specify depot_path = /usr/local/pcm/, the installer places all Platform
#    HPC installation contents in the /usr/local/pcm/install directory and creates
#    a symbolic link named /install that points to the /usr/local/pcm/install directory.
#
# 3. If you specify depot_path = /install or depot_path = /, the installer places
#    all Platform HPC installation content into the /install directory.
#
# 4. If you have an existing /install mount point, by default, the installation
#    program places all installation contents into the /install directory regardless
#    of the depot_path value.

depot_path = /

## private_cluster_domain
#
# The private_cluster_domain option sets the provisioning network's domain
# name for the cluster. The domain must be a fully qualified domain name.
# This is a mandatory option.

private_cluster_domain = private.dns.zone

## provisioning_network_interface
#
# The provisioning_network_interface option sets one network device on the
# Platform HPC management node to be used for provisioning compute nodes.
# An accepted value for this option is a valid NIC name that exists on the
# management node. Values must use alphanumeric characters and cannot use
# quotation marks ("). The value 'lo' is not supported. This is a mandatory option.

provisioning_network_interface = eth0

## public_network_interface
#
# The public_network_interface option sets a network device on the Platform HPC
# management node that is used for accessing networks outside of the cluster.
# The value must be a valid NIC name that exists on the management node. The
# value cannot be the same as the value specified for the
# provisioning_network_interface option. The value cannot be 'lo' and cannot
# include quotation marks (").
# If this option is not defined, no public network interface is defined.

#public_network_interface = eth1

[Media]

## os_path
#
# The os_path option specifies the disc, ISO, or path of the first OS distribution used to
# install the Platform HPC node. The os_path is a mandatory option.
#
# The os_path option must use one of the following options:
# - full path to CD-ROM device, for example: /dev/cdrom
# - full path to an ISO file, for example: /root/rhel-server-6.4-x86_64-dvd.iso
# - full path to a directory where an ISO is mounted, for example: /mnt/basekit
#

os_path = /root/rhel-server-<version>-x86_64-dvd.iso

[Advanced]

# NOTE: By default, advanced options use a default value if no value is specified.

## excluded_kits
#
# The excluded_kits option lists specific kits that do not get installed.
# This is a comma-separated list. The kit name must be the same as the name
# defined in the kit configuration file. If this option is not defined,
# by default, all kits are installed.

#excluded_kits = kit1,kit2

## static_ip_range
#
# The static_ip_range option sets the IP address range used for provisioning
# compute nodes. If this option is not defined, by default, the value is
# automatically determined based on the provision network.

#static_ip_range = 10.10.0.3-10.10.0.200

## discovery_ip_range
#
# The discovery_ip_range option sets the IP address range that is used for provisioning
# compute nodes by node discovery. This IP address range cannot overlap with the IP range
# used for provisioning compute nodes as specified by the static_ip_range option. You
# can set the discovery_ip_range value to 'none' if you do not want to use node discovery.
# If this option is not defined, the default value is set to none.

#discovery_ip_range = 10.10.0.201-10.10.0.254

## enable_firewall
#
# The enable_firewall option enables Platform HPC specific rules for the management
# node firewall to the public interface. This option is only available if the
# public_network_interface option is set. If this option is not defined, by default,
# the value is set to yes.

#enable_firewall = yes

## enable_nat_forward
#
# The enable_nat_forward option enables NAT forwarding on the management node
# for all compute nodes. This option is only available if the enable_firewall
# option is set to yes. If this option is not defined, by default,
# the value is set to yes.

#enable_nat_forward = yes

## enable_bmcfsp
#
# The enable_bmcfsp option enables a BMC or FSP network with the default provisioning template.
# This option indicates which network is associated with the BMC or FSP network.
# If this option is not defined, by default, a BMC or FSP network is not enabled.
# Options include: new_network, public, provision
# new_network option: Creates a new BMC or FSP network by specifying the following
#   options for the new network:
#   [bmcfsp_subnet]
#   [bmcfsp_subnet_mask]
#   [bmcfsp_gateway]
#   [bmcfsp_iprange]
# public option: Creates a BMC or FSP network that uses the public network.
# provision option: Creates a BMC or FSP network that uses the provision network.

#enable_bmcfsp = new_network

## bmcfsp_subnet
#
# Specify the subnet for the BMC or FSP network. This value must be different from the
# value used by the public and provision networks. Otherwise, the BMC or FSP network
# setup fails. This option is required if enable_bmcfsp = new_network.

#bmcfsp_subnet = 192.168.1.0

## bmcfsp_subnet_mask
#
# Specify the subnet mask for the BMC network. This option is required if
# enable_bmcfsp = new_network.

#bmcfsp_subnet_mask = 255.255.255.0

## bmcfsp_gateway
#
# Specify the gateway IP address for the BMC or FSP network. This option is available if
# enable_bmcfsp = new_network.

#bmcfsp_gateway = 192.168.1.1

## bmcfsp_iprange
#
# Specify the IP address range for the BMC or FSP network. This option is required if
# enable_bmcfsp = new_network.

#bmcfsp_iprange = 192.168.1.3-192.168.1.254

## bmcfsp_hwprofile
#
# Specify a hardware profile to associate with the BMC or FSP network. This option is
# required if enable_bmcfsp = new_network.
#
# bmcfsp_hwprofile options:
# For x86-based systems, the following are supported hardware profile options:
#
# IBM_System_x_M4: IBM System x3550 M4, x3650 M4, x3750 M4
# IBM_Flex_System_x: IBM System x220, x240, x440
# IBM_iDataPlex_M4: IBM System dx360 M4
# IPMI: Any IPMI-based hardware
#
# For POWER systems, the following are supported hardware profile options:
# IBM_Flex_System_p: IBM System p260, p460

#bmcfsp_hwprofile = IBM_System_x_M4

## nameservers
#
# The nameservers option lists the IP addresses of your external name servers
# using a comma-separated list. If this option is not defined, by default,
# the value is set to none.

#nameservers = 192.168.1.40,192.168.1.50

## ntp_server
#
# The ntp_server option sets the NTP server. If this option is not defined,
# by default, this value is set to pool.ntp.org.

#ntp_server = pool.ntp.org

## enable_export_home
#
# The enable_export_home option specifies whether the /home mount point on the
# management node is exported. The exported home directory is used on all compute nodes.
# If this option is not defined, by default, this value is set to yes.

#enable_export_home = yes

## db_admin_password
#
# The db_admin_password option sets the Platform HPC database administrator password.
# If this option is not defined, by default, this value is set to pcmdbadm.

#db_admin_password = pcmdbadm

## compute_root_password
#
# The compute_root_password option sets the root account password for all compute nodes.
# If this option is not defined, by default, this value is set to Cluster.

#compute_root_password = Cluster

## cluster_name
#
# The cluster_name option sets the cluster name for the Platform HPC workload manager.
# The cluster name must be a string containing any of the following characters: a-z, A-Z,
# 0-9 or underscore (_). The string length cannot exceed 39 characters.
# If this option is not defined, by default, this value is set to phpc_cluster.

#cluster_name = phpc_cluster

## cluster_admin
#
# The cluster_admin option specifies the Platform HPC workload manager administrator.
# This can be a single user account name or a comma-separated list of user account
# names. The first user account name in the list is the primary LSF administrator and
# it cannot be the root user account. For example: cluster_admin=user_name1,user_name2...
# If this option is not defined, by default, this value is set to phpcadmin.

#cluster_admin = phpcadmin
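# --------------------------------------------------------------------- #
# Illustrative minimal response file (a sketch, not part of the shipped  #
# template): only the mandatory options are required, and every omitted  #
# option falls back to its documented default. All values below are      #
# placeholders for your environment.                                     #
#
#   [General]
#   private_cluster_domain = private.dns.zone
#   provisioning_network_interface = eth0
#
#   [Media]
#   os_path = /root/rhel-server-6.4-x86_64-dvd.iso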


Chapter 6. Verifying the installation

Ensure that you have successfully installed PHPC.

Note: You can find the installation log file phpc-installer.log in the /opt/pcm/log directory. This log file includes details and results about your PHPC installation.

To verify that your installation is working correctly, log in to the management node as a root user and complete the following tasks:

1. Source PHPC environment variables.

# . /opt/pcm/bin/pcmenv.sh

2. Check that the PostgreSQL database server is running.
# service postgresql status
(pid 13269) is running...

3. Check that the Platform HPC services are running.
# service phpc status

Show status of the LSF subsystem
lim (pid 31774) is running...
res (pid 27663) is running...
sbatchd (pid 27667) is running...

SERVICE  STATUS   WSM_PID  PORT  HOST_NAME
WEBGUI   STARTED  16550    8080  hjc-ip200

SERVICE     STATUS   WSM_PID  HOST_NAME
jobdt       STARTED  5836     hjc-ip200
plc         STARTED  5877     hjc-ip200
plc_group2  STARTED  5917     hjc-ip200
purger      STARTED  5962     hjc-ip200
vdatam      STARTED  6018     hjc-ip200

4. Log in to the Web Portal.
a. Open a supported web browser. Refer to the Release Notes for a list of supported web browsers.
b. Go to http://mgtnode-IP:8080, where mgtnode-IP is the real management node IP address. If you are connected to a public network, you can also navigate to http://mgtnode-hostname:8080, where mgtnode-hostname is the real management node hostname.
c. Log in as an administrator or a user. An administrator has administrative privileges that include managing cluster resources. A user account is not able to manage cluster resources but can manage jobs. By default, PHPC creates a default administrative account where the username and password is phpcadmin and phpcadmin. This default phpcadmin administrator account has all administrative privileges.
d. After you log in, the Resource Dashboard is displayed in the Web Portal.


Chapter 7. Taking the first steps after installation

After your installation is complete, as an administrator you can get started with managing your clusters.

The following tasks can be completed to get started with Platform HPC:
v Enabling LDAP support for user authentication
v Provision your nodes by adding the nodes to your cluster
v Modify your provisioning template settings
  – Manage image profiles
  – Manage network profiles
v Set up the HTTPS connection
v Submit jobs
v Create resource reports
v Create application templates

For more information about IBM Platform HPC, see the Administering IBM Platform HPC guide.

For the latest release information about Platform HPC 4.2, see Platform HPC on IBM Knowledge Center at http://www.ibm.com/support/knowledgecenter/SSDV85_4.2.0.


Chapter 8. Troubleshooting installation problems

Troubleshooting problems that occurred during the IBM Platform HPC installation.

To help troubleshoot your installation, you can view the phpc-installer.log file that is found in the /opt/pcm/log directory. This file logs the installation steps, and any warnings and errors that occurred during the installation.

Note: During the installation, the installation progress is logged in a temporary directory that is found here: /tmp/phpc-installer.

To view detailed error messages, run the installer in DEBUG mode when troubleshooting the installation. To run the installer in DEBUG mode, set the PCM_INSTALLER_DEBUG environment variable. When running in DEBUG mode, the installer does not clean up all the files when an error occurs. The DEBUG mode also generates extra log messages that can be used to trace the installer's execution. Set the PCM_INSTALLER_DEBUG environment variable to run the installer in DEBUG mode:
# PCM_INSTALLER_DEBUG=1 hpc-ISO-mount/phpc-installer

where hpc-ISO-mount is the mount point.

Note: Only use the PCM_INSTALLER_DEBUG environment variable to troubleshoot a PHPC installation that uses the interactive installer. Do not use it when installing PHPC with the silent installation.

Common installation issues include the following issues:
v The Platform HPC installer fails with the error message “Cannot reinstall Platform HPC. Platform HPC is already installed.” To install a new Platform HPC product, you must first uninstall the installed product.
v During management node pre-checking, one of the checks fails. Ensure that all Platform HPC requirements are met and rerun the installer. For more information about Platform HPC see the Release Notes®.

v Setting up shared NFS export fails during installation. To resolve this issue, complete the following steps:
1. Check the rpcbind status.
# service rpcbind status
2. If rpcbind is stopped, you must restart it and run the S03_base_nfs.rc.py script.
# service rpcbind start
# cd /opt/pcm/rc.pcm.d/
# pcmconfig -i ./S03_base_nfs.rc.py

v Cannot log in to the Web Portal, or view the Resource Dashboard in the Web Portal.
  – Configure your web browser. Your web browser must be configured to accept first-party and third-party cookies. In some cases, your browser default settings can block these cookies. In this case, you need to manually change this setting.
  – Restart the Web Portal. In most cases, the services that are required to run the Web Portal start automatically. However, if the Web Portal goes down, you can restart services and daemons manually. From the command line, issue the following command:
# pmcadmin stop ; pmcadmin start

Configuring your browser

To properly configure your browser, you must have the necessary plug-ins installed.

About this task

If you are using Firefox as your browser, you are required to have the Flash and JRE plug-ins installed. To install the Flash and JRE plug-ins, complete the following steps:

Procedure
1. Install the appropriate Adobe Flash Player plug-in from the Adobe website (http://get.adobe.com/flashplayer).
2. Check that the Flash plug-in is installed. Enter about:plugins into the Firefox address field. Shockwave Flash appears in the list.
3. Check that the Flash plug-in is enabled. Enter about:config into the Firefox address field. Find dom.ipc.plugins.enabled in the list and ensure that it has a value of true. If it is set to false, double-click it to enable.
4. Restart Firefox.
5. Download the appropriate JRE plug-in installer from the Oracle website (http://www.oracle.com/technetwork/java/javase/downloads/index.html). The 64-bit rpm installer (jre-7u2-linux-x64.rpm) is recommended.
6. Exit Firefox. To run Java™ applets within the browser, you must install the JRE plug-in manually. For more information about installing the JRE plug-in manually, go to http://docs.oracle.com/javase/7/docs/webnotes/install/linux/linux-plugin-install.html.
7. In the package folder, run the command:
rpm -ivh jre-7u2-linux-x64.rpm
8. When the installation is finished, enter the following commands:
cd /usr/lib64/mozilla/plugins
ln -s /usr/java/jre1.7.0_02/lib/amd64/libnpjp2.so
9. Check that the JRE plug-in was installed correctly. Start Firefox and enter about:plugins into the Firefox address field. Java(TM) Plug-in 1.7.0_02 is displayed in the list.


Chapter 9. Setting up a high availability environment

Set up an IBM Platform HPC high availability environment.

To set up a high availability (HA) environment in Platform HPC, complete the following steps.

Table 8. High availability environment roadmap

Ensure that the high availability requirements are met
    Requirements for setting up a shared storage device and a secondary management node must be met.

Preparing high availability
    Set up the secondary management node with an operating system and Platform HPC installation.

Enable a Platform HPC high availability environment
    Set up Platform HPC high availability on the primary and secondary management nodes.

Complete the high availability enablement
    After high availability is enabled, set up the compute nodes.

Verify Platform HPC high availability
    Ensure that Platform HPC high availability is running correctly on the primary and secondary management nodes.

Troubleshooting enablement problems
    Troubleshooting problems that occurred during a Platform HPC high availability environment setup.

Preparing high availability

Preparing an IBM Platform HPC high availability environment.

Before you begin

Ensure that all high availability requirements are met and a shared file system is created on a shared storage server.

About this task

To prepare a high availability environment, set up the secondary management node with the same operating system and PHPC version as on the primary management node. After the secondary management node is set up, the necessary SSH connections and configuration must be made between the primary management node and the secondary management node.

Procedure
1. Install the operating system on the secondary node. The secondary management node must use the same operating system and version as used on the primary management node. Both management nodes must use the same network and must be connected to the same network interface. Refer to “Installing and configuring the operating system on the management node” on page 13.


2. Ensure that the time and time zone are the same on the primary and secondary management nodes.
a. To verify the current time zone, run the cat /etc/sysconfig/clock command. To determine the correct time zone, refer to the information found in the /usr/share/zoneinfo directory.
b. If the time zone is incorrect, update the time zone. To update the time zone, set the correct time zone in the /etc/sysconfig/clock file. For example:
For RHEL:
ZONE="US/Eastern"
For SLES:
TIMEZONE="America/New_York"
c. Set the local time in the /etc/localtime file, for example:
ln -s /usr/share/zoneinfo/US/Eastern /etc/localtime
d. Set the date on both management nodes. Issue the following command on both management nodes.
date -s current_time
e. If the management nodes already have PHPC installed, run the following command on both management nodes to get the system time zone.
lsdef -t site -o clustersite -i timezone
If the system time zones are different, update the system time zone on the secondary node by running the following command:
chdef -t site -o clustersite timezone=US/Eastern

3. Install PHPC on the secondary node. You must use the same PHPC ISO file as you used for the management node. You can complete the installation using the installer or the silent installation. The installer includes an interactive display where you can specify your installation options; make sure to use the same installation options as the primary management node. Installation options for the primary management node are found in the installation log file (/opt/pcm/log/phpc-installer.log) on the primary management node. Refer to Chapter 4, “Performing an installation,” on page 17. If you use the silent installation to install PHPC, you can use the same response file for both management nodes. Refer to Chapter 5, “Performing a silent installation,” on page 29.

4. Verify that the management nodes can access the shared file systems. Issue the showmount -e nfs-server-ip command, where nfs-server-ip is the IP address of the NFS server that connects to the provision network.
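For example, if the NFS server's provision network address were 172.20.7.200 (a placeholder value), output similar to the following would confirm that the shared work and home directories are exported:

# showmount -e 172.20.7.200
Export list for 172.20.7.200:
/export/data *
/export/home *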

5. Add the secondary management node entry to the /etc/hosts file on the primary management node. Ensure that the failover node name can be resolved to the secondary management node provision IP address. Run the command below on the primary management node.
echo "secondary-node-provision-ip secondary-node-name" >> /etc/hosts
#ping secondary-node-name
where secondary-node-provision-ip is the provision IP address of the secondary node and secondary-node-name is the name of the secondary node.
For example: #echo "192.168.1.4 backupmn" >> /etc/hosts

6. Back up and configure a passwordless SSH connection between the primary management node and the secondary node.


# Back up the SSH key on the secondary node.
ssh secondary-node-name cp -rf /root/.ssh /root/.ssh.PCMHA

# Configure passwordless SSH between the management node and the secondary node.
cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys
scp -r /root/.ssh/* secondary-node-name:/root/.ssh

where secondary-node-name is the name of the secondary node.

7. Prepare the compute nodes. These steps are used for provisioned compute nodes that you do not want to reprovision.
a. Shut down the LSF services on the compute nodes.
# xdsh __Managed 'service lsf stop'
b. Unmount and remove the /home and /shared mount points on the compute nodes.
# updatenode __Managed 'mountnfs del'
# xdsh __Managed 'umount /home'
# xdsh __Managed 'umount /shared'

Enable a high availability environment

Enable an IBM Platform HPC high availability environment.

Before you begin

Ensure that the secondary management node is installed and set up correctly. Ensure that SSH connections are configured and network settings are correct between the primary management node and the secondary management node.

About this task

You can set up the high availability environment using the high availability management tool (pcmhatool). The tool defines and sets up a high availability environment between the management nodes using a predefined high availability definition file.

Note: The high availability management tool (pcmhatool) supports Bash shell only.

Procedure
1. Define a high availability definition file according to your high availability settings, including: virtual name, virtual IP address, and shared storage. The high availability definition file example ha.info.example is in the /opt/pcm/share/examples/HA directory. Refer to “High availability definition file” on page 67.
2. Set up a high availability environment. Setup can take several minutes to synchronize data to shared storage. Ensure that the shared storage server is always available. Issue the following command on the primary management node.
pcmhatool config -i ha-definition-file -s secondary-management-node
where ha-definition-file is the high availability definition file that you created in step 1, and secondary-management-node is the name of the secondary management node.


Usage notes
1. During a high availability enablement, some of the services start on the standby management node instead of the active management node. After a few minutes, they switch to the active management node.
2. If the management node crashes during the high availability environment setup, rerun the pcmhatool command and specify the same options. Running this command again cleans up the incomplete environment and starts the high availability enablement again.
3. You can find the enablement log file (pcmhatool.log) in the /opt/pcm/log directory. This log file includes details and results about the high availability environment setup.
4. If you enable high availability, the pcmadmin command cannot be used to restart the PERF loader. In a high availability environment, use the following commands to restart the PERF loader:
pcm-ha-support start --service PLC
pcm-ha-support start --service PLC2
pcm-ha-support start --service JOBDT
pcm-ha-support start --service PTC
pcm-ha-support start --service PURGER

What to do next

After the high availability enablement is complete, verify that the Platform HPC high availability environment is set up correctly.

Completing the high availability enablement

After high availability is enabled, you can set up and configure additional options, such as configuring an IPMI device as a fencing device to protect your high availability cluster from malfunctioning nodes and services. You can also set up email notification when a failover is triggered.

Configure IPMI as a fencing device

In a high availability cluster that has only two management nodes, it is important to configure fencing on an IPMI device. Fencing is the process of isolating a node or protecting shared resources from a malfunctioning node within a high availability environment. The fencing process locates the malfunctioning node and disables it.

Use remote hardware control to configure fencing on an IPMI device.

Before you begin

This fencing method requires both management nodes to be controlled remotely using IPMI. If your management nodes are on a POWER system or use a different remote power control method, you must create the corresponding fencing script accordingly.

Procedure
1. Create an executable fencing script on the shared file system. For example, you can use the example fencing script (fencing_ipmi.sh) that is found in the /opt/pcm/share/examples/HA directory. Run the following commands to create the script on a shared file system. Ensure that you modify fencing_ipmi.sh to your real environment settings.


mkdir -p /install/failover
cp /opt/pcm/share/examples/HA/fencing_ipmi.sh /install/failover
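The shipped fencing_ipmi.sh contains the reference logic for your hardware. As an illustration only, a stripped-down fencing action that powers off the peer management node through its BMC with the ipmitool command might look like the following sketch; the BMC address and credentials are placeholders:

#!/bin/sh
# Hypothetical sketch of an IPMI fencing action; not the shipped example.
# BMC_IP, BMC_USER, and BMC_PASS are placeholders for your environment.
BMC_IP=192.168.1.10
BMC_USER=USERID
BMC_PASS=PASSW0RD
# Power off the malfunctioning node through its BMC to isolate it.
ipmitool -I lanplus -H $BMC_IP -U $BMC_USER -P $BMC_PASS chassis power off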

2. Edit the HA controller service agent configuration file (ha_wsm.cfg) in the /opt/pcm/etc/failover directory on the active management node. In the [__Failover__] section, set the value of the fencing_action parameter to the absolute path of your custom script. For example:
fencing_action = /install/failover/fencing_ipmi.sh

3. Restart the PCMHA service agent.
pcm-ha-support start --service PCMHA

Create a failover notification

Create a notification, such as an email notification, for a triggered failover.

Before you begin

Note: Before you can send email for a triggered failover, you must configure your mail parameters. Refer to “Setting up SMTP mail settings.”

Procedure
1. Create an executable script on the shared file system. For example, you can use an executable script that sends an email when a failover is triggered. An example send email script (send_mail.sh) is in the /opt/pcm/share/examples/HA directory. Run the following commands to create the script on a shared file system. Ensure that you modify send_mail.sh to your real environment settings.
mkdir -p /install/failover
cp /opt/pcm/share/examples/HA/send_mail.sh /install/failover
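As an illustration only, a minimal failover notification might send a one-line message with the mail command; the recipient address is a placeholder, and the shipped send_mail.sh is the reference to adapt:

#!/bin/sh
# Hypothetical sketch of a failover notification; not the shipped example.
# Sends a one-line email to an administrator (placeholder address).
echo "Platform HPC failover triggered on $(hostname)" | mail -s "PHPC failover" admin@example.com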

2. Edit the high availability controller configuration file (ha_wsm.cfg) on the management node in the /opt/pcm/etc/failover directory. In the [__Failover__] section, set the failover_action parameter to the absolute path of your custom script. For example:
failover_action = /install/failover/send_mail.sh

3. Restart the high availability environment.
pcm-ha-support start --service PCMHA

Setting up SMTP mail settings

Specify SMTP mail settings in IBM Platform HPC.

Before you begin

To send email from Platform HPC, an SMTP server must already be installed and configured.

Procedure
1. Log in to the Web Portal as the system administrator.
2. In the System & Settings tab, click General Settings.
3. Expand the Mail Settings heading.
a. Enter the mail server (SMTP) host.
b. Enter the mail server (SMTP) port.
c. Enter the user account. This field is only required by some servers.
d. Enter the user account password. This field is only required by some servers.


4. Click Apply.

Results

SMTP server settings are configured. Platform HPC uses the configured SMTP server to send email. The account from which the mail is sent is the user email account. However, if the user email account is not specified, the management node name is used as the email address.

Verifying a high availability environment

Verify an IBM Platform HPC high availability environment.

Before you begin

You can find the enablement log file (pcmhatool.log) in the /opt/pcm/log directory. This log file includes details and results about your PHPC enablement.

Procedure
1. Log on to the management node as a root user.
2. Source Platform HPC environment variables.

# . /opt/pcm/bin/pcmenv.sh

3. Check that Platform HPC high availability is configured.
# pcmhatool info
Configuring status: OK
================================================================
HA group members: master, failover
Virtual node name: virtualmn
Virtual IP for <eth0:0>: 192.168.0.100
Virtual IP for <eth1:0>: 172.20.7.100
Shared work directory on: 172.20.7.200:/export/data
Shared home directory on: 172.20.7.200:/export/home

4. Check that Platform HPC services are running. All services must be in state STARTED, for example:
# service phpc status
Show status of the LSF subsystem
lim (pid 29003) is running...
res (pid 29006) is running...
sbatchd (pid 29008) is running...
SERVICE  STATE    ALLOC  CONSUMER  RGROUP  RESOURCE  SLOTS  SEQ_NO  INST_STATE  ACTI
PLC      STARTED  32     /Manage*  Manag*  master    1      1       RUN         9
PTC      STARTED  34     /Manage*  Manag*  master    1      1       RUN         8
PURGER   STARTED  35     /Manage*  Manag*  master    1      1       RUN         7
WEBGUI   STARTED  31     /Manage*  Manag*  master    1      1       RUN         4
JOBDT    STARTED  36     /Manage*  Manag*  master    1      1       RUN         6
PLC2     STARTED  33     /Manage*  Manag*  master    1      1       RUN         5
PCMHA    STARTED  28     /Manage*  Manag*  master    1      1       RUN         1
PCMDB    STARTED  29     /Manage*  Manag*  master    1      1       RUN         2
XCAT     STARTED  30     /Manage*  Manag*  master    1      1       RUN         3

5. Log in to the Web Portal.
a. Open a supported web browser. Refer to the Release Notes for a list of supported web browsers.
b. Go to http://mgtnode-virtual-IP:8080, where mgtnode-virtual-IP is the management node virtual IP address. If you are connected to a public network, you can also navigate to http://mgtnode-virtual-hostname:8080, where mgtnode-virtual-hostname is the virtual management node hostname.


If HTTPS is enabled, go to https://mgtnode-virtual-IP:8443 or https://mgtnode-virtual-hostname:8443 to log in to the web portal.

c. Log in as an administrator or user. An administrator has administrative privileges that include managing cluster resources. A user account is not able to manage cluster resources but can manage jobs.

d. After you log in, the Resource Dashboard is displayed in the Web Portal. Under the Cluster Health option, both management nodes are listed.

Troubleshooting a high availability environment enablement

Troubleshooting an IBM Platform HPC high availability environment.

To help troubleshoot your high availability enablement, you can view the log file that is found at /opt/pcm/log/pcmhatool.log. This file logs the high availability enablement steps, and any warnings and errors that occurred during the high availability enablement.

Common high availability enablement issues include the following issues:
v When you run a command on the management node, the command stops responding. To resolve this issue, log in to the management node with a new session. Ensure that the external NFS server is available and check that the network connection to the NFS server is available. If you cannot log in to the management node, try to reboot it.

v When you check the Platform HPC service status, one of the service agent statuses is set to ERROR. When the monitored service daemon is down, the service agent attempts to restart it several times. If it continually fails, the service agent is set to ERROR. To resolve this issue, check the service daemon log for more detail on how to resolve this problem. If the service daemon can be started manually, restart the service agent again by issuing the following command:
pcm-ha-support start --service service_name

where service_name is the name of the service that is experiencing the problem.
v Services are running on the standby management node after an automatic failover occurs due to a provision network failure. The Platform HPC high availability environment uses the provision network for heartbeat communication. A provision network failure causes the management nodes to lose communication, and fencing to stop working. To resolve this issue, stop the service agents manually by issuing the following command:
pcm-ha-support stop --service all

v Parsing high availability settings fails. To resolve this issue, ensure that the high availability definition file does not have any formatting errors, that it uses the correct virtual name, and that the virtual IP address does not conflict with an existing managed node. Also, ensure that the xCAT daemon is running by issuing the command tabdump site.

v During the pre-checking, one of the checks fails. To resolve this issue, ensure that all Platform HPC high availability requirements are met and rerun the high availability enablement tool.

v Syncing data to the shared directory fails. To resolve this issue, ensure that the network connection to the external shared storage is stable during the high availability enablement. If a timeout occurs during data synchronization, rerun the tool by setting the PCMHA_NO_CLEAN environment variable. This environment variable ensures that existing data on the NFS server is unchanged.
#PCMHA_NO_CLEAN=1 pcmhatool config -i ha-definition-file -s secondary-management-node

where ha-definition-file is the high availability definition file and secondary-management-node is the name of the secondary management node.

v Cannot log in to the Web Portal, or view the Resource Dashboard in the Web Portal. All Platform HPC services are started a few minutes after the high availability enablement. Wait a few minutes and try again. If the issue persists, run the high availability diagnostic tool to check the running status.
#pcmhatool check


Chapter 10. Upgrading IBM Platform HPC

Upgrade IBM Platform HPC from Version 4.1.1.1 to Version 4.2. Additionally, you can upgrade the product entitlement files for Platform Application Center or LSF.

Upgrading to Platform HPC Version 4.2

Upgrade from Platform HPC Version 4.1.1.1 to Version 4.2. The upgrade procedure ensures that the necessary files are backed up and restored.

The following upgrade paths are available:
v Upgrading from Platform HPC 4.1.1.1 to 4.2 without OS reinstall
v Upgrading from Platform HPC 4.1.1.1 to 4.2 with OS reinstall

If any errors occur during the upgrade process, you can roll back to an earlier version of Platform HPC.

For a list of all supported upgrade procedures, refer to the Release notes for Platform HPC 4.2 guide.

Upgrade planning

Upgrading IBM Platform HPC involves several steps that you must complete in the appropriate sequence. Review the upgrade checklist and upgrade roadmap before you begin the upgrade process.

Upgrading checklist

Use the following checklist to review the necessary requirements before upgrading.

In order to upgrade to the newest release of IBM Platform HPC, ensure you meet the following criteria before proceeding with the upgrade.

Table 9.

Hardware requirements
    Ensure that you meet the hardware requirements for Platform HPC. Refer to “PHPC requirements” on page 9.

Software requirements
    Ensure that you meet the software requirements for Platform HPC. Refer to “PHPC requirements” on page 9.

External storage device
    Obtain an external storage to store the necessary backup files. Make sure that the external storage is larger than the size of your backup files.

Obtain a copy of the Platform HPC 4.2 ISO
    Get a copy of Platform HPC 4.2.

(Optional) Obtain a copy of the latest supported version operating system
    Optionally, you can upgrade your operating system to the latest supported version.


Upgrading roadmap

Overview of the upgrade procedure.

Table 10. Upgrading Platform HPC

1. Upgrading checklist
    Ensure that you meet all of the requirements before upgrading Platform HPC.

2. Preparing to upgrade
    Before you can upgrade to the newest release of Platform HPC you must complete specific tasks.

3. Creating a Platform HPC 4.1.1.1 backup
    Create a backup of your current Platform HPC 4.1.1.1 settings and database. This backup is used to restore your existing settings to the newer version of Platform HPC.

4. Perform the Platform HPC upgrade
    Perform the upgrade using your chosen path:
    v Upgrading to Platform HPC 4.2 without OS reinstall
    v Upgrading to Platform HPC 4.2 with OS reinstall

5. Completing the upgrade
    Ensure that data is restored and services are restarted.

6. Verifying the upgrade
    Ensure that PHPC is successfully upgraded.

7. (Optional) Applying fixes
    After you upgrade PHPC, you can check if there are any fixes available through the IBM Fix Central.

Upgrading to Platform HPC 4.2 without OS reinstall

Upgrade your existing installation of IBM Platform HPC to the most recent version without reinstalling the operating system on the management node.

Note that if you are upgrading Platform HPC to Version 4.2 without reinstalling the operating system, the PMPI kit version is not upgraded.

Preparing to upgrade

Before upgrading your IBM Platform HPC installation, there are some steps you should follow to ensure your upgrade is successful.

Before you begin

To prepare for your upgrade, ensure that you have the following items:
v An external storage device to store the contents of your 4.1.1.1 backup.
v The Platform HPC 4.2 ISO file.
v If you are upgrading the operating system, make sure that you have the RHEL ISO file, and that you have a corresponding OS distribution created.

For additional requirements refer to “Upgrading checklist” on page 49.

About this task

Before you upgrade to the next release of Platform HPC, you must complete the following steps:


Procedure
1. Mount the Platform HPC installation media:

mount -o loop phpc-4.2.x64.iso /mnt

2. Upgrade the pcm-upgrade-tool package.
For RHEL:
rpm -Uvh /mnt/packages/repos/kit-phpc-4.2-rhels-6-x86_64/pcm-upgrade-tool-*.rpm
For SLES:
rpm -Uvh /mnt/packages/repos/kit-phpc-4.2-sles-11-x86_64/pcm-upgrade-tool-*.rpm

3. Set up the upgrade environment.
export PATH=${PATH}:/opt/pcm/libexec/

4. Prepare an external storage.
a. Ensure that the external storage has enough space for the backup files. To check how much space you require for the backup, run the following commands (see also the example after this step):
# du -sh /var/lib/pgsql/data
# du -sh /install/

Note: It is recommended that the size of your external storage is greater than the combined size of the database and the /install directory.

b. On the external storage, create a directory for the database backup.
mkdir /external-storage-mnt/db-backup
where external-storage-mnt is the backup location on your external storage.
c. Create a directory for the configuration file backup.
mkdir /external-storage-mnt/config-backup
where external-storage-mnt is the backup location on your external storage.

5. Determine which custom metrics you are using, if any. The custom metrics are lost in the upgrade process, and can manually be re-created after the upgrade is completed.

6. If you created any new users after Platform HPC was installed, you must include these new users in your backup.
/opt/xcat/bin/updatenode mn-host-name -F

where mn-host-name is the name of your management node.

Backing up Platform HPC

Create a backup of your current Platform HPC installation that includes a backup of the database and settings before you upgrade to a newer version of Platform HPC.

Note: The backup procedure does not back up any custom configurations. After the upgrade procedure is completed, the following custom configurations can be manually re-created:
v Customization to the PERF loader, including internal data collection and the purger configuration files
v Customization to the Web Portal Help menu navigation
v Addition of custom metrics
v Alert policies
v LDAP packages and configurations

Before you begin

Platform HPC does not back up or restore LSF configuration files or data. Before you upgrade, make sure to back up your LSF configuration files and data. After the upgrade is complete, you can apply your backed up configuration files and data.

Procedure
1. Stop Platform HPC services:

pcm-upgrade-tool.py services --stop

2. Create a database backup on the external storage. The database backup backs up the database data and schema.
pcm-upgrade-tool.py backup --database -d /external-storage-mnt/db-backup/
where external-storage-mnt is the backup location on your external storage. The backup includes database files and the backup configuration file pcm.conf.

3. Create a configuration file backup on the external storage.
pcm-upgrade-tool.py backup --files -d /external-storage-mnt/config-backup/
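Both backup commands write their output under the directories that you created in the preparation steps. A quick listing confirms that the configuration archive was written; the timestamped file name shown is hypothetical, following the pattern used by the restore examples later in this chapter:

# ls /external-storage-mnt/config-backup/
20130708-134535.tar.gz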

Performing the Platform HPC upgrade

Perform the upgrade without reinstalling the operating system and restore your settings.

Before you begin

Ensure that a backup of your previous settings was created before you proceed with the upgrade.

Procedure
1. Upgrade Platform HPC from 4.1.1.1 to 4.2. Complete the following steps:

a. Upgrade the database schema.
pcm-upgrade-tool.py upgrade --schema

b. If you created custom metrics in Platform HPC 4.1.1.1, you can manually re-create them. See more about Defining metrics in Platform HPC.

c. Start the HTTP daemon (HTTPd).
For RHEL:
# service httpd start
For SLES:
# service apache2 start

d. Start the xCAT daemon.
# service xcatd start

e. Upgrade Platform HPC.
pcm-upgrade-tool.py upgrade --packages -p /root/phpc-4.2.x64.iso

f. Copy the Platform HPC entitlement file to the /opt/pcm/entitlement directory.

2. Restore settings and database data. Complete the following steps:
a. Stop the xCAT daemon.

/etc/init.d/xcatd stop

b. Restore database data from a previous backup.


pcm-upgrade-tool.py restore --database -d /external-storage-mnt/db-backup/

where external-storage-mnt is the backup location on your external storage and db-backup is the location of the database backup.

c. Restore configuration files from a previous backup.
pcm-upgrade-tool.py restore --files -f /external-storage-mnt/config-backup/20130708-134535.tar.gz
where config-backup is the location of the configuration file backup.
3. Upgrade the LSF component from Version 9.1.1 to LSF 9.1.3.

a. Create an LSF installer configuration file (lsf.install.config) and add it to the /install/kits/kit-phpc-4.2/other_files directory. Refer to the lsf.install.config in the /install/kits/kit-phpc-4.1.1.1/other_files directory and modify the parameters as needed.

b. Replace the LSF postscripts in the /install/postscripts/ directory.
cp /install/kits/kit-phpc-4.2/other_files/KIT_phpc_lsf_setup /install/postscripts/
cp /install/kits/kit-phpc-4.2/other_files/KIT_phpc_lsf_config /install/postscripts/
cp /install/kits/kit-phpc-4.2/other_files/lsf.install.config /install/postscripts/phpc

c. Extract the LSF installer package to a temporary directory. The LSF installer package is located in /install/kits/kit-phpc-4.2/other_files/. For example:
tar xvzf /install/kits/kit-phpc-4.2/other_files/lsf9.1.3_lsfinstall_linux_x86_64.tar.Z -C /tmp/lsf

d. Run the LSF installation.
1) Navigate to the LSF installer directory.
cd /tmp/lsf
2) Copy the lsf.install.config configuration file from /install/kits/kit-phpc-4.2/other_files.
cp /install/kits/kit-phpc-4.2/other_files/lsf.install.config ./
3) Run the LSF installer.
./lsfinstall -f lsf.install.config

Completing the upgrade

To complete the upgrade to the next release of IBM Platform HPC, you must restore your system settings, database settings, and update the compute nodes.

Procedure
1. Restart Platform HPC services.

pcm-upgrade-tool.py services --reconfig

2. Refresh the database and configurations:
pcm-upgrade-tool.py upgrade --postupdate

3. If you previously installed GMF and the related monitoring packages with Platform HPC, you must manually reinstall these packages. To check which monitoring packages are installed, run the following commands:
rpm -qa | grep chassis-monitoring
rpm -qa | grep switch-monitoring
rpm -qa | grep gpfs-monitoring
rpm -qa | grep gmf

a. Uninstall the GMF package and the monitoring packages.
rpm -e --nodeps pcm-chassis-monitoring-1.2.1-1.x86_64
rpm -e --nodeps pcm-switch-monitoring-1.2.1-1.x86_64
rpm -e --nodeps pcm-gpfs-monitoring-1.2.1-1.x86_64
rpm -e --nodeps pcm-gmf-1.2-1.x86_64

b. Install the GMF package that is found in the /install/kits/kit-pcm-4.2/repos/kit-phpc-4.2-rhels-6-x86_64 directory.


rpm -ivh pcm-gmf-1.2-1.x86_64.rpm

c. Install the switch monitoring package that is found in the /install/kits/kit-pcm-4.2/repos/kit-phpc-4.2-rhels-6-x86_64 directory.
rpm -ivh pcm-switch-monitoring-1.2.1-1.x86_64.rpm

d. Install the chassis monitoring package that is found in the /install/kits/kit-pcm-4.2/repos/kit-phpc-4.2-rhels-6-x86_64 directory.
rpm -ivh pcm-chassis-monitoring-1.2.1-1.x86_64.rpm

e. If you have GPFS installed, run the following command to install the GPFS monitoring package. The GPFS monitoring package is available in the /install/kits/kit-pcm-4.2/repos/kit-phpc-4.2-rhels-6-x86_64 directory.
rpm -ivh pcm-gpfs-monitoring-1.2.1-1.x86_64.rpm

f. Restart Platform HPC services.
# pcmadmin service restart --group ALL

4. Upgrade compute nodes.
a. Check if the compute nodes are reachable. Compute node connections can get lost during the upgrade process; ping the compute nodes to ensure that they are connected to the management node:
xdsh noderange "/bin/ls"

For any compute nodes that have lost connection and cannot be reached, use the rpower command to reboot the node:
rpower noderange reset

where noderange is a comma-separated list of nodes or node groups.
b. Update compute nodes to include the Platform HPC package.

updatenode noderange -S

where noderange is a comma-separated list of nodes or node groups.
c. Restart monitoring services.

xdsh noderange "source /shared/ibm/platform_lsf/conf/ego/phpc_cluster/kernel/profile.ego; egosh ego shutdown -f; egosh ego start -f"

where noderange is a comma-separated list of nodes or node groups.
5. Restart the LSF cluster. Run the following command on the management node.

lsfrestart -f

6. An SSL V3 security issue exists within the Tomcat server when HTTPS is enabled. If you have not previously taken steps to fix this issue, you can skip this step. Otherwise, if you have HTTPS enabled, complete the following steps to fix this issue.
a. Edit the $GUI_CONFDIR/server.xml file. In the connector XML tag, set the sslProtocol value from SSL to TLS, and save the file. For example:
<Connector port="${CATALINA_HTTPS_START_PORT}" maxHttpHeaderSize="8192"
maxThreads="${CATALINA_MAX_THREADS}" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" disableUploadTimeout="true"
acceptCount="100" scheme="https" secure="true"
clientAuth="want" sslProtocol="TLS" algorithm="ibmX509"
compression="on" compressionMinSize="2000"
compressableMimeType="text/html,text/xml,text/css,text/javascript,text/plain"
connectionTimeout="20000" URIEncoding="UTF-8"/>

b. Restart the Web Portal service.
pcmadmin service stop --service WEBGUI
pcmadmin service start --service WEBGUI


Verifying the upgrade

Ensure that the upgrade procedure is successful and that Platform HPC is working correctly.

Note: A detailed log of the upgrade process can be found in the upgrade.log file in the /opt/pcm/log directory.

Procedure
1. Log in to the management node as a root user.
2. Source Platform HPC environment variables.

# . /opt/pcm/bin/pcmenv.sh

3. Check that the PostgreSQL database server is running.
# service postgresql status
(pid 13269) is running...

4. Check that the Platform HPC services are running.
# service xcatd status
xCAT service is running

# service phpc status
Show status of the LSF subsystem
lim (pid 15858) is running...
res (pid 15873) is running...
sbatchd (pid 15881) is running...
[ OK ]
SERVICE   STATE    ALLOC  CONSUMER  RGROUP  RESOURCE  SLOTS  SEQ_NO  INST_STATE  ACTI
RULE-EN*  STARTED  18     /Manage*  Manag*  *         1      1       RUN         17
PCMD      STARTED  17     /Manage*  Manag*  *         1      1       RUN         16
JOBDT     STARTED  12     /Manage*  Manag*  *         1      1       RUN         11
PLC       STARTED  13     /Manage*  Manag*  *         1      1       RUN         12
PURGER    STARTED  11     /Manage*  Manag*  *         1      1       RUN         10
PTC       STARTED  14     /Manage*  Manag*  *         1      1       RUN         13
PLC2      STARTED  15     /Manage*  Manag*  *         1      1       RUN         14
WEBGUI    STARTED  19     /Manage*  Manag*  *         1      1       RUN         18
ACTIVEMQ  STARTED  16     /Manage*  Manag*  *         1      1       RUN         15

5. Check that the correct version of Platform HPC is running.
# cat /etc/phpc-release

6. Log in to the Web Portal.
a. Open a supported web browser. Refer to the Release Notes for a list of supported web browsers.
b. Go to http://mgtnode-IP:8080, where mgtnode-IP is the real management node IP address. If you are connected to a public network, you can also navigate to http://mgtnode-hostname:8080, where mgtnode-hostname is the real management node hostname.
c. Log in as a root user. The root user has administrative privileges and maps to the operating system root user.
d. After you log in, the Resource Dashboard is displayed in the Web Portal.

Upgrading to Platform HPC 4.2 with OS reinstall

Upgrade your existing installation of IBM Platform HPC to the most recent version, and reinstall or upgrade the operating system on the management node.

Preparing to upgrade

Before upgrading your IBM Platform HPC installation, there are some steps you should follow to ensure your upgrade is successful.


Before you begin

To prepare for your upgrade, ensure that you have the following items:
v An external storage device to store the contents of your 4.1.1.1 backup.
v The Platform HPC 4.2 ISO file.
v If you are upgrading the operating system, make sure that you have the RHEL ISO file, and that you have a corresponding OS distribution created.

For additional requirements refer to “Upgrading checklist” on page 49.

About this task

Before you upgrade to the next release of Platform HPC, you must complete the following steps:

Procedure
1. Mount the Platform HPC installation media:

mount -o loop phpc-4.2.x64.iso /mnt

2. Upgrade the pcm-upgrade-tool package.
For RHEL:
rpm -Uvh /mnt/packages/repos/kit-phpc-4.2-rhels-6-x86_64/pcm-upgrade-tool-*.rpm
For SLES:
rpm -Uvh /mnt/packages/repos/kit-phpc-4.2-sles-11-x86_64/pcm-upgrade-tool-*.rpm

3. Set up the upgrade environment.
export PATH=${PATH}:/opt/pcm/libexec/

4. Prepare an external storage.
a. Ensure that the external storage has enough space for the backup files. To check how much space you require for the backup, run the following commands:
# du -sh /var/lib/pgsql/data
# du -sh /install/

Note: It is recommended that the size of your external storage is greater than the combined size of the database and the /install directory.

b. On the external storage, create a directory for the database backup.
mkdir /external-storage-mnt/db-backup
where external-storage-mnt is the backup location on your external storage.
c. Create a directory for the configuration file backup.
mkdir /external-storage-mnt/config-backup
where external-storage-mnt is the backup location on your external storage.

5. Determine which custom metrics you are using, if any. The custom metrics are lost in the upgrade process, and can manually be re-created after the upgrade is completed.

6. If you created any new users after Platform HPC was installed, you must include these new users in your backup.
/opt/xcat/bin/updatenode mn-host-name -F

where mn-host-name is the name of your management node.


Backing up Platform HPC

Create a backup of your current Platform HPC installation that includes a backup of the database and settings before you upgrade to a newer version of Platform HPC.

Note: The backup procedure does not back up any custom configurations. After the upgrade procedure is completed, the following custom configurations can be manually re-created:
v Customization to the PERF loader, including internal data collection and the purger configuration files
v Customization to the Web Portal Help menu navigation
v Addition of custom metrics
v Alert policies
v LDAP packages and configurations

Before you begin

Platform HPC does not back up or restore LSF configuration files or data. Before you upgrade, make sure to back up your LSF configuration files and data. After the upgrade is complete, you can apply your backed up configuration files and data.

Procedure
1. Stop Platform HPC services:

pcm-upgrade-tool.py services --stop

2. Create a database backup on the external storage. The database backup backs up the database data and schema.
pcm-upgrade-tool.py backup --database -d /external-storage-mnt/db-backup/
where external-storage-mnt is the backup location on your external storage. The backup includes database files and the backup configuration file pcm.conf.

3. Create a configuration file backup on the external storage.
pcm-upgrade-tool.py backup --files -d /external-storage-mnt/config-backup/

Performing the Platform HPC upgrade

Perform the upgrade, reinstalling the operating system, and restore your settings.

Before you begin

Ensure that you have prepared for the upgrade and have an existing backup of your previous settings.

Procedure
1. Reinstall the management node. Complete the following steps:
a. Record the following management node network settings: hostname, IP address, netmask, and default gateway.
b. If you are upgrading to a new machine, you must power off the old management node before you power on the new management node.
c. Reinstall the RHEL 6.5 operating system on the management node. Ensure you use the same network settings as the old management node, including: hostname, IP address, netmask, and default gateway.


Refer to “Installing and configuring the operating system on the management node” on page 13 for more information on installing an RHEL operating system.

2. Install Platform HPC 4.2. In this step, the RHEL operating system is specified. If you are using a different operating system, specify the operating system accordingly.
a. Locate the default silent installation template phpc-autoinstall.conf.example in the docs directory in the installation ISO.
mount -o loop phpc-4.2.x64.rhel.iso /mnt
cp /mnt/docs/phpc-autoinstall.conf.example ./phpc-autoinstall.conf

b. Edit the silent installation template and set the os_path parameter to the absolute path of the operating system ISO.
vi ./phpc-autoinstall.conf
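For example, the [Media] section of the template might then contain the following line; the ISO path is a placeholder that must point to your copy of the operating system ISO (RHEL 6.5 here, matching the system reinstalled in step 1):

[Media]
os_path = /root/rhel-server-6.5-x86_64-dvd.iso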

c. Start the installation by running the silent installation.
/mnt/phpc-installer -f ./phpc-autoinstall.conf

3. Set up your environment.
export PATH=${PATH}:/opt/pcm/libexec/

4. Restore settings and database data. Complete the following steps:
a. Stop Platform HPC services.
pcm-upgrade-tool.py services --stop

b. If you created custom metrics in Platform HPC 4.1.1.1, you can manually re-create them. Refer to the "Defining metrics in Platform HPC" section in the Administering Platform HPC guide for more information.

c. Restore database data from a previous backup.
pcm-upgrade-tool.py restore --database -d /external-storage-mnt/db-backup/
where external-storage-mnt is the backup location on your external storage and db-backup is the location of the database backup.

d. Restore configuration files from a previous backup.
pcm-upgrade-tool.py restore --files -f /external-storage-mnt/config-backup/20130708-134535.tar.gz
where config-backup is the location of the configuration file backup.

Related information:
“Installing and configuring the operating system on the management node” on page 13

Completing the upgrade

To complete the upgrade to the next release of IBM Platform HPC and complete the operating system reinstallation, you must restore your system settings, database settings, and update the compute nodes.

Procedure
1. Restart Platform HPC services.

pcm-upgrade-tool.py services --reconfig

2. By default, the OS distribution files are not backed up or restored. The OS distribution files can be manually created after the management node upgrade is complete and before upgrading the compute nodes. To recreate an OS distribution, run the following commands:
a. Mount the operating system.

# mount -o loop rhel-6.4-x86_64.iso /mnt


where rhel-6.4-x86_64.iso is the name of the OS distribution.
b. Create a new backup directory. The backup directory must be the same as the OS distribution path. To determine the OS distribution path, use the lsdef -t osdistro rhels6.4-x86_64 command to get the OS distribution path.
# mkdir /install/rhels6.4/x86_64

c. Synchronize the new directory.
# rsync -a /mnt/* /install/rhels6.4/x86_64

3. Refresh the database and configurations:
pcm-upgrade-tool.py upgrade --postupdate

4. Update compute nodes. If you want to upgrade the compute nodes to a higher OS version, you must reprovision them. Otherwise, complete this step.
a. Check if the compute nodes are reachable. Compute node connections can get lost during the upgrade process; ping the compute nodes to ensure that they are connected to the management node:
xdsh noderange "/bin/ls"

where noderange is a comma-separated list of nodes or node groupsb. Recover the SSH connection to the compute nodes.

xdsh noderange -K

where noderange is a comma-separated list of nodes or node groups.c. Update compute nodes to include the Platform HPC 4.2 package.

updatenode noderange -S

where noderange is a comma-separated list of nodes or node groups.
d. Restart monitoring services.

xdsh noderange "source /opt/pcm/ego/profile.platform;egosh ego shutdown -f;egosh ego start -f"

where noderange is a comma-separated list of nodes or node groups.
5. By default, the LDAP configurations are not backed up or restored. If you want to enable LDAP, refer to the "LDAP user authentication" section in the Administering Platform HPC guide.

6. An SSL V3 security issue exists within the Tomcat server when HTTPS is enabled. If you have not previously taken steps to fix this issue, you can skip this step. Otherwise, if you have HTTPS enabled, complete the following steps to fix this issue.
a. Edit the $GUI_CONFDIR/server.xml file. In the connector XML tag, set the sslProtocol value from SSL to TLS, and save the file. For example:
<Connector port="${CATALINA_HTTPS_START_PORT}" maxHttpHeaderSize="8192"
    maxThreads="${CATALINA_MAX_THREADS}" minSpareThreads="25" maxSpareThreads="75"
    enableLookups="false" disableUploadTimeout="true"
    acceptCount="100" scheme="https" secure="true"
    clientAuth="want" sslProtocol="TLS" algorithm="ibmX509"
    compression="on" compressionMinSize="2000"
    compressableMimeType="text/html,text/xml,text/css,text/javascript,text/plain"
    connectionTimeout="20000" URIEncoding="UTF-8"/>



b. Restart the Web Portal service.
pcmadmin service stop --service WEBGUI
pcmadmin service start --service WEBGUI

Verifying the upgrade
Ensure that the upgrade procedure is successful and that Platform HPC is working correctly.

Note: A detailed log of the upgrade process can be found in the upgrade.log file in the /opt/pcm/log directory.

Procedure
1. Log in to the management node as a root user.
2. Source Platform HPC environment variables.

# . /opt/pcm/bin/pcmenv.sh

3. Check that the PostgreSQL database server is running.
# service postgresql status
(pid 13269) is running...

4. Check that the Platform HPC services are running.
# service xcatd status
xCAT service is running

# service phpc status

Show status of the LSF subsystem
lim (pid 15858) is running...
res (pid 15873) is running...
sbatchd (pid 15881) is running...

[ OK ]
SERVICE  STATE   ALLOC CONSUMER RGROUP RESOURCE SLOTS SEQ_NO INST_STATE ACTI
RULE-EN* STARTED 18    /Manage* Manag* *        1     1      RUN        17
PCMD     STARTED 17    /Manage* Manag* *        1     1      RUN        16
JOBDT    STARTED 12    /Manage* Manag* *        1     1      RUN        11
PLC      STARTED 13    /Manage* Manag* *        1     1      RUN        12
PURGER   STARTED 11    /Manage* Manag* *        1     1      RUN        10
PTC      STARTED 14    /Manage* Manag* *        1     1      RUN        13
PLC2     STARTED 15    /Manage* Manag* *        1     1      RUN        14
WEBGUI   STARTED 19    /Manage* Manag* *        1     1      RUN        18
ACTIVEMQ STARTED 16    /Manage* Manag* *        1     1      RUN        15

5. Check that the correct version of Platform HPC is running.
# cat /etc/phpc-release

6. Log in to the Web Portal.
a. Open a supported web browser. Refer to the Release Notes for a list of supported web browsers.
b. Go to http://mgtnode-IP:8080, where mgtnode-IP is the real management node IP address. If you are connected to a public network, you can also navigate to http://mgtnode-hostname:8080, where mgtnode-hostname is the real management node hostname.

c. Log in as a root user. The root user has administrative privileges and maps to the operating system root user.

d. After you log in, the Resource Dashboard is displayed in the Web Portal.

Troubleshooting upgrade problems
Troubleshoot problems that occur when upgrading to the new release of IBM Platform HPC.



To help troubleshoot your upgrade process, you can view the upgrade.log file that is found in the /opt/pcm/log directory. This file logs informational messages about the upgrade procedure, as well as any warnings or errors that occur during the upgrade process.

Common upgrade problems include the following issues:
v Cannot log in to the Web Portal after upgrading to Platform HPC Version 4.2. To resolve this issue, try the following resolutions:
– Restart the Web Portal. In most cases, the services that are required to run the Web Portal start automatically. However, if the Web Portal goes down, you can restart services and daemons manually. From the command line, issue the following command:
# pcmadmin service restart --service WEBGUI

Then run the following command from the management node to resolve this issue:
/opt/pcm/libexec/pcmmkcert.sh /root/.xcat/keystore_pcm

v After upgrading to Platform HPC Version 4.2, some pages in the Web Portal do not display or display old data. To resolve this issue, clear your web browser cache and log in to the Web Portal again.

v After upgrading to Platform HPC Version 4.2, some pages in the Web Portal do not display. Run the following command from the management node to resolve this issue:
/opt/pcm/libexec/pcmmkcert.sh /root/.xcat/keystore_pcm

v If any of the following errors are found in the upgrade.log file that is found in the /opt/pcm/log directory, they can be ignored and no further actions need to be taken.
psql:/external-storage-mnt/db-backup/pmc_group_role.data.sql:25: ERROR: permission denied: "RI_ConstraintTrigger_17314" is a system trigger

psql:/external-storage-mnt/db-backup/pmc_group_role.data.sql:29: ERROR: permission denied: "RI_ConstraintTrigger_17314" is a system trigger

psql:/opt/pcm/etc/upgrade/postupdate/4.2/update-pcmgui-records.sql:7: ERROR: duplicate key value violates unique constraint "ci_purge_register_pkey"
DETAIL: Key (table_name)=(pcm_node_status_history) already exists.

psql:/opt/pcm/etc/upgrade/postupdate/4.2/update-pcmgui-records.sql:11: ERROR: duplicate key value violates unique constraint "ci_purge_register_pkey"
DETAIL: Key (table_name)=(lim_host_config_history) already exists.

psql:/opt/pcm/etc/upgrade/postupdate/4.2/update-pcmgui-records.sql:13: ERROR: duplicate key value violates unique constraint "pmc_role_pkey"
DETAIL: Key (role_id)=(10005) already exists.

psql:/opt/pcm/etc/upgrade/postupdate/4.2/update-pcmgui-records.sql:15: ERROR: duplicate key value violates unique constraint "pmc_resource_permission_pkey"
DETAIL: Key (resperm_id)=(11001-5) already exists.

psql:/opt/pcm/etc/upgrade/postupdate/4.2/update-pcmgui-records.sql:18: ERROR: duplicate key value violates unique constraint "pmc_role_permission_pkey"
DETAIL: Key (role_permission_id)=(10009) already exists.

Rollback to Platform HPC 4.1.1.1
Revert to the earlier version of Platform HPC.

Before you begin

Before you roll back to Platform HPC 4.1.1.1, ensure that you have both the Platform HPC 4.1.1.1 ISO and the original operating system ISO.



Procedure
1. To reinstall the management node, complete the following steps:

a. Record the following management node network settings: hostname, IP address, netmask, and default gateway.

b. Reinstall the original operating system on the management node. Ensure you use the same network settings as the old management node, including: hostname, IP address, netmask, and default gateway. Refer to "Installing and configuring the operating system on the management node" on page 13 for more information on installing an operating system.

2. To install Platform HPC 4.1.1.1, complete the following steps:
a. Locate the default silent installation template phpc-autoinstall.conf.example in the docs directory in the installation ISO.
mount -o loop phpc-4.1.1.1.x86_64.iso /mnt
cp /mnt/docs/phpc-autoinstall.conf.example ./phpc-autoinstall.conf

b. Edit the silent installation template and set the os_kit parameter to the absolute path of the operating system ISO.
vi ./phpc-autoinstall.conf

c. Start the installation by running the installation program, specifying the silent installation file.
/mnt/phpc-installer -f ./phpc-autoinstall.conf

3. To restore settings and database data, complete the following steps:
a. Set up the environment:

export PATH=${PATH}:/opt/pcm/libexec/

b. Stop Platform HPC services:
pcm-upgrade-tool.py services --stop

c. If you created custom metrics in Platform HPC 4.1.1.1, you can manually re-create them. Refer to the "Defining metrics in Platform HPC" section in the Administering Platform HPC guide for more information.

d. Restore database data from a previous backup.
pcm-upgrade-tool.py restore --database -d /external-storage-mnt/db-backup/

where external-storage-mnt is the backup location on your external storage and db-backup is the location of the database backup.

e. Restore configuration files from a previous backup.
pcm-upgrade-tool.py restore --files -f /external-storage-mnt/config-backup/20130708-134535.tar.gz

where config-backup is the location of the configuration file backup.
4. Restart Platform HPC services:

pcm-upgrade-tool.py services --reconfig

5. Reinstall compute nodes, if needed.
v If the compute nodes have Platform HPC 4.1.1.1 installed, recover the SSH connection for all compute nodes:
xdsh noderange -K

where noderange is a comma-separated list of nodes or node groups.
v If the compute nodes have Platform HPC 4.2 installed, they must be reprovisioned to use Platform HPC 4.1.1.1.



Upgrading entitlement
In IBM Platform HPC, you can upgrade your LSF or PAC entitlement file from Express to Standard.

Upgrading LSF entitlement
In IBM Platform HPC, you can upgrade your LSF entitlement file from Express to Standard.

Before you begin

To upgrade your product entitlement for LSF, contact IBM client services for more details and to obtain the entitlement file.

About this task

To upgrade your entitlement, as a root user, complete the following steps on the Platform HPC management node:

Procedure
1. Copy the new entitlement file to the unified entitlement path (/opt/pcm/entitlement/phpc.entitlement).
2. Restart LSF.

lsfrestart

3. Restart the Web Portal.
pmcadmin stop
pmcadmin start
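For example, the complete procedure might look like the following sequence. The source path /tmp/phpc.entitlement is a hypothetical download location; substitute the path where you saved the entitlement file that IBM client services provided.
# cp /tmp/phpc.entitlement /opt/pcm/entitlement/phpc.entitlement
# lsfrestart
# pmcadmin stop
# pmcadmin start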

Results

Your LSF entitlement is upgraded to the standard version.

Upgrading PAC entitlement
In IBM Platform HPC, after upgrading your Platform Application Center (PAC) entitlement file from Express Edition to Standard Edition, ensure that you are able to connect to the remote jobs console.

Before you begin

To upgrade your product entitlement for PAC, contact IBM client services for more details and to obtain the entitlement file.

About this task

After you upgrade to PAC Standard, complete the following steps to connect to the remote jobs console.

Procedure
1. Log in to the Web Portal.
2. From the command line, update the vnc_host_ip.map configuration file in the $GUI_CONFDIR/application/vnc directory. The vnc_host_ip.map file must specify the IP address that is mapped to the host name.



# cat vnc_host_ip.map
# This file defines which IP will be used for the host, for example
#hostname1=192.168.1.2
system3750=9.111.251.141

3. Kill any VNC server sessions if they exist.
vncserver -kill :${session_id}

4. Go to the /opt/pcm/web-portal/gui/work/.vnc/${USER}/ directory. If the VNC session files, vnc.console and vnc.session, exist, then delete them.
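For example, a minimal way to remove these session files, assuming they exist in that directory:
# rm -f /opt/pcm/web-portal/gui/work/.vnc/${USER}/vnc.console /opt/pcm/web-portal/gui/work/.vnc/${USER}/vnc.session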

5. Restart the VNC server.
# vncserver :1; vncserver :2

6. Restart the Web Portal.
7. Stop the iptables service on the management node.
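For example, on Red Hat Enterprise Linux 6.x the iptables service can be stopped with the service command (an illustrative sketch; use the equivalent firewall command for your operating system):
# service iptables stop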

8. Verify that the remote job console is running.
a. Go to the Jobs tab, and click Remote Job Consoles.
b. Click Open My Console.
c. If you get the following error, then you are missing the VncViewer.jar file.

Cannot find the required VNC jar file:
/opt/pcm/web-portal/gui/3.0/tomcat/webapps/platform/pac/vnc/lib/VncViewer.jar.
For details about configuring remote consoles, see "Remote Console".

To resolve this error, copy the VncViewer.jar file to the /opt/pcm/web-portal/gui/3.0/tomcat/webapps/platform/pac/vnc/lib directory. Issue the following command:
# cp /opt/pcm/web-portal/gui/3.0/tomcat/webapps/platform/viewgui/common/applet/VncViewer.jar /opt/pcm/web-portal/gui/3.0/tomcat/webapps/platform/pac/vnc/lib/VncViewer.jar

Results

Using PAC Standard Edition, you are able to connect to the remote jobs console.



Chapter 11. Applying fixes

Check for any new fixes that can be applied to your Platform HPC installation.


About this task

Fixes are available for download from the IBM Fix Central website.

Note: In a high availability environment, ensure that the same fixes are applied on the primary management node and the failover node.

Procedure
1. Go to IBM Fix Central.
2. Locate the product fixes by selecting the following options:
a. Select Platform Computing as the product group.
b. Select Platform HPC as the product name.
c. Select 4.2 as the installed version.
d. Select your platform.

3. Download each individual fix.
4. Apply the fixes from the command line.

a. Extract the fix tar file.
b. From the directory where the fix files are extracted, run the installation script to install the fix. A sketch of these two steps is shown below.
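The following commands are an illustrative sketch only; the fix file name phpc-4.2-fix.tar.gz and the installer script name install.sh are hypothetical, as the actual names vary by fix. Refer to the readme that accompanies each fix for the exact installation steps.
# mkdir /tmp/phpc-fix
# tar -xvf phpc-4.2-fix.tar.gz -C /tmp/phpc-fix
# cd /tmp/phpc-fix
# ./install.sh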





Chapter 12. References

Configuration files

High availability definition file
The high availability definition file specifies values to configure high availability.

High availability definition file

The high availability definition file specifies values to configure a high availability environment.
virtualmn-name:
nicips.eth0:0=eth0-IP-address
nicips.eth1:0=eth1-IP-address
sharefs_mntp.work=work-directory
sharefs_mntp.home=home-directory

virtualmn-name:
Specifies the virtual node name of the active management node, where virtualmn-name is the name of the virtual node.

The virtual node name must be a valid node name. It cannot be a fully qualified domain name; it must be the short name without the domain name.

This line must end with a colon (:).

nicips.eth0:0=eth0-IP-address
Specifies the virtual IP address of a virtual NIC connected to the management node, where eth0-IP-address is an IP address.

For example: nicips.eth0:0=172.20.7.5

Note: A virtual NIC does not need to be created and the IP address does not need to be configured. The pcmhatool command automatically creates the needed configurations.

nicips.eth1:0=eth1-IP-address
Specifies the virtual IP address of a virtual NIC connected to the management node, where eth1-IP-address is an IP address.

For example: nicips.eth1:0=192.168.1.5

Note: A virtual NIC does not need to be created and the IP address does not need to be configured. The pcmhatool command automatically creates the needed configurations.

sharefs_mntp.work=work-directory
Specifies the shared storage location for system work data, where work-directory is the shared storage location. For example: 172.20.7.200:/export/data.

If the same shared directory is used for both user home data and system work data, specify this parameter as the single shared directory.

Only NFS is supported.



sharefs_mntp.home=home-directory
Specifies the shared storage location for user home data, where home-directory is the shared storage location. For example: 172.20.7.200:/export/home.

If the same shared directory is used for both user home data and system work data, do not specify this parameter. The specified sharefs_mntp.work parameter is used as the location for both user home data and system work data.

Only NFS is supported.

Example

The following is an example of a high availability definition file:
# A virtual node name
virtualmn:
# Virtual IP address of a virtual NIC connected to the management node.
nicips.eth0:0=192.168.0.100
nicips.eth1:0=172.20.7.100
# Shared storage for system work data
sharefs_mntp.work=172.20.7.200:/export/data
# Shared storage for user home data
sharefs_mntp.home=172.20.7.200:/export/home

Commands

pcmhatool
An administrative command interface to manage a high availability environment.

Synopsis

pcmhatool [-h | --help] | [-v | --version]

pcmhatool subcommand [options]

Subcommand List

pcmhatool config -i | --import HAINFO_FILENAME -s | --secondary SMN_NAME [-q | --quiet] [-h | --help]

pcmhatool reconfig -s|--standby SMN_NAME [-q|--quiet] [-h|--help]

pcmhatool info [-h|--help]

pcmhatool failto -t|--target SMN_NAME [-q|--quiet] [-h|--help]

pcmhatool failmode -m|--mode FAILOVER_MODE [-h|--help]

pcmhatool status [-h|--help]

pcmhatool check [-h|--help]



Description

The pcmhatool command manages a high availability environment. It is used to enable high availability, display settings, set the failover mode, trigger a failover, and show high availability data and running status.

Options

-h | --help
Displays the pcmhatool command help information.

-v | --version
Displays the pcmhatool command version information.

Subcommand Options

config -i HAINFO_FILENAME -s SMN_NAME
Specifies high availability settings to be used to enable high availability between the primary management node and the secondary management node, where HAINFO_FILENAME is the high availability definition file and SMN_NAME is the name of the secondary management node.

-i|--import HAINFO_FILENAME
Specifies the import file name of the high availability definition file, where HAINFO_FILENAME is the name of the high availability definition file.

-s|--secondary SMN_NAME
Specifies the secondary management node name, where SMN_NAME is the name of the secondary management node.

reconfig -s|--standby SMN_NAME
Enables high availability on the standby management node after the management node is reinstalled, where SMN_NAME is the name of the standby management node.

info
Displays high availability settings, including the virtual IP address, the management node name, and a list of shared directories.

failto -t|--target SMN_NAME
Sets the specified standby management node to an active management node, where SMN_NAME is the current standby management node.

failmode -m|--mode FAILOVER_MODE
Sets the failover mode, where FAILOVER_MODE is set to auto for automatic failover or manual for manual failover. In automatic mode, the standby node takes over the cluster when it detects that the active node has failed. In manual mode, the standby node only takes over the cluster if the pcmhatool failto command is issued.

status
Displays the current high availability status, including the state of the nodes, the failover mode, and the status of running services. Nodes that are in the unavail state are unavailable, which indicates a node failure or a lost network connection.

check
Displays diagnostic information related to the high availability environment, including current status data, failure data, and correction data.
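For example, a typical sequence of pcmhatool commands might look like the following, based on the synopsis above. The definition file name ha.info and the standby node name mn02 are hypothetical values used for illustration:
pcmhatool config -i ha.info -s mn02
pcmhatool status
pcmhatool failmode -m auto
pcmhatool failto -t mn02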





Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
1623-14, Shimotsuruma, Yamato-shi
Kanagawa 242-8502 Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.



IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation
Intellectual Property Law
Mail Station P300
2455 South Road,
Poughkeepsie, NY 12601-5400
USA

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurement may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.

Each copy or any portion of these sample programs or any derivative work must include a copyright notice as follows:

© (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. © Copyright IBM Corp. _enter the year or years_.

If you are viewing this information softcopy, the photographs and color illustrations may not appear.

Trademarks
IBM, the IBM logo, and ibm.com® are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at http://www.ibm.com/legal/copytrade.shtml.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

LSF, Platform, and Platform Computing are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Privacy policy considerations
IBM Software products, including software as a service solutions, (“Software Offerings”) may use cookies or other technologies to collect product usage information, to help improve the end user experience, to tailor interactions with the end user, or for other purposes. In many cases no personally identifiable information is collected by the Software Offerings. Some of our Software Offerings can help enable you to collect personally identifiable information. If this Software Offering uses cookies to collect personally identifiable information, specific information about this offering’s use of cookies is set forth below.

Depending upon the configurations deployed, this Software Offering may use session and persistent cookies that collect each user’s user name, for purposes of session management. These cookies cannot be disabled.



If the configurations deployed for this Software Offering provide you as customer the ability to collect personally identifiable information from end users via cookies and other technologies, you should seek your own legal advice about any laws applicable to such data collection, including any requirements for notice and consent.

For more information about the use of various technologies, including cookies, for these purposes, see IBM’s Privacy Policy at http://www.ibm.com/privacy and IBM’s Online Privacy Statement at http://www.ibm.com/privacy/details, the section entitled “Cookies, Web Beacons and Other Technologies”, and the “IBM Software Products and Software-as-a-Service Privacy Statement” at http://www.ibm.com/software/info/product-privacy.




