Issued: November 2013

IBM® Platform LSF®

Best practices: Deploying IBM Platform LSF on a Linux HPC Cluster

Jin Ma

Software Developer: LSF

Systems & Technology Group

Chong Chen

Principal Architect: LSF Product Family

Systems & Technology Group


Contents

Executive Summary
Introduction
Plan LSF Installation
Prepare LSF Installation
Perform LSF Installation
Fine-tune LSF Configuration
Set up services to start and shut down LSF Clusters
Conclusion
Further reading
Contributors
Notices
Contacting IBM


Executive Summary

IBM Platform LSF provides a rich set of features to satisfy the specific requirements of HPC customers. Careful planning and deployment of LSF is the most significant factor in the smooth, efficient, and scalable operation of an LSF cluster. Such planning and deployment involves system setup and tuning both inside and outside of LSF so that LSF works optimally with the hardware and software in the cluster.

Before using this best practice guide, you should have taken LSF Administrators Training and have knowledge and experience of HPC system administration. This document describes the best practices for planning, installing, and configuring IBM Platform LSF on large-scale clusters running HPC workloads.


Introduction

This document serves as a best practice guide for cluster administrators to do the following tasks:

Plan LSF installation

Prepare LSF installation

Perform LSF installation

Fine-tune LSF configuration parameters

Start the LSF cluster and put it into production

Most HPC clusters consist of a large number of nodes, so it is impractical to manually set up a large cluster node by node. A cluster deployment and management tool should already be installed and operational, for example, IBM Platform Cluster Manager (PCM) or IBM Extreme Cloud Administration Toolkit (XCAT).


Plan LSF Installation

A typical LSF HPC cluster consists of an LSF master node and master candidate nodes for failover, login and job submission nodes, and the backend compute nodes. The LSF master node and the master candidate nodes must be very reliable machines with high computing power, relatively large memory, fast access to a shared file system, and network connectivity to all the login and compute nodes.

HPC clusters normally run large-scale parallel jobs with a heavy load on CPU, memory, network, and file I/O, so your LSF installation plan should focus on minimizing interference between LSF and the applications that LSF runs.

LSF install location (LSF_TOP directory)

By default, LSF installs all binaries and places the configuration (LSF_CONFDIR), log (LSF_LOGDIR), and work (LSB_SHAREDIR) directories under LSF_TOP. You should set LSF_TOP to a directory on a reliable shared file system that is accessible from all nodes. LSF_TOP should also have minimal interference with the resources that application jobs will use. For instance, if an LSF application uses GPFS as scratch space, consider putting the LSF installation on a separate shared file system to minimize the impact, as in the sketch below.
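As a minimal sketch, the corresponding installer setting might look like the following install.config fragment; the path /shared/lsf is a hypothetical mount point on a shared file system that is separate from the application scratch file system:

# install.config fragment: put LSF_TOP on a dedicated shared file system
# (/shared/lsf is a hypothetical path; adjust to your site)
LSF_TOP="/shared/lsf"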

LSF work directory LSB_SHAREDIR

This directory must reside on a very stable and reliable shared file system to allow fast read/write

access from the master node and master candidates for failover and recovery.

Read access from login nodes may be required if your site allows end users to query job history

and accounting information from event files and accounting files.

LSF log directory LSF_LOGDIR

The default LSF log directory is under LSF_TOP/log. You should consider defining the directory

on the local disk of each compute node so that LSF file I/O does not hit one physical disk all the

time.
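For example, a setting such as the following in lsf.conf keeps daemon logs on local disk; the path /var/log/lsf is an assumption, and the directory must exist and be writable by the LSF administrator on every node:

# lsf.conf: write LSF daemon logs to each node's local disk (path is an assumption)
LSF_LOGDIR=/var/log/lsf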

$HOME

By default, $HOME is used as LSF_SPOOLDIR to store temporary job files (stdout, stderr, job

scripts). $HOME should be shared among all nodes.

LSF_TMPDIR

By default, the system-defined temporary directory (/tmp) is used as the LSF temporary

directory as well. You can customize the LSF_TMPDIR if /tmp is critical for other services (e.g.,

IBM GPFS).
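For example, the following lsf.conf setting moves the LSF temporary directory off /tmp; the path /var/tmp/lsf is an assumption and must exist on every node:

# lsf.conf: use a dedicated local temporary directory instead of /tmp (path is an assumption)
LSF_TMPDIR=/var/tmp/lsf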


LSF host names and LSF communication interface

Nodes are commonly equipped with multiple network interface cards (NICs) for different purposes: management, application, InfiniBand, and so on. LSF supports multiple NICs by mapping all host aliases and IP addresses to a specific official host name. This mapping can be configured in the LSF_ENVDIR/hosts file.

The LSF hosts file is like the OS /etc/hosts file: the host IP address is the first column, the official host name is the second column, and the remaining columns are host aliases. This file is required especially when DNS or the OS (/etc/hosts) returns a host name (network interface) that is not to be used by LSF. Once the LSF hosts file is defined, you can use any host names and aliases defined in it; LSF will recognize them and map them to the official host name.

It is highly recommended that LSF use the IPoIB protocol over the InfiniBand interface if it is installed and enabled. The LSF hosts file must list each node's IPoIB name in the column immediately after the IP address column, and the routing table should also be properly configured.

The following is an example of identifying the InfiniBand network and setting LSF host names and communication to IPoIB.

Run ifconfig to identify the InfiniBand NIC interface:

-bash-4.1$ ifconfig
eth0      Link encap:Ethernet  HWaddr E4:1F:13:EB:DE:8A
          inet addr:10.18.0.21  Bcast:10.18.255.255  Mask:255.255.0.0
          …
ib0       Link encap:InfiniBand  HWaddr 80:00:00:48:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00
          inet addr:10.12.0.21  Bcast:10.12.255.255  Mask:255.255.0.0
          …
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host

This is how you should configure the $LSF_ENVDIR/hosts file:

cat $LSF_ENVDIR/hosts
#---------------------------------------------------------------#
# LSF Master host
#---------------------------------------------------------------#
10.12.0.1    master-ib    master-ib.company.com
10.18.0.1    master-ib    master    master.company.com
#---------------------------------------------------------------#
# Compute Nodes
#---------------------------------------------------------------#
10.12.0.21   n0101-ib     n0101-ib.company.com
10.18.0.21   n0101-ib     n0101.company.com    n01    # eth1 of the node is ignored here

Use n0101-ib in all LSF configuration files.

Later, end users can use either n0101-ib or n0101 in LSF commands, as shown below:


# lsload n0101-ib
HOST_NAME   status  r15s   r1m  r15m   ut    pg  ls    it   tmp   swp   mem
n0101-ib        ok   0.8   0.9   1.1   3%   0.0   1     0 4482M  4.9G 18.6G
# lsload n0101
HOST_NAME   status  r15s   r1m  r15m   ut    pg  ls    it   tmp   swp   mem
n0101-ib        ok   0.8   0.9   1.1   3%   0.0   1     0 4482M  4.9G 18.6G

Use the ip route command to check the routing table and make sure that communication using IB addresses goes through the InfiniBand card.

-bash-4.1$ ip route
10.18.0.0/16 dev eth0 proto kernel scope link src 10.14.1.1
10.12.0.0/16 dev ib0 proto kernel scope link src 10.12.1.1
default
    nexthop via 10.12.0.17 dev ib0 weight 1
    nexthop via 10.12.0.18 dev ib0 weight 1
    nexthop via 10.12.0.19 dev ib0 weight 1
    nexthop via 10.12.0.20 dev ib0 weight 1
    nexthop via 10.12.0.21 dev ib0 weight 1
    nexthop via 10.12.0.22 dev ib0 weight 1

Prepare LSF Installation

On the LSF master node, the master candidate nodes, and all compute nodes, the following OS-level configurations are recommended to accommodate the large number of LSF connections on the master node and the network connections created by application jobs on the compute nodes. After changing the configuration, reboot the nodes or run sysctl -p to apply the changes.

# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maximum size of a message queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

net.nf_conntrack_max = 131072


# New settings for ARP cache and IP fragments
# Increase the ARP cache size and prevent ARP cache expiration - depends on the number of
# Ethernet/IB addresses in the cluster
net.ipv4.neigh.default.gc_thresh1=12000
net.ipv4.neigh.default.gc_thresh2=12500
net.ipv4.neigh.default.gc_thresh3=16384
net.ipv4.tcp_fin_timeout=5          # fast socket close and reuse
net.ipv4.neigh.default.gc_stale_time=2000000
net.ipv4.neigh.default.gc_interval=2000000
net.ipv4.neigh.default.base_reachable_time=2000000000
net.ipv4.neigh.default.base_reachable_time_ms=2000000000

# Increase the maximum memory used to reassemble IP fragments
# Use values recommended during PE installation
net.ipv4.ipfrag_low_thresh=1048576
net.ipv4.ipfrag_high_thresh=8388608

# Increase TCP memory buffers
net.ipv4.tcp_mem = 16777216 16777216 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.wmem_max=16777216
net.core.rmem_max=16777216
net.ipv4.conf.ib0.arp_filter=1
net.ipv4.conf.ib0.arp_ignore=1

# Set to the same as "socketMaxListenConnections" from mmlsconfig
net.core.somaxconn=6144

On the master node, root must have an unlimited file descriptor limit. On compute nodes, all users must have an unlimited file descriptor limit, as shown below:

cat /etc/security/limits.conf | grep -v "^#"
*    soft    memlock    unlimited
*    hard    memlock    unlimited

ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 255211
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) unlimited
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
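The limits.conf output above shows only the memlock entries. To raise the open files (nofile) limit as well, entries along the following lines can be added; the values shown are assumptions and must follow your site policy (some distributions do not accept the literal value unlimited for nofile, in which case use a large numeric cap that stays below fs.nr_open):

# /etc/security/limits.conf: raise the open files limit for all users (values are assumptions)
*       soft    nofile  65536
*       hard    nofile  65536
root    soft    nofile  65536
root    hard    nofile  65536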

You should run a test program that pings from each node to all other nodes to make sure that the network connections have no issues; a minimal sketch of such a ping sweep follows the ip neigh example below. The test program may require pre-populating the ARP table on each node. Use the ip neigh command to show the current ARP cache, as shown below:

ip neigh | head
…          dev ib0 lladdr ….......... REACHABLE
10.12.0.18 dev ib0 lladdr ….......... REACHABLE
10.12.0.19 dev ib0 lladdr ….......... REACHABLE
10.12.0.20 dev ib0 lladdr ….......... REACHABLE
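A minimal sketch of such a ping sweep is shown below. It assumes a plain-text file, here called hosts.list (a hypothetical name), that contains one official host name per line; it reports any host that does not answer, and pinging every host also populates the ARP cache on the node where it runs. Run it on every node, for example with xdsh, so that each node checks connectivity to all the others.

# Ping every host listed in hosts.list once and report any that do not reply.
# hosts.list is a hypothetical file with one official host name per line.
for host in $(cat hosts.list); do
    if ! ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
        echo "WARNING: no reply from $host"
    fi
done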


Linux control groups (cgroups) offer many useful features for Linux HPC clusters, including accurate job accounting for CPU and memory usage, memory and CPU fencing, process tracking, and so on. On compute nodes, you should configure the cgroup subsystems to make use of these features.

1. For Linux kernels below version 3.0 (for example, Red Hat 6.2, 6.3, and 6.4, and SUSE 11 Service Pack 1), add the following lines to /etc/fstab:

cgroup /cgroup/freezer cgroup freezer,ns 0 0
cgroup /cgroup/cpuset  cgroup cpuset     0 0
cgroup /cgroup/cpuacct cgroup cpuacct    0 0
cgroup /cgroup/memory  cgroup memory     0 0

For Linux kernels at version 3.0 or above (for example, SUSE 11 SP 2), add the following lines to /etc/fstab instead:

cgroup /cgroup/freezer cgroup freezer 0 0
cgroup /cgroup/cpuset  cgroup cpuset  0 0
cgroup /cgroup/cpuacct cgroup cpuacct 0 0
cgroup /cgroup/memory  cgroup memory  0 0

2. Make sure the following directories exist (see the combined sketch after step 3):

/cgroup/freezer
/cgroup/cpuset
/cgroup/cpuacct
/cgroup/memory

3. Run the following command to mount the cgroup file systems:

mount -a -t cgroup
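A minimal sketch combining steps 2 and 3, assuming the /cgroup mount points used in the fstab entries above:

# Create the cgroup mount points, then mount all cgroup entries from /etc/fstab
mkdir -p /cgroup/freezer /cgroup/cpuset /cgroup/cpuacct /cgroup/memory
mount -a -t cgroup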

You can also use cgconfig to manage cgroups by adding the following configuration to

/etc/cgconfig.conf:

mount {
    freezer = /cgroup/freezer;
    cpuset  = /cgroup/cpuset;
    cpuacct = /cgroup/cpuacct;
    memory  = /cgroup/memory;
}

To start or restart the cgconfig service, use /etc/init.d/cgconfig start|restart.

Normally, cgconfig is not installed by default. To install it, use the RPM package libcgroup for Red Hat and libcgroup1 for SUSE.
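For example, the package can be installed and the service enabled at boot as sketched below; the exact package manager invocation depends on your distribution:

# Red Hat:
yum install libcgroup
# SUSE:
zypper install libcgroup1
# Start cgconfig now and enable it at boot
service cgconfig start
chkconfig cgconfig on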

Use the /proc/mounts file to check whether the cgroup file systems have been mounted successfully. It should contain the following lines:

cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /cgroup/memory cgroup rw,relatime,memory 0 0


Use the following command to unmount the cgroup subsystems:

umount -a -t cgroup

This command unmounts all cgroup type mount points listed in /etc/fstab.

Or you can unmount them individually, as follows:

umount /cgroup/freezer
umount /cgroup/cpuset
umount /cgroup/cpuacct
umount /cgroup/memory

Perform LSF Installation

Always download and install the latest IBM LSF distribution, then download and apply the latest corresponding service pack release on top of it.

To install the LSF distribution tar files and start the cluster, you need an LSF entitlement file:

platform_lsf_std_entitlement.dat for LSF Standard Edition

platform_lsf_adv_entitlement.dat for LSF Advanced Edition

See Installing IBM Platform LSF on UNIX and Linux (SC27-5314-01) for complete installation procedures.

The latest IBM LSF distribution can be downloaded from IBM Passport Advantage.


The IBM LSF service packs can be downloaded from IBM Fix Central. Enter the appropriate search criteria to be directed to the correct service pack. For example, the service pack for LSF 9.1.1 is LSF 9.1.1.1.
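As a minimal sketch of the installation itself (the file and directory names below are assumptions and depend on the LSF version you download), the installer is driven by an install.config file and run with lsfinstall as root:

# Extract the lsfinstall package (file name is an assumption for LSF 9.1.1)
zcat lsf9.1.1_lsfinstall_linux_x86_64.tar.Z | tar xf -
cd lsf9.1.1_lsfinstall
# Edit install.config: LSF_TOP, LSF_ADMINS, LSF_CLUSTER_NAME, LSF_MASTER_LIST,
# LSF_ENTITLEMENT_FILE, and LSF_TARDIR (the directory holding the distribution tar files)
./lsfinstall -f install.config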

Fine-tune LSF Configuration

After installing the LSF binaries, you can fine-tune the LSF configuration to suit the HPC cluster workload and use cases. The following configuration parameters in lsf.conf and lsb.params are recommended to make LSF function smoothly across a large number of nodes with a workload consisting primarily of large parallel jobs spanning many nodes:

lsf.conf

# Required parameters
LSF_HPC_EXTENSIONS="LSB_HCLOSE_BY_RES CUMULATIVE_RUSAGE SHORT_EVENTFILE"
LSF_PROCESS_TRACKING=Y
LSF_LINUX_CGROUP_ACCT=Y
LSB_RESOURCE_ENFORCE="memory cpu"   # memory/cpu enforcement by cgroup
LSF_API_CONNTIMEOUT=10              # Timeout when connecting to LIM (default is 5)
LSB_DISABLE_LIMLOCK_EXCL=Y
LSB_MAX_JOB_DISPATCH_PER_SESSION=500
LSF_RES_SYNCUP_INTERVAL=0
LSB_QUERY_ENH=Y
LSB_QUERY_PORT=<unique port number>

# Required parameters for large HPC parallel jobs
# Job-level environment variables override these configurations
LSB_FANOUT_TIMEOUT_PER_LAYER=60
LSF_DJOB_TASK_REG_WAIT_TIME=600
LSB_DJOB_HB_INTERVAL=60
LSB_DJOB_RU_INTERVAL=600
LSF_RES_ALIVE_TIMEOUT=120


# Comment out the following lines if they are defined in lsf.conf
#
#LSF_VPLUGIN="/usr/lib/libxmpi.so:/usr/lib32/libxmpi.so:/usr/lib64/libxmpi.so"
#LSF_ASPLUGIN="/usr/lib64/libarray.so"
#LSF_BMPLUGIN="/usr/lib64/libbitmask.so"
#LSF_CPUSETLIB="/usr/lib64/libcpuset.so"
#LSF_ENABLE_EXTSCHEDULER=Y
#LSB_RLA_PORT=6883
#LSB_CPUSET_BESTCPUS=Y

lsb.params

# Must not define SBD_SLEEP_TIME lower than the default
SBD_SLEEP_TIME=30
# Concurrent job queries mbatchd can handle
MAX_CONCURRENT_JOB_QUERY=100
# Force mbatchd to fork a child to switch the event file
MIN_SWITCH_PERIOD=3600
# Disable streaming of lsbatch system events
# ENABLE_EVENT_STREAM=N
# Total number of nodes plus LSB_MAX_JOB_DISPATCH_PER_SESSION. The default value is 300
MAX_SBD_CONNS=5000
# Set the maximum number of advance reservations per user
ADVRSV_USER_LIMIT=100
# Enable preemption based on affinity
# PREEMPT_JOBTYPE=AFFINITY
# Enable LSF to automatically clean up jobs to free up the job allocation
# when the first execution host is unavailable
REMOVE_HUNG_JOB_FOR=runlimit:host_unavail

Scheduling Policies

LSF has a number of scheduling and job runtime features that improve scalability, performance, resource management, and resource usage for large-scale parallel workloads. Scheduling features include topology-aware scheduling with LSF compute units, backfill scheduling, resource reservations, resource limits, preemption, fairshare scheduling, guaranteed service level agreements, affinity- and NUMA-aware scheduling, energy-aware scheduling, and advance reservations. Job runtime features include resource usage limit enforcement and Linux cgroup integration.

For detailed information about these and other LSF scheduling features, see Administering IBM Platform LSF (SC27-5302-01) or the other best practice guides listed under Further reading below.


Set up services to start and shut down LSF Clusters

LSF provides a script named lsf_daemons under LSF_SERVERDIR. It takes the following parameters:

start | stop | restart | status | force_reload

You can install lsf_daemons so that LSF can be started or stopped as a system service.

You should set up a symbolic link to lsf_daemons to make use of the system startup service:

ln -s $LSF_SERVERDIR/lsf_daemons /etc/init.d/lsf
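To have LSF start automatically at boot, the service can then be registered with the init system. The sketch below assumes that lsf_daemons carries chkconfig-style init headers; if it does not, use your distribution's mechanism for enabling init scripts:

# Register the lsf init script and enable it at boot (assumes chkconfig-compatible headers)
chkconfig --add lsf
chkconfig lsf on
# Verify that the service is recognized
service lsf status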

Use service lsf start | stop | restart | reload to start and shut down LSF on a node. You can also integrate this step into your distributed cluster management software. For example, with IBM XCAT, you can run the following:

xdsh nodegroup -t3 'service lsf start'

Or add it to the XCAT post-script to be run right after a node is booted up.

Start LSF daemons in the following sequence:

◦ Make sure the nodes are booted up. Start the LSF master node first.

◦ Start the login nodes.

◦ Finally, start the compute nodes. You should start compute nodes rack by rack with a 15-second interval between racks, as in the sketch after this list.
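A minimal sketch of a rack-by-rack start using XCAT, assuming node groups named rack01, rack02, and so on (hypothetical group names):

# Start LSF on compute nodes one rack at a time, pausing 15 seconds between racks
for rack in rack01 rack02 rack03; do
    xdsh $rack -t3 'service lsf start'
    sleep 15
done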

Shut down LSF in the following sequence:

◦ Always shut down compute nodes first.

◦ Then shut down the login nodes.

◦ Finally, shut down the LSF master host.

To apply patches, depending on the patched binaries, you may need to run badmin qclose all to

stop new jobs from being dispatched, then wait for all currently running jobs to finish.

To reboot or disconnect a node, you should put the node into a system advance reservation or use badmin hclose node_name to close the host and drain it until the currently running jobs finish, then reboot or disconnect the node.

After LSF is started, you can run the lsload and bhosts commands to verify that the LSF daemons are up and running correctly. LSF should respond to the queries and report host status as ok. Use badmin showstatus to get the overall status of the cluster, including the total number of cores, hosts, workload, and users.
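For example, a basic check after startup might look like this:

# Verify that LIM and the batch daemons respond on all hosts
lsload
bhosts
# Cluster-wide summary: hosts, cores, running workload, and users
badmin showstatus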


Conclusion

This document describes the best practices for planning, installing, and configuring IBM Platform LSF on large-scale clusters running HPC workloads.

Further reading

Installing IBM Platform LSF on UNIX and Linux (SC27-5314-01)

Administering IBM Platform LSF (SC27-5302-01)

Using Compute Units for LSF cluster topology scheduling

Best Practices: Using MPI under IBM Platform LSF

Best Practices: Using Affinity Scheduling in IBM Platform LSF

Contributors

Jin Ma

Software Developer: LSF

Chong Chen

Principal Architect: LSF Product Family


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other

countries. Consult your local IBM representative for information on the products and services

currently available in your area. Any reference to an IBM product, program, or service is not

intended to state or imply that only that IBM product, program, or service may be used. Any

functionally equivalent product, program, or service that does not infringe any IBM

intellectual property right may be used instead. However, it is the user's responsibility to

evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in

this document. The furnishing of this document does not grant you any license to these

patents. You can send license inquiries, in writing, to:

IBM Director of Licensing

IBM Corporation

North Castle Drive

Armonk, NY 10504-1785

U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where

such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES

CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER

EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-

INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do

not allow disclaimer of express or implied warranties in certain transactions, therefore, this

statement may not apply to you.

Without limiting the above disclaimers, IBM provides no representations or warranties

regarding the accuracy, reliability or serviceability of any information or recommendations

provided in this publication, or with respect to any results that may be obtained by the use of

the information or observance of any recommendations provided herein. The information

contained in this document has not been submitted to any formal IBM test and is distributed

AS IS. The use of this information or the implementation of any recommendations or

techniques herein is a customer responsibility and depends on the customer’s ability to

evaluate and integrate them into the customer’s operational environment. While each item

may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee

that the same or similar results will be obtained elsewhere. Anyone attempting to adapt

these techniques to their own environment does so at their own risk.

This document and the information contained herein may be used solely in connection with

the IBM products discussed in this document.

This information could include technical inaccuracies or typographical errors. Changes are

periodically made to the information herein; these changes will be incorporated in new

editions of the publication. IBM may make improvements and/or changes in the product(s)

and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only

and do not in any manner serve as an endorsement of those websites. The materials at those

websites are not part of the materials for this IBM product and use of those websites is at your

own risk.

IBM may use or distribute any of the information you supply in any way it believes

appropriate without incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment.

Therefore, the results obtained in other operating environments may vary significantly. Some

measurements may have been made on development-level systems and there is no

guarantee that these measurements will be the same on generally available systems.

Furthermore, some measurements may have been estimated through extrapolation. Actual

results may vary. Users of this document should verify the applicable data for their specific

environment.


Information concerning non-IBM products was obtained from the suppliers of those products,

their published announcements or other publicly available sources. IBM has not tested those

products and cannot confirm the accuracy of performance, compatibility or any other

claims related to non-IBM products. Questions on the capabilities of non-IBM products should

be addressed to the suppliers of those products.

All statements regarding IBM's future direction or intent are subject to change or withdrawal

without notice, and represent goals and objectives only.

This information contains examples of data and reports used in daily business operations. To

illustrate them as completely as possible, the examples include the names of individuals,

companies, brands, and products. All of these names are fictitious and any similarity to the

names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: © Copyright IBM Corporation 2013. All Rights Reserved.

This information contains sample application programs in source language, which illustrate

programming techniques on various operating platforms. You may copy, modify, and

distribute these sample programs in any form without payment to IBM, for the purposes of

developing, using, marketing or distributing application programs conforming to the

application programming interface for the operating platform for which the sample

programs are written. These examples have not been thoroughly tested under all conditions.

IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these

programs.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International

Business Machines Corporation in the United States, other countries, or both. If these and

other IBM trademarked terms are marked on their first occurrence in this information with a

trademark symbol (® or ™), these symbols indicate U.S. registered or common law

trademarks owned by IBM at the time this information was published. Such trademarks may

also be registered or common law trademarks in other countries. A current list of IBM

trademarks is available on the Web at “Copyright and trademark information” at

www.ibm.com/legal/copytrade.shtml

Windows is a trademark of Microsoft Corporation in the United States, other countries, or

both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Contacting IBM

To provide feedback about this paper, write to [email protected]

To contact IBM in your country or region, check the IBM Directory of Worldwide Contacts at

http://www.ibm.com/planetwide