CISL Mission for User Support


  • CISL Mission for User Support

    CISL also supports special colloquia, workshops and computational campaigns giving users special privileges and access to facilities and services above normal service levels.

    CISL will provide a balanced set of services to enable researchers to utilize community resources securely, easily, and effectively. CISL Strategic Plan

  • CISL Facilities Overview

    - HPC Systems
      - Yellowstone
      - Cheyenne
    - Data Storage and Archival
      - GLADE
      - HPSS
    - Data Analysis and Visualization
      - Geyser and Caldera
    - Allocations
    - Additional Training Resources
    - Contacting User Support
    - Questions

  • Yellowstone

  • Yellowstone - IBM iDataPlex

    - Compute Nodes
      - 4,518 compute nodes
      - 16 cores, 2.6 GHz Intel Sandy Bridge per node
      - 32 GB 1600 MHz DDR3 memory per node
      - 72,288 total cores, 144 TB total memory
      - 6 login nodes (yslogin1, yslogin2, … yslogin6)
    - Interconnect
      - Mellanox FDR InfiniBand network
      - Fat-tree topology
    - Uses the GLADE file system
    - Available through 2017
    - More info on Yellowstone


  • Cheyenne

  • Cheyenne – SGI ICE XA

    - Compute Nodes
      - 4,032 compute nodes
      - 36 cores, 2.3 GHz Intel Broadwell per node
      - 2400 MHz DDR4 memory per node
        - 64 GB on 3,168 nodes
        - 128 GB on 864 nodes
      - 145,152 total cores, 313 TB total memory
      - 6 login nodes (cheyenne1, cheyenne2, … cheyenne6)
    - Interconnect
      - Mellanox EDR InfiniBand fabric
      - 9D hypercube topology
    - Uses the GLADE file system
    - Available through ~2020
    - More info on Cheyenne


  • GLADE Shared File System

    GLADE is a high-performance GPFS file system shared across all CISL HPC and DAV computers.

    File space                  Quota    Backed up   Purge period   Description
    /glade/u/home/username      25 GB    Yes         None           Home directories
    /glade/scratch/username     10 TB    No          45 days        Temporary space for short-term use
    /glade/p/work/username      512 GB   No          None           Work space for longer-term storage
    /glade/p/project_code       N/A      No          None           Shared space for project allocations

  • System Access with YubiKey

    - Use Secure Shell (ssh) to log in with your YubiKey
      - Any ssh client works: Terminal, Cygwin, PuTTY, etc.
    - Using your YubiKey token
      - When you log in (for example, ssh -X username@cheyenne.ucar.edu), your screen displays the prompt: Token_Response:
      - Enter your PIN at the prompt (do not press Enter), then touch the YubiKey button. This inserts a new one-time password (OTP) and a return.
      - The YubiKey is activated by a light touch; there is no need to press hard.
    - More information on YubiKey:

  • Supported Shells

    - Two primary (supported) shells
      - tcsh
      - bash
    - The bash shell is the default for new accounts; tcsh may be carried over from previous accounts
    - Change your default shell in SAM (System Account Manager)
    - More information

  • Machine Specific Start Files (bash)

    - Settings for ~/.profile:

    alias rm="rm -i"
    PS1="\u@\h:\w> "

    if [[ $HOSTNAME == yslogin* ]]; then
        # Yellowstone settings
        alias bstat="bjobs -u all"
    else
        # Cheyenne settings
        alias qjobs="qstat -u $USER"
    fi
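The hostname test in that start file can be exercised off-cluster. A minimal sketch, assuming bash; the function name and sample hostnames are illustrative, and only the yslogin* pattern comes from the slide:

```shell
#!/bin/bash
# Choose the scheduler-specific alias set the way ~/.profile does,
# by pattern-matching the login host name.
scheduler_for_host() {
    local host=$1
    if [[ $host == yslogin* ]]; then
        echo "LSF"   # Yellowstone login nodes use LSF (bjobs)
    else
        echo "PBS"   # Cheyenne login nodes use PBS (qstat)
    fi
}

scheduler_for_host yslogin4    # prints: LSF
scheduler_for_host cheyenne2   # prints: PBS
```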


  • Machine Specific Start Files (tcsh)

    - Settings for ~/.tcshrc:

    tty > /dev/null
    if ( $status == 0 ) then
        alias rm "rm -i"
        set prompt = "%n@%m:%~"
        if ( $HOSTNAME =~ yslogin* ) then
            # Yellowstone settings
            alias bstat "bjobs -u all"
        else
            # Cheyenne settings
            alias qjobs "qstat -u $USER"
        endif
    endif

  • Login Node Usage

    - The login nodes are shared by all HPC users, and multiple user workloads run on each login node
    - Your programs compete with those of other users for cores and memory
    - Login node usage should be limited to:
      - Reading and writing text/code
      - Compiling programs
      - Performing small data transfers
      - Interacting with the job scheduler
    - Programs that use excessive resources on the login nodes will be terminated
    - More intensive work should be run on the shared nodes, the DAV nodes, or the compute nodes

  • Supported Compilers

    CISL supports multiple compilers on the HPC systems:
    - Intel C, C++, Fortran
      - icc, icpc, ifort, mpiicc, mpiicpc, mpiifort
    - GNU C, C++, Fortran
      - gcc, g++, gfortran, mpigcc
    - Portland Group C, C++, Fortran (Yellowstone only)
      - pgcc, pgCC, pgfortran, pgf90, mpipcc, mpipf90

    More information:

  • Supported MPI Libraries

    - Multiple MPI libraries are available on Cheyenne
      - SGI MPT
      - Intel MPI
      - OpenMPI
      - MPICH
    - The MPI libraries available depend on the compiler you are using
    - Switching compilers and libraries (and other software) is done with the module command.

  • Commonly Used Software

    - BLAS - Basic Linear Algebra Subprograms
    - HDF5 - Hierarchical Data Format
    - LAPACK and ScaLAPACK
    - MKL - Math Kernel Library of general-purpose math routines
    - GSL - GNU Scientific Library for C and C++ programmers
    - NetCDF - Network Common Data Form
    - PnetCDF - Parallel netCDF
    - NCL - NCAR Command Language
    - CDO - Climate Data Operators
    - IDL - Interactive Data Language
    - R - statistical computing environment
    - Python - scripting language
    - Matlab - high-level interactive mathematical environment

  • Using Modules

    - Modules help manage user software, including compilers, libraries, and the dependencies between them.
    - The software available through modules is hierarchical and organized as a tree; the software you have loaded determines which branch of the tree you are in.
    - The primary dependencies are on compilers and MPI libraries.
    - Software available within a branch should be mutually compatible.

  • Using Modules

    Helpful module commands:
    - module av - lists available modules
    - module list - shows the modules currently loaded
    - module load/unload module-name - loads or unloads module-name from the environment
    - module swap module1 module2 - swaps module1 for module2
    - module help - displays help on module commands
    - module help module-name - displays help specific to module-name

  • Using Modules

    Helpful module commands (continued):
    - module whatis module-name - short info on module-name
    - module save set-name - saves the currently loaded module set as set-name
    - module restore set-name - reloads all modules in the saved set set-name
    - module purge - removes all loaded modules from the environment
    - module reset - resets the module environment

    More info:
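In practice these commands combine into short sessions. A sketch of a typical sequence on Cheyenne; the module names and versions are illustrative, and this only runs where the module command is installed:

```shell
module load intel/17.0.1   # pick a compiler; opens the Intel branch of the tree
module load mpt/2.15       # pick an MPI library compatible with that compiler
module av                  # now lists only software built for intel + mpt
module load netcdf         # loads the netCDF build matching the loaded compiler/MPI
module save my_env         # snapshot the set; recover later with "module restore my_env"
```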

  • Job Control With PBS

    - Job submission (qsub)
      - qsub script_name
    - Job monitoring (qstat)
      - qstat
      - qstat job_id
      - qstat -u $USER
      - qstat -Q
    - Job removal (qdel)
      - qdel job_id

  • Example PBS Job Script



    #!/bin/bash
    #PBS -A project_code
    #PBS -q regular
    #PBS -l walltime=00:30:00
    #PBS -l select=4:ncpus=36:mpiprocs=36:ompthreads=1
    #PBS -j oe
    #PBS -o log.oe

    # Run WRF with SGI MPT
    mpiexec_mpt ./wrf.exe

    For more info:

  • HPSS

  • HPSS Introduction

    - High Performance Storage System (100+ PB of storage)
    - The Hierarchical Storage Interface (HSI) is the primary interface for data transfer to/from HPSS, along with metadata access and data management.
    - The HPSS Tape Archiver (HTAR) is used to package files on your file system into a single archive file and send it to HPSS.
    - HPSS is used for long-term archiving of files, not for short-term temporary storage
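HTAR bundles many small files into one archive on the HPSS side, which is far friendlier to tape than storing the files individually. A sketch; the archive and directory names are made up, and this runs only where HTAR and an HPSS account are available:

```shell
# Create a single HPSS archive from a GLADE directory:
htar -cvf run01.tar /glade/scratch/username/run01

# Later, list the archive's contents without retrieving it:
htar -tvf run01.tar

# Extract it back under the current working directory:
htar -xvf run01.tar
```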

  • Hierarchical Storage Interface (HSI)

    - POSIX-like interface
    - Different ways to invoke HSI
      - Command-line invocation
        - hsi cmd
        - hsi cget hpssfile (from your default directory on HPSS)
        - hsi cput myfile (to your default directory on HPSS)
      - Open an HSI session
        - hsi starts a session; end, exit, or quit stops the session
        - restricted shell-like environment
      - hsi "in cmdfile"
        - runs the commands scripted in cmdfile
    - Navigating HPSS while in an HSI session
      - On the HPSS file system: pwd, cd, ls, mkdir
      - On the GLADE file system: lpwd, lcd, lls, lmkdir
    - More info
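The hsi "in cmdfile" form runs a scripted batch of the commands above, one per line. A sketch of such a cmdfile, assuming the session-navigation commands listed; all paths and file names are made up:

```shell
# cmdfile: executed top to bottom by   hsi "in cmdfile"
mkdir /home/username/wrf_runs        # directory on the HPSS side
cd /home/username/wrf_runs           # move there on HPSS
lcd /glade/scratch/username/run01    # local (GLADE) source directory
cput wrfout_d01                      # conditional put: copy only if absent on HPSS
end                                  # close the session
```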