
HPRC is a dedicated computing resource used for cutting-edge, collaborative and transformative research and discovery at Texas A&M University.

A DEDICATED RESOURCE FOR RESEARCH AND DISCOVERY

In 2016, the High Performance Research Computing (HPRC) group became an Intel® Parallel Computing Center with a primary focus on modernizing OpenFOAM to increase parallelism and scalability through optimizations that leverage Intel® Xeon Phi™ processors.
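To make the nature of that modernization concrete, the sketch below shows the general pattern of loop-level threading and vectorization that many-core processors such as Xeon Phi reward. It is purely illustrative C++ with OpenMP, not OpenFOAM source, and the function and variable names are invented for the example.

// Illustrative only (not OpenFOAM code): a cell-wise field update, first
// serial, then restructured with OpenMP so it spreads across all cores and
// uses the vector lanes of a many-core processor such as Xeon Phi.
#include <cstddef>
#include <vector>

// Serial baseline: one core, no explicit vectorization hints.
void relax_serial(std::vector<double>& T, const std::vector<double>& rhs,
                  double alpha) {
    for (std::size_t i = 0; i < T.size(); ++i) {
        T[i] += alpha * (rhs[i] - T[i]);
    }
}

// Modernized version: threads across cores, SIMD within each core.
void relax_modernized(std::vector<double>& T, const std::vector<double>& rhs,
                      double alpha) {
    #pragma omp parallel for simd
    for (std::size_t i = 0; i < T.size(); ++i) {
        T[i] += alpha * (rhs[i] - T[i]);
    }
}

Compiled with an OpenMP-capable compiler (for example, Intel's icpc with -qopenmp and a Knights Landing target such as -xMIC-AVX512), the second version can scale across the many cores and wide vector units that Xeon Phi provides.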

Since 1989, HPRC has been a dedicated resource for research and discovery at Texas A&M. HPRC supports more than 2,500 users, including more than 450 faculty members. Computing resources are used for cutting-edge, collaborative, and transformative research including, but not limited to, materials development, quantum optimization, and climate prediction. HPRC promotes emerging computing technology to researchers and assists them in using it for research and discovery.

HPRC HARDWARE, SOFTWARE, AND TRAINING RESOURCES

New users can apply for accounts at hprc.tamu.edu. The website offers training materials and documentation for HPRC systems and software. HPRC provides a broad range of regularly scheduled training sessions and workshops for its users, and some of these sessions are incorporated into formal courses with a technical and scientific computing focus. HPRC also hosts the Summer Computing Academy to promote computing among high school students.

HIGH PERFORMANCE RESEARCH COMPUTING

vpr.tamu.edu | hprc.tamu.edu

CONTACTS

HONGGAO LIU Director [email protected]

FRANCIS DANG Associate Director for HPRC Systems [email protected]

DHRUVA CHAKRAVORTY Associate Director for User Services and Research [email protected]

LISA M. PÉREZ Associate Director for Advanced Computing Enablement [email protected]

Electrostatic potential map for Si cluster and solvent decomposition products by Perla Balbuena.

SUPPORTS MORE THAN 2,500 USERS, INCLUDING 450 FACULTY MEMBERS

ADMINISTERS 4 CLUSTERS

920 TF TOTAL PEAK PERFORMANCE WITH 12.9 PB OF HIGH-PERFORMANCE STORAGE


RESOURCES AVAILABLE

TERRA A 320-node heterogeneous cluster with 8,520 Intel Broadwell cores, 48 NVIDIA K80 dual-GPU accelerators, 16 Intel Knights Landing processors, and an Intel Omni-Path Architecture (OPA) interconnect.

ADA The 874-node hybrid cluster with Intel Ivy Bridge processors and a Mellanox FDR-10 InfiniBand interconnect is HPRC's main cluster and will be refreshed in late 2020. Ada includes 68 NVIDIA K20 GPUs and 24 Intel Xeon Phi 5110P co-processors. Ada also includes four 2 TB memory nodes and eleven 1 TB memory nodes for jobs requiring large amounts of memory.

LONESTAR The latest in a series of Lonestar clusters hosted at TACC, Lonestar 5 comprises 1,252 Cray XC40 nodes. Jointly funded by The University of Texas System, Texas A&M University, and Texas Tech University, it provides additional resources to Texas A&M researchers. Allocation requests are made through the HPRC request page.

ViDal A 24-node secure, compliant computing environment supporting data-intensive research that uses sensitive person-level data or proprietary licensed data and must meet the myriad legal requirements for handling such data (e.g., HIPAA, Texas HB 300, NDAs). It has 16 compute nodes with 192 GB of RAM each, 4 large-memory nodes with 1.5 TB of RAM each, and 4 GPU nodes with 192 GB of RAM and two NVIDIA V100 GPUs each.

Produced by Research Communications 8/2020

ADVANCED SUPPORT PROGRAM (ASP)

HPRC provides research teams across campus with technical assistance that goes beyond general consulting. For research projects with a large computational component, HPRC offers formal collaborations: under the ASP, one or more HPRC analysts contribute expertise and experience in several areas of high performance computing.

OUR COLLABORATIVE CONTRIBUTIONS INCLUDE:

• porting applications to our clusters
• analyzing and optimizing code performance
• developing parallel code from serial versions and analyzing performance (see the sketch following this list)
• bioinformatics and genomics
• optimizing serial and parallel I/O code performance
• optimal use of mathematical libraries
• code development and design
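As a concrete illustration of the "developing parallel code from serial versions and analyzing performance" item, the sketch below converts a serial reduction to OpenMP and times both with omp_get_wtime. It is a generic, assumed example rather than code from an actual ASP engagement.

// Generic illustration (not project code): parallelizing a serial reduction
// with OpenMP and comparing wall-clock time, the kind of port-and-measure
// work done under the Advanced Support Program.
#include <cstddef>
#include <cstdio>
#include <vector>
#include <omp.h>

double dot_serial(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) sum += a[i] * b[i];
    return sum;
}

double dot_parallel(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+ : sum)
    for (std::size_t i = 0; i < a.size(); ++i) sum += a[i] * b[i];
    return sum;
}

int main() {
    std::vector<double> a(20'000'000, 1.5), b(a.size(), 2.0);

    double t0 = omp_get_wtime();
    double s1 = dot_serial(a, b);
    double t1 = omp_get_wtime();
    double s2 = dot_parallel(a, b);
    double t2 = omp_get_wtime();

    std::printf("serial   %.3f s (sum=%g)\n", t1 - t0, s1);
    std::printf("parallel %.3f s (sum=%g)\n", t2 - t1, s2);
    return 0;
}

Built with an OpenMP flag (for example, g++ -O2 -fopenmp), the timed comparison gives a first read on speedup and scaling before deeper profiling.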

1112 TAMU
College Station, Texas 77843-1112
[email protected]

Located on the College Station campus:
114A Henderson Hall
222 Jones St. (Behind the ILSB)

UPCOMING SYSTEMS

GRACE Grace will be a new heterogeneous HPC cluster replacing the University's flagship supercomputer, Ada, with a minimum aggregate peak performance of 5 PFLOPS. It will comprise 800+ compute nodes, 100 V100S (or A100) GPU compute nodes, 12 single-precision T4/RTX GPU compute nodes, 8 large-memory (3 or 4 TB) compute nodes, 5 login nodes, and 6 management servers, linked by an HDR 3:1 InfiniBand interconnect, with 5+ PB of usable high-performance storage based on either IBM Spectrum Scale V5 or Lustre. Grace will be deployed in fall 2020.

FASTER FASTER (Fostering Accelerated Scientific Transformations, Education, and Research) is a composable high-performance data-analysis and computing instrument funded by the NSF MRI (Major Research Instrumentation) program. FASTER adopts the innovative Liqid composable software-hardware approach combined with cutting-edge technologies such as state-of-the-art CPUs and GPUs, NVMe (Non-Volatile Memory Express) storage, and a high-speed interconnect. These composable and configurable techniques will allow researchers to use resources efficiently, enabling more science. Thirty percent of FASTER's computing resources will be allocated to researchers nationwide through the NSF XSEDE (Extreme Science and Engineering Discovery Environment) program. The FASTER system will be deployed in early spring 2021.