Big Red II & Supporting Infrastructure
Craig A. Stewart, Matthew R. Link, David Y. Hancock
Presented at the IUPUI Faculty Council Information Technology Subcommittee meeting, 9 April 2013
Research Technologies, UITS, Indiana University



License Terms
Please cite as: Stewart, C.A. Big Red II & Supporting Infrastructure, 2013. Presentation. Presented at: IUPUI Faculty Council Information Technology Subcommittee meeting (Indianapolis, IN, 9 April 2013). http://hdl.handle.net/2022/16981
Items indicated with a © are under copyright and used here with permission. Such items may not be reused without permission from the holder of copyright except where license terms noted on a slide permit reuse. Except where otherwise noted, contents of this presentation are copyright 2013 by the Trustees of Indiana University. This document is released under the Creative Commons Attribution 3.0 Unported license (http://creativecommons.org/licenses/by/3.0/). This license includes the following terms: You are free to share – to copy, distribute and transmit the work – and to remix – to adapt the work – under the following conditions: attribution – you must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work). For any reuse or distribution, you must make clear to others the license terms of this work.

Overview

• The big goal: transform the way IU generally uses advanced computing to advance basic research, scholarship, translational research, and artistic expression

• Big Red II
• IU Bloomington Data Center
• IU Research Network
• Data Capacitor II
• Research Home Directories
• Quarry is still here!!!
• IU Cyberinfrastructure Gateway

[Photo: Flickr user hassanrafeek, CC license]

Big Red II – What is it?


Big Red II – System Specifications


System Size – 1056 nodes (264 blades)
• 344 XE6 compute nodes (86 blades)
  – 2.5 GHz 16-core Abu Dhabi
  – 64 GB system memory
• 676 XK7 compute nodes (169 blades)
  – 2.3 GHz 16-core Interlagos
  – NVIDIA K20 GPU (Kepler)
  – 32 GB system memory, 5 GB video memory
• 36 service & I/O nodes

File System & Storage
• Boot RAID
• 180 TB Lustre file system (1 GB/s)

Interconnect: Gemini
Topology: 11 x 6 x 8 3D torus
Peak Performance: 1.0003 PFLOPS
Total Memory Size: 43.6 TB
Total XE6 Cores: 11,008
Total XK7 Cores: 10,816
Total x86-64 Cores: 21,824
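As a rough consistency check on the headline figures above (an editorial sketch, not part of the original slide): assuming 4 double-precision FLOPs per cycle per Bulldozer-family core, a nominal 1.17 TFLOPS double-precision peak per K20, and two nodes per Gemini ASIC – per-part figures that do not appear on the slide – the totals line up approximately as follows.

\begin{align*}
\text{XE6 CPUs:} &\quad 11{,}008 \text{ cores} \times 2.5\,\text{GHz} \times 4 \approx 110\ \text{TFLOPS} \\
\text{XK7 CPUs:} &\quad 10{,}816 \text{ cores} \times 2.3\,\text{GHz} \times 4 \approx 100\ \text{TFLOPS} \\
\text{XK7 GPUs:} &\quad 676 \times 1.17\ \text{TFLOPS} \approx 791\ \text{TFLOPS} \\
\text{Peak:} &\quad 110 + 100 + 791 \approx 1.0\ \text{PFLOPS} \\
\text{Torus:} &\quad 11 \times 6 \times 8 = 528\ \text{Gemini ASICs} \times 2\ \text{nodes/ASIC} = 1056\ \text{nodes}
\end{align*}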


Big Red II – Logical Diagram


Cray XE/XK Node Functions

Big Red II – Can I log in yet?


• Stability testing & early user mode – April 2013
• Dedication – April 26, 2013
• General availability – sometime after that

Big Red II – What will we do with it?


• HPC applications in Extreme Scalability Mode
• ISV support through Cluster Compatibility Mode
• High Throughput Computing (HTC)
• PGI/Intel compilers
• OpenACC support (see the sketch following this list)
• GPU-enabled applications: NAMD, AMBER, CHARMM, GROMACS, NWChem, MILC, MATLAB
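To make the OpenACC bullet above concrete, here is a minimal SAXPY kernel in C whose loop is offloaded to the GPU by a single directive. The pragma syntax is standard OpenACC; the build command suggested in the comment (pgcc -acc) follows from the PGI compiler named on the slide, but the exact modules and flags available on Big Red II are not specified here, so treat this as an illustrative sketch rather than site documentation.

#include <stdio.h>

#define N 1000000

/* Minimal SAXPY (y = a*x + y) offloaded with OpenACC.
 * Build (sketch): pgcc -acc saxpy.c -o saxpy
 * copyin(x) transfers x to GPU memory; copy(y) transfers y in and back out. */
int main(void) {
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);   /* expect 4.000000 */
    return 0;
}

A compiler that ignores the pragma simply runs the loop serially on the CPU, so the same source can run on both the XE6 (CPU-only) and XK7 (GPU) partitions.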

Big Red II – Similar Systems


• 11 PFLOPS XE6/XK7 Blue Waters at NCSA

• 20 PFLOPS XK7 Titan at ORNL

Supporting Infrastructure – Networking

Supporting File Systems – Data Capacitor II


• 2 SFA12K40 with 10 84-slot chassis each
• 1,680 total 3 TB SATA drives – 5,040 TB raw capacity
• 16 Object Storage Servers (96 GB RAM)
• 2 Metadata Servers (192 GB RAM)
• SFA6620 storage system with 96 600 GB 15K RPM SAS drives & 20 3 TB SATA drives
• 8 Lustre routers provide access to DCII via 10 GbE
• Bandwidth – >20 GB/s via Ethernet; >40 GB/s via InfiniBand
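To give a sense of how an application on Big Red II would actually drive that aggregate bandwidth, the sketch below has many MPI ranks write one shared file through collective MPI-IO, the usual pattern for a Lustre scratch space. The file name is hypothetical, and no Data Capacitor II paths or striping settings are shown on the slide, so this is only an illustrative pattern, not site documentation.

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Each MPI rank writes its own 1 MiB block of a shared file using a
 * collective MPI-IO call, letting the MPI layer and the parallel file
 * system aggregate requests.  "shared_output.dat" is a hypothetical name. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const MPI_Offset block = 1 << 20;              /* 1 MiB per rank */
    char *buf = malloc(block);
    memset(buf, rank & 0xff, block);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared_output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, rank * block, buf, (int)block,
                          MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(buf);
    MPI_Finalize();
    return 0;
}

Collective, many-writer I/O like this is how parallel jobs approach the >20 GB/s (Ethernet) and >40 GB/s (InfiniBand) aggregate figures quoted above; a single serial writer typically sees only a small fraction of that.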


Supporting File Systems – Home Directories

• The Home File System uses the DDN SFA12K20E with embedded GridScaler (GPFS) as NAS storage
• Each SFA12K20E has 5 60-slot chassis, 140 3 TB NL SAS drives for user data, and 10 450 GB 15K RPM SAS drives for metadata
• Each system has 2 servers for non-NFS client access
• Each system has multiple 10 GbE connections for clients
• Native GPFS clients are supported and can be added if licensed
• System 1 – IUB; System 2 – IUPUI

Quarry is still here!

Condominium computing – on Quarry
• Condominium computing – UITS-managed, departmentally owned research IT systems or system components:
  – You can purchase nodes that are compatible with IU's Quarry cluster, have them installed in the very secure IUB Data Center, have them available when you want to use them, and have them managed, backed up, and secured by UITS RT staff.
  – You get access to your nodes within seconds of requesting their use.
  – When they are not in use by you, they become available to others in the IU community, thereby expanding the computing capability available to the IU community while conserving natural resources and energy.

IU Cyberinfrastructure Gateway

Questions?

References
• PTI – pti.iu.edu
• Research Technologies – http://pti.iu.edu/rt
• Cray XE6/XK7 – http://www.cray.com/Products/
• NVIDIA K20 – http://www.nvidia.com/object/nvidia-kepler.html
• Condominium computing – http://kb.iu.edu/data/azof.html
• NCSA Blue Waters – http://www.ncsa.illinois.edu/BlueWaters/
• ORNL Titan – http://www.olcf.ornl.gov/titan/
• DataDirect Networks – http://www.ddn.com/