RG, 2005/2006 HPC Methods 1
High Performance Computing Methods
Ralf Gruber, [email protected]
Content and information
AT: What you hear today may no longer be valid tomorrow. These days, you must be flexible at all times.
RG: Theory is good, examples are better, even if they are only valid at the time they are executed.
“HPC methods” = “numerical experimentation”
HPC methods
4 credits
Part 1: Computer architectures and optimisation
Part 2: Approximation methods
Part 3: Efficient computing
Exercises
Content
1.1. Evolution of supercomputing
1.1.1. Introduction to HPC
1.1.2. Historical aspects: Hardware
1.1.3. Historical aspects: Software and algorithmics
1.1.4. Bibliography
1.2. Single processor architecture and optimisation
1.2.1. Memory and processor architectures
1.2.2. Data representation and pipelining
1.2.3. System software issues
1.2.4. Application related single processor parameters
1.2.5. Application optimisation on a single processor
1.3. Parallel computer architectures
1.3.1. SMP and NUMA architectures
1.3.2. Communication network technologies
1.3.3. Cluster architectures
1.3.4. Parameterisation of parallel applications
1.3.5. Grid computing
Part 1: Computer architectures and optimisation
2.1. Stable numerical approach
2.1.1. grad-div equations: Primal form
2.1.2. Bilinear elements leading to spectral pollution
2.1.3. Ecological solution for a Cartesian grid
2.1.4. Problems with triangular meshes
2.1.5. Spectral pollution for a non-Cartesian grid
2.1.6. Non-polluting finite hybrid element method
2.1.7. Dual formulation approach
2.1.8. Isoparametric transformation
2.1.9. Bibliography
2.2. Improve precision by an h-p approach
2.2.1. Standard h-p finite elements
2.2.2. Hybrid h-p method
2.3. Improve precision by mesh adaptation
2.3.1. The redistribution (r) method
2.3.2. 1D example: Optimal boundary layer
2.3.3. 2D example: The Gyrotron
Part 2: Approximation methods
3.1. Programme environment for rapid prototyping
3.1.1. Domain decomposition
3.1.2. Memcom
3.1.3. Astrid
3.1.4. Baspl++
3.1.5. Gyrotron example
3.1.6. 3D example: S
3.1.7. Electrofilter
3.2. Mathematical libraries
3.2.1. Introduction to direct and iterative solvers
3.2.2. BLAS, LAPACK, ScaLAPACK, ARPACK
3.2.3. MUMPS: Direct matrix solver
3.2.4. PETSc: Iterative matrix solver
3.2.5. FFTW, PRNG
3.2.6. matlab
3.2.7. Visualisation
3.3. Parallel computing
3.3.1. Direct matrix solver
3.3.2. Iterative matrix solver and MPI implementation
Part 3: Efficient computing
E1: Exercises proposed during course
E2: Practical work by attendees, proposed by RG
Exercises
Proposals for practical work (together with specialists)
Parallel Poisson finite element solver
1. For the existing 2D solver, realise an interface to PETSc
2. For the existing 2D solver, realise an interface to MUMPS
3. For the existing 2D solver, realise a preconditioner
grad-div eigenvalue solvers using the h-p method
4. Convergence study for primal and dual forms with quadrangles and a Cartesian grid
5. Influence of a non-Cartesian grid
6. Triangular elements
7. Replace the LAPACK eigenvalue solver by a more efficient one (ARPACK)
Personal programme
8. Optimise an existing programme
9. Parallelise an existing serial programme
10. Replace a solver by a more efficient one
Plasma physics programme
11. Optimise VMEC
12. Optimise TERPSICHORE
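For orientation on proposals 1-3: the 2D Poisson problem behind parsol can be sketched in a few lines. This is not course code; it is a minimal, assumed illustration of the problem the PETSc/MUMPS interfaces would solve far more efficiently (finite differences and Jacobi iteration here, rather than the course's finite elements).

```python
import numpy as np

# Minimal sketch (not course code): solve -lap(u) = f on the unit square
# with u = 0 on the boundary, by Jacobi iteration on an n x n interior
# grid. A production solver would use PETSc or MUMPS as in exercises 1-3.
def jacobi_poisson(f, n_iter=2000):
    n = f.shape[0]
    h = 1.0 / (n + 1)             # interior grid spacing
    u = np.zeros((n + 2, n + 2))  # includes a boundary layer of zeros
    for _ in range(n_iter):
        # Jacobi update: u = (sum of 4 neighbours + h^2 f) / 4
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                + h * h * f)
    return u[1:-1, 1:-1]

n = 20
f = np.ones((n, n))               # constant source term
u = jacobi_poisson(f)
print(u.max())                    # maximum of the solution, at the centre
```

Replacing the Jacobi loop by a Krylov method with a good preconditioner (proposal 3) is precisely what PETSc provides.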
Practical work: 60%
15' presentation and questions
Questions on the course: 40%
15'
Exam
Course:
Thursday 16:15-18:00, 27.10./3.11.05, CM106
Friday 10:15-12:00, 28.10.05-27.1.06, ME B31
Exercises:
Thursday 16:15-18:00, 10.11.-9.2.06, CM103
Friday 10:15-12:00, 3.2./10.2.06, ?
Dates/Places
Team
Course:
[email protected] (35906)
Exercises:
Vincent Keller (33856)
Ralf Gruber
Trach-Minh Tran: MPICH, math. libraries, Linux, plasma physics
Ali Tolou (33565): Pleiades, Linux
Server architectures: the past at EPFL
[Timeline figure, 1986-2006: computer architectures vs. application-related R&D at EPFL.
- Vector: Cray-1
- Parallel vector: Cray-2, with the Astrid environment (EPFL)
- Customised MPP: Cray-T3D (PATP EPFL/Cray; applications at JPL, PSC, LLNL, LANL)
- Commodity clusters: Swiss-T1 (EPFL, ETHZ, CSCS; integration by SCS AG/Compaq, SNL/ORNL; GeneProt), Pleiades
- GRIDs: Swiss GRID, EPFL GRID, EU GRID?, Global GRID?, GRID: ISS, CoreGRID
The horizontal axis is the year, the vertical axis industry relevance; the trend runs from "buy HPC" towards commodity computing.
SOS workshops in HPC Commodity Computing (Sandia NL, Oak Ridge NL, Switzerland); CoreGRID Summer School, 5-9.9.05 at EPFL.]
Parallel computer architectures accessible by EPFL
Cluster    Site      Vendor  node       procs/node  network 1       network 2
NoW        LIN-EPFL  Logics  Pentium 4  1           FE Bus (EPNET)  -
Pleiades1  STI-EPFL  Logics  Pentium 4  1           FE switch       -
Pleiades2  STI-EPFL  DELL    Xeon 64    1           GbE switch      -
Mizar      DIT-EPFL  Dalco   Opteron    2           Myrinet         -
BlueGene   DIT-EPFL  IBM     Power 4    2           3D Grid/Torus   Fat Tree
Horizon    CSCS      Cray    Opteron    1           3D Grid/Torus   -
SX-5       CSCS      NEC     vector     1           SMP             -
Regatta    CSCS      IBM     Power 4    1           Colony          -
WWW        -         -       -          -           Internet        -
Parallel computer architectures accessible by EPFL
Cluster    P      R          P*R        M          VM     C          VC      L     B
           [-]    [Gflop/s]  [Gflop/s]  [Gword/s]  [f/w]  [Gword/s]  [f/w]   [s]   [-]
NoW        10     6          60         0.8        7.5    0.0032     19'200  60    750
Pleiades1  132    5.6        739        0.8        7      0.4        1'792   60    750
Pleiades2  120    5.6        672        0.8        7      3.75       179     60    7'500
Mizar      160    9.6        1'536      1.6        6      10         154     10    2'500
BlueGene   4'096  5.6        22'937     0.7        8      1'065      22      2.5   4'800
Horizon    1'100  5.2        5'720      0.8        6.5    1'760      3.3     6.8   52'000
SX-5       16     8          128        8          1      128        -       -     -
Regatta    256    5          1'300      0.4        12     16         80      10*   640*
WWW        100/F  8          800        0.016      5'000  1000       64'000
Pleiades
Pleiades 1
[Network diagram: SWITCH ProCurve 5300 (76.8 Gb/s, 144 FE ports, 8 GbE ports); private subnet 192.168.0.0, EPNET subnet 128.178.87.0. Front-ends Fe1 and Fe2; 132 compute nodes (P4 2.8 GHz/2 GB); a 24-port FE switch serving 24 nodes (22 of them P4 1.8 GHz/1 GB); 10 PCs (P4 3/2.8 GHz/2 GB) in the LIN offices; one Itanium 1.3 GHz/2 GB node.]
Pleiades 2
[Network diagram: SWITCH Black Diamond 8810 (432 Gb/s, 144 GbE ports); private subnet 192.168.0.0, EPNET subnet 128.178.87.0; 120 Xeon (64-bit) 2.8 GHz/4 GB compute nodes connected over GbE.]
132 Pentium 4 (32-bit) 2.8 GHz processors -> 5.6 Gflop/s peak
2 GB dual-access DDR memory (max. 6.4 GB/s)
80 GB disk (7200 rpm)
Motherboard based on the Intel 875P chipset
0.5 MB (#1-#100) / 1 MB (#101-#132) secondary cache
Low cost (CHF 1'600 per processor)
NFS for I/O
Linux: SuSE 10.0
ifc and icc compilers from Intel, gcc
MKL mathematical library
Processors of Pleiades 1 cluster (November 2003)
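The 5.6 Gflop/s peak figure quoted above follows from the clock rate times the number of floating-point results per cycle; the factor of two is an assumption here (two double-precision flops per cycle from the SSE2 unit), chosen because it reproduces the slide's number:

```python
# Peak rate of one Pleiades node: clock rate x flops per cycle.
clock_hz = 2.8e9         # Pentium 4 / Xeon clock rate, from the slide
flops_per_cycle = 2      # assumed: SSE2, 2 double-precision flops/cycle
peak_gflops = clock_hz * flops_per_cycle / 1e9
print(peak_gflops, "Gflop/s")  # 5.6 Gflop/s, matching the slide
```

The same arithmetic gives the 5.6 Gflop/s peak of the Pleiades 2 Xeon nodes on the next slide.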
120 Xeon (64-bit) 2.8 GHz servers -> 5.6 Gflop/s peak
4 GB dual-access DDR memory (max. 6.4 GB/s)
40 GB disk (7200 rpm)
Motherboard based on the Intel E7520 chipset
1 MB secondary cache
NFS for user files, PVFS with 8 I/O nodes for scratch files
Low-voltage processors (140 W per server)
Linux: SuSE 10.0
ifort and icc compilers from Intel, gcc
MKL mathematical library
Processors of Pleiades 2 cluster (November 2005)
Software on Pleiades
SuSE Linux 10.0
SystemImager
OpenPBS resource management / Maui scheduling with fairshare
NFS, PVFS
MPICH, PVM, (MPI-FCI)
ifc/icc/gcc compilers
MKL (BLAS/LAPACK): basic mathematical library
Software on Pleiades
petsc, aztec: parallel iterative matrix solvers
ScaLAPACK, MUMPS: direct parallel matrix solvers
arpack, Parpack: eigenvalue solvers, serial and parallel
nag: general numerical library, serial
GSL: GNU scientific library
fftw: serial and parallel Fast Fourier Transforms ("the best in the west")
sprng: serial and parallel random number generator
OpenDX: visualisation system, serial
paraview: parallel visualisation system
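As a quick illustration of the basic operation behind the dense direct solvers in this list: factorise A and solve Ax = b, then check the residual. The sketch below uses NumPy's LAPACK bindings (an assumption made here for brevity) rather than the Fortran/C interfaces installed on the cluster, but the underlying routine is the same LAPACK solver.

```python
import numpy as np

# Solve A x = b by dense LU factorisation (LAPACK underneath),
# then verify the residual ||A x - b||.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
x = np.linalg.solve(A, b)
residual = np.linalg.norm(A @ x - b)
print(residual)   # small: near machine precision for this size
```

Iterative solvers such as petsc/aztec replace the factorisation by Krylov iterations and never form A's factors, which is what makes them attractive for the large sparse systems of Parts 2 and 3.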
Software on Pleiades
memcom/astrid/baspl++: programme environment based on domain decomposition
matlab: technical computing environment
Fluent
cfx/cfxturbogrid
icemcfd
gamess
others
Access to cluster Pleiades
Ali Tolou will open accounts.
Please put your name on the circulating sheet.
Check that .bashrc includes the links to Memcom/Baspl++/Astrid, Matlab, paraview, and the ifort compiler:
module add smr
module add matlab
module add paraview
module switch intel_comp/7.1 intel_comp/9.0
How to get exercises
To get exercises on your account:
scp -p -r ~rgruber/Cours5_6/* .
You will get:
daphne: The Gyrotron simulation programme
divrot: Eigenvalue computation of the 2D grad(div) and curl(curl) operators
DOOR: A test example of parsol
h2pm: The eigenvalue code using a H2pM
manuals: Manuals of MUMPS, Astrid, petsc, Aztec
parsol: Parallel 2D Poisson solver
playground: Benchmark codes, exercises, and results
S: A test case of the 3D ASTRID solver
Stokes: The 2D Stokes eigenvalue solver with div(v)=0 basis functions
terpsichore: The 3D ideal MHD stability programme
VMEC: The 3D MHD equilibrium programme
There is a README in each directory.
Contributions
W.A. Cooper (benchmarks VMEC/TERPSICHORE)
M. Deville (some tables)
J. Dongarra (Top500)
S. Merazzi (MEMCOM/BASPL++/ASTRID)
A. Tolou (Pleiades' system manager, Linux)
T.-M. Tran (MPI, math. libraries, benchmarks, Pleiades)
Announcement of complementary courses
Trach-Minh Tran, CRPP
MPI, Introduction to Parallel Programming
February 2006. Registration: [email protected], DIT
Some doctoral schools give 2 credits
Announcement of a "General Course"
Laurent Villard, CRPP, and André Jaun, KTH
Numerical methods for PDE
On-line course with exercises
6 credits
Questions ?