
Cluster currently consists of: 1  Dell PowerEdge 2950



Page 1
Page 2

Cluster currently consists of:

1 Dell PowerEdge 2950: 3.6 GHz dual quad-core Xeons (8 cores) and 16 GB of RAM. Original GRIDVM, SL4 VMware host.

1 Dell PowerEdge SC1435: 2.8 GHz dual quad-core Opterons (8 cores) and 16 GB of RAM. File server with 8.6 TB of disk space.

11 Dell PowerEdge SC1435: 2.8 GHz dual quad-core Opterons (8 cores each) and 16 GB of RAM. Worker nodes.

16 Dell PowerEdge M605 blades: 2.8 GHz dual six-core Opterons (12 cores each) and 32 GB of RAM. Worker nodes.

Total: 296 cores
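
The quoted total can be checked directly from the node list above; a minimal sketch (node names abbreviated):

```python
# Quick check of the quoted total against the node list above.
nodes = [
    ("PowerEdge 2950 (GRIDVM host)", 1, 8),
    ("PowerEdge SC1435 (file server)", 1, 8),
    ("PowerEdge SC1435 (worker nodes)", 11, 8),
    ("PowerEdge M605 blades (worker nodes)", 16, 12),
]

total_cores = sum(count * cores for _, count, cores in nodes)
print(total_cores)  # 296
```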

Page 3

25 June 2013, DST / NRF Research Infrastructure

UJ-ATLAS: ATHENA installed, using the Pythia event generator to study various Higgs scenarios.
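
For context, a standalone sketch of this kind of event generation, using Pythia 8 through its Python interface with gluon-fusion Higgs production switched on. This is only an illustrative stand-in: the actual UJ-ATLAS work drives Pythia inside the ATHENA framework via its job options, and the beam energy, Higgs mass and process choice below are assumptions.

```python
import pythia8  # Pythia 8 Python interface (assumed to be built and available)

# Illustrative stand-in for the Higgs studies mentioned above: generate a few
# gg -> H events with standalone Pythia 8. The beam energy, Higgs mass and
# process choice are assumptions; the real work runs Pythia through ATHENA.
pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 8000.")   # 8 TeV pp collisions (assumed)
pythia.readString("HiggsSM:gg2H = on")   # SM Higgs production via gluon fusion
pythia.readString("25:m0 = 125.")        # Higgs mass hypothesis in GeV (assumed)
pythia.init()

for _ in range(10):
    if not pythia.next():
        continue
    # Trivial per-event "analysis": count final-state particles.
    n_final = sum(1 for i in range(pythia.event.size())
                  if pythia.event[i].isFinal())
    print("final-state particles:", n_final)

pythia.stat()  # cross-section and error statistics
```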

Page 4

Diamond Ore Sorting (Mineral-PET)
S Ballestrero, SH Connell, M Cook, M Tchonang, Mz Bhamjee + Multotec
GEANT4 Monte Carlo

Figure panels: online diamond detection; Monte Carlo simulation.

Page 5

Diamond Ore Sorting (Mineral-PET)

Simulation of radiation dose as a function of position from a body of radioactive material (a toy sketch follows below).

Figure panels: PET point source image with automatic detector parameter tweaking; simplified numerical model; full-physics Monte Carlo; misaligned (before optimisation); after optimisation.
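
As a rough illustration of the dose-versus-position item above, here is a toy Monte Carlo sketch. It is not the group's GEANT4 simulation: only geometric falloff and a single exponential attenuation term are modelled, and the geometry and material constants are made-up values.

```python
import numpy as np

# Toy Monte Carlo: relative dose versus distance from a uniformly active
# sphere. Not the full-physics GEANT4 model: only 1/r^2 geometric falloff and
# one exponential attenuation term are included, and the radius, attenuation
# coefficient and sample size are illustrative assumptions.
rng = np.random.default_rng(42)

R = 0.05            # sphere radius [m] (assumed)
mu = 10.0           # effective attenuation coefficient [1/m] (assumed)
n_decays = 200_000

# Sample decay positions uniformly inside the sphere (rejection sampling).
pts = rng.uniform(-R, R, size=(3 * n_decays, 3))
pts = pts[np.linalg.norm(pts, axis=1) <= R][:n_decays]

# Score relative dose at a few detector positions along the x axis.
for d in (0.1, 0.2, 0.4, 0.8):
    r = np.linalg.norm(pts - np.array([d, 0.0, 0.0]), axis=1)
    dose = np.mean(np.exp(-mu * r) / r**2)
    print(f"distance {d:.2f} m   relative dose {dose:.3e}")
```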

Page 6

Monte Carlo (GEANT4)

Particle Tracking – Accelerator Physics, Detector Physics

Page 7

The stellar astrophysics group at UJ

Astrophysics projects on the UJ fast computing cluster:

A large group of researchers, co-led by Chris Engelbrecht at UJ and involving three SA institutions, an Indian institution and eight postgraduate students in total, is working on important corrections to current theories of stellar structure and evolution. To this end, we use the UJ cluster for two essential functions:

1. Performing Monte Carlo simulations of random processes in order to determine the statistical significance of purported eigenfrequency detections in telescope data. A typical run takes about 48 hours on the majority of the cluster nodes (a toy sketch follows after this list).

2. Running stellar models containing new physics in the stellar structure codes (not in use yet; implementation is expected later in 2013).
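
A toy version of the significance test in item 1, using a Lomb-Scargle periodogram on synthetic white noise. The observation epochs, noise level and measured peak power are illustrative assumptions; the real runs use the group's own codes and distribute many more trials across the cluster's nodes.

```python
import numpy as np
from scipy.signal import lombscargle

# Toy significance test: how often does pure noise, sampled at the same
# (uneven) epochs as the observations, produce a periodogram peak at least as
# high as the one seen in the data? All numbers here are illustrative.
rng = np.random.default_rng(0)

t = np.sort(rng.uniform(0.0, 30.0, 300))   # observation epochs [days] (assumed)
freqs = np.linspace(0.5, 20.0, 1000)       # trial frequencies [cycles/day]
ang_freqs = 2.0 * np.pi * freqs            # lombscargle expects angular frequencies
observed_peak = 12.0                       # peak power measured in the data (assumed)

n_trials, exceed = 500, 0
for _ in range(n_trials):
    noise = rng.normal(0.0, 1.0, t.size)   # white noise with the data's scatter
    power = lombscargle(t, noise - noise.mean(), ang_freqs)
    exceed += power.max() >= observed_peak

print(f"false-alarm probability ~ {exceed / n_trials:.4f}")
```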

Page 8
Page 9

Page 10
Page 11
Page 12

Successful test on 20 September 2012: CHAIN interoperability, with participation from some of the SAGrid sites (UJ, UFS, UCT and CHPC).

Shown below – gLite sites in SA

Page 13

Features of the UJ Research Cluster

Maintain interoperability on two grids: OSG and gLite

Virtual machines (compute element and user interface for each platform)

Shown below – OSG sites
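
To make the "user interface for each platform" concrete, here is a rough sketch of a test-job submission from a gLite user-interface VM. This is generic gLite WMS usage, not a UJ-specific configuration, and it assumes a valid VOMS proxy already exists on the UI.

```python
import os
import subprocess
import tempfile

# Sketch of a test-job submission from a gLite user-interface machine.
# Generic gLite WMS usage, not a UJ-specific configuration; it assumes a
# valid VOMS proxy has already been created with voms-proxy-init.
JDL = """\
Executable    = "/bin/hostname";
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
"""

with tempfile.NamedTemporaryFile("w", suffix=".jdl", delete=False) as f:
    f.write(JDL)
    jdl_path = f.name

try:
    # -a: delegate a proxy automatically for this single submission.
    subprocess.run(["glite-wms-job-submit", "-a", jdl_path], check=True)
finally:
    os.unlink(jdl_path)
```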

Page 14

Currently in the middle of an upgrade:

Nodes and virtual machines are running a spread of Scientific Linux 4, 5 and 6 to keep services online.

The system administrator is a South African currently based at CERN in Europe, and is able to administer the cluster using remote tools.

Using PXE and Puppet, a node can be rebooted and reinstalled to any version of Scientific Linux and EMI (European Middleware Initiative) within 45 minutes.

Page 15

Trying to maintain usability by:
- SAGrid
- ATLAS (Large Hadron Collider)
- ALICE (Large Hadron Collider)
- e-NMR (bio-molecular)
- OSG

ATLAS jobs have been running for the last 9 months, and in the production queue (in test mode) for the last 4 weeks.

Difficult to keep both OSG and gLite running – when one demands an upgrade, the other breaks. Important though – grids are all about joining computers, and we are helping to keep compatibility between the two big physics grids.

Currently on the to-do list:
- Finish the partially completed Scientific Linux upgrade
- Return OSG to functional status
- Set up an IPMI implementation – allow complete remote control at a lower level than the OS (sketched below)
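
A sketch of the kind of out-of-band control the IPMI item would add: force a node to PXE-boot into a reinstall and power-cycle it, independently of the state of its OS. The use of ipmitool, and the host name and credentials, are assumptions for illustration, not the cluster's actual tooling.

```python
import subprocess

# Out-of-band node control via IPMI (sketch only). The use of ipmitool and
# the host name / credentials below are illustrative assumptions, not the
# cluster's actual tooling.
def ipmi(host: str, user: str, password: str, *args: str) -> None:
    """Run one ipmitool command against a node's management controller."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, *args]
    subprocess.run(cmd, check=True)

def reinstall_node(host: str, user: str, password: str) -> None:
    """Force the next boot to PXE (network reinstall) and power-cycle the node."""
    ipmi(host, user, password, "chassis", "bootdev", "pxe")
    ipmi(host, user, password, "chassis", "power", "cycle")

if __name__ == "__main__":
    reinstall_node("wn-07.grid.example", "admin", "secret")  # hypothetical node
```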