ATLAS Computing at Harvard
John Huth
ATLAS Computing at Harvard
• Two functions:
  – Supply computing power (storage and CPU) for Harvard investigators to support analysis and simulation
  – Act as a Tier 2 facility, jointly with BU (Northeast Tier 2 – NET2)
• Recent addition of substantial local resources for computing, connected to the Open Science Grid
The ATLAS computing model uses a tiered system to give all members of the collaboration speedy access to the reconstructed data needed for analysis and to the raw data needed for monitoring, calibration, and alignment.
ATLAS Computing Model

Tier-0 at CERN
– Archives and distributes RAW data
– Provides first-pass processing
– Restricted to the central production group

~10 Tier-1 Facilities
– Store select RAW data and derived data, and perform processing on RAW data
– Restricted to working group managers

Regional Tier-2 Facilities
– Provide resources for local research, such as analysis, simulation, and calibration
– Open to all members of the collaboration

Local Tier-3 Facilities
– Typically clusters housed at a university or lab
– Allow fast analysis of derived datasets
– Typically open only to local members

(The tier roles are summarized in the sketch below.)
[Diagram: data flow between tiers — RAW data from ATLAS into the Tier-0, RAW and derived data distributed down the tiers, and simulated data flowing back from the Tier-2s]
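Since the tier roles above reduce to a mapping from tier to data products and access policy, a few lines of code can summarize them. This is a purely illustrative sketch; none of these names come from actual ATLAS software, and the "holds" lists simply paraphrase the slide.

```python
# Illustrative only: the ATLAS tiered model as a lookup table.
TIERS = {
    "Tier-0 (CERN)":      {"holds": ["RAW", "derived"],
                           "access": "central production group"},
    "Tier-1 (~10 sites)": {"holds": ["RAW", "derived", "simulated"],
                           "access": "working group managers"},
    "Tier-2 (regional)":  {"holds": ["derived", "simulated"],
                           "access": "all collaboration members"},
    "Tier-3 (local)":     {"holds": ["derived"],
                           "access": "local members"},
}

def tiers_holding(kind):
    """Tiers where a given kind of data (RAW/derived/simulated) lives."""
    return [name for name, t in TIERS.items() if kind in t["holds"]]

print(tiers_holding("derived"))  # per this table, every tier keeps some derived data
```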
Computing Requirements
Open Science Grid

OSG is a U.S. distributed computing infrastructure designed for large-scale scientific research, integrating computing and storage resources from more than 50 sites.
OSG provides the software framework, middleware, and oversight for more than 35 Virtual Organizations (VOs), which in turn provide local resources and user services.
OSG is funded and supported by the NSF and DOE.
Harvard and Scientific Computing
• A recent and dramatic increase in support for scientific computing at Harvard
  – Hardware, facilities, and personnel
• Commitment to a major scientific computing center along Oxford Street
  – Currently rental space at the 1 Summer Street facility
  – Dedicated support for high-end computing supplied by the University
Harvard’s Role in ATLAS Computing Model
Site                       CPU (kSi2K)   Disk (TB)   Tape (TB)
BNL Tier-1                     4900          2000        1000
CERN Tier-0                    4480           330        1620
BU ATLAS Cluster                700           236           0
Harvard Odyssey Cluster        5600           300           0
Northeast Tier-2 (NET2) = BU ATLAS Cluster + Harvard Odyssey Cluster (totals sketched below)
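To make the table concrete, here is a back-of-envelope tally of the combined NET2 resources. The numbers are taken from the table above; the comparison to BNL is just arithmetic on those ratings, not an official capacity statement.

```python
# Sum the NET2 resources from the table above (BU + Harvard).
sites = {
    "BU ATLAS Cluster":        {"cpu_ksi2k": 700,  "disk_tb": 236},
    "Harvard Odyssey Cluster": {"cpu_ksi2k": 5600, "disk_tb": 300},
}
net2_cpu  = sum(s["cpu_ksi2k"] for s in sites.values())  # 6300 kSi2K
net2_disk = sum(s["disk_tb"]   for s in sites.values())  # 536 TB

# For scale: the BNL Tier-1 is rated at 4900 kSi2K in the same table,
# so the combined NET2 CPU rating exceeds it.
print(net2_cpu, net2_disk, net2_cpu > 4900)
```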
Overall flow of data into NET2
FAS Computing
15th April 2008 MIT Network Meeting
Odyssey physical installation at 1 Summer Street
Capabilities of Odyssey
• General-purpose Intel x86_64 cluster
• 4096 CPU cores (9,543 GHz aggregate clock), 16 TB DRAM
• Fully non-blocking InfiniBand interconnect
• ~31.7 TFlops at 84% efficiency (a back-of-envelope check follows)
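As a rough consistency check, the quoted efficiency follows from the core count and aggregate clock if one assumes 4 double-precision flops per core per cycle, typical of 2008-era Intel Xeons. That flops-per-cycle figure is an assumption, not stated on the slide.

```python
# Back-of-envelope check of the Odyssey numbers above.
aggregate_ghz   = 9543      # sum of all core clocks, from the slide
flops_per_cycle = 4         # assumed: 128-bit SSE, 2 DP adds + 2 DP muls
peak_tflops     = aggregate_ghz * flops_per_cycle / 1000.0
sustained       = 31.7      # measured TFlops, from the slide

print(round(peak_tflops, 1))              # ~38.2 TFlops theoretical peak
print(round(sustained / peak_tflops, 2))  # ~0.83, consistent with "~84%"
```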
FAS Cluster configuration - 1
Odyssey Performance
Tests of the Odyssey cluster showed that it could handle a sustained load of up to 600 jobs, even under heavy concurrent use. These were mainly simulation jobs with light I/O.
Computing Capabilities
• Based on the CPU ratings, the Harvard cluster has as much aggregate CPU power as all other US ATLAS Tier 2s combined.
• Accounting for concurrent usage by other applications, we can likely at least triple the CPU power available to NET2 with Odyssey online (see the sketch below).
• Available to Harvard ATLAS on a priority basis: a huge resource for analysis and data storage.
  – An example of the "leverage" envisaged for Tier 2s.
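A back-of-envelope reading of the "at least triple" claim, using the resource table earlier. The 25% share of Odyssey is an illustrative assumption about how much capacity other applications leave free for ATLAS, not a figure from the talk.

```python
# Pre-Odyssey, NET2's CPU rating is the BU cluster alone (table above).
bu_only       = 700          # kSi2K
odyssey       = 5600         # kSi2K
free_fraction = 0.25         # assumed share of Odyssey free for ATLAS use

net2_with_odyssey = bu_only + free_fraction * odyssey
print(net2_with_odyssey / bu_only)   # -> 3.0: even a quarter of Odyssey
                                     # triples NET2's available CPU
```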