Page 1: Computing in HEP (plan)

Reporter: Andrei.Chevel@pnpi.spb.ru -- From: PNPI

19 September 2000

Computing in HEP (plan)

Introduction
The scale of computing in LHC
Regional Computing
– Russian Regional Computing Facility
Institute Level Computing
– Computing cluster at PNPI
– Computing cluster at Stony Brook
Conclusion

Page 2: Computing in HEP (plan)

Introduction

PNPI - St. Petersburg Nuclear Physics Institute (http://pnpi.spb.ru)
– High Energy Physics Division
– Neutron Physics Department
– Theory Physics Division
– Molecular and Biology Physics Division

Page 3: Computing in HEP (plan)

CSD responsibilities

HEPD Centralized Computing (http://www.pnpi.spb.ru/comp_home.html)
– Computing Cluster
– Computing Server
HEPD Local Area Network
HEPD and PNPI Connectivity (excluding terrestrial channel)
HEPD and PNPI InfoStructure

Page 4: Computing in HEP (plan)

High Energy Physics Division Computer LAN

About 200 hosts
Most of them are 10 Mbit/sec
Central part of the LAN consists of 100 Mbit/sec segments (Full Duplex)
– Based on 3Com 3300 switches
LAN is distributed over 5 buildings
– Effective distance is about 800 m
Fiber optic cable is used between buildings

Page 5: Computing in HEP (plan)

Page 6: Computing in HEP (plan)

Page 7: Computing in HEP (plan)

Page 8: Computing in HEP (plan)

Related Proposals

CERN GRID site (http://grid.web.cern.ch/grid/)

– Particle Physics Data Grid (http://www.cacr.caltech.edu/ppdg/)

– High Energy Physics Grid Initiative (http://nicewww.cern.ch/~les/grid/welcome.html)

– MONARC Project (http://www.cern.ch/MONARC/)

Page 9: Computing in HEP (plan)

What is an inspiration?

Last year (1999) that book became widely known.

Immediately afterwards, a range of proposals was submitted to various agencies and funding bodies.

The main idea is to create a worldwide computing infrastructure.

Page 10: Computing in HEP (plan)

Page 11: Computing in HEP (plan)

RRCF in St.Petersburg

Possible partners:
– Petersburg Nuclear Physics Institute;
– S.Petersburg University;
– S.Petersburg Technical University;
– Institute for High Performance Computing & Data Bases;
– RUNNET?

Page 12: Computing in HEP (plan)

Russian Regional Computing Facility

Regional Computing Centre for LHC
– CSD takes part in the Russian effort to create the Russian Regional Computing Centre for LHC (see http://www.pnpi.spb.ru/RRCF/RRCF)

Page 13: Computing in HEP (plan)

RRCF in St.Petersburg

Page 14: Computing in HEP (plan)

RRCF in St.Petersburg: Computing Power

Total computing capacity: about 90K SPECint95
(see http://nicewww.cern.ch/~les/monarc/base_config.html)

A Pentium III/700 delivers about 35 SPECint95, so about 2570 processors are needed
– or about 640 four-processor machines
– or 640/4 (institutes) = about 160 machines per institute (a worked check follows below).
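The arithmetic can be checked with a short script. The 90K SPECint95 target, the 35 SPECint95 per Pentium III/700 processor and the four-processor boxes come from this slide; treating the 640/4 split as four partner institutes is an assumption. This is only an illustrative sketch:

    # Back-of-the-envelope check of the RRCF sizing quoted above.
    target_specint95 = 90_000     # total capacity target (~90K SPECint95)
    per_cpu_specint95 = 35        # one Pentium III/700, as quoted on the slide
    cpus_per_machine = 4          # four-processor boxes
    institutes = 4                # assumed number of partners sharing the load

    cpus = target_specint95 / per_cpu_specint95    # ~2571 processors
    machines = cpus / cpus_per_machine             # ~643 machines
    per_institute = machines / institutes          # ~160 machines per institute

    print(f"processors:     {cpus:.0f}")
    print(f"4-way machines: {machines:.0f}")
    print(f"per institute:  {per_institute:.0f}")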

Page 15: Computing in HEP (plan)

Computing Cluster

– PC/II/400/128, EIDE 6 GB
– PC/III/450/256, EIDE 6 GB
– PC/II/266/256, EIDE 5 GB
– Switch 100 Mb
– Dual PII/450/512, Ultra2 Wide SCSI 18 GB

Software (http://www.pnpi.spb.ru/pcfarm):
– RedHat 6.1
– CODINE (batch system; a job-submission sketch follows below)
– CERNlib
– Root
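The cluster's batch scheduling is handled by CODINE. As a minimal sketch of driving job submission from Python, the wrapper below shells out to qsub; submit_job and run_analysis.sh are hypothetical names, and the -N/-o/-e options are the conventional qsub ones and should be checked against the CODINE version actually installed:

    # Hypothetical wrapper around CODINE/qsub job submission (illustration only).
    import subprocess

    def submit_job(script_path, job_name="hep-analysis"):
        """Submit a batch script via qsub and return the scheduler's reply."""
        cmd = [
            "qsub",
            "-N", job_name,            # job name
            "-o", job_name + ".out",   # standard output file
            "-e", job_name + ".err",   # standard error file
            script_path,
        ]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        # run_analysis.sh stands in for a real analysis script.
        print(submit_job("run_analysis.sh"))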

Page 16: Computing in HEP (plan)

BCFpc hardware (picture)

Page 17: Computing in HEP (plan)

Another Example of a Computing Cluster

University at Stony Brook, Department of Chemistry, Laboratory of Nuclear Chemistry
– DEC Alpha 4100 (500 MHz);
– 32 machines (dual PIII/500, 256 MB);
– Tape Robot for 3 TB;
– RAID array 1.5 TB.

Page 18: Computing in HEP (plan)

Problems

Security
– against attacks from the Internet and the Intranet;
– against unplanned data losses.

Keeping the software base up to date on the whole cluster.

Saving network bandwidth by keeping a local cache of experimental data (a sketch follows below).

An appropriate batch system.
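One way to approach the bandwidth problem is a simple cache-on-miss layer in front of the remote data store, as sketched below. The cache directory, the remote host and the use of scp are illustrative assumptions, not the cluster's actual setup:

    # Sketch of a local cache for experimental data: look in the local cache
    # first and copy from the remote store only on a cache miss.
    import os
    import subprocess

    CACHE_DIR = "/data/cache"                      # hypothetical cache directory
    REMOTE = "datastore.example.org:/export/hep"   # hypothetical remote store

    def fetch(filename):
        """Return a local path for filename, copying it only if not cached."""
        local_path = os.path.join(CACHE_DIR, filename)
        if not os.path.exists(local_path):
            os.makedirs(CACHE_DIR, exist_ok=True)
            # scp stands in for whatever copy tool the site actually uses.
            subprocess.run(["scp", REMOTE + "/" + filename, local_path], check=True)
        return local_path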

Page 19: Computing in HEP (plan)

Possible Plan for Small Physics Laboratory

To install the hardware: 3-5 PCs (about 0.8-1.3 GHz, 0.5 GB of main memory), a DLT stacker based on DLT-8000, a switch with a 1 Gbit uplink, etc.

To install the software: Linux, CERNlib, Objectivity/DB, GLOBUS, etc.

To prepare and test logical connectivity to CERN and to the Regional Computing Facility for LHC (a minimal check is sketched below).
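For the connectivity step, a minimal reachability check can be scripted as below; the host/port pairs are examples only, not the actual list of CERN or RRCF services:

    # Minimal TCP reachability check for the "test logical connectivity" step.
    import socket

    ENDPOINTS = [
        ("lxplus.cern.ch", 22),     # example: an interactive service at CERN
        ("www.pnpi.spb.ru", 80),    # example: a service on the RRCF side
    ]

    def reachable(host, port, timeout=5.0):
        """Return True if a TCP connection to host:port can be opened."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host, port in ENDPOINTS:
        print(host, port, "ok" if reachable(host, port) else "unreachable")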

Page 20: Computing in HEP (plan)

Conclusion

A relatively new situation for many small and midrange laboratories (like PNPI):
– The main direction of HEP computing at PNPI is to create a good front-end to High Performance Computing Facilities plus MSS outside the Institute.
– We have to pool all available financial, technical and administrative resources and plan to work in close collaboration.