26 Nov 1999 F Harris LHCb computing workshop

Development of LHCb Computing Model
F Harris

Overview of the proposed workplan to produce a 'baseline computing model for LHCb'
WHY are we worrying NOW about this?

- HOFFMAN REVIEW (starting Jan 2000): how will the LHC experiments do their computing? Answers due in late 2000:
  - the basic logical data flow model, patterns of use, and resources for tasks
  - the preferred distributed resource model (CERN, regions, institutes)
- Computing MoUs in 2001
- Countries (UK, Germany, …) are planning now for 'development' facilities
Proposed project organisation to do the work
Tasks and Deliverables (1)

- Logical data flow model (all data-sets and processing tasks)
  -> Deliverable: Data Flow Model Specification
- Resource requirements: data volumes, rates, and CPU needs by task (these are essential parameters for model development) - measure current status and predict the future
  -> Deliverable: URD giving distributions
- Use Cases: map demands for reconstruction, simulation, analysis, calibration and alignment onto the model (e.g. physics groups working); document 'patterns of usage' and the resulting demands on resources
  -> Deliverable: 'workflow specification'
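The kind of back-of-envelope arithmetic the resource-requirements deliverable would tabulate can be sketched as below. The 200 Hz event rate and 200 kB event size appear on the dataset diagram of this slide; treating 200 kB as the raw-event size, and assuming 10^7 s of effective data taking per year, are illustrative assumptions, not LHCb figures.

```python
# Back-of-envelope resource estimate of the kind the URD would tabulate.
# Rate (200 Hz) and event size (200 kB) are taken from the dataset diagram;
# the 10^7 s/year of effective data taking is an assumed round number.
RATE_HZ = 200           # events per second written by the DAQ
RAW_EVENT_KB = 200      # assumed raw event size in kB
SECONDS_PER_YEAR = 1e7  # assumed effective data-taking time per year

events_per_year = RATE_HZ * SECONDS_PER_YEAR
raw_tb_per_year = events_per_year * RAW_EVENT_KB * 1e3 / 1e12  # kB -> TB

print(f"{events_per_year:.1e} events/year, {raw_tb_per_year:.0f} TB raw data/year")
```

Under these assumptions the DAQ writes 2 x 10^9 events and a few hundred TB of raw data per year, which is the scale the model has to distribute across facilities.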
[Diagram: LHCb datasets and processing stages (CPU and storage requirements must be updated).
Data production: Monte Carlo generator -> detector simulation -> generator data and sim. raw data; detector/DAQ -> raw data; reconstruction and event tag generation -> reconstructed data and event tags.
Data analysis: analysis group production -> analysis data and analysis group tags; analysis job -> analysis job results; the detector/DAQ and analysis cycles feed each other.
Figures noted on the diagram: event sizes of 200 kB, 100 kB, 70 kB, 150 kB, 10 kB and 0.1 kB (tags), at an event rate of 200 Hz.]
A general view of analysis (G Corti) (? patterns of group and user analysis)

[Diagram: data flow for analysis. Real data: data acquisition -> raw data -> reconstruction -> DST (reconstructed particles, primary vertex). Simulation: MC generator -> detector simulation -> MC "truth" data, fed through the same reconstruction. (Group) analysis selects "reconstructed" physics channels, e.g. B0d -> ππ, B0d -> D*K, B0d -> ρπ, B0d -> J/ψ K0s, …, followed by physicist analysis.]
Status of simulated event production (since June) - E van Herwijnen

Sicb version                  Event type   Sec/event   MB/1000 events   Nr. of produced events
v116/n (corrected smearing)       41           50            700               10k
v116/n (corrected smearing)   350000          140           1200               10k
v116/n (corrected smearing)   350100          190           1250               10k
v116/n (corrected smearing)   350200          190           1200               10k
v118                              41          145           1000               50k
v118                          350000          275           1750               50k
v118                          350100          305           1800               50k
v118                          350200          300           1750               50k
v118 (short hcal)                 41          145           1000               10k
v118 (short hcal)             350200          305           1800               10k
v119                              41          150            900               10k
v119                          350000          260           1500               10k
v119                          350100          310           1600               10k
v119                          350200          300           1500               10k
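The per-event figures in the production table translate directly into aggregate CPU time and storage. As a sketch, the four v118 samples (50k events each) imply the following totals; the numbers are read off the table above, and the arithmetic is the only addition here.

```python
# Aggregate CPU time and storage implied by the v118 rows of the table above.
# Each entry: (sec/event, MB per 1000 events, events produced).
v118_productions = {
    41:     (145, 1000, 50_000),
    350000: (275, 1750, 50_000),
    350100: (305, 1800, 50_000),
    350200: (300, 1750, 50_000),
}

cpu_days = sum(s * n for s, _, n in v118_productions.values()) / 86400
storage_gb = sum(mb * n / 1000 for _, mb, n in v118_productions.values()) / 1000

print(f"v118 production: {cpu_days:.0f} CPU-days, {storage_gb:.0f} GB")
```

So a single Sicb version's production run already represents hundreds of CPU-days and hundreds of GB, which is why these measurements feed the resource-requirements deliverable.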
Tasks and Deliverables (2)

- Resource distribution: produce a description of the distribution of LHCb institutes, regional centres and resources (equipment and people), and the connectivity
  -> Deliverable: resource map with network connectivity; list of people and equipment…
- Special requirements for remote working (OS platforms, s/w distribution, videoconferencing, …)
  -> Deliverable: URD on 'Remote working…'
- Technology tracking (follow PASTA; data management s/w, GAUDI data management, …)
  -> Deliverables: technology trend figures; capabilities of data management s/w
Mock-up of an Offline Computing Facility for an LHC Experiment at CERN
(Les Robertson, July 99, with 'old' experiment estimates)

Purpose: investigate the feasibility of building LHC computing facilities using current cluster architectures and conservative assumptions about technology evolution, looking at scale & performance, technology, power, footprint, cost, reliability and manageability.

http://nicewww.cern.ch/~les/monarc/lhc_farm_mock_up/index.htm
Farm capacity and evolution

year                     2004      2005      2006      2007
total CPU (SI95)       70'000   350'000   520'000   700'000
disks (TB)                 40       340       540       740
LAN thr-put (GB/sec)        6        31        46        61
tapes (PB)                0.2         1         3         5
tape I/O (GB/sec)         0.2       0.3       0.5       0.5
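The table above can be treated as data to expose the implied ramp-up: a minimal sketch, using only the numbers in the table, showing the year-on-year growth factor of each capacity.

```python
# The farm capacity table as data, with derived year-on-year growth factors.
years = [2004, 2005, 2006, 2007]
farm = {
    "total CPU (SI95)":     [70_000, 350_000, 520_000, 700_000],
    "disks (TB)":           [40, 340, 540, 740],
    "LAN thr-put (GB/sec)": [6, 31, 46, 61],
    "tapes (PB)":           [0.2, 1, 3, 5],
    "tape I/O (GB/sec)":    [0.2, 0.3, 0.5, 0.5],
}

for name, vals in farm.items():
    growth = [round(b / a, 2) for a, b in zip(vals, vals[1:])]
    print(f"{name}: growth factors {growth}")
```

The steep 2004-to-2005 jump (5x in CPU, 8.5x in disk) is the start-up of data taking; afterwards capacity grows more gently.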
[Diagram: CMS Offline Farm at CERN circa 2006 (lmr for MONARC study, April 1999).
Roughly 1400 boxes in 160 clusters and 40 sub-farms of processors on a farm network; 5400 disks in 340 arrays and tapes with 100 drives on a storage network; LAN-SAN and LAN-WAN routers link the farm to CERN.
Link throughputs noted on the diagram: 0.8 Gbps (DAQ), 0.8 Gbps, 1.5 Gbps, 3 Gbps*, 5 Gbps, 8 Gbps, 12 Gbps*, 250 Gbps, 480 Gbps*.
* assumes all disk & tape traffic on the storage network; double these numbers if all disk & tape traffic goes through the LAN-SAN router.]
Tasks and Deliverables (3)

- Candidate computer models evaluation:
  - Map data and tasks to facilities (try different scenarios)
  - Develop a spreadsheet model with key parameters to get 'average' answers
  - Develop a simulation model with distributions
  - Evaluate the different models (performance, cost, risk, …)
  - Establish a BASELINE MODEL
  -> Deliverable: BASELINE COMPUTING MODEL together with cost, performance and risk analysis
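The 'spreadsheet model with key parameters' is essentially average-answer arithmetic over a handful of inputs, mapped onto a site scenario. A minimal sketch of that idea follows; every number here (event count, event size, SI95-seconds per event, the CERN/regional-centre split) is a placeholder assumption, not an LHCb figure.

```python
# Minimal sketch of a 'spreadsheet model': average answers from key
# parameters, mapped onto an assumed CERN / regional-centre split.
# All numbers are illustrative placeholders, not LHCb figures.
params = {
    "events_per_year": 2e9,
    "raw_kb_per_event": 200,
    "reco_si95_sec_per_event": 100,   # assumed reconstruction cost
}

scenario = {"CERN": 0.4, "regional centres": 0.6}  # assumed resource split

# Average installed CPU needed to keep up over ~3.15e7 seconds in a year.
total_cpu_si95 = (params["events_per_year"]
                  * params["reco_si95_sec_per_event"] / 3.15e7)
total_raw_tb = params["events_per_year"] * params["raw_kb_per_event"] / 1e9

for site, share in scenario.items():
    print(f"{site}: {share * total_cpu_si95:.0f} SI95, "
          f"{share * total_raw_tb:.0f} TB raw")
```

Varying the scenario dictionary is exactly the 'try different scenarios' step; the simulation model then replaces these averages with distributions.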
The Structure of the Simulation Program (I Legrand)

[Diagram: architecture of the MONARC simulation program. A GUI and a user directory (config files, initialising data, definitions of activities/jobs) sit on top of the Monarc package, which comprises a network package, a data model package, a processing package, auxiliary tools/graphics and statistics, all driven by a SIMULATION ENGINE taking parameters and prices as input. Dynamically loadable modules model the components: regional centre, farm, AMS, CPU node, disk, tape; job, active jobs, physics activities, init functions; LAN, WAN, messages; data container, database, database index.]
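The core pattern in that architecture is a discrete-event engine driving pluggable component modules. The sketch below illustrates that pattern only; it is not the MONARC implementation (which was written in Java), and the CpuNode module and its numbers are invented for the example.

```python
import heapq

# Toy discrete-event engine in the spirit of the MONARC layout: an engine
# schedules and dispatches timed events, and component modules (here a CPU
# node) plug into it. Illustrative only, not the MONARC code.

class Engine:
    def __init__(self):
        self.now = 0.0
        self._queue = []   # heap of (time, seq, callback)
        self._seq = 0      # tie-breaker so callbacks never get compared

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, callback = heapq.heappop(self._queue)
            callback()

class CpuNode:
    """A loadable component: processes submitted jobs one after another."""
    def __init__(self, engine, si95):
        self.engine, self.si95 = engine, si95
        self.busy_until = 0.0

    def submit(self, job_si95_sec, on_done):
        start = max(self.engine.now, self.busy_until)
        self.busy_until = start + job_si95_sec / self.si95
        self.engine.schedule(self.busy_until - self.engine.now, on_done)

engine = Engine()
node = CpuNode(engine, si95=10)     # a 10 SI95 processor
done_times = []
for _ in range(3):                  # three 50 SI95-sec jobs, queued serially
    node.submit(50, lambda: done_times.append(engine.now))
engine.run()
print(done_times)                   # jobs finish at t = 5, 10, 15
```

Modules for networks, disks or tapes would plug into the same engine, which is what makes the component set 'dynamically loadable'.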
Proposed composition and organisation of working group

- Contacts from each country
- Contacts from other LHCb projects (can/will have multi-function people): DAQ, reconstruction, analysis, MONARC, IT (PASTA + ?)
- Project plan (constraint: timescales to match the requests from the review)
- Monthly meetings? (with videoconferencing); 1st meeting the week after LHCb week (first try at planning the execution of tasks)
- Documentation: all on WWW (need a WEBMASTER)