1
Status report on DAQ
E. Dénes – Wigner RCF/NFI
ALICE Bp meeting – March 30 2012
2
Visit of the ALICE DAQ group to Wigner RCF
•March 6-7, 2012
•CERN participants:
 • Paolo Giubellino, ALICE spokesperson
 • Pierre Vande Vyvre, head of ALICE DAQ group
 • Filippo Costa, member of ALICE DAQ group
•From Wigner RCF:
 • Péter Lévai
 • Ervin Dénes
 • Tivadar Kiss
 • György Rubin
 • Tamás Tölyhi
•Students:
 • Kristóf Blutman, BME
 • Gábor Kiss, ELTE
 • Hunor Melegh, BME
3
Main talks
•Péter Lévai: About Wigner RCF
•Paolo Giubellino: ALICE upgrade strategy (incl. the detector upgrades)
•Pierre Vande Vyvre: ALICE online upgrade*
•2 half-day discussions
•Self introduction of students
•Roadmap*
•Visit to Cerntech
* See following slides
4
Present Online Architecture
[Architecture diagram: the CTP, LTUs and TTC distribute the L0, L1a and L2 trigger signals to the detector front-end electronics (FERO, RCU), with BUSY feedback; event fragments flow over 360 detector DDLs (plus 120 DDLs and 10 DDLs serving the HLT farm of FEPs/H-RORCs, 10 D-RORCs and 10 HLT LDCs) into 430 D-RORCs hosted in 125 detector LDCs; sub-events are built into events by 75 GDCs over the event building network (EDM, load-balancing LDCs), written as files by 30 TDSM nodes and moved over the storage network to 75 TDS and the PDS, i.e. archiving on tape in the Computing Centre (Meyrin); 18 DSS and 60 DA/DQM nodes complete the system.]
5
ALICE Upgrade Strategy
•Strategy recently approved by ALICE presents a global and coherent plan to upgrade the experiment for 2018 (Long Shutdown 2, LS2)
 • "Upgrade Strategy for the ALICE Central Barrel"
   https://aliceinfo.cern.ch/ArtSubmission/node/108
•Key concepts for running the experiment at high rate
 • Pipelined electronics
 • Triggering the TPC would limit this rate → continuous readout
 • Reduce the data volume by topological trigger and online reconstruction
 • Major upgrade of the detector electronics and of the online systems
•Online upgrade design:
 • Data buffer and processing off detector
 • Fast Trigger Processor (FTP): 2 hw trigger levels to accommodate various detector latencies and the maximum readout rate
 • HLT: 2 sw trigger levels:
   • ITS, TRD, TOF, EMC (reduce the rate before building the TPC event)
   • Final decision using the data from all detectors
 • Common DAQ/HLT farm to minimize cost while preserving the present flexibility of running modes
6
Trigger Rates
• Key concepts for running ALICE at high rate
 • Hardware trigger-less architecture whenever needed and affordable
 • Pipelined electronics and continuous readout
 • Reduction of the data volume by topological trigger and 2-step online reconstruction
Trigger level   Detectors                    pp beams                        Pb-Pb beams
No trigger      ITS, TPC, TRD, EMCal, PHOS   continuous read-out at 10 MHz
Level 0 (hw)    TOF (Pb-Pb)                  2000 kHz, 1.2 µs latency        50 kHz, 1.2 µs latency
Level 1 (hw)    TOF (p-p), Muon              10-20 kHz, 10 µs latency        20 kHz, 10 µs latency
Level 2 (sw)                                 10-25 kHz, 1 s latency
Level 3 (sw)                                 5-25 kHz, 10 s latency
7
Present TPC Readout
[Diagram of the present TPC readout: front-end cards (FECs, 12-13 per readout partition) feed 216 RCUs at 160 MB/s each; each RCU drives a DDL at 2.0 Gb/s, 216 links towards the DAQ (D-RORCs) and 216 towards the HLT (H-RORCs); the triggers are distributed by the CTP over the TTC.]
• Present readout:
 • Links: DDL at 2 Gb/s; 216 DDLs for the TPC, used at 1.6 Gb/s (aggregate bandwidth worked out in the sketch below)
 • PC adapters: D-RORC and H-RORC
 • Up to 6 RORCs/PC
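As a quick cross-check of the present TPC readout figures, a minimal sketch (Python) of the aggregate bandwidth; all input numbers are taken from this slide:

```python
# Aggregate bandwidth of the present TPC readout
# (a sketch; link and RCU counts are the ones quoted on the slide).
N_DDL = 216          # DDLs for the TPC
DDL_USED_GBPS = 1.6  # effective usage per DDL (link capacity is 2.0 Gb/s)
RCU_MBPS = 160       # payload delivered by each RCU, MB/s

aggregate_gbps = N_DDL * DDL_USED_GBPS    # ~346 Gb/s over all TPC links
aggregate_GBps = N_DDL * RCU_MBPS / 1000  # ~34.6 GB/s of RCU payload
print(f"TPC aggregate link usage : {aggregate_gbps:.0f} Gb/s")
print(f"TPC aggregate RCU payload: {aggregate_GBps:.1f} GB/s")
```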
8
Readout Upgrade
[Diagram of the upgraded readout: FEC2 front-end cards send data over ~7000 links at 10 Gb/s into the FLPs, which forward it over 600 links at 10.0 or 40 Gb/s through the network of the common DAQ and HLT systems to the EPNs.]
• Upgrade readout:
 • Non-zero suppressed TPC data: 57 Tb/s (570 kchannels × 10 bits × 10 MHz; see the sketch below)
 • ~7000 links at 10 Gb/s for the TPC; ~7800 links for the whole experiment
 • TBD: FEC2 characteristics (GEM readout, very simple FEC vs. more links)
• DDL3 and RORC3 for the LS2 upgrade (ALICE common solution)
 • Address the needs of the new architecture for the period after LS2 (2018)
 • DDL3 links at 10 Gb/s
 • Exact requirements to be addressed (radiation tolerance/hardness)
 • Different physical layers possible (CERN GBT, Eth, FCS, wavelength multiplex.)
• First-Level Processors (FLPs)
 • ~650 FLPs needed: 12 detector links at 10 Gb/s, 1 network link at 10 or 40 Gb/s
 • Data readout and first-level data processing:
   • ZS, cluster finder and compression could be performed by the RORC FPGA
   • Regional online reconstruction
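The raw TPC rate and the corresponding link count follow from simple arithmetic; a minimal sketch (Python) using only the figures quoted on this slide (reading the gap between the ideal ~5700 links and the quoted ~7000 as overhead and headroom is my interpretation, not the slide's):

```python
# Raw, non-zero-suppressed TPC data rate and the number of 10 Gb/s links needed
# (a sketch; channel count, ADC width and sampling rate are from the slide).
channels = 570_000     # TPC channels
bits_per_sample = 10   # ADC bits per sample
sampling_hz = 10e6     # continuous read-out at 10 MHz

raw_bps = channels * bits_per_sample * sampling_hz  # 5.7e13 b/s = 57 Tb/s
links_min = raw_bps / 10e9                          # ideal minimum of 10 Gb/s links
print(f"raw TPC rate  : {raw_bps / 1e12:.0f} Tb/s")
print(f"10 Gb/s links : >= {links_min:.0f} (slide quotes ~7000 for the TPC)")
```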
9
Future evolution of DDL & RORC
•New common DAQ/HLT DDL2 and RORC2
•Prototype under design by Heiko Engel (Udo Kebschull's team, Frankfurt) and Tivadar Kiss (Budapest); ready in 2012
 • It will address the upgrade needs with the present architecture for the period (2014-16) between Long Shutdown 1 (LS1) and LS2
 • Includes 12 DDL2 links at 6 Gb/s; 6 links to the DAQ LDC → 36 Gb/s
•PCIe Gen2 ×8 lanes (500 MB/s/lane) → 32 Gb/s of I/O capacity
•Data processing in the FPGA (e.g. cluster finding)
• Currently: 5 links at 2 Gb/s per PC → 10 Gb/s of I/O capacity
• Prototype under development
 • 12 links at 6 Gb/s; 6 links to the DAQ LDC → 36 Gb/s
 • PCIe Gen2 ×8 lanes (500 MB/s/lane) → 32 Gb/s of I/O capacity
• Final system
 • 12 links at 10 Gb/s per PC
 • PCIe Gen3 ×16 lanes → 128 Gb/s of I/O capacity
(the I/O figures are worked out in the sketch below)
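A minimal sketch (Python) of the I/O arithmetic behind these figures; the Gen2 per-lane payload rate and the link counts are from the slide, while taking PCIe Gen3 at its 8 Gb/s per-lane signalling rate is my reading of the quoted 128 Gb/s:

```python
# PCIe I/O capacity vs. aggregate link bandwidth per RORC generation
# (a sketch; see the lead-in for which numbers are from the slide).
current_io  = 5 * 2.0             # today: 5 links x 2 Gb/s          = 10 Gb/s per PC
gen2_x8_io  = 500 * 8 * 8 / 1e3   # 500 MB/s/lane x 8 lanes -> Gb/s  = 32 Gb/s
ddl2_to_ldc = 6 * 6.0             # 6 DDL2 links x 6 Gb/s            = 36 Gb/s
gen3_x16_io = 8.0 * 16            # 8 Gb/s/lane x 16 lanes           = 128 Gb/s
ddl3_per_pc = 12 * 10.0           # 12 DDL3 links x 10 Gb/s          = 120 Gb/s
print(current_io, gen2_x8_io, ddl2_to_ldc, gen3_x16_io, ddl3_per_pc)
```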
10
Upgrade Online (after LS2)
[Diagram of the online architecture after LS2: the detectors (ITS, TPC, TRD, TOF, PHOS, EMC, Muon) send data over ~7800 DDL3 links to ~650 FLPs with 10 Gb/s input links; the trigger detectors also feed the FTP, which issues the L0/L1 hardware trigger levels; the FLPs connect through a 10 or 40 Gb/s network of the common DAQ and HLT system to ~1250 EPNs, where the L2 and L3 software trigger levels are applied.]
11
Next Steps
•Major upgrade foreseen for the online systems during LS2
 • Improve the Strategy Note (Mar '12)
 • More detailed and optimized cost estimate (Mar '12)
•Prototyping, some using the present online systems ('12)
 • Continue to gain experience with using HLT online results as the basis of offline reconstruction, e.g. the cluster finding
 • Essential to reach the necessary data reduction factors after LS2
•R&D program ('12-'16)
 •Fast Trigger Processor
 •Readout links
 •Event-building and HLT dataflow network
 •Online reconstruction: online use of "offline software"
 •Efficient use of new computing platforms (many-cores and GPUs)
•Online Conceptual Design Report (TBD)
 •Requirements from detectors
 •Key results from R&D
12
Short- and Mid-Term SW, FW, HW Tasks (for us)
• During LS1 some detectors will upgrade to the C-RORC
 • C-RORC prototype: June 2012, Tivadar
 • C-RORC firmware: September 2012, Filippo
 • C-RORC software: Q4 2012, Ervin
 • Tailoring of the DDG for the C-RORC: Q4 2012, Filippo, Ervin
• Technology Research Project
 • Members: Pierre, Filippo, Tivadar, Ervin, Gyuri, Gusty?, students?
 • Preparing a list of technological companies, deadline: end of April
 • Preparing questionnaires for the meetings with companies: end of May
 • Scheduling meetings with industrial companies:
   • At CERN: middle of June
   • In the US: October
 • Preparation of a spreadsheet for comparing the different solutions:
   • Preliminary: end of September; final: Q1/2013
 • Setting up and running demonstrations in Budapest: PCIe-over-fibre, 2012
 • Recommended capital investments:
   • High-end server motherboard
   • PCIe expansion box with over-fibre connection
13
Longer-term tasks
• Building demonstrations with selected technologies
• Deadline for defining the common interface for the detectors: June 2014
• Deadline for a Conceptual Design Report: ~ End of 2014
• Deadline for delivering prototypes of "DDL3" for the detectors: Q4 2015
• Deliver prototype software (Q4 2015)
• Production of DDL3 (Q3 2016 - Q2 2017)
• Release production software (Q2 2017)
• Detector installation and commissioning at Point 2 (Q3 2017)
• A small DAQ with limited performance (but full functionality) has to be installed by Q3 of 2017
Thank you for your attention
Reserved slides
16
A quick look at the past: the DDL and the D-RORC
•Project started in 1995
•Strong push from the management: George Vesztergombi and now Péter Lévai
•Work of a solid and competent team: Ervin Dénes, Tivadar Kiss, György Rubin, Csaba Soós, and many others contributed to this successful project
•The Budapest team delivered to the ALICE experiment a common and reliable solution for the data transfer from the detectors to the online systems: the DDL and the D-RORC
17
DDL and D-RORC
•~ 500 DDLs and ~450 D-RORCs
•Collect all the data of ALICE (~2 PB/year)
•Configure the electronics of several detectors, in particular the biggest one (TPC)
•Goal for the future: Repeat this success story!
18
Continuous Readout
[Timeline diagram: continuous readout of TRD, EMC, TPC, ITS and TOF across time slices ..., t-2x, t-x, t, t+x, t+2x, t+3x; data belonging to events n, n+1, n+2 arrive interleaved in these slices and are gathered by the FLPs for event building & processing. A toy sketch of this slicing follows.]
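To make the idea concrete, here is a toy sketch (Python, purely illustrative; none of the names or numbers below come from the slide) of grouping a continuously read-out, timestamped data stream into fixed-length time frames on an FLP, with hits of different events interleaved:

```python
# Toy illustration of continuous readout: timestamped hits from several events
# arrive interleaved; the FLP groups them into fixed-length time frames instead
# of waiting for a hardware trigger. All names and numbers are illustrative only.
from collections import defaultdict

FRAME_NS = 100_000  # assumed frame length (100 us), not a value from the slide

# (timestamp_ns, detector, event) - interleaved hits from events n, n+1, n+2
hits = [
    (120_500, "TPC", "n"),   (130_200, "ITS", "n+1"), (155_700, "TOF", "n"),
    (210_900, "TRD", "n+1"), (223_400, "TPC", "n+2"), (305_100, "EMC", "n+2"),
]

frames = defaultdict(list)
for t, det, ev in hits:
    frames[t // FRAME_NS].append((det, ev))  # assign each hit to its time frame

for frame_id in sorted(frames):
    print(f"frame {frame_id}: {frames[frame_id]}")
# Each frame mixes fragments of several events; associating the fragments to
# events (and building full events) is left to the online reconstruction.
```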
19
Online reconstruction
•50 kHz Pb-Pb collisions inspected with the least possible bias using topological triggers and online particle identification
•HI run 2011: online cluster finding gave a data compression factor of 4 for the TPC
•Two HLT scenarios for the upgrade:
 1. Partial event reconstruction: further factor of 5 → overall reduction factor of 20. Close to what could be achieved and tested now. Rate to tape: 5 kHz.
 2. Full event reconstruction: overall data reduction by a factor of 100. Rate to tape: 25 kHz.
•HI run 2018: min. bias event size ~75 MB → ~4 MB (scenario 1) or ~1 MB (scenario 2) after data volume reduction. Throughput to mass storage: 20 GB/s (see the arithmetic sketch below).
•Target rate after LS2 can only be reached with online reconstruction
 • Build on the success of last year with TPC cluster finding
 • Extremely ambitious: online calibration and data processing
 • Essential to gain experience with using online results as the basis of offline reconstruction, e.g. the cluster finding. Mandatory to reach the necessary data reduction factor after LS2.
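A minimal sketch (Python) of the event-size and throughput arithmetic behind the two scenarios, using only the numbers quoted on this slide:

```python
# Event-size reduction and mass-storage throughput for the two HLT scenarios
# (a sketch; all input numbers are taken from the slide).
raw_mb = 75.0  # min. bias Pb-Pb event size expected in 2018, MB

scenarios = {
    "partial reconstruction (factor 20, 5 kHz to tape)": (20, 5_000),
    "full reconstruction (factor 100, 25 kHz to tape)": (100, 25_000),
}
for name, (reduction, rate_hz) in scenarios.items():
    size_mb = raw_mb / reduction                # ~3.75 MB or ~0.75 MB per event
    throughput_GBps = size_mb * rate_hz / 1000  # MB/s -> GB/s
    print(f"{name}: {size_mb:.2f} MB/event, {throughput_GBps:.0f} GB/s to storage")
# Both scenarios land close to the ~20 GB/s to mass storage quoted on the slide.
```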
20
Network Routers and Switches for the Event Building
•Present DAQ network (Force10, now Dell)
 • ExaScale E1200i: 3.5 Tb/s
 • Capacity of the present DAQ router not adequate (see the sketch below)
•Technology 1: Ethernet (Brocade MLX series)
 • MLXe-32: up to 15.3 Tb/s, 32 line-card slots, up to 256×10GbE ports or 32×100GbE ports
 • Line cards: 2×100 Gb/s or 8×10 Gb/s
•Technology 2: InfiniBand (Mellanox)
 • MIS6500: up to 51.8 Tb/s, 648 ports at 40 Gb/s
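A rough feasibility check (Python sketch) of why the present router is not adequate and how the candidate fabrics compare; the assumption that every FLP saturates its 10 Gb/s uplink is mine, while the FLP count, port speeds and switch capacities are from these slides:

```python
# Rough sizing of the event-building fabric after LS2
# (a sketch; assumes all ~650 FLPs saturate their 10 Gb/s uplinks simultaneously).
n_flp = 650           # first-level processors (from the upgrade slides)
flp_uplink_gbps = 10  # per-FLP network link

needed_tbps = n_flp * flp_uplink_gbps / 1000  # ~6.5 Tb/s into event building

fabrics_tbps = {
    "Force10/Dell ExaScale E1200i (present)": 3.5,
    "Brocade MLXe-32 (Ethernet)": 15.3,
    "Mellanox MIS6500 (InfiniBand)": 51.8,
}
for name, capacity in fabrics_tbps.items():
    verdict = "sufficient" if capacity >= needed_tbps else "too small"
    print(f"{name}: {capacity} Tb/s -> {verdict} for ~{needed_tbps:.1f} Tb/s")
```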
21
Processing Power
•Current HLT processing rates with full TPC reconstruction:
 • Either ~200 Hz central events (65 MB) or ~800 Hz min. bias events
 • 216 front-end nodes (equivalent to the FLPs)
 • TPC cluster finder on the H-RORC FPGA and coordinate transformation by the CPU
 • 60 tracking nodes with GPUs (equivalent of the EPNs)
•Assuming linear scaling of rates and using current HLT technology:
 • Level 2 (reconstruction using ITS, TRD, EMC):
   50 kHz min. bias events (15 MB, ~4 times less CPU) →
   50 kHz / 800 Hz × 60 nodes × 1/4 = 1250 nodes (CPU+GPU)
 • Level 3 (full reconstruction incl. TPC):
   25 kHz central events →
   25 kHz / 200 Hz × 60 nodes = 7500 nodes (CPU+GPU)
 • 8750 nodes of 2011 → 1250 nodes of 2018 (see the sketch below)
•Event Processing Nodes: ~1250 EPNs
 • Current HLT processing nodes:
   • Intel Xeon E5520, 2.2 GHz, 4 cores: global track merging + reconstruction, some trigger algorithms
   • Nvidia GTX480/580: ~90% of the tracking
 • Performance gain of 33%/yr over 7 years → factor 7 from 2011 to 2018
 • Overall chip performance gain → significant R&D on computing and algorithm concepts
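A minimal sketch (Python) of the extrapolation from the 2011 HLT farm to the 2018 EPN farm, recomputing only the Level-3 node estimate and the technology-scaling step; the rates, node counts and the 33%/yr gain are the ones quoted above:

```python
# Extrapolating the 2011 HLT farm to the post-LS2 EPN farm
# (a sketch; all inputs are the figures quoted on the slide).
nodes_2011_tracking = 60    # GPU tracking nodes in the 2011 HLT
central_rate_2011_hz = 200  # full TPC reconstruction rate for central events

# Level 3 (full reconstruction incl. TPC) at 25 kHz, assuming linear scaling:
level3_nodes = 25_000 / central_rate_2011_hz * nodes_2011_tracking  # = 7500 nodes

# Together with the Level-2 estimate this gives ~8750 "2011-class" nodes.
nodes_2011_equivalent = 8750

# 33%/yr performance gain over 7 years (2011 -> 2018) is roughly a factor 7:
gain = 1.33 ** 7                      # ~7.4
epn_2018 = nodes_2011_equivalent / 7  # ~1250 EPNs
print(f"Level-3 nodes with 2011 technology: {level3_nodes:.0f}")
print(f"technology gain 2011 -> 2018: x{gain:.1f}")
print(f"EPNs needed in 2018: ~{epn_2018:.0f}")
```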