Architecture and Dataflow Overview
LHCb Data-Flow Review, September 2001
Beat Jost, CERN/EP
Overall Architecture
[Architecture diagram: the LHCb detector (VELO, TRACK, ECAL, HCAL, MUON, RICH) feeds the Level-0 and Level-1 front-end electronics at the 40 MHz bunch-crossing rate (40 TB/s). The Level-0 trigger (fixed latency 4.0 µs) accepts at 1 MHz (1 TB/s); the Level-1 trigger (variable latency <2 ms) accepts at 40-100 kHz. Front-end links feed Front-End Multiplexers (FEM) and Readout Units (RU), which send 6-15 GB/s into the Read-out Network (RN); Sub-Farm Controllers (SFC) pass events to the CPU farm running the Trigger Level 2 & 3 event filter (variable latency, L2 ~10 ms, L3 ~200 ms), and ~50 MB/s goes to storage. Timing & Fast Control distributes timing and trigger decisions and receives the throttle signal; Controls & Monitoring attaches via a LAN.]
Functional Components
• Timing and Fast Control (TFC)
• Front-End Multiplexing (FEM)
• Readout Unit (RU)
• Readout Network (RN)
• Sub-Farm Controllers (SFC)
• CPU Farm

External Interfaces/Sub-Systems
• Front-End Electronics
• Triggers (Level-0 and Level-1)
• Accelerator and Technical Services
• (Controls & Monitoring)
Functional Requirements
• Transfer the physics data from the output of the Level-1 electronics to the CPU farm for analysis, and later to permanent storage
• Dead-time-free operation within the design parameters
• Reliable and 'error-free', or at least error-detecting
• Provide timing information and distribute trigger decisions
• Provide monitoring information to the controls and monitoring system
• Support independent operation of sub-parts of the system (partitioning)
Performance Requirements
LHCb in Numbers
Number of channels         ~1'000'000
Bunch-crossing rate        40 MHz
Level-0 accept rate        <1.1 MHz
Level-1 accept rate        40 kHz
Readout rate               40 kHz
Event size                 100-150 kB
Event-building bandwidth   4-6 GB/s
Level-2 accept rate        ~5 kHz
Level-3 accept rate        ~200 Hz
Level-2/3 CPU power        2·10⁶ MIPS
Data rate to storage       ~50 MB/s
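As a cross-check using only the numbers above, the event-building bandwidth is just the readout rate times the event size:

    40 kHz × (100-150) kB = 4-6 GB/s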
LHCb DAQ in Numbers
Level-1 boards               ~400
Front-end links              ~400
Link technology              (optical?) GbE
FEM+RU                       ~120
Links into readout network   70-100
The system will be designed against the nominal Level-1 trigger rate of 40 kHz, with a possible upgrade path to a Level-1 trigger rate of 100 kHz (scalability; lead time ~6-12 months), as worked out below.
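Assuming the event size stays at 100-150 kB, the upgrade scales the event-building bandwidth proportionally; taking the upper event size, this is presumably where the 6-15 GB/s range in the architecture diagram comes from:

    40 kHz  × 150 kB =  6 GB/s   (nominal)
    100 kHz × 150 kB = 15 GB/s   (100 kHz upgrade)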
General Design Criteria
• Uniformity: as much commonality as possible among sub-systems and sub-detectors
  - Reduced implementation effort
  - Reduced maintenance effort (a bug fixed once is fixed for all)
  - Reduced cost
• Simplicity: keep individual components as simple as possible in functionality
  - Minimize probability of component failure (important for large numbers)
  - Keep protocols as simple as possible to maximize reliability
• Strict separation of controls and data paths throughout the system
  - Possibly at the cost of increased performance requirements in certain areas
Specific Choices (1)
• Only point-to-point links, no shared buses across modules...
  - For the physics data: obvious
  - For controls: desirable
  - Clear separation between data path and control path
• Link and network technology: (optical) Gb Ethernet as the uniform technology from the output of the Level-1 electronics to the input of the SFCs, because of its (expected) abundance and longevity (15+ years)
• Readout protocol: pure push-through protocol throughout the system, i.e. every source of data sends it on as soon as it is available
  - Only raw Ethernet frames, no higher-level network protocol (IP)
  - No vertical or horizontal communications besides data (→ throttle mechanism for flow control); see the sketch below
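To make these choices concrete, here is a minimal sketch of a push-through data source on Linux (assumed names and EtherType; not the LHCb implementation): each event fragment goes out as one raw Ethernet frame with no IP layer, and a throttle flag is the only flow control.

    /* Minimal sketch, assuming Linux AF_PACKET and a private EtherType
     * (0x88B5, the IEEE "local experimental" value). Pure push: one raw
     * Ethernet frame per fragment, no IP layer, no handshake. */
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <linux/if_packet.h>
    #include <net/ethernet.h>

    #define DAQ_ETHERTYPE 0x88B5   /* assumption: private EtherType */

    /* In the real system the throttle is a hardware back-pressure
     * signal to the TFC; here it is modelled as a flag the sender
     * polls before emitting each frame. */
    static volatile int throttled = 0;

    int send_fragment(int sock, int ifindex,
                      const unsigned char dst[6], const unsigned char src[6],
                      const void *payload, size_t len)
    {
        if (len > ETH_DATA_LEN)    /* fragment must fit one frame */
            return -1;
        while (throttled)          /* only flow control besides the data */
            usleep(100);

        unsigned char frame[ETH_FRAME_LEN];
        struct ether_header *eh = (struct ether_header *)frame;
        memcpy(eh->ether_dhost, dst, 6);
        memcpy(eh->ether_shost, src, 6);
        eh->ether_type = htons(DAQ_ETHERTYPE);
        memcpy(frame + sizeof *eh, payload, len);

        struct sockaddr_ll addr = {0};
        addr.sll_family  = AF_PACKET;
        addr.sll_ifindex = ifindex;
        addr.sll_halen   = 6;
        memcpy(addr.sll_addr, dst, 6);

        /* Push semantics: send as soon as the fragment is available;
         * no request/acknowledge exchange with the destination. */
        ssize_t n = sendto(sock, frame, sizeof *eh + len, 0,
                           (struct sockaddr *)&addr, sizeof addr);
        return n < 0 ? -1 : 0;
    }

A caller would open the socket once with socket(AF_PACKET, SOCK_RAW, htons(DAQ_ETHERTYPE)) (root required) and look up ifindex with if_nametoindex(), then reuse both for every fragment.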
Specific Choices (2)
• Integrated Experiment Control System (ECS)
  - Same tools and mechanisms for detector and dataflow controls
  - Preserving operational independence
• Crates and boards
  - The DAQ components will be housed in standard LHCb crates (stripped-down VME crates)
  - The components will be implemented on standard LHCb boards (9U×400 mm VME-like, without VME slave interface)
Constraints and Environment
• The DAQ system will be located at Point 8 of the LHC
• Some equipment will be located underground...
  - All the Level-1 electronics
  - FEM/RU?
  - (Parts of) the readout network?
• ...and some on the surface
  - (Parts of) the readout network
  - SFCs
  - CPU farm
  - Computing infrastructure (CPUs, disks, etc.)
  - Control room, consoles, etc.
• No DAQ equipment will be located in radiation areas
• Issues: cooling/ventilation, floor space
  - Optical Gb Ethernet allows free distribution of the equipment between underground and surface
Summary
• Design criteria: simplicity, commonality, uniformity
  - Potentially with higher cost in certain areas
  - Many advantages in the operation of the system
• Designed around Gb Ethernet as the basic link technology throughout the system (except for the individual farm nodes)
• Pure push protocol, with no higher-level network protocol
• No shared buses, for either data or controls
• Controls and data paths are separated throughout the system