
AMS-02 Computing and Ground Data Handling

CHEP 2004, September 29, 2004, Interlaken

Alexei Klimentov — Alexei.Klimentov@cern.ch, ETH Zurich and MIT


Outline

• AMS: a particle physics experiment
• STS-91 precursor flight
• AMS-02 ISS mission
• Classes of AMS data
• Data flow
• Ground centers
• Data transmission SW
• AMS-02 distributed Monte-Carlo production


AMS: a particle physics experiment in space

PHYSICS GOALS:

• Accurate, high-statistics measurements of charged cosmic-ray spectra in space above 0.1 GV
• Measurement of nuclei and e-/e+ spectra
• The study of dark matter (perhaps ~90% of the Universe?)
• Determination of the existence or absence of antimatter in the Universe: look for negative nuclei such as anti-helium and anti-carbon
• The study of the origin and composition of cosmic rays: measure isotopes D, He, Li, Be, ...

AMS, the Alpha Magnetic Spectrometer, is scheduled for a three-year mission on the International Space Station (ISS).


• Magnet: Nd2Fe14B
• TOF: trigger, velocity and Z
• Si Tracker: charge sign, rigidity, Z
• Aerogel Threshold Cherenkov: velocity
• Anticounters: reject multi-particle events

Results:
• Antimatter search: anti-He/He < 1.1x10^-6 (upper limit)
• Charged cosmic-ray spectra: p, D, e-/e+, He, N; geomagnetic effects on cosmic rays, under/over geomagnetic-cutoff components
• 100M events recorded; trigger rates 0.1-1 kHz; DAQ livetime 90%

The operation principles of the apparatus were tested in space during a precursor flight: AMS-01.


AMS-02

• Superconducting Magnet (B = 1 Tesla).
• Transition Radiation Detector (TRD): rejects protons at better than the 10^-2 level, lepton identification up to 300 GeV.
• Time of Flight counters (TOF): time-of-flight measurement to an accuracy of 100 ps.
• Silicon Tracker: 3D particle-trajectory measurement with 10 um coordinate resolution, plus energy-loss measurement.
• Anti-Coincidence Veto Counters (ACC): reject particles that leave or enter through the shell of the magnet.
• Ring Imaging Cherenkov counter (RICH): measures the velocity and charge of particles and nuclei.
• Electromagnetic Calorimeter (ECAL): measures the energy of gamma rays, e- and e+, and distinguishes e-/e+ from hadrons.


DAQ Numbers

Sub-detector   Channels   Total raw (Kbit)
Tracker         196,608       3,146
ToF + ACC           384          49
TRD               5,248          84
RICH             27,760         348
ECAL              2,592          47
Total           232.6 K      ~3,700

Raw data rate: 3.7 Mbit x 200-2000 Hz = 0.7-7 Gbit/s (a quick check follows below).

After on-board data reduction and filtering: 2 Mbit/s.

AMS power budget: 2 kW.
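As a sanity check of these numbers, a minimal Python sketch using only the figures quoted above:

```python
# Sanity check of the DAQ rate budget quoted above.
RAW_EVENT_MBIT = 3.7        # total raw event size (~3700 Kbit, from the table)
DOWNLINK_MBIT_S = 2.0       # average science-data downlink after reduction

for trigger_hz in (200, 2000):
    raw_gbit_s = RAW_EVENT_MBIT * trigger_hz / 1000.0
    reduction = RAW_EVENT_MBIT * trigger_hz / DOWNLINK_MBIT_S
    print(f"{trigger_hz:4d} Hz: raw {raw_gbit_s:.1f} Gbit/s, "
          f"on-board reduction ~1:{reduction:.0f}")
# ->  200 Hz: raw 0.7 Gbit/s, on-board reduction ~1:370
# -> 2000 Hz: raw 7.4 Gbit/s, on-board reduction ~1:3700
```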


[Figure: AMS]


AMS-02 Ground Support Centers

Payload Operations Control Center (POCC) at CERN (first 2-3 months in Houston, TX):
• the AMS "counting room" and usual source of commands
• receives Health & Status (H&S), monitoring and science data in real time
• receives NASA video
• voice communication with NASA flight operations

Science Operations Center (SOC) at CERN (first 2-3 months in Houston, TX):
• receives a complete copy of ALL data
• data processing and science analysis
• data archiving and distribution to Universities and Laboratories

Ground Support Computers (GSC) at Marshall Space Flight Center, Huntsville, AL:
• receive data from NASA, buffer them, and retransmit them to the Science Operations Center

Regional Centers (Aachen, ITEP, Karlsruhe, Lyon, Madrid, Milan, MIT, Nanjing, Shanghai, Taipei, Yale, ... 19 centers in all):
• analysis facilities to support geographically close Universities


Classes of AMS Data (Health & Status data)

Critical Health and Status Data: the status of the detector
• magnet state (charging, persistent, quenched, ...)
• input power (1999 W)
• temperature (low, high)
• DAQ state (active, stuck)

Rate < 1 kbit/s.

Needed in Real Time (RT) by the AMS Payload Operations and Control Center (POCC), the ISS crew and the NASA ground.


Classes of AMS Data (Monitoring data)

Monitoring (House-Keeping, Slow Control) Data
• all slow-control data from all slow-control sensors
• data rate ~10 kbit/s
• needed in Near Real Time (NRT) by the AMS POCC
• visible to the ISS crew
• complete copy "later" (close to NRT) for science analysis


Classes of AMS Data (Science data)

Science Data
• events and sub-detector calibrations
• a sample of approximately 10% goes to the POCC to monitor detector performance in RT
• complete copy "later" to the SOC for event reconstruction and physics analysis
• 2 Mbit/s orbit average


Classes of AMS Data (Flight Ancillary Data)

Flight Ancillary Data
• ISS latitude, attitude, speed, etc.
• needed in Near Real Time (NRT) by the AMS POCC
• complete copy "later" (close to NRT) for science analysis
• 2 kbit/s


Commands

Command types:
• simple, fixed (a few bytes; require H&S-data visibility)
• short, variable length (< 1 KByte; require monitoring data)
• files, variable length (KBytes to MBytes; require science data)

In the beginning we may need to command intensively; over the long haul we anticipate:
• a few simple or short commands per orbit
• occasional (daily to weekly) periods of heavy commanding
• very occasional (weekly to monthly) file loading

Command sources:
• Ground: one source of commands, the POCC
• Crew via ACOP: contingency use only, simple or short commands

A hypothetical sketch of these command classes follows.
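As an illustration only, the three command classes could be modelled as below. This is a hypothetical Python sketch: the size limits for "simple" and "file" commands and the example payload are our assumptions, not flight values (only the 1 KByte figure comes from the slide).

```python
from dataclasses import dataclass
from enum import Enum

class CmdClass(Enum):
    SIMPLE = "simple"   # fixed, a few bytes; requires H&S data visibility
    SHORT = "short"     # variable length, < 1 KByte; requires monitoring data
    FILE = "file"       # KBytes to MBytes; requires the science-data path

# Assumed size limits in bytes (the SIMPLE and FILE limits are made up).
LIMITS = {CmdClass.SIMPLE: 8, CmdClass.SHORT: 1024, CmdClass.FILE: 16 * 2**20}

@dataclass
class Command:
    cls: CmdClass
    payload: bytes

    def validate(self) -> None:
        if len(self.payload) > LIMITS[self.cls]:
            raise ValueError(f"{self.cls.value} command exceeds "
                             f"{LIMITS[self.cls]} bytes")

Command(CmdClass.SHORT, b"SET TRIGGER_PRESCALE 4").validate()  # made-up payload
```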


AMS Crew Operations Post (ACOP)

ACOP is a general-purpose computer; its main duties are to:
• serve as an internal recording device to preserve data
• allow burst-mode playback operations at up to 20 times the original speed to assist in data management
• allow access to the MRDL link (another path to the ground), enabling AMS to take advantage of future ISS upgrades such as 100BaseT MRDL
• provide potential for additional data-compression/triggering functions to minimize the data downlink
• serve as an additional command interface:
  - upload of files to AMS (adjust main triggering)
  - direct commanding to AMS


AMS Ground Data Handling

How much of the AMS data gets to the ground centers, and how soon, determines how well:
• detector performance can be monitored
• detector performance can be optimized
• detector performance can be tuned into physics


[Figure: AMS Ground Data Centers]


Ground Support Computers

At Marshall Space Flight Center (MSFC), Huntsville, AL:
• receive data from the NASA Payload Operations and Integration Center (POIC)
• buffer data until retransmission to the AMS Science Operations Center (SOC) and, if necessary, to the AMS Payload Operations and Control Center (POCC)
• run unattended 24 h/day, 7 days/week
• must buffer about 600 GB (data for 2 weeks; see the check below)
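The buffer size is consistent with the data rates quoted on the "Classes of AMS Data" slides, as this quick sketch shows:

```python
# Two weeks of downlink at the orbit-average rates quoted earlier.
SCIENCE_BPS = 2e6       # science data, orbit average
MONITOR_BPS = 10e3      # monitoring (house-keeping) data
ANCIL_BPS = 2e3         # flight ancillary data

gb_per_day = (SCIENCE_BPS + MONITOR_BPS + ANCIL_BPS) * 86400 / 8 / 1e9
print(f"{gb_per_day:.0f} GB/day, {14 * gb_per_day:.0f} GB in two weeks")
# -> ~22 GB/day, ~300 GB in two weeks; a 600 GB buffer gives a ~2x margin.
```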


Payload Operations and Control Center

• the AMS "counting room"
• the usual source of AMS commands
• receives H&S, monitoring, science and NASA data in real-time mode
• monitors the detector state and performance
• processes about 10% of the data in near-real-time mode to provide fast information to the shift taker
• video distribution "box"
• voice loops with NASA


Science Operations Center

• receives a complete copy of ALL data
• data reconstruction and processing: generates event summary data and performs event classification
• science analysis
• archives and records ALL raw, reconstructed and H&S data
• data distribution to AMS Universities and Laboratories


Regional Centers

• analysis facilities to support local AMS Universities and Laboratories
• Monte-Carlo production
• mirroring of DSTs (ESDs)
• provide access to SOC data storage (event visualization, detector and data-production status, samples of data, video distribution)


Telescience Resource Kit (TReK)

TReK is a suite of software applications that provide:
• local ground-support-system functions
• an interface with the POIC to utilize POIC remote ground-support-system services

TReK is suitable for individuals or payload teams that need to monitor and control low/medium data-rate payloads.

The initial cost of a TReK system is less than $5,000.

M.Schneider MSFC/NASA


ISS Payload Telemetry and Command Flow

[Diagram: payload telemetry from the ISS and Space Shuttle is relayed via TDRS to the White Sands Complex and on to the POIC (EHS, PDSS, PPS); processed payload and user data flow to the Telescience Support Centers (TSC's), U.S. investigator sites and International Partners (IP's), while payload uplinks travel the reverse path through SSCC/MCC-H.]

M.Schneider MSFC/NASA


Telemetry Services

[Diagram: PDSS payload and UDSM packets (UDP) flow from the ISS via TDRS and the White Sands Complex to the POIC (EHS, PDSS) and on to TReK at the remote site, where a telemetry-processing application backed by a telemetry database serves user programs through the TReK API; user-defined GSE packets (UDP) and custom data packets (TCP) are also handled.]

TReK telemetry capabilities:
• receive, process, record, forward and playback telemetry packets
• display, record and monitor telemetry parameters
• view incoming telemetry packets (hex/text format)
• telemetry processing statistics

M.Schneider MSFC/NASA


Command Services

[Diagram: commands built in TReK pass through a command-processing application and command database, over a TCP command connection to the POIC (EHS) with its own command database, then through the SSCC, the White Sands Complex and TDRS up to the ISS.]

TReK command capabilities:
• command system status & configuration information
• remotely initiated commands (built from the POIC DB)
• remotely generated commands (built at the remote site)
• command updates
• command responses
• command session recording/viewing
• command track
• command statistics

M.Schneider MSFC/NASA


Internet Voice Distribution System (IVoDS)

[Diagram: IVoDS user client PCs at remote sites reach the MSFC Payload Operations and Integration Center over NASA, research and public IP networks; encrypted IP voice packets pass through a VPN server to conference servers and VoIP telephony gateways, which bridge onto the EVoDS voice switch, voice loops and EVoDS keysets; administrator and PAYCOM client PCs sit on the local LAN.]

• Windows NT/2000 PC with COTS sound card and headset
• Web-based for easy installation and use
• PC location very mobile: anywhere on the LAN
• Challenge: minor variations in PC hardware and software configurations at remote sites


K.Nichols MSFC/NASA


IVoDS User Client

Capabilities:
• monitor 8 conferences simultaneously
• talk on one of these eight conferences using the spacebar, the 'Click to Talk' button, or 'Mic Lock'
• user selects from an authorized subset of available voice conferences
• volume control/mute for individual conferences
• assign talk and monitor privileges per user and conference
• show lighted talk traffic per conference
• talk to the crew on Space (Air) to Ground, if enabled by PAYCOM
• save and load conference configurations
• set password

K.Nichols MSFC/NASA


Data Transmission

Given the long running period (3+ years) and the way data will be transmitted from the detector to the ground centers, high-rate data transfer between MSFC (AL) and the AMS centers (POCC, SOC) will be of paramount importance.


Data Transmission SW

We need:
• to speed up data transfer
• to encrypt sensitive data and not encrypt bulk data
• to run in batch mode with automatic retry in case of failure

We started to look around and came up with bbftp (we are still looking for good network-monitoring tools). bbftp was developed in BaBar and is used to transmit data from SLAC to IN2P3 in Lyon; we adapted it for AMS and wrote service and control programs, sketched below.
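A minimal sketch of such a control program, assuming bbftp is on the PATH. The host, user and paths are placeholders, and the option letters follow the bbftp manual of that era (-u user, -p parallel streams, -e control command); they should be checked against the installed bbftp version:

```python
import subprocess
import time

def bbftp_put(local: str, remote: str, host: str, user: str,
              streams: int = 4, retries: int = 5) -> bool:
    """Transfer one file with bbftp, retrying on failure (for batch-mode use)."""
    cmd = ["bbftp", "-u", user, "-p", str(streams),
           "-e", f"put {local} {remote}", host]
    for attempt in range(1, retries + 1):
        if subprocess.call(cmd) == 0:
            return True
        time.sleep(60 * attempt)      # back off before the next attempt
    return False

# e.g. bbftp_put("/ams/data/run00123.dat", "incoming/run00123.dat",
#                "pcamss0.cern.ch", "amsprod")
```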


Data Transmission SW (the inside details)

Server:
• copy data files between directories (optional)
• scan data directories and make a list of files to be transmitted
• purge successfully transmitted files and do book-keeping of transmission sessions

Client:
• periodically connect to the server and check whether new data are available
• bbftp new data and update the transmission status in the catalogues (see the sketch below)
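In rough pseudocode, the client cycle might look as follows. This is a sketch only: the Catalogue class is a hypothetical stand-in for the real book-keeping tables, and bbftp_put is the wrapper from the previous slide:

```python
import time

class Catalogue:
    """Stand-in for the book-keeping tables of transmission sessions."""
    def __init__(self):
        self.entries = []                 # [local, remote, status] lists

    def new_files(self):
        return [e for e in self.entries if e[2] == "NEW"]

    def mark(self, entry, status):
        entry[2] = status

def client_cycle(cat: Catalogue, host: str, user: str) -> None:
    """One client pass: check for new data, bbftp it, update the catalogue."""
    for entry in cat.new_files():
        local, remote, _ = entry
        ok = bbftp_put(local, remote, host, user)   # wrapper from above
        cat.mark(entry, "DONE" if ok else "FAILED")

# Run periodically, e.g. every five minutes:
# while True: client_cycle(cat, "pcamss0.cern.ch", "amsprod"); time.sleep(300)
```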


Data Transmission Tests

[Figure: data transmission test results]


AMS Distributed Data Production

Computer simulation of the detector response is a good opportunity not only to study detector performance but also to test the hardware and software solutions that will be used for AMS-02 data processing.

Data are generated at 19 Universities and Laboratories, transmitted to CERN, and then made available for analysis.


Year 2004 MC Production

• Started Jan 15, 2004
• Central MC database
• Distributed MC production
• Central MC storage and archiving
• Distributed access


AMS Distributed Data Production

• CORBA client/server for inter-process communication
• Central relational database (ORACLE) to store regional-center descriptions, the list of authorized users, the list of known hosts, job parameters, file catalogues, versions of programs and executable files, etc.
• Automated and standalone modes for processing jobs

Automated mode:
• the job description file is generated by remote user request (via the Web)
• the user submits the job file to a local batch system
• the job requests from the central server: calibration constants, slow-control corrections, and service information (e.g. the path to store DSTs)
• the central server keeps a table of active clients and the number of processed events, and handles all interactions with the database and data transmission

Standalone mode:
• the job description file is generated by remote user request (via the Web)
• the user receives a stripped database version and submits the job
• the client does not communicate with the central server during job execution
• DSTs and log files are bbftp'ed to CERN by the user

A rough sketch of the automated-mode handshake follows.
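The sketch below illustrates the automated-mode flow only: the real system uses CORBA against an ORACLE database, while here Python's xmlrpc and made-up method names stand in for those interfaces:

```python
import xmlrpc.client

def run_automated_job(server_url: str, job_id: str) -> None:
    """Automated mode: fetch job inputs from the central server, run,
    then report back. All method names are illustrative, not the AMS API."""
    server = xmlrpc.client.ServerProxy(server_url)
    calib = server.get_calibration_constants(job_id)
    slow_control = server.get_slow_control_corrections(job_id)
    dst_path = server.get_dst_output_path(job_id)
    # ... run the simulation executable with calib/slow_control here,
    #     writing DSTs to dst_path ...
    server.report_processed_events(job_id, 0)   # server updates its tables
```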


MC Production Statistics

Particle             Million Events   % of Total
protons                   7630             99.9
helium                    3750             99.6
electrons                 1280             99.7
positrons                 1280            100
deuterons                  250            100
anti-protons               352.5          100
carbon                     291.5           97.2
photons                    128            100
nuclei (Z = 3...28)        856.2           85

URL: pcamss0.cern.ch/mm.html

185 days, 1196 computers; 8.4 TB; daily CPU equivalent of 250 PIII 1 GHz


Y2004 MC Production Highlights

Data were generated at remote sites, transmitted to AMS@CERN, and made available for analysis (only 20% of the data was generated at CERN).

The transmission, process-communication and book-keeping programs have been debugged; the same approach will be used for AMS-02 data handling.

• 185 days of running (~97% stability)
• 18 Universities & Labs
• 8.4 TB of data produced, stored and archived
• peak rate 130 GB/day (12 Mbit/s), average 55 GB/day (the AMS-02 raw data transfer will be ~24 GB/day); the conversions are checked below
• 1196 computers
• daily CPU equivalent of 250 1 GHz CPUs running 184 days, 24 h/day
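The quoted rate conversions check out (a quick sketch):

```python
# Convert the quoted production rates from GB/day to Mbit/s.
for label, gb_per_day in (("peak", 130), ("average", 55)):
    mbit_s = gb_per_day * 1e9 * 8 / 86400 / 1e6
    print(f"{label}: {gb_per_day} GB/day = {mbit_s:.0f} Mbit/s")
# -> peak: 130 GB/day = 12 Mbit/s (as quoted); average: 55 GB/day = 5 Mbit/s
```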

Good simulation of AMS-02 Data Processing and Analysis


List of Acronyms

ACOP        AMS Crew Operations Post
Al          Alabama
AMS         Alpha Magnetic Spectrometer
amsbbftp    AMS version of bbftp (see bbftp)
bbftp       BaBar ftp (see FTP)
bps         bits per second
CA          California
CERN        European Laboratory for Particle Physics, Geneva, CH
CET         Central European Time
DLT         Digital Linear Tape
DST         Data Summary Tape
ESD         Event Summary Data
FTP         File Transfer Protocol
GB          GigaByte
GSC         Ground Support Computers
H&S         Health and Status data
HW          Hardware
Hz          Hertz
ISS         International Space Station
LAN         Local Area Network
LOR         Loss Of Record
LTO         Linear Tape Open
MRDL        Medium Rate Data Link
MSFC        Marshall Space Flight Center, Huntsville, Alabama
NASA        National Aeronautics and Space Administration
NCFTP       see FTP
NM          New Mexico
NRT         Near Real Time
PDSS        Payload Data Service System
POCC        Payload Operations Control Center
POIC        Payload Operations and Integration Center, MSFC
RAID        Redundant Array of Inexpensive Disks
RT          Real Time
RTDS        Real Time Data System
SMP         Symmetric Multi-Processor
SOC         Science Operations Center
STS         Space Shuttle (Space Transportation System)
SW          Software
TB, TByte   TeraByte
TDRSS       Tracking & Data Relay Satellite System
TReK        Telescience Resource Kit