
1630 IEEE TRANSACTIONS ON NUCLEAR SCIENCE, VOL. 36, NO. 5, OCTOBER 1989

TANDEM, the new Data Acquisition System at the Paul Scherrer Institute (PSI)

F.W. Schleputz Paul Scherrer Institute, CH-5232 Villigen PSI

May 18, 1989

Abstract

The paper presents the requirements established for the new, VAX based data acquisition system (DAS) at the Paul Scherrer Institute, PSI. The hardware and software concepts of the new system are described. The paper goes on to discuss the various components of the DAS software, summarizes the present status of the system, and concludes with an outlook on future expansions.

Introduction

The Paul Scherrer Institute (PSI) in Villigen, into which the former Swiss Institute for Nuclear Research (SIN) was integrated as of January 1st, 1988, operates three particle accelerators. Two of these deliver 72 MeV proton beams and are used to feed the main accelerator ring, which delivers a 590 MeV proton beam with an intensity of several hundred µA. This proton beam is used for the production of intense secondary beams of pions and muons. These particles are used by physicists to carry out experiments in the fields of nuclear structure analysis, elementary particle and solid state physics. Some fraction of the proton beam serves to feed the PSI cancer therapy facility. Parts of the 72 MeV proton beams are diverted for isotope production, and also for other forms of cancer treatment. New beam areas and a neutron spallation source are presently under construction.

Up to now, the experimenters have been using PDP-11 computers of various types for data acquisition. These are normally equipped with 124 kwords of main memory, 50 Mbytes of disk space, and two 800 BPI tape stations. Due to memory limitations they are not connected to the PSI site-wide network. They are interfaced to CAMAC using the GEC/Fisher/Hytec System Crate. The physicists use the GPS (General Purpose System) software running under the RSX-11M operating system to acquire data from the experimental electronics and to control their apparatus. This software was developed at SIN about 10 years ago and has reached a high degree of sophistication, maturity, and reliability. The system suffers, however, from the inherent limits of the PDP architecture, resulting in severe restrictions on the event and tape buffer sizes.

Requirements for a New System

A program to replace the aging PDPs with 32-bit systems and a concurrent project to develop a new data acquisition system (DAS) were initiated in 1986. Input to define the requirements on the new DAS was solicited from the experimental spokesmen, and a Data Acquisition Workshop with presentations from other laboratories was held at SIN in March 1986. Obvious requirements were very low execution overhead, i.e. speed, robustness against all kinds of error conditions, and ease of use for both the knowledgeable and the casual user. In addition, the input received from experimenters indicated that the new DAS should operate under a wide range of conditions; it must thus be flexible and adaptable:

Target Environments: The new DAS is intended for use by most future experimental programs that use the beams of the PSI accelerators. Current experimental programs extending beyond the projected lifetime of the PDPs will have to convert their data acquisition software to the new system. The event and data rates of these experiments vary greatly. In addition, the new DAS should also serve miscellaneous experimental activities at PSI, some of which, beyond normal data taking, place strong demands on the ability to monitor and control the settings of the experimental apparatus.

Event and Data Rates: Two types of experiments are distinguished, namely the so-called multi-channel analyser experiments and the experiments recording event-by-event data. The former type of experiment typically reads out only a few parameters per event and histograms them immediately. It is characterised by a high interrupt rate, unless a histogramming memory is used and the processor is interrupted only occasionally to read out this memory, in which case the histograms constitute the "event" data. The latter type of experiment normally reads out more data per event and logs it to a storage medium for off-line analysis. Input from experiments indicated interrupt rates between 10 Hz and 1000 Hz, with event sizes ranging from 6 bytes to 10 kbytes. The data rates vary between 600 bytes/s and 100 kbytes/s.
User Involvement: The willingness of the experimenters to get involved with the DAS varies greatly, depending on the complexity and duration of the experiment and the composition of the research group. Some research groups view the DAS as a closed tool which they activate using a few "obvious" commands; they want to spend as little programming effort as possible, specifying only the sequence of CAMAC operations in response to a trigger signal and

0018-9499/89/1000-1630$01.00 © 1989 IEEE


what to histogram. At the other extreme, there are groups with one or more members with a high level of computing expertise, who want to tailor their own acquisition system and user interface from a set of utility routines that perform the essential functions of a DAS.

Hardware Configuration

VAX type processors have become a de facto standard in the particle physics world, and thus the new PSI DAS runs on a VAX. This VAX, however, is not directly connected to any electronic instrumentation modules, since the VMS operating system is not optimally used when exposed to high interrupt rates. Rather, a front end processor gathers the data in response to event triggers and buffers it for bulk shipment to the VAX (the back end). The following arguments governed the choice of the initial front end system configuration and the method of communication with the back end:

Implementation Effort: Given the timescale for software development and the manpower available, it was felt that one should seek to exploit existing in-house know-how and should postpone the introduction and usage of technologies new to PSI until a working DAS had been delivered to the physicists.

Performance: The connection between the two systems should allow the transmission of 100 kbytes of data per second.

Redundancy: The two systems should be connected in a way which allows the experimenter to switch to another back end system in case of hardware failure. This option is of particular importance on weekends, which are not covered by maintenance contracts. It implies that the front end and back end systems may be separated by a large distance.

Technology Independence: The connection method to be chosen should be independent of the types of VAX processor and front end system used. This means in particular that it should not depend on any hardware characteristics of the VAX such as the system bus. This approach ensures that the software can be implemented in a hardware independent manner, and, vice versa, that the back end hardware configuration can be adapted to a particular experiment's needs without change to the software.

These deliberations resulted in the use of a PDP-11 running RSX-11S as front end processor and its connection to the VAX via DECnet/Ethernet. The exact hardware configuration of a batch of 12 to 16 systems to be commissioned by the end of 1988 was defined as follows:

Back End System

• MicroVAX II processor in rack-mountable BA23 box, with 9 MBytes memory
• RD54 disk (154 MBytes), for use as system disk
• RA81 disk (475 MBytes), user disk
• 8-line RS232 interface
• 2 DEQNA Ethernet interfaces (1 to connect to the PSI network, 1 to connect to the front end computer(s))
• 1 or 2 1600/6250 BPI tape drives

Front End System

• CES¹ ACC2180 "STARBURST" processor (PDP-11), with Q-bus interface and 2 MBytes of memory
• CES SCI2280, Q-bus to System Crate interface
• CES AMC2281, Autonomous Memory Controller (DMA channel)
• CES CBA2136, CAMAC Q-bus adapter, to house the DEQNA Ethernet interface

All of the above components are 1 slot wide CAMAC modules.

Software Concept

The software is implemented in a fully modular style, in the form of routine packages and utilities. Packages are a collection of modules, or subroutines, serving a well defined function, or a small number of closely related functions. The packages form a hierarchy, with packages at the same level being orthogonal, i.e. independent of each other. Packages can only depend on functions offered by lower level packages, ideally only by packages residing at the next lower level, and they can provide services only to higher level packages. Utilities are executable programs offering a command level interface to the services of a package. This approach offers a number of benefits:

Easy Extendability: The system can be handed over to the experimenters in a series of releases, with each release adding more functionality, since packages may be added as new requirements and possibilities arise. One will be able to take advantage of new hardware and software developments unknown at this time. Similarly, the internal functionality of a package may be enhanced if no change to the package interface is required.

Simplified Maintenance: Corrections and improvements can be applied locally to a package, with predictable implications.

Multiple User Level Interfaces: The same functionality may be activated via different user interfaces; e.g., there could be two interfaces for run control, one using DCL style commands and the other being menu driven. The latter is becoming increasingly interesting with the advent of powerful windowing software such as DECwindows.

Adaptability: The users, i.e. the experimenters, are provided with a set of tools ranging from complete programs, sufficient in general to run a small experiment, to basic service function calls for situations that require careful tailoring of the acquisition system.

¹Creative Electronic Systems SA, 70 Route du Pont-Butin, CH-1213 Petit-Lancy/Geneva, Switzerland


To reflect this concept, the PSI DAS software is named TANDEM, which stands for "Tools for the Acquisition of Data by Experiments". The following sections will discuss particular components of the TANDEM software.

Back End Software

The bulk of the back end software has been written in the VAX C language, the rest in FORTRAN or MACRO. As the back end computer is, and is assumed to remain, a VAX running the VMS operating system, no effort has been spent to make it portable in any way.

Remote Procedure Calls

One of the basic tools of TANDEM is the Remote Procedure Call facility RPC. It allows a program to request a service offered by another program, the server, by simply calling a subroutine. The calling program is not aware that the service is provided remotely, other than for the fact that it has to call an RPC routine to connect to the process which is offering the service. TANDEM makes extensive use of this facility, for communication with servers residing both on the back end and on the front end computer. At present RPC uses DECnet task-to-task communication, but other types of communication may easily be implemented.
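The client's view of such a facility can be sketched in C, the language of the back end software. This is a minimal illustration, not TANDEM's actual API: the names `rpc_call` and the local dispatch table are invented here, and the table merely stands in for the DECnet task-to-task transport that would carry the request to a remote process.

```c
#include <string.h>

/* Hypothetical sketch: the caller sees an ordinary subroutine call; a
   local dispatch table stands in for the DECnet task-to-task transport. */

typedef int (*rpc_handler)(const char *request, char *reply, int replylen);

struct rpc_server { const char *name; rpc_handler handler; };

/* A server offering one service, e.g. reporting the DAS state. */
static int state_server(const char *request, char *reply, int replylen)
{
    (void)replylen;
    if (strcmp(request, "GET_STATE") == 0) {
        strcpy(reply, "READY");
        return 0;
    }
    return -1;                           /* unknown request */
}

static struct rpc_server servers[] = { { "RUNCTRL", state_server } };

/* rpc_call: what the client links against; it neither knows nor cares
   whether the handler runs locally or on another machine. */
int rpc_call(const char *server, const char *req, char *reply, int replylen)
{
    for (unsigned i = 0; i < sizeof servers / sizeof servers[0]; i++)
        if (strcmp(servers[i].name, server) == 0)
            return servers[i].handler(req, reply, replylen);
    return -1;                           /* no such server */
}
```

A caller would simply write `rpc_call("RUNCTRL", "GET_STATE", buf, sizeof buf)` and receive the reply in `buf`, without any knowledge of where the server runs.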

Run Control

In its most basic form, run control can be viewed as switching between the two DAS states of acquiring and not acquiring data from the experimental electronics, with a number of actions to be performed on the transitions. There are as many variations on this theme as there are experiments. Within the TANDEM DAS, five states have been defined:

DOWN: In this state, the DAS is dormant. An experiment and a setup (see below) have been selected, but no DAS processes are active.

UP: Normally, a number of preparatory actions have been carried out before reaching this state.

READY: This is a form of “hot” standby state.

ACTIVE: In this state the DAS is acquiring data.

PAUSED: This state is entered to interrupt data taking briefly.

The user may specify whatever actions he wants performed when switching from one state to another.
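The five states and their transition-triggered actions can be sketched as a small state machine in C. The transition table below is an assumption for illustration only; the paper does not state which transitions are legal, and the action hook stands in for the DCL command blocks an experimenter would attach to a transition.

```c
/* Sketch of the five TANDEM run-control states.  The legal-transition
   table is assumed for illustration; the paper does not enumerate it. */
enum das_state { DOWN, UP, READY, ACTIVE, PAUSED };

static const int legal[5][5] = {
    /*            DOWN UP READY ACTIVE PAUSED */
    /* DOWN   */ { 0,  1,  0,    0,     0 },
    /* UP     */ { 1,  0,  1,    0,     0 },
    /* READY  */ { 0,  1,  0,    1,     0 },
    /* ACTIVE */ { 0,  0,  1,    0,     1 },
    /* PAUSED */ { 0,  0,  0,    1,     0 },
};

typedef void (*transition_action)(enum das_state from, enum das_state to);

/* Attempt a transition, running the user-supplied action on the edge. */
int das_transition(enum das_state *cur, enum das_state to,
                   transition_action act)
{
    if (!legal[*cur][to])
        return -1;                /* transition not allowed */
    if (act)
        act(*cur, to);            /* user actions run on the transition */
    *cur = to;
    return 0;
}
```

With this shape, the experimenter's per-transition actions are data attached to the edges of the state graph rather than code wired into the run-control logic.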

Run Control Data Base

The run control data base RCDB is a collection of data items that describe an experiment. An experiment may be run in more than one setup, e.g. taking data at a beam energy of 210 MeV vs. taking data at a beam energy of 250 MeV, or production vs. calibration runs. The RCDB holds the data for all setups of the same experiment, the particular setup being selected when the experiment is started.

Technically, the RCDB is implemented as an indexed sequential file. Two basic types of data items in the RCDB are distinguished: variables and actions. Variables may contain one or more elements of type integer, real or character. Variables may be inserted into named groups, and named groups themselves may be inserted into named groups. Variables and named groups may be inserted into setups. A setup is an instance group (vs. a named group). If a variable carries the attribute SHARED, its definition is shared, but a separate instance of the data is created for each setup that it becomes explicitly or implicitly part of. Variables may be accessed from programs for READ, WRITE and INCREMENT operations.

To illustrate this, assume an experiment that is run at two different energies of the incoming particles. The RCDB variable BM-CUR has been defined to contain the DAC value determining the field of the last bending magnet in the beamline. Two setups have been defined, namely 210-MEV and 250-MEV. In both cases BM-CUR refers to this DAC value, but two different numeric values are stored in the two instances of the BM-CUR variable.
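The SHARED semantics above amount to keying each data instance by (setup, variable name). A minimal C sketch, with underscores replacing the hyphens of the RCDB names for the sake of C identifiers, and with purely illustrative DAC values (the real RCDB is an indexed sequential file, not an in-memory table):

```c
#include <string.h>

/* Sketch of per-setup instances of a SHARED variable: one definition,
   one data instance per setup.  Values are invented for illustration. */
struct rcdb_var {
    const char *setup;    /* instance key, e.g. "210_MEV"      */
    const char *name;     /* shared definition, e.g. "BM_CUR"  */
    int value;            /* DAC setting held by this instance */
};

static struct rcdb_var rcdb[] = {
    { "210_MEV", "BM_CUR", 1620 },   /* illustrative DAC values */
    { "250_MEV", "BM_CUR", 1915 },
};

/* READ access: look the variable up in the instance for this setup. */
int rcdb_read(const char *setup, const char *name, int *value)
{
    for (unsigned i = 0; i < sizeof rcdb / sizeof rcdb[0]; i++)
        if (strcmp(rcdb[i].setup, setup) == 0 &&
            strcmp(rcdb[i].name, name) == 0) {
            *value = rcdb[i].value;
            return 0;
        }
    return -1;                       /* no such instance */
}
```

The same name, read under two different setups, yields two independent values, which is exactly the BM-CUR example above.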

For each state transition of a particular setup the experimenter may enter actions into the RCDB to be executed on that state transition. Actions are DCL commands executed in a server process. The actions may be arranged in up to 255 blocks. All actions in one block are guaranteed to have been completed before any action of the next block is initiated. The execution sequence of actions within a particular block cannot be influenced by the experimenter. A selection identifier may be associated with each action.

The Run Control Data Base Utility RCDU supports the creation of RCDBs and the definition of variables, groups, actions and setups. The Run Control Data Base Interface RCDI is a set of routines providing access to the data base variables.

Run Control Tools

The Run Control Utility RCU is the program that lets the experimenter start an experiment, i.e. choose a run control data base, select the experimental setup and then change the state of the DAS. On state transitions the experimenter may specify one or more selection identifiers to further select the actions to be executed on the transition. The Run Control Interface RCI provides access to these functions from a program.

Data Buffer Manager

The Data Buffer Manager DBM is a package of subroutines managing the flow of data packets to a number of registered receivers. In a particle physics DAS these are normally data block filters, histogram builders, data loggers and other specialised processes. Usage of a DBM in a data acquisition system permits the separation of the packet flow management from the processing of the packet contents.

The TANDEM DBM is an implementation of a modified form of the Stream Machine described in [1]. It provides


facilities for concurrent programs to share access to and communicate via streams of data blocks. The basic components of the DBM are:

The Buffer: a memory area providing storage for DBM data structures and data blocks.
Processes: (sequential) programs which execute concurrently and communicate through streams.
Streams: historically ordered sequences of data blocks whose values may be communicated between processes.
Data Blocks: variable-length lists of data.
Piers: links between processes and streams that specify which data block is to be read or written.

Processes communicate by reading and writing data blocks on streams. A stream is a sequence of data blocks written by a process. Associated with each data block is an index called the location, which represents the order in which the data blocks are written to the stream. Writing a data block to a stream adds the data block to the end of the stream, i.e. the new data block's location is one greater than the location of the previous data block written to the stream. If sufficient memory resources have been provided, streams buffer the data blocks, and a process writing a data block to a stream does not need to wait for a receiving process to read from the stream. Normally, there will be only one process writing data blocks to a data stream. While this helps to reinforce the concept of the historical ordering of data blocks on a data stream, streams with several writers are supported with restrictions.

Processes receive data from a stream by reading the data block specified by a pier. A pier specifies data blocks in a stream by their location in the stream. If, because not enough data blocks have been written to the stream, there is no data block available at the specified location, the receiving process suspends execution until a value has been written to the requested location. A stream may have any number of readers.
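The location/pier semantics can be condensed into a single-threaded C sketch. This is a deliberately simplified model of the Stream Machine idea, not the TANDEM implementation: real DBM streams live in a shared buffer, hold variable-length data blocks, and block a reader until the requested location has been written, where the sketch simply returns an error.

```c
/* Minimal sketch of a DBM-style stream: data blocks are indexed by a
   monotonically increasing location; a pier remembers which location a
   reader will consume next.  Integers stand in for data blocks. */
#define STREAM_CAP 16

struct stream {
    int blocks[STREAM_CAP];   /* stand-in for variable-length blocks */
    int next_location;        /* location the next write will receive */
};

struct pier { int location; };   /* a reader's position in the stream */

/* Writing appends at the end: the new block's location is one greater
   than that of the previous block written to the stream. */
int stream_write(struct stream *s, int block)
{
    if (s->next_location >= STREAM_CAP)
        return -1;                         /* buffer exhausted */
    s->blocks[s->next_location] = block;
    return s->next_location++;             /* the block's location */
}

/* Reading via a pier; returns -1 where the real system would suspend
   the reader until the location has been written. */
int stream_read(const struct stream *s, struct pier *p, int *block)
{
    if (p->location >= s->next_location)
        return -1;                         /* nothing there yet */
    *block = s->blocks[p->location++];
    return 0;
}
```

Because each pier carries its own location, any number of readers can consume the same stream independently, which is the property the text relies on for filters, histogram builders and loggers sharing one event stream.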

The Data Buffer Management Utility DBMU allows the user to examine the state of streams and piers, and to view data blocks in a stream.

Data Channels

Event data on the VAX normally passes through a number of processes implementing filters, producing histograms and finally logging the data to tape or disk. During data acquisition the event data arrives from the front end computer. The same set of processes may be used for data analysis, in which case the data is read from tape or disk. To achieve independence from the source and destination of the data in the program, the concept of a data channel has been defined. Two types of channels are distinguished, namely input or production channels, and output or logging channels. A data channel, identified by a number, may be booked, opened, read from, written to, closed and unbooked under program control. Its characteristics must be defined in the form of a VMS logical name that is translated when the channel is booked.
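The booking step can be sketched as follows. Everything here is illustrative: the logical name pattern `DAS_CHANNEL_n` is invented, and `getenv()` merely stands in for VMS logical name translation, which is the mechanism the text actually describes.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of channel booking: the program names only a channel number;
   the binding to a device or file comes from translating a name at book
   time.  getenv() stands in for VMS logical name translation. */
#define MAX_CHANNELS 10

struct channel { int booked; char target[64]; };
static struct channel channels[MAX_CHANNELS];

int channel_book(int n)
{
    char logical[32];
    const char *target;

    if (n < 0 || n >= MAX_CHANNELS || channels[n].booked)
        return -1;
    snprintf(logical, sizeof logical, "DAS_CHANNEL_%d", n); /* invented name */
    target = getenv(logical);        /* "logical name" translation */
    if (!target)
        return -1;                   /* channel not defined */
    strncpy(channels[n].target, target, sizeof channels[n].target - 1);
    channels[n].booked = 1;
    return 0;
}
```

Retargeting a channel from the front end link to a tape file then requires only redefining the name outside the program, which is the independence from data source and destination the text is after.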

A special program is provided as part of the TANDEM DAS which can book and subsequently serve up to 10 data channels. These channels may be any mixture of production or logging channels. The channels are controlled via a remote command interface (using RPC). A production channel can be instructed to place the incoming data into a particular DBM stream; a logging channel can be set up to take the data to be logged from any number of DBM streams. Using this program, the entire data flow from the front end computer to the logging medium can be set up and controlled via command interfaces.

Utilities

The Central Message Reporter CMR acts as a clearing house for messages. Messages in this sense are VMS messages and text messages. Any program, be it part of TANDEM or user written, can send a message to the CMR to record a condition it has encountered. The CMR will distribute the message to the receivers, as specified by the sender. Classes of receivers are users, terminals, processes, and operators. Operators are terminals having identified themselves via the VMS REPLY/ENABLE command. All messages are recorded in the log file. Again, the CMR functionality is accessible via a callable interface and a DCL style command utility.

The Condition Handler CH is a process registered as a CMR receiver to which another process may send a message upon detection of a particular condition. Upon receipt of a message the CH will search the RCDB for actions connected to this particular condition, and have them executed. This approach provides a flexible method of reacting to any arising conditions, which is completely under the control of the experimenter.

The CAMAC Command Processor CCP is a tool to execute CAMAC functions from the back end computer. It consists of a number of program callable routines and a command interface. The CCP cooperates, via RPC, with a server task in the front end computer. Whereas the command interface can only process single CAMAC requests, the callable interface is able to pass preassembled lists to the server task.

The Basic Analysis Task BAT

To relieve the experimenters from the burden of writing analysis tasks for filtering, histogramming etc., a prototype analyser task has been written. This task can attach to a DBM buffer and stream specified via logical names. The user only writes certain subroutines which are specific to his experiment; the minimum is one subroutine, but hooks are provided for some thirty functions, allowing an almost unlimited tailoring of the BAT behaviour. The experimenter uses a command procedure to link his routines to the task shell. The BAT keeps statistics on the number of events received and processed; these statistics can be displayed using the RPC based Remote Process Dialogue utility RPD.

The Data Buffer Management Utility DBMU can be tailored in a similar way, e.g. to allow experiment specific dumps of data blocks read from DBM streams.


Front End Software

The STARBURST front end computer operates under the RSX-11S operating system and acts as a DECnet end node. The DAS supports up to 32 different event sources, which can be any mixture of CAMAC LAMs, the two external interrupts of the STARBURST, or periodically occurring internal events. For each event source the experimenter can specify an ENABLE routine, a DISABLE routine, a READOUT routine and an ARM routine. The ENABLE and DISABLE routines are executed when runs are started or stopped. The READOUT routine gathers data from the electronics; the ARM routine re-arms the event trigger logic. All routines are executed at system level. They reside in dynamically loadable regions, thus permitting changes to be made without rebooting the front end computer.
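The per-source routine set lends itself to a table-driven sketch. The table shape follows the description above, but the code is illustrative only: the routine bodies are placeholders for the CAMAC readout and trigger logic that would run at system level on the STARBURST, and the dispatch function is an assumption about how a trigger might be serviced, not the actual front end code.

```c
/* Sketch of the per-event-source routine set described in the text.
   Bodies are placeholders for real CAMAC readout / trigger code. */
#define MAX_SOURCES 32

typedef void (*src_routine)(int source);

struct event_source {
    src_routine enable;     /* run when a run is started */
    src_routine disable;    /* run when a run is stopped */
    src_routine readout;    /* gather data on a trigger  */
    src_routine arm;        /* re-arm the trigger logic  */
};

static struct event_source sources[MAX_SOURCES];
static int events_read;     /* demo counter for the placeholder readout */

static void demo_readout(int source) { (void)source; events_read++; }
static void demo_arm(int source)     { (void)source; }

/* Assumed trigger path: read the event out, then re-arm the trigger. */
void handle_trigger(int source)
{
    if (sources[source].readout) sources[source].readout(source);
    if (sources[source].arm)     sources[source].arm(source);
}
```

Because the experimenter's routines are just entries in the table, swapping a loadable region amounts to overwriting four pointers per source, consistent with changes being possible without a reboot.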

For each event source the experimenter may specify into which of four types of buffers data should be collected by the READOUT routine in response to a trigger. Data buffers are dynamically allocated from a pool and associated with a data buffer type. The maximum number of buffers in the pool is sixteen; the maximum size is 8000 bytes. Thus ample space to buffer the incoming events is normally available. Buffers of a particular type are transmitted to a back end computer via one of four data channels.

A program called the Front End Control Utility FCU, which runs on the back end computer, is used to set the DAS state, load the regions containing the routines supplied by the experimenter, and perform numerous other functions such as setting and displaying parameters and counters.

Graphics

The histogramming subsystem of the Los Alamos Q system is presently used to define, accumulate and display histograms. It is, however, planned to use the Wave² software product by Precision Visuals³ for the presentation of graphics data in the near future. This product employs a well defined mechanism to interface to external routines.

Status

The first release of the TANDEM software was made available at the end of March 1989. Prereleases have already been used by several experiments. TANDEM is expected to be used heavily during the second half of 1989. The described components are functioning, and first experience indicates that the design specifications can be met if care is taken when defining the buffering parameters. As with every new system, know-how needs to be built up concerning the usage of the system, both by the development team and the experimenters, and experience needs to be gathered regarding any special tuning of VMS system parameters.

Future Developments

Future developments of the system may be expected in several areas. The present front end system is still based on a PDP processor with its known limitations. Studies on the implementation of a VME based front end with interfaces to CAMAC and FASTBUS will begin this year. Together with ongoing efforts to increase its performance and reliability, the software will be adapted to take advantage of the distributed processing power of VMS cluster environments, accounting for the fact that most hardware configurations will soon consist of a VAX with one or more served workstations for user interaction. The availability of powerful windowing software on these workstations invites the development of window oriented user interfaces, which are particularly useful for the casual user. Last but not least, more physics oriented software components such as table driven test packages will either be integrated or developed.

Acknowledgements

The valuable contributions made by David Maden of PSI and Mike Huffer of SLAC during the initial stages of the TANDEM software development are gratefully acknowledged. I am particularly thankful for the continued commitment and active involvement of my collaborators Janet Hayman, Phillip Gaisford and Paul Murdock, who have designed and implemented most of the software modules. I would further like to thank the entire staff of the PSI Computing department for their help in setting up the general infrastructure needed to use TANDEM at PSI.

²Wave: windowing software for the presentation of graphics data and menues, providing an interface to the Q system.

Development Environment

All software has been developed using the central VAX cluster of PSI. Extensive use has been made of tools such as CMS and MMS for version control and building of the software components. TANDEM is packaged in the form of a VMS layered product and installed on the experimenters' machines using VMSINSTAL and the Remote System Manager software RSM.

References

[1] D. Barstow, P. Barth, S. Guthery, "The Stream Machine: a Data Flow Architecture for Real-time Applications," in Proc. 8th Int. Conf. on Software Engineering, Imperial College, London, 1985.


³Precision Visuals, Inc., Boulder, Colorado 80301