
IEEE Transactions on Nuclear Science, Vol. NS-23, No. 1, February 1976

ON DISTRIBUTED CONTROL AND INSTRUMENTATION SYSTEMS FOR FUTURE NUCLEAR POWER PLANTS

by

G. Yan and J.V.R. L'Archeveque
Atomic Energy of Canada Limited

Electronics, Instrumentation & Control Division
Chalk River, Ontario

Abstract

The centralized dual computer system philosophy has evolved as the key concept underlying the highly successful application of direct digital control in CANDU power reactors. After more than a decade, this basic philosophy bears re-examination in the light of advances in system concepts, notably distributed architectures. A number of related experimental programs, all aimed at exploring the prospects of applying distributed systems in Canadian nuclear power plants, are discussed. It was realized from the outset that the successful application of distributed systems depends on the availability of a highly reliable, high capacity, low cost communications medium. Accordingly, an experimental facility has been established and experiments have been defined to address such problem areas as interprocess communications, distributed data base design and man/machine interfaces. The design of a first application to be installed at the NRU/NRX research reactors is progressing well.

Centralized Computer Control of CANDU Power Plants

Experience with digital computer control of nuclear power plants in Canada began in 1963 with an experiment on the NRU research reactor at the Chalk River Nuclear Laboratories [1]. Since then, the centralized dual computer system philosophy has evolved as the key concept underlying the highly successful application of direct digital control in CANDU power reactors [2-5]. Each generating unit of a multi-unit station* has its own fully duplicated control system (Figure 1) located at the station's control room. Field instrument cables for all units converge to a central area. Both computers of each dual system are operating at all times but only one has its outputs connected to the controlling elements. Whenever the "master" computer fails to perform a task, its outputs are disconnected and the "hot standby" is switched into service.
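The changeover arrangement can be sketched as follows. This is a hypothetical Python illustration only; the actual CANDU control computers implement changeover in dedicated hardware and system software, and the class names here are invented:

```python
# Simplified sketch of dual-computer changeover (hypothetical names).

class Computer:
    """One half of a dual control computer; both halves run at all times."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def run_task(self):
        # A real controller would execute a control program here and
        # report success or failure to the changeover logic.
        return self.healthy


class DualComputer:
    """Both machines read all inputs, but only the master's outputs
    are connected to the controlling elements."""
    def __init__(self):
        self.master = Computer("A")    # inputs and outputs connected
        self.standby = Computer("B")   # inputs only

    def cycle(self):
        # Whenever the master fails to perform a task, its outputs are
        # disconnected and the hot standby is switched into service.
        if not self.master.run_task():
            self.master, self.standby = self.standby, self.master
        return self.master.name


pair = DualComputer()
assert pair.cycle() == "A"       # healthy master stays in control
pair.master.healthy = False      # simulate a master fault
assert pair.cycle() == "B"       # hot standby takes over
```

The key design point, as described above, is that the standby already receives all inputs, so changeover involves only the output connections.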

After more than a decade, the basic dual computer philosophy bears re-examination in the light of advances in system concepts, notably distributed architectures.

* For example, there are four generating units at the Pickering station.

Figure 1: Dual Computer Control of CANDU Power Plant (both computers share the sensors, control elements and peripherals over a data link; the "master" has its inputs and outputs connected, while the "hot standby" has only its inputs connected)

Furthermore, as greater demands are being made of the computer, certain characteristics of centralized systems are being identified as potential problem areas. With even larger power plants on the horizon, it has become important to eliminate these design problems before they restrict the full exploitation of computer systems in nuclear stations.

Problem Areas

The original motivation to centralize systems was largely an economic one and reflected the technologies of the late sixties. An inherent limitation of these structures is the overhead resulting from the funnelling of all activities into the central controller as the complexity of operation increases. In real-time applications, a sophisticated operating system is necessary to supervise the sharing of the central facility, e.g. task scheduling, priority interrupt handling, etc. A significant percentage of the central processing power is wasted on essentially non-productive activities, reducing the number of tasks that could be handled in a given time.

The design of a centralized system does not lend itself to easy partitioning, thus making the organization of large projects difficult. The software development, in particular, generally depends on a few persons, thoroughly familiar with the operating system, to produce an integral program package combining the requirements of widely different design activities.


Replacing some of the key components, e.g. processor or disk, of a centralized system without disrupting the availability of the generating station is no easy task. Since nuclear plants are designed for a life of 30 years, in the long term, upgrading, expansion and maintenance of the control system may be expensive and difficult.

One startling aspect of overall centralization is the massive amount of cabling required to establish all connections between the control area and the rest of the plant.* Cable layout and installation are complicated and costly. A further complication is the large penetration into the containment area required to accommodate the cables.

Distributed Systems

Incentives

Dramatic changes in the cost balance between components, coupled with a need to improve system performance, have stimulated a broad-based research and development effort in distributed structures [6]. The objectives being pursued are many, e.g. resource sharing, increased flexibility, improved reliability through dynamic allocation of resources and incremental modernization. The approach is still new, but the potential benefits are attractive, so that an investigation and evaluation program was started over three years ago at the Chalk River Nuclear Laboratories.

Concepts

The graphical representation of Figure 2 has been found useful in our study of distributed system concepts. The word system refers to an integrated assembly of specialized parts acting together for a common purpose in a specified environment. The system itself is made up of interacting specialized functional subsystems. The interactions between units are controlled by communication supervisors via a multi-level communications medium. The number or the relative positions of subsystems on the communications medium are irrelevant. Each subsystem has a clearly defined functional duty, but no single unit can be identified as the central controller. Rather, the rules guiding the interactions control the total operation.

Embodied within these definitions, and the corresponding graphical representation, are the practical incentives to develop distributed systems for nuclear power plants. Thus, partitioning into functional blocks can potentially reduce the operating system overhead and facilitate a more balanced subdivision of workload amongst designers. Each specialized subsystem operates with enough self-sufficiency within its own environment to reduce the communications requirement and minimize the degree of dependence on other components.

Figure 2: Graphical Representation of a Distributed System Concept

To fully exploit this benefit requires a better understanding of the trade-offs between various partitioning strategies. For example, "physical partitioning" recognizes the actual distribution of components and attempts to bring the solutions closer to the sources of the problems, e.g. use of remote multiplexers and field pre-processing. "Functional partitioning", on the other hand, tends to optimize certain aspects of system cost and performance by grouping devices according to capabilities and functions. The traditional hardware/software "designer-oriented" approach to implementation is no longer adequate.
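The position-independent, function-oriented interaction that functional partitioning implies can be sketched in Python. This is a hypothetical illustration; the `Medium` class and the duty names are invented and do not correspond to actual AECL software:

```python
class Medium:
    """Shared communications medium: subsystems are reached by their
    functional duty, so their number and physical position on the
    cable are irrelevant, and no unit acts as a central controller."""
    def __init__(self):
        self.subsystems = {}

    def attach(self, duty, handler):
        # Any subsystem may join at any tap; only its duty matters.
        self.subsystems[duty] = handler

    def send(self, duty, message):
        # A communication supervisor routes by function, not location.
        return self.subsystems[duty](message)


medium = Medium()
medium.attach("data-acquisition", lambda m: "acquired " + m)
medium.attach("display", lambda m: "displayed " + m)

assert medium.send("display", "flux map") == "displayed flux map"
```

Addressing by duty rather than by position is what lets subsystems be added, moved or replaced without reconfiguring the others.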

Availability of a multi-level communications medium allowing for simple interconnections of devices mitigates the cabling and penetration problems.

Design of Distributed Systems

On the experimental side, it was realized from the outset that the successful application of distributed systems depends on the availability of a highly reliable, high capacity, low cost communications medium. Accordingly, initial emphasis was placed on providing a facility with the following salient characteristics:

(i) compatibility with the concept of decentralized control and distribution of functions amongst interconnected systems;

(ii) flexibility in addressing position-independent functions;

(iii) modularity to allow for expansion and future growth;

(iv) multi-level communication for efficient handling of information transfer; and

(v) high noise immunity compatible with industrial and process control applications.

* For example, each dual controller system for a Bruce generating unit has about 3,000 analog and 1,800 digital input/output links [4].

Choice of CATV Technology

The selection of a technology capable of meeting these requirements is a vital step in the program. In view of the uncertainties regarding the future needs of distributed systems, inherent flexibility is considered a dominant factor. In addition to providing highly reliable, low cost, short-haul, two-way digital and analog transmission, the chosen technique should offer total communications capability, including the entire audio and video spectrum. Furthermore, in order to reduce engineering costs and keep development time to a minimum, commercially available equipment is to be used wherever practicable.

Unfortunately, pertinent experience on the integration of communications equipment into distributed computer systems for process control is virtually non-existent. It is therefore necessary to assess the merits of available technologies in terms of the conditions prevailing in the industry and of discernible trends in the communications market.

Figure 3 portrays the relative positions of the three industries currently sharing a common technology base to meet the demands of the communications market.

Figure 3: Choice of a Communications Technology

Two sectors of the market, i.e. long-haul, two-way audio and short-haul, broadcast audio/video, are dominated respectively by the telephone and CATV industries. The emerging field of digital data transmission is being approached by both industries in a manner reflecting their respective areas of expertise. To a large extent, the computer industry has fulfilled its need for data transmission either by direct wiring for short distances or by using existing telephone services in other cases. The advent of computer networking, however, has prompted the computer industry to offer more communications-related equipment.

Based on a review of the market situation and on a survey of available equipment, the CATV approach was selected as the most promising in terms of flexibility, data transmission capability, noise immunity, low cost and overall performance.

For example, a coaxial CATV cable costs about $1.50 per installed foot. This compares favorably with a 50-twisted-pair #16 AWG cable installed and terminated at approximately $3.50 per foot. The comparison is particularly striking if the wider bandwidth of the coaxial cable (5-300 MHz) is taken into account [10]. Furthermore, Frisch [11] has estimated that on a typical CATV complex operating at a data rate of 40 bits per second per terminal, a single trunk (1 megabit per second) can handle 9,000 terminals with an error rate of 10⁻²² at a signal-to-thermal-noise ratio of 20 dB and a cross-modulation-to-signal ratio of -27 dB.
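The quoted figures can be checked with a little arithmetic. The utilization estimate below is our own interpretation, not Frisch's:

```python
# Aggregate load implied by the estimate above: 9,000 terminals at
# 40 bits per second sharing a single 1 megabit-per-second trunk.
terminals = 9_000
rate_per_terminal = 40            # bits per second
trunk_capacity = 1_000_000        # bits per second

aggregate = terminals * rate_per_terminal
assert aggregate == 360_000       # 360 kilobits per second

utilization = aggregate / trunk_capacity
assert abs(utilization - 0.36) < 1e-12   # about 36% of the trunk, which
                                         # presumably leaves headroom for
                                         # protocol and access overhead

# Installed-cost comparison per foot of cable run quoted in the text.
coax_cost, twisted_pair_cost = 1.50, 3.50
assert twisted_pair_cost / coax_cost > 2.3   # coax is well under half the cost
```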

Distributed System Development Facility

A distributed system development facility based on CATV technology has been established at the Chalk River Nuclear Laboratories. The primary purpose of the facility is to help investigate communications problems related to the design and operation of distributed systems for process control. Figure 4 is a schematic representation of the facility. The communications medium itself uses standard off-the-shelf CATV equipment. The Head End equipment feeds a four-way splitter to drive as many as four separate cables. Currently, only one cable is connected; it is approximately 200 metres long, includes 21 taps, contains no amplifiers and remains in the same building. A second cable is reserved for connection to the Computing Centre, about 200 metres away in a separate building. The two remaining connections are terminated at the splitter and are provided for future expansion.

Four communications levels have been defined on the cable in a scheme based on frequency multiplexing. Level 1 is common to all system components and it provides both terminal support capabilities and all necessary control functions. Level 2 is typically reserved for bulk transfer of files between subsystems in bursts lasting less than 30 seconds each. Level 3 is a high speed facility dynamically allocated for indefinite periods to guarantee fast response between different subsystems. Level 4 is totally dedicated and is equivalent to hardwire "strapping".
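The four-level scheme might be modelled as follows. This is a sketch under invented names; the channel identifiers and the allocator interface are ours, not part of the facility software:

```python
# Summary of the four frequency-multiplexed levels described above.
LEVELS = {
    1: "common terminal support and control functions",
    2: "bulk file transfer in bursts of less than 30 seconds",
    3: "high-speed channels, dynamically allocated for indefinite periods",
    4: "totally dedicated, equivalent to hardwire 'strapping'",
}

class Level3Allocator:
    """Grants Level 3 channels to subsystems for indefinite periods
    to guarantee fast response, and reclaims them on release."""
    def __init__(self, channels):
        self.free = set(channels)
        self.held = {}

    def allocate(self, subsystem):
        channel = self.free.pop()      # raises KeyError when none are free
        self.held[subsystem] = channel
        return channel

    def release(self, subsystem):
        self.free.add(self.held.pop(subsystem))


alloc = Level3Allocator({"ch1", "ch2"})
granted = alloc.allocate("NRU data acquisition")
assert granted in {"ch1", "ch2"}
alloc.release("NRU data acquisition")
assert len(alloc.free) == 2
assert len(LEVELS) == 4
```

The contrast worth noting is between Level 3, where allocation is dynamic but open-ended, and Level 2, where use is limited to short bursts.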

Experimental Program

Even though the experimental program is oriented towards specific applications, it addresses problems of broad interest, e.g. interprocess communications, distributed data base design, and man/machine interfaces. At present, a number of systems and terminals are being interconnected to the communications facility to prove its reliability and demonstrate its performance.


Delays have been experienced during commissioning due to some design problems in the RF section of the delivered modems. These difficulties have been mostly eliminated and the installation is currently operating at 48 kilobits per second per channel, which meets our present requirements. Should higher speeds become necessary, reliable operation up to one megabit per second has been demonstrated [12]. The facility will provide a flexible medium for testing a multiprocessor distributed data acquisition system being developed for the NRU/NRX research reactors.


As more experience is gained with the equipment, it is becoming clearer that the choice of CATV technology has been propitious. Not only will the approach meet the foreseeable communications needs of the new systems, but the inherent versatility it offers will enhance the advantages expected from distributed architectures.

Acknowledgement

The authors wish to thank A.C. Capel and R.J. West for their valuable contribution in the specification, installation and testing of the distributed system development facility.


Figure 4: Distributed System Development Facility at the Chalk River Nuclear Laboratories (connections include four terminals (PDP-8/I) and a PDP-11/15, with provision for links to the Computing Centre (CDC 6600/3300); spare transmit and receive taps are terminated)

References

1. A. Pearson, "Computer Control on Canadian Nuclear Reactors", AECL-3452, October 1969.

2. E. Siddall and J.E. Smith, "Computer Control in the Douglas Point Nuclear Power Station", Proceedings of the Symposium on Heavy Water Power Reactors, IAEA, Vienna, September 11-15, 1967.

3. A. Pearson, "Control and Instrumentation on Canadian Nuclear Power Plants", IAEA-PL-431/2, Vienna, 1972.

4. J.E. Smith, "Experience with Process Control Computers in the Canadian Nuclear Power Program", IEEE Nuclear Power Systems Symposium, December 1972.

5. L. Magagna and J.E. Smith, "Control Computer Applications in Ontario Hydro's Nuclear Stations", ISA Transactions, Vol. 12, pp. 23-30, 1974.

6. L.G. Roberts and B.D. Wessler, "Computer Network Development to Achieve Resource Sharing", AFIPS Spring Joint Conference, Vol. 36, p. 543, 1970.

7. S. Winkler (editor), "Computer Communications: Impacts and Implications", The First International Conference on Computer Communications, October 1972.

8. Datacomm '73, "Data Networks, Analysis and Design", 3rd Data Communications Symposium Proceedings, November 1973.

9. E.D. Jensen, "A Mini-Multicomputer for Real-Time Control", Compcon 74, pp. 245-247, September 1974.

10. E.K. Smith, "Pilot Two-Way CATV Systems", IEEE Transactions on Communications, Vol. COM-23, No. 1, p. 111, January 1975.

11. I.T. Frisch, "Technical Problems in Nationwide Networking and Interconnections", IEEE Transactions on Communications, Vol. COM-23, No. 1, p. 78, January 1975.

12. D.G. Willard, "A Time Division Multiple Access System for Digital Communications", Computer Design, p. 70, June 1974.