
Naval Oceanographic and Atmospheric Research Laboratory

NOARL Report 5, April 1990


Design of a Distributed Microprocessor Sensor System


B. S. Bourgeois
M. M. Harris
P. B. Wischow
J. H. Ross

Mapping, Charting, and Geodesy Division
Ocean Science Directorate

Approved for public release; distribution is unlimited. Naval Oceanographic and Atmospheric Research Laboratory, Stennis Space Center, Mississippi 39529-5004.

Foreword

The Geophysical Airborne Survey System (GASS) is a distributed microprocessor system that integrates magnetic, hydrographic, altitude, position, and attitude sensors. GASS is used on the Project Magnet P-3 Orion aircraft operated by the Naval Oceanographic Office to collect magnetic and hydrographic data. The magnetic data will be used to construct isomagnetic charts; the gravity data will be used to improve the accuracy of inertial navigation systems. Magnetic and water depth data will be used to upgrade navigational charts.

This report summarizes the research done in the area of distributed microprocessor systems. This study provides a foundation for the design of GASS and other systems of this type. Details of the final GASS design are provided.

W. B. Moseley, Technical Director
B. Tupaz, Captain, USN, Commanding Officer


Executive Summary

The Geophysical Airborne Survey System (GASS), developed by the Naval Ocean Research and Development Activity*, is a real-time, distributed microprocessor sensor system. The Naval Oceanographic Office intends to use this system on the Project Magnet P-3 Orion aircraft to collect worldwide magnetic and hydrographic data.

This report briefly discusses the mission and the history of GASS and reviews the technology advances of the last decade in the area of real-time distributed microprocessor systems. Specific topics include the goals of distributed system design, the qualitative value of distributed system design, and reliable design techniques for the hardware and software in distributed systems. This technology review provides a foundation for the GASS design.

The systematic design decisions made for GASS are detailed, and the system's architecture is described. Detailed drawings and descriptions of the hardware and software for the final GASS design are also included. Recommendations are made for possible system enhancements and for areas that require further investigation.


* Now the Naval Oceanographic and Atmospheric Research Laboratory (NOARL).

Acknowledgments

The authors acknowledge the Naval Oceanographic Office for funding this project under program element number 980101, managed by Mr. Roger Young. Special thanks are given to the software development team at Sverdrup Technology, Inc., under the supervision of Mr. Alan Guess, for their work on the real-time software. The authors also give special thanks to Mr. Jeffrey Spear of Planning Systems, Inc., for his work on the Operator Interface/System Control software. We thank Dr. Herbert C. Eppert, Jr., and Dr. Daniel G. Hickman of NOARL for their interest and support of the GASS project.


Contents

I. Introduction
   A. The System and Its Mission
   B. System History
   C. Synopsis of Related Work

II. System Design
   A. System Functions
   B. Design Requirements
   C. System Architecture
   D. System Building Blocks

III. System Details
   A. Hardware
   B. GASS Software
   C. GASS Data Flow

IV. Summary

V. Recommendations
   A. Further Study
   B. Future System Enhancements

VI. References

Design of a Distributed Microprocessor Sensor System

I. Introduction

This report discusses the design of the Geophysical Airborne Survey System (GASS). To this end, relevant research in the area of real-time distributed processing systems is reviewed, the major design considerations of GASS are discussed, and the final design is presented in detail.

A. The System and Its Mission

The GASS is a sensor system built around a distributed microprocessor network. This design was chosen to provide a high degree of system modularity. Modularity will enhance the system's capability to be flexible, available, repairable, and updated throughout its life span. GASS will be used around the world for surveying the earth's magnetic field and coastal water depths. The magnetic data will be used to construct isomagnetic charts, and the gravity data will be used to improve the accuracy of inertial navigation systems. Both the magnetic data and the water depth data will be used for navigational charts. In addition to the magnetic, gravity, and water depth sensors, GASS also includes position, altitude, attitude, and time devices needed to correlate the collected data.

B. System History

The GASS is designed for installation on the Project Magnet P-3 Orion aircraft, which is operated by the Naval Oceanographic Office (NAVOCEANO). Project Magnet was initially used to conduct surveys of the earth's magnetic field1. This project was initiated on a trial basis by the Chief of Naval Operations in 1951, and made into a permanent program in 1957. The sensor system was upgraded in 1970 by the Applied Physics Laboratory of Johns Hopkins University, and was designated the Geomagnetic Airborne Survey System. The latest version of the system, the Geophysical Airborne Survey System, will be used to collect magnetic and hydrographic data. Hydrographic instruments are included in the system because of the 200-year backlog in coastal surveys (coastal surveys are presently conducted using boats). Gravity data will also be collected with this system when airborne gravimetric sensors become available. The new GASS, funded by NAVOCEANO, was developed by the former Naval Ocean Research and Development Activity (NORDA). (NOTE: NORDA has been redesignated as the Naval Oceanographic and Atmospheric Research Laboratory.) The current system upgrade project began in December 1986, and the system design was based on the state of the technology as of December 1987. It is estimated that 60% of the development costs will be used for software development and testing, and only 40% will be used for the purchase of hardware. The system is designed for a life expectancy of 10 years.

C. Synopsis of Related Work

Distributed processing systems for real-time applications are becoming a reality with the advent of low-cost, high-powered microprocessors. The professed gains of a distributed system are many: lower hardware cost, higher reliability, increased flexibility, and ease of modernization and expansion due to the inherent modularity. Many distributed real-time systems provide proof-of-concept for this design methodology. Langley2 provides the results of the work performed under Navy contracts to create a distributed missile guidance and control system. The traditional use of analog circuitry and single-processor designs resulted in systems that were difficult and expensive to upgrade. Langley's goal was to develop a modular architecture that would increase system flexibility and reduce the cost of software development. Wilcock3 summarizes the concepts for the development of digital control systems for combat aircraft. This joint effort between the Royal Aircraft Establishment and the British Aerospace Corporation was to develop a distributed processing system that would reduce system weight, reduce pilot workload, improve maintainability, and improve survivability. Shin4 discusses the Distributed Microprocessor Airborne Computing System (DMACS) that was developed at the Rensselaer Polytechnic Institute. The DMACS is designed to be a combined computer system for high-performance
military aircraft, responsible for the functions of weapons, navigation, and control. Feo5 outlines the evaluation requirements for Intelligent Redundant Actuation System (IRAS) designs. IRAS research is NASA-sponsored and should achieve reduced flight-control computer loading by shifting the tasks of failure isolation and configuration management to microprocessors at the actuator level. Shin6 discusses the preliminary research of the Integrated Multi-Robot System (IMRS). IMRS is a distributed processing system designed for manufacturing systems. IMRS is expected to outperform contemporary centralized controllers on the basis of physical space, computer capabilities, throughput, flexibility, and fault-tolerance. Gluch7 and Kieckhafer8 discuss the Multicomputer Architecture for Fault-Tolerance (MAFT) developed by the Bendix Aerospace Technology Center. MAFT was designed for maximum reliability in real-time control systems. It consists of several nodes connected by a broadcast bus network. Each node uses two processors, one for executive functions and one for applications programs. Fura9 discusses the Integrated Fault-Tolerant Avionics Systems Computer (IFTAS) developed by Boeing for the next generation of space transport vehicles. IFTAS is a distributed network of processing nodes interconnected by a high-speed serial bus. Each node will have from one to four processors depending upon the level of reliability required by its functions. One of the most recent and significant proponents for distributed processing is the Space Station Columbus' Data Management System (DMS). The DMS will be responsible for data communications, data processing, data administration, data storage, data retrieval, and data presentation throughout the space station. The contract for the DMS has been awarded to McDonnell-Douglas; their proposed design will use a 100-million bit-per-second (Mbps), token ring, fiber optic network to connect all of the processing nodes in a ring topology.

While much significant work has been done in the area of real-time distributed processing systems, there is still no clear-cut methodology for designing these systems. Mejzak points out that the distributed system technology has not yet achieved a "rigid definition." Only a few attempts have been made to quantize the value of distributed design approaches.

Pedar presents a study that attempts to find an optimal tradeoff among fault tolerance, computing capacity, and cost in a distributed processor system. Vielcanet provides an empirical study of current ground and airborne microprocessor-based distributed systems that could be applied to space systems. Mangoubi16 presents a method for evaluating the performance of real-time distributed systems. This paper was a joint effort between MIT and the Draper Laboratory, and it specifically addresses the utilization of resources and the response time delays for processing tasks. Lala10 presents the network testbed developed by Draper Laboratory. This testbed will be used to experiment with various network concepts to develop more advanced network communication systems for future spacecraft. A recent, broad research effort for the cost effectiveness of various distributed design approaches has been undertaken by the U.S. Air Force with its Modular Avionics System Architecture (MASA) program. Part of this study involves determining what level of modularity can best benefit the Air Force. Brock18 points out that mandated use of common modules in aircraft systems could actually result in increased weight, size, and cost over an optimum point design.

Intrinsic to any discussion of distributed real-time processing systems are methods of achieving fault tolerance. The most common method for achieving fault tolerance in these systems is through redundancy of sensors, actuators, and processors. This redundancy often occurs naturally in distributed processing systems. Gelderloos19 describes the redundancy management of the shuttle craft flight control system. This system uses quadruple redundant processors and data buses, with dual and triple redundant sensors and actuators. It is designed to continue safe operation after two redundant system failures. Watson20 summarizes the work performed by General Dynamics in the development of a reliable avionics control system. Reliability was achieved here through the use of redundant sensors, redundant actuators, redundant computers, and advanced self-testing features within the computers. Sievers21 discusses the development of a fault-tolerant computer for the U.S. Navy. He claims that the use of fault-tolerant architectures can achieve lifetime costs from 1/5 to 1/60 of the baseline costs. The architecture proposed involved the use of multiple single-board computers, which would run identical tasks and periodically check each other's results. McGlone22 summarizes the results of the Air Force's Full Authority Fault Tolerant Electronics Engine Control (FAFTEEC) program. This program's goal was to develop the most cost-effective architecture for reliable digital control of a gas-turbine engine. The architecture chosen used dual sensors, actuators, and computers. Each computer would have dual central processing units (CPU) to determine the existence of a failure. Hartman23 proposes an architecture for advanced
fly-by-wire commercial aircraft systems. This work was performed by Honeywell under a contract from the Ames Research Center. The recommended architecture involved redundant sensors, actuators, data buses, and processors. The processors are to be distributed throughout the system, and fault tolerance would be implemented using either task reassignment or parallel running tasks with voting. Dzwonczyk24 presents a flight control system for the Entry Research Vehicle (ERV). The architecture for this system, developed by Draper Laboratory and Langley Research Center, involves the use of a central fault-tolerant processor that is connected to redundant sensors and effectors.

Fault tolerance through redundancy in distributed processing systems is typically implemented through the use of multiple identical software tasks running on separate processors with some form of voting. An example of this implementation is the Software Implemented Fault-Tolerance (SIFT) computer25. This form of redundancy places more of a software burden on the system, and may require an inordinate amount of the computer system's throughput. In the case of SIFT, the executive functions of the computer utilize 80% of the system throughput8. Because of the disadvantages of software-intensive fault tolerance, the trend has been toward hardware-intensive, fault-tolerant schemes. The goal of hardware-intensive fault tolerance is to provide redundancy that is "invisible" to the software designer. Montgomery26 discusses a fault-tolerant microprocessor that uses three processing modules. Each module is made from two microprocessors, and upon a detected module failure the spare is switched in and the system is restarted using a previously stored system state vector. Evans27 discusses the design of a fault-tolerant microcomputer used by the Metro Fire Board in Melbourne, Australia. This system uses three microprocessors, each with identical peripheral cards. Under fault-free operations two processors perform the same fault-critical tasks and the third is used for noncritical tasks. If a discrepancy between the fault-critical processors occurs, then the computer switches to a voting scheme using all three processors. Yaacob28 describes a fault-tolerant microcomputer that meets the requirements of civil avionics reliability. This design uses three microprocessors wired in a triply modular redundant structure, and a fourth processor as a powered standby spare. Smith29 describes the design theory of the WG-DCS machine developed at the Draper Laboratory. This machine uses three identical processors that run the same software. The output is obtained through a hardwired voting scheme. Ichikawa30 describes a fault-tolerant computer for use in satellite applications. This computer will use four CPUs, redundant read-only memory (ROM), random-access memory (RAM), clocks, ports, and data buses. Lala describes a quadruple-redundant processor developed at the Draper Laboratory. This processor has been designed to maintain fault-free operation in the event of any single point failure. As discussed by Lala17, there have been instances of failures in triply redundant flight control systems. Quadruple redundancy is required to maintain fault-free operation in the presence of a single Byzantine (malicious) failure.
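
The triply redundant schemes described above all reduce, at some point, to a majority vote over three identical outputs. The following is an illustrative sketch in C, not taken from any of the cited systems: a software voter that masks a single faulty value and flags any disagreement so that a spare can be switched in.

    /* Sketch of a triple-modular-redundancy majority voter (illustrative). */
    #include <stdio.h>

    /* Returns the majority value of three channels; sets *fault when any
     * channel disagrees. If all three differ, the fault is unmaskable and
     * the returned value is not trustworthy. */
    static long vote3(long a, long b, long c, int *fault)
    {
        *fault = !(a == b && b == c);
        if (a == b || a == c)
            return a;          /* a agrees with at least one peer */
        return b;              /* otherwise b and c agree, or all differ */
    }

    int main(void)
    {
        int fault;
        long out = vote3(1024L, 1024L, 1023L, &fault);
        printf("voted output = %ld, fault = %d\n", out, fault);
        return 0;
    }
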
An abundance of research has been conducted on the development of software methods that will improve the reliability of real-time distributed systems. Two of these methods, multiversion software and recovery blocks, are intended to prevent system failures due to undetected residual programming errors. The multiversion software method involves writing several different versions of the same software task. If one version fails, then the other versions will maintain system operation. The recovery block method involves periodically saving the system's state vector and, in the event of a failure, restarting the system at its last saved state. A third method, specifically designed for distributed systems, is relocatable software. This software method enables a distributed processor system to tolerate hardware faults by relocating the tasks of the failed processors to operational processors. Multiversion software for real-time systems is discussed by Shepherd32, Hitt33, Avizienis34, and Kieckhafer8. Recovery block techniques for real-time systems are discussed by Anderson35, Schneider36, and Hitt33. Clarke37, Schmid38, Loques39, and Best40 discuss the use of relocatable software for real-time systems.
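
The recovery block method just described rests on a simple checkpoint-and-restart pattern. A minimal sketch in C follows; it is illustrative only, with a hypothetical state vector, and is not drawn from any of the cited designs.

    /* Sketch of the checkpoint/rollback core of a recovery block. */
    #include <stdio.h>
    #include <string.h>

    struct state_vector {
        double position[3];          /* example task state */
        unsigned long ticks;
    };

    static struct state_vector saved;   /* last known-good checkpoint */

    static void checkpoint(const struct state_vector *now)
    {
        memcpy(&saved, now, sizeof saved);   /* save state periodically */
    }

    static void recover(struct state_vector *now)   /* after a failure */
    {
        memcpy(now, &saved, sizeof saved);   /* restart from saved state */
    }

    int main(void)
    {
        struct state_vector s = { { 1.0, 2.0, 3.0 }, 100UL };
        checkpoint(&s);
        s.ticks = 999UL;    /* the task then corrupts its state... */
        recover(&s);        /* ...and is rolled back */
        printf("ticks restored to %lu\n", s.ticks);
        return 0;
    }
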
Despite the research that has been done on multiversion software and on recovery blocks, risks posed by these methods are still generally considered too high for use in distributed systems. Eckhardt41 indicates that there are no data available to determine the cost effectiveness of multiversion software, even though this method is used in the space shuttle and in Canadian nuclear reactor systems. Voigt42 states that exhaustive testing is the only effective method to date for generating reliable software. Avizienis notes that multiversion software has been used in flight control systems for the Boeing 737/300, the Airbus, and the ATR aircraft. The authors of this paper reviewed the results of a University of California at Los Angeles (UCLA) and UC-Irvine study on multiversion software. The UCLA results were promising, but the UC-Irvine study cast serious doubt about the effectiveness of this method. At present, the only well-accepted method for preventing system failures due to residual software errors is through extensive testing. A study for the Netherlands Department of Civil Aviation43 indicates that since software reliability cannot be accurately determined, reliable software development requires rigorous development procedures and extensive testing. It is generally accepted that a large percentage of the residual errors can be eliminated before testing through strict adherence to a software quality assurance program during software development. An intensive study performed by Lear Siegler Inc. during the development of the Boeing 737/300 Flight Management Computer System (FMCS) indicated that 40% to 50% of the errors detected during the debugging process could have been detected during functional testing of software modules44. The Department of Defense standard DoD-STD-2168, discussed by Cooper45 and Smith46, outlines a method for ensuring software quality through an active quality assurance program. A similar methodology is also used in IEEE Standard 983, "Guide for Software Quality Assurance Planning," and in a report by the European Space Research and Technology Centre on software quality47. It is evident that the use of Computer Aided Software Engineering (CASE) packages throughout the specification, design, development, and maintenance of software can have an impact on its reliability and lifetime cost. CASE packages can be used to enforce a quality assurance program by providing a manageable program throughout the software life cycle.

In summary, there is ample evidence that distributed processing can produce a system that is lower in cost, higher in reliability, and greater in flexibility than a single-processor system. However, there is little specific empirical data available to the systems engineer for the optimal design of a distributed system; the optimal design of a dedicated, real-time, distributed system is highly application-specific. While a distributed design can be used to achieve a more cost-effective solution, it is often unclear how to optimize the design solution for the greatest benefit. The design is typically balanced between the system cost and the acceptable level of reliability/flexibility. The only proven method of achieving reliability in a distributed system is through hardware redundancy. Several software methods of improving system reliability have been proposed, but their value is questionable. A current trend is toward the development of highly reliable processors that achieve fault tolerance through hardware methods. These processors would provide very high levels of fault tolerance while placing little or no extra burden on the software designers. Software development continues to be the greatest hurdle for distributed systems design, with exhaustive testing the only accepted method for producing reliable software. Since exhaustive testing of a complex software system often requires an inordinate amount of time, the only feasible approach to reliable software development is through a highly structured development process.

Section II discusses the design of GASS, detailing the functions required of the system, the system's design constraints, and the development of an architecture that will satisfy both the requirements and constraints. Section III provides in-depth details of the final GASS design, including a description of the hardware and software paths traversed by a sensor datum. Section IV summarizes the major points of this report. Section V provides recommendations for areas in GASS that require further investigation and recommends future system enhancements.

II. System Design

A. System Functions

Figure 1 illustrates the basic and essential functions that are required of GASS. The primary function of GASS is to collect, format, and time stamp data from the sensors and to record these data onto magnetic tape. The magnetic tape contains all data measured by the sensors, and these data are loaded later onto a home-base computer for analysis. The secondary function of GASS is to provide centralized control and monitoring of its sensors and ancillary equipment. Including control in the functions of GASS tremendously escalates its complexity. However, over 30 devices will be included in GASS, many of which are extremely complicated to operate. Centralizing control and monitoring of all the devices in the system will reduce the manning and training levels required to operate the system. The tertiary function of GASS will be navigation. Since GASS is to be used for surveying, survey tracks must be generated and followed. Furthermore, deviations of the aircraft from the planned track can invalidate the data collected; the system operator must be alerted if the aircraft strays too far from the designated track. GASS navigation functions must include generating and editing survey plans, ascertaining the aircraft's position from its sensors, displaying aircraft position relative to the survey plan to the system operator and the pilot, and keeping a navigation log. The navigation log is a backup, on magnetic media, of the aircraft's track. This information can be used to resume a survey in the event of a system malfunction. A supplementary function of GASS will be data analysis. GASS will have the ability to perform analysis of data previously recorded on magnetic tape, or to analyze data in near real time as it is collected from the sensors.

Figure 1. General system functions.
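
The primary function, collecting, formatting, and time stamping sensor data for the tape, implies a uniform record per sample. The sketch below is illustrative only; the field names and layout are hypothetical, not the actual GASS tape format.

    /* Hypothetical uniform, time-stamped sensor record (illustrative). */
    #include <stdio.h>

    struct sensor_record {
        unsigned short sensor_id;   /* which instrument produced the sample */
        unsigned short status;      /* equipment status flags               */
        unsigned long  time_stamp;  /* time from the synchronized clock     */
        unsigned short rate_hz;     /* sampling rate: 1, 2, 4, 8, or 16 Hz  */
        double         value[4];    /* formatted measurement channels       */
    };

    int main(void)
    {
        printf("record size: %lu bytes\n",
               (unsigned long)sizeof(struct sensor_record));
        return 0;
    }
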

B. Design Requirements

Typical avionic system requirements involve size, weight, power consumption, and electromagnetic interference considerations. To meet these requirements an avionic system is usually custom-built for a particular application. The original GASS, built by the Applied Physics Laboratory of Johns Hopkins University, consisted of many custom-made devices and interfaces. This design methodology tends to make a system very difficult to maintain and modernize. The need for the current redesign of the system is a direct consequence of the previous system design. The components in the original system can no longer be maintained due to lack of technical support, and the system was not designed to allow for modernization. The primary design constraint put forth by NAVOCEANO for the new GASS is that the system be constructed with a highly modular architecture, using off-the-shelf components if possible. NAVOCEANO desired a system that would be easily reconfigured for different survey missions, but most of all, they desired a system that would allow easy integration of state-of-the-art sensors as they become available.

The mission of GASS will often take it to remote areas of the world, distant from technical support activities. As a consequence, NAVOCEANO desired that the system be designed for maximum reliability. However, since a failure in GASS cannot result in personnel injury or equipment damage, it was not deemed necessary to include extensive fault tolerance in the system. System fault tolerance requires equipment redundancy, which significantly affects the cost and weight of the system. The design sought, then, should improve system reliability through fault detection/isolation and through system availability. The fault detection and isolation capabilities of GASS are discussed in depth by Bourgeois48. With advanced fault detection and isolation features, a system will quickly alert the operator of any irregularities. High availability implies that while system operation may be interrupted by failures, the system may be easily and quickly placed back into operation. A high degree of system availability can be achieved through an architecture that allows easy reconfiguration of the system into a degraded operating mode. Thus, reliable system design for GASS will entail a system that will rapidly detect and announce failures, and allow for rapid reconfiguration to a degraded operating mode so that the survey may be continued with minimum impact.

C. System Architecture

1. Number of Processors

One of the first items that must be established about the design of a system of this type is the number of processing elements that will be required. As shown in Figure 2, two extremes are possible: a single processor interfaced to all of the equipment in the system, or a design where each piece of equipment has a processor and all processors are connected to a network. If only the data acquisition tasks are considered, then a single microprocessor should be able to handle the computational load. With approximately 20 sensors and related instruments in GASS, and a maximum sampling rate per sensor of 16 Hz, there are approximately 320 data acquisition tasks that must be performed each second. If a 16.7-MHz microprocessor (Motorola MC68020) were to be used for this application, then nearly 50,000 processor cycles would be available for each acquisition task. A crude estimate is five cycles on the average per instruction for the MC68020, so nearly 10,000 instructions per acquisition task are allowed. If only the acquisition tasks were required, then a single MC68020 processor would probably suffice. However, in addition to the basic acquisition tasks are the tasks for the tape recorder, navigation processing, operator interfaces, and other system control functions. The addition of these tasks would make it difficult to operate the system with a single processor. Also, the use of a single processor creates a monumental single point failure vulnerability in the system.

Figure 2. Architectural extremes.
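
The sizing arithmetic above can be worked through directly. The short C program below simply reproduces the report's estimate; it is illustrative, not part of the GASS software.

    /* Worked version of the processor-loading estimate above. */
    #include <stdio.h>

    int main(void)
    {
        long sensors     = 20;          /* sensors and related instruments  */
        long rate_hz     = 16;          /* maximum sampling rate per sensor */
        long tasks_per_s = sensors * rate_hz;        /* 320 tasks/second    */
        long cpu_hz      = 16700000L;   /* 16.7-MHz MC68020                 */
        long cycles_task = cpu_hz / tasks_per_s;     /* ~52,000 cycles      */
        long instr_task  = cycles_task / 5;          /* ~10,400 instructions */

        printf("cycles per acquisition task: %ld\n", cycles_task);
        printf("instructions per task (at 5 cycles each): %ld\n", instr_task);
        return 0;
    }
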
The other extreme would be to use a processor for each component in the system, with each processor interfaced to a central bus system. The low cost of microprocessors would make this approach feasible and, if a proper design is used, then multiple microprocessors can reduce the effect on the system of a single failed processor. However, the constraint on GASS for the use of off-the-shelf components restricts this approach. A processor per instrument is most attractive if a custom "black box" (typically a single circuit card) can be built that will interface the component directly to the central data bus. Use of off-the-shelf components, however, would require a separate microprocessor card, an interface card, and a card cage for each device. The result is that more circuit cards would be required, and the overall system power requirement would be greater. As a consequence, the decision was made to use "several" microprocessors, somewhere between the two extremes. The exact number of processors needed depends mainly upon how and where they are to be used in the system, as discussed in the next section.

2. Centralized vs. Distributed Hardware

With the choice to use multiple processors made, a decision must be made about how these processors will be arranged in the system. Two general methods are available: centralized or distributed. A centralized system offers the advantage that a single card cage can be used. This system allows for higher communication rates between processors on a backplane bus system, the elimination of the hardware and software required to network distributed processors, and a reduction in the number of power supplies required for the system. The disadvantages of a centralized system include single point failure vulnerability and a more severe wiring problem. With a centralized system the interface cables must be run from each sensor to the computer system. This configuration results in much more weight and bulk over that which would be required for a network-connected, distributed system. This problem has already caused many high-performance fighter aircraft manufacturing firms to seriously consider using distributed systems. The single point failure disadvantage is, without question, the single greatest problem with centralized systems. Not only can the entire system be halted by a failure of the central computer's power supply, but failures in remote parts of the system can disrupt operations. An extreme example of this problem has been displayed by the Main Engine Control System for the FFG7 class (fast frigate with guided missiles) of naval ships, designed by the General Electric Ground Systems Division. The system is used to control the operation of the two main propulsion gas turbine engines and their ancillary equipment. A centralized system is used, with all sensor and actuator signals routed to the central computer. Component failures remote to the computer can cause unwanted voltages to appear on the component's return signal lead, which is common to the entire system. At best, the result would be an inoperative system. Unfortunately, the result would typically be erratic and undesirable system operation.

Distributed systems can nearly eliminate the problem of single point failures. In a distributed system the tasks performed by the system are distributed among several processors, called "nodes," interconnected by a network. Network interfaces are available that allow a system to completely disregard faulty nodes. If the system is designed to reduce inter-node dependencies, then the failure of a single processor can have little or no effect on the rest of the system. A distributed system reduces the wiring bulk by placing the processing elements nearer to the remote system components, and the processing elements can be connected with either a single twisted pair of wires, a coax cable, or a fiber-optic cable. This wiring configuration also reduces a system's vulnerability to electromagnetic interference. A distributed system increases availability in that a single faulty node will not affect the rest of the system. Furthermore, if the system is so designed, then the tasks of a faulty node can be redistributed to the functional nodes. Disadvantages of a distributed system, in light of the requirement for off-the-shelf components, are that a separate card cage, each with its own power supply and network interface card, must be used for each remote processing node. The decision was made to use a distributed design for GASS because of its reliability aspects. To reduce the cost incurred by the support equipment required at each node, only eight nodes will be used. Six of these nodes will interface to sensors, the seventh node will be used for the operator and tape recorder interfaces, and the eighth node will handle the data analysis functions. The six sensor nodes will be designed to provide the computational power needed by the current sensor complement, and to allow addition of more sensors. With six sensor nodes, there will be an approximate loading of only three devices per node. This number leaves ample processing power for the task of bus communications, some minor data processing, and the addition of more sensors.

3. Local Area Network

Since the system will be constructed using a distributed architecture, a communications media must be chosen to interconnect the processors. The IEEE-488 standard, or general-purpose interface bus (GPIB), would seem the natural choice for this application. However, the GPIB poses severe limitations that make the use of a network-type communications system much more attractive. The GPIB has a limitation of 15 devices per bus, and the maximum length of cables used for a bus must not exceed 20 m49. Also, the GPIB cable is much heavier and bulkier than a single twisted pair or a coax cable. The GPIB cannot be easily upgraded to a faster communication media; a coax network could easily be upgraded to a faster fiber-optic network. Because of the GPIB limitations, the decision was made to use a network based on a serial data bus to interconnect the processor nodes.

Three fundamental local area network (LAN) configurations are practical for use in an embedded real-time avionics system: the ring, the star, and the bus (Fig. 3). In the ring bus, which is commonly unidirectional, messages are circulated around the ring until they reach the target node(s). This protocol greatly simplifies message routing and, since more than one message can be in transit, very high data rates can be achieved. The ring bus is well adapted for systems consisting of highly autonomous nodes. Since all the sensor nodes in GASS must communicate with the single node that has the tape recorder and the operator interface, the nodes of GASS cannot be considered autonomous. Therefore, the ring bus is not an efficient choice for this application. The star bus, with the operator/recorder node of GASS placed at the center, would seem the most natural choice for this system. The advantage of a star configuration is that it can achieve the highest data rates of all the possible architectures. The disadvantage of the star configuration is that the center node would require an interface card for each remote node. The bus configuration offers a system with less hardware but slower data rates, since several nodes must share the same communication media. The bus is the typical choice for avionics systems because of the savings in hardware. A single bus architecture was selected as the economical choice for the GASS network, since each node will have the ability to buffer collected sensor data between bus transmission periods.

Figure 3. Standard network topologies.

When a network is used for communications, a Media Access Protocol (MAP) must be established. There are two fundamental MAPs, contention and noncontention. With a contention MAP, also known as a random-access MAP, all nodes have equal access rights to the network. The most commonly known network of this type is Xerox's Ethernet. A node gains access to the network by listening; if the network is silent, then it will proceed to transmit. Since there are propagation delays across the wiring in a network, two nodes may begin transmissions simultaneously, resulting in a "collision." Collision is the greatest disadvantage of a contention MAP, and its occurrence makes this protocol unsuitable for real-time applications. Current research, however, indicates that the problems of collision in random-access networks may soon be remedied51,52.

With a noncontention MAP, some method is used to ensure that only one node is transmitting on the network at any given time. The two common methods for implementing a noncontention MAP are bus controller and token passing. With a bus controller MAP, a single node is given the responsibility to control all network communications. Other nodes are instructed to transmit data packets by the controller node. The greatest advantage of this scheme is its inherent simplicity of implementation. The disadvantages are that it does not make maximum use of the bandwidth of the communication media, and it presents a single point failure vulnerability. The time-slot implementation is the typically used controller scheme, where each remote node is allotted a period of time for transmission by the controller. Inefficiency exists with this implementation since a node may be granted its time-slot even if it has no data to transmit.
The token-passing MAP has received much attention in recent years, and it provides for more efficient use of the network medium. With a token-passing MAP a token is circulated from node to node throughout the network. The node that possesses the token is allowed to transmit on the bus and, when it no longer needs network access, it is responsible for passing the token to the next node. The noncontention controller MAP was chosen for GASS because of its simplicity of implementation and because the system architecture is such that all sensor nodes will communicate with the operator/recorder node but not with each other. The problem of single point failure vulnerability presented by this MAP can only be resolved by using an additional node with a tape recorder as an online backup for the controller node. The cost in terms of expense, weight, and size makes this solution unacceptable for GASS. As discussed in the next section, the analysis node will be used as an off-line backup for the controller node.

4. Physical and Functional Partitioning of Software

The hardware design concept consists of eight processing nodes interconnected by a bus network that uses a controller MAP. Six of the nodes will be used for sensors. The seventh node will be used for the operator interface and the magnetic tape recorder. The eighth node will contain the data analysis functions. Since all sensor nodes need to communicate with the operator/recorder node, but not with each other, the network controller function will be located in the operator/recorder node. All that remains to be defined for the system's architecture is the placement of the various software tasks required to operate the system.

The software tasks required for this system include bus interface tasks at each node; operator interface tasks (keyboard, displays, etc.) and a tape recorder task at the operator/recorder node; sensor control and acquisition tasks; navigation tasks; and data analysis tasks. To maximize real-time performance, the software tasks should be physically and functionally partitioned. In other words, the software should be designed to be modular in form. To meet this goal all sensor tasks are completely isolated to the sensor nodes. They will be designed such that all sensors appear to be identical from the perspective of the operator/recorder node and from the bus interface task in each node. More specifically, the individual sensor task in a sensor node will be the only software that is aware of the actual details of operation of its particular sensor. To the rest of the system, the sensor will have only the basic functions of sampling or not sampling, sampling rate, and initialization. By placing all of the "intelligence" to operate a sensor only at the sensor's node, network communications are decreased and the speed of the sensor task is increased, since it is not dependent upon the network. Furthermore, data from all sensors will be packaged identically, eliminating special handling requirements insofar as the bus interface and tape recorder tasks are concerned. This software approach also offers the advantage that all bus interface tasks will be identical. This method will reduce the cost of software development and, since the same module will be used by several developers in different nodes, it provides more thorough debugging of the software module. Another advantage is that the highly modular sensor tasks may be easily relocated to different nodes within the system.
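
One common way to express such a uniform sensor interface in C is a structure of function pointers: only the implementations behind the pointers know the real device. The sketch below is hypothetical (names and devices invented for illustration), not the GASS code.

    /* Illustrative uniform sensor-task interface. */
    #include <stdio.h>

    struct sensor_task {
        int (*init)(void);            /* initialization                 */
        int (*set_sampling)(int on);  /* sampling or not sampling       */
        int (*set_rate)(int hz);      /* sampling rate: 1,2,4,8,16 Hz   */
    };

    /* A dummy device behind the interface; only this code knows the device. */
    static int dummy_init(void)       { puts("altimeter: init"); return 0; }
    static int dummy_sampling(int on) { printf("altimeter: sampling %d\n", on); return 0; }
    static int dummy_rate(int hz)     { printf("altimeter: %d Hz\n", hz); return 0; }

    static struct sensor_task altimeter = { dummy_init, dummy_sampling, dummy_rate };

    int main(void)
    {
        altimeter.init();          /* the rest of the system sees only */
        altimeter.set_rate(16);    /* these three generic operations   */
        altimeter.set_sampling(1);
        return 0;
    }
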
To reduce the possibility of a bottleneck in the network, the decision was made to use two microprocessors in the operator/recorder node. One processor would be used strictly for the bus interface and the recorder interface to minimize the time required to pass data from the network to the tape recorder. The real-time processor would handle all communications on the network, and would pass the required control and data information between the network and the operator interface. The second processor would be used for the operator interface, which is a less time-critical function. This processor is the logical location for the navigation task, which is predominantly an off-line task, requiring only occasional position data from the sensors. Since data analysis tasks are typically computationally intensive, a separate processor, isolated from the real-time elements of the system, should be used. The need to isolate potentially slow software tasks from the rest of the system was the primary reason for using a separate node for the data analysis tasks. Since data analysis will be performed on a separate node, this node must also include the hardware and the software tasks to receive real-time data from the network or to read data tapes recorded by the operator/recorder node. The data analysis node will thus require much of the same hardware and software as the operator/recorder node, so these two nodes were made using identical hardware. Data analysis is a nonvital task in GASS and, in the event of a failure of the operator/recorder node, this design scheme would allow the data analysis node to be used for control of the system.

The resulting architecture, illustrated in Figure 4, has the software tasks physically and functionally partitioned. The highest speed tasks, those of sensor control and data acquisition, are located only in the six sensor nodes. The next highest speed requirement is that of moving the data from the sensors to the tape recorder. This requirement is handled by bus interface tasks in each sensor node, and the bus and tape recorder tasks are handled on the real-time processor in the operator/recorder node. The slowest tasks, operator interface and navigation, are handled in the second operator/recorder node processor, which is physically separated from the real-time processor by a global memory area. All communications between the two processors in this node will be handled through the global memory, reducing the load on the real-time processor. The data analysis node will function independently from the rest of the system, and will "listen" to the system's network to receive real-time data from the sensors.

Figure 4. GASS architecture.
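
Communication through a shared global memory area, as between the two operator/recorder node processors, is often implemented as a pair of one-way mailboxes. The sketch below is purely illustrative: the memory address, layout, and names are hypothetical, and the handshake is simplified to a single ready flag.

    /* Illustrative global-memory mailbox (hypothetical layout). */
    #include <string.h>

    #define GLOBAL_MEM_BASE 0x00200000L   /* hypothetical address of the card */

    struct mailbox {
        volatile unsigned short ready;    /* set by writer, cleared by reader */
        unsigned short length;            /* bytes used in data[]             */
        unsigned char  data[256];
    };

    /* Two one-way mailboxes at fixed offsets in the shared memory. */
    #define TO_OPERATOR ((volatile struct mailbox *)(GLOBAL_MEM_BASE))
    #define TO_REALTIME ((volatile struct mailbox *)(GLOBAL_MEM_BASE + 0x400L))

    /* Post one message; the peer polls ready, copies data, clears ready. */
    static int post(volatile struct mailbox *box, const void *msg,
                    unsigned short len)
    {
        if (box->ready || len > sizeof box->data)
            return -1;                    /* previous message still unread */
        memcpy((void *)box->data, msg, len);
        box->length = len;
        box->ready = 1;                   /* hand off to the peer */
        return 0;
    }
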
D. System Building Blocks

1. Microprocessor/Backplane

The choice for the backplane bus system was largely driven by the requirement to use off-the-shelf components, and the desire to use a 32-bit data and address bus. The oldest 32-bit microprocessor standard bus is the Motorola VME bus system introduced in 1981 (also known as IEEE Standard P1014). The VME bus had the greatest market support at the time of the GASS design and was chosen as the microprocessor backplane bus system. As a consequence, despite the diverse array of sensors used, only 3 of the 17 interface cards in the system had to be custom-made. Since the VME backplane was selected, a natural choice for a microprocessor was the Motorola MC68020. The MC68020 is a state-of-the-art microprocessor that supports multitasking, and the 68000 family of processors has been used in aerospace applications by Plessey Avionics62 and Draper Laboratory17,24,16. It was the second most preferred microprocessor (after the MIL-STD-1750A) of Ada development tool manufacturers.

2. Local Area Network Medium

The LAN medium selected by NAVOCEANO was the Military Standard 1553B serial bus, the data bus of choice for military avionic systems (and, thus, a large industrial backing). It supports 1-Mbps operation and transformer isolation of nodes, and offers the feature of dual-redundant communication channels. A possible alternate bus would be the Avionics Standard Communication Bus (ASCB). ASCB was developed by Sperry Corporation and is supported by the General Aviation Manufacturers Association (GAMA). It is also a dual-redundant bus system for avionics applications, and supports 0.67-Mbps operation at a reduced cost. The chief advantages of the 1553B bus over the ASCB are that 1553B has broader industrial support, and fiber-optic upgrades for 1553B systems are already available.

3. Operating Systems

Unix (Motorola V.3) was chosen as the operating system for the operator interface processor. Unix is a well-established, multitasking operating system with which the system's designers were well acquainted. A full-capability operating system is needed for the operator interface processor, since it is responsible for standard microcomputer tasks, including hard/floppy disk drive control, display control, and printer control, as well as its GASS-specific tasks. For the real-time processor and the processors in the sensor nodes, it was desired to have a streamlined operating system that was smaller and faster. For this application the pSOS operating system was selected. Manufactured by Software Components Group, pSOS is a real-time multitasking operating system that is tailored for the Motorola 68000 family of microprocessors.

4. Programming Language

Although some assembly-language programming was required for this type of system, the design goal was to use a high-level programming language whenever possible. High-level languages permit easier system development and provide much easier system modification and modernization. The language chosen for this system was Kernighan and Ritchie's "C." C is an excellent language for developing a system that involves a significant amount of input/output processing and special-purpose equipment interfacing. A possible contender for the GASS programming language choice was Ada, the Department of Defense's (DoD) standard programming language for real-time embedded systems. Even though Ada has been used by many DoD contractors and is selected as the programming language for the Space Station, Columbus61, Ada has only recently been considered a mature programming language64,68,69. Reported problems with Ada include difficulty in obtaining compilers for microprocessors and differences in interpreting the language between validated compilers; also the code produced is too large and slow for embedded systems. Ada was considered to be inappropriate for use in GASS due to the lack of familiarity by the design team and to the difficulties reported by many large government contractors.

III. System Details

A. Hardware

GASS is comprised of three major subsystems as illustrated in Figure 5. These subsystems are the Survey Control Subsystem (SCS); the Survey Analysis Subsystem (SAS); and the Remote Sensor Subsystem (RSS), which is comprised of six remote sensor systems (RSS1 through RSS6).

Figure 5. GASS subsystems.

The SCS is the central control point of GASS. This subsystem collects data from the RSS, stores the collected data on magnetic tape, and displays the necessary survey status information for the GASS operator and the Project Magnet aircraft pilot. The SCS is designed to collect and store all system data at rates of 1, 2, 4, 8, or 16 times a second.
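
Because the collection rates are all powers of two, every rate can be derived from a single 16-Hz base tick. The sketch below shows the idea; it is illustrative only and not the GASS scheduler.

    /* Illustrative rate divider driven by a 16-Hz base tick. */
    #include <stdio.h>

    static unsigned long tick;    /* incremented 16 times a second */

    /* A sensor sampled at rate_hz (1, 2, 4, 8, or 16) is due whenever
     * the tick count is a multiple of 16 / rate_hz. */
    static int due_this_tick(int rate_hz)
    {
        return tick % (unsigned long)(16 / rate_hz) == 0;
    }

    int main(void)
    {
        for (tick = 0; tick < 16; tick++)      /* one simulated second */
            printf("tick %2lu: 4-Hz sensor due? %d\n",
                   tick, due_this_tick(4));
        return 0;
    }
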
control computer is a Plessey CS-10 computer decimal (BCD) line, the pulse per minute (PPM) line,system, with a Plcssey 68-22M CPU card and a Plessey and the timing fault line. The BCD line provides the68-25M CPU card. The 68-22M CPU card is used for current time stamp encoded in binary coded decimal.the operator interface and system control functions. The The PPM line provides the timing mark for the time68-25M CPU card provides an interface between issued by the BCD line. The timing fault line indicatesthe RSS, the tape drive, and the operator. The 68-25M the source of the PTTI signals. During normal opera-card interface to the operator is via a global memory tions the PTTI signals arc derived from the globalcard that is accessible by the 68-22M CPU card. Both positioning system (GPS) through the PTTI interfaceCPU cards are based on a Motorola 68020 micro- custom-built by NORDA. In the event that the GPSprocessor with a Motorola 68881 co-processor. The receiver fails to produce the PTTI signals, the timingoperator, graphics, project, and navigation displays are fault line is asserted by the SCS, and the SCS takesTektronics SF4208 graphics terminals. The pilot display control of the time synchronization function. In thisis an RGB (red-green-blue) display mounted in the mode the SCS sends the time stamp to the SAS and eachcockpit of the aircraft. This display provides the pilot RSC over the 1553 serial data bus, and then issues thewith the survey route and other related information. The timing mark over the PPM line.primary function of the SCS is to collect data fromthe RSS over the 1553 data bus and to store this data on 3. Remote Sensor Subsystemtape. The SCS also provides survey navigation informa- The RSS consists of all six RSCs and their associatedtion, GASS equipment status, backup system timing, sensors. The RSCs are custom-built VME chassisand operator system control functions, manufactured by NORDA. They provide the power

The SAS consists of the survey analysis computer, an supply and framework to mount the processor andoperator console, a graphics display, and a 9-track tape interface cards required to operate an individual RSS.recorder. The primary purpose of the SAS is to provide Each RSC contains a Plessey 68-25M CPU card, an SCIthe operator access to the GASS survey data base so that Corporation 1553B serial interface card, and any inter-in-flight data analysis may be performed. As shown in face cards required by the sensors of a particular RSS.Figure 6, switches SW I and SW2 allow either the SCS Sensor interface cards include custom parallel, customor the SAS to operate the project, navigation, and pilot serial, 1553B serial, ARINC-419, synchro, GPIB, anddisplays. The SAS hardware is identical to the SCS, RS422 interfaces. The following paragraphs describeenabling the SAS to operate GASS in the event of a the components of each RSS.catastrophic failure of the SCS. RSS I (Fig. 7) contains RSC 1, a Rosemount 1501 AT

The MIL-STD-1553B data bus is a serial data bus precision barometric altimeter, a Rockwell GPS-3A%ith a 1-MHz maximum bit rate. The dual-redundant receiver, and the NORDA PTTI interface. The altim-configurationofthisdatabusisuiedinGASS.This con- eter provides altitude in feet to the RSC through afiguration provides a dual path for all communications parallel interface. The Rosemount altimeter is the mostbetween the SCS/SAS and the RSCs. in the event that accurate barometric source for altitude on the aircraft.one of the redundant buses fails, GASS will automati- The GPS receiver provides latitude, longitude, altitude,cally switch to the sccondary bus without loss of data.The 1553B data bus also provides the feature that a _oiupococo cfailure of an individual processor connected to the bus Pwill not disable the bus. RSC1 A A

1 1 R R5 5 A A2. Syste Timing5 5 L L

- CU c3 CpuAn accurate time reference for GASS is absolutely E

crucial to enable post-processing of the geophysicaldata collected. Accurate time references are contained GASS 1553 BUS -'in all processing units: the SCS, the SAS, and each RSC. GPS CONTROL/Since each of these eight units contains its own timing DISPLAY UNITdevice, some method of assuring synchronization ROSEMOUNTbetween all devices is required. The Precise Time and AROMETRICTime Interval (PTTI) interface provides time synchro- GLOBAL POSITIONING ALTIMETERnization between all processing systems in GASS. SYSTEM RCVR-3A PTTII TO ALLPTTI provides time synchronization once every GPSANTENNAD PTTI INTERFACE SUBSYSTEMS

minute to ,he SCS, the SAS, and each RSC. This isaccomplished through three lines: the binary coded Figure 7. RSS interface diagram.

11

Each RSC provides a power supply and framework to mount the processor and interface cards required to operate an individual RSS. Each RSC contains a Plessey 68-25M CPU card, an SCI Corporation 1553B serial interface card, and any interface cards required by the sensors of a particular RSS. Sensor interface cards include custom parallel, custom serial, 1553B serial, ARINC-419, synchro, GPIB, and RS422 interfaces. The following paragraphs describe the components of each RSS.

RSS1 (Fig. 7) contains RSC1, a Rosemount 1501 AT precision barometric altimeter, a Rockwell GPS-3A receiver, and the NORDA PTTI interface. The altimeter provides altitude in feet to the RSC through a parallel interface. The Rosemount altimeter is the most accurate barometric source for altitude on the aircraft. The GPS receiver provides latitude, longitude, altitude, and heading to the RSC through an independent 1553 serial bus. The GPS receiver also provides the signals required by the PTTI interface to provide system synchronization. The GPS receiver provides the most accurate aircraft position information in the system. It calculates aircraft position through communications with four satellites (normal operation) or through communications with three or fewer satellites and a barometric altitude input. Barometric altitude is supplied by the SCS over the 1553 data bus to RSC1.

Figure 7. RSS1 interface diagram.

RSS2 (Fig. 8) contains RSC2, an AAU-21 barometric altimeter, and two Litton-72 inertial navigation systems (INS). The AAU-21 altimeter provides altitude in feet to RSC2 through a parallel interface. Each Litton INS provides heading, pitch, roll, latitude, longitude, north velocity, east velocity, ground speed, and drift angle to RSC2 through an ARINC-419 serial interface and a synchro interface. These devices are part of the aircraft navigation system and are not controlled by GASS. The barometric altitude data required by the Litton INSs are provided by the aircraft's systems, independent of GASS.

Figure 8. RSS2 interface diagram.

RSS3 (Fig. 9) contains RSC3, a Texas Instruments ASQ-81 scalar magnetometer, two Hewlett-Packard HP5370B frequency counters, a Gould RS3200 strip-chart recorder, and an RMS Instruments automatic aeromagnetic digital compensator. The ASQ-81 magnetometer detects local magnetic field intensity using a sensor head in the tail of the aircraft. It produces two signals: the Larmor frequency and the bandpass output. The Larmor frequency is a frequency between 0.6 and 2.2 MHz that is proportional to magnetic field intensity. The Larmor frequency is measured by the HP5370B frequency counters and input to RSC3 over an HPIB data bus. Two frequency counters were required to provide the 16-Hz maximum sampling rate required of the system; an individual HP5370B frequency counter is capable of a maximum sampling rate of only 10 Hz in this configuration. The bandpass output is a signal that is proportional to the rate of change of the local magnetic field intensity. This signal is input to RSC3 through an analog-to-digital converter card. The strip-chart recorder provides the GASS operator with a quick-look display of both outputs of the scalar magnetometer. The digital compensator provides magnetically compensated total field and gradient signals to RSC3 through a parallel interface. It derives these signals from the ASQ-81 scalar magnetometer's Larmor frequency signal and the X, Y, and Z axis magnetic intensities from the vector magnetometer in RSS4.

Figure 9. RSS3 interface diagram.
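As a worked example of the scalar channel, total field intensity can be recovered from the measured Larmor frequency by dividing by the sensor's gyromagnetic ratio. The constant below, roughly 28.0 Hz per nanotesla for a metastable-helium sensor, is our assumption rather than a value from this report; it is, however, consistent with the 0.6-2.2 MHz range quoted above:

```c
/* Worked example: converting a measured Larmor frequency to total
 * magnetic field intensity. The 28.0 Hz/nT constant is an assumed
 * metastable-helium gyromagnetic ratio, not a value from the report. */
#include <stdio.h>

#define HZ_PER_NT 28.0  /* assumed gyromagnetic ratio, Hz per nanotesla */

double field_nT(double larmor_hz)
{
    return larmor_hz / HZ_PER_NT;
}

int main(void)
{
    /* 1.4 MHz is mid-range: about 50,000 nT, a typical Earth field. */
    printf("%.0f nT\n", field_nT(1.4e6));
    return 0;
}
```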

RSS4 (Fig. 10) contains RSC4, a Honeywell H-423 ring laser gyro (RLG), a Honeywell electronically suspended gyro (ESG), a NAROD vector magnetometer, and three HP3457A multimeters. The RLG provides latitude, longitude, pitch, roll, and true heading to RSC4 through an independent 1553B serial data bus. The ESG provides latitude, longitude, pitch, roll, and true heading to RSC4 through a special serial interface. RSC4 provides both the RLG and the ESG with barometric altitude supplied by the SCS. The vector magnetometer produces three voltages corresponding to the orthogonal magnetic field vector intensities. These voltages are measured with the digital multimeters and sent to RSC4 over an HPIB data bus. These voltages are also supplied to the digital compensator in RSS3.

Figure 10. RSS4 interface diagram.

RSS5 (Fig. 11) contains RSC5, an OPTECH 501-A laser altimeter, and a Honeywell APN-222 radar altimeter. The laser altimeter provides altitude in meters to RSC5 through a parallel interface. The radar altimeter provides altitude in feet to RSC5, also through a parallel interface. The laser altimeter is the most accurate source of altitude in GASS for low altitudes, and the radar altimeter is the most accurate for high altitudes.

Figure 11. RSS5 interface diagram.

RSS6 (Fig. 12) contains RSC6, the Hydrographic Airborne Laser System (HALS), and the Airborne Multispectral Pushbroom System (AMPS). HALS is under development by NORDA and the Naval Air Development Center (NADC) and is a laser-based system for determining shallow-water depth [70]. AMPS is under development by Lockheed through NASA and is a system that determines shallow-water depths using multispectral images of the ocean surface. These systems will collect and store their data independently from GASS, and they will have their own operator on the aircraft. GASS provides HALS and AMPS with a time stamp through a parallel interface, and with latitude, longitude, height, true heading, pitch, and roll through a serial interface.

Figure 12. RSS6 interface diagram.

B. GASS Software

Figure 13 illustrates the GASS software distribution. GASS software is functionally and physically distributed among the processors in the SCS, the SAS, and the RSCs. All real-time or near-real-time functions are handled by the 68-25M CPU cards in the SCS, the SAS, and the RSCs under a Software Components Group Inc. operating system called pSOS. All nonreal-time functions are handled by the 68-22M CPU cards in the SCS and the SAS under a Motorola Unix V.3 operating system. The software is functionally distributed into the following areas: Survey Control System, SCS Real-Time Front End (RTFE), Survey Analysis System, SAS RTFE, and the RSCs.

Figure 13. GASS software distribution.

The SCS software is responsible for the operator interface, system control, navigation, displays, and printer functions. The SCS RTFE software is responsible for the 1553 communication, data handling, message processing, tape drive control, and timekeeping functions. The SAS software is similar to that of the SCS, but it does not incorporate the system control and navigation functions. The SAS software includes graphics and data base functions not in the SCS software. The SAS RTFE software is almost identical to the SCS RTFE software. The RSC software is responsible for timekeeping and 1553B data bus communications for all RSCs. It also performs the functions of sensor control and sensor data preprocessing for the sensors in a particular RSS.

The software in the six RSCs is physically separated from the RTFEs in the SCS and SAS by the 1553B serial bus. The software in the SCS and SAS is physically separated from the respective RTFEs by the SCS and the SAS Global Memory Cards, with the exception of a "mailbox" interrupt line. The SCS and SAS software is stored on hard disk and is loaded into RAM for operation. Both of the Plessey CS-10 computer systems contain the software for the SCS and the SAS, and each may be booted to perform either function. The software for the SCS RTFE, the SAS RTFE, and the RSCs is contained in ROM in the respective 68-25M CPU cards, and is loaded into RAM for operation. The ROM used for the SCS and SAS RTFE contains the software for both, allowing either CS-10 computer system to operate as either the SCS or the SAS. The ROMs for the RSCs contain the software for all six RSCs. A jumper on a parallel port of the RSC is used to inform the RSC boot software as to which RSC it is in upon power-up. The boot software will load the software for that RSC into RAM.
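This power-up identity check reduces to a few lines of boot code. A sketch follows; the port address and image table are hypothetical, since the report does not give them:

```c
/* Sketch of the jumper-based identity check described above
 * (the register address and image table are hypothetical). */
#include <stdint.h>

#define RSC_ID_PORT ((volatile uint8_t *)0xFFF001)  /* assumed address */

extern const void *rsc_image[6];           /* one software image per RSC */
extern void load_and_run(const void *img); /* copy ROM image to RAM, jump */

void boot(void)
{
    /* Low three bits of the parallel port carry the jumper setting. */
    uint8_t id = *RSC_ID_PORT & 0x07;      /* 0..5 selects RSC1..RSC6 */
    if (id > 5)
        id = 0;                            /* defensive default */
    load_and_run(rsc_image[id]);
}
```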

1. Survey Control Subsystem Software

The SCS software (Fig. 14) consists of the navigation history, navigation control, display control, data print, mailbox control, and man-machine interface tasks. The navigation history task keeps a record of the survey's navigation information on the computer's floppy drive. This information is used to quickly resume a survey should an SCS failure occur. The navigation control task is responsible for computing the deviation from the actual aircraft's track to the desired survey track (a sketch of this computation follows Figure 14). The resulting information is displayed on the navigation and pilot displays. The display control task is responsible for driving the non-interactive navigation, project, and graphics displays (the pilot display is a slave under the control of the navigation display). The data for these displays are obtained from the SCS global memory. The data print task creates printouts of data and system parameters, as directed by the operator. The SCS mailbox control task provides the interface between the SCS software and the SCS RTFE software. The man-machine interface task controls all interactions with the operator via the operator console. Its functions include on-line help, configuration and control of system sensors, generation and editing of survey tracks, display configuration, special events log, and print control. With the exception of the mailbox interrupt line, all communications between the SCS software and the SCS RTFE software are via the SCS global memory.

Figure 14. Survey control subsystem software (UNIX operating system).
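For illustration, the deviation computed by the navigation control task can be expressed as a signed cross-track distance. The sketch below uses a flat-earth approximation in a local grid; the report does not specify the actual algorithm:

```c
/* Sketch of a cross-track computation of the kind the navigation
 * control task performs (flat-earth approximation, assumed units). */
#include <math.h>

typedef struct { double x, y; } pos;   /* local east/north, meters */

/* Signed distance from aircraft p to the survey line a -> b;
 * positive means right of track. Assumes a and b are distinct. */
double cross_track_m(pos a, pos b, pos p)
{
    double tx = b.x - a.x, ty = b.y - a.y;   /* track vector    */
    double len = hypot(tx, ty);
    double px = p.x - a.x, py = p.y - a.y;   /* aircraft vector */
    return (px * ty - py * tx) / len;        /* 2-D cross product */
}
```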

2. SCS Real-Time Front-End Software

The SCS RTFE software (Fig. 15) consists of the 1553 bus manager, message processor, tape manager, and time tasks. The 1553 bus manager task in the SCS RTFE controls all communications on the 1553B dual-redundant serial data bus. It operates the SCS's SCI 1553B interface card in the bus controller mode, polling each RSC for status and data. The message processor task handles all communications to and from the 1553 bus manager task. It handles the formatting and routing of all messages and data: it sends sensor control commands to the bus manager, receives sensor status messages and sensor data from the bus manager, and routes sensor data to the tape manager task and the SCS global memory. The tape manager task formats all data received from the message processor and controls the operation of the magnetic tape drive. The time task in the SCS updates the RTFE's processor clock using the PTTI signal. This task also provides the current system time to the operator via the SCS global memory. The SCS RTFE time task serves as the backup for the PTTI signal in GASS. If this task recognizes a loss of the PTTI signal, then it will take over the system timing function by asserting the PTTI fault line, generating timing sync pulses over the PTTI interface, and sending the current time stamp to the bus manager for transmission to all RSCs.

Figure 15. Real-time front-end software (pSOS operating system).
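The backup behavior of the time task can be summarized in a short watchdog routine. In the sketch below, the device-access names and the two-minute loss threshold are assumptions; only the fault-line, sync-pulse, and time-stamp actions come from the description above:

```c
/* Sketch of the time task's PTTI-backup behavior; all device-access
 * functions are assumed names, and the threshold is illustrative. */
#include <stdint.h>

extern uint32_t seconds_now(void);               /* RTFE processor clock */
extern volatile uint32_t last_ptti_sync;         /* set by the sync ISR  */
extern void assert_ptti_fault_line(void);
extern void emit_sync_pulse(void);
extern void queue_time_stamp_for_rscs(uint32_t t);  /* via bus manager */

/* Called periodically by the time task. If the PTTI pulse has been
 * missing for two sync periods, the SCS takes over system timing. */
void time_task_poll(void)
{
    if (seconds_now() - last_ptti_sync > 120) {
        assert_ptti_fault_line();
        emit_sync_pulse();
        queue_time_stamp_for_rscs(seconds_now());
    }
}
```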

3. Remote Sensor Controller Software

The remote sensor controller software (Fig. 16) consists of the 1553 bus manager, message processor, time manager, and sensor control tasks. The bus manager task in the RSCs buffers and transmits the data and messages received from the message processor. It also receives sensor control information from the 1553 bus and passes this to the message processor. The RSC bus manager task operates the SCI 1553B interface card in the remote terminal mode. The message processor task processes all messages between the sensor control tasks, the bus manager task, and the time manager task. The time manager task updates the RSC processor's clock using the PTTI signals. If the PTTI fails, then the time manager task will issue an alarm to the message processor for transmission to the SCS. The RSC software contains a separate sensor control task for every sensor or instrument in the particular RSC. Each sensor control task is responsible for sensor operation and control (via commands from the SCS), sensor alarm processing, data acquisition, and data checking/formatting.

Figure 16. Remote sensor controller software (pSOS operating system).
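The common shape of these per-sensor tasks might look like the following sketch; the helper names are assumed, and the pSOS-specific queue and timer services are abstracted away:

```c
/* Generic shape of one sensor control task (names assumed; one such
 * task runs for every sensor or instrument in the RSC). */
#include <stdbool.h>

typedef struct { int value; /* plus formatted sample fields */ } sample;

extern bool sensor_command_pending(void);
extern void apply_sensor_command(void);         /* command from the SCS */
extern bool acquire(sample *s);                 /* trigger and read     */
extern bool in_range(const sample *s);          /* data checking        */
extern void to_message_processor(const sample *s);
extern void raise_alarm(int code);              /* sensor alarm processing */

void sensor_control_task(void)
{
    sample s;
    for (;;) {                        /* one iteration per sample tick */
        if (sensor_command_pending())
            apply_sensor_command();
        if (!acquire(&s)) { raise_alarm(1); continue; }
        if (!in_range(&s))
            raise_alarm(2);
        to_message_processor(&s);     /* checked, formatted data out */
    }
}
```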

4. Survey Analysis System and SAS RTFE Software

The SAS RTFE software (Fig. 17) is similar to the SCS RTFE software, except that the tape manager task is included in the message processor task, and the bus manager operates the SCI 1553B interface card in the bus analyzer mode. The bus analyzer mode allows the SAS only to monitor the 1553B data bus. A separate tape manager task is not needed, since the SAS will only retrieve data from the tape recorder and will not send data to the recorder.

The SAS software consists of the man-machine interface, SAS mailbox control, data handler, que dump, and plot tasks. As with the SCS, the man-machine interface provides all interface between the operator and the SAS. This task allows the operator to control data acquisition from the 1553B bus and tape drive, to select plotting options, and to select data computations. The SAS mailbox control task provides the interface between the SAS software and the SAS RTFE software. The plot tasks can generate real-time plots from the data base, or off-line plots from data stored in disk files. The plots created can be displayed on the graphics display or on the printer. The data handler task routes data from the SAS RTFE to either the data base or to disk files. The que dump task routes the operator-specified data from either the disk files or the data base to the plot tasks.

Figure 17. Survey analysis subsystem software.

C. GASS Data Flow

Figure 18 illustrates the typical flow path of data through GASS. Data originate at an individual sensor and are sent to an RSC via the sensor's interface card. The sensor interface card converts the data from the sensor into a format that can be used by the RSC's 68-25M CPU card. The sensor manager task receives the data from the sensor interface card and sends it to the message processor task. The message processor task properly formats the data and sends it to the bus manager task. The bus manager collects the data from all sensors into a buffer. When the RSC is polled by the SCS, the bus manager sends all of the data currently in its buffer to the 1553B interface card for transmission on the 1553B data bus. The SCS RTFE bus manager task receives the data transmitted on the bus by the RSC from the SCS's 1553B interface card. The bus manager task passes the data to the SCS RTFE message processor task. The message processor routes all the sensor data to the tape manager task and to the sensor data area in the SCS global memory card. The tape manager task formats the data and sends them to the Pertec interface card, which in turn sends the data to the tape drive for recording. The display control task, located in the SCS CPU card, retrieves the data placed in the SCS global memory card by the SCS RTFE CPU card, formats it, and sends it to the displays via a serial interface.

Figure 18. GASS data flow.
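For illustration, the sketch below shows an assumed record layout and the buffer-until-polled behavior of the RSC bus manager; the actual GASS message and tape formats are not given in this report:

```c
/* Illustrative record and buffering for the path just described;
 * the record layout and buffer size are assumptions. */
#include <stdint.h>

typedef struct {
    uint8_t  rsc_id;       /* originating RSC (1..6)              */
    uint8_t  sensor_id;    /* sensor within that RSS              */
    uint32_t time_stamp;   /* PTTI-synchronized acquisition time  */
    int32_t  value;        /* scaled reading from the sensor task */
} gass_record;

#define BUF_MAX 64
static gass_record buf[BUF_MAX];
static int buf_n;

extern void to_1553_interface(const gass_record *r, int n);

/* RSC bus manager: accumulate records between polls ... */
void buffer_record(const gass_record *r)
{
    if (buf_n < BUF_MAX)
        buf[buf_n++] = *r;   /* overflow policy is an assumption */
}

/* ... and flush the buffer when the SCS polls this remote terminal. */
void on_scs_poll(void)
{
    to_1553_interface(buf, buf_n);
    buf_n = 0;
}
```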

IV. Summary

This study evaluates and applies the state of the art of distributed real-time system design. The knowledge base for distributed design is largely empirical, and few metrics are available for evaluating the economics of a particular design. The significant gains of a distributed design over a centralized design are an increase in reliability, flexibility, and expandability. Software development is the most difficult and the most expensive part of distributed systems development. While distributed systems can realize a savings in hardware, the cost of the software could easily offset this savings if the system design is too complex. It is important to realize that the only widely accepted method for ensuring reliability of software is through exhaustive testing. However, some of the costs of software debugging can be reduced by following a rigorous software quality assurance plan.

A systematic approach to the design of GASS involved first the identification of the system's requirements and major design restrictions. Then, a methodical analysis of available architecture options was made to produce a design that meets both the requirements and restrictions. The major elements of the system's architecture are the number of processors, the way these processors are distributed and interconnected in the system, and the distribution of the software tasks. This study revealed that the design of embedded real-time distributed systems is largely application specific; the design of GASS given in this paper offers one possible solution to the given design parameters.

V. Recommendations

A. Further Study

Once GASS is completed and has undergone successful functional testing, performance testing should be done to evaluate the system's design and its potential for expansion. One key performance criterion is the loading, or "busy time," of the system's processors. This measure will reveal critically loaded processors, if they exist, and will also provide an idea of how much extra load the system can handle. If there are critically loaded processors in the system, then the problem can probably be remedied by a different distribution of sensors in the RSCs. Another significant measure is the latency time of the network bus. Excessive bus latency can adversely affect the navigation function of the system, since GASS will be used on an aircraft. At typical survey speeds of 200 knots, 15 seconds is equivalent to 0.95 miles of travel. Probably the most significant measure of system performance at this point is that of measurement time skew. Even though the system is time synchronized and sensor acquisition tasks are initiated within a few microseconds of each other, a time skew still exists between the actual measurements. Part of this skew can be accounted for by the fact that the acquisition tasks for different sensors require varying lengths of time to actually trigger their sensors. The major contributor to the time skew, and also the hardest to measure, is the time required for each instrument to perform a measurement once triggered. Some instruments, the barometric altimeters, for example, provide data almost instantly after triggering of the acquisition task. Other instruments, such as the frequency counters used for the ASQ-81 magnetometer, may take almost the entire sample interval (when at an 8- or 16-Hz sampling rate) to make the measurement. These time skews need to be accurately determined so that they can be accounted for during post-processing of the GASS data.
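In post-processing, such a correction amounts to subtracting each instrument's trigger-to-measurement delay from its recorded time stamp, once those delays have been measured. A sketch, with purely illustrative skew values:

```c
/* Sketch of the post-processing correction implied above. The skew
 * values are illustrative placeholders; the report recommends that
 * the real values be measured during performance testing. */
#include <stdint.h>

/* Hypothetical per-instrument skews, seconds. A frequency counter
 * at 16 Hz may use most of its 62.5-ms sample interval. */
static const double skew_s[2] = { 0.000, 0.055 };

/* instrument must index a measured entry in skew_s. */
double corrected_time(uint32_t stamp_s, double frac_s, int instrument)
{
    return (double)stamp_s + frac_s - skew_s[instrument];
}
```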


B. Future System Enhancements

The GASS architecture will allow very easy modification and modernization of the system. If processors become overloaded, then the system can easily be reconfigured to distribute the load properly. If load distribution is not an adequate solution, then more processors could be added to the existing nodes, or extra nodes can be added to the system. If bus latency becomes a problem, then bus performance can be improved by using a token-passing MAP instead of the currently used controller scheme. In the event of severe bus loading problems, the system could be segmented into several buses by adding more bus interface cards to the SCS, or the entire bus system could be upgraded to a faster fiber-optic medium. The system's fault detection capabilities can be improved easily by incorporating cross-instrument data testing, as discussed by Bourgeois [48]; a sketch of such a test follows this paragraph. System availability can be improved by altering the current RSC software structure. The current design allows any of the RSC processor cards to be used in any RSC. This design could be improved further by configuring the RSC software to allow execution of any combination of the system's sensor tasks. With this change, many of the sensors of a failed RSC could be relocated to an operational RSC.
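A cross-instrument test of this kind compares two sensors that nominally measure the same quantity. The sketch below pairs the barometric and radar altimeters; the tolerance value is illustrative only:

```c
/* Sketch of a cross-instrument consistency test of the kind proposed
 * above (sensor pairing and threshold are illustrative). */
#include <stdbool.h>
#include <math.h>

bool altitude_fault(double baro_ft, double radar_ft)
{
    /* Both sensors nominally measure the same quantity; a sustained
     * difference beyond sensor tolerance indicates one has failed. */
    const double tolerance_ft = 100.0;   /* assumed threshold */
    return fabs(baro_ft - radar_ft) > tolerance_ft;
}
```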

VI. References

1. Harris, M., B. Bourgeois, P. Wischow, and J. Ross (1989). Geophysical Airborne Survey System--System Overview. In prep. Naval Oceanographic and Atmospheric Research Laboratory, Stennis Space Center, MS.
2. Langley, F., D. Siegel, W. Savage, and R. Wehman (1979). Federated microcomputer systems for on-board missile guidance and control. AGARD, Advances in Guidance and Control Systems Using Digital Techniques, May.
3. Wilcock, G., P. Lancaster, and C. Moxey (1982). Integrated control of mechanical systems for future combat aircraft. AGARD, Tactical Airborne Distributed Computing and Networks.
4. Shin, K. and C. Krishna (1982). Performance study of a distributed microprocessor architecture for use aboard a military aircraft. AGARD, Tactical Airborne Distributed Computing and Networks.
5. De Feo, P., L. Geiger, and J. Harris (1985). Intelligent Redundant Actuation System Requirements and Preliminary System Design. Contractor Report, NASA Contract No. NAS2-12081.
6. Shin, K., R. Throne, and Y. Muthuswamy (1987). Communication and Control in an Integrated Manufacturing System. Proceedings of the 1st Annual Workshop on Space Operations Automation and Robotics, pp. 405-411.
7. Gluch, D. and M. Paul (1986). Fault-Tolerance in Distributed Fly-by-Wire Flight Control Systems. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 507-514, October.
8. Kieckhafer, R., C. Walter, A. Finn, and P. Thambidurai (1988). The MAFT architecture for distributed fault tolerance. IEEE Transactions on Computers, 37(4):398-405.
9. Fura, D., T. Hill, and M. Raftery. Design and Validation of the IFTAS Fault-Tolerant Clock. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 235-242, October.
10. Svenningsson, M. (1987). Central processing unit for fault tolerant computing in Columbus. Acta Astronautica, 15:661-665.
11. Madden, W. and P. Wilhelm (1988). Space Station Data Management System Architecture. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 792-798, October.
12. Whitelaw, V. (1988). The Space Station Data Management System: Avionics That Integrate. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 767-774, October.
13. Mejzak, R. (1987). New technology impacts on future avionics architectures. AGARD, Advanced Computer Aids in the Planning and Execution of Air Warfare and Ground Strike Operations.
14. Pedar, A. and V. Sarma (1983). Architecture optimization of aerospace computing systems. IEEE Transactions on Computers, C-32(10):911-922.
15. Vielcanet, P., H. Horgen, T. Demoy, and P. Howlett (1984). Comparative Study on Data System Architectures. Contract Report, ESTEC Contract No. 5524/83/NL/PP.
16. Mangoubi, R., E. Gai, and B. Walker (1986). On the Performance Analysis of Real-Time Distributed Computer Systems. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 529-535, October.
17. Lala, J., L. Alger, S. Adams, L. Burkhardt, G. Nagle, and N. Murray (1988). Development of a Space-Systems Network Testbed. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 571-579, October.
18. Brock, L. and J. Deyst (1988). Modular Avionics Systems Studies. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 1-7, October.
19. Gelderloos, H. and D. Wilson (1976). Redundancy Management of Shuttle Flight Control Sensors. Proceedings of the 15th IEEE Conference on Decision and Control, pp. 462-475, December.
20. Watson, J., W. Yousey, and J. Railey (1979). Redundancy management considerations for a control-configured fighter aircraft triplex digital fly-by-wire flight control system. AGARD, Advances in Guidance and Control Systems Using Digital Techniques.
21. Sievers, M., G. Kravetz, B. Dussia, and J. Jackson (1982). Military Standard Fault-Tolerant Microcomputer. Contractor Report, Office of Naval Research Contract No. N00014-82-C-0126.
22. McGlone, M., R. Miller, W. Davies, and P. Adams (1983). Full authority fault tolerant electronic engine control systems for advanced high performance engines. Society of Automotive Engineers, Aerospace Congress and Exposition, October.
23. Hartman, G., J. Wall, E. Rang, H. Lee, R. Schulte, and W. Ng (1983). Advanced Flight Control System Study. Contract Report, Ames Research Center, Contract NAS4-2876.
24. Dzwonczyk, M. and H. Stone (1988). A Fault-Tolerant Suite for an Entry Research Vehicle. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 593-598, October.
25. Goldberg, J., et al. (1984). Development and Analysis of the Software Implemented Fault-Tolerance (SIFT) Computer. Contract Report, NASA CR-172146.
26. Montgomery, V. (1981). Development of a Fault-Tolerant Microprocessor Based Computer System for Space Flight. Southern University, Baton Rouge, LA, Contract NSG-8053.
27. Evans, R. and S. Price (1981). Fault Tolerant Microprocessor System Design. Proceedings of the 11th Annual International IEEE Symposium on Fault-Tolerant Computing, pp. 214-220, June.
28. Yaacob, M., M. Hartley, and P. Depledge (1983). Operational Fault-Tolerant Microcomputer for Very High Reliability. IEE Proceedings, 130.E(3):90-94.
29. Smith, T. (1984). Fault Tolerant Processor Concepts and Operation. Proceedings of the 14th IEEE Fault Tolerant Computing Symposium, pp. 158-163, June.
30. Ichikawa, S., Y. Kawada, M. Mine, Y. Ishige, and A. Itsukaichi (1988). Fault-Tolerant Computing System and Software for Engineering Test Satellite-VI (ETS-VI) Attitude and Control Subsystem. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 170-176, October.
31. Lamport, L., R. Shostak, and M. Pease (1982). The Byzantine generals problem. ACM Transactions on Programming Languages and Systems, 4(3):382-401.
32. Shepherd, J. (1982). A Method of Designing Fault Tolerant Software. Proceedings of the Royal Aeronautical Society's Symposium on Certification of Avionic Systems, April.
33. Hitt, E. (1986). Real-Time Fault Tolerant Software in Distributed Avionics Systems Architectures Using Digital Data Buses. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 523-528, October.
34. Avizienis, A. and M. Lyu (1988). On the Effectiveness of Multiversion Software in Digital Avionics. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 422-427, October.
35. Anderson, R. (1980). Automated Recovery in a Real-Time, Distributed, Multiple Microprocessor Computer System. Masters Thesis, Naval Postgraduate School, December.
36. Schneider, F. and R. Schlichting (1981). Towards Fault Tolerant Process Control Software. Proceedings of the 11th Annual International IEEE Symposium on Fault-Tolerant Computing, pp. 48-55, June.
37. Clarke, E. and C. Nikolaou (1981). Reconfiguration Strategies for Reliable Shared Memory Multiprocessor Systems. Proceedings of the 11th Annual International IEEE Symposium on Fault-Tolerant Computing, pp. 152-158, June.
38. Schmid, H., S. Larimer, and T. Madak (1986). Channelized or Nonchannelized Fault-Tolerant Computers: A Hardware Complexity Comparison of Fault-Tolerant Computers for Flight Control Systems. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 655-663, October.
39. Loques, O. and J. Kramer (1986). Flexible Fault Tolerance for Distributed Computer Systems. IEE Proceedings, 133.E(6):319-322.
40. Best, D., K. McGahee, and R. Shultz (1988). A Fault Tolerant Avionics Multiprocessing System Architecture Supporting Concurrent Execution of Ada Tasks. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 256-262, October.
41. Eckhardt, D. (1985). Fault-Tolerant Software Experiment Objectives and Status. NASA Computer Science/Data Systems Technical Symposium, April.
42. Voigt, R. (1986). The Design of a Real Time Operating System for a Fault Tolerant Microcomputer. Masters Thesis, Naval Postgraduate School, December.
43. Dekker, G. (1985). Reliability Aspects of Software for Digital Avionics. Contractor Report for the Netherlands Dept. of Civil Aviation.
44. Cramer, M. (1986). A Quantitative Analysis of the History of Developing a Large Embedded Software System. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 355-362, October.
45. Cooper, L., J. Radatz, and R. Butler (1986). DoD Develops Single Set of Software Quality Standards. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 777-783, October.
46. Smith, J. (1986). Software Quality Issues. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 765-768, October.
47. Anonymous (1983). Software Quality Assurance for ESA Spacecraft and Associated Equipment. European Space Research and Technology Centre Report, February, NASA Document #84N26037.
48. Bourgeois, B., M. Harris, T. Nicolaides, P. Wischow, A. Martinez, P. Duvoisin, and R. Drake (1989). Fault Detection and Reliability of the GASS Multi-processor Survey System. Proceedings of the IEEE Southeast Conference, pp. 319-325, April.
49. IEEE Standard Digital Interface for Programmable Instrumentation (1987). ANSI/IEEE Standard 488.1.
50. Fritz, J., C. Kaldenbach, and L. Progar (1985). Local Area Networks Selection Guidelines. Information Systems & Networks Corporation, Prentice Hall, Englewood Cliffs, New Jersey.
51. Shaw, J., H. Herzog, and K. Okubo (1986). Digital Autonomous Terminal Access Communication (DATAC). Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 221-226, October.
52. Holmes, D. (1986). Global System Data Bus Using the Digital Autonomous Terminal Access Communication Protocol. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 227-233, October.
53. McGough, J. (1986). Evaluation of Data Busses for Flight Critical Control Applications. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 718-727, October.
54. Anderson, S. (1986). The High Speed Interconnect System Architecture and Operation. Proceedings of the Aerospace Avionics Equipment and Integration Conference, pp. 173-183, April.
55. Spieth, J. and W. Seward (1986). Simulation Model of a High-Speed Token-Passing Bus for Avionics Applications. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 242-249, October.
56. Meyer, J. (1986). SAE AE-9B Draft Standard High Speed Token Passing Data Bus for Avionics Applications. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 234-241, October.
57. Nelson, J., L. Shafer, D. Hamlin, and H. Herrmann (1987). A Candidate for Linear Token-Passing, High-Speed Data Bus Systems. Proceedings of the 2nd Aerospace Avionics Equipment and Integration Conference, pp. 105-120, November.
58. Ludvigson, M., M. Modrow, and P. Goldman (1988). The High Speed Bus Technology Development Program. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, October.
59. Kroeger, B. and H. Shih (1988). An SAE High Speed Ring Bus Overview. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 719-723, October.
60. Bux, W. (1989). Token-Ring Local-Area Networks and Their Performance. Proceedings of the IEEE, 77(2):238-256.
61. Cooling, J. (1986). Real-Time Interfacing: Engineering Aspects of Microprocessor Peripheral Systems. Van Nostrand Reinhold Co. Ltd., Berkshire, England.
62. Keen, W. (1988). Ada in Avionics - Beyond Validation. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 251-255, October.
63. Lala, J., L. Alger, R. Gauthier, and M. Dzwonczyk (1986). A Fault Tolerant Processor to Meet Rigorous Failure Requirements. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 555-562, October.
64. Voelcker, J. (1987). Ada: from promise to practice? IEEE Spectrum, pp. 44-49, April.
65. Jennings, R. (1986). Avionics Standard Communication Bus, Its Implementation and Usage. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 242-249, October.
66. Blaha, M., C. DeGennaro, and S. Utley (1988). A Dual Speed, Mil-Std-1553B Compatible Fiber Optic Data Bus. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 395-398, October.
67. Humphrey, T. (1988). Reducing the Risks of Using Ada Onboard the Space Station. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 599-602, October.
68. Lahn, T., S. Minear, and J. Murray (1986). Some Views on the Use of Ada for Digital Flight Control Systems. Proceedings of the 7th AIAA/IEEE Digital Avionics Systems Conference, pp. 455-460, October.
69. Garlington, K. and A. Tyrrell (1988). A Case Study: F-16 ADA Digital Flight Control System. Proceedings of the 8th AIAA/IEEE Digital Avionics Systems Conference, pp. 267-272, October.
70. Harris, M., H. Mesick, H. Byrnes, T. Curran, and V. Contarino (1987). A Laser Sounder for U.S. Navy Hydrographers. Proceedings of the IEEE International Geoscience and Remote Sensing Symposium.
