
web.stanford.edu/group/scpnt/gpslab/pubs/papers/Walter_IONNTM_2003.pdf

Integrity Lessons from the WAASIntegrity Performance Panel (WIPP)

Todd Walter, Per Enge, Stanford University,Bruce DeCleene, FAA

ABSTRACT

The Wide Area Augmentation System (WAAS) is unlike any previous navigation system fielded by the FAA. Historically the FAA has implemented relatively simple and distributed systems. Each only affects a small portion of the airspace and each is maintained independently of the others. WAAS, in contrast, is a complex and centralized system that provides guidance to the whole airspace. Consequently, the certification for WAAS must proceed very cautiously. WAAS is being pursued because its potential benefits are significant. It will provide guidance throughout the national airspace. It will enable approaches with vertical guidance to every runway end in the United States without requiring local navigational aids. It will enable advanced procedures such as curved approaches and departures. Eventually it will allow greater capacity through smaller separation standards. These and other benefits motivate the effort to create and certify this new type of system. Although the analysis becomes much more difficult, the system must maintain the same or a higher level of safety than the existing infrastructure.

Another difference with WAAS is that it is inherently a non-stationary system. It relies on satellites that are constantly in motion and that may change their characteristics. Additionally, the propagation of the satellite signals varies with local conditions. Thus, the system has differing properties over time and space. However, the system requirements apply to each individual approach. In particular, the integrity requirement, that the confidence bound fails to contain the true error in fewer than one in ten million approaches, must apply to all users under all foreseeable operational conditions. To ensure that the integrity requirement would be met, the FAA formed the WAAS Integrity Performance Panel (WIPP). The role of the WIPP is to independently assess the safety of WAAS and to recommend system improvements. To accomplish these tasks, the WIPP had to determine how to interpret the integrity requirement for WAAS, develop algorithms to meet this requirement, and ultimately validate them.

INTRODUCTION

The Wide Area Augmentation System (WAAS) monitors the Global Positioning System (GPS) and provides both differential corrections to improve the accuracy and associated confidence bounds to ensure the integrity. WAAS utilizes a network of precisely surveyed reference receivers located throughout the United States. The information gathered from these WAAS Reference Stations (WRSs) is used to monitor GPS and its propagation environment in real time. However, WAAS designers must be aware of the limitations of its monitoring. The observables are corrupted by noise and biases, causing certain fault modes to be difficult to detect. Because it is a safety-of-life system, WAAS must place rigorous bounds on the probability that it is in error, even under faulted conditions.

In late 1999, concerns arose over the WAAS design and the process by which WAAS was to be proven safe. In response, the FAA created the WAAS Integrity Performance Panel (WIPP). The WIPP is a body of GPS and system safety experts chartered to assess the system engineering and safety design of WAAS and recommend any required changes. The WIPP consists of members from government (FAA, JPL), industry (Raytheon, Zeta, MITRE), and academia (Stanford University). They first convened in early 2000 to address the integrity and certification of WAAS.

Primarily, the WIPP quantified the degree to which WAAS mitigated the system vulnerabilities. Over the next two years, the WIPP changed the design of several system components where the system could not satisfactorily demonstrate the required level of integrity. As each threat was addressed, the WIPP built upon what it had learned.

Some of the main lessons that emerged from the WIPP are:

• The aviation integrity requirement of 10^-7 per approach applies in principle to each and every approach. It is not an ensemble average over all conditions.


• For events where fault modes or rare events are not known, validated threat models are essential both to describe what the system protects against and to quantitatively assess how effectively it provides such protection.

• The system design must be shown to be safe against all fault modes and external threats, addressing the potential for latent faults just beneath the system's ability to detect them. Conventional non-aviation differential systems presume no failures exist until consistency checks fail.

• Analysis must take place primarily in the range or correction domain as opposed to the position domain.

• The small numbers associated with integrity analysis are not intuitive. Careful analysis must take priority over anecdotal evidence.

These lessons will be described in greater detail. Of these lessons, the need for threat models is the most important and was the most lacking. Threat models describe events or conditions that may cause harm to the user. In this case, harm is referred to as Hazardously Misleading Information (HMI). It is defined as a true error that is larger than the guaranteed Protection Level (PL). WAAS provides differential corrections that are applied to the received pseudoranges from GPS. At the same time, confidence bounds are also supplied to the user. These bounds are used, with the geometry of satellites about the user, to calculate the PL. In order to use the calculated position for navigation, the PL must be small enough to support the operation. The user only has real-time access to the PL, not the true error. Thus, HMI arises if the user has been told that the error in position is small enough to support the operation, but in fact, it is not.
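The relationship between the broadcast confidence bounds, the satellite geometry, and the PL can be sketched with a weighted-least-squares computation. This is an illustrative sketch, not the exact MOPS equations; the five-satellite geometry, the 2 m confidence bounds, and the K-factor of 5.33 (the value commonly allocated to the vertical axis for precision approach) are assumptions for the example.

```python
import numpy as np

def vertical_protection_level(los_unit_vectors, sigmas, k_v=5.33):
    """Illustrative VPL: los_unit_vectors is (n, 3) user-to-satellite
    unit vectors in East-North-Up; sigmas are per-satellite one-sigma
    confidence bounds in meters."""
    n = len(sigmas)
    # Geometry matrix: line-of-sight components plus a receiver-clock column.
    g = np.hstack([np.asarray(los_unit_vectors), np.ones((n, 1))])
    w = np.diag(1.0 / np.square(sigmas))       # weight = 1 / sigma^2
    cov = np.linalg.inv(g.T @ w @ g)           # position/clock covariance
    d_u = np.sqrt(cov[2, 2])                   # one-sigma vertical (Up) error
    return k_v * d_u

# Example: four satellites at ~20 degrees elevation plus one at zenith,
# each with a 2 m confidence bound (vectors are approximately unit length).
los = np.array([[0.0, 0.94, 0.342], [0.94, 0.0, 0.342],
                [-0.94, 0.0, 0.342], [0.0, -0.94, 0.342],
                [0.0, 0.0, 1.0]])
vpl = vertical_protection_level(los, [2.0] * 5)
```

If the resulting VPL exceeds the alert limit for the intended operation, the user must not fly the approach; the true error is never available in real time, only this bound.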

The threat models must describe all the known conditions that could cause the true errors to exceed the predicted confidence bounds. Having a comprehensive list is essential to achieving the required level of safety, and it also drives the system design. Additionally, restricting the scope of the threats is necessary for practical reasons. It is not possible to create a system that can protect against every conceivable threat. Fortunately, many such threats are either unphysical or extremely improbable. Restricting threats to those that are sufficiently likely is necessary for creating a practical system.

INTEGRITY REQUIREMENT

The integrity requirement for precision approach guidance (APV-I through Category I) is 1-2x10^-7 per approach [1]. There is a general understanding that this probabilistic requirement applies individually to every approach. This definition is further refined in the WAAS specification [2] as applying at every location and time in the service volume. Since WAAS provides service to a large number of runways, it is not acceptable for one airport to have less integrity simply because a different aircraft hundreds of miles away has margin against the requirement. Similarly, with the non-stationary characteristics arising from effects such as the orbiting satellites, it is not appropriate for operations to continue during an hour interval when the integrity requirement is not met, just because it is exceeded for the rest of the day. Generally, this can be restated as meaning that the probability of Hazardously Misleading Information (HMI) must be at or below 1x10^-7 for an approach at the worst time and location in the service volume for which the service is claimed to be available. Despite this apparent understanding, a more detailed discussion of the interpretation is instructive.

The integrity requirement is that the Positioning Error (PE) must be no greater than the confidence bound, known as the Protection Level (PL), beyond the specified probability. Confusion may result because the requirement is probabilistic, yet at the worst time and place, the errors appear deterministic. Instead, the requirement should be viewed as applying to a hypothetical collection of users under essentially identical conditions. The collection of users, referred to as the ensemble, must be hypothetical in this case because satellite navigation systems and their associated errors are inherently non-stationary. Any true ensemble would average over too many different conditions, combining users with high and low risk. Thus, we must imagine an ensemble of users, for each point in space and time, whose errors follow probability distributions specific to that point.

Of course, there can only be one actual user at a given point in space and time. That user will experience a specific set of errors that combine to create the position error. These errors are comprised of both deterministic and stochastic components. The distinction is that if we could replicate the conditions and environment for the user, the deterministic components would be completely repeatable. Thus, these errors would be common mode; all users in our ensemble would suffer them to the same degree. On the other hand, stochastic errors such as thermal noise would differ for each user in our ensemble. Overall, these components combine to form a range of possible errors whose magnitudes have differing probability. When we look at a very large number (approaching infinity) of hypothetical users in the ensemble, some will have errors that exceed the protection level while most will not. The fraction of users that exceed the PL can be used to determine the probability of an integrity failure under those conditions.
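This "fraction of the ensemble" view can be illustrated with a small Monte Carlo sketch: a deterministic bias shared by every hypothetical user plus an independent stochastic (thermal-noise-like) component. All of the numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def ensemble_exceedance(pl, common_bias, noise_sigma, n_users=1_000_000):
    """Fraction of a hypothetical ensemble whose total error exceeds PL."""
    noise = rng.normal(0.0, noise_sigma, n_users)  # differs per user
    errors = np.abs(common_bias + noise)           # bias is common mode
    return np.mean(errors > pl)

# A 10 m protection level against a 1 m common bias and 2 m noise:
p_fail = ensemble_exceedance(pl=10.0, common_bias=1.0, noise_sigma=2.0)
```

Note that the common bias shifts every ensemble member the same way, so it eats directly into the margin, while the stochastic term spreads the ensemble out; both must be captured in the confidence bound.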

The difficult aspect of applying this philosophy is defining equivalent user conditions and then determining the error distributions. A circular definition is that user conditions can be called equivalent if they carry the same level of risk. A more practical approach is to exploit prior knowledge of the error sources. For example, if it were known that an error source only has a definite temperature dependency, then the ensembles should be formed over all users in narrow temperature ranges. The error distributions and probability of exceeding the PL would be calculated for each ensemble, and the integrity requirement would have to be met for the most difficult case for which availability is claimed. Unfortunately, true error sources usually have multiple dependencies, and these dependencies differ between the various error sources. Thus, the ensembles may need to be formed over narrow ranges of numerous parameters. However, great care must be taken, because if certain dependencies are not properly recognized, the ensembles may unknowingly average over different risk levels.
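That partitioning can be sketched as follows, assuming a single hypothetical temperature dependency: the data is divided into narrow temperature bins, statistics are evaluated per bin, and the requirement is applied to the worst bin rather than the pooled average.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data under an assumed dependency: error sigma grows linearly
# with temperature (purely illustrative numbers).
temperature = rng.uniform(-20.0, 40.0, 50_000)
errors = rng.normal(0.0, 1.0 + 0.02 * (temperature + 20.0))

bins = np.linspace(-20.0, 40.0, 7)              # six 10-degree-wide bins
idx = np.digitize(temperature, bins) - 1
per_bin_sigma = [errors[idx == k].std() for k in range(6)]

worst_sigma = max(per_bin_sigma)                # bound the worst ensemble
pooled_sigma = errors.std()                     # naive average over all bins
```

The pooled sigma understates the risk in the hottest bin: averaging over the unrecognized dependency mixes high-risk and low-risk conditions, which is exactly the failure mode the text warns against.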

The restatement of the requirement that it applies to the worst time and location is misleading, because it is acceptable to average against certain conditions. Some events may be sufficiently rare to ignore altogether. If, under similar conditions, the a priori likelihood is well below 1x10^-7 per approach (considering exposure time to the failure), then there may not be any need to provide additional protection. The worst time and place should not be viewed as when and where this unlikely event occurs. The event need only be considered if it is sufficiently likely to occur, if when and where it is most likely to occur can be predicted ahead of time, or if it is strongly correlated with an observable. Even if the event is not sufficiently rare to be ignored, its a priori probability may be utilized provided the event remains unpredictable and immeasurable. Thus, the conditions where the event is present may be averaged with otherwise similar conditions without the event. Taking advantage of such a priori probabilities must be approached very cautiously on a case-by-case basis.
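The way an a priori probability enters the risk budget can be written as a one-line weighted average over the faulted and fault-free conditions. The numbers below (a 1e-4 per-approach prior fault probability and a 1% missed-detection rate) are purely illustrative.

```python
def total_hmi_probability(p_fault_apriori, p_hmi_given_fault,
                          p_hmi_given_nominal):
    """Average the faulted and nominal HMI risks, weighted by the a priori
    likelihood of the (rare, unpredictable) fault event."""
    return (p_fault_apriori * p_hmi_given_fault
            + (1.0 - p_fault_apriori) * p_hmi_given_nominal)

# A fault with a 1e-4 prior that the monitor misses 1% of the time,
# on top of a 1e-9 nominal risk:
p_total = total_hmi_probability(1e-4, 1e-2, 1e-9)
```

This averaging is only legitimate when the event is unpredictable and unmeasurable, as the text stresses; a predictable or observable fault must instead be protected against directly.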

The goal is to ensure that all users are exposed to risk at no greater than the specified rate of 10^-7 per approach. Thus, ensembles that cannot be correlated in some way with user behavior do not make sense. For example, users may tend to fly to the same airport at the same time of day or during a certain season. Therefore, an ensemble of all users with a specific geometry at a certain location and certain time of day, but theoretically infinitely extended forwards and backwards over adjacent days, is reasonable. On the other hand, an ensemble of all users whose thermal noise consists of five-sigma errors aligned in the worst possible direction is neither realistic nor practical. The latter example attempts to combine rare and random events into a unifying ensemble that cannot be made to correspond to user behavior. In general, conditions leading to high risk that are both rare and random can be averaged with lower risk conditions. The requirement for rarity seeks to assure that users do not receive multiple exposures to the high-risk condition, while the requirement for randomness seeks to avoid a predictable violation of the integrity requirement. However, the correlation with conceivable user behavior must be the determining factor when deciding whether or not to average the risk. Similarly, a correlation with a system observable should be exploited to protect the user when performance goes out of tolerance.

Deciding how to define the ensembles provides the necessary information for determining the error distributions. Components will largely be divided into noise-like contributions, with some spread in their values, and bias-like contributions whose values are seen as fixed although probably unknown. Although many of these error sources may be deterministic, practically they may need to be described in stochastic terms. Many error sources fall into this category, including ionosphere, troposphere, and multipath. If we knew enough about the surrounding environments, we could predict their effects for each user. However, because it is usually not practical to obtain this information, it may be acceptable to view these effects as unpredictable as long as their effects cannot be correlated with user behavior.

Knowledge of the error characteristics is very important in evaluating system design. While impossible to know fully, many important characteristics such as dependencies may be recognized. This knowledge allows proper determination of the error distributions. After defining the individual distributions, the correlations between them must be established. Many deterministic error sources will affect multiple ranging sources simultaneously. Correlated deterministic errors may add together coherently for a specific user. Such effects require larger increases in the protection level than if the errors were uncorrelated. If these effects are not recognized and treated appropriately, the integrity requirement will not be met and the user will suffer excessive risk. Although the form of the protection level equations given in [1] and [2] suggests that all error sources are independent, zero-mean, and gaussian, this is not the case under all operating conditions. Each error source must be carefully analyzed, both individually and in relation to the other sources. Only then can the appropriate confidence bounds be determined.
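The effect of correlation on the projected confidence can be seen in a two-satellite sketch: when range errors that project onto the vertical axis with the same sign are positively correlated, the position-domain sigma grows well beyond the independent-error value. The projection coefficients and the correlation value are invented for illustration.

```python
import numpy as np

s = np.array([1.2, 0.8])        # vertical projection coefficients (same sign)
sigma = np.array([2.0, 2.0])    # per-range one-sigma errors (meters)

cov_indep = np.diag(sigma ** 2) # errors assumed independent
rho = 0.9                       # assumed strong positive correlation
cov_corr = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                     [rho * sigma[0] * sigma[1], sigma[1] ** 2]])

sigma_v_indep = np.sqrt(s @ cov_indep @ s)  # root-sum-square of projections
sigma_v_corr = np.sqrt(s @ cov_corr @ s)    # coherent addition is larger
```

A protection level built on the independent-error assumption would be too small here; this is the quantitative content of the warning that correlated deterministic errors "add together coherently."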

ERROR MODELING

Each individual error source has some probability distribution associated with it. This distribution describes the likelihood of encountering a certain error value. Ideally, smaller errors are more likely than larger errors. Generally, this is true for most error sources. The central region of most error sources can be well described by a gaussian distribution. That is, most errors are clustered about a mean (usually near zero) and the likelihood of being farther away from the mean falls off according to the well-known model. This is often a consequence of the central limit theorem, which states that distributions tend to approach gaussian as more independent random variables are combined.

Unfortunately, the tails of the observed distributions rarely look gaussian. Two competing effects tend to modify their behavior. The first is clipping: because there are many cross-comparisons and reasonability checks, the larger errors tend to be removed. Thus, for a truly gaussian process, outlier removal would lead to fewer large errors than would otherwise be expected. The second effect is mixing. The error sources are rarely stationary. Thus, some of the time the error might be gaussian with a certain mean and sigma and at other times have a different distribution. Such mixing may result from a change in the nominal conditions or from the introduction of a fault mode. Mixing generally leads to broader tails, or large errors being more likely than otherwise expected.

The mixing causes additional problems. If the error processes were stationary, it would be possible to collect as large a data set as practical and then conservatively extrapolate the tail behavior using a gaussian or other model. However, because the distribution changes over time, it is more difficult to predict future performance based on past behavior. Furthermore, mixing leads to more complicated distributions whose tails are more difficult to extrapolate. With enough mixing, it can be very difficult to characterize the underlying distributions at all. Figure 1 is an example of a mixed distribution. The majority of the data points are selected from a zero-mean gaussian with unity variance. A few of the points are selected from a zero-mean gaussian with a variance of four. This plot contains some very typical features of the real data we collect. The majority of the data conforms very well to a gaussian model, while the tails usually contain outliers. Sampling issues are usually significant, as it is very difficult to obtain large amounts of independent data. Thus, just by looking at the graph it is difficult to determine the actual distribution.
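The mixture described for Figure 1 is easy to simulate, and the simulation exhibits the key feature: the tail mass far exceeds what the core gaussian alone would predict. The 3-sigma tail threshold is an arbitrary illustration point.

```python
import numpy as np

rng = np.random.default_rng(3)

# Same recipe as Figure 1: 2900 samples with unity variance plus
# 100 contaminating samples with a variance of four (sigma = 2).
bulk = rng.normal(0.0, 1.0, 2900)
outliers = rng.normal(0.0, 2.0, 100)
mixed = np.concatenate([bulk, outliers])

# Fraction of samples beyond 3, versus the pure-gaussian expectation.
tail_fraction = np.mean(np.abs(mixed) > 3.0)
pure_gaussian_tail = 0.0027          # 2 * Phi(-3) for a unit gaussian
```

Even though only about 3% of the points come from the broader population, they dominate the tail, which is why fitting a single gaussian to the bulk of the data badly underestimates the probability of large errors.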

The central limit theorem causes error distributions to approach gaussian as several independent sources are combined. Certainly, the main body of collected data tends to be gaussian in appearance. The tails are more difficult to discern. A generalized mixed gaussian description is appropriate. Here, the errors can be described as gaussian where the mean and variance are also drawn from some joint probability distribution:

e ∈ N(m, σ)

m, σ ∈ pr(M, Σ)

At any given instant, the error is gaussian, but its mean and variance have some uncertainty. By understanding the extent of the possible means and variances we can overbound the worst case. Additional information ideally allows us to partition the space and distinguish when larger bounds are needed versus when smaller ones can be provided.
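One way to make that overbound concrete: if the possible means and sigmas are confined to known ranges, a single zero-mean gaussian with an inflated sigma can dominate the tail probability of every member of the family. The inflation rule sigma_ob = sigma_max + m_max below is a simple conservative illustration, not the algorithm WAAS actually uses.

```python
import numpy as np
from math import erf, sqrt

def tail_prob(threshold, mean, sigma):
    """P(|X| > threshold) for X ~ N(mean, sigma^2), via the error function."""
    z1 = (threshold - mean) / (sigma * sqrt(2.0))
    z2 = (threshold + mean) / (sigma * sqrt(2.0))
    return 0.5 * (2.0 - erf(z1) - erf(z2))

m_max, s_max = 0.5, 1.5
sigma_ob = s_max + m_max          # crude inflation: absorb the bias range
threshold = 5.0

# Worst tail probability over the whole (mean, sigma) family...
worst = max(tail_prob(threshold, m, s)
            for m in np.linspace(-m_max, m_max, 11)
            for s in np.linspace(0.5, s_max, 11))
# ...versus the single inflated overbound.
bound = tail_prob(threshold, 0.0, sigma_ob)
```

The price of the single overbound is conservatism: the bound's tail probability is much larger than the worst family member's, which costs availability. Partitioning the space, as the text suggests, is what lets smaller bounds be broadcast when conditions permit.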

Nominally, we expect the distribution to be zero-mean and have some well-defined variance. Some small fraction of the time the error may still be zero-mean, but have a larger variance as depicted in Figure 1. During a fault mode the mean may grow in magnitude, but the variance may stay roughly the same as nominal (of course other variances are possible). Restricting the error distribution to this class of distributions allows the analysis to become tractable.

Figure 1. Simulated probability distribution composed of a mixed gaussian: 2900 points with unity variance and 100 points with a variance of four. The top and bottom graphs are the same data displayed on a linear scale (top) and log-scale (bottom).

Of course, it is impossible to truly know the real distribution, particularly to 10^-7 confidence. The use of a model like this must be accepted by a body of experts, such as the WIPP, who can assert that it is valid based on physical knowledge of the system design, supporting data, and simulation. This combination is essential for describing the tail behavior. A physical understanding of the error process is essential to describing expected behavior. Data must be collected in sufficient quantity and under many conditions. The physical knowledge must be exploited to determine what the worst-case conditions are and how data should be reduced. For example, severe ionospheric behavior is correlated with solar events and magnetic disturbances. Data must be collected during some of the most extreme operating conditions. Finally, simulation may be used to confirm that the models constructed are consistent with the observations.

Physical knowledge of the system is essential. Any information on the physical processes behind the error source can be used to separate mixtures and create better-defined distributions. For example, multipath can be related to the surrounding environment. Large reflections tend to occur at lower elevation angles. Partitioning data by elevation angles may reduce mixing. Changes to multipath can be related to changes in satellite position and to changes in the environment. Excessive multipath can sometimes be related to specific reflectors. Additionally, the magnitude of multipath errors can be bounded by limiting the number of reflectors and the strength of the reflected signals.
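The elevation partition can be sketched as follows; the elevation dependency of the simulated multipath (larger errors at low elevation) is itself an assumption of the example, standing in for real receiver data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated multipath residuals whose sigma grows at low elevation,
# mimicking stronger ground reflections (illustrative model only).
elevation = rng.uniform(5.0, 90.0, 20_000)                  # degrees
sigma = 0.2 + 0.1 / np.sin(np.radians(elevation))           # meters
multipath = rng.normal(0.0, sigma)

# Partition by elevation angle to separate the mixture.
low = multipath[elevation < 20.0]
high = multipath[elevation >= 60.0]
```

Pooling all elevations together would produce one heavy-tailed mixture; binning recovers two much cleaner populations, each of which can be bounded with an appropriately sized sigma.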

Data is also essential. The data must be sufficient to support assumptions or validate system performance to the degree to which the safety of the system relies on that data. It is not sufficient to collect a day or two of randomly selected data; many days must be collected under extreme conditions. Examples include tropospheric data from many different climates, ionospheric data from the worst times in the 11-year solar cycle, multipath data from the most cluttered environments, etc. Rare events are unlikely to be captured in small data sets. Large data sets taken over long time periods are more likely to capture postulated events. Having data containing these events provides better insight into their effect.

THREAT MODELS

Threat models describe the anticipated events that the system must protect the user against and the conditions during which it must provide reliably safe confidence bounds. The threat model must describe the specific nature of the threat, its magnitude, and its likelihood. Together, the various threat models must be comprehensive in describing all reasonable conditions under which the system might have difficulty protecting the user. Ultimately they form a major part of the basis for determining if the system design meets its integrity requirement. Each individual threat must be fully mitigated to within its allocation. Only when it can be shown that each threat has been sufficiently addressed can the system be deemed safe.

WAAS is being developed primarily to address existing threats to GPS. However, it runs the risk of introducing threats in the absence of any GPS fault. By necessity, it is a complex system of hardware and software. Included in any threat model must be self-induced errors. Some of these errors are universal to any design while others are specific to the implementation. For example, the software design assurance of WAAS reference receivers was based on market availability of equipment, so reference receiver software faults were a unique threat that had to be mitigated through system integrity monitoring. The following is a high-level list of generic threats. While it is not comprehensive, it does include the most significant categories, either for magnitude of effect or likelihood. There are numerous other threats that have a smaller effect, are less likely, or are implementation specific.

High-Level Threat List

• Satellite
  o Clock/ephemeris error
  o Signal deformation
  o Code-carrier incoherency
• Ionosphere
  o Local non-planar behavior
    - Well-sampled
    - Undersampled
• Troposphere
• Receiver
  o Multipath
  o Thermal noise
  o Antenna bias
  o Survey errors
  o Receiver errors
• Master station
  o SV clock/ephemeris estimate errors
  o Ionospheric estimation errors
  o SV Tgd estimate errors
  o Receiver IFB estimate errors
  o WRS clock estimate errors
  o Communication errors
  o Broadcast errors
• User errors

The following sections provide greater detail for each threat, although the true details depend on implementation and must be decided by the service provider.

SV Clock/Ephemeris Estimate Errors

Satellites suffer from nominal ephemeris and clock errors when there are no faults in the GPS system [3]. Additionally, the broadcast GPS clock and ephemeris information may contain significant errors in the event of a GPS system fault or erroneous upload. Such faults may create jumps, ramps, or higher order errors in the GPS clock, ephemeris, or both [4][5]. Such faults may be created by changes in state of the satellite orbit or clock, or simply due to the broadcasting of erroneous information. Either the user or the system may also experience incorrectly decoded ephemeris information.

The UDRE must be sufficient to overbound the residual errors in the corrected satellite clock and ephemeris.
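A generic way to check that a claimed sigma overbounds a set of residuals is a CDF-domain comparison: at every threshold, the claimed gaussian must assign at least as much tail probability as the empirical data. The residuals and the candidate sigma below are simulated assumptions; the check itself is the standard overbounding idea, not the specific WAAS UDRE monitor.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)

# Stand-in for observed clock/ephemeris residuals (simulated, unit sigma).
residuals = rng.normal(0.0, 1.0, 100_000)
sigma_udre = 1.5                 # candidate overbounding sigma (assumed)

def gaussian_tail(t, sigma):
    """P(|N(0, sigma^2)| > t)."""
    return 1.0 - erf(t / (sigma * sqrt(2.0)))

# The claimed tail must dominate the empirical tail at every threshold.
thresholds = np.linspace(0.5, 4.0, 8)
overbounds = all(np.mean(np.abs(residuals) > t) <= gaussian_tail(t, sigma_udre)
                 for t in thresholds)
```

In practice the finite data set says little about the 10^-7 tail directly; the empirical check validates the core and mid-tail, while the deep tail must rest on the threat model and physical arguments described earlier.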

Signal Deformations

ICAO has adopted a threat model to describe the possible signal distortions that may occur on the GPS L1 C/A code [1]. These distortions will lead to biases that depend upon the correlator spacing and bandwidth of the observing receivers. Such biases would be transparent to a network of identically configured receivers [6].

The UDRE must be sufficient to overbound unobservable errors caused by signal deformation. Unobservable errors are those that cannot be detected to the required level of integrity.

Code-Carrier Incoherency

A postulated threat is that a satellite may fail to maintain the coherency between the broadcast code and carrier. This fault mode occurs on the satellite and is unrelated to incoherence caused by the ionosphere. This threat causes either a step or a rate of change between the code and carrier broadcast from the satellite. This threat has never been observed, but nevertheless must be protected against as a postulated satellite failure.

The UDRE must be sufficient to overbound unobservable errors caused by incoherency. Unobservable errors are those that cannot be detected to the required level of integrity.

Ionosphere and Ionospheric Estimation Errors

The majority of the time, the mid-latitude ionosphere is easily estimated and bounded using a simple local planar fit. However, periods of disturbance occasionally occur where simple confidence bounds fall significantly short of bounding the true error [7]. Additionally, in other regions of the world, in particular equatorial regions, the ionosphere often cannot be adequately described by this simple model [8]. Some of these disturbances can occur over very short baselines, causing them to be difficult to describe even with higher order models. Gradients larger than three meters of vertical delay over a ten-kilometer baseline have been observed, even at mid-latitude [9]. These worst-case gradients are a threat to both SBAS and GBAS.
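A minimal sketch of such a planar fit, with invented Ionospheric Pierce Point (IPP) coordinates and vertical delays: the plane parameters come from least squares, and the fit residuals indicate whether the planar assumption holds locally.

```python
import numpy as np

# Hypothetical IPP positions (km, local east/north) and vertical delays (m).
ipp_east = np.array([-300.0, 200.0, 100.0, -150.0, 250.0])
ipp_north = np.array([100.0, -250.0, 300.0, -100.0, 150.0])
delay = np.array([3.1, 2.7, 3.4, 2.9, 3.0])

# Planar model: delay = offset + slope_e * east + slope_n * north.
a = np.column_stack([np.ones_like(ipp_east), ipp_east, ipp_north])
coef, *_ = np.linalg.lstsq(a, delay, rcond=None)
residuals = delay - a @ coef
```

During disturbed periods the residuals of such a fit blow up, which is exactly the signal that the simple planar model (and any confidence bound derived from it) can no longer be trusted.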

The broadcast ionospheric grid format specified in the MOPS may also limit accuracy and integrity. The simple two-dimensional model and assumed obliquity factor may not always provide an accurate conversion between slant and vertical ionosphere. There will also be instances where the five-degree grid is too coarse to adequately describe the surrounding ionosphere.

There are times and locations where the ionosphere is very difficult to model. This problem may be compounded by poor observability [10]. Ionospheric Pierce Point (IPP) placement may be such that it fails to sample important ionospheric structures. This may result from the intrinsic layout of the reference stations and satellites, or from data loss through station, satellite, or communication outages. As a result, certain ionospheric features that invalidate the assumed model can escape detection.

Finally, because the ionosphere is not a static medium, there may be large temporal gradients in addition to spatial gradients. Rates of change as large as four meters of vertical delay per minute have been observed at mid-latitudes [9].

The GIVE must account for inadequacies of the assumed ionospheric model, restrictions of the grid, and limitations of observability. The GIVE must be sufficient to protect against the worst possible ionospheric disturbance that may be present in that region given the IPP distribution. Additionally, since each ionospheric correction does not time out until after ten minutes, the GIVE and the Old But Active Data (OBAD) terms [11] must protect against any changes in the ionosphere that can occur over that time scale. Because the physics of the ionosphere is incompletely understood, the most practical ionospheric threat models are heavily data driven and contain a large amount of conservatism.

Tropospheric Errors

Tropospheric errors are typically small compared to ionospheric errors or satellite faults. Historical observations were used to formulate a model and analyze deviations from that model [12]. A very conservative bound was applied to the distribution of those deviations. The model and bound are described in the MOPS and SARPS [11][1]. These errors may affect the user both directly, through their local troposphere, and indirectly, through errors at the reference stations that may propagate into the satellite clock and ephemeris estimates. The user protects against the direct effect using specified formulas.

The master station must ensure that the UDRE adequately protects against the propagated tropospheric errors and their effect on satellite clock and ephemeris estimates. Of particular concern are the statistical properties of these error sources. These errors may be correlated for long periods, and will produce correlated errors across all satellites at a reference station and across all receivers at that reference station.

Multipath and Thermal Noise

Multipath is the most significant measurement error source. It limits the ability to estimate the satellite and ionospheric errors. It depends upon the environment surrounding the antenna and upon the satellite trajectories. While many receiver tracking techniques can limit its magnitude, its period can be tens of minutes or greater [13]. Additionally, it contains a periodic component that repeats over a sidereal day. Thus, severe multipath may be seen repeatedly for several days or longer.

Since all measurements that form the corrections and the UDREs and GIVEs are affected by multipath, great care must be used to bound not only its maximum extent but its other statistical characteristics as well (non-gaussian, non-white, periodic, etc.). There is potential for correlation between measurements and between antennas at a single reference site. Additionally, the local environment may change, either due to meteorological conditions (snow, rain, ice) or physical changes (new objects or structures placed nearby).

If carrier smoothing is used to mitigate multipath, then robust cycle slip detection is essential. Half-integer cycle slips have been observed on different brands of receivers. In one case, several half cycle slips were observed in the same direction, each several minutes apart, resulting in a several-meter error. Cycle slip detection must be able to reliably catch unfortunate combinations of L1 and L2 half and full integer cycle slips in order to achieve an unbiased result.
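Carrier smoothing of the pseudorange is commonly implemented as a Hatch filter. The sketch below shows the basic recursion with a deliberately crude code-minus-prediction slip check; the window length, threshold, and reset logic are illustrative assumptions, not the WAAS design.

```python
def hatch_filter(code, carrier, window=100, slip_threshold=5.0):
    """Carrier-smoothed pseudorange (Hatch filter), all units in meters.
    On a suspected cycle slip the filter resets to the raw code measurement."""
    smoothed = [code[0]]
    n = 1
    for k in range(1, len(code)):
        # Propagate the previous smoothed range with the carrier-phase delta
        predicted = smoothed[-1] + (carrier[k] - carrier[k - 1])
        # Crude slip check: a large code-minus-prediction jump forces a reset
        if abs(code[k] - predicted) > slip_threshold:
            smoothed.append(code[k])
            n = 1
            continue
        n = min(n + 1, window)
        smoothed.append(code[k] / n + (n - 1) / n * predicted)
    return smoothed
```

Note that an undetected half-cycle slip (roughly 10 cm at L1) falls far below any such code-level threshold, which is why dedicated carrier-domain detection is needed to catch the slip combinations described above.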

Antenna Bias

Look-angle dependent biases in the code phase on both L1 and L2 are present on GPS antennas. These biases may be several tens of centimeters. In the case of at least one antenna, they did not become smaller at higher elevation angles. These biases are observable in an anechoic chamber, but extremely difficult to observe in operation. They may result from intrinsic antenna design as well as from manufacturing variation.

While the particular orientation of each antenna, and hence its bias, is random, it is also static. Therefore, there may exist some points in the service volume where the biases consistently add together coherently. These locations will experience this effect day after day. To protect these regions, the biases should be treated pessimistically, as though they are all nearly worst-case and coherent. Calibration may be applied, although individual variation, the difficulty of maintaining proper orientation, and the possibility of temporal changes hamper its practicality.

Survey Errors

Errors in the surveyed coordinates of the antenna code phase center can affect users in the same manner as antenna biases. However, survey errors tend to be much smaller in magnitude and cancel between L1 and L2.

These errors can typically be lumped in with the antenna bias protection terms and mitigated in the same manner.

Receiver Errors

The receivers themselves can introduce errors through false lock or other mechanisms, including hardware failure (GPS receiver, antenna, atomic frequency standard). These events have been observed to be rare and uncorrelated.

These may be mitigated through the use of redundant and independent receivers, antennas, and clocks at the same reference station. However, the UDRE and GIVE must still protect against small errors that may exist up to the size of the detection threshold.

Tgd and IFB Estimation Errors

For internal use, the correction algorithms often need to know the hardware differential delay between the L1 and L2 frequencies. These are referred to as Tgd for the bias on the satellite and IFB for the InterFrequency Bias in the reference station receivers. These values are typically estimated in tandem with the ionospheric delay estimation [14]. Although these values are nominally constant, there are some conditions under which they may change their value. One mechanism is component switching: a new receiver or antenna may replace an old one, or different components or paths may be made active on a satellite. Another is thermal variation, either at the reference station or on the satellite as it goes through its eclipse season. Finally, component aging may also induce a slow variation.

The estimation process may have difficulty in distinguishing changes in these values from changes in the ionosphere. The steady-state bias value and step changes may be readily observable, but slow changes comparable to the ionosphere may be particularly difficult to distinguish. Ionospheric disturbances that do not follow the assumed model of the ionosphere may also corrupt the bias estimates. The UDREs and GIVEs must bound the uncertainty that may result from such estimation errors.

Receiver Clock Estimate Errors

Similarly, the satellite correction algorithm must estimate and remove the time offsets between the reference station receivers. These differences are nominally linear over long times for atomic frequency standards. However, component replacement or failure may invalidate that model.

Nominally, these differences are easily separated; however, reference station clock failures and/or satellite ephemeris errors may make this task more difficult. The UDRE must protect against errors that may propagate into the satellite clock and ephemeris corrections due to these errors. Particular attention must be paid to correlations that may result from this type of misestimation.

RANGE DOMAIN VS. POSITION DOMAIN

The HMI requirement is specified in the position domain, yet WAAS broadcasts values in the range/correction domain. The users combine the corrections and confidences with their geometry to form the position solution and protection level. Exactly which corrections and satellites are used is known only to the user. Therefore, how the position error depends on the residual errors is known only to the users. WAAS cannot monitor solely in the position domain and fully protect its users. A combination of position domain and range/correction domain monitoring is most efficient.

To see this effect, we can look at a specific user geometry. This example was created using Stanford's Matlab Algorithm Availability Simulation Tool (MAAST) [15], which can be used to simulate WAAS performance. The user has eight satellites in view, as shown in Table 1. Figure 2 shows the elevations and azimuths of the satellites along with their PRN values. Table 1 also shows the PRN, elevation, azimuth, and one-sigma confidence bound (σi). In addition, the fifth column shows the sensitivity of the vertical error to a pseudorange error on that satellite, S3i. S is the projection matrix, defined as S = (G^T W G)^-1 G^T W, where G is the geometry matrix and W is the weighting matrix; see Appendix J of [11]. This term multiplies the error on the pseudorange to determine its contribution to the vertical error. Thus, a 1 m ranging error on PRN 2 would create a positive 59.5 cm vertical error for the user with this combination of satellites and weights. The final column in Table 1 shows the projection matrix values if PRN 8, a low elevation satellite, is not included in the position solution.

PRN    EL       AZ        σi       S3i      S3i (without PRN 8)
 2    45.8°    -32.3°    2.34 m    0.595    0.451
 5    11.2°    -76.8°   10.1  m    0.258    0.437
 6    36.6°     48.4°    2.32 m    0.162    2.005
 8     9.98°    73.0°    3.74 m    1.000      -
 9    61.4°     28.5°    2.03 m   -1.928   -3.087
15    32.8°    151.0°    6.89 m   -0.015    0.174
21    42.3°   -136.0°    4.83 m    0.066   -0.003
22    40.6°    120.1°    6.19 m   -0.139    0.022

Table 1. Satellite elevation and azimuth angles, confidence bounds, and projection matrix values, both for the all-in-view solution and without PRN 8.

Figure 2. Satellite elevation and azimuth values on a standard skyplot. PRN 8 is a low elevation satellite that, if not included in the solution, dramatically changes the influence of PRN 6.
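The projection matrix can be formed directly from its definition. The sketch below builds S = (G^T W G)^-1 G^T W for a made-up geometry; the elevations, azimuths, and sigmas are illustrative, not those of Table 1. Row three of S gives the vertical sensitivities S3i.

```python
import math
import numpy as np

def projection_matrix(G, W):
    """Weighted least-squares projection matrix S = (G^T W G)^-1 G^T W."""
    return np.linalg.inv(G.T @ W @ G) @ G.T @ W

# Illustrative geometry: 6 satellites; states are (east, north, up, clock)
el_az = [(50, 30), (20, 120), (70, 200), (35, 300), (15, 80), (60, 250)]
G = np.array([[-math.cos(math.radians(el)) * math.sin(math.radians(az)),
               -math.cos(math.radians(el)) * math.cos(math.radians(az)),
               -math.sin(math.radians(el)),
               1.0] for el, az in el_az])
W = np.diag([1 / 2.0 ** 2] * 6)  # equal 2 m one-sigma weights for simplicity
S = projection_matrix(G, W)
s3 = S[2]  # vertical sensitivity to an error on each pseudorange
```

By construction S·G is the 4x4 identity: each state estimate responds one-for-one to itself and not at all to the other states, which is why dropping a satellite redistributes the S3i values the way Table 1 shows.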

With the all-in-view solution, the user has a VPL of 33.3 m (HPL = 20.4 m). When PRN 8 is dropped, the VPL increases to 48.6 m (HPL = 20.5 m). Both values are below the 50 m Vertical Alert Limit (VAL) for LPV [16]. Either solution could be used for vertical guidance. Notice that the vertical error dependency changes dramatically with the loss of PRN 8. In particular, PRN 6, which had little influence over the all-in-view solution, now has a very strong impact on this subset solution. Also notice that the other values change as well. PRNs 2, 21, and 22 lose influence while PRNs 5, 6, 9, and 15 become more important. More surprisingly, the influences of PRNs 15, 21, and 22 change sign; therefore, what was a positive error for the all-in-view solution becomes a negative error for this particular subset.
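These protection levels can be reproduced from Table 1. Per the MOPS [11], VPL = K_V * d_V with d_V^2 equal to the sum of S3i^2 * σi^2 over the satellites used, and K_V = 5.33. The sketch below recomputes both VPLs and the vertical effect of a hypothetical 25 m bias on PRN 6.

```python
import math

# One-sigma confidences (m) and vertical coefficients S3i from Table 1
sigma = {2: 2.34, 5: 10.1, 6: 2.32, 8: 3.74, 9: 2.03, 15: 6.89, 21: 4.83, 22: 6.19}
s3_all = {2: 0.595, 5: 0.258, 6: 0.162, 8: 1.000, 9: -1.928,
          15: -0.015, 21: 0.066, 22: -0.139}
s3_no8 = {2: 0.451, 5: 0.437, 6: 2.005, 9: -3.087,
          15: 0.174, 21: -0.003, 22: 0.022}

K_V = 5.33  # vertical protection-level multiplier from the MOPS

def vpl(s3):
    d_v = math.sqrt(sum(s3[prn] ** 2 * sigma[prn] ** 2 for prn in s3))
    return K_V * d_v

vpl_all = vpl(s3_all)        # ~33.3 m, all-in-view
vpl_no8 = vpl(s3_no8)        # ~48.6 m, without PRN 8
bias_all = 25.0 * s3_all[6]  # ~4.1 m vertical error, masked in all-in-view
bias_no8 = 25.0 * s3_no8[6]  # ~50.1 m vertical error in the subset
```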

The changes in the S3i values with subset or superset position solutions limit the ability to verify performance exclusively in the position domain. For example, if PRN 6 had a 25 m bias on its pseudorange, it would lead to a vertical error of greater than 50 m with PRN 8 missing, but just over 4 m for the all-in-view solution. A position domain check with all satellites would not be concerned with a 4 m bias compared to a 33.3 m VPL. Thus, one would be inclined to think that all was well. However, the user unfortunate enough to lose PRN 8 would suffer a 50 m bias, large enough to cause harm. A 25 m bias would be more than a ten-sigma error in the range domain and thus would be easily detectable. Therefore, it is the combination of range and position domain checks that protects users with different combinations of satellites. It may be possible to work exclusively in the position domain by using subset solutions; however, that approach is numerically much more intensive and significantly more complex for a wide area system that must consider users throughout the service volume.

There is nothing unique about this particular geometry. To investigate how position errors can hide for one combination of satellites and be exposed for another, we set MAAST to look for geometries that had very different S3i values in their subset solutions. We restricted the search to geometries that had VPLs below 40 m for all-in-view, and then only investigated subsets with VPLs below 50 m. Of the 3726 geometries investigated, only two did not change S3i values by more than 40%.

To better illustrate the effect, the remaining 3724 geometries had biases placed on the satellite with the largest change. Each bias was chosen such that it would lead to a 50 m positioning bias in the subset solution (a 25 m bias on PRN 6 in the example above). Each pseudorange was also assigned a zero-mean gaussian error with a standard deviation of one half of its one-sigma confidence bound (column four of Table 1). The broadcast WAAS confidence bounds are approximately three times larger than the nominal no-fault values (this inflation is necessary to protect against fault modes). We then calculated position errors and VPLs for both the all-in-view and subset solutions. The results are plotted in standard triangle charts, Figures 3 and 4.

Figure 3. The triangle plot for all-in-view solutions including one biased satellite in each is shown. Here each bias is deweighted by the other satellites. No obvious problems are evident in this chart.

Figure 4. The triangle plot for the subset solutions that expose each biased satellite is shown. Here the biases are exposed as being hazardous for the user. This demonstrates the importance of checking each subset or in the range domain.

Figure 3 is similar in appearance to a nominal triangle chart, except the VPLs are clipped at 40 m due to our selection process and the position errors are worse than normal due to the injected error on the single satellite. However, the position errors are all below the VPL and the aggregate is not obviously biased. An observer might be inclined to declare that the system is functioning safely based on this chart. However, Figure 4 shows that with the same errors and biases, but a slightly different geometry, this is not true. The subset solution removes the satellites that were masking the bias in each case. The result is an obviously faulted triangle chart. Thus, a triangle chart without obvious faults, Figure 3, is no guarantee of a safe system, as evidenced by Figure 4.

This simulation was pessimistic in its construction, since the minimum unacceptable error was placed on the most sensitive satellite. On the other hand, the geometries were chosen at random and do not have any unique subset characteristics. We were surprised that we could create such radically different triangle charts from the same pseudorange errors and one biased satellite simply by looking at subsets. We were aware that we could corrupt two or more pseudoranges to create arbitrarily large errors in the subset solution and zero error for the all-in-view solution, but had not realized how well single errors could be hidden. The lesson is that it is not sufficient to observe a particular set of position solutions. The most effective method is to combine position domain monitoring with range domain monitoring.

SMALL NUMBERS AND INTUITION

The integrity requirement of 10^-7 is an incredibly small number. In fact, it has to be: there have been more than 2.5×10^7 landings in the US each year for the last 10 years [17]. Granted, only a small fraction of these are instrument landings in poor visibility; however, a larger value could have a noticeable effect on the overall accident rate. Furthermore, air traffic is expected to increase over the coming years. To reduce the total number of accidents while increasing the number of flights requires lowering the risk per operation. Satisfying and exceeding the WAAS integrity requirement is part of that overall strategy.

It is hard to imagine the exceedingly small probability of one part in ten million. By design, no individual will sample anything approaching that number of approaches. At most, an individual will sample on the order of tens of thousands of approaches, typically far fewer. Additionally, that individual will mostly experience nominal conditions, and only rarely the unusual events, such as ionospheric disturbances, where WAAS still has to meet 10^-7. Thus, personal experience is only sensitive to 10^-4 at best. Stand-alone GPS already has this level of integrity, so the design issues for WAAS can be counter-intuitive. It is because so many flight operations take place under such a variety of conditions that WAAS needs to extend integrity to 10^-7. The greater populace samples the system every year in a more thorough way than any individual will in a lifetime.

WAAS is specifically in place to protect against rare events, events that one will infrequently encounter. As a result, the situations that WAAS is designed to protect against run counter to our intuition. It is tempting to say that an event such as the signal deformation that occurred on SVN 19 back in 1993 [18] is sufficiently unlikely to occur again that we do not need to worry about it. However, it did occur, and there is no basis to assume it will not occur again. We can estimate the likelihood of reoccurrence from the single observed event. The Block II satellites so far have a cumulative lifetime of roughly 280 years (through January 2003). One event over 280 satellite-years is a likelihood of occurrence of approximately 4×10^-7 per satellite per hour. This is an exceedingly small number and one that many people may easily dismiss. However, a user has on average eight satellites in their solution, for a combined probability of experiencing this effect of 3.2×10^-6 per hour. This is more than an order of magnitude over the entire integrity budget. In addition, if there were no capability to detect and mitigate the condition should it occur, then all subsequent users would be exposed to unacceptably high risk. Meanwhile, the system allocation to this effect is much less than 1% of the total, or below 10^-9 per hour. Thus, it is not possible to dismiss this event out of hand. In addition, this calculated value of reoccurrence is too small. It is possible that other events have escaped attention or that we have been fortunate to date. In fact, a recent paper indicates that some form of signal deformation may be present on other satellites [19].
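The arithmetic is easy to verify (the hours-per-year constant is our assumption for an average calendar year):

```python
# One observed event over ~280 cumulative Block II satellite-years
HOURS_PER_YEAR = 8766  # average calendar year, including leap years
rate_per_sat_hour = 1 / (280 * HOURS_PER_YEAR)  # ~4e-7 per satellite per hour
rate_per_user_hour = 8 * rate_per_sat_hour      # ~3.3e-6 with 8 satellites in view
integrity_budget = 1e-7                         # total integrity requirement
ratio = rate_per_user_hour / integrity_budget   # more than an order of magnitude
```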

Other events with far greater probability are sometimes dismissed because they are perceived to be remote. The possibility that eight satellite biases all line up with the same sign may seem extremely unlikely; however, if either sign is equally likely and the biases are all uncorrelated, the probability is twice (either all positive or all negative) 2^-8, or one in 128. This is roughly 1%: rare, but not nearly improbable enough to dismiss. Additionally, the likelihood that only the most significant biases line up is even greater. Thus, one must work to reduce the likelihood of biases, or ensure they are all positively correlated, as random biases have a non-negligible chance of aligning.
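A quick check of these odds (the four-satellite case is an added illustration, not a number from the text):

```python
# All eight independent, equally likely bias signs aligning: 2 * (1/2)^8
p_all_eight = 2 * 0.5 ** 8  # 1/128, roughly 0.8%
# If only, say, the four most influential biases need to align, the odds rise
p_top_four = 2 * 0.5 ** 4   # 1/8
```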

By necessity, WAAS must work with very small numbers: probabilities of 10^-7 and below. These probabilities are outside of personal experience and intuition. Events that seem unlikely must have an upper bound calculated for them; they should not simply be dismissed out of hand. Unless one does the calculation, one may not be able to distinguish between probabilities of 10^-5 and 10^-7.

CONCLUSIONS

Augmentation systems for aviation are very different from conventional differential GPS. They are supplementing, and will ultimately replace, proven navigational aids whose safety has been demonstrated over many years of operational experience. Consequently, their safety must be proven before they are put into service. Over the course of documenting the proof of safety, the WIPP learned many important lessons. Chief among these was the use of threat models. Threat models define our fault modes, how they manifest themselves, and how likely they are. They describe what we must protect against. A well-defined threat model permits a quantitative assessment of the mitigation strategy. A quantitative assessment, as opposed to a qualitative one, is essential to establishing 10^-7 integrity.

Another key lesson is that applying the 10^-7 integrity requirement to each approach changed how we viewed certain error sources. Rather than averaging over conditions with different risk levels, we must overbound the conditions describing the worst allowable situation. A priori probabilities may be used only for events that are infrequent, unpredictable, and unobservable. For example, ionospheric storms occur with certainty; therefore, the system must provide at least 10^-7 integrity while ionospheric disturbances are present. However, the onset time, exactly when the mid-latitude ionosphere will transition from a quiet period to a disturbed state, is both rare and random. Thus, we may apply an a priori probability to that brief period of time when the ionosphere may be disturbed but we have not yet detected it. This lesson affects how we view all of our a priori failure rates and probability distributions.

ACKNOWLEDGMENTS

The work for this paper was supported by the FAA Satellite Product Team under research grant 95-G-005. The authors gratefully acknowledge the contributions from the other WIPP members.

REFERENCES

[1] ICAO Standards and Recommended Practices (SARPS), Annex 10.

[2] FAA-E-2892C WAAS Specification

[3] Jefferson, D. C. and Bar-Sever, Y. E., "Accuracy and Consistency of GPS Broadcast Ephemeris Data," in Proceedings of ION GPS-2000, Salt Lake City, UT, September 2000.

[4] Hansen, A., Walter, T., Lawrence, D., and Enge, P., "GPS Satellite Clock Event of SV#27 and Its Impact on Augmented Navigation Systems," in Proceedings of ION GPS-98, Nashville, TN, September 1998.

[5] Shank, C. M. and Lavrakas, J., "GPS Integrity: An MCS Perspective," in Proceedings of ION GPS-1993, Salt Lake City, UT, September 1993.

[6] Phelts, R. E., "Multicorrelator Techniques for Robust Mitigation of Threats to GPS Signal Quality," Stanford University Thesis, June 2001. http://waas.stanford.edu/~wwu/papers/gps/PDF/ericthesis.pdf

[7] Walter, T., et al., "Robust Detection of Ionospheric Irregularities," in NAVIGATION, Journal of The Institute of Navigation, vol. 48, no. 2, Summer 2001.

[8] Klobuchar, J., et al., "Ionospheric Issues for a SBAS in the Equatorial Region," in Proceedings of the 10th International Ionospheric Effects Symposium, Alexandria, VA, May 2002.

[9] Datta-Barua, S., Walter, T., Pullen, S., Luo, M., and Enge, P., "Using WAAS Ionospheric Data to Estimate LAAS Short Baseline Gradients," in Proceedings of ION NTM, San Diego, CA, January 2002.

[10] Sparks, L., et al., "The WAAS Ionospheric Threat Model," in Proceedings of the International Beacon Satellite Symposium, Boston, June 2001.


[11] WAAS Minimum Operational Performance Standards (MOPS), RTCA document DO-229C.

[12] Collins, J. P. and Langley, R. B., "The residual tropospheric propagation delay: How bad can it get?," in Proceedings of ION GPS-98, Nashville, TN, September 1998.

[13] Shallberg, K., Shloss, P., Altshuler, E., and Tahmazyan, L., "WAAS Measurement Processing, Reducing the Effects of Multipath," in Proceedings of ION GPS-2001, Salt Lake City, UT, September 2001.

[14] Hansen, A., "Tomographic Estimation of the Ionosphere Using Terrestrial GPS Sensors," Stanford University Thesis, March 2002.

[15] Jan, S., Chan, W., Walter, T., and Enge, P., "Matlab Simulation Toolset for SBAS Availability Analysis," in Proceedings of ION GPS-2001, Salt Lake City, UT, September 2001.

[16] Cabler, H. and DeCleene, B., "LPV: New, Improved WAAS Instrument Approach," in Proceedings of ION GPS-2002, Portland, OR, September 2002.

[17] Statistics gathered from the NTSB web page, http://www.ntsb.gov/aviation; includes Parts 121 and 135, general aviation, and military.

[18] Enge, P., Phelts, E., and Mitelman, A., "Detecting Anomalous Signals from GPS Satellites," Global Navigation Satellite System Panel meeting, Toulouse, October 18-29, 1999, working paper 19.

[19] Brenner, M., Kline, P., and Reuter, R., "Performance of a Prototype Local Area Augmentation System (LAAS) Ground Installation," in Proceedings of ION GPS-2002, Portland, OR, September 2002.