KALMAN FILTER RELIABILITY ANALYSIS USING DIFFERENT UPDATE STRATEGIES

M.G. Petovello, M.E. Cannon and G. Lachapelle
Department of Geomatics Engineering
University of Calgary

BIOGRAPHIES

Mark Petovello is currently finishing his PhD in Geomatics Engineering at the University of Calgary. He obtained his BSc from the same department in 1998 and has been working in the area of satellite and inertial navigation since that time. Some of his past research includes integration of GPS and GLONASS, characterization of oscillator stability using standalone GPS, and real-time integration of GPS/INS.

Dr. M. Elizabeth Cannon is a Professor of Geomatics Engineering at the University of Calgary. She has been involved in GPS research and development since 1984, and has worked extensively on the integration of GPS and inertial navigation systems. She is a Past President of the US Institute of Navigation.

Dr. Gérard Lachapelle holds a Canada Research Chair/iCORE Chair in wireless location in the Department of Geomatics Engineering, which he joined in 1988. He has been involved with GPS developments and applications since 1980 and has made numerous contributions to that field. More information can be found on www.geomatics.ucalgary.ca/faculty/lachap/lachap.html.

ABSTRACT

As the number of uses for the Global Positioning System (GPS) continues to expand, so does the need for reliable position and velocity estimates. In this context, the need to identify and reject erroneous observations before they are allowed to corrupt state estimates is a major concern in many applications. Statistical reliability analysis is a tool whereby the ability of a particular system to identify blunders, and the effect of such blunders on the state estimates, can be assessed.

Statistical reliability analyses can be performed assuming either an epoch-to-epoch least squares estimation approach or using a Kalman filter, depending on which estimator will be used in practice. However, with the trend towards the integration of multiple navigation sensors, the latter approach is often preferred. Unfortunately, traditional reliability analysis using Kalman filters assumes all of the observations at a given epoch are processed simultaneously. While such analyses are nevertheless beneficial, they do not consider the case where uncorrelated observations at a given epoch are processed sequentially. Such an approach offers significant processing advantages that may be important for time critical applications, or applications where minimal processing power is available, such as in embedded systems.

The paper begins with a brief review of the Kalman filter equations and discusses blunder detection in that context. The traditional statistical reliability equations are then presented as a theoretical extension of blunder detection. Next, the concept of sequential processing of observations is presented and the computational benefits of such an approach are briefly discussed. The traditional reliability equations are then redeveloped accordingly. The paper concludes with some sample results from data collected during field trials to illustrate the advantages of each approach, in the context of GPS navigation. For the sequential update approach, the order in which observations are processed is varied to illustrate differences. Results show that although the sequential update approach is not as reliable as the simultaneous update, differences from the traditional approach are not often significant.

INTRODUCTION

All navigation systems and applications require the reliable estimation of the relevant system parameters. In this context, "reliable" is often defined by the application at hand. For example, handheld Global Positioning System (GPS) receivers intended for recreational use may only require hundreds of metres of reliability, whereas a navigation system intended for autonomous control of vehicles could require solutions reliable to the centimetre-level. To this end, statistical reliability quantifies a system's ability to identify and remove blunders before they contaminate the estimated parameters (Leick, 1995; Ryan, 2002). Statistical reliability can be divided into two main categories: internal reliability and external reliability. Internal reliability quantifies the magnitude of blunder that can be identified by the system. Any blunder smaller than this theoretical minimum will pass through the system undetected. External reliability quantifies the impact of such an undetected blunder on the estimated system parameters.

Statistical reliability analyses can be performed for systems using epoch-to-epoch least squares (Leick, 1995; Ryan, 2002) or systems using Kalman filters (Teunissen and Salzmann, 1989; Ryan, 2002). Given the trend towards the integration of several sensors, the latter approach is gaining popularity. To this end, several investigations have been conducted that look at the reliability of Kalman filtering, including Teunissen and Salzmann (1989), Wei et al. (1990), Lu and Lachapelle (1992) and Ryan (2002), to name a few. All of these investigations use the same basic algorithms and are therefore all subject to the same limitations. In particular, the traditional equations developed by Teunissen and Salzmann (1989) assume all observations are processed simultaneously (simultaneous processing) in a single filter. However, such an implementation is not always necessary, or even desirable. For example, systems that use statistically independent sets of observations can process these observations sequentially to significantly limit the number of numerical operations. Also, decentralized, or cascaded, systems use a series of Kalman filters to produce the final system output in an effort to reduce computational requirements. Assessing the statistical reliability of such systems requires the reformulation of the traditional reliability equations.

The primary objective of this paper is to develop the equations for computing the reliability parameters for systems that process statistically independent sets of observations sequentially. The statistical reliability of decentralized filter architectures is not addressed here (see Petovello, 2003 for details). As a secondary objective, the differences between simultaneous and sequential processing of GPS pseudorange (code), Doppler and carrier phase measurements are characterized.

The paper begins with a review of the relevant Kalman filter equations. Next, the traditional statistical reliability equations are shown along with their inherent assumptions. The reliability parameters for sequential systems are then discussed and the appropriate equations are presented. Finally, the statistical reliability of GPS measurements is assessed using both processing strategies.

METHODOLOGY

This section begins with a brief presentation of the Kalman filter equations and continues with a presentation of reliability testing. Statistical reliability is then shown to be a theoretical extension of reliability testing. The section concludes with the concept of sequential processing, with the necessary reliability equations being redeveloped accordingly.

Review of Kalman Filtering

A Kalman filter is a recursive algorithm that produces an optimal estimate of a state vector, x, based on assumed dynamics of the system and any available observations. The observations relate to the states as follows

z = Hx + w    (1)

where z is the observation vector, H is the design matrix relating the observations to the state vector, and w is a vector of white noise. The state vector is comprised of all of the states to be estimated by the system. Assuming the system dynamics are known, and using any available observations, the fundamental Kalman filtering equations can be broken down into prediction and update steps as follows (Gelb, 1974)

Prediction

x̂^-_{k+1} = Φ_{k,k+1} x̂^+_k    (2)

C^-_{x̂,k+1} = Φ_{k,k+1} C^+_{x̂,k} Φ^T_{k,k+1} + Q_k    (3)

Update

K_k = C^-_{x̂,k} H^T_k (H_k C^-_{x̂,k} H^T_k + C_{z,k})^{-1}    (4)

x̂^+_k = x̂^-_k + K_k (z_k - H_k x̂^-_k)    (5)

C^+_{x̂,k} = (I - K_k H_k) C^-_{x̂,k}    (6)

where a superscript "-" and "+" represent a quantity before and after update respectively, the subscript "k" represents a quantity at the kth epoch, and

Φ_{k,k+1} is the transition matrix for the system from epochs k to k+1, derived from the assumed dynamics of the system,
C_x̂ is the covariance matrix of the state vector,
Q is the covariance matrix of the system driving noise, which represents the uncertainty of the assumed dynamics model,
K is the Kalman gain (gain matrix), and
C_z is the covariance matrix of the observations.

From the above equations, the innovation sequence (innovations), v, is defined as the difference between the actual and predicted observations as follows

v_k = z_k - H_k x̂^-_k    (7)

In this way, the innovations represent the amount of new information contained in the observations, relative to the current state estimate. The gain matrix filters this new information in equation 5 to obtain corrections to the current state vector estimate.
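The prediction and update cycle above (equations 2 through 7) can be sketched in a few lines of NumPy. This is a minimal illustration only: the one-dimensional constant-velocity model and all numerical values (Phi, Q, H, Cz, z) are assumptions made for the example, not values from the paper.

```python
import numpy as np

# Illustrative 1D constant-velocity system (all values assumed).
Phi = np.array([[1.0, 1.0],
                [0.0, 1.0]])      # transition matrix (dt = 1 s)
Q = 0.01 * np.eye(2)              # system driving noise covariance
H = np.array([[1.0, 0.0]])        # position-only observation
Cz = np.array([[0.25]])           # observation covariance

x = np.zeros(2)                   # state estimate [position, velocity]
Cx = np.eye(2)                    # state covariance

# Prediction (equations 2 and 3)
x_pred = Phi @ x
Cx_pred = Phi @ Cx @ Phi.T + Q

# Innovation (equation 7)
z = np.array([1.2])               # a single assumed observation
v = z - H @ x_pred

# Update (equations 4 to 6); the bracketed term is the
# innovation covariance of equation 9
K = Cx_pred @ H.T @ np.linalg.inv(H @ Cx_pred @ H.T + Cz)
x_upd = x_pred + K @ v
Cx_upd = (np.eye(2) - K @ H) @ Cx_pred
```

The updated state lies between the prediction and the observation, and the updated covariance is smaller than the predicted one, as expected.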

Reliability Testing

Reliability testing aims at the actual identification of blundered observations during normal filter operation. To illustrate how erroneous observations are detected, recall the innovation sequence of equation 7. Assuming the observation and system driving noise are zero-mean white noise processes with Gaussian distributions, the innovation sequence will share the same properties (Gelb, 1974). However, consider the case when the observations are in error by an amount given by

Δz_k = M_k ∇_k    (8)

where ∇ is a vector of blunders and M is a full rank matrix mapping the blunders into the observations. In this case, the innovation sequence is also biased by the same amount. Consequently, under the null hypothesis (H0) that the observations are bias free and the alternate hypothesis (Ha) that they are biased, the innovation sequence is distributed as

v_k |_{H0} ~ N(0, C_{v_k})
v_k |_{Ha} ~ N(M_k ∇_k, C_{v_k})

where C_{v_k} is the covariance of the innovation sequence, given by

C_{v_k} = H_k C^-_{x̂,k} H^T_k + C_{z,k}    (9)

The test statistic for testing H0 against Ha is given by (Teunissen and Salzmann, 1989)

T_k = v^T_k C^{-1}_{v_k} M_k (M^T_k C^{-1}_{v_k} M_k)^{-1} M^T_k C^{-1}_{v_k} v_k    (10)

which is distributed as

T_k |_{H0} ~ χ²(d, 0)
T_k |_{Ha} ~ χ²(d, δ²_0)

where d is the number of degrees of freedom, equal to the number of assumed blunders, and δ_0 is the non-centrality parameter given by (ibid.)

δ²_0 = ∇^T_k M^T_k C^{-1}_{v_k} M_k ∇_k    (11)

Finally, if the test statistic exceeds χ²_α(d, 0), where α is the level of significance, then the null hypothesis is rejected in favour of the alternate hypothesis.

Statistical Reliability

Statistical reliability is a theoretical extension of reliability testing. It is theoretical in the sense that real observations are not required to perform a statistical reliability analysis. Instead, statistical reliability analyses can be performed using only the knowledge of the system's process noise and the expected observation accuracy and geometry. To illustrate the concept of statistical reliability, we begin by first assuming that the number of blunders (d) is limited to one for any given epoch. In this way, the matrix M reduces to a vector of the form

M_k |_{d=1} = m_i = [0 ... 0 1 0 ... 0]^T

where the element with unity value is at the ith location and thus corresponds to the ith observation. Next, the square root of the test statistic in equation 10 is taken and thus is distributed as (recalling that d = 1)

t_k |_{H0} ~ N(0, 1)
t_k |_{Ha} ~ N(δ_0, 1)

where the non-centrality parameter is given by the square root of equation 11 as

δ_0 = ∇_i √(m^T_i C^{-1}_{v_k} m_i) = ∇_i √((C^{-1}_{v_k})_{ii})    (12)

In equation 12, ∇_i is the blunder for the ith observation and the "ii" subscript indicates the ith diagonal element of the matrix. However, the non-centrality parameter is also given by

δ_0 = n_{1-α/2} + n_{1-β}    (13)

where α and β are the probabilities of committing a Type I or Type II error respectively and n is the value of the normal distribution at the subscripted point. This relationship is shown in Figure 1.

Therefore, given α, β and the covariance matrix of the innovation sequence, the smallest blunder capable of being detected is obtained from equations 12 and 13 as

∇_{MDB,i} = δ_0 / √((C^{-1}_{v_k})_{ii}) = (n_{1-α/2} + n_{1-β}) / √((C^{-1}_{v_k})_{ii})    (14)

This quantity is the Marginal (Minimum) Detectable Blunder (MDB) for the ith observation and represents the internal reliability of the system. The MDB for a given observation is affected by the geometry of the measurements (H), the uncertainty of the measurements (Cz), the

amount of process noise added to the system (Q), and the a priori knowledge of the state vector (C_x̂).

Figure 1 - Relationship Between Type I and Type II Error Probabilities and the Non-Centrality Parameter

A shorthand notation for equation 14 is given by

C_v → ∇_MDB    (15)

which implies that the covariance matrix of the innovation sequence on the left hand side is used to obtain the MDBs on the right hand side, on an observation-by-observation basis (via equation 14).

Once the MDB for a given observation is obtained, its effect on the estimated parameters is computed by propagating the error through the gain matrix as

Δx_k = K_k m_i ∇_{MDB,i}    (16)

This is the external reliability for the ith observation. It represents the smallest undetected error in the estimated parameters that could occur due to a blunder on the ith observation. For this reason, it is often called the protection level (PL). Similar to above, a shorthand notation for equation 16 is given by

∇_MDB →^K Δx    (17)

implying that the MDB vector on the left hand side is "passed through" the gain matrix above the arrow (K in this case) to yield the PL vector on the right hand side (via equation 16). Equations 14/15 and 16/17 represent the statistical reliability parameters for a given Kalman filter assuming all observations are processed simultaneously. In other words, the above discussion tacitly assumed that all observations were being processed in a single filter. The next section considers the statistical reliability for observations processed sequentially.

Sequential Processing of Statistically Uncorrelated Observations

Simultaneous processing of observations means that all observations for a given epoch are grouped together and processed as a whole. Sequential processing, on the other hand, processes statistically independent sets of observations one after the other. This approach has two major advantages. First, it simplifies software development since observations can be processed as they are received, instead of having to be combined together. Second, the reduction in the number of operations required for processing observations sequentially can be significant. It is noted, however, that both processing strategies yield identical numerical results, as shown in Brown and Hwang (1992) or Petovello (2003).

This section reviews sequential processing and develops the required reliability equations. An example of the computational savings associated with sequential processing in the context of GPS data processing is shown in the next section.
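Before turning to the sequential case, the simultaneous reliability computations of equations 14 and 16 can be sketched as follows. The two-state geometry, covariances and probability levels below are illustrative assumptions, not values from the paper; the normal quantiles n_{1-α/2} and n_{1-β} are taken from the Python standard library.

```python
import numpy as np
from statistics import NormalDist

# Non-centrality parameter (equation 13); alpha and beta are assumed.
alpha, beta = 0.001, 0.20
delta0 = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(1 - beta)

# Assumed two-state, three-observation geometry.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
Cx_pred = 4.0 * np.eye(2)     # predicted state covariance
Cz = 0.25 * np.eye(3)         # observation covariance

Cv = H @ Cx_pred @ H.T + Cz   # innovation covariance (equation 9)
Cv_inv = np.linalg.inv(Cv)

# Internal reliability: MDB for each observation (equation 14).
mdb = delta0 / np.sqrt(np.diag(Cv_inv))

# External reliability: protection level for each observation (equation 16).
K = Cx_pred @ H.T @ Cv_inv    # gain matrix (equation 4)
pl = [K[:, i] * mdb[i] for i in range(H.shape[0])]
```

Each entry of `pl` is the state-domain error caused by an undetected blunder of MDB size on the corresponding observation.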

To begin, a small modification of the Kalman filtering notation is required. First, the epoch subscript (k) is dropped without loss of generality. Second, observations at a particular epoch are assumed to be processed in sets (e.g. z_1, z_2, etc.). The remaining notation can be expressed as follows:

H_i is the design matrix for the ith set of observations,
C_{z_i} is the covariance matrix of the ith set of observations,
K^{(a,b,...)}_i is the gain matrix for the ith set of observations, after having already processed observation sets a, b, etc.,
v^{(a,b,...)}_i is the innovation sequence for the ith set of observations, after having already processed observation sets a, b, etc.,
C^{(a,b,...)}_{v_i} is the covariance matrix of the innovation sequence for the ith set of observations, after having already processed observation sets a, b, etc., and
C^{(a,b,...)}_x̂ is the covariance matrix of the estimated states after having processed observation sets a, b, etc.

The notation for the reliability parameters is also slightly different from above. It can be summarized as follows:

∇^{(a,b,...)}_i is the MDB vector for the ith set of observations after having already processed observation sets a, b, etc., and
Δx^{(a,b,...)}_i is the PL vector for the ith set of observations after having already processed observation sets a, b, etc.

Using this new notation, the internal and external reliability parameters for the simultaneous processing approach can be summarized respectively as

C^{(0)}_{v,sim} → ∇^{(0)}_{sim}    (18)

∇^{(0)}_{sim} →^{K^{(0)}_{sim}} Δx^{(0)}_{sim}    (19)

where the "sim" subscript implies that all observations are processed together and the "(0)" superscript implies that no observations have previously been processed at the current epoch.

For sequential processing, assume the first set of observations is processed alone. The MDBs for this set of observations can be easily shown to be

C^{(0)}_{v_1} → ∇^{(0)}_1    (20)

Next, assume that the second set of observations is processed to yield

C^{(1)}_{v_2} → ∇^{(1)}_2    (21)

Repeating this process for multiple sets of observations, the general form for the internal reliability for sequential processing is

C^{(1,2,...,i-1)}_{v_i} → ∇^{(1,2,...,i-1)}_i    (22)

This represents the internal reliability for sequential processing. Now, consider that for a given measurement accuracy and geometry, the MDBs for a given observation set are only a function of the covariance matrix of the estimated parameters (the process noise will be reflected in this matrix). It follows therefore, that the more confident the estimate of the state vector, the lower the internal reliability parameters will be. With this in mind, observation sets that are processed first will have a larger MDB than if they were processed later because as each observation set is processed, the estimated state covariance is decreased, which thus improves the internal reliability for subsequent observation sets. In fact, in general, the last set of observations processed in sequential mode has the same internal reliability as if all observations were processed simultaneously. A detailed proof is given in Petovello (2003).
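The ordering effect described above can be checked numerically. The sketch below uses an assumed two-state system with a noisy first observation set and a precise second set, both directly observing the states; it verifies that the MDBs of the last set processed sequentially match the corresponding simultaneous MDBs, while the first set's MDBs are larger.

```python
import numpy as np

# Numerical check of the ordering claim, with assumed illustrative values.
delta0 = 4.13                           # non-centrality parameter (equation 13)

Cx_pred = 4.0 * np.eye(2)               # predicted state covariance
H1, Cz1 = np.eye(2), 1.0 * np.eye(2)    # first set: noisy observations
H2, Cz2 = np.eye(2), 0.01 * np.eye(2)   # second set: precise observations

def mdbs(H, Cx, Cz):
    """MDBs from equation 14 for one observation set."""
    Cv = H @ Cx @ H.T + Cz
    return delta0 / np.sqrt(np.diag(np.linalg.inv(Cv)))

# Sequential: set 1 first, then set 2 against the updated covariance.
mdb1_seq = mdbs(H1, Cx_pred, Cz1)
K1 = Cx_pred @ H1.T @ np.linalg.inv(H1 @ Cx_pred @ H1.T + Cz1)
Cx_1 = (np.eye(2) - K1 @ H1) @ Cx_pred  # equation 6
mdb2_seq = mdbs(H2, Cx_1, Cz2)

# Simultaneous: both sets stacked in a single update.
H = np.vstack([H1, H2])
Cz = np.block([[Cz1, np.zeros((2, 2))], [np.zeros((2, 2)), Cz2]])
mdb_sim = mdbs(H, Cx_pred, Cz)

# The last set processed sequentially matches the simultaneous result,
# while the first set's MDBs are larger (poorer internal reliability).
print(np.allclose(mdb2_seq, mdb_sim[2:]))   # True
print(np.all(mdb1_seq > mdb_sim[:2]))       # True
```

The equality for the last set follows because the innovation covariance of the final set, conditioned on all earlier sets, is the Schur complement of the joint innovation covariance, whose inverse equals the corresponding block of the inverse of the full matrix.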


A consequence of the above is that, in sequential mode, the order in which the observations are processed can impact the overall reliability of the system. Therefore, those observations that contribute most to the final state estimates should be processed later than those that do not. In this way, the reliability of these observations will be improved. This will be addressed again in the results section of this paper.

To assess the external reliability for the sequential processing strategy, the fact that both sequential and simultaneous processing of the observations produce the same numerical answer, in terms of the final state estimate, is exploited. Assuming only two sets of observations are available, the correction to the initial state estimate using the simultaneous processing approach is given by

δx̂ |_{H0} = K^{(0)}_{1,2} v^{(0)}_{1,2}    (23)

where the “1,2” subscript implies that observations 1 and 2 are processed simultaneously. Next, in the presence of blunders, the above equation can be written as

δx̂ |_{Ha} = K^{(0)}_{1,2} (v^{(0)}_{1,2} + ∇^{(0)}_{1,2})
          = K^{(0)}_{1,2} v^{(0)}_{1,2} + K^{(0)}_{1,2} ∇^{(0)}_{1,2}
          = δx̂ |_{H0} + Δx^{(0)}_{1,2}    (24)

from which the following is obvious

∇^{(0)}_{1,2} →^{K^{(0)}_{1,2}} Δx^{(0)}_{1,2}    (25)

or in matrix form

[∇^{(0)}_1 ; ∇^{(0)}_2] →^{K^{(0)}_{1,2}} [Δx^{(0)}_1 ; Δx^{(0)}_2]    (26)

However, the blunders of interest are those obtained from the sequential processing approach, not from the simultaneous approach as shown above. Fortunately, the "transformation" of the blunders into the protection level vector is linear in terms of the blunders themselves. This means that the MDBs in equations 25 and 26 can be scaled to match those of the sequential approach. Doing so yields

[∇^{(0)}_1 ; ∇^{(1)}_2] →^{K^{(0)}_{1,2}} [Δx^{(0)}_1 ; Δx^{(1)}_2]    (27)

or more generally

∇^{(1,2,...,i-1)}_i →^{K^{(0)}_{sim}} Δx^{(1,2,...,i-1)}_i    (28)

where the "sim" subscript again indicates that the gain matrix is computed assuming simultaneous processing of the observations. The use of the gain matrix from the simultaneous case essentially means that although the internal reliability parameters are generally larger with sequential processing, their effect on the estimated states can be minimized by other measurements processed afterwards.

Advantages of Sequential Processing

The major advantage of sequential processing is the reduction in the number of floating point operations required by a software program. To illustrate, consider the computation of the gain matrix in equation 4. In particular, note that the dimension of the matrix to be inverted is equal to the number of observations. It follows, therefore, that reducing the number of observations being processed at a time can significantly reduce the number of floating point operations required. For the case at hand, the matrix to be inverted is known to be symmetric and positive-definite, and so the inverse can be performed using Cholesky decomposition for efficiency. The number of floating point operations for this approach (including square roots) is given by

Γ = N^3/3 + 3N^2/2 - 5N/6    (29)


where N is the dimension of the matrix being inverted.

As a practical example, consider the three observables available from a typical GPS receiver, namely code, Doppler and carrier phase. Using double difference techniques, all of the observations of a particular observable become correlated with each other. However, the different double differenced observables are often considered to be mutually uncorrelated and can therefore benefit from the sequential processing approach. Specifically, the double difference code, Doppler and carrier phase observables can be processed sequentially. To illustrate the reduction in the number of floating point operations for this case, the following represents the fractional improvement of the sequential approach over the simultaneous approach

Γ(3m) / (3 Γ(m))    (30)

where m is the number of double differences, assumed to be the same for all three observation types. As shown in Figure 2, when three to ten double differences are considered, the above ratio ranges from about 6 to 7.2 and thus represents a significant computational savings.
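The operation counts of equations 29 and 30 are straightforward to reproduce. The sketch below assumes the Cholesky operation count Γ(N) = N^3/3 + 3N^2/2 - 5N/6 and prints the simultaneous-to-sequential ratio for three to ten double differences.

```python
def gamma(n):
    """Floating point operations (including square roots) to invert an
    n x n symmetric positive-definite matrix via Cholesky decomposition,
    per equation 29."""
    return n**3 / 3 + 3 * n**2 / 2 - 5 * n / 6

def ratio(m):
    """Equation 30: cost of one (3m x 3m) inverse relative to three
    (m x m) inverses, for three uncorrelated observable types with m
    double differences each."""
    return gamma(3 * m) / (3 * gamma(m))

for m in range(3, 11):
    print(m, round(ratio(m), 2))
```

Running this reproduces the range quoted in the text: roughly 6 at three double differences, rising towards 7.2 at ten.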

RESULTS

As stated above, statistical reliability assessments do not require actual observations, but can instead be performed using covariance information from simulations. Specifically, for GPS-only systems, as investigated herein, this requires knowledge of the satellite and user positions, and the measurement precision. However, such approaches can be optimistic because (i) consideration of satellite masking is often ignored, and (ii) carrier phase ambiguities are typically assumed to be always fixed or always float. A complete analysis using covariance simulations is therefore beyond the scope of this paper.

Figure 2 - Floating Point Operation Count Using Sequential and Simultaneous Processing (see equation 30)

Instead, the results presented herein are obtained from an actual field test to better represent practical conditions. The data set was collected west of Calgary, AB, near Springbank airport, and lasted about 20 minutes. All results assume that code, Doppler and L1 carrier phase observations are processed at every epoch. The undifferenced measurement precisions are shown in Table 1. The data was processed in double difference mode using the University of Calgary's Satellite And Inertial Navigation Technology (SAINT™) software. It is noted that the ambiguities were fixed to integers for the majority of the run, with some short periods during which some, or all, ambiguities were estimated as real-valued quantities.

The Kalman filter used for processing estimated the vehicle's position and velocity, with the latter being modeled as first order Gauss-Markov parameters. The spectral density of the process noise driving the velocity states was 1 m²/s. The double difference carrier phase ambiguities are also added to the state vector when necessary.


Table 1 - Standard Deviations of Undifferenced Measurements

Measurement    Standard Deviation
C/A Code       50 cm
Doppler        3 cm/s
L1 Phase       2 cm

To summarize results in a concise manner, the relevant reliability parameters are computed for each observation for each satellite (PRN) at every epoch in the data set. The root mean square (RMS) of these values across the data set is then computed. With this in mind, Figure 3 shows the RMS MDBs and three-dimensional PLs (TPLs) for each satellite and observable, as computed using the simultaneous processing approach. Note that the TPL for the Doppler measurements refers to the velocity estimates, while the TPL for the code and phase measurements refers to the position estimates.

Figure 3 - RMS MDBs and TPLs Using Simultaneous Processing for Each PRN for the First Data Set

As shown, the code MDBs are considerably larger than those of the phase measurements. However, their effect on the estimated parameters is considerably smaller. The reason for this is that the relatively large measurement uncertainty of the code measurements limits the ability to detect blunders, but also provides little information to the overall system parameters. In contrast, phase blunders are considerably smaller than for code, but their impact on the estimated position is larger because of the high accuracy of the carrier phase observations.

From the above analysis, it is concluded that those observations that contribute most to the estimated parameters should have the highest reliability, since a blunder on one of these observations will cause the largest impact. However, as discussed previously, if sequential processing is to be used, the order in which the observations are processed becomes important. For the case at hand, there are three different situations for any particular type of observation. These include

1. When a particular type of observation is processed first. In this case, the order in which the other observations are processed is unimportant.

2. When a particular type of observation is processed second. Two sub-categories are then considered, depending on which of the remaining two observations were processed first.

3. When a particular type of observation is processed last. As stated above, this is the same as if all observations are processed simultaneously.

With this in mind, Figure 4 shows the position TPLs for the code blunders when the observations are processed in different orders using the sequential processing approach. This graph shows that as the code observations are processed later, relative to the Doppler and phase observations, the TPLs decrease. This is expected since the MDBs must decrease in this case (see above). However, since the gain matrix that maps these errors into the estimated states is the same in all cases (i.e. the simultaneous gain matrix is independent of the observation processing order), the graph also indirectly represents the relative differences in the code MDBs using the various processing strategies.


Figure 4 - RMS Position TPLs for Code Blunders Using Different Observation Orders with the Sequential Processing Approach

Figure 5 - RMS Velocity TPLs for Doppler Blunders Using Different Observation Orders with the Sequential Processing Approach

Also of note is that as long as the code observations are processed after the carrier phase observations, there is essentially no difference whether they are processed second or third (last). This is seen by comparing the red and light blue lines in Figure 4. Although the light blue lines are actually slightly lower (this cannot be seen but is mathematically true, since the light blue line represents the best-case reliability for the code observations), the differences are negligible in a practical sense. This is because once the phase observations are processed, their high accuracy will limit the effect of a code blunder on the estimated position.

For the Doppler blunders, there is very little difference between the different observation processing orders. The main reason is that the Doppler observations primarily observe the velocity. In contrast, the code and phase observations observe the position, and contribute to the velocity only through the correlation between the position and velocity states in the filter. As a result, processing code or phase observations before the Doppler measurements does not produce significant differences in the velocity TPLs. In particular, code observations can be processed before or after the Doppler measurements with essentially no difference in velocity reliability. Processing the L1 phase before the Doppler measurements can provide some minor advantage, because the high accuracy of the phase is able to influence the velocity estimates through their mutual correlation.
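The coupling mechanism described above can be seen in a minimal two-state position-velocity filter (a sketch with assumed values, not the paper's filter): a position-only measurement adjusts the velocity estimate only through the off-diagonal position-velocity term of the covariance matrix.

```python
import numpy as np

# Minimal two-state [position, velocity] sketch (values assumed): a
# position-only measurement moves the velocity estimate exactly in
# proportion to the position-velocity covariance term.

H = np.array([[1.0, 0.0]])          # observe position only
R = np.array([[4.0]])               # measurement noise variance

def update(x, P, z):
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + (K @ (z - H @ x)).ravel(), (np.eye(2) - K @ H) @ P

x = np.zeros(2)
z = np.array([1.0])

# Nonzero position-velocity correlation: the measurement adjusts velocity.
x_corr, _ = update(x, np.array([[2.0, 1.0], [1.0, 2.0]]), z)

# Zero correlation: the velocity estimate is untouched.
x_diag, _ = update(x, np.array([[2.0, 0.0], [0.0, 2.0]]), z)

assert abs(x_corr[1]) > 0.0 and x_diag[1] == 0.0
```

Only the nonzero off-diagonal covariance lets the position measurement influence the velocity, which is why code and phase observations contribute only weakly to velocity reliability.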

While the TPLs for the code observations depend on the order in which the observations are processed, the differences are not considered very significant. Specifically, the differences between the various processing orders are at most one centimetre in the three-dimensional position.

For the L1 phase blunders, the TPLs are clearly dependent on the order in which the observations are processed. Overall, Figure 6 shows that the effect of an L1 phase blunder on the estimated position can be reduced by up to almost 3 cm in some cases (PRN 4), depending on the satellite and the order in which the observations are processed.

Similar to Figure 4 above, Figure 5 and Figure 6 show the RMS TPLs for Doppler and L1 phase blunders, respectively. As before, the TPLs for the Doppler and phase measurements refer to the velocity and position, respectively.


Figure 6 - RMS Position TPLs for L1 Carrier Phase Blunders Using Different Observation Orders with the Sequential Processing Approach

The results from the above analysis show that, while processing observations in different orders using the sequential approach does produce different reliability performance, the differences are often not significant. In particular, differences of 1 cm for code blunders, 1 cm/s for Doppler blunders, and about 3 cm for L1 phase blunders were observed. However, since the respective TPLs are typically considerably larger than this, the relative improvement is only slight.

CONCLUSIONS AND FUTURE WORK

The traditional equations for statistical reliability in kinematic systems assume that all observations are processed at once. However, software simplicity and efficiency can result from processing statistically independent sets of observations one at a time. For RTK GPS systems, the processing requirements were shown to be reduced by a factor of about 6 to 7. Given this advantage, this paper redeveloped the traditional equations for statistical reliability in kinematic systems for use with sequential processing of independent observations.

The new equations were applied to a GPS data set with mostly fixed ambiguities to show the difference in the reliability parameters when using the simultaneous and sequential processing approaches. Furthermore, for the sequential approach, the code, Doppler and L1 phase observations were processed in different orders to evaluate any differences that may occur.

Overall, while some differences between the simultaneous and sequential processing approaches were observed in terms of the external reliability parameters, these differences were not considered significant. Maximum differences in the three-dimensional position and velocity TPLs were found to be 1 cm for code observations, 1 cm/s for Doppler observations and about 3 cm for L1 phase observations. As such, RTK GPS applications can exploit the efficiency benefits of sequential processing without significant loss of reliability.

The results presented were based on a limited sample of satellite geometries and measurement accuracies. Future work will investigate performing covariance simulations to better evaluate the different reliability approaches under a variety of operational conditions. Such an investigation will also involve comparing the reliability parameters obtained with fixed and float ambiguities.

REFERENCES

Brown, R.G. and P.Y.C. Hwang (1992), Introduction to Random Signals and Applied Kalman Filtering, Second Edition, John Wiley & Sons, Inc.

Gelb, A. (1974), Applied Optimal Estimation, The M.I.T. Press.

Leick, A. (1995), GPS Satellite Surveying, Second Edition, John Wiley & Sons, Inc.

Lu, G. and G. Lachapelle (1992), Statistical quality control for kinematic GPS positioning, manuscripta geodaetica, Vol. 17, pp. 270-281.


Petovello, M.G. (2003), Real-Time Integration of a Tactical-Grade IMU and GPS for High-Accuracy Positioning and Navigation, PhD Thesis, Department of Geomatics Engineering, The University of Calgary. Submitted.

Ryan, S.J. (2002), Augmentation of DGPS for Marine Navigation, PhD Thesis, Department of Geomatics Engineering, The University of Calgary. UCGE Report Number 20164.

Teunissen, P.J.G. and M.A. Salzmann (1989), A recursive slippage test for use in state-space filtering, manuscripta geodaetica, Vol. 14, pp. 383-390.

Wei, M., D. Lapucha and H. Martell (1990), Fault Detection and Estimation in Dynamic Systems, Proceedings of KIS 1990, Department of Geomatics Engineering, The University of Calgary, pp. 201-217.