
Improved state estimator in the face of unreliable parameters*

Mukul Agarwal and Dominique Bonvin†

Technisch-Chemisches Laboratorium, Eidgenössische Technische Hochschule, 8092 Zürich, Switzerland (Received 31 May 1991; revised 22 November 1991)

The improvement in state estimation based on independently estimated parameter values can be marginal when these parameter values are in error. A measure of uncertainty in the parameters is their error covariance, which most parameter estimation methods do not yield reliably. This work develops a new approach that obviates reliance on incorrect error covariance of the parameters. Uncertainty in the parameter values is assessed on the basis of errors in a priori predictions over a certain time horizon. This uncertainty measure is incorporated into the state estimator, modifying the state gains to account for errors in the parameter values used.

(Keywords: state estimation; errors; simulation)

Simultaneous on-line estimation of states and parameters of dynamic systems has popularly been performed using the extended Kalman filter (EKF). The performance of the EKF can be improved by estimating the parameters separately, independently of the state estimates, and using these values as known parameters in a parallel state estimator. However, the uncertainty in the parameter values used must somehow be accounted for in order to make the state estimator less prone to divergence. Nelson and Stear¹ recognized the need for some modification of the state-error covariance so as to account for the errors in the parameter values used.

The available techniques for divergence prevention of state estimators in the face of parameter errors can be divided into two categories. First, possible parameter error is lumped into a global uncertainty, and dealt with in a general framework with no specific attention to the parameters themselves²⁻⁴. In the second category belong methods that require prior knowledge of the parameter-error covariance. The Schmidt-Kalman filter⁴ utilizes the parameter-error covariance in an extended-state formulation that modifies the state uncertainty without estimating the extending states themselves. Another approach by Chung and Bélanger⁷ exploits the parameter-error covariance to determine an optimal state gain that minimizes the sensitivity of the state estimates to parameter error.

The following section outlines the motivation for the strategy developed in this work. The theoretical aspects are developed in the following two sections. Simulation results in the final section demonstrate the performance of the proposed technique.

*The original version of this paper was presented at the AIChE Annual Meeting held in San Francisco in November 1989.
†Present address: Institut d'Automatique, EPFL, 1015 Lausanne, Switzerland.

Problem evaluation

The proposed approach

Decoupled parameter estimators serve as a source of parameter values for the state estimator. A reasonable measure of uncertainty in these parameter values is their error covariance matrix, $P_a$. Most parameter estimation methods do not yield a reliable $P_a$. For example, prediction error methods yield the correct $P_a$, but only asymptotically and in the absence of plant-model mismatch⁸. Even when an accurate $P_a$ is available, its usefulness would usually be mitigated by a non-linear mapping involved between the estimated parameters and those that appear in the state-space model used by the state estimator. On the other hand, the a priori prediction error is a reliable parameter uncertainty measure, which is available with all parameter estimation methods. A technique for calculating a reliable $P_a$ from the prediction error is developed later in a general framework.

The state estimator can then be improved using any of the methods that utilize a known $P_a$ to account for parameter uncertainty. Of these, the sensitivity-based modification of Chung and Bélanger⁷ is simple and computationally attractive, but requires $P_a$ to be diagonal with sufficiently small variances. The Schmidt-Kalman filter described by Jazwinski⁴ does not impose restrictions on the size of parameter variances, but it does

J. Proc. Cont. 1991, Vol 1, November 251

0959-1524/91/050251-07 © 1991 Butterworth-Heinemann Ltd


require a time-invariant diagonal $P_a$ and consideration of parameters only as additive bias terms in the dynamic and measurement models. These limitations are removed in the development of a later section, which extends the Schmidt-Kalman filter to a realistic general case.

Comparison with noise adaption

The proposed technique involves on-line adaption of $P_a$ using an innovation sequence, unlike the adaptive filters that use the innovation sequence to identify on-line the noise covariances $Q_x$ and $R_x$ for the dynamic and measurement models, respectively. There are several reasons for adapting $P_a$ in preference to $Q_x$ and $R_x$ in many applications.

First, the innovation sequence available from the parameter estimator goes unheeded in the noise-adapting filters, which concentrate solely on the innovation sequence of the state estimator. Second, the state uncertainty introduced by a wrong parameter value depends strongly on the value of the state, so that noise adaption would have to be much faster than the equivalent $P_a$ adaption. Third, for systems typically comprising several dynamic and measurement equations with a few poorly known parameters, adaption of $P_a$ involves fewer unknowns than does noise adaption.

Fourth, most noise-adapting filters are restricted to diagonal $Q_x$⁹, an unrealistic assumption when a particular physical parameter appears in more than one dynamic equation. On the other hand, the physical parameters themselves are usually uncorrelated and this prior information could be used to advantage by presetting the appropriate off-diagonal terms in $P_a$ to zero. Fifth, noise-adapting filters are known to suffer from several drawbacks⁵,⁹ such as computational burden, sensitivity to initial guess and tendency to diverge, or unrealistic restrictions such as dimensions of $Q_x$ and $R_x$ being equal¹⁰ or noise processes being stationary¹¹.

It is therefore clear that although many real applications would benefit more from $P_a$ adaption, one could readily construct an example where noise adaption would outperform the $P_a$ adaption (or vice versa). Therefore, no such comparison is presented for the simulation example in the last section.

Parameter uncertainty from residuals

The theoretical development is presented for the linear case. Extensions to non-linear models are commented upon as needed. The system is assumed to be modelled in continuous time as:

$$\dot{x}(t) = A(a)x(t) + B(a)u(t) + w(t), \qquad x(0) = x_0 \tag{1}$$

$$y(t) = c^T(a)x(t) + v(t) \tag{2}$$

where $x$ is an $n$-vector state, $u$ an $m$-vector input, and $y$ a scalar output (scalar $y$ implies no loss of generality as measurements can be processed one at a time). Matrices $A$ and $B$, and vector $c$, of appropriate dimensions, may depend on the unknown parameters $a$. It is important to note that $a$ is a vector comprising the parameters that are estimated; these may or may not be the physical parameters of the system or the elements of $A$, $B$, and $c$. $w$ and $v$ are mutually uncorrelated zero-mean, white noise processes with known covariances $Q_x$ and $R_x$. The above linear model can be discretized and transformed to the input-output form:

$$a(k+1) = a(k) + \omega_a(k) \tag{3}$$

$$y(k) = \phi^T(k)\,\theta(k) + v_y(k) \tag{4}$$

where the parameter vector $\theta$, the information vector $\phi$, and variance $R_y$ of the noise sequence $v_y$ are the usual mappings of the corresponding properties of the continuous-time state-space representation⁸. Here $\theta$ is a vector of input-output parameters, which could be identical to, or a non-linear mapping of, the vector of estimated parameters $a$. Inclusion of the zero-mean, white noise sequence $\omega_a$ in Equation (3) allows, through specification of its covariance $Q_a$, coverage of slow time variations in the estimated parameters. The general formulation above has been used to accommodate two main choices of the estimated parameters $a$. One choice, $a = \theta$, results in the input-output model being linear in $a$ and the state-space model being non-linear in $a$. The other choice of $a$ as the unknown physical parameters (or their combinations appearing as elements of $A$, $B$ and $c$) might preserve a state-space model linear in $a$, but results in the input-output model being non-linear in $a$.

Regardless of the specific method used to estimate $a$, the estimated value $\hat{a}$ at time $k$ can be used to predict, from the deterministic part of Equation (4), the output $\hat{y}$ after $l$ sampling instants. Then the a priori prediction residuals are given by:

$$r(k+l\,|\,k) \equiv y(k+l) - \hat{y}(k+l\,|\,k) \tag{5}$$

$$= \phi^T(k+l)\,\theta\!\left(a(k) + \sum_{i=1}^{l}\omega_a(k+i-1)\right) - \phi^T(k+l)\,\theta(\hat{a}(k\,|\,k)) + v_y(k+l), \qquad l = 1, 2, \dots \tag{6}$$

Considering the expectation of Equation (6) conditioned upon information available at time $k$, the predicted variance of the residual is:

$$\mathcal{E}\{r^2(k+l\,|\,k)\} \approx m^T(k+l)\,P_a(k\,|\,k)\,m(k+l) + m^T(k+l)\left[\sum_{i=1}^{l} Q_a(k+i-1)\right] m(k+l) + R_y(k+l), \qquad l = 1, 2, \dots \tag{7}$$

where $\mathcal{E}$ is the expectation operator, $P_a$ is the sought $a$-error covariance that quantifies the uncertainty in the parameter values, and $m$ is given by

$$m(k+l) \equiv \left.\frac{\partial\,[\phi^T(k+l)\,\theta(a)]}{\partial a}\right|_{a = \hat{a}(k|k)} \tag{8}$$


so that $m = \phi$ for the choice $a = \theta$, in which case Equation (7) involves no approximation.
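For a choice of $a$ other than $\theta$, the gradient in Equation (8) is generally evaluated numerically. A minimal sketch (the function and helper names are illustrative, not from the paper; central differences are only one possible choice):

```python
import numpy as np

def m_vector(phi, theta_of_a, a_hat, eps=1e-6):
    """Central-difference gradient of phi^T theta(a) w.r.t. a at a_hat,
    approximating Equation (8)."""
    a_hat = np.asarray(a_hat, dtype=float)
    m = np.zeros_like(a_hat)
    for j in range(a_hat.size):
        da = np.zeros_like(a_hat)
        da[j] = eps
        m[j] = (phi @ theta_of_a(a_hat + da)
                - phi @ theta_of_a(a_hat - da)) / (2 * eps)
    return m
```

For the choice $a = \theta$ the mapping is the identity and the gradient collapses to $\phi$, consistent with the remark above.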

Analogous to the covariance matching technique used for adaption of noises $Q_x$ and $R_x$⁵, the estimate of $P_a$ will derive from the objective that the actual prediction residuals ought to be consistent with their theoretically predicted statistics, that is:

$$r^2(k+l\,|\,k) = \mathcal{E}\{r^2(k+l\,|\,k)\}, \qquad l = 1, \dots, N \tag{9}$$

where $N$ is the time horizon for a priori predictions. The time span for the consistency check must comprise at least as many sampling instants as the number of unknown elements in $P_a$. However, a broader time span is essential to make the statistics of the actual residuals meaningful in the face of dynamic and measurement noises, model mismatch and possible approximation in Equation (7). Substituting the respective expressions for both sides of Equation (9) leads to a set of relationships involving $P_a$ as the only unknown:

$$m^T(k+l)\,P_a(k\,|\,k)\,m(k+l) = \gamma(k+l\,|\,k), \qquad l = 1, \dots, N \tag{10}$$

where the scalar $\gamma$, defined as:

$$\gamma(k+l\,|\,k) \equiv [y(k+l) - \hat{y}(k+l\,|\,k)]^2 - m^T(k+l)\left[\sum_{i=1}^{l} Q_a(k+i-1)\right] m(k+l) - R_y(k+l) \tag{11}$$

is considered known (see Remark 1 below). Since $P_a$ is non-negative definite by definition, the $\gamma$ used in Equation (10) is elevated artificially to a low limit of zero if the value calculated from Equation (11) is negative.

In order to combine this set of $N$ equations into one relationship, the scalar term involving $P_a$ can be transformed as:

$$m^T(k+l)\,P_a(k\,|\,k)\,m(k+l) = h^T(k+l)\,p(k\,|\,k), \qquad l = 1, \dots, N \tag{12}$$

where $p$ is a vector containing a single entry of each unknown in $P_a$ (without repeating the symmetric off-diagonal elements) and $h$ is a vector containing the corresponding elements of $m m^T$ to represent the coefficient of each unknown. The transformed representation enables lumping of the $N$ relationships in Equation (10) into one equation:

$$H p = \Gamma \tag{13}$$

where the matrix $H$ and the vector $\Gamma$ are formed by piling together the $N$ transposed vectors $h^T$ and scalars $\gamma$, respectively. The solution for $p$ is then obtained using ordinary linear regression. The elements of $p$ are subsequently rearranged to form the matrix $P_a(k\,|\,k)$.
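The construction of Equation (13) and its least-squares solution can be sketched as follows, for the special case of a diagonal $P_a$ (a simplification; the general form of the paper also carries the symmetric off-diagonal unknowns, and the floor at zero on $\gamma$ is applied as described above):

```python
import numpy as np

def estimate_Pa(m_list, gamma_list):
    """Solve H p = Gamma by least squares for the unknown elements of a
    diagonal P_a (Equations (10)-(13)).

    m_list     : list of N gradient vectors m(k+l)
    gamma_list : list of N scalars gamma(k+l|k) from Equation (11)
    """
    # h(k+l): for diagonal P_a the coefficient of each variance is m_i^2
    H = np.array([np.asarray(m) ** 2 for m in m_list])
    Gamma = np.maximum(np.array(gamma_list), 0.0)  # floor negative gamma at zero
    p, *_ = np.linalg.lstsq(H, Gamma, rcond=None)
    p = np.maximum(p, 0.0)                         # keep variances non-negative
    return np.diag(p)
```

With a horizon $N$ larger than the number of unknowns, as the text recommends, the regression averages out the noise in the individual $\gamma$ values.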

Remark 1. Similar to the covariance matching techniques for noise adaption³,⁵, the requirement in Equation (11) that future measurements $y(k+l)$ be known introduces a time lag in the implementation of the filter. Thus $\hat{a}(k\,|\,k)$, $P_a(k\,|\,k)$, and $\hat{x}(k\,|\,k)$ are calculated only after the measurement $y(k+N)$ is available at the instant $k+N$. A delayed implementation is not required when part or all of the measurement information up to the current instant $k$ has not been used for evaluating $\hat{a}$, and can therefore be utilized to compute the a priori residuals and $P_a$ using negative values of $l$. Such is the case, for instance, when the parameters are not updated at all, or are updated only intermittently using off-line regression, or are updated on-line using only every alternate measurement.

Remark 2. The development above is presented for scalar $y$ to enhance notational clarity; multiple outputs are trivially accommodated by processing Equations (10) and (12) independently for each output, and subsequently stacking all equations together for all outputs during the lumping that leads to Equation (13).

Remark 3. When the state-space model is non-linear in states, the decoupled estimation scheme can be envisioned as comprising an intermittent non-linear regression (over limited windows of data to stay alert to possible time variations in parameters) to obtain optimal values for the parameters and the initial state, and a recursive state estimator that serves to fill in the measurement updates of the state assuming constant parameter values until more reliable estimates of parameters and states become available at the completion of the next window.

Improved state estimator

The estimated $P_a$ is utilized in the state estimator to improve reliability of the state estimates. Although the development here follows the philosophy of the Schmidt-Kalman algorithm described by Jazwinski⁴, other methods such as the sensitivity-based approach by Chung and Bélanger⁷ are applicable as well.

Consider the state-space model in Equations (1) and (2) with the extended state $x_e$ and its error covariance matrix $P$ defined as

$$x_e = \begin{bmatrix} x \\ a \end{bmatrix} \tag{14}$$

$$P = \begin{bmatrix} P_x & P_{xa} \\ P_{xa}^T & P_a \end{bmatrix} \tag{15}$$

where $P_{xa}$ denotes the cross-covariance matrix between $x$ and $a$. Further, the extended noise covariance $Q_e$ and the filter gain $K_e$ are formed in the obvious way from their respective components, and all extended-state properties are initialized using the initial value for each component block on the vector or on the diagonal blocks, and zero entries in the off-diagonal blocks. The extended state-space model thus formed is subsequently linearized, discretized and used to implement the EKF in the usual way⁴.

The present approach differs from the standard implementation of the EKF equations in that only the state rows need to be calculated to determine $\hat{x}(k\,|\,k-1)$, $K_x(k)$, $\hat{x}(k\,|\,k)$, $P_x(k\,|\,k)$, and $P_{xa}(k\,|\,k)$, in order to save on computational effort, although the covariance update $P(k\,|\,k-1)$ between observations must be calculated in its entirety. The important difference, however, lies in the use of the $P_a(k\,|\,k)$ calculated earlier to complete the uncalculated blank block in $P(k\,|\,k)$. Subsequently, this completed $P(k\,|\,k)$ is used in the next cycle to process the dynamic and the measurement updates, calculating only the state rows when sufficient. The uncalculated blank block in $P(k+1\,|\,k+1)$ is then again completed using the $P_a(k+1\,|\,k+1)$ given earlier. Subsequent cycles repeat the same procedure.
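The block-completion step of this cycle can be sketched schematically (not the full EKF; `n` is the state dimension, and the externally identified $P_a$ simply overwrites the uncalculated block):

```python
import numpy as np

def complete_covariance(P, Pa_new, n):
    """Overwrite the (uncalculated) parameter block of the extended
    covariance P with the externally identified P_a.  The state block
    P_x and the cross block P_xa (the first n rows) are assumed to have
    been updated already by the state rows of the filter."""
    P = P.copy()
    P[n:, n:] = Pa_new          # fill the blank P_a block
    P[n:, :n] = P[:n, n:].T     # enforce symmetry of the cross blocks
    return P
```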

The covariance matrix $P(k\,|\,k)$ formed by appending the new $P_a(k\,|\,k)$ must stay non-negative definite to be meaningful. This condition may be violated when $P_a(k\,|\,k)$ carries a significantly reduced uncertainty. The solution lies in artificially adjusting $P_{xa}(k\,|\,k)$ sufficiently to make it compatible with the computed $P_x(k\,|\,k)$ and the incoming $P_a(k\,|\,k)$. This is best done by reducing the absolute values, while retaining the sign, of the elements of $P_{xa}$, just enough to make $P$ non-negative definite. The necessary correction on each element of $P_{xa}$ is unique, as can be verified by considering the Cholesky factorization of $P$. Indeed, this factorization offers a systematic, efficient way actually to calculate the factors of the desired matrix $P$.
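The element-wise Cholesky-based correction of the paper is not reproduced here; a simpler illustrative variant shrinks the whole cross block by one scalar factor, found by bisection on Cholesky success (it assumes the $P_x$ and $P_a$ blocks are themselves non-negative definite, so a zero cross block is always feasible):

```python
import numpy as np

def is_psd(P, tol=1e-12):
    """Non-negative definiteness test via Cholesky with a small jitter."""
    try:
        np.linalg.cholesky(P + tol * np.eye(P.shape[0]))
        return True
    except np.linalg.LinAlgError:
        return False

def shrink_cross_block(P, n, steps=40):
    """Reduce |P_xa| (keeping signs) just enough to make P non-negative
    definite: bisection on a scalar shrink factor beta in [0, 1]."""
    if is_psd(P):
        return P
    Pxa = P[:n, n:].copy()
    lo, hi = 0.0, 1.0          # beta = 0 (zero cross block) is always safe
    for _ in range(steps):
        beta = 0.5 * (lo + hi)
        Q = P.copy()
        Q[:n, n:] = beta * Pxa
        Q[n:, :n] = beta * Pxa.T
        if is_psd(Q):
            lo = beta
        else:
            hi = beta
    Q = P.copy()
    Q[:n, n:] = lo * Pxa
    Q[n:, :n] = lo * Pxa.T
    return Q
```

Unlike the paper's per-element correction, the single factor treats all cross-covariances alike; it serves only to illustrate the feasibility argument.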

Remark 4. Two choices are available in defining $a$. The first simply sets $a = \theta$, and the second specifies $a$ to comprise the physical parameters (or their combinations as elements of $A$, $B$ and $c$) appearing in the state-space model. Due to the inherent non-linear parametric transformation existing between the state-space model used by the extended-state estimator of this section and the input-output representation used for calculating $P_a$ earlier, it is not possible to specify $a$ so as to retain linearity of parameters in both representations. Since, in general, the parameter estimator has fewer sources of error, its performance would deteriorate relatively less if the latter specification of $a$ is chosen.

Remark 5. Despite its apparent similarity to the normal EKF, the improved extended-state estimator is intrinsically different in that the error due to linearization approximation is not self-feeding and becomes progressively less significant as a more accurate fresh parameter value is used at each sampling instant.

Simulation results

A simple single-input single-output system is simulated, with one state and two unknown parameters, one each in the dynamic and the measurement equations:

$$\dot{x}(t) = -a\,x(t) + u(t) + w(t), \qquad x(0) = x_0 \tag{16}$$


Figure 1 a, Input signal; b, process response. y: solid, actual; dashed, deterministic

$$y(t) = c\,x(t) + v(t) \tag{17}$$

The process is simulated for constant parameters $a = 0.5$ and $c = 0.5$ and initial state $x_0 = 1$. The input is generated as a Gaussian random signal with mean equal to 1 and variance equal to 0.1, held constant over discrete periods of duration 1. Considerable dynamic and measurement noises are added with means equal to zero and variances $Q = 10^{-3}$ and $R = 10^{-3}$. The generated input and the process response are shown in Figure 1, which also shows the process response without the additive noises.
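The simulated process can be reproduced approximately as follows (a sketch; the random-number generation and seed are assumptions, and the zero-order-hold discretization treats $w$ as constant over each period):

```python
import numpy as np

def simulate(a=0.5, c=0.5, x0=1.0, T=1.0, n_steps=20, seed=0):
    """Simulate x' = -a x + u + w, y = c x + v (Equations (16), (17))
    with u ~ N(1, 0.1) held over each period and Q = R = 1e-3."""
    rng = np.random.default_rng(seed)
    u = rng.normal(1.0, np.sqrt(0.1), n_steps)   # piecewise-constant input
    x, xs, ys = x0, [], []
    for k in range(n_steps):
        w = rng.normal(0.0, np.sqrt(1e-3))       # dynamic noise
        v = rng.normal(0.0, np.sqrt(1e-3))       # measurement noise
        # exact zero-order-hold discretization of the held signals
        x = np.exp(-a * T) * x + (1 - np.exp(-a * T)) / a * (u[k] + w)
        xs.append(x)
        ys.append(c * x + v)
    return u, np.array(xs), np.array(ys)
```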

The model structure is assumed to be known, including the known variances $Q$ and $R$. The problem is to estimate the state, with unknown initial value, accounting for uncertainties in the parameter values that either are simply available as output of some black-box parameter estimator or are estimated on-line using a parallel decoupled estimator. Using a sampling time $T = 1$ the input-output representation of the model is:

$$y(k) = \theta_1\,y(k-1) + \theta_2\,u(k-1) + [\theta_2\,w(k-1) + v(k)] \tag{18}$$

where the mapping between the physical parameters $\{a, c\}$ and the parameters $\{\theta_1, \theta_2\}$ of the input-output representation is:

$$\theta_1 = e^{-aT}, \qquad \theta_2 = \frac{c}{a}\left(1 - e^{-aT}\right);$$

$$a = \frac{1}{T}\ln\frac{1}{\theta_1}, \qquad c = \frac{\theta_2}{1-\theta_1}\,\frac{1}{T}\ln\frac{1}{\theta_1} \tag{19}$$
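The mapping of Equation (19) and its inverse can be checked numerically (a sketch of the algebra only):

```python
import numpy as np

def to_theta(a, c, T=1.0):
    """Physical parameters {a, c} -> input-output parameters {theta1, theta2}."""
    theta1 = np.exp(-a * T)
    theta2 = (c / a) * (1.0 - theta1)
    return theta1, theta2

def to_physical(theta1, theta2, T=1.0):
    """Inverse mapping {theta1, theta2} -> {a, c} (Equation (19))."""
    a = np.log(1.0 / theta1) / T
    c = a * theta2 / (1.0 - theta1)
    return a, c
```

A round trip at the simulated values $a = c = 0.5$ recovers the physical parameters; the non-linearity of this mapping is what correlates $\theta_1$ and $\theta_2$ even when $a$ and $c$ are uncorrelated.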

An idealized situation is considered first, where a parameter estimator is assumed to provide estimates of $a$ and $c$. Figure 2a shows performance of the linear Kalman


Figure 2 Performance given large uncertainty in parameter a. a, Linear Kalman filter; b, improved filter. Parameters: solid, true value; dashed, a; dash-dot, c. P_a: dashed, P_a; dash-dot, P_c. x: solid, true value; dashed, estimate

filter for state estimation using parameter values appearing in the parameter graph at the top. Thus $a$ stays at a starting value with 100% error for some time before gradually converging to a bias-free estimate, and $c$ stays mostly at the true value with a short excursion to slightly higher values in the middle of the run. The Kalman filter uses the correct initial value of $x$, but with $P_x(0) = 100$. After the initial effect of $P_x(0)$ dies out, the filter gain settles to the steady-state compromise between the $Q$ and $R$ values. The state estimate is understandably poor. The filter places too much reliance on the dynamic model, assuming that the uncertainty in the dynamics is only as indicated by $Q$.
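The linear Kalman filter used for this comparison can be sketched for the discretized scalar model (an illustrative reconstruction; the variable names and the pairing of each measurement with the input held over the preceding period are assumptions):

```python
import numpy as np

def kalman_scalar(ys, us, a_hat, c_hat, T=1.0, Q=1e-3, R=1e-3,
                  x0=1.0, P0=100.0):
    """Linear Kalman filter for the discretized scalar model
    x(k) = th1 x(k-1) + th2 u(k-1) + noise, y(k) = c_hat x(k) + v(k),
    run with the (possibly wrong) parameter values a_hat, c_hat."""
    th1 = np.exp(-a_hat * T)
    th2 = (1.0 - th1) / a_hat
    x, P, est = x0, P0, []
    for y, u in zip(ys, us):            # us[k] is held over the interval ending at k
        x, P = th1 * x + th2 * u, th1 * P * th1 + Q        # time update
        K = P * c_hat / (c_hat * P * c_hat + R)            # gain
        x, P = x + K * (y - c_hat * x), (1 - K * c_hat) * P  # measurement update
        est.append(x)
    return np.array(est)
```

With wrong parameters the fixed $Q$ understates the model error, reproducing the over-reliance on the dynamic model described above.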

Performance of the improved filter is shown in Figure 2b. Here the vector $a$ is chosen to comprise $a$ and $c$, and $P_a$ is determined using the algorithm given earlier incorporating the prior knowledge that $a$ and $c$ are uncorrelated. Non-negative definiteness of $P_a$ is ensured by using an a posteriori minimum of zero for its elements, followed by a second linear regression when necessary. Future predictions are used over six instants to identify the two diagonal elements of $P_a$. Using this $P_a$ in the improved filter in the previous section, the state estimate is significantly better. Due to the identified uncertainty in $a$, less reliance is placed on the dynamic model, the gain is higher and the estimate follows the measurement model more closely. However, when both the dynamic and the measurement model have uncertainty beyond $Q$ and $R$, as is the case in the middle of the run, then added reliance can be placed on neither. No improvement is then possible due to the intrinsic limitation of the system.

Figures 3a and 3b repeat the run of Figure 2b with prediction horizons of four and eight instants, respectively. A horizon of four instants for identifying two unknowns in $P_a$ is clearly less reliable, placing excessive uncertainty on $c$ in the first part, so that the $x$ estimate is

Figure 3 Improved filter of Figure 2b with: a, shorter prediction horizon; b, longer prediction horizon. P_a: dashed, P_a; dash-dot, P_c. x: solid, true value; dashed, estimate

Figure 4 Improved filter with choice a = θ and: a, short prediction horizon; b, normal prediction horizon. P_a: solid, P_θ1θ2; dashed, P_θ1; dash-dot, P_θ2. x: solid, true value; dashed, estimate

poorer than in Figure 2b, although still better than in Figure 2a. A horizon of eight instants in Figure 3b leads to a better $P_a$ compared to Figure 2b, and consequently to improvement in the state estimate near the beginning.

If $a$ is chosen to comprise $\theta_1$ and $\theta_2$ instead of $a$ and $c$, then according to the mapping in Equation (19), $\theta_1$ and $\theta_2$ would be expected to be correlated and $P_a$ would no longer be diagonal. The results for this case are shown in Figures 4a and 4b with prediction horizons of four and eight instants, respectively. The identified $P_a$ in Figure 4a is expectedly poor as three unknown elements are identified using a horizon of only four instants, and the state


Figure 5 Performance given large uncertainty in parameter c: a, linear Kalman filter; b, improved filter. Parameters: solid, true value; dashed, a; dash-dot, c. P_a: dashed, P_a; dash-dot, P_c. x: solid, true value; dashed, estimate

Figure 6 a, b Performance given slowly converging c from RLS: a, linear Kalman filter; b, improved filter. Parameters: solid, true value; dashed, a; dash-dot, c. P_a: dashed, P_a; dash-dot, P_c. x: solid, true value; dashed, estimate

estimate is partly even worse than in Figure 2a. The horizon of eight instants in Figure 4b is almost three times the number of unknown elements in $P_a$, and in this respect corresponds to the case in Figure 2b for the other choice of $a$. For these two cases, the state estimates are comparable. Thus although the choice of $a$ in Figure 4 eliminates the approximation in Equation (7) to identify a better $P_a$, this advantage is counteracted by the added non-linearity in the state-space model given by Equations (16), (17) and (20).

A second idealized situation is shown in Figure 5, where the unknown parameter estimator yields accurate $a$ (not visible due to overlap with the true value), and wrong but converging $c$. The linear Kalman filter in Figure 5a performs poorly due to excessive reliance on the measurement model, unaware of the inaccurate $c$. The improved filter in Figure 5b, with identical specifications as in Figure 2b, identifies the uncertainty in $c$ and improves the state estimate by reducing the gain in order to follow the dynamic model more closely.

The case of parameter estimation using a parallel recursive least-squares (RLS) technique is shown in Figure 6. The parameter estimator uses initial values of $a$ and $c$ that are 0.9 and 3 times the true value, respectively, and assigns them commensurate confidence levels by setting their initial error variances to $10^{-4}$ and $10^{-3}$, respectively. The resulting estimates appear at the top of Figure 6a, showing convergence to biased estimates as is expected of RLS in the presence of additive noise in the input-output model of Equation (18). In the first half of the run, these estimates are qualitatively similar to those in Figure 5, and so are the performances of the linear Kalman filter and the improved filter with identical specifications as in Figure 5. In the second half, however, estimates of both $a$ and $c$ have comparable error, and added reliance on neither the dynamic nor the measurement model is warranted. The gain and the state estimate of the improved filter in Figure 6b are, for the second part of the run, comparable to those of the linear Kalman filter in Figure 6a and no improvement is possible.

Figure 6 c, d Performance given slowly converging c from RLS: c, extended filter with P_a from RLS; d, improved filter with choice a = θ. Parameters: solid, true value; dashed, a; dash-dot, c. P_a: dashed, P_θ1; dash-dot, P_θ2. x: solid, true value; dashed, estimate

The RLS starts out with overly low variances for $a$ and $c$, and therefore yields a wrong parameter-error covariance matrix throughout the run. Use of this covariance matrix, instead of $P_a$, in the extended filter of the previous section leads to only barely perceptible improvement in Figure 6c compared to the linear Kalman filter in Figure 6a. Using the other definition $a = \theta$ for the improved filter gives the results of Figure 6d, where $P_a$ is prespecified to be diagonal despite the expected correlation


between $\theta_1$ and $\theta_2$, and a prediction horizon of six instants is used. The result shows that the correlation is not crucial and the state estimate is comparable to that in Figure 6b.

In summary, the presented results demonstrate the advantage of the improved filter in basing its relative weighting of the dynamic and measurement models not only on the noise variances Q and R but also on possible degradation in the worth of each model due to errors in the parameters.

Conclusions

A new technique is presented to account for uncertainty in the parameter values used by state estimators. The parameter uncertainty is estimated from prediction residuals for the model used by the decoupled parameter estimator, and is then incorporated in the state estimator to modify the state-error covariance. A simple simulation example demonstrates how the new technique improves the state estimates, how its performance compares with other options and the conditions that render its use especially beneficial or only marginal.

Acknowledgment

The first-named author acknowledges financial support from the Schweizerischer Nationalfonds zur Förderung der wissenschaftlichen Forschung.

References

1 Nelson, L. W. and Stear, E. IEEE Trans. Autom. Control 1976, 21, 94
2 Friedland, B. IEEE Trans. Autom. Control 1969, 14, 359
3 Jazwinski, A. H. Automatica 1969, 5, 475
4 Jazwinski, A. H. Stochastic Processes and Filtering Theory, Academic Press, New York, USA, 1970
5 Mehra, R. K. IEEE Trans. Autom. Control 1972, 17, 693
6 Nahi, N. E. and Weiss, I. M. Inform. Control 1978, 39, 212
7 Chung, R. C. and Bélanger, P. R. IEEE Trans. Autom. Control 1976, 21, 98
8 Ljung, L. and Söderström, T. Theory and Practice of Recursive Identification, MIT Press, London, UK, 1983
9 Brewer, H. W. Control Dyn. Syst. 1976, 12, 491
10 Zheng-ou, W. A new method of on-line estimation of noise covariances Q and R, IFAC Symp. Ident. Par. Est., York, UK, 1985
11 Sangsuk-Iam, S. and Bullock, T. E. Direct estimation of noise covariances, American Control Conference, Atlanta, GA, USA, 1988
