

Variance Behavior and Signification in Probabilistic Framework Applied to Vehicle Localization

Benjamin Mourllion, LIVIC (INRETS-LCPC), 14 route de la Minière, 78000 Versailles-Satory, France. Email: [email protected]

Dominique Gruyer, LIVIC (INRETS-LCPC), 14 route de la Minière, 78000 Versailles-Satory, France. Email: [email protected]

Alain Lambert, IEF UMR 8622 CNRS, Centre d'Orsay, 91405 Orsay cedex, France. Email: [email protected]

Abstract—This paper discusses the signification and representation of the covariance in probabilistic vehicle localization processes. In particular, we focus on its behavior along different trajectories (straight lines and curves). We discuss the interpretation, on one hand, of the mean (computed through a Monte Carlo simulation or estimated through an extended Kalman filtering process) and, on the other hand, of the uncertainty ellipses.

I. Introduction

Classical localization processes use proprioceptive and exteroceptive sensor data to estimate a state vector of a vehicle (such as the position (x, y), the heading θ and, sometimes, the velocity V). The Kalman filter (KF) [3] is a well-known recursive state estimator for linear systems. However, in many applications, vehicle localization for instance, the process function (the vehicle model in this case) and/or the measurement function can be nonlinear. In order to solve this problem, the Kalman filter was adapted into the Extended Kalman Filter (EKF) [6], [7]. The idea is that the process and/or measurement functions are linearized with a first-order Taylor approximation, making it possible to apply the traditional KF. Let us recall that the Kalman filter is optimal for linear systems [12]; the approximations carried out through the EKF's linearization, however, make it a suboptimal estimator. The EKF is the most popular and the most widely used KF variant. Other Kalman filter variants exist, such as the UKF [2], [11], DD1 and DD2 [9] or the CDKF [1]. As the aim of this paper is to discuss the variance behavior in a general way, we focus only on a Monte Carlo (MC) simulation and on the use of the EKF. This discussion nevertheless applies to the other filters, since they are based on the same probabilistic framework. The interested reader can refer to [4], [5], [8] for studies of the other filters.

Section II reviews some concepts of Kalman filtering applied to vehicle localization: it introduces the EKF, presents the process model used for the filtering, and recalls how the variance can be represented according to [10]. Section III presents an example performed with true data. To explain several aspects of the covariance behavior, we focus in section IV on an academic example. We then point out some characteristics of the variance, which we discuss in section V.

II. Theoretical Background

A. Extended Kalman Filter and Notations

Let f be the evolution function and g the measurement function. v and n are respectively the process noise and the measurement noise; both are assumed white, Gaussian and centered (v̄ = 0, n̄ = 0).

$$
\begin{cases}
X_{k+1} = f(X_k, u^*_k, v_k), & v \sim \mathcal{N}(\bar{v}, R_v) \\
Z_k = g(X_k, n_k), & n \sim \mathcal{N}(\bar{n}, R_n)
\end{cases}
\tag{1}
$$

where $u^*_k$ is the noisy command: $u^*_k \sim \mathcal{N}(u_k, R_\gamma)$.

The Kalman filter is defined in two steps: the first one, called the predictive step, allows us, at time k + 1, to estimate the state of the system based on the inputs at time k. The second step, called the corrective step, allows us to update the result of the predictive step with a new measurement. The EKF follows exactly the same scheme as the KF; the difference lies in the computation of the matrices. In the KF, the process and measurement matrices are composed of the "true" linear functions whereas, in the EKF, these matrices (called Jacobian matrices in this case) are composed of the first-order Taylor-linearized functions. The following algorithm shows the implementation of the EKF:

• Filter Initialization:

$$
\hat{X}_0 = E[X_0], \qquad P_0 = E[(X_0 - \hat{X}_0)(X_0 - \hat{X}_0)^t],
$$
$$
R_v = E[(v - \bar{v})(v - \bar{v})^t], \qquad R_n = E[(n - \bar{n})(n - \bar{n})^t]
\tag{2}
$$

• ∀k ∈ [[1, …, ∞[[

Predictive Step

Process Jacobian matrix computation:

$$
A_k = \nabla_X f(X, u^*_k, \bar{v})\big|_{X = \hat{X}_{k-1|k-1}}, \qquad
D_v = \nabla_v f(\hat{X}_{k-1|k-1}, u^*_k, v)\big|_{v = \bar{v}}, \qquad
B_k = \nabla_u f(\hat{X}_{k-1|k-1}, u, \bar{v})\big|_{u = u^*_k}
\tag{3}
$$


Fig. 1. Bicycle model (wheelbase l, steering angle ϕ, heading θ, frame axes X, Y).

State and covariance prediction computation:

$$
\hat{X}_{k|k-1} = f(\hat{X}_{k-1|k-1}, u^*_k, \bar{v}), \qquad
P_{k|k-1} = A_k P_{k-1|k-1} A_k^t + D_v R_v D_v^t + B_k R_\gamma B_k^t
\tag{4}
$$

Corrective Step

Observation Jacobian matrix computation:

$$
C_k = \nabla_X g(X, \bar{n})\big|_{X = \hat{X}_{k|k-1}}, \qquad
D_n = \nabla_n g(\hat{X}_{k|k-1}, n)\big|_{n = \bar{n}}
\tag{5}
$$

State and covariance correction:

$$
K_k = P_{k|k-1} C_k^t \left[ C_k P_{k|k-1} C_k^t + D_n R_n D_n^t \right]^{-1},
$$
$$
\hat{X}_{k|k} = \hat{X}_{k|k-1} + K_k \left[ Z_k - g(\hat{X}_{k|k-1}, \bar{n}) \right], \qquad
P_{k|k} = \left[ I - K_k C_k \right] P_{k|k-1}
\tag{6}
$$
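To make equations (3)–(6) concrete, here is a minimal Python sketch of one EKF cycle. It is our illustration, not code from the paper: the names f, g, jac_f and jac_g and their signatures are assumptions, standing for the process function, the measurement function and their Jacobians.

```python
import numpy as np

def ekf_step(x_est, P, u_star, z, f, g, jac_f, jac_g, Rv, Rn, Rgamma):
    """One EKF cycle. x_est, P: a posteriori state and covariance at k-1;
    u_star: noisy command; z: new measurement."""
    # Predictive step (equations 3-4); noises are taken at their mean (zero)
    A, Dv, B = jac_f(x_est, u_star)          # Jacobians of f w.r.t. X, v, u
    x_pred = f(x_est, u_star)
    P_pred = A @ P @ A.T + Dv @ Rv @ Dv.T + B @ Rgamma @ B.T
    # Corrective step (equations 5-6)
    C, Dn = jac_g(x_pred)                    # Jacobians of g w.r.t. X, n
    S = C @ P_pred @ C.T + Dn @ Rn @ Dn.T    # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_corr = x_pred + K @ (z - g(x_pred))
    P_corr = (np.eye(x_corr.size) - K @ C) @ P_pred
    return x_corr, P_corr
```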

B. Process Model Presentation

The model that we take into account in the Kalman filtering process is a classical bicycle model with state vector [x, y, θ, V]^t (see figure 1).

$$
\begin{cases}
x_{k+1} = x_k + V_k T_e \cos\!\left(\theta_k + \dfrac{V_k T_e}{2l}\tan\varphi\right) \\[4pt]
y_{k+1} = y_k + V_k T_e \sin\!\left(\theta_k + \dfrac{V_k T_e}{2l}\tan\varphi\right) \\[4pt]
\theta_{k+1} = \theta_k + \dfrac{V_k T_e}{l}\tan\varphi \\[4pt]
V_{k+1} = V_k + a T_e
\end{cases}
\tag{7}
$$

The command is given by (a, ϕ)^t, where a is the acceleration and ϕ the steering (wheel) angle. T_e is the sampling time.
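Equation (7) translates directly into code. The sketch below is our illustration (the name bicycle_step is ours); it is reused in the Monte Carlo experiment of section IV.

```python
import numpy as np

def bicycle_step(state, command, Te, l):
    """One prediction with the bicycle model of equation (7).
    state = [x, y, theta, V]; command = (a, phi); Te: sampling time;
    l: wheelbase."""
    x, y, theta, V = state
    a, phi = command
    dtheta = V * Te / l * np.tan(phi)            # heading increment
    x += V * Te * np.cos(theta + dtheta / 2.0)   # midpoint heading
    y += V * Te * np.sin(theta + dtheta / 2.0)
    theta += dtheta
    V += a * Te
    return np.array([x, y, theta, V])
```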

C. (Co)Variance Representation

The result of the Kalman filtering gives an a posteriori Gaussian state vector $\hat{X}_{k|k}$ and an a posteriori covariance matrix $P_{k|k}$. We can write the pdf (probability density function) of the random Gaussian vector as follows:

$$
p_{X_{k|k}} = \mathcal{N}(\hat{X}_{k|k}, P_{k|k})
= \frac{1}{(2\pi)^{\lambda/2}\sqrt{|P_{k|k}|}}
\exp\!\left(-\frac{1}{2}\left(X_k - \hat{X}_{k|k}\right)^t P_{k|k}^{-1} \left(X_k - \hat{X}_{k|k}\right)\right)
\tag{8}
$$

with $|P_{k|k}|$ the determinant of $P_{k|k}$ and λ the dimension of the Gaussian vector (so, here, λ = 4). The iso-density surfaces are ellipsoids and are given by the following equation:

$$
\left(X_k - \hat{X}_{k|k}\right)^t P_{k|k}^{-1} \left(X_k - \hat{X}_{k|k}\right) = \text{constant}
\tag{9}
$$

In this paper, we focus only on the graphical representation of the position. We therefore apply our study to the 2 × 2 upper-left sub-matrix of the covariance matrix:

$$
P_{k|k} = \begin{pmatrix}
\sigma^2_{xx} & \sigma^2_{xy} & \sigma^2_{x\theta} & \sigma^2_{xV} \\
\sigma^2_{yx} & \sigma^2_{yy} & \sigma^2_{y\theta} & \sigma^2_{yV} \\
\sigma^2_{\theta x} & \sigma^2_{\theta y} & \sigma^2_{\theta\theta} & \sigma^2_{\theta V} \\
\sigma^2_{Vx} & \sigma^2_{Vy} & \sigma^2_{V\theta} & \sigma^2_{VV}
\end{pmatrix}_{k|k}
\tag{10}
$$

As we are in dimension two, the position is represented by an ellipse (called the uncertainty ellipse). According to [10], the parameters of the ellipse can be obtained through the computation of the eigenvalues (the lengths of the two axes) and the eigenvectors (orientation of the ellipse) of the covariance sub-matrix of $P_{k|k}$. These eigenvalues are weighted by the factor $k = \sqrt{-2\log(1 - P_a)}$, where $P_a$ is the membership probability.
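As an illustrative reading of this construction (the function name and the default Pa are ours), the half-axis lengths can be computed as the square roots of the eigenvalues of the position sub-matrix, scaled by k:

```python
import numpy as np

def uncertainty_ellipse(P_xy, Pa=0.99):
    """Half-axis lengths and orientation of the uncertainty ellipse of a
    2x2 position covariance P_xy, for a membership probability Pa."""
    eigvals, eigvecs = np.linalg.eigh(P_xy)       # P_xy is symmetric
    k = np.sqrt(-2.0 * np.log(1.0 - Pa))          # weighting factor
    half_axes = k * np.sqrt(eigvals)              # minor, major half-axes
    major = eigvecs[:, 1]                         # eigenvector of largest eigenvalue
    angle = np.arctan2(major[1], major[0])        # ellipse orientation (rad)
    return half_axes, angle
```

For the position, P_xy is the 2 × 2 upper-left sub-matrix of equation (10).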

III. True Data Filtering

Let us apply the theoretical tools presented in the previous section to true data. The experimentation took place on the Satory test tracks. The experimental vehicle is equipped with an odometric sensor, a gyroscope, an accelerometer, a GPS receiver and a steering wheel coder. We use the longitudinal acceleration (measured through the accelerometer) and the steering wheel coder as the command for the predictive step. The odometer and the gyrometer are then used for the corrective step. In this experiment we deliberately do not use the GPS data in the corrective step: the GPS, used in RTK mode, provides the reference. Figure 2 shows a typical result of this localization stage. Ellipses are displayed every two seconds. The bold curve shows the reference trajectory (obtained through the GPS). The two other solid curves and the dashed one represent respectively the left and right sides of the road and the middle of the road. Dots are the estimated positions and the ellipses are the uncertainty ellipses in position.

Fig. 2. True data filtering localization results (x and y in meters, axis scale ×10⁵; the plot shows the uncertainty ellipses, the road and the true path). Ellipses are displayed every two seconds.

Fig. 3. Trace of the covariance matrix (m²) versus time (s).

The vehicle starts at the position (5.8181 × 10⁵ m, 1.2084 × 10⁵ m) and runs at a constant speed of 14.2 m/s (approximately 50 km/h) during the whole experiment.

Figure 3 shows the trace of the covariance sub-matrix (equation 10). At the initial time, the state of the vehicle is known very accurately, so the variance is nearly null.

We can emphasize two aspects. At first glance, the behavior of the ellipse (figure 2) may seem surprising: the ellipse orientation during the left curve does not follow the orientation of the vehicle. The second interesting aspect concerns the behavior of the variance: why is the slope of the curve plotted in figure 3 gentler during t ∈ [18; 26] (corresponding to the left road curve) than during t ∈ [0; 18] ∪ [26; 40] (corresponding to the straight lines)? To study these two aspects, we focus in the next section on the propagation of an initial noise on the heading of a vehicle.

IV. Heading Initial Noise Propagation Through the Predictive Step

In order to focus on the specific aspects of the variance behavior highlighted in the previous section, we will use a very simplified example.

Fig. 4. Monte Carlo simulation for σθ = 0.8 rad (x and y in meters; the start point is marked).

Let us consider the propagation of only an initial noise on the heading θ. This means that at time t = 0, only the component θ_{0|0} of the vector X_{0|0} is corrupted by a centered Gaussian noise with standard deviation σθ (x_{0|0}, y_{0|0} and V_{0|0} are perfectly known).

The vehicle model used to simulate the data and the model used in the predictive step are the same; therefore the model noise is null and R_v is a null matrix. Finally, the command (a, ϕ)^t is also noise-free.

The scenario is the following: the vehicle goes in a straight line, then turns to the left, and goes in a straight line again.

Figure 4 shows the Monte Carlo simulation performed with 500 particles. The banana-shaped cloud (Monte Carlo cloud) represents the different states of the vehicle. From this cloud we can compute the mean and the variance of the points and then draw iso-density curves (equation 9). Figure 5 shows the result with the uncertainty ellipse (called the MC ellipse in this case) instead of the Monte Carlo cloud. In this simulation, ellipses are displayed every 2.5 seconds.
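A Monte Carlo propagation of this kind can be sketched as follows, reusing the bicycle_step function of section II-B. The sampling time, wheelbase, number of steps and steering profile are assumptions of ours (the paper does not give them); only σθ = 0.8 rad, the 500 particles and the constant speed come from the text.

```python
import numpy as np

N, Te, l = 500, 0.1, 2.5                # particle count; assumed Te and wheelbase
sigma_theta, V0 = 0.8, 14.2             # initial heading noise; constant speed

particles = np.zeros((N, 4))            # state [x, y, theta, V]
particles[:, 2] = np.random.normal(0.0, sigma_theta, N)  # only theta is noisy
particles[:, 3] = V0

for step in range(1000):                # 100 s of simulated time
    phi = 0.014 if 300 <= step < 500 else 0.0   # assumed left-curve steering
    for i in range(N):
        particles[i] = bicycle_step(particles[i], (0.0, phi), Te, l)

mu_mc = particles[:, :2].mean(axis=0)             # MC mean in position
P_mc = np.cov(particles[:, :2], rowvar=False)     # MC position covariance
```

Feeding P_mc to the uncertainty_ellipse function of section II-C then yields MC ellipses like those of figure 5.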

Now, let us compute the iso-density surface through the Kalman paradigm. We use only the predictive step of the EKF. Recall that we propagate only an initial noise, so R_v is null. In order to have an invertible covariance matrix, we set a small value on the x variance in P_{0|0}. The ellipse obtained in this way is called the EKF ellipse (see figure 7).

As we propagate an initial noise through the prediction step only, intuition suggests that the variance can only increase or stay constant. Figures 6 and 8 show the half-axis lengths of the MC ellipses and of the EKF ellipses respectively (functions of the variance only). During the straight-line period, the intuition is verified (the variance increases). During the left curve, however, the ellipses are no longer perpendicular to the trajectory and the lengths of the axes decrease.

Several aspects are worth underlining:

• due to the strong correlation between x and y during the curve, the variance decreases;


Fig. 5. Ellipses obtained from the Monte Carlo simulation (figure 4); x and y in meters.

Fig. 6. Half-axis lengths (m) of the ellipse obtained from the Monte Carlo simulation (figure 4) versus time (s), for the minor and major axes.

Fig. 7. Ellipses obtained from the a posteriori covariance matrix of the Kalman filter process; x and y in meters.

Fig. 8. Half-axis lengths (m) of the ellipse obtained from the a posteriori covariance matrix of the Kalman filter process versus time (s), for the minor and major axes.

• the centers of the MC ellipses and of the EKF ellipses are not the same: a bias exists;

• finally, a variance appears on x when it is computed through the Monte Carlo method (see the minor axis evolution in figure 6) which does not exist when it is computed through the EKF method (see the minor axis evolution in figure 8).

V. Discussion

A. Bias

It appears that a bias exists between the mean computed with the Monte Carlo method and the mean computed with the EKF method.

We notice that the prediction is computed in the polar space (velocity and heading) while the result is expressed in the Cartesian space (equation 7). Our problem can therefore be viewed as a polar-to-Cartesian conversion problem.

Let us generate a cloud of points in the polar space. The noise on the radius (respectively on the angle) is Gaussian, centered, with standard deviation σ_r (respectively σ_θ). We take N = 5000 samples. Thus, we obtain a set of points with polar coordinates (r, θ)^t, with r ∼ N(r̄, σ_r) and θ ∼ N(θ̄, σ_θ), where (r̄, θ̄) = (1, π/2). Let f denote the transformation from polar to Cartesian coordinates.

Let $\mu_c = \overline{f(r, \theta)}$ be the mean of the set of points computed in the Cartesian space (the mean of the transformed points), and let $\mu_p = f(\bar{r}, \bar{\theta})$ be the mean computed in the polar space and then expressed in the Cartesian space. The larger the bearing standard deviation, the nearer the mean $\mu_c$ is to the origin of the global frame (see figure 9), and thus $\mu_c$ lies outside the set of points.

We are thus in possession of two means, µ_c and µ_p.
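The bias is easy to reproduce numerically. In this sketch, σ_r and σ_θ are assumed values chosen to make the banana shape of figure 9 visible; only (r̄, θ̄) = (1, π/2) and N = 5000 come from the text.

```python
import numpy as np

N, r_bar, theta_bar = 5000, 1.0, np.pi / 2
sigma_r, sigma_theta = 0.02, 0.35        # assumed standard deviations

r = np.random.normal(r_bar, sigma_r, N)
theta = np.random.normal(theta_bar, sigma_theta, N)
pts = np.column_stack((r * np.cos(theta), r * np.sin(theta)))

mu_c = pts.mean(axis=0)                  # mean of the transformed points
mu_p = np.array([r_bar * np.cos(theta_bar),
                 r_bar * np.sin(theta_bar)])  # transform of the polar mean
print(mu_c, mu_p)  # mu_c is pulled toward the origin; mu_p stays on the arc
```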


Fig. 9. The banana shape obtained with 5000 samples, with the MC center, MC ellipse, EKF center and EKF ellipse. The black star is the Monte Carlo mean computed in the polar space; the circle is the Monte Carlo mean computed in the Cartesian space.

Fig. 10. Distributions of the variables r, θ, x and y. The r and θ distributions are Gaussian.

Which one must we take into account in order to have the best estimated state? To answer this question, let us examine the signification of the mean and the covariance.

Considering n samples (z₁, …, zₙ) of a Gaussian law N(µ, σ²), their likelihood is:

$$
L(z_1, \ldots, z_n, \mu, \sigma^2)
= \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(z_i - \mu)^2}{2\sigma^2}}
= \left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^{\!n} e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(z_i - \mu)^2}
\tag{11}
$$

To find the maximum of this function, we use the logarithm:

$$
\log L(z_1, \ldots, z_n, \mu, \sigma^2)
= -\frac{n}{2}\log(\sigma^2) - \frac{n}{2}\log(2\pi) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(z_i - \mu)^2
$$

The partial derivatives with respect to the parameters µ and σ² are:

$$
\frac{\partial \log L}{\partial \mu} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(z_i - \mu),
\qquad
\frac{\partial \log L}{\partial (\sigma^2)} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^{n}(z_i - \mu)^2
$$

The derivatives are equal to zero for:

$$
\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} z_i,
\qquad
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(z_i - \hat{\mu})^2
$$

To verify that this point is a maximum, we compute the second-order partial derivatives:

$$
\frac{\partial^2 \log L}{\partial \mu^2} = -\frac{n}{\sigma^2},
\qquad
\frac{\partial^2 \log L}{\partial \mu \, \partial(\sigma^2)} = -\frac{1}{\sigma^4}\sum_{i=1}^{n}(z_i - \mu),
\qquad
\frac{\partial^2 \log L}{\partial (\sigma^2)^2} = \frac{n}{2\sigma^4} - \frac{1}{\sigma^6}\sum_{i=1}^{n}(z_i - \mu)^2
$$

At the point $(\hat{\mu}, \hat{\sigma}^2)$ we have $\sum_i (z_i - \hat{\mu}) = 0$ and $\sum_i (z_i - \hat{\mu})^2 = n\hat{\sigma}^2$, so the Hessian matrix is:

$$
\begin{pmatrix} -\dfrac{n}{\hat{\sigma}^2} & 0 \\[6pt] 0 & -\dfrac{n}{2\hat{\sigma}^4} \end{pmatrix}
\tag{12}
$$

This matrix is negative definite; therefore the point $(\hat{\mu}, \hat{\sigma}^2)$ is a maximum of the function L. So, under a Gaussian assumption, the sample mean and covariance are the maximum-likelihood estimators.
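This closed-form result can be checked numerically. The sketch below (with arbitrary sample values, and assuming SciPy is available) compares the analytical estimators with a direct maximization of the log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize

z = np.random.normal(3.0, 2.0, 1000)          # arbitrary Gaussian samples
mu_hat, sigma2_hat = z.mean(), z.var()        # closed-form MLE (1/n variance)

def neg_log_lik(p):
    mu, s2 = p
    return 0.5 * z.size * np.log(2 * np.pi * s2) + ((z - mu) ** 2).sum() / (2 * s2)

res = minimize(neg_log_lik, x0=[0.0, 1.0], bounds=[(None, None), (1e-6, None)])
print(res.x, (mu_hat, sigma2_hat))            # the two pairs agree
```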

In our case, only r and θ are Gaussian. Therefore, µ_p is the only mean to consider. A priori, the means of x and y are not maximum-likelihood estimators; these means have no special signification.

Smith and Cheeseman said in [10]: "By assuming the (multivariable) distribution is Gaussian, we can still make probabilistic estimates that agree with simulations very well, except when the angular errors are large."

First-order filters compute the transformation of the mean and the transformation of the variance with the linearized function. The state vector thus remains a Gaussian multivariable; this is not the case for second-order filters. We can see in figure 10 that the multivariable (r, θ)^t is Gaussian, but (x, y)^t is no longer Gaussian.
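As an illustration of this first-order propagation (our sketch, with the same assumed σ values as before), the EKF ellipse of figure 9 follows from the Jacobian of the polar-to-Cartesian transform evaluated at the mean:

```python
import numpy as np

r_bar, theta_bar = 1.0, np.pi / 2
sigma_r, sigma_theta = 0.02, 0.35        # assumed, as in the previous sketch

# Jacobian of f(r, theta) = (r cos(theta), r sin(theta)) at the mean
J = np.array([[np.cos(theta_bar), -r_bar * np.sin(theta_bar)],
              [np.sin(theta_bar),  r_bar * np.cos(theta_bar)]])
P_polar = np.diag([sigma_r ** 2, sigma_theta ** 2])

mu_ekf = np.array([r_bar * np.cos(theta_bar), r_bar * np.sin(theta_bar)])
P_ekf = J @ P_polar @ J.T                # remains Gaussian by construction
```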

The mathematical expectation of the estimated position differs from the most probable position when the variables are not Gaussian. So, since we have a large standard deviation on the angle, and according to [10], computing the transformation of the mean (µ_p) has a physical signification, but computing the mean of the transformation is physically irrelevant (the mean position is not the most probable position).

Moreover, comparing the length of the minor axis in figure 6 with that in figure 8 shows a difference. Again, this difference is due to the computation space, in which the variable is Gaussian in one case and no longer Gaussian in the other.


Fig. 11. Explanatory scheme of the propagation of an initial noise σθ on the heading.

Fig. 12. The variance on the position due to the propagation of an initial noise σθ on the heading does not depend on the trajectory taken between two positions.

B. Ellipse’s Orientation

In the previous subsection, we discussed the mean and the variance. In this subsection we discuss the orientation of the ellipse (covariance behavior).

Let us remember that we are in the particular case where everything is perfectly known except the initial value of the heading. In this context, it is quite easy to see why the major axis of the ellipse is not necessarily perpendicular to the trajectory. All trajectories are deterministic and known, except that the initial heading of the vehicle is known with an error. When the trajectories include a curve, they cross each other; the spatial spread of the cloud can thus become smaller.

Figure 11 explains graphically why the variance can decrease when we run only the predictive step.

Moreover, in this specific case, the variance in position at a position t_i does not depend on the trajectory between the initial position t_0 and the considered position t_i. For instance, in figure 12, if the vehicle starts from position t_0 and goes to position t_3 by either the solid trajectory or the dashed trajectory, the two resulting uncertainty ellipses are the same.

Mathematically, this phenomenon can be explained by the fact that, during the left curve, the variables x and y become strongly correlated. Intuitively, we expect the uncertainty ellipse to remain perpendicular to the trajectory because we reason as in solid mechanics. But here, each dot of the ellipse has the same velocity (there is no composition of velocities as in mechanics).

VI. Conclusion

This paper provides a better understanding of the behavior of the variance in position. After presenting the extended Kalman filter and the variance representation, we provided an example using true data, whose results could seem surprising.

In order to explain them, we focused on the propagation of a single initial uncertainty on the heading of the vehicle. In this case we saw that, using only the predictive step, the variance can decrease because of the covariance terms. Moreover, we explained the bias between the mean computed with the MC method and the mean computed with the EKF method. This result is important: minimum-variance estimators used with non-Gaussian noise become biased, are then no longer maximum-likelihood estimators, and the estimated state is consequently not the most probable state of the vehicle.

In future work, a study of the correlation between the coordinates x and y and the heading θ (σ_{xθ} and σ_{yθ}) would be very interesting.

References

[1] K. Ito and K. Xiong. Gaussian filters for nonlinear filtering problems. IEEE Transactions on Automatic Control, 45(5), May 2000.
[2] S.J. Julier and J.K. Uhlmann. A new extension of the Kalman filter to nonlinear systems. In Proceedings of the SPIE AeroSense International Symposium on Aerospace/Defense Sensing, Simulation and Controls, Orlando, Florida, September 2001.
[3] R.E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME-Journal of Basic Engineering, 82(D):35-45, 1960.
[4] T. Lefebvre, H. Bruyninckx, and J. De Schutter. Comment on 'A new method for the nonlinear transformation of means and covariances in filters and estimators'. IEEE Transactions on Automatic Control, 47, September 2002.
[5] T. Lefebvre, H. Bruyninckx, and J. De Schutter. Kalman filters for nonlinear systems: a comparison of performance. The International Journal of Control, 77(7), May 2004.
[6] F.L. Lewis. Optimal Estimation. John Wiley & Sons, New York, 1986.
[7] P.S. Maybeck. Stochastic Models, Estimation and Control, volume 2. McGraw-Hill, Singapore, 1984.
[8] B. Mourllion, D. Gruyer, A. Lambert, and S. Glaser. Kalman filters predictive steps comparison for vehicle localization. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1251-1257, Edmonton, Alberta, Canada, August 2005.
[9] M. Nørgaard, N.K. Poulsen, and O. Ravn. Advances in derivative-free state estimation for nonlinear systems. Technical Report IMM-REP-1998-15, April 2000.
[10] R.C. Smith and P. Cheeseman. On the representation and estimation of spatial uncertainty. International Journal of Robotics Research, 5(4), Winter 1986.
[11] E.A. Wan and R. van der Merwe. The unscented Kalman filter for nonlinear estimation. In Adaptive Systems for Signal Processing, Communications and Control Symposium (AS-SPCC), 2000.
[12] G. Welch and G. Bishop. An introduction to the Kalman filter. Technical Report 95-041, University of North Carolina at Chapel Hill, 1995.
