MEASUREMENT MODEL NONLINEARITY IN ESTIMATION OF DYNAMICAL SYSTEMS*
Manoranjan Majji,† J.L. Junkins,‡ and J.D. Turner§
The role of nonlinearity of the measurement model, and its interactions with the quality
of measurements and the geometry of the problem, is coarsely examined. It is shown
that for problems in astrodynamics several important conclusions can be drawn
by an examination of the transformations of density functions under various coordinate
systems and choices of variables. Probability density transformations
through nonlinear, smooth, analytic functions are examined, and the role of the
change of variables in the calculus of random variables is elucidated. It is shown
that the transformation of probability density functions through mappings provides
insight into problems, furnishing the analyst a priori with an understanding of the
interaction of nonlinearity, uncertainty and geometry in estimation problems.
Examples are presented to highlight salient aspects of the discussion. Finally, a
sequential orbit determination problem is analyzed, and the transformation formula
is shown to be helpful in making the choice of coordinates for estimation
of dynamic systems.
INTRODUCTION
Estimating parameters in algebraic models to extract useful information from measurements is
almost as old as science itself. Contributions of Gauss1 and others in the key areas of mensura-
tion have been foundational in the development of virtually all fields of research where the best
available measurements must be brought together along with the best mathematical models to
develop a better understanding of the physics of problems involved. Therefore, algebraic models
play a central role in engineering in general and astronautics in particular.
In the current paper, we propose to explore the fundamental role of nonlinear algebraic trans-
formations in estimation theory, motivated by some questions asked as a part of the second au-
thor’s tutorial lecture2 “How Nonlinear is it?” Thanks to several researchers over the past century
and a half, we indeed have achieved reasonable understanding of how uncertainty distributions
propagate through dynamical systems (recent work includes References 3, 4 and 5; several classical
works have also addressed this important problem). In orbital mechanics, Junkins et al.,6,7 among others,
have shown that uncertainty propagation through nonlinear dynamical systems is an important
problem in practical situations.
* Dedicated to Professor Kyle T. Alfriend for his contributions to astronautics.
† Research Associate, Aerospace Engineering Department, Texas A&M University, College Station, TX, 77840.
‡ Distinguished Professor, Regents Professor, Royce Wisenbaker Chair in Engineering, Aerospace Engineering Department, Texas A&M University, College Station, TX, 77843-3141, [email protected], Fellow, AAS.
§ Research Professor, Aerospace Engineering Department, Texas A&M University, College Station, TX, 77843-3141.
AAS 10-330
However, the nonlinearity of the algebraic models has received comparatively less attention
from researchers. This is frequently because of the remarkable effectiveness of nonlinear least
squares methods to solve estimation problems involving algebraic nonlinearity. Although the
classical method performs remarkably well and, more importantly, provides partial answers to
questions relating to the accuracy of the estimates obtained in the process, several important
insights can be had from knowing the true (joint) probability density function of the "answer"
obtained by any nonlinear inversion process. This is frequently made possible by a well-known
transformation of variables formula for analytic functions. This paper presents several applications of the
transformation of density functions using the change of variable formulations.
We first present a brief overview of nonlinear least squares and derive the estimates and
uncertainties of the methodology using linear error theory. For a class of important representative
problems, we then compare the estimated covariance with Monte Carlo results to analyze the
relation between the domain of convergence and the accuracy of the covariance bounds obtained by
linearization. Transformations of probability density functions are then introduced. For the same
problem, an exact analytical expression for the density function of the estimate is obtained, which
is then compared with Monte Carlo results for several representative cases. Effects
of the nonlinearity of the measurement model, the quality of measurements and the measurement
geometry are discussed in light of the results obtained for this example. The probability density
function transformation is then applied to a variety of problems in astrodynamics. Application to
the problem of sequential orbit determination and a comparison of accuracies by choice of
coordinates are then presented. The paper concludes with a discussion of the application of the
method to the uncertainty propagation problem in orbital mechanics.
GAUSSIAN LEAST SQUARES DIFFERENTIAL CORRECTIONS
Historically, Gaussian least squares differential correction (GLSDC8, also known as nonlinear
least squares) has been a well-known and widely used approach to solve such nonlinear
problems and to estimate uncertainty metrics in the form of the associated covariance from the
"normal" equations.
Let us now analyze the workings of the nonlinear least squares algorithm with the simplest class
of nonlinear mappings. Consider a nonlinear algebraic transformation between two vector spaces
given as
y = g(x)   (1)
Nonlinear least squares aims at finding the correction \Delta x to the current "guess" \hat{x} of the solution
of the nonlinear algebraic equations as

y = g(\hat{x} + \Delta x)   (2)

In such problems, we wish to minimize the sum squared error of the outputs, \Delta y = y - \hat{y} with \hat{y} = g(\hat{x}), given by
the cost functional
J = \frac{1}{2} \Delta y^T W \Delta y   (3)
Expanding (2) in a first-order Taylor series, we arrive at the least squares "normal equation"
solution for the correction vector as

\Delta x = (H^T W H)^{-1} H^T W \Delta y   (4)
where H = \partial g / \partial x |_{\hat{x}}. Given sufficient proximity to the true solution, an iterative process such as the
above converges to the true state value, \hat{x} \to x^*.
The most important information in this simple analysis is the approximate covariance of the state
estimate one derives as

E[(\hat{x} - x^*)(\hat{x} - x^*)^T] = (H^T R^{-1} H)^{-1}   (5)

where R is the measurement noise covariance (taking the weight matrix as W = R^{-1}), and the
Jacobian H = \partial g / \partial x |_{x^*} is evaluated at the converged estimate. This is in fact quite an
important piece of information.
In practical estimation problems (or even in general "root solving"), we wish to examine what
happens to the statistics of y for "small" deviations about x^*, denoted by \Delta x. For simplicity,
we assume that this mapping is smooth and single-valued. Considering zero-mean deviations \Delta x
about this nominal x^* value, with the first two moments E[\Delta x] = 0, E[\Delta x \Delta x^T] = P and
finite variance, let us examine the first statistical moment (mean) of the output deviations from
the nominal value using a Taylor series expansion:

\Delta y_i = \sum_{j} \left. \frac{\partial g_i}{\partial x_j} \right|_{x^*} \Delta x_j + \frac{1}{2} \sum_{j_1} \sum_{j_2} \left. \frac{\partial^2 g_i}{\partial x_{j_1} \partial x_{j_2}} \right|_{x^*} \Delta x_{j_1} \Delta x_{j_2} + \text{HOT}   (6)
Clearly, taking the expected value of both sides, even for "small" variations for which the
second-order terms are not negligible, the output is NOT zero mean:

E[\Delta y_i] = \frac{1}{2} \sum_{j_1} \sum_{j_2} \left. \frac{\partial^2 g_i}{\partial x_{j_1} \partial x_{j_2}} \right|_{x^*} P_{j_1 j_2} + E[\text{HOT}]   (7)
In fact, to second order, a so-called quadratic bias is incurred in the mean of the outputs.
Obviously, even "small" changes modify the mean of the distribution of y. Of course, this
discussion invites the analyst to worry about convergence. Is the quadratic bias real, or simply an
artifact of the truncation of Eq. (2)? One can easily construct examples where it is "real" (a quadratically
nonlinear g, for example), and other nonlinear mappings where including the higher order
terms causes the accurately converged first moments to be zero. Setting aside these questions of
fundamental importance to the analyst, it is strikingly clear that the "unbiasedness" of
the estimator, usually held quite sacred in the linear estimation paradigm, does not seem
necessary in the nonlinear case.
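The quadratic bias of Eq. (7) is easy to exhibit numerically. The following sketch uses a hypothetical scalar model y = x + 0.5 x^2, with assumed nominal x* = 1 and sigma = 0.1 (none of these numbers come from the paper), and compares the Monte Carlo mean of the output deviations with the second-order prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic measurement model (an assumed example, not the
# paper's), evaluated about the nominal x* = 1.
def g(x):
    return x + 0.5 * x**2

x_star = 1.0
sigma = 0.1                       # "small" input deviations
dx = rng.normal(0.0, sigma, 200_000)

# Zero-mean input deviations nevertheless yield a biased output deviation:
dy = g(x_star + dx) - g(x_star)
mc_bias = dy.mean()

# Second-order prediction from Eq. (7): 0.5 * g''(x*) * P, with g'' = 1
predicted_bias = 0.5 * 1.0 * sigma**2

print(mc_bias, predicted_bias)    # both close to 0.005
```

Even at sigma = 0.1 the output mean shifts by roughly half a percent of the nominal output, exactly as the second-order term predicts.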
So how is GLSDC able to be so accurate in many applications? Under what circumstances is it
likely to fail? Of course, the important question about the convergence of GLSDC and the validity of the
linear error theory implicit in this historical algorithm is an immediate one. More broadly, it is
relevant to ask what happens to estimation algorithms generally in the presence of large
measurement noise. In other words, for a given problem, what are the tradeoffs of measurement noise
level vis-à-vis the radius of convergence of the approximations implicit in a nonlinear estimation
algorithm? This is an important family of questions. While there are insights in the literature, we
return to basics in this paper, and focus mainly on nonlinear measurement models. We note
that considerable recent effort9 has been invested in the analogous problem of nonlinear
propagation through the solution of nonlinear differential equations, and that the unification
of these two methodologies is needed to complete the picture.
In an attempt to address some of these questions, we found it convenient to look at a simple
problem and also develop ancillary analytical calculations and tools for this purpose. To this end,
let us consider a simple set of measurement equations,

r_j = \sqrt{x_T^2 + y_T^2} + v_{r_j}
\theta_j = \tan^{-1}(y_T / x_T) + v_{\theta_j}   (8)

representing range and elevation measurements of a given target. Four representative situations are
now shown.
Fig 1a: Case 1
Fig 1b: Case 2
Fig 1c: Case 3
Fig 1d: Case 4
Figure 1: Effects of Nonlinearity, Uncertainty and Measurement Geometry on Quality of Estimates obtained in GLSDC.
With reference to Fig. 1 above and the associated sub-figures, let us discuss each case of the
performance of the GLSDC algorithm. First let us consider Fig. 1b and case 2. In this case,
the measurement noise magnitudes in range and azimuth are equal and small
(\sigma_r = 0.01, \sigma_\theta = 0.01). The covariance ellipses plotted in black are the result of the
GLSDC algorithm operating on a set of 100 measurements each. The colored contour plots, on the
other hand, represent the true probability density functions in the nonlinearly transformed space (to
be derived separately in the developments of the paper below). Continuing from our discussion, it
is clear that in Fig. 1b the GLSDC-approximated density function coincides with the true density
function about the estimate. This apparent qualitative agreement between the approximate and
"true" a posteriori estimation error probability density functions is due to the fact that high precision
measurements have been made with relatively good geometry. This makes the assumptions
involved in the convergence of the GLSDC algorithm act favorably for the analyst. Hence there is
considerable agreement in the estimation error density functions in this case.
In stark contrast, consider the cases shown in Figs. 1a and 1c. In these figures, one of the noise
magnitudes dominates the other. This leads to some degree of violation of the
"linearity" assumption by the analyst. The violation manifests itself as a large deviation between the
true and approximate probability density functions of the estimation error. The "true" density
function, as shown, exhibits a marked non-Gaussian behavior, and the linear error theory
approximation is unable to capture this effect. Thus the quality of measurements and the geometry play an
important role. A similar situation is shown in Fig. 1d, where the effect of a pure loss of precision on the
a posteriori estimation error probability density function is shown.
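The GLSDC iteration of Eqs. (2)-(4) applied to the range/angle model of Eq. (8) can be sketched as follows; the target location, noise levels and initial guess below are assumed for illustration, not taken from the figure cases:

```python
import numpy as np

rng = np.random.default_rng(1)

x_true = np.array([3.0, 4.0])          # assumed target position (x_T, y_T)

def h(x):
    # Range and angle measurement model of Eq. (8)
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

def jac(x):
    # Jacobian H of the measurement model
    r = np.hypot(x[0], x[1])
    return np.array([[x[0] / r,      x[1] / r],
                     [-x[1] / r**2,  x[0] / r**2]])

sig = np.array([0.01, 0.01])           # sigma_r, sigma_theta
W = np.diag(1.0 / sig**2)              # weight matrix W = R^{-1}
y = h(x_true) + rng.normal(0.0, sig)   # one noisy measurement pair

x_hat = np.array([1.0, 1.0])           # deliberately poor initial guess
for _ in range(20):                    # GLSDC / Gauss-Newton iteration, Eq. (4)
    H = jac(x_hat)
    dy = y - h(x_hat)
    dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ dy)
    x_hat = x_hat + dx
    if np.linalg.norm(dx) < 1e-12:
        break

P = np.linalg.inv(jac(x_hat).T @ W @ jac(x_hat))   # Eq. (5) covariance
print(x_hat, P)
```

Repeating this over many noise realizations produces the black covariance ellipses of Fig. 1; the true density contours come from the transformation formula derived below.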
Having shown some applications of true probability density functions in the evaluation of a posteriori
estimation error characterization in some problems, we now show how to use the "change of
variables" theorem in computing the transformation of the density function of a random variable.
CHANGE OF VARIABLES: TRANSFORMATION OF THE DENSITY OF A FUNCTION OF A RANDOM VARIABLE
Consider the following theorem:

Change of Variables: Let \eta = g(\xi) be an invertible, continuously differentiable mapping with a
differentiable inverse. If the probability density function p_\xi(\xi) is known, then the probability
density function in the transformed space is given by

p_\eta(\eta) = p_\xi(g^{-1}(\eta)) \left| \det \frac{d\xi}{d\eta} \right| = p_\xi(g^{-1}(\eta)) \left| \det \frac{d g^{-1}(\eta)}{d\eta} \right|
Let us now look at some example applications of the change of variables formula for univariate
and multivariate functions, to outline the further developments of this paper.
Example 1: A Nonlinear Algebraic Transformation
Consider the nonlinear function given by

y = x + \epsilon x^2   (9)
Given that the random variable x is Gaussian, denoted by x \sim N(3, \sigma^2), applying the above
formula it can be seen that the probability density of y is given by

p_y(y) = \frac{1}{\sqrt{2\pi}\,\sigma\,\sqrt{1+4\epsilon y}} \left[ \exp\left( -\frac{1}{2\sigma^2} \left( \frac{-1+\sqrt{1+4\epsilon y}}{2\epsilon} - 3 \right)^2 \right) + \exp\left( -\frac{1}{2\sigma^2} \left( \frac{-1-\sqrt{1+4\epsilon y}}{2\epsilon} - 3 \right)^2 \right) \right]   (10)
Accordingly, the contributions of the double root are summed in the change of variables formulation. Quite clearly, the
mathematical formula for the density function of the y variable is quite complex
to write out. However, a systematic application of the change of variables theorem means that
it never has to be written out explicitly: it is only to be computed, and the resulting
graphs are to be interpreted for sufficient statistics according to the problem at hand. This is the
central goal of this paper: to show important applications of the change of variables formula and
the associated analysis of how the transformation of density functions can be used to
compute useful quantities in estimation and control. Numerically, the results of the above example
can be plotted as shown in Fig. 2.
Figure 2: Transformation of the Probability Density function as a function of the parameter \epsilon as discussed in Example 1.
It is clear that as the perturbation parameter \epsilon ranges from 0 to 1, the transformation of an
original Gaussian distribution becomes progressively non-Gaussian and heavy tailed. This is an
important observation, for Gaussian distributions passed through nonlinear functions do not retain their
Gaussian character.
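A numerical check of the transformed density against a Monte Carlo histogram can be sketched as below; the values \epsilon = 0.3 and unit variance are assumed for illustration, and both quadratic roots are summed as in Eq. (10):

```python
import numpy as np

eps = 0.3                       # assumed perturbation parameter
mu, sig = 3.0, 1.0              # x ~ N(3, 1) (variance assumed)

def p_x(x):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (np.sqrt(2 * np.pi) * sig)

def p_y(y):
    # Change of variables for y = x + eps*x**2: both roots of the quadratic
    # contribute, each weighted by |dx/dy| = 1/sqrt(1 + 4*eps*y).
    s = np.sqrt(1.0 + 4.0 * eps * y)
    x_plus = (-1.0 + s) / (2.0 * eps)
    x_minus = (-1.0 - s) / (2.0 * eps)
    return (p_x(x_plus) + p_x(x_minus)) / s

# Monte Carlo check: a histogram of transformed samples should match p_y
rng = np.random.default_rng(2)
x_samp = rng.normal(mu, sig, 400_000)
y_samp = x_samp + eps * x_samp**2
hist, edges = np.histogram(y_samp, bins=60, range=(0.0, 15.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
max_err = np.max(np.abs(hist - p_y(centers)))
print(max_err)                  # small if the transformed density is correct
```

As the text argues, p_y never needs to be manipulated symbolically; it is simply evaluated on a grid and plotted.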
Example 2: Kepler’s Equation
Let us now turn our attention to a problem whose physical connotations are well known to all of
us. In uncertainty propagation calculations, suppose it turns out that the mean anomaly at a
given time has been specified with a Gaussian dispersion, given as
p_M(M) = \frac{1}{\sqrt{2\pi}\,\sigma_M} \exp\left( -\frac{(M - \bar{M})^2}{2\sigma_M^2} \right)   (11)
An immediate step would involve the computation of the uncertainties in the corresponding
position and velocity estimates of the particle at that time instant. It is well known that
the eccentric anomaly is a straightforward path to determining such a transformation, since the
position and velocity in the orbit plane can then be determined. The relationship between the mean
and eccentric anomalies is given by the very well known Kepler's equation,

M = E - e \sin E   (12)
Since this relationship is an analytic, one-to-one relation, our change of variables theorem
immediately applies, and the transformed density in the eccentric anomaly space is
given by (as a function of the eccentricity parameter e)
p_E(E) = \frac{1 - e\cos E}{\sqrt{2\pi}\,\sigma_M} \exp\left( -\frac{(E - e\sin E - \bar{M})^2}{2\sigma_M^2} \right)   (13)
We call this the Kepler distribution, since it has been found to exhibit interesting properties as the
eccentricity parameter e is varied. Fig. 3 plots such a variation.
Figure 3: Transformation of the Probability Density function into the Eccentric anomaly variable through Kepler's Equation
The most important observation to be made here is that beyond a critical eccentricity value, the
probability density clearly becomes bi-modal. This has a physical connotation,
which becomes clear upon going further to the true anomaly space.
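Evaluating the Kepler distribution of Eq. (13) requires no inversion of Kepler's equation at all; a sketch follows, with an assumed mean \bar{M} = \pi/2 and \sigma_M = 0.5. Summing the density over a fine grid confirms that the Jacobian factor 1 - e cos E keeps it properly normalized for any eccentricity:

```python
import numpy as np

M_bar, sig_M = np.pi / 2, 0.5      # assumed mean and sigma of the mean anomaly

def p_E(E, e):
    # Eq. (13): Kepler distribution in the eccentric anomaly
    M = E - e * np.sin(E)          # Kepler's equation, Eq. (12)
    jac = 1.0 - e * np.cos(E)      # Jacobian dM/dE
    return jac * np.exp(-0.5 * ((M - M_bar) / sig_M) ** 2) / (np.sqrt(2 * np.pi) * sig_M)

E = np.linspace(-2 * np.pi, 4 * np.pi, 20001)
dE = E[1] - E[0]
for e in (0.0, 0.5, 0.95):
    mass = p_E(E, e).sum() * dE
    print(e, mass)                 # each integrates to ~1: a valid density
```

Plotting p_E(E, e) for increasing e reproduces the transition to bimodality visible in Fig. 3.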
Yet another transformation, from the eccentric anomaly to the true anomaly, leads us to the
density function plotted in Fig. 4.
Figure 4: Transformation of the Probability Density Function from Mean to True Anomaly
Quite remarkably, one notes that as the eccentricity approaches e \to 1, two impulse functions
appear in the density function. This transformation from a Gaussian distribution is remarkably
nonlinear and has not been seen by the authors in such expressive form in other physical examples. The
physical meaning of this parametric behavior is also remarkably well explained: we see the
straight line limiting solution, and as the eccentricity becomes larger, the analyst observes that the
particle spends an increasingly long time at apoapsis, while periapsis is swept at increasingly
larger velocities. This interesting behavior has been observed by simple algebraic transformations
of density functions and an application of an elementary change of variables formula from
ordinary calculus.
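The concentration of probability mass near apoapsis as e grows can also be reproduced by direct sampling: draw Gaussian mean anomalies, solve Kepler's equation by Newton's iteration, and convert to true anomaly. The dispersion \bar{M} = \pi (apoapsis) and \sigma_M = 0.4 below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def kepler_E(M, e, iters=50):
    # Solve M = E - e*sin(E) for E by Newton's iteration (E0 = M)
    E = M.copy()
    for _ in range(iters):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def true_anomaly(E, e):
    # tan(f/2) = sqrt((1+e)/(1-e)) tan(E/2), via a branch-safe arctan2
    return 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                            np.sqrt(1 - e) * np.cos(E / 2))

# Assumed Gaussian mean-anomaly dispersion centered at apoapsis
M = rng.normal(np.pi, 0.4, 100_000)
for e in (0.1, 0.9):
    f = true_anomaly(kepler_E(M, e), e)
    print(e, f.std())              # spread in f shrinks as e grows near apoapsis
```

The shrinking spread in true anomaly near apoapsis is the sampling counterpart of the density spikes of Fig. 4: the particle lingers at apoapsis, so a fixed spread in time maps to a narrowing spread in angle.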
Example 3: Error Analysis of the Nonlinear Least Squares for Direction and Angle
Measurement Models
One of the most important classes of practical measurements is given by range, azimuth
and elevation measurements. In several problems of navigation and attitude estimation it is vitally
important to measure range and heading directions. Quite clearly, the mappings between the
quantities to be estimated and the measurements are nonlinear. However, in accordance with the
change of variables theorem above, it turns out that the transformations are mostly one-to-one
functions with a differentiable inverse. Therefore, in several of these nonlinear estimation problems,
one can in fact construct the non-Gaussian joint density functions of the estimates, once the noise
characteristics of the measurement errors are provided. This nonlinear inversion is quite useful, as
will be discussed briefly in the next section detailing navigation applications, to "submerge" the
estimation errors within an appropriately large Gaussian noise signal. This further leads to proper
initialization of the sequential Kalman filters involved in providing the navigation solution. Let us
first present the simplest measurement model, where range and elevation are available for
measurement and the underlying "truth" model involves the estimation of the planar location of a
target.
Mathematically, the measurement model is given as
r_j = \sqrt{x_T^2 + y_T^2} + v_{r_j}
\theta_j = \tan^{-1}(y_T / x_T) + v_{\theta_j}   (14)
Given that the measurements r, \theta are jointly Gaussian, their joint probability density function is given
by
p_{r,\theta}(r, \theta) = \frac{1}{2\pi\,\sigma_r \sigma_\theta} \exp\left( -\frac{(r - \bar{r})^2}{2\sigma_r^2} - \frac{(\theta - \bar{\theta})^2}{2\sigma_\theta^2} \right)   (15)
Then, applying the change of variables formula discussed before, the joint density of the
estimates (x, y) is given as
p_{x,y}(x, y) = \frac{1}{2\pi\,\sigma_r \sigma_\theta\,\sqrt{x^2 + y^2}} \exp\left( -\frac{(\sqrt{x^2 + y^2} - \bar{r})^2}{2\sigma_r^2} - \frac{(\tan^{-1}(y/x) - \bar{\theta})^2}{2\sigma_\theta^2} \right)   (16)
Since the analytical transformed density function for this particular problem can be determined,
the dispersion characteristics of the linearized approximation can be compared with this true joint
density function of the estimates. Thus the convergence and accuracy of nonlinear least squares
can be characterized a priori for a class of problems with given nominal estimates and measurement
noise characteristics. We have performed such a characterization for the range-elevation
measurement problem. By an application of the GLSDC algorithm discussed in the introduction
of the paper, we validate the dispersion of the estimates. The results of this analysis are presented
in Fig. 5.
Figure 5: Comparison of the Estimation Error of Nonlinear Least Squares with the Analytical Joint Density Function of the Estimates
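A quick consistency check of the transformed density of Eq. (16) is to verify that it integrates to unity over a grid around the nominal target; the nominal range/angle and noise sigmas below are assumed for illustration:

```python
import numpy as np

r0, th0 = 5.0, 0.9273           # assumed nominal range and angle (target near (3, 4))
sr, sth = 0.05, 0.02            # assumed noise sigmas

def p_xy(x, y):
    # Eq. (16): joint density of the (x, y) estimates;
    # the change-of-variables Jacobian is 1/sqrt(x^2 + y^2)
    r = np.hypot(x, y)
    th = np.arctan2(y, x)
    gauss = np.exp(-0.5 * ((r - r0) / sr) ** 2 - 0.5 * ((th - th0) / sth) ** 2)
    return gauss / (2 * np.pi * sr * sth * r)

# Grid check: the transformed density integrates to ~1
x = np.linspace(2.4, 3.6, 400)
y = np.linspace(3.4, 4.6, 400)
X, Y = np.meshgrid(x, y)
dx, dy = x[1] - x[0], y[1] - y[0]
total = p_xy(X, Y).sum() * dx * dy
print(total)                    # ~1
```

Contours of p_xy over the same grid reproduce the banana-shaped true densities shown in Fig. 1 and Fig. 5.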
In the more general measurement system of range, azimuth and elevation,

r_j = \sqrt{x_T^2 + y_T^2 + z_T^2} + v_{r_j}
\theta_j = \tan^{-1}(y_T / x_T) + v_{\theta_j}
\phi_j = \tan^{-1}\!\left( z_T / \sqrt{x_T^2 + y_T^2} \right) + v_{\phi_j}   (17)
A similar analysis can be carried out. For example, if the range, azimuth and elevation are known
to be jointly Gaussian, with a density function (and appropriate parameters defined) as
p_{r,\theta,\phi}(r, \theta, \phi) = \frac{1}{(2\pi)^{3/2}\,\sigma_r \sigma_\theta \sigma_\phi} \exp\left( -\frac{(r - r_0)^2}{2\sigma_r^2} - \frac{(\theta - \theta_0)^2}{2\sigma_\theta^2} - \frac{(\phi - \phi_0)^2}{2\sigma_\phi^2} \right)   (18)
then, quite clearly, using the change of variables formula, one can immediately write the joint
density of the position estimates as
p_{x,y,z}(x, y, z) = \frac{1}{(2\pi)^{3/2}\,\sigma_r \sigma_\theta \sigma_\phi\,\sqrt{x^2+y^2+z^2}\,\sqrt{x^2+y^2}} \exp\left( -\frac{(\sqrt{x^2+y^2+z^2} - r_0)^2}{2\sigma_r^2} - \frac{(\tan^{-1}(y/x) - \theta_0)^2}{2\sigma_\theta^2} - \frac{(\tan^{-1}(z/\sqrt{x^2+y^2}) - \phi_0)^2}{2\sigma_\phi^2} \right)   (19)
APPLICATIONS OF THE TRANSFORMATION THEOREM
While the transformation formula is relatively well known in the probability literature, surprisingly
few estimation algorithms employ this elegantly simple result (one exception seems
to be the paper by Alfriend and Carpenter10*). One of the chief problems in astrodynamics in general,
and orbit estimation in particular, is the choice of coordinates. It is well known that some
special coordinates of astronautics regularize orbit mechanics problems and render them "more" linear
than others (cf. Junkins and Singla2). The change of variables theorem can, in such cases, be
shown to provide valuable insights on the utility of the choice of coordinates in orbit estimation.
We present numerical evidence to this effect in the following discussion.
Sequential Orbit Determination Problems
Consider the ideal two body problem with three coordinate choices for the dynamics. Accordingly,
the measurement equations for each choice can be appropriately written.
In the classical Cartesian coordinates, the dynamics model for the classical two body problem
is well known to be nonlinear. The measurement equations in this case are given by Eq. (17). For
discussion, let us name this set model 1.
An alternative choice is the use of spherical/polar coordinates. The beauty of using these
coordinates is that the measurement equations are rigorously linear:
\tilde{r}_j = r_j + v_{r_j}
\tilde{\theta}_j = \theta_j + v_{\theta_j}
\tilde{\phi}_j = \phi_j + v_{\phi_j}   (20)
* This was pointed out to the first author by Dr. Alfriend himself!
Therefore the estimation process is free from nonlinearity at the update phase, and the
extended Kalman filter is rigorously valid, provided the propagation phase remains Gaussian (to
within engineering approximations).
However, as we know very well, the dynamical equations of motion in spherical coordinates
are nonlinear. Therefore the propagation phase of the uncertainty loses accuracy over time.
Let us call this model 2.
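The exactness of the model 2 measurement update can be sketched with a single, generic Kalman update in which H is a constant selection matrix; all numbers below are assumed, and the six-state ordering is hypothetical:

```python
import numpy as np

# For model 2 the measurement of Eq. (20) is linear: y = H x + v, with H
# simply selecting the (r, theta, phi) states of an assumed six-state vector.
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # selection matrix (assumed ordering)
R = np.diag([1e-4, 1e-6, 1e-6])                # assumed measurement noise covariance
P = np.eye(6) * 1e-2                           # assumed prior covariance
x = np.zeros(6)                                # assumed prior state
y = np.array([1.05, 0.01, -0.02])              # one assumed measurement

S = H @ P @ H.T + R                            # innovation covariance
K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
x = x + K @ (y - H @ x)                        # exact update: no linearization error
P = (np.eye(6) - K @ H) @ P
print(x[:3], np.diag(P)[:3])
```

Because H is constant, no Jacobian approximation enters the update; all of model 2's approximation error is confined to the nonlinear propagation phase.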
Yet another alternative (we call it model 3) is the use of orbital elements (Kepler elements).
While the propagation equations in the Kepler element space are rigorously linear (an initial
Gaussian uncertainty remains Gaussian), we found that the measurement equations are highly nonlinear,
given by
r_j = a(1 - e\cos E) + v_{r_j}
\theta_j = \tan^{-1}\!\left( \cos i \,\tan(\omega + f) \right) + \Omega + v_{\theta_j}
\phi_j = \sin^{-1}\!\left( \sin i \,\sin(\omega + f) \right) + v_{\phi_j}   (21)
In fact, the extended Kalman filter for a nonlinear (eccentric orbit case, parameters shown below)
orbit estimation problem did not converge in this case, preventing cross comparisons
with the other situations (models 1 and 2).
We therefore compare the results between models 1 and 2 in the current paper. A representative
orbit of high eccentricity was chosen such that the uncertainty propagation is pronounced and
the efficacy of the nonlinear transformation formula is brought out. The initial position and velocity
of the orbit were chosen to be
x(t_0) = [2.7939 \;\; 0.2197 \;\; 0.0471]^T \text{ RE}
v(t_0) = [0.0632 \;\; 0.7586 \;\; 0.3161]^T \text{ RE/TU}   (22)
where RE denotes 1 Earth radius and TU denotes a time unit in canonical units. Fifteen simulated
measurements over a span of 6 hours were used for estimation of the orbit parameters. The
extended Kalman filter implementation in Cartesian coordinates (model 1) is shown in Fig. 6.
Figure 6: Extended Kalman filter implementation using Cartesian coordinates (model 1)
Using the measurement models of Eq. (20), we see that the extended Kalman filter yields
much more bounded error characteristics, as shown in Fig. 7.
Figure 7: Extended Kalman filter implementation for model 2, polar coordinates
Upon cursory examination of the plots, it might occur to an analyst that polar coordinates are
the better choice, since the errors "appear" much more bounded with this particular choice of
coordinates.
Let us now use the change of variables transformations to determine if this is indeed the case.
One key assumption of extended Kalman filter theory that we wish to re-iterate at this
juncture is that of linearity and Gaussianity. As was evident from the previous developments
of the paper, since the extended Kalman filter converged in both models, we can safely
assume that our estimation errors are within the linearization region, and hence that the true
uncertainty distributions can be assumed to be Gaussian.
With this argument in place, we can now transform the estimation error densities of one of the
candidate choices (estimation errors in model 1) into the other set (model 2, say) and compare
the transformed density with the native probability density. This procedure is rigorous
and furnishes a more appropriate error comparison to the analyst than a pure
examination of the covariance characteristics as drawn in Figs. 6-7.
In other words, an apparent divergence in the extended Kalman filter error does NOT necessarily
warrant the elimination of a candidate coordinate for orbit estimation. This is an important
point we have been able to numerically demonstrate through this paper.
When we assume the error characteristics of the Cartesian coordinate EKF are Gaussian and
use the change of variables formula, we get the following results. We ignore cross covariance
transformations for simplicity of the results. Without loss of generality, we discuss the nuances
involved in evaluating density functions using the transformation method in the position space. It
will be clear that our conclusions carry over to the velocity space transformations in an equally
valid manner.
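A Monte Carlo stand-in for this comparison is to sample the assumed Cartesian Gaussian and push the samples through the spherical-coordinate map; the mean and covariance below are assumed for illustration, not the paper's filter output:

```python
import numpy as np

rng = np.random.default_rng(4)

mean = np.array([2.79, 0.22, 0.05])    # assumed Cartesian position estimate (RE)
P = np.diag([4e-4, 1e-4, 1e-4])        # assumed (diagonal) Cartesian covariance

# Sample the Gaussian Cartesian error and map into spherical coordinates
xyz = rng.multivariate_normal(mean, P, 200_000)
x, y, z = xyz.T
r = np.sqrt(x**2 + y**2 + z**2)
az = np.arctan2(y, x)
el = np.arctan2(z, np.hypot(x, y))

# The dispersions per coordinate can then be compared against a native
# spherical-coordinate Gaussian, as in Figs. 8-10
for name, s in (("range", r), ("azimuth", az), ("elevation", el)):
    print(name, s.mean(), s.std())
```

Histogramming r, az and el gives the "transformed" waterfall curves; the native spherical EKF covariance supplies the Gaussian curves they are compared against.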
Figure 8: Waterfall plot comparing probability density functions of range
In the waterfall plot of Fig. 8, we plot the Gaussian (approximant) pdf of the EKF estimation
error using model 2 in black. The estimation error in Cartesian coordinates is computed using the
appropriate implementation of the EKF. The resulting Gaussian pdf is then transformed into the
spherical coordinate space. The range component is then plotted in the blue waterfall plot of Fig.
8.
As was demonstrated in Figs. 6-7, it is evident that the spherical coordinates lead to tighter
estimation errors in the range direction.
Rather interestingly, and perhaps surprisingly (if one does not examine the geometry of the
measurement process), the same does not hold true for the azimuth state, as shown in Fig. 9.
Figure 9: Waterfall plot of the estimation error probability density functions in azimuth
state
Clearly, in this case, the Cartesian coordinates seem to be the better choice, as they appear to
produce tighter (to almost 1\sigma) density functions of the a posteriori estimation error process.
The authors cannot overemphasize the importance of this demonstration. Quite clearly, the
transformation of errors depends on several factors, and this result demonstrates the importance of
careful analysis in such problems.
This situation persists in the estimation errors of the elevation state, as shown in
Fig. 10. Owing to the geometry of the measurement process at this particular instance of the orbit,
we observe that in the angle states the Cartesian coordinates prove to be the better choice, while
intuition, using a direct read-out from the covariance plots of the EKF, suggests otherwise.
In such cases the transformation of density functions formula provides valuable insight and
prevents the analyst from making erroneous evaluations of the utility of the coordinate
transformations.
Figure 10: Waterfall plot of the estimation error probability density functions in elevation state
Yet another utility of the change of variables formula is in the initialization of extended
Kalman filters. In several nonlinear problems, it is quite challenging to find an
initial covariance that is "matched" with the true physics-based uncertainty. For example, in an
orbit estimation problem that uses an extended Kalman filter, if the Kepler elements are chosen
for propagation, the analyst requires covariance estimates in the Kepler element space.
However, a Herrick-Gibbs type least squares solution that is used to "start up" the sequential
estimation process results in an error covariance for the initial guess that is in other coordinates
(Cartesian, for example, as implemented in Crassidis and Junkins8).
There seems to be no systematic method of transforming uncertainties such that the extended
Kalman filter in the Kepler element space can be "reasonably" initialized. In such situations, the
change of variables theorem can be quite useful to the analyst, as explained by the procedure
and the figure shown below.
The analyst would guess an initial Gaussian error covariance in the Kepler element space (for
the sake of discussion). Then the covariance (resulting from a different calculation, assuming
Gaussian statistics) in the Cartesian space is transformed into the Kepler element space using the change of
variables formula developed in this paper. The densities about the initial guess for the state vector
are then plotted and compared. Depending on the discrepancy between the densities, the analyst
then adjusts ("inflates" or "deflates") his/her guess for the initial Gaussian error covariance in the
desired Kepler element space. This process will prevent the smugness and slow convergence
problems of the extended Kalman filter due to inconsistencies in the initial covariance estimates.
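A minimal sketch of this matching procedure for a single element, the semi-major axis, follows; it uses the vis-viva relation a = (2/r - v^2/mu)^{-1} in canonical units, and the start-up covariances are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
mu = 1.0                                     # gravitational parameter, canonical units

r0 = np.array([2.7939, 0.2197, 0.0471])      # RE (from Eq. (22))
v0 = np.array([0.0632, 0.7586, 0.3161])      # RE/TU

# Assumed Gaussian start-up errors in Cartesian coordinates
Pr, Pv = 1e-4 * np.eye(3), 1e-6 * np.eye(3)
r = r0 + rng.multivariate_normal(np.zeros(3), Pr, 100_000)
v = v0 + rng.multivariate_normal(np.zeros(3), Pv, 100_000)

# Map each sample to the semi-major axis via the vis-viva equation;
# the histogram of a is the implied density in element space
a = 1.0 / (2.0 / np.linalg.norm(r, axis=1) - (v * v).sum(axis=1) / mu)

# The analyst compares this implied spread against the guessed Gaussian in a,
# then inflates/deflates the initial Kepler-element covariance accordingly
print(a.mean(), a.std())
```

Even for this single element, the transformed density is visibly skewed; matching only the variance of a Gaussian guess to it is exactly the kind of adjustment the procedure above formalizes.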
Representative calculations for the transformations between Kepler element space and the
Cartesian coordinates are shown in Fig. 11.
Figure 11: Application of change of variables formula for matching probability density
function estimates in the initialization process of an extended Kalman filter
Thus the change of variables formula in probability theory can be of wide applicability in
problems of estimation theory.
CONCLUSION
This paper explores the utility of the analytical transformation of probability density functions
between spaces. Several applications of the change of variables formula in astronautics are
presented. It was found that for nonlinear transformations and functional relationships that are
smooth with non-vanishing first order derivatives, the change of variables theorem yields useful
information about the nature and properties of the probability density function.
REFERENCES
1 Gauss, C. F., Theoria Motus, (English translation by Davis, C. H.) Little Brown and Company, Boston, MA, 1857.
2J. L. Junkins and P. Singla, “How Nonlinear Is It? - A Tutorial on Nonlinearity of Orbit and Attitude Dynamics,” The
Journal of Astronautical Sciences, Vol 52, No. 1-2, Keynote Paper, 7-60, 2004.
3R.S. Park and D.J. Scheeres. “Nonlinear Mapping Of Gaussian State Uncertainties: Theory And Applications To
Spacecraft Control And Navigation,” 2006. Journal of Guidance, Control and Dynamics 29(6): 1367-1375.
4 G. Terejanu, P. Singla, T. Singh, and P. D. Scott, “Uncertainty Propagation for Nonlinear Dynamical Systems using
Gaussian Mixture Models,” AIAA Journal of Guidance, Control and Dynamics, Vol. 31, No. 6, 1623-1633, Nov. 2008.
5 M. Kumar, S. Chakravorty, P. Singla, and J. L. Junkins, “The Partition of Unity Finite Element Approach with h-p
refinement to the Stationary Fokker-Planck Equation,”Journal of Sound and Vibration, Vol. 327, Issues 1-2, 144-162,
Oct. 2009.
6Junkins, J.L., Akella, M. R., Alfriend, K. T., “Non-Gaussian Error Propagation in Orbital Mechanics,” 19th Annual
AAS Guidance and Control Conference, Breckenridge, CO, Feb, 1996, AAS Journal of the Astronautical Sciences, Vol.
44, No. 4, pp. 541-564, 1996.
7Junkins, J.L., “Adventures on the Interface of Dynamics and Control,” invited Theodore von Karman Lecture, AIAA
Aerospace Sciences Conference, Reno, NV, January, 1997, AIAA Journal of Guidance, Control, and Dynamics, Vol.
20, No. 6, Nov-Dec, 1997, pp. 1058-1072.
8Crassidis, J., Junkins, J., Optimal Estimation of Dynamical Systems, Chapman-Hall/CRC Press, Boca Raton, FL, 2004.
9 Majji, M., Junkins, J., and Turner, J.D., "High Order Methods for Estimation of Dynamical Systems," Journal of Astronautical Sciences, Vol. 52, 2009.
10Alfriend, K. T., and Carpenter, R., “Navigation Accuracy Guidelines for Orbital Formation Flying,” NASA GSFC
Technical Report, No. 20040079829, pp. 1-18, 2004.