Hindawi Publishing Corporation, ISRN Aerospace Engineering, Volume 2013, Article ID 950912, 7 pages, http://dx.doi.org/10.1155/2013/950912

Research Article
Approximate Solutions to Nonlinear Optimal Control Problems in Astrodynamics

    Francesco Topputo and Franco Bernelli-Zazzera

    Department of Aerospace Science and Technology, Politecnico di Milano, Via La Masa 34, 20156 Milano, Italy

    Correspondence should be addressed to Francesco Topputo; [email protected]

    Received 25 June 2013; Accepted 27 August 2013

    Academic Editors: H. Baoyin, C. Bigelow, and D. Yu

Copyright © 2013 F. Topputo and F. Bernelli-Zazzera. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A method to solve nonlinear optimal control problems is proposed in this work. The method implements an approximating sequence of time-varying linear quadratic regulators that converge to the solution of the original, nonlinear problem. Each subproblem is solved by manipulating the state transition matrix of the state-costate dynamics. Hard, soft, and mixed boundary conditions are handled. The presented method is a modified version of an algorithm known as the "approximating sequence of Riccati equations." Sample problems in astrodynamics are treated to show the effectiveness of the method, whose limitations are also discussed.

    1. Introduction

Optimal control problems are solved with indirect or direct methods. Indirect methods stem from the calculus of variations [1, 2]; direct methods use a nonlinear programming optimization [3, 4]. Both methods require the solution of a complex set of equations (Euler-Lagrange differential equations or Karush-Kuhn-Tucker algebraic equations), for which iterative numerical methods are used. These iterative procedures implement some form of Newton's method to find the zeros of a nonlinear function. They are initiated by providing an initial guess solution. Guessing an appropriate initial solution is not trivial and requires a deep knowledge of the problem at hand. In indirect methods, the initial value of the Lagrange multiplier has to be provided, whose lack of physical meaning makes it difficult to formulate a good guess. In direct methods, the initial trajectory and control have to be guessed at discrete points over the whole time interval.

This paper presents an approximate method to solve nonlinear optimal control problems. It is a modification of the method known as the "approximating sequence of Riccati equations" (ASRE) [5, 6]. It transforms the nonlinear dynamics and objective function into a pseudolinear and a quadratic-like structure, respectively, by using state- and control-dependent functions. At each iteration, these functions are evaluated by using the solutions at the previous iteration, and therefore a series of time-varying linear quadratic regulators is treated. This sequence is solved with a state transition matrix approach, where three different final conditions are handled: final state fully specified, final state not specified, and final state not completely specified. These define hard, soft, and mixed constrained problems, respectively.

The main feature of the presented method is that it does not require guessing any initial solution or Lagrange multiplier. In fact, iterations start by evaluating the state- and control-dependent functions using the initial condition and zero control, respectively. The way the dynamics and objective function are factorized recalls the state-dependent Riccati equations (SDRE) method [7-9]. These two methods possess some similarities, although the way they solve the optimal control problem is different. As the method is approximate, suboptimal solutions are derived. These could be used as first-guess solutions for either indirect or direct methods.

    2. The Nonlinear Optimal Control Problem

The optimal control problem requires that, given a set of first-order differential equations

    \dot{x} = f(x, u, t),   (1)


the control functions u(t) must be determined within the initial and final times t_i, t_f, such that the performance index

    J = \varphi(x(t_f), t_f) + \int_{t_i}^{t_f} L(x, u, t) \, dt   (2)

is minimized while satisfying the two-point conditions

    x(t_i) = x_i,   \psi(x(t_f), t_f) = 0.   (3)

The problem consists in finding a solution that represents a stationary point of the augmented performance index

    \bar{J} = \varphi(x(t_f), t_f) + \nu^\top \psi(x(t_f), t_f) + \int_{t_i}^{t_f} [ L(x, u, t) + \lambda^\top (f(x, u, t) - \dot{x}) ] \, dt,   (4)

where \lambda is the vector of costates and \nu is the multiplier of the boundary condition. The necessary conditions for optimality, also referred to as the Euler-Lagrange equations, are

    \dot{x} = \partial H / \partial \lambda,   \dot{\lambda} = -\partial H / \partial x,   \partial H / \partial u = 0,   (5)

where H, the Hamiltonian, is

    H(x, \lambda, u, t) = L(x, u, t) + \lambda^\top f(x, u, t).   (6)

The differential-algebraic system (5) must be solved together with the final boundary conditions (3) and the transversality condition

    \lambda(t_f) = [ \partial\varphi/\partial x + (\partial\psi/\partial x)^\top \nu ]_{t = t_f},   (7)

which define a differential-algebraic parametric two-point boundary value problem whose solution supplies \nu and the functions x(t), \lambda(t), u(t), t \in [t_i, t_f].

3. The Approximating Sequence of Riccati Equations

Let the controlled dynamics (1) be rewritten in the form

    \dot{x} = A(x, t) x + B(x, u, t) u,   (8)

and let the objective function (2) be rearranged as

    J = \frac{1}{2} x^\top(t_f) S(x(t_f), t_f) x(t_f) + \frac{1}{2} \int_{t_i}^{t_f} [ x^\top Q(x, t) x + u^\top R(x, u, t) u ] \, dt,   (9)

where the operators A, B, S, Q, and R have appropriate dimensions. The nonlinear dynamics (8) and the performance index (9) define an optimal control problem. The initial state, x_i, is assumed to be given, while the final condition (\psi in (3)) can assume three different forms (see Section 4). The problem is formulated as an approximating sequence of Riccati equations. This method reduces problem (8)-(9) to a series of time-varying linear quadratic regulators that are defined by evaluating the state- and control-dependent matrices using the solution of the previous iteration (the first iteration considers the initial condition and zero control).

The initial step consists in solving problem 0, which is defined as follows:

    \dot{x}^{(0)} = A(x_i, t) x^{(0)} + B(x_i, 0, t) u^{(0)},

    J = \frac{1}{2} x^{(0)\top}(t_f) S(x_i, t_f) x^{(0)}(t_f) + \frac{1}{2} \int_{t_i}^{t_f} [ x^{(0)\top} Q(x_i, t) x^{(0)} + u^{(0)\top} R(x_i, 0, t) u^{(0)} ] \, dt.   (10)

Problem 0 is a standard time-varying linear quadratic regulator (TVLQR), as the arguments of A, B, S, Q, and R are all given except for the time. This problem is solved to yield x^{(0)}(t) and u^{(0)}(t), t \in [t_i, t_f], where the superscript denotes the problem that the solution refers to.

At a generic, subsequent iteration, problem i has to be solved. This is defined as follows:

    \dot{x}^{(i)} = A(x^{(i-1)}(t), t) x^{(i)} + B(x^{(i-1)}(t), u^{(i-1)}(t), t) u^{(i)},

    J = \frac{1}{2} x^{(i)\top}(t_f) S(x^{(i-1)}(t_f), t_f) x^{(i)}(t_f) + \frac{1}{2} \int_{t_i}^{t_f} [ x^{(i)\top} Q(x^{(i-1)}(t), t) x^{(i)} + u^{(i)\top} R(x^{(i-1)}(t), u^{(i-1)}(t), t) u^{(i)} ] \, dt.   (11)

Problem i is again a TVLQR; note that x^{(i-1)} and u^{(i-1)} are the solutions of problem i-1 achieved at the previous iteration. Solving problem i yields x^{(i)}(t) and u^{(i)}(t), t \in [t_i, t_f].

Iterations continue until a certain convergence criterion is satisfied. In the present implementation of the algorithm, convergence is reached when

    \| x^{(i)} - x^{(i-1)} \| = \max_{t \in [t_i, t_f]} \{ | x_j^{(i)}(t) - x_j^{(i-1)}(t) |, \; j = 1, \ldots, n \} \le \varepsilon,   (12)

where \varepsilon is a prescribed tolerance. That is, iterations terminate when each component of the state, evaluated for all times, changes by less than \varepsilon between two consecutive iterations.
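The outer iteration just described can be sketched in a few lines. The following is a minimal illustration on a hypothetical scalar problem (an assumption for illustration only, not one of the paper's examples): dynamics \dot{x} = -x^3 + u factorized as A(x) = -x^2, B = 1, with Q = 0, R = 1, and a hard final constraint. Each pass freezes A on the previous trajectory, integrates the state-costate transition matrix, recovers the initial costate as in (27), rebuilds the state history via (22), and applies the stopping rule (12).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

# Hypothetical scalar test problem (an assumption, not from the paper):
# xdot = -x**3 + u, factorized as A(x) = -x**2, B = 1, Q = 0, R = 1,
# hard constraint x(tf) = xf.
t0, tf, x0, xf = 0.0, 1.0, 1.0, 0.5
tgrid = np.linspace(t0, tf, 201)

def asre_iteration(x_prev):
    """One TVLQR solve with A frozen on the previous trajectory."""
    A_of_t = interp1d(tgrid, -x_prev**2, fill_value="extrapolate")

    def stm_rhs(t, phi):
        # phi = [Pxx, Pxl, Plx, Pll]; Hamiltonian matrix of eq. (21) is
        # [[a, -B R^-1 B^T], [-Q, -a]] = [[a, -1], [0, -a]] here.
        a = A_of_t(t)
        Pxx, Pxl, Plx, Pll = phi
        return [a*Pxx - Plx, a*Pxl - Pll, -a*Plx, -a*Pll]

    sol = solve_ivp(stm_rhs, (t0, tf), [1.0, 0.0, 0.0, 1.0],
                    t_eval=tgrid, rtol=1e-10, atol=1e-12)
    Pxx, Pxl = sol.y[0], sol.y[1]
    lam0 = (xf - Pxx[-1]*x0) / Pxl[-1]         # hard-constraint costate, eq. (27)
    return Pxx*x0 + Pxl*lam0                   # state history, eq. (22)

x_traj = np.full_like(tgrid, x0)               # iteration 0 freezes A at x_i
for _ in range(50):
    x_new = asre_iteration(x_traj)
    if np.max(np.abs(x_new - x_traj)) < 1e-9:  # convergence test, eq. (12)
        break
    x_traj = x_new
```

By construction, every iterate meets the final boundary condition exactly (up to integration error), so the loop only refines the interior of the trajectory.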

4. Solution of the Time-Varying Linear Quadratic Regulator by the State Transition Matrix

With the approach sketched in Section 3, a fully nonlinear optimal control problem is reduced to a sequence of time-varying linear quadratic regulators. These can be solved a number of times to achieve an approximate solution of the original, nonlinear problem. This is done by exploiting the structure of the problem as well as its state transition matrix. This scheme differs from that implemented in [5, 6] and, in part, is described in [1].

Suppose that the following dynamics are given:

    \dot{x} = A(t) x + B(t) u,   (13)

together with the quadratic objective function

    J = \frac{1}{2} x^\top(t_f) S(t_f) x(t_f) + \frac{1}{2} \int_{t_i}^{t_f} [ x^\top Q(t) x + u^\top R(t) u ] \, dt,   (14)

where S and Q are positive semidefinite and R is positive definite; all are time-varying matrices with appropriate dimensions. The Hamiltonian of this problem is

    H = \frac{1}{2} [ x^\top Q(t) x + u^\top R(t) u ] + \lambda^\top [ A(t) x + B(t) u ],   (15)

and the optimality conditions (5) read

    \dot{x} = A(t) x + B(t) u,   (16)
    \dot{\lambda} = -Q(t) x - A^\top(t) \lambda,   (17)
    0 = R(t) u + B^\top(t) \lambda.   (18)

From (18), it is possible to get

    u = -R^{-1}(t) B^\top(t) \lambda,   (19)

which can be substituted into (16)-(17) to yield

    \dot{x} = A(t) x - B(t) R^{-1}(t) B^\top(t) \lambda,
    \dot{\lambda} = -Q(t) x - A^\top(t) \lambda.   (20)

In compact form, (20) can be arranged as

    \begin{bmatrix} \dot{x} \\ \dot{\lambda} \end{bmatrix} = \begin{bmatrix} A(t) & -B(t) R^{-1}(t) B^\top(t) \\ -Q(t) & -A^\top(t) \end{bmatrix} \begin{bmatrix} x \\ \lambda \end{bmatrix}.   (21)

Since (21) is a system of linear differential equations, the exact solution can be written as

    x(t) = \Phi_{xx}(t, t_i) x_i + \Phi_{x\lambda}(t, t_i) \lambda_i,   (22)
    \lambda(t) = \Phi_{\lambda x}(t, t_i) x_i + \Phi_{\lambda\lambda}(t, t_i) \lambda_i,   (23)

where \Phi_{xx}, \Phi_{x\lambda}, \Phi_{\lambda x}, and \Phi_{\lambda\lambda} are the components of the state transition matrix, which can be found by integrating the following dynamics:

    \begin{bmatrix} \dot{\Phi}_{xx} & \dot{\Phi}_{x\lambda} \\ \dot{\Phi}_{\lambda x} & \dot{\Phi}_{\lambda\lambda} \end{bmatrix} = \begin{bmatrix} A(t) & -B(t) R^{-1}(t) B^\top(t) \\ -Q(t) & -A^\top(t) \end{bmatrix} \begin{bmatrix} \Phi_{xx} & \Phi_{x\lambda} \\ \Phi_{\lambda x} & \Phi_{\lambda\lambda} \end{bmatrix},   (24)

with the initial conditions

    \Phi_{xx}(t_i, t_i) = \Phi_{\lambda\lambda}(t_i, t_i) = I,   \Phi_{x\lambda}(t_i, t_i) = \Phi_{\lambda x}(t_i, t_i) = 0.   (25)

If both x_i and \lambda_i were given, it would be possible to compute x(t) and \lambda(t) through (22)-(23), and therefore the optimal control function u(t) with (19). The initial condition x_i is assumed to be given, whereas the computation of \lambda_i depends on the final condition, which, in the present algorithm, can be defined in three different ways.

4.1. Hard Constrained Problem. In a hard constrained problem (HCP), the value of the final state is fully specified, x(t_f) = x_f, and therefore (14) does not account for S. The value of \lambda_i can be found by writing (22) at the final time,

    x_f = \Phi_{xx}(t_f, t_i) x_i + \Phi_{x\lambda}(t_f, t_i) \lambda_i,   (26)

and by solving for \lambda_i; that is,

    \lambda_i(x_i, x_f, t_i, t_f) = \Phi_{x\lambda}^{-1}(t_f, t_i) [ x_f - \Phi_{xx}(t_f, t_i) x_i ].   (27)

4.2. Soft Constrained Problem. In a soft constrained problem (SCP), the final state is not specified, and thus S in (14) is an n \times n positive definite matrix. The transversality condition (7) sets a relation between the state and costate at the final time,

    \lambda(t_f) = S(t_f) x(t_f),   (28)

which can be used to find \lambda_i. This is done by writing (22)-(23) at the final time and using (28):

    x(t_f) = \Phi_{xx}(t_f, t_i) x_i + \Phi_{x\lambda}(t_f, t_i) \lambda_i,
    S(t_f) x(t_f) = \Phi_{\lambda x}(t_f, t_i) x_i + \Phi_{\lambda\lambda}(t_f, t_i) \lambda_i.   (29)

Equations (29) represent a linear algebraic system of 2n equations in the 2n unknowns \{x(t_f), \lambda_i\}. The system can be solved by substitution to yield

    \lambda_i(x_i, t_i, t_f) = [ \Phi_{\lambda\lambda}(t_f, t_i) - S(t_f) \Phi_{x\lambda}(t_f, t_i) ]^{-1} [ S(t_f) \Phi_{xx}(t_f, t_i) - \Phi_{\lambda x}(t_f, t_i) ] x_i.   (30)
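Formula (30) can be exercised on the same hypothetical double integrator (an assumption for illustration; the S values are arbitrary). The check is the transversality condition (28): the costate propagated to the final time must equal S times the final state.

```python
import numpy as np
from scipy.linalg import expm

# Same double-integrator sketch as before (an assumption); the final state
# is now free and weighted by S, so lam_i comes from eq. (30).
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
M = np.block([[A, -B @ B.T], [np.zeros((2, 2)), -A.T]])  # Q = 0, R = I
S = np.diag([25., 15.])
x_i = np.array([1.0, 0.0])

Phi = expm(M * 1.0)                                      # STM over [0, 1]
Pxx, Pxl = Phi[:2, :2], Phi[:2, 2:]
Plx, Pll = Phi[2:, :2], Phi[2:, 2:]
lam_i = np.linalg.solve(Pll - S @ Pxl, (S @ Pxx - Plx) @ x_i)  # eq. (30)

# Transversality check, eq. (28): lam(tf) = S x(tf).
x_tf = Pxx @ x_i + Pxl @ lam_i
lam_tf = Plx @ x_i + Pll @ lam_i
```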

4.3. Mixed Constrained Problem. In a mixed constrained problem (MCP), some components of the final state are specified and some are not. Without any loss of generality, let the state be decomposed as x = (y, z), where y collects the p components that are known at the final time, y(t_f) = y_f, and z collects the n - p remaining elements. The costate is decomposed accordingly as \lambda = (\lambda_y, \lambda_z). With this formalism, S in (14) is an (n - p) \times (n - p) matrix, and it is pre- and postmultiplied by z(t_f). The transversality condition (7) is \lambda_z(t_f) = S(t_f) z(t_f).

The MCP is solved by partitioning the state transition matrix in a suitable form such that, at the final time, (22)-(23) read

    \begin{bmatrix} y(t_f) \\ z(t_f) \end{bmatrix} = \begin{bmatrix} \Phi_1 & \Phi_2 \\ \Phi_3 & \Phi_4 \end{bmatrix} \begin{bmatrix} y_i \\ z_i \end{bmatrix} + \begin{bmatrix} \Phi_5 & \Phi_6 \\ \Phi_7 & \Phi_8 \end{bmatrix} \begin{bmatrix} \lambda_{y,i} \\ \lambda_{z,i} \end{bmatrix},   (31)

    \begin{bmatrix} \lambda_y(t_f) \\ \lambda_z(t_f) \end{bmatrix} = \begin{bmatrix} \Phi_9 & \Phi_{10} \\ \Phi_{11} & \Phi_{12} \end{bmatrix} \begin{bmatrix} y_i \\ z_i \end{bmatrix} + \begin{bmatrix} \Phi_{13} & \Phi_{14} \\ \Phi_{15} & \Phi_{16} \end{bmatrix} \begin{bmatrix} \lambda_{y,i} \\ \lambda_{z,i} \end{bmatrix},   (32)

where the dependence of the state transition matrix components on (t_f, t_i) is omitted for brevity. From the first row of (31), it is possible to get

    \lambda_{y,i} = \Phi_5^{-1} [ y_f - \Phi_1 y_i - \Phi_2 z_i - \Phi_6 \lambda_{z,i} ],   (33)

which can be substituted in the second row of (31) to yield

    z(t_f) = [ \Phi_3 - \Phi_7 \Phi_5^{-1} \Phi_1 ] y_i + [ \Phi_4 - \Phi_7 \Phi_5^{-1} \Phi_2 ] z_i + \Phi_7 \Phi_5^{-1} y_f + [ \Phi_8 - \Phi_7 \Phi_5^{-1} \Phi_6 ] \lambda_{z,i}.   (34)

Equations (33)-(34), together with the transversality condition \lambda_z(t_f) = S(t_f) z(t_f), can be substituted in the second row of (32) to compute the \lambda_{z,i} component of the initial costate,

    \lambda_{z,i}(x_i, y_f, t_i, t_f) = M^{-1} w(x_i, y_f, t_i, t_f),   (35)

where

    M = S(t_f) [ \Phi_8 - \Phi_7 \Phi_5^{-1} \Phi_6 ] + \Phi_{15} \Phi_5^{-1} \Phi_6 - \Phi_{16},

    w(x_i, y_f, t_i, t_f) = [ \Phi_{11} - \Phi_{15} \Phi_5^{-1} \Phi_1 - S(t_f) ( \Phi_3 - \Phi_7 \Phi_5^{-1} \Phi_1 ) ] y_i + [ \Phi_{12} - \Phi_{15} \Phi_5^{-1} \Phi_2 - S(t_f) ( \Phi_4 - \Phi_7 \Phi_5^{-1} \Phi_2 ) ] z_i + [ \Phi_{15} - S(t_f) \Phi_7 ] \Phi_5^{-1} y_f.   (36)

Once \lambda_{z,i} is known, the remaining part of the initial costate, \lambda_{y,i}, is computed through (33), and therefore the full initial costate is obtained as a function of the initial condition, the given final condition, and the initial and final times; that is,

    \lambda_i(x_i, y_f, t_i, t_f) = ( \lambda_{y,i}(x_i, y_f, t_i, t_f), \lambda_{z,i}(x_i, y_f, t_i, t_f) ).

    5. Numerical Examples

Two simple problems with nonlinear dynamics are considered to apply the developed algorithm. These correspond to the controlled relative spacecraft motion and to the controlled two-body dynamics for low-thrust transfers.

5.1. Low-Thrust Rendezvous. This problem is taken from the literature, where a solution is available for comparison's sake [10, 11]. Consider the planar, relative motion of two particles in a central gravity field, expressed in a rotating frame with normalized units: the length unit is equal to the orbital radius, the time unit is such that the orbital period is 2\pi, and the gravitational parameter is equal to 1. In these dynamics, the state, x = (x_1, x_2, x_3, x_4), represents the radial and tangential displacements (x_1, x_2) and the radial and tangential velocity deviations (x_3, x_4), respectively. The control, u = (u_1, u_2), is made up of the radial and tangential accelerations, respectively. The equations of motion are

    \dot{x}_1 = x_3,
    \dot{x}_2 = x_4,
    \dot{x}_3 = 2 x_4 - (1 + x_1) (1/r^3 - 1) + u_1,
    \dot{x}_4 = -2 x_3 - x_2 (1/r^3 - 1) + u_2,   (37)

with r^2 = (1 + x_1)^2 + x_2^2. The initial condition is x_i = (0.2, 0.2, 0.1, 0.1). Two different problems are solved to test the algorithm in both hard and soft constrained conditions.

Hard Constrained Rendezvous. The HCP consists in minimizing

    J = \frac{1}{2} \int_{t_i}^{t_f} u^\top u \, dt   (38)

with the given final condition x_f = (0, 0, 0, 0) and t_i = 0, t_f = 1.

Soft Constrained Rendezvous. The SCP considers the following objective function:

    J = \frac{1}{2} x^\top(t_f) S x(t_f) + \frac{1}{2} \int_{t_i}^{t_f} u^\top u \, dt,   (39)

with S = diag(25, 15, 10, 10), t_i = 0, and t_f = 1 (x_f is free).

The differential equations (37) are factorized into the form of (8) as

    \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{bmatrix} = \underbrace{\begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ g(x_1, x_2) (1 + 1/x_1) & 0 & 0 & 2 \\ 0 & g(x_1, x_2) & -2 & 0 \end{bmatrix}}_{A(x)} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix},   (40)

with g(x_1, x_2) = -1/[(x_1 + 1)^2 + x_2^2]^{3/2} + 1. Thus, the problem is put into the pseudo-LQR form (8)-(9) by defining A(x) and B as in (40) and by setting Q = 0_{4 \times 4} and R = I_{2 \times 2}.


Figure 1: Hard constrained rendezvous: (a) x_1 versus x_2; (b) x_3 versus x_4; (c) u_1 versus u_2.

The two problems have been solved with the developed method. Table 1 reports the details of the HCP and SCP, whose solutions are shown in Figures 1 and 2, respectively. In Table 1, J is the objective function at the final iteration, Iter is the number of iterations, and the CPU time is the computational time (this refers to an Intel Core 2 Duo 2 GHz with 4 GB RAM running Mac OS X 10.6). The termination tolerance \varepsilon in (12) is 10^{-9}. The optimal solutions found replicate those already known in the literature [10, 11], indicating the effectiveness of the developed method.

5.2. Low-Thrust Orbital Transfer. In this problem, the controlled, planar Keplerian motion of a spacecraft in polar coordinates is studied. The dynamics are written in scaled coordinates, where the length unit corresponds to the radius of the initial orbit, the time unit is such that its period is 2\pi, and the gravitational parameter is 1. The state, x = (x_1, x_2, x_3, x_4), is made up of the radial distance from the attractor (x_1), the phase angle (x_2), the radial velocity (x_3), and the angular velocity (x_4), whereas the control, u = (u_1, u_2), corresponds to the radial and transversal accelerations, respectively [12, 13]. The equations of motion are

    \dot{x}_1 = x_3,
    \dot{x}_2 = x_4,
    \dot{x}_3 = x_1 x_4^2 - 1/x_1^2 + u_1,
    \dot{x}_4 = -2 x_3 x_4 / x_1 + u_2 / x_1,   (41)

and the objective function is

    J = \frac{1}{2} \int_{t_i}^{t_f} u^\top u \, dt,   (42)

with t_i = 0 and t_f = \pi. The initial state corresponds to the conditions on the initial orbit; that is, x_i = (1, 0, 0, 1). Two different HCPs are solved, which correspond to the final states


Figure 2: Soft constrained rendezvous: (a) x_1 versus x_2; (b) x_3 versus x_4; (c) u_1 versus u_2.

x_f = (1.52, \pi, 0, 1.52^{-3/2}) and x_f = (1.52, 1.5\pi, 0, 1.52^{-3/2}), respectively. This setup mimics an Earth-Mars low-thrust transfer. The dynamics (41) and the objective function (42) are put in the form (8)-(9) by defining Q = 0_{4 \times 4}, R = I_{2 \times 2}, and

    A(x) = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1/x_1^3 & 0 & 0 & x_1 x_4 \\ 0 & 0 & -2 x_4 / x_1 & 0 \end{bmatrix},   B(x) = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1/x_1 \end{bmatrix}.   (43)
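The same kind of consistency check applies here: A(x) x + B(x) u must reproduce the Keplerian right-hand side of (41). The sketch below (plain NumPy, with arbitrary control samples chosen for illustration) verifies this at the paper's initial state and at a second generic point; since x_1 is the orbital radius, the 1/x_1 terms are well away from their singularity.

```python
import numpy as np

# Factorization check for the orbital transfer, eq. (43): A(x) x + B(x) u
# must reproduce the Keplerian right-hand side of eq. (41).
def f(x, u):
    x1, x2, x3, x4 = x
    return np.array([x3,
                     x4,
                     x1*x4**2 - 1/x1**2 + u[0],
                     -2*x3*x4/x1 + u[1]/x1])

def A(x):
    x1, x4 = x[0], x[3]
    return np.array([[0, 0, 1, 0],
                     [0, 0, 0, 1],
                     [-1/x1**3, 0, 0, x1*x4],
                     [0, 0, -2*x4/x1, 0]])

def B(x):
    x1 = x[0]
    return np.array([[0., 0.], [0., 0.], [1., 0.], [0., 1./x1]])

x = np.array([1.0, 0.0, 0.0, 1.0])   # the paper's initial state
u = np.array([0.1, 0.2])             # arbitrary control sample
```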

Table 1: Rendezvous solutions details.

    Problem | J      | Iter | CPU time (s)
    HCP     | 0.9586 | 5    | 0.375
    SCP     | 0.5660 | 6    | 0.426

Table 2: Earth-Mars transfer details.

    Problem          | J      | Iter | CPU time (s)
    x_{2,f} = \pi    | 0.5298 | 22   | 5.425
    x_{2,f} = 1.5\pi | 4.8665 | 123  | 41.831

The two HCPs have been solved with the developed method. The solution details are reported in Table 2, whose columns have the same meaning as in Table 1. It can be seen that more iterations and an increased computational burden are required to solve this problem. The solution with x_{2,f} = 1.5\pi is reported in Figure 3.

Figure 3: Orbital transfer with x_{2,f} = 1.5\pi: (a) transfer trajectory; (b) control profile (u_1 and u_2 versus time).

    6. Conclusion

In this paper, an approximate method to solve nonlinear optimal control problems has been presented, with applications to sample cases in astrodynamics. With this method, the nonlinear dynamics and objective function are factorized into pseudolinear and quadratic-like forms, which are similar to those used in the state-dependent Riccati equation approach. Once in this form, a number of time-varying linear quadratic regulator problems are solved. A state transition matrix approach is used to deal with the time-varying linear quadratic regulators. The results show the effectiveness of the method, which can be used either to obtain suboptimal solutions or to provide initial solutions to more accurate optimizers.

    References

[1] A. E. Bryson and Y. C. Ho, Applied Optimal Control, John Wiley & Sons, New York, NY, USA, 1975.

[2] L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko, The Mathematical Theory of Optimal Processes, John Wiley & Sons, New York, NY, USA, 1962.

[3] J. T. Betts, Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, SIAM, Philadelphia, Pa, USA, 2010.

[4] B. Conway, "Spacecraft trajectory optimization using direct transcription and nonlinear programming," in Spacecraft Trajectory Optimization, pp. 37-78, Cambridge University Press, Cambridge, UK, 2010.

[5] T. Cimen and S. P. Banks, "Global optimal feedback control for general nonlinear systems with nonquadratic performance criteria," Systems and Control Letters, vol. 53, no. 5, pp. 327-346, 2004.

[6] T. Cimen and S. P. Banks, "Nonlinear optimal tracking control with application to super-tankers for autopilot design," Automatica, vol. 40, no. 11, pp. 1845-1863, 2004.

[7] C. P. Mracek and J. R. Cloutier, "Control designs for the nonlinear benchmark problem via the state-dependent Riccati equation method," International Journal of Robust and Nonlinear Control, vol. 8, no. 4-5, pp. 401-433, 1998.

[8] J. D. Pearson, "Approximation methods in optimal control," Journal of Electronics and Control, vol. 13, pp. 453-469, 1962.

[9] A. Wernli and G. Cook, "Suboptimal control for the nonlinear quadratic regulator problem," Automatica, vol. 11, no. 1, pp. 75-84, 1975.

[10] C. Park, V. Guibout, and D. J. Scheeres, "Solving optimal continuous thrust rendezvous problems with generating functions," Journal of Guidance, Control, and Dynamics, vol. 29, no. 2, pp. 321-331, 2006.

[11] C. Park and D. J. Scheeres, "Determination of optimal feedback terminal controllers for general boundary conditions using generating functions," Automatica, vol. 42, no. 5, pp. 869-875, 2006.

[12] A. Owis, F. Topputo, and F. Bernelli-Zazzera, "Radially accelerated optimal feedback orbits in central gravity field with linear drag," Celestial Mechanics and Dynamical Astronomy, vol. 103, no. 1, pp. 1-16, 2009.

[13] F. Topputo, A. H. Owis, and F. Bernelli-Zazzera, "Analytical solution of optimal feedback control for radially accelerated orbits," Journal of Guidance, Control, and Dynamics, vol. 31, no. 5, pp. 1352-1359, 2008.
