· Quasilinearization and Nonlinear
Boundary-Value Problems
Richard E. Bellman and Robert E. Kalaba
The RAND Corporation
AMERICAN ELSEVIER PUBLISHING COMPANY, INC., New York, 1965
SOLE DISTRIBUTORS FOR GREAT BRITAIN
ELSEVIER PUBLISHING COMPANY, LTD.
Barking, Essex, England
SOLE DISTRIBUTORS FOR THE CONTINENT OF EUROPE
ELSEVIER PUBLISHING COMPANY
Amsterdam, The Netherlands
Library of Congress Catalog Card Number: 65-22807
COPYRIGHT © 1965 BY THE RAND CORPORATION
ALL RIGHTS RESERVED. THIS BOOK OR ANY PART THEREOF
MUST NOT BE REPRODUCED IN ANY FORM WITHOUT THE WRITTEN
PERMISSION OF THE PUBLISHER, AMERICAN ELSEVIER PUBLISHING
COMPANY, INC., 52 VANDERBILT AVENUE, NEW YORK 10017, N.Y.
PRINTED IN THE UNITED STATES
CONTENTS

Introduction

CHAPTER ONE
The Riccati Equation

1. Introduction
2. Newton-Raphson Method
3. Multidimensional Version
4. Square Roots
5. Quasilinearization
6. The Riccati Equation
7. The First-Order Linear Equation
8. Solution of Riccati Equation in Terms of Maximum Operation
9. Upper and Lower Bounds
10. Successive Approximations via Quasilinearization
11. Monotonicity
12. Discussion
13. A Fundamental Lemma
Comments and Bibliography
CHAPTER TWO
Two-Point Boundary-Value Problems for Second-Order Differential Equations
1. Introduction
2. Two-Point Boundary-Value Problems for the Second-Order Linear Differential Equation
3. The Inhomogeneous Equation
4. Vector-Matrix Approach
5. Green's Functions
6. Convexity
7. Quasilinearization
8. Discussion
9. Existence and Boundedness
10. Convergence
11. Convergence of the Picard Algorithm
12. A Numerical Example
13. General Second-Order Nonlinear Differential Equations
14. A Numerical Example
15. Calculus of Variations
16. Quasilinearization
17. Upper and Lower Bounds via Quasilinearization
18. Solution of Inner Problem
19. Further Uses of Quasilinearization
20. Dynamic Programming
21. Invariant Imbedding
22. Combination of Dynamic Programming and Quasilinearization
Comments and Bibliography
CHAPTER THREE
Monotone Behavior and Differential Inequalities
1. Monotonicity
2. An Elementary Approach
3. Relations Between Solutions and Coefficients
4. A Useful Factorization of the Second-Order Linear Differential Operator
5. A Positivity Result
6. A Related Parabolic Partial Differential Equation
7. Characteristic Values Again
8. Variational Approach
9. Discussion
10. Improvement of Convergence Estimates
11. Discussion
Comments and Bibliography
CHAPTER FOUR
Systems of Differential Equations, Storage and Differential Approximation
1. Introduction
2. Quasilinearization Applied to Systems
3. Solution of Linear System
4. A Numerical Example
5. Multipoint Boundary-Value Problems
6. Successive Approximations and Storage
7. Simultaneous Calculation of Approximations
8. Discussion
9. Back and Forth Integration
10. Differential-Difference Equations
11. Reduction to Differential Equations
12. Functional Differential Equations
13. Differential Approximation
14. Simple Version of Differential Approximation
15. A Renewal Equation
16. Discussion
17. Storage and Memory
18. Monotonicity for Systems
19. Monotonicity for Nth-Order Linear Differential Equations
20. Discussion
21. Use of the Gram-Schmidt Orthogonalization Procedure in the Numerical Solution of Linear Boundary-Value Problems
22. Dynamic Programming
23. Invariant Imbedding
Comments and Bibliography
CHAPTER FIVE
Partial Differential Equations
1. Introduction
2. Parabolic Equations
3. A Specific Equation
4. Difference Approximation
5. Comparison Between Picard and Quasilinearization
6. Gradient Techniques
7. An Elliptic Equation
8. Computational Aspects
9. Positivity and Monotonicity
10. The Hopf-Lax Equation
Comments and Bibliography

CHAPTER SIX
Applications in Physics, Engineering, and Biology
1. Introduction
2. Optimal Design and Control
3. An Example
4. Numerical Results
5. An Inverse Problem in Radiative Transfer
6. Analytic Formulation
7. Numerical Results
8. The Van der Pol Equation
9. Orbit Determination as a Multipoint Boundary-Value Problem
10. Numerical Results
11. Periodogram Analysis
12. Numerical Results
13. Discussion
14. Periodic Forcing Terms
15. Estimation of Heart Parameters
16. Basic Assumptions
17. The Inverse Problem
18. Numerical Experiments
Comments and Bibliography
CHAPTER SEVEN
Dynamic Programming and Quasilinearization
1. Introduction
2. A Basic Functional Equation
3. Approximation in Policy Space
4. Dynamic Programming and Quasilinearization
5. Functional Equations
6. Discussion
7. Dynamic Programming and Differential Approximation
8. Dynamic Programming and the Identification of Systems
9. Discussion
Comments and Bibliography
APPENDICES
One: Minimum Time Program
Two: Design and Control Program
Three: Program for Radiative Transfer-Inverse Problem for Two Slabs, Three Constants
Four: Van der Pol Equation Program
Five: Orbit Determination Program
Six: Cardiology Program

Index
INTRODUCTION
Modern electronic computers can provide the numerical solution of systems of one thousand simultaneous nonlinear ordinary differential equations, given a complete set of initial conditions, with accuracy and speed. It follows that as soon as physical, economic, engineering, and biological problems are reduced to the task of solving initial value problems for ordinary differential equations, they are well on their way to complete solution. Hence, the great challenge to the mathematician is to find ways to construct the requisite theories and formulations, approximations and reductions.
Sometimes, as in planetary theory, the basic equations are ordinary differential equations. In this case, however, some of the fundamental problems do not provide us with a full set of initial conditions. In fact, the basic problem of orbit determination is to determine a complete set of initial conditions, given certain angular measurements made at various times. In other areas, particularly in mathematical physics and mathematical biology, the underlying equations are partial differential equations, differential-integral equations, differential-difference equations, and equations of even more complex type. Many preliminary analyses of the precise questions to be asked, and the analytical procedures to be used, must be made in order to exploit fully the powerful computational capabilities now available. Whereas in earlier times the preferred procedure was to simplify so as to obtain linear functional equations, the current aim is to transform all computational questions to initial value problems for ordinary differential equations, be they linear or nonlinear. The theories of dynamic programming and invariant imbedding achieve this in a number of areas through the introduction of new state
variables and the use of semigroup properties in space, time, and structure. Quasilinearization achieves this objective by combining linear approximation techniques with the capabilities of the digital computer in various adroit fashions. The approximations are carefully constructed to yield rapid convergence, and monotonicity as well, in many cases.
The origin of quasilinearization lies in the theory of dynamic programming. The connections with classical geometry and analysis are, nonetheless, quite diverse and do not constitute any simple pattern. A basic ingredient is geometric duality theory of the type associated with Minkowski, Fenchel, Kuhn and Tucker, and many others; and there is a close association with the duality introduced into the calculus of variations by Friedrichs and since extended by so many others. In another direction, there is a strong heritage from the theory of differential inequalities started by Caplygin and continued by a very active Russian school. Intimately connected with this is the modern theory of positive and monotone operators created by Collatz and vigorously developed by Redheffer and others. This, in turn, is associated with the theory of generalized convexity pursued by Beckenbach, Bonsall, Peixoto, and a number of other analysts. As far as many applications are concerned, we seem merely to be applying Newton-Raphson-Kantorovich approximation techniques in function space.
Without dwelling at length on any of these very intriguing areas, which themselves merit monographs, we mention the connections in the text and give the interested reader a number of references for further reading.
The objectives of the theory of quasilinearization are easily stated. First, we desire a uniform approach to the study of the existence and uniqueness of the solutions of ordinary and partial differential equations subject to initial and boundary-value conditions. Furthermore, we want representation theorems for these solutions in terms of the solutions of linear equations. Finally, we want a uniform approach to the numerical solution of both descriptive and variational
problems which possesses various monotonicity properties and rapidity of convergence.
Throughout this work, when we use the terms "computational solution" or "numerical solution," we are thinking in terms of algorithms to be carried out by means of digital computers.
It is esthetically pleasing, pedagogically desirable, and efficient scientifically that the same conceptual and analytic approach serves all of these purposes. This volume is intended as an introduction to quasilinearization, aimed both at those who are solely interested in the analysis, and those who are primarily busied with applications.
Let us now turn to a brief description of the contents of the seven chapters. To acquaint the reader with the analytic techniques in their simplest form, we begin in Chapter One with a discussion of the application of quasilinearization to the study of the Riccati equation,

(1) u' + u² + a₁(t)u + a₂(t) = 0.

This is one of the most important ordinary differential equations in analysis, both in its own right, and because of its connection with the second-order linear differential equation

(2) w'' + a₁(t)w' + a₂(t)w = 0.

It is shown how quasilinearization permits us to obtain an explicit
analytic representation for u in terms of quadratures and the maximum operation. From this, both upper and lower bounds for u and w can be deduced.
In Chapter Two, we turn to a discussion of two-point boundary-value problems of the form
(3) u'' = g(u', u, t),

with the boundary conditions

(4) h₁(u'(0), u(0)) = 0, h₂(u'(1), u(1)) = 0.

The quadratic convergence of the sequence of approximations generated by quasilinearization is studied, and some numerical examples are given.
Chapter Three is devoted to the examination of conditions under which the sequence of approximations converges monotonically. We are led to study linear differential inequalities of the form
(5) u'' + p(t)u' + q(t)u ≥ 0.

In this way, we encounter Green's functions. To treat the sign-preserving character of these functions, we employ a number of interesting techniques, such as factorization of the operator, variational methods, and difference approximations to partial differential equations. These questions are intimately related to characteristic value problems.
In Chapter Four, some corresponding problems of monotonicity are treated for higher-order differential equations and for linear systems. The analytic results here are quite sketchy and not easy to obtain. A number of examples of the numerical efficacy of the methods are given.
A close scrutiny of the effectiveness of the numerical procedures suggested shows that the use of successive approximations in a routine fashion can result in storage difficulties when the order of the differential equation or system is high. We are prompted then to develop techniques for using successive approximations without excessive reliance on rapid-access storage, and, in general, to study the fascinating area of the storage and retrieval of information. A very simple method for avoiding storage is presented, and then applied to the numerical solution of differential-difference equations of the form
(6) u'(t) = g(u(t), u(t − t₁), t),

and to functional differential equations of the form

(7) u'(t) = g(u(t), u(h(t)), t).
Equations of this type play important roles in modern mathematical biology.
In a further effort to overcome the storage problem, we turn to the method of differential approximation. This method was developed
in collaboration with J. M. Richardson of the Hughes Research Laboratories at Malibu to provide analytic approximations to various classes of complex functional equations. A general formulation for differential equations is the following.
Given a vector equation of the form
(8) dx/dt = g(x,t), x(0) = c,

we wish to find a more tractable equation of the form

(9) dy/dt = h(y,t,a), y(0) = b,

with the property that y ≈ x. Here a and b are vectors to be chosen so that ‖x − y‖, a suitably defined measure such as ∫₀ᵀ (x − y, x − y) dt, is small. This variational problem is amenable to the technique of quasilinearization. As an application of this approach, we show how to obtain an accurate solution of a renewal equation such as

(10) u(t) = 1 + ∫₀ᵗ exp(−(t − s)²) u(s) ds
by means of a finite system of ordinary differential equations. In Chapter Five, we present the results of some applications of
quasilinearization to the solution of the nonlinear elliptic partial differential equation

(11) u_xx + u_yy = g(u),

and the nonlinear parabolic partial differential equation

(12) u_t = u_xx + g(u),
subject to various initial and boundary conditions. Although only a few cases have been run, because of the time
involved, there are clear indications that there is much to be gained in both time and accuracy by employing quasilinearization in the numerical solution of partial differential equations. Much, however, remains to be done before any definitive statements can be made
comparable to those that can be uttered as far as ordinary differential equations are concerned.
Chapter Six contains a number of examples which illustrate some of the ways in which quasilinearization can be applied to treat descriptive and variational processes arising in design and control, in orbit determination, in the detection of periodicities, and in the identification of systems. This last topic covers inverse problems in radiative transfer and cardiology.
The final chapter on dynamic programming begins with an explanation of the way in which approximation in policy space leads to quasilinearization, continues with an indication of how dynamic programming can be used to bypass two-point boundary values arising in quadratic variational problems, and concludes with a discussion of how local quasilinearization and global dynamic programming can be used to widen the domains of applicability of both theories.
As always, we have imposed upon friends and colleagues for their advice and opinions. We particularly wish to thank T. A. Brown, O. Gross, M. Juncosa, and R. Sridhar in this connection. Without the aid of J. Buell, B. Kotkin, and especially H. Kagiwada in writing the computer programs, and in pursuing the minutiae that make the difference between success and failure, much of this would have been quite visionary. Finally, we express our appreciation of Jeanette Blood whose patience, industry, intelligence, and sense of humor enabled us to complete the chore of book writing.
All that we have done is a prologue to what remains to be done in the future. We hope that others will find research in these domains as interesting and rewarding as we have, and we hope also that our methods will be used to cultivate intensively much that we have only sampled.
This study was prepared as part of a program of research undertaken by The RAND Corporation for the Advanced Research Projects Agency, the National Institutes of Health, and United States Air Force Project RAND.
Chapter One
THE RICCATI EQUATION
1. INTRODUCTION
In this chapter we wish to apply approximation techniques to the study of one of the fundamental equations of mathematical analysis, the first-order nonlinear differential equation
(1.1) u' + u² + p(t)u + q(t) = 0,

known as the Riccati equation. This equation plays a fundamental role in the analysis of the behavior of the solutions of the second-order linear differential equation

(1.2) w'' + p(t)w' + q(t)w = 0,
and thus occupies an important position in quantum mechanics in connection with the study of the Schrodinger equation. Furthermore, together with its multidimensional analogues, it enjoys a central place in dynamic programming and invariant imbedding.
Consideration of the equation of (1.1) will give us an opportunity to illustrate the use of quasilinearization techniques, untrammeled by arithmetic and algebraic underbrush. The basic properties of monotonicity and quadratic convergence can be established in particularly simple fashion, and we can thus, in a leisurely fashion, set the pattern for much that follows.
2. NEWTON-RAPHSON METHOD
To motivate an approach we shall use repeatedly throughout this volume, let us begin with the problem of finding a sequence of approximations to the root of the scalar equation
(2.1) f(x) = 0.
Fig. 1.1
We shall assume that f(x) is monotone decreasing for all x and strictly convex, that is, f''(x) > 0. (See Fig. 1.1.) Hence, the root r is simple, f'(r) ≠ 0.

Let x₀ be an initial approximation to the root r, with x₀ < r (f(x₀) > 0), and suppose that we approximate to f(x) by a linear
Fig. 1.2
function of x determined by the value and slope of the function f(x) at x = x₀,

(2.2) y = f(x₀) + (x − x₀)f'(x₀).

(See Fig. 1.2.) A further approximation to r is then obtained by solving the linear equation in x,

(2.3) f(x₀) + (x − x₀)f'(x₀) = 0.

This yields the second approximation,

(2.4) x₁ = x₀ − f(x₀)/f'(x₀).

The process is then repeated at x₁, leading to a new value x₂, and so on, as illustrated in Fig. 1.2. The general recurrence relation is

(2.5) xₙ₊₁ = xₙ − f(xₙ)/f'(xₙ).
It is clear from Fig. 1.2 that

(2.6) x₀ < x₁ < x₂ < ⋯ < r,

if x₀ < r initially. Analytically, this follows from the inequalities f(xₙ) > 0, f'(xₙ) < 0. This monotonicity is an important property computationally. We would prefer to have alternate approximations above and below r, that is, x₀ < x₂ < x₄ < ⋯ < r < ⋯ < x₃ < x₁, but this is usually very difficult to arrange.

A second property, even more important computationally, is perhaps not so obvious. It is what is called quadratic convergence. We assert that

(2.7) |xₙ₊₁ − r| ≤ k |xₙ − r|²,

where k is independent of n, under a reasonable hypothesis concerning f'''(x).
To see this, write

(2.8) xₙ₊₁ − r = xₙ − f(xₙ)/f'(xₙ) − r
        = xₙ − f(xₙ)/f'(xₙ) − (r − f(r)/f'(r))
        = φ(xₙ) − φ(r),

where φ(x) = x − f(x)/f'(x). Using the first three terms of the Taylor expansion with a remainder, we obtain

(2.9) xₙ₊₁ − r = (xₙ − r)φ'(r) + ((xₙ − r)²/2) φ''(θ),

where x₀ ≤ xₙ ≤ θ ≤ r. Since

(2.10) φ'(x) = f(x)f''(x)/f'(x)²,

we see that φ'(r) = 0. Hence

(2.11) |xₙ₊₁ − r| ≤ k |xₙ − r|²,

where k = max_{x₀≤θ≤r} |φ''(θ)|/2, a bound depending upon f'''(x). Furthermore,

(2.12) xₙ₊₁ − xₙ = φ(xₙ) − φ(xₙ₋₁)
        = (xₙ − xₙ₋₁)φ'(xₙ₋₁) + ((xₙ − xₙ₋₁)²/2) φ''(θ),

where xₙ₋₁ ≤ θ ≤ xₙ, which, using (2.10), leads to

(2.13) xₙ₊₁ − xₙ = (xₙ − xₙ₋₁) f(xₙ₋₁)f''(xₙ₋₁)/f'(xₙ₋₁)² + ((xₙ − xₙ₋₁)²/2) φ''(θ).

Referring to (2.5), we replace f(xₙ₋₁)/f'(xₙ₋₁) by xₙ₋₁ − xₙ. Hence, (2.13) yields

(2.14) xₙ₊₁ − xₙ = −(xₙ − xₙ₋₁)² f''(xₙ₋₁)/f'(xₙ₋₁) + ((xₙ − xₙ₋₁)²/2) φ''(θ).

Thus,

(2.15) |xₙ₊₁ − xₙ| ≤ k₁ |xₙ − xₙ₋₁|²,

where

(2.16) k₁ = max_{x₀≤θ≤r} [ |f''(θ)|/|f'(θ)| + |φ''(θ)|/2 ].
The relation in (2.15) is also called quadratic convergence.
It follows that as Xn approaches r, there is an enormous acceleration of convergence. Asymptotically, each additional step doubles the number of correct digits in the approximation. This is a most important property when large-scale operations are involved, not only because the computing time is usually directly proportional to the number of iterations, but also because round-off error can increase in a serious fashion as the number of stages increases. It is interesting then to conceive of computing as being analogous to summing a divergent series which is asymptotic. There is an optimal partial sum for which the error is minimum.
3. MULTIDIMENSIONAL VERSION
The method readily generalizes to higher dimensions. Consider the problem of determining a solution of a system of simultaneous equations
(3.1) fᵢ(x₁, x₂, …, x_N) = 0, i = 1, 2, …, N,
or, in vector notation,
(3.2) f(x) = 0.
If x₀ is an initial approximation, write

(3.3) f(x₀) + J(x₀)(x − x₀) = 0,

where J(x₀) is the Jacobian matrix

(3.4) J(x₀) = (∂fᵢ/∂xⱼ), the partial derivatives being evaluated at x = x₀.

The new approximation is thus

(3.5) x₁ = x₀ − J(x₀)⁻¹f(x₀),

and, generally, we obtain the recurrence relation

(3.6) xₙ₊₁ = xₙ − J(xₙ)⁻¹f(xₙ).
It is now not difficult to obtain the analogues of the results of the previous section under appropriate assumptions concerning f(x) and its partial derivatives.
Analogous results hold for functional equations, but require more sophisticated concepts both to state and establish.
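A minimal sketch of the multidimensional recurrence of this section for N = 2, with the Jacobian inverted by the explicit 2×2 formula; the particular system (a circle intersected with a line) is an illustrative choice, not an example from the text.

```python
def newton_system(F, J, x, steps=12):
    """Iterate x_{n+1} = x_n - J(x_n)^{-1} f(x_n) for a 2x2 system,
    inverting the Jacobian by the explicit 2x2 formula."""
    for _ in range(steps):
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        f1, f2 = F(x)
        # components of J(x)^{-1} f(x)
        dx = (d * f1 - b * f2) / det
        dy = (a * f2 - c * f1) / det
        x = (x[0] - dx, x[1] - dy)
    return x

# Illustrative system: x^2 + y^2 = 4 together with x = y; root (sqrt 2, sqrt 2).
F = lambda p: (p[0] ** 2 + p[1] ** 2 - 4.0, p[0] - p[1])
J = lambda p: ((2.0 * p[0], 2.0 * p[1]), (1.0, -1.0))

root = newton_system(F, J, (1.0, 0.5))
assert abs(root[0] - 2.0 ** 0.5) < 1e-12 and abs(root[0] - root[1]) < 1e-12
```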
4. SQUARE ROOTS
As a particularly interesting application of the procedure discussed in Sec. 2, consider the problem of determining the positive solution of x² = y, y > 0, which is to say, that of obtaining the square root of y.

Setting f(x) = x² − y, the recurrence relation becomes

(4.1) xₙ₊₁ = xₙ − (xₙ² − y)/(2xₙ) = xₙ/2 + y/(2xₙ),
a very well-known algorithm, ascribed to Heron of Alexandria.

The demonstration of monotonicity and quadratic convergence by no means disposes of all problems connected with the use of this algorithm. Suppose that we are required to extract the square roots of a set of positive numbers y₁, y₂, …, y_N, lying in a fixed interval [1,a]. Suppose that we don't know what these numbers will be in advance, but that we are required to choose an initial approximation x₀ in advance. What is the best fixed initial approximation to choose?

One way to formulate this question precisely is to agree that we will iterate (4.1) exactly N times and ask for the value of x₀ that minimizes the function
(4.2)
Let x₀(N) denote a value (actually the value) that minimizes. It can be shown that

(4.3) lim_{N→∞} x₀(N) = a^{1/4}.
This result is due to Sylvester and Hammer. Analogous results can be derived for the algorithm determined by the problem of extracting a kth root, k > 1, namely

(4.4) xₙ₊₁ = xₙ − (xₙᵏ − y)/(k xₙᵏ⁻¹).
The corresponding limit for xo(N) can be readily guessed, but it has not been established rigorously. The analogous problems for more general algebraic, transcendental, and functional equations have apparently never been discussed, or even posed.
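The iteration (4.1), and the kth-root iteration obtained by applying the Newton-Raphson step to f(x) = xᵏ − y, can be sketched as follows; the numerical inputs are illustrative choices, not values from the text.

```python
def heron(y, x0, steps):
    """Square roots by (4.1): x_{n+1} = x_n/2 + y/(2 x_n)."""
    x = x0
    for _ in range(steps):
        x = x / 2.0 + y / (2.0 * x)
    return x

def kth_root(y, k, x0, steps):
    """Newton-Raphson iteration for f(x) = x^k - y, the generalization
    mentioned at the end of Sec. 4."""
    x = x0
    for _ in range(steps):
        x = x - (x ** k - y) / (k * x ** (k - 1))
    return x

assert abs(heron(2.0, 1.0, 6) - 2.0 ** 0.5) < 1e-12
assert abs(kth_root(27.0, 3, 2.0, 8) - 3.0) < 1e-12
```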
5. QUASILINEARIZATION
Let us now consider the equation
(5.1) x + bxⁿ = a, a, b > 0, n > 1,
and indicate how quasilinearization techniques permit us to obtain a representation for the unique positive solution in terms of the maximum operation.
We begin with the observation that
(5.2) xⁿ = max_{y≥0} [nxyⁿ⁻¹ − (n − 1)yⁿ]
for x ;;:: 0 and n > 1, with the maximum attained at y = x. Subsequently, we shall examine the source of this result. At the moment, let us accept it as an easily established identity.
Then (5.1) may be written

(5.3) x + max_{y≥0} [bnxyⁿ⁻¹ − b(n − 1)yⁿ] = a,

since b > 0, or, equivalently,

(5.4) x + bnxyⁿ⁻¹ − b(n − 1)yⁿ ≤ a

for all y ≥ 0. Hence, since 1 + bnyⁿ⁻¹ > 0,

(5.5) x ≤ [a + b(n − 1)yⁿ]/[1 + bnyⁿ⁻¹]

for all y ≥ 0. Since equality is attained for y = x, we may write

(5.6) x = min_{y≥0} [a + b(n − 1)yⁿ]/[1 + bnyⁿ⁻¹],
an explicit analytic representation for the desired solution. Expressions of this type are useful since they furnish quick and
convenient upper bounds by appropriate choice of y.
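A quick numerical check of the representation (5.6), as a sketch: every trial y ≥ 0 yields an upper bound on the positive root of x + bxⁿ = a, with equality at y = x. The parameter values are illustrative assumptions; the root used for comparison is located by bisection.

```python
def upper_bound(y, a, b, n):
    """Right-hand side of (5.6); an upper bound on the root for every y >= 0."""
    return (a + b * (n - 1) * y ** n) / (1.0 + b * n * y ** (n - 1))

# Illustrative parameters: x + x^3 = 2, whose positive root is x = 1.
a, b, n = 2.0, 1.0, 3

# Locate the root by bisection for comparison.
lo, hi = 0.0, a
for _ in range(60):
    mid = (lo + hi) / 2.0
    if mid + b * mid ** n < a:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2.0

# Every trial y gives an upper bound; the choice y = root attains it.
trial_bounds = [upper_bound(y, a, b, n) for y in (0.25, 0.5, 1.5, 2.0)]
assert all(root <= t + 1e-9 for t in trial_bounds)
assert abs(upper_bound(root, a, b, n) - root) < 1e-9
```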
6. THE RICCATI EQUATION
Let us now apply this technique of quasilinearization to an important functional equation, the Riccati equation. The analysis that follows not only has tutorial value, but is useful in its own right, since, as we have mentioned above, the Riccati equation, together with its multidimensional and function space analogues, plays a fundamental role in modern control theory and mathematical physics, particularly in connection with dynamic programming and invariant imbedding.
The Riccati equation is a first-order nonlinear differential equation having the form
(6.1) v' + v² + p(t)v + q(t) = 0.
Despite its rather simple form, it cannot be solved explicitly in terms of quadratures and the elementary functions of analysis for arbitrary coefficients p(t) and q(t).
The connection with the second-order linear differential equation is at first sight purely formal. Starting with the equation

(6.2) u'' + p(t)u' + q(t)u = 0,

we set

(6.3) v = u'/u,

and obtain (6.1) as the equation for v.
As the reader will see subsequently in our discussion of variational problems via dynamic programming, the connection between the two equations is much more profound and meaningful, and more varied.
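The substitution (6.3) is easy to verify numerically. As an illustrative case not drawn from the text, take p(t) = 0 and q(t) = 1, so that u = cos t solves (6.2) and v = u'/u = −tan t should satisfy (6.1); the derivative v' is estimated by a central difference.

```python
import math

# For u'' + u = 0 (p = 0, q = 1), u = cos t gives v = u'/u = -tan t,
# which should satisfy the Riccati equation v' + v^2 + 1 = 0.
v = lambda t: -math.tan(t)
h = 1e-5  # step for a central-difference estimate of v'

max_residual = 0.0
for t in (0.1, 0.4, 0.8, 1.2):
    v_prime = (v(t + h) - v(t - h)) / (2.0 * h)
    max_residual = max(max_residual, abs(v_prime + v(t) ** 2 + 1.0))

assert max_residual < 1e-6
```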
7. THE FIRST-ORDER LINEAR EQUATION
Prior to our study of (6.1) via quasilinearization and successive approximations, let us for the sake of completeness recall the solution of the first-order linear equation
(7.1) dw/dt − f(t)w = g(t), w(0) = c.

Multiplying through by the integrating factor exp(−∫₀ᵗ f(s) ds), we obtain the equation

(7.2) d/dt [w·exp(−∫₀ᵗ f(s) ds)] = g(t)·exp(−∫₀ᵗ f(s) ds),

which we solve by immediate integration. Hence, the solution of (7.1) is given by the expression

(7.3) w = c·exp(∫₀ᵗ f(s) ds) + exp(∫₀ᵗ f(s) ds) ∫₀ᵗ g(s)·exp(−∫₀ˢ f(r) dr) ds,
a result we shall use below. In place of this rather cumbersome formula, let us write
(7.4) w = T(f,g),

which defines the operation T(f,g), a linear inhomogeneous operation on g. Observe that the positivity of the exponential function allows us to conclude that g₁(t) ≥ g₂(t) for t ≥ 0 implies T(f,g₁) ≥ T(f,g₂) for t ≥ 0, a most important and useful property, which provides the basis for the subsequent results of this chapter.
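The operator T(f,g) and its positivity property can be sketched numerically; the crude explicit-Euler integrator below stands in for the closed form (7.3), and the particular f, g, interval, and step size are illustrative assumptions.

```python
import math

def T(f, g, c, t_max=1.0, h=1e-3):
    """Approximate w(t_max) for w' - f(t) w = g(t), w(0) = c, by explicit Euler;
    a crude numerical stand-in for the closed form (7.3)."""
    w, steps = c, int(round(t_max / h))
    for i in range(steps):
        t = i * h
        w += h * (f(t) * w + g(t))
    return w

f = lambda t: 1.0

# Check against the exact solution of w' - w = 1, w(0) = 0: w(1) = e - 1.
assert abs(T(f, lambda t: 1.0, 0.0) - (math.e - 1.0)) < 5e-3

# Positivity: g1 >= g2 implies T(f, g1) >= T(f, g2) (same f, same initial value).
assert T(f, lambda t: 1.0, 0.0) >= T(f, lambda t: 0.5, 0.0) >= T(f, lambda t: 0.0, 0.0)
```

The monotonicity check in the last line is exactly the property that drives the results of the remainder of the chapter.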
8. SOLUTION OF RICCATI EQUATION IN TERMS OF MAXIMUM OPERATION
Let us now show that the Riccati equation can be solved in terms of a maximum operation. Consider the parabola y = x² and the tangent line at the point (x₁, y₁),

(8.1) y − y₁ = 2x₁(x − x₁),

or

(8.2) y = 2x₁x − x₁².

(See Fig. 1.3.)

Since the curve y = x², as a consequence of its convexity, lies above the tangent line, we have the inequality

(8.3) x² ≥ 2x₁x − x₁²

for all x₁, with equality only if x = x₁. Hence,

(8.4) x² = max_{x₁} [2x₁x − x₁²],

a result which can, of course, be established immediately in many ways. This simple geometrical idea is at the bottom of the results of (5.2) and (4.4).
Returning to the equation of (6.1),
(8.5) v' = −v² − p(t)v − q(t),
THE RICCATI EQUATION
let us replace the quantity v² by its equivalent expression,

max_u [2uv − u²],

so that the equation now has the form

(8.6) v' = −max_u [2uv − u²] − p(t)v − q(t)
        = min_u [u² − 2uv − p(t)v − q(t)].
Were it not for the minimization operation, the equation would be linear. As it is, the equation possesses certain properties ordinarily associated with linear equations. It is for these reasons that we employ the term "quasilinearization."
Consider the companion equation, the linear differential equation
(8.7) w' = u² − 2uw − p(t)w − q(t),

where u(t) is now a fixed function of t. Let v and w have the same initial values, v(0) = w(0) = c.

Let v(t), the solution of the Riccati equation, exist in the interval [0,t₀]. A proviso concerning existence is necessary, since the solution of a Riccati equation need not exist for all t. Consider, for example, the equation u' = 1 + u², u(0) = 0, with the solution u = tan t for 0 ≤ t < π/2.
To establish the inequality w ≥ v, we argue as before. Observe that (8.6) is equivalent to the inequality

(8.8) v' ≤ u² − 2uv − p(t)v − q(t)

for all u(t), or

(8.9) v' = u² − 2uv − p(t)v − q(t) − r(t),

where r(t) ≥ 0 for t ≥ 0. The function r(t) depends, of course, upon u and v, but this is of no consequence at the moment.
Hence, using the notation of Sec. 7,

(8.10) v = T(−2u − p(t), u² − q(t) − r(t))
        ≤ T(−2u − p(t), u² − q(t)) = w,

using the positivity property of the operator T(f,g).
Since this inequality holds for all u(t), with equality for u(t) = v(t), we have the desired result:

(8.11) v(t) = min_u T(−2u − p(t), u² − q(t)),

which we state as a theorem:

Theorem 1. For 0 ≤ t ≤ t₀, where t₀ depends upon the interval of existence of v(t), we have the representation

(8.12) v(t) = min_u [ c·exp(−∫₀ᵗ (2u(s) + p(s)) ds) + ∫₀ᵗ exp(−∫ₛᵗ (2u(r) + p(r)) dr) (u²(s) − q(s)) ds ].

Using quasilinearization, we have obtained an analytic expression
for the solution of the Riccati equation.
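For a concrete numerical illustration (not an example from the text), take p = q = 0 and c = 1, so that the Riccati equation becomes v' = −v² with solution v(t) = 1/(1 + t). For a constant trial function u, the bracketed expression in (8.12) reduces to c·e^(−2ut) + (u/2)(1 − e^(−2ut)), and every choice of u must give an upper bound on v(t):

```python
import math

def bound(u, c, t):
    """Bracketed expression of (8.12) for p = q = 0 and a constant trial u:
    c e^{-2ut} + (u/2)(1 - e^{-2ut})."""
    return c * math.exp(-2.0 * u * t) + (u / 2.0) * (1.0 - math.exp(-2.0 * u * t))

c, t = 1.0, 1.0
v_exact = c / (1.0 + c * t)  # solution of v' = -v^2, v(0) = c

# Every constant trial u gives an upper bound on v(t) ...
trials = [0.2 + 0.05 * i for i in range(21)]  # u from 0.2 to 1.2
bounds = [bound(u, c, t) for u in trials]
assert all(b >= v_exact - 1e-12 for b in bounds)

# ... and minimizing over this coarse family already comes close to the true value.
assert min(bounds) < v_exact + 0.02
```

The gap that remains after minimizing over constants reflects the fact that the minimum in (8.12) is attained only by the full function u(t) = v(t).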
9. UPPER AND LOWER BOUNDS
From the representation of (8.12), we can readily obtain upper bounds for v(t) by appropriate choice of u(t). From this, in turn, we can deduce upper bounds for the solution of the second-order linear differential equation
(9.1) w'' + p(t)w' + q(t)w = 0,

recalling the discussion of Sec. 7. If q(t) < 0, we can obtain lower bounds by setting v = 1/w and proceeding as above with the equation

(9.2) w' − 1 − p(t)w − q(t)w² = 0.
These techniques have been used to study the asymptotic behavior of the solution as a function of a parameter and as a function of t, notably by Calogero.
Furthermore, to obtain more precise estimates, we can use more general transformations of the form

(9.3) v = (a(t)w + b(t))/(c(t)w + d(t)),

since w will satisfy a Riccati equation whenever v does.
10. SUCCESSIVE APPROXIMATIONS VIA QUASILINEARIZATION
We are now in a position to use the formula in Theorem 1 of Sec. 8 to generate a sequence of approximations to the solution v(t). We know that the minimum is attained by the solution v(t) itself. Hence, we suspect that if v₀ is a reasonable initial approximation, then v₁, obtained as the solution of
(10.1) v₁' = v₀² − 2v₀v₁ − p(t)v₁ − q(t),
will be an even better approximation, by analogy with the procedure we employed to find the root of f(x) = 0 using the Newton-Raphson approximation. Repeating this approach, we obtain the recurrence relation

(10.2) vₙ₊₁' = vₙ² − 2vₙvₙ₊₁ − p(t)vₙ₊₁ − q(t), vₙ₊₁(0) = c.

This is precisely the recurrence relation we would obtain if we applied a Newton-Raphson-Kantorovich approximation scheme to the nonlinear differential equation
(10.3) v' = −v² − p(t)v − q(t), v(0) = c.
Generally, if the differential equation were v' = g(v,t), we would use the approximation scheme

(10.4) vₙ₊₁' = g(vₙ,t) + (vₙ₊₁ − vₙ)g_v(vₙ,t).

We wish to emphasize, however, that the two approaches are not equivalent, despite the fact that they coincide in particular cases. As we have indicated in the Introduction and shall discuss in some detail in Chapter Seven, quasilinearization owes a great deal to its inception within the theory of dynamic programming, with the concept of approximation in policy space playing an important and guiding role. In particular, from this viewpoint we are led to expect monotonicity of convergence. On the other hand, the geometric background of the Newton-Raphson approximation procedure leads us to expect quadratic convergence. As we shall demonstrate below, both properties
are valid, and the resultant combination is extremely powerful computationally and analytically.
Alternatively, we can regard quasilinearization as a systematic exploitation of duality.
11. MONOTONICITY
Let us now establish the monotonicity of approximation. Consider the relations
(11.1) v~+1 = v; - 2VnVn+1 - p(t)vn+1 - q(t), Vn+1(O) = c,
~ V~+I - 2vn+1vn+1 - p(t)Vn+1 - q(t)
(since the minimum value of the right-hand side is achieved for Vn = vn+I ). We deliberately do not simplify in order to emphasize the relation to the equation
Comparing this equation with the inequality of (11.1), and using the property of the linear operator discussed in Sec. 7, we see that
(11.3) v_{n+1}(t) ≥ v_{n+2}(t).
Hence, inductively,
(11.4) v_1 ≥ v_2 ≥ ... ≥ v_n ≥ v_{n+1} ≥ ... .
If the function v_0 is chosen arbitrarily, it may or may not be true that v_0 ≥ v_1.
To show that v_n(t) ≥ v(t) for all n ≥ 1, we write

(11.5) v' = -v^2 - p(t)v - q(t)
          = min_w [w^2 - 2wv - p(t)v - q(t)]
          ≤ v_n^2 - 2 v_n v - p(t)v - q(t).
Hence, once again using the property of the linear operator, v ≤ v_n for each n ≥ 1.
THE RICCATI EQUATION 21
Consequently, in any interval [0,t_0] within which v(t) exists, we can assert the convergence of the sequence {v_n}.
From the relation
(11.6) v_{n+1}(t) - c = ∫_0^t [v_n^2 - 2 v_n v_{n+1} - p(t_1) v_{n+1} - q(t_1)] dt_1,
we conclude, on the basis of bounded convergence, that
(11.7) v̄(t) - c = ∫_0^t [-v̄^2 - p(t_1) v̄ - q(t_1)] dt_1,

where v̄ = lim_{n→∞} v_n(t). Hence v̄(t), as an integral, is continuous. Differentiating,

(11.8) v̄' = -v̄^2 - p(t) v̄ - q(t), v̄(0) = c.

A standard uniqueness theorem shows that v̄ = v.
Finally, since the elements of the sequence {v_n} are continuous,
since the convergence is monotone, and since v(t) is continuous, we can assert, using Dini's theorem, that the convergence is actually uniform in [0,t_0].
12. DISCUSSION
In the previous sections, we have discussed two important properties of the sequence of approximations obtained by means of quasilinearization: quadratic convergence and monotonicity. Of these, the first was an immediate consequence of the generalized tangent approximation, a Newton-Raphson-type procedure. The second, however, holds only for special classes of operators or for small t-intervals. It is a consequence, in this case, of the positivity property of the operator
(12.1) L = d/dt + p(t),
or, more properly, of the operator L-1. We have repeatedly used the fact that
(12.2) Lu ≥ 0 ⟹ u ≥ 0.
Fortunately, as we shall observe in what follows, many of the most important operators of analysis and mathematical physics possess
this positivity property. In the following pages we will treat second and nth-order linear differential equations, parabolic partial differential equations, elliptic partial differential equations, and some important types of quasilinear partial differential equations.
Extensions of the Newton-Raphson technique to function space have been made by Kantorovich and others. In view of his contributions to this important area, it is fitting to call the approximation technique used above an application of a general Newton-Raphson-Kantorovich technique.
13. A FUNDAMENTAL LEMMA
The positivity property of L^{-1} can be used to establish the following fundamental result in the theory of differential equations.
Theorem 2. If u(t), v(t) ≥ 0 for t ≥ 0, c ≥ 0, then

(13.1) u(t) ≤ c + ∫_0^t u(s) v(s) ds

implies that

(13.2) u(t) ≤ c exp(∫_0^t v(s) ds).
This result, together with extensions and generalizations, is extremely useful in establishing existence and uniqueness of solutions of nonlinear differential equations and in studying the stability of solutions of ordinary and partial differential equations. It belongs to the general area of differential inequalities, a theory inaugurated by Caplygin.
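A quick numerical sanity check of Theorem 2 is easy to set up; the concrete choices u(t) = e^{0.8t}, v(t) = 1, c = 1 below are ours, picked so that both the hypothesis (13.1) and the conclusion (13.2) can be verified on a grid.

```python
import numpy as np

# Numerical check of the Gronwall-Bellman lemma with the illustrative
# choices (ours): u(t) = exp(0.8 t), v(t) = 1, c = 1 on [0, 4].
t = np.linspace(0.0, 4.0, 4001)
u = np.exp(0.8 * t)
v = np.ones_like(t)
c = 1.0

dt = t[1] - t[0]
uv = u * v
# cumulative trapezoidal rule for integral_0^t u(s) v(s) ds
integral = np.concatenate(([0.0], np.cumsum(0.5 * (uv[1:] + uv[:-1]) * dt)))

hypothesis_holds = bool(np.all(u <= c + integral + 1e-9))   # (13.1)
conclusion_holds = bool(np.all(u <= c * np.exp(t) + 1e-9))  # (13.2)
```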
Chapter One
COMMENTS AND BIBLIOGRAPHY
§2. For a detailed discussion of the historical development of the Newton-Raphson method, see
D. T. WHITESIDE, "Patterns of Mathematical Thought in the Latter Seventeenth Century," Arch. Hist. Exact Sci., Vol. 1, No. 3, 1961, pp. 179-388 (especially p. 207).
For analytic results and extensions to function space, see
L. V. KANTOROVICH, "Functional Analysis and Applied Mathematics," Uspehi Mat. Nauk, Vol. 3, 1948, pp. 89-185,

L. V. KANTOROVICH and V. I. KRYLOV, Approximate Methods of Higher Analysis, Interscience Publishers Inc., New York, 1958,

M. M. VAINBERG, Variational Methods for the Study of Nonlinear Operators, Holden-Day, Inc., San Francisco, 1964,

S. S. SAVENKO, "Iteration Method for Solving Algebraic and Transcendental Equations," Ž. Vyčisl. Mat. i Mat. Fiz., Vol. 4, No. 4, 1964, pp. 738-744,
where many further references will be found.
§3. The min-max problem posed in this section was discussed in
J. J. SYLVESTER, "Meditations on Idea of Poncelet's Theorem," and "Notes on Meditation ... ," Math. Papers, Vol. II,
and in
P. C. HAMMER, Proceedings Computation Seminar, IBM, December 1949, p. 132.
As far as we know, these are the only published works in this area, although unpublished notes by Bellman and Tukey exist.
§5. This result was given in
R. BELLMAN, "On Explicit Solutions of Some Trinomial Equations in Terms of the Maximum Operation," Math. Mag., September-October 1956, pp. 41-44.
§6. The theory of the solubility of differential equations in terms of the elementary functions and quadratures was created by Liouville and further developed by Ostrowski and Ritt. For an exposition, see
J. F. RITT, Integration in Finite Terms, Liouville's Theory of Elementary Methods, Columbia University Press, New York, 1948.
G. N. WATSON, Bessel Functions, Cambridge University Press, 1944, pp. 111-123.
See also
G. H. HARDY, The Integration of Functions of a Single Variable, Cambridge University Press, 1958.
§8. This result was given in
R. BELLMAN, "Functional Equations in the Theory of Dynamic Programming-V: Positivity and Quasilinearity," Proc. Nat. Acad. Sci. USA, Vol. 41, 1955, pp. 743-746.
It was treated in considerable detail in
R. KALABA, "On Nonlinear Differential Equations, the Maximum Operation, and Monotone Convergence," J. Math. Mech., Vol. 8, 1959, pp. 519-574.
The explicit representation of the solution of the Riccati equation has been used by Calogero to attack some important questions in scattering theory; see
F. CALOGERO, "A Novel Approach to Elementary Scattering Theory," Nuovo Cimento, Vol. 27, 1963, pp. 261-302.
---, "A Variational Principle for Scattering Phase Shifts," Nuovo Cimento, Vol. 27, 1963, pp. 947-951.
--, "A Note on the Riccati Equation," J. Math. Phys., Vol. 4, 1963, pp. 427-430.
---, "The Scattering of a Dirac Particle on a Central Scalar Potential," Nuovo Cimento, Vol. 27, 1963, pp. 1007-1016.
---, "Maximum and Minimum Principle in Potential Scattering," Nuovo Cimento, Vol. 28, 1963, pp. 320-333.
For an application to stability theory, see
R. BELLMAN, "A Note on Asymptotic Behavior of Differential Equations," Boll. d'Unione Mate., Vol. 18, 1963, pp. 16-18.
§10-§11. These results were first given in the work by R. Kalaba mentioned above. For an application to stochastic differential equations, see
R. BELLMAN, "On the Representation of the Solution of a Class of Stochastic Differential Equations," Proc. Amer. Math. Soc., Vol. 9, 1958, pp. 326-327.
For a generalization of the preceding results to the equation u' = g(u,t), see
R. BELLMAN, "On Monotone Convergence to Solutions of u' = g(u,t)," Proc. Amer. Math. Soc., Vol. 8, 1957, pp. 1007-1009.
For a very interesting new use of second-order approximation techniques, see
J. MOSER, "A New Technique for the Construction of Solutions of Nonlinear Differential Equations," Proc. Nat. Acad. Sci. USA, Vol. 47, 1961, pp. 1824-1831.
§13. This result was first used for the study of stability in
R. BELLMAN, "The Boundedness of Solutions of Linear Differential Equations," Duke Math. J., Vol. 14, 1947, pp. 83-97.
---, Stability Theory of Differential Equations, McGraw-Hill Book Company, Inc., New York, 1953.
Extensions of this lemma are due to Bihari and Langenhop; see the references to them and to Caplygin in
E. F. BECKENBACH and R. BELLMAN, Inequalities, Springer, Berlin, 1961.
For applications of this lemma to the study of existence and uniqueness theorems for ordinary differential equations, see
V. V. NEMYTSKI and V. V. STEPANOV, Qualitative Theory of Differential Equations, Moscow, 1949 (in Russian).
For general discussions of nonlinear problems, see
T. VON KARMAN, "The Engineer Grapples with Nonlinear Problems," Bull. Amer. Math. Soc., Vol. 46, 1940, pp. 615-683.
G. TEMPLE, "Linearization and Delinearization," Proc. International Congress of Math., 1958, pp. 233-247.
For some applications, the reader may consult
J. R. RADBILL, "Application of Quasilinearization to Boundary-Layer Equations," AIAA Journal, Vol. 2, No. 10, 1964, pp. 8-10.
K. S. P. KUMAR and R. SRIDHAR, "On the Identification of Control Systems by the Quasilinearization Method," IEEE Trans. on Automatic Control, Vol. AC-9, April 1964, pp. 151-154.
M. C. SMITH, C. S. WU, and H. S. SCHWIMMER, Magnetohydrodynamic-Hypersonic Viscous and Inviscid Flow Near the Stagnation Point of a Blunt Body, The RAND Corporation, RM-4376-PR, January 1965.
R. KALABA, "Dynamic Programming, Fermat's Principle, and the Eikonal Equation," J. Optical Soc. Amer., Vol. 51, October 1961, pp. 1150-1151.
Chapter Two
TWO-POINT BOUNDARY-VALUE PROBLEMS FOR SECOND-ORDER DIFFERENTIAL EQUATIONS
1. INTRODUCTION
We now turn our attention to the study of nonlinear second-order differential equations of the form
(1.1) u'' = g(u,u',t)
with the two-point boundary condition u(0) = a_1, u(b) = a_2. We possess no convenient or useful technique for representing the general solution in terms of a finite set of particular solutions as in the linear case (a point we shall discuss below). Consequently, we possess no ready means of reducing the transcendental problem involved in solving (1.1) to an algebraic problem, as is the situation in the case where g(u,u',t) is linear in u and u'. Indeed, the entire area of existence and uniqueness of solutions of nonlinear differential equations is one of great complexity and subtlety, and one which contains many uncharted regions.
To obtain an analytic foothold, and simultaneously to provide computational algorithms, we must have recourse to approximation techniques. Fixed-point methods, so valuable in establishing existence of solutions, are of no use numerically. Generally, few of the standard classical techniques of successive approximations are of much utility numerically. Nonetheless, as of now, a number of powerful computational methods exist. We shall, however, concentrate upon one particular method: that furnished by the theory of quasilinearization.
Our aim is to obtain the solution of (1.1), when it exists, as the limit of a sequence of solutions of linear differential equations. Each of these can be solved numerically in a convenient fashion using the
fundamental tool of superposition. Furthermore, as in the case of the Riccati equation discussed in the previous chapter, we want quadratic convergence, and, if possible, monotonicity. The results we obtain will show that these are not unreasonable demands.
2. TWO-POINT BOUNDARY-VALUE PROBLEMS FOR SECOND-ORDER LINEAR DIFFERENTIAL EQUATIONS
Preparatory to a study of general second-order differential equations, let us examine the linear equation
(2.1) u'' + p(t)u' + q(t)u = 0, u(0) = a_1, u(b) = a_2,
a two-point boundary-value problem. We have considered specific simple boundary conditions in place of general conditions at the end-points, since the approach is the same in all cases.
Let u_1 and u_2 be the two principal solutions of the homogeneous equation above, defined by the initial conditions
(2.2) u_1(0) = 1, u_1'(0) = 0; u_2(0) = 0, u_2'(0) = 1.
Then, by virtue of the linearity of (2.1), every solution can be represented as a linear combination
(2.3) u = c_1 u_1(t) + c_2 u_2(t),
where c_1 and c_2 are constants determined by the initial or boundary conditions. Using the conditions at t = 0 and t = b, we readily obtain the two linear algebraic equations
(2.4) c_1 = a_1, c_1 u_1(b) + c_2 u_2(b) = a_2.
Hence,
(2.5) c_2 = (a_2 - a_1 u_1(b)) / u_2(b).
TWO·POINT BOUNDARY·VALUE PROBLEMS 29
Thus, if u_2(b) ≠ 0, there is a unique solution. If u_2(b) = 0, there is a solution only if a_2 = a_1 u_1(b), and in this case there is a one-parameter family of solutions of the form
(2.6) u = a_1 u_1(t) + c_2 u_2(t),
with c_2 arbitrary. Hence, existence and uniqueness are here strongly interconnected, as is frequently the case. The condition u_2(b) ≠ 0 we recognize as a characteristic value
condition associated with the Sturm-Liouville equation
(2.7) u'' + p(t)u' + λ q(t)u = 0, u(0) = u(b) = 0.
In our case, we demand that λ = 1 not be a characteristic value. In other words, we insist that when λ = 1 the only solution of (2.7) be the trivial solution u(t) = 0.
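The superposition procedure of this section is easily carried out numerically. The sketch below is ours, not the book's: the illustrative data p = 0, q = 1, b = 1, a_1 = 1, a_2 = 0 make the principal solutions u_1 = cos t, u_2 = sin t, so the constants of (2.5) can be checked against -cot(1).

```python
import numpy as np

# Superposition for u'' + p(t)u' + q(t)u = 0, u(0) = a1, u(b) = a2.
# Illustrative data (ours): p = 0, q = 1, b = 1, a1 = 1, a2 = 0,
# so u1 = cos t, u2 = sin t and u = cos t - cot(1) sin t.
p = lambda t: 0.0
q = lambda t: 1.0
b, a1, a2 = 1.0, 1.0, 0.0

def rk4(y0, n=1000):
    """Integrate y'' + p y' + q y = 0 as a first-order system y = (u, u')."""
    h = b / n
    f = lambda t, y: np.array([y[1], -p(t) * y[1] - q(t) * y[0]])
    ys = [np.array(y0, dtype=float)]
    t = 0.0
    for _ in range(n):
        y = ys[-1]
        k1 = f(t, y); k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2); k4 = f(t + h, y + h * k3)
        ys.append(y + (h / 6) * (k1 + 2*k2 + 2*k3 + k4))
        t += h
    return np.array(ys)

U1 = rk4([1.0, 0.0])     # principal solution u1: u1(0)=1, u1'(0)=0
U2 = rk4([0.0, 1.0])     # principal solution u2: u2(0)=0, u2'(0)=1
c1 = a1                              # from (2.4)
c2 = (a2 - a1 * U1[-1, 0]) / U2[-1, 0]   # from (2.5)
u = c1 * U1[:, 0] + c2 * U2[:, 0]
```

Note that only two initial-value integrations are needed, after which the two-point problem is reduced to the linear algebra of (2.4).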
3. THE INHOMOGENEOUS EQUATION
Let us now turn to the inhomogeneous equation
(3.1) u'' + p(t)u' + q(t)u = r(t), u(0) = a_1, u(b) = a_2.
Taking advantage of the linearity, we write
(3.2) u = v + w,
where w and v are chosen, respectively, to satisfy the equations
(3.3) w'' + p(t)w' + q(t)w = r(t), w(0) = 0, w'(0) = 0,

and

(3.4) v'' + p(t)v' + q(t)v = 0, v(0) = a_1, v(b) = a_2 - w(b).
The first equation is an initial value problem which we consider below, and the solution of the second has already been discussed.
It remains to discuss the analytic solution of the inhomogeneous equation of (3.3) in terms of the solutions of the homogeneous equation of (2.1). The most elegant and efficient way of doing this is to use vector-matrix theory, a method we shall sketch in the following section.
Let us, however, to avoid detours, employ the ingenious variation-of-parameters technique of Lagrange. Let u_1 and u_2 be particular solutions of the homogeneous equation, as above, and write
(3.5) w = s_1 u_1 + s_2 u_2,
where s_1 and s_2 are functions of t to be determined at our convenience. Then

(3.6) w' = s_1 u_1' + s_2 u_2' + s_1' u_1 + s_2' u_2.
To simplify, set
(3.7) s_1' u_1 + s_2' u_2 = 0.
This is the first condition on s_1' and s_2'. Since w' = s_1 u_1' + s_2 u_2' with this constraint, we have
(3.8) w'' = s_1 u_1'' + s_2 u_2'' + s_1' u_1' + s_2' u_2'.
Thus
(3.9) w'' + p(t)w' + q(t)w = s_1(u_1'' + p(t)u_1' + q(t)u_1)
                           + s_2(u_2'' + p(t)u_2' + q(t)u_2)
                           + s_1' u_1' + s_2' u_2' = r(t).
Since u_1 and u_2 satisfy the homogeneous equation, this reduces to
(3.10) s_1' u_1' + s_2' u_2' = r(t).
Combining this with (3.7), we have two simultaneous linear algebraic equations for s_1' and s_2'. A solution exists, provided that the determinant (the Wronskian)

(3.11) W(t) = u_1 u_2' - u_1' u_2
is nonzero. Accepting this fact, which we will establish in Chapter Three, Sec. 3, we have
(3.12) s_1' = -u_2(t) r(t) / W(t), s_2' = u_1(t) r(t) / W(t).
Hence,
(3.13) s_1 = -∫_0^t u_2(t_1) r(t_1) / W(t_1) dt_1, s_2 = ∫_0^t u_1(t_1) r(t_1) / W(t_1) dt_1,

choosing s_1(0) = s_2(0) = 0, since we want to satisfy the conditions w(0) = w'(0) = 0.
Thus,
(3.14) w = ∫_0^t r(t_1) [u_1(t_1) u_2(t) - u_2(t_1) u_1(t)] / W(t_1) dt_1
         = ∫_0^t G(t,t_1) r(t_1) dt_1.
To solve the equation
(3.15) u'' + p(t)u' + q(t)u = r(t), u(0) = u(b) = 0,
we proceed as follows. The general solution of (3.15) has the form
(3.16) u(t) = ∫_0^t G(t,t_1) r(t_1) dt_1 + c_2 u_2(t),
where c_2 is a constant to be determined. Setting t = b, we see that
(3.17) c_2 = -[∫_0^b G(b,t_1) r(t_1) dt_1] / u_2(b).
Hence,

(3.18) u(t) = ∫_0^t G(t,t_1) r(t_1) dt_1 - [u_2(t)/u_2(b)] ∫_0^b G(b,t_1) r(t_1) dt_1.
A simple calculation shows that we may write
(3.19) u(t) = ∫_0^b K(t,t_1,b) r(t_1) dt_1,
where
(3.20) K(t,t_1,b) = G(t,t_1) - u_2(t) G(b,t_1)/u_2(b), t_1 ≤ t,
                  = -u_2(t) G(b,t_1)/u_2(b), t_1 ≥ t.
Conversely, differentiation shows that if u satisfies (3.19), it satisfies (3.16).
4. VECTOR-MATRIX APPROACH
Let us point out in passing that a straightforward derivation of (3.14) may be obtained using a minimum of vector-matrix theory applied to linear differential equations.
We begin with the observation that
(4.1) u" + p(t)u' + q(t)u = ret)
may be written in the form of a first-order linear system
(4.2) u_1' = u_2,
      u_2' = -p(t) u_2 - q(t) u_1 + r(t).
Hence, introducing respectively the vectors x(t) and f(t), and the matrix A(t), defined by
(4.3) x(t) = [u_1(t)],  f(t) = [  0 ],  A(t) = [   0      1   ]
             [u_2(t)]          [r(t)]          [-q(t)  -p(t) ],
we have, equivalent to (4.2), the vector-matrix system
(4.4) x' = A(t)x + f(t),
with an initial condition x(0) = 0.
There are two ways of approaching the solution of (4.4). The first uses the Lagrange variation of parameters approach in vector-matrix form. The second, which generalizes to any type of linear functional equation, uses the solution of the adjoint equation. Let us discuss the second approach.
Let Y(t) be an as yet unspecified matrix and start with the relation
(4.5) ∫_0^T Y x' dt = ∫_0^T [Y A(t) x + Y f(t)] dt.
Integrating by parts, we have
(4.6) Y(T)x(T) - Y(0)x(0) - ∫_0^T Y' x dt = ∫_0^T Y A(t) x dt + ∫_0^T Y f(t) dt.
Since x(0) = 0, this may be written
(4.7) Y(T)x(T) = ∫_0^T [Y' + Y A(t)] x dt + ∫_0^T Y f(t) dt.
Since this holds for any matrix Y, let us choose a most convenient one. We specify the matrix Y(t) = Y(t,T), in the interval [0,T], by the equation
(4.8) Y' + Y A(t) = 0, Y(T) = I.
With Y(t) determined in this fashion, we have
(4.9) x(T) = ∫_0^T Y f(t) dt.
If X(t) denotes the solution of the matrix equation
(4.10) X' = A(t)X, X(0) = I,
it is easy to see that the solution of (4.8) is given by
(4.11) Y(t) = X(T) X(t)^{-1}.
This is a consequence of the relation
(4.12) d/dt (X^{-1}) = -X^{-1} (dX/dt) X^{-1}.
Hence, (4.9) becomes
(4.13) x(T) = ∫_0^T X(T) X(t)^{-1} f(t) dt.
Carrying out the indicated matrix calculations, we obtain the explicit result of (3.14). The preceding expression, in (4.13), is preferable for analytic purposes, since the structure of the representation is quite clear.
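The representation (4.13) can be verified numerically against direct integration of (4.4). The sketch below is ours; the coefficient choices p(t) = t, q(t) = 1, r(t) = 1, T = 1 are illustrative only.

```python
import numpy as np

# Check of x(T) = integral_0^T X(T) X(t)^{-1} f(t) dt, eq. (4.13), for
# u'' + p(t)u' + q(t)u = r(t) with x(0) = 0.  Illustrative coefficients
# (ours): p(t) = t, q(t) = 1, r(t) = 1, T = 1.
A = lambda t: np.array([[0.0, 1.0], [-1.0, -t]])
f = lambda t: np.array([0.0, 1.0])
T, n = 1.0, 1000
h = T / n
ts = np.linspace(0.0, T, n + 1)

def rk4_step(F, t, y):
    k1 = F(t, y)
    k2 = F(t + h / 2, y + h / 2 * k1)
    k3 = F(t + h / 2, y + h / 2 * k2)
    k4 = F(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Fundamental matrix X' = A(t)X, X(0) = I, stored at every node.
Xs = [np.eye(2)]
for k in range(n):
    Xs.append(rk4_step(lambda t, Y: A(t) @ Y, ts[k], Xs[-1]))

# Direct integration of x' = A(t)x + f(t), x(0) = 0.
x = np.zeros(2)
for k in range(n):
    x = rk4_step(lambda t, y: A(t) @ y + f(t), ts[k], x)

# Trapezoidal quadrature of X(T) X(t)^{-1} f(t), as in (4.13).
g = np.array([Xs[-1] @ np.linalg.solve(Xs[k], f(ts[k])) for k in range(n + 1)])
x_rep = h * (g.sum(axis=0) - 0.5 * (g[0] + g[-1]))
```

The two computations of x(T) agree to quadrature accuracy, which is the point of the representation: once X(t) is known, any forcing term r(t) is handled by a single quadrature.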
5. GREEN'S FUNCTIONS
The function K(t,t_1,b) introduced in Sec. 3 is a very important function, called a Green's function. It is determined both by the form of the particular linear equation and by the boundary conditions. For example, if the equation is
(5.1) u'' = r(t), u(0) = u(b) = 0,
a simple calculation along the preceding lines shows that
(5.2) u = ∫_0^b K(t,t_1,b) r(t_1) dt_1,
where
(5.3) K(t,t_1,b) = t(t_1 - b)/b, t ≤ t_1 ≤ b,
                 = t_1(t - b)/b, 0 ≤ t_1 ≤ t.
We shall employ this result below.
There are many interesting and important properties of Green's functions which we will not be able to discuss here. They play a basic role in the advanced theory of differential equations and their associated integral equations. We shall, however, in the following chapter discuss some important questions of positivity of Green's functions.
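The Green's function (5.3) is simple enough to exercise directly. The short sketch below is ours: with the illustrative choices r(t) = 1 and b = 1, the exact solution of (5.1) is u(t) = t(t - 1)/2, against which the quadrature of (5.2) can be compared.

```python
import numpy as np

# Solve u'' = r(t), u(0) = u(b) = 0 through the Green's function (5.3).
# Illustrative data (ours): r(t) = 1, b = 1, exact u(t) = t(t - 1)/2.
b = 1.0
n = 200
t = np.linspace(0.0, b, n + 1)
dt = t[1] - t[0]
r = np.ones_like(t)

def K(ti, t1):
    """Green's function of (5.3)."""
    return np.where(t1 >= ti, ti * (t1 - b) / b, t1 * (ti - b) / b)

def trapezoid(y):
    return dt * (y.sum() - 0.5 * (y[0] + y[-1]))

u = np.array([trapezoid(K(ti, t) * r) for ti in t])
exact = t * (t - 1.0) / 2.0
```

Because the integrand is piecewise linear with its kink at a grid node, the trapezoidal rule reproduces the exact solution to machine accuracy here.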
6. CONVEXITY
Let us now generalize the representation
(6.1) u^2 = max_v (2uv - v^2),
Fig. 2.1. The graph of a strictly convex function y = f(x).
used in Chapter One to study the Riccati equation, by introducing the concept of convexity. We say that a function f(x) is convex for a ≤ x ≤ b if f''(x) ≥ 0 in this interval. It is strictly convex if f''(x) > 0.
Consider the graph of a strictly convex function (see Fig. 2.1), and consider the tangent at the point P(u, f(u)),
(6.2) y - f(u) = f'(u)(x - u).
By virtue of strict convexity, we see that the curve is always above the tangent for all values of x except x = u. Hence,
(6.3) f(x) ≥ f(u) + f'(u)(x - u),
for all u and x with equality at x = u. Therefore, we may write
(6.4) f(x) = max_u [f(u) + (x - u) f'(u)].
The result can, of course, be obtained purely analytically, by the use of calculus.
Similarly, for a function of two variables, we have
(6.5) f(x,y) = max_{u,v} [f(u,v) + (x - u) f_u + (y - v) f_v],
provided that f(x,y) is strictly convex in the domain of interest, which is to say, provided that
(6.6) f_xx > 0, f_xx f_yy - (f_xy)^2 > 0,
or equivalently, that the quadratic form
(6.7) f_xx λ^2 + 2 f_xy λμ + f_yy μ^2
is positive definite.
Analogous results hold for functions of any finite number of variables and for functionals. The general discussion of convexity requires some subtlety but merits it, since applications of this simple, yet fundamental, concept occur throughout all of analysis and geometry.
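The tangent-line representation (6.4) can be checked in a few lines; the strictly convex choice f(x) = e^x and the evaluation point x = 0.5 below are ours.

```python
import numpy as np

# Check of (6.4) for the strictly convex choice f(x) = e^x (ours):
# the maximum over u of the tangent value f(u) + (x - u) f'(u)
# recovers f(x), and is attained at u = x.
x0 = 0.5
u = np.linspace(-1.0, 2.0, 3001)
tangents = np.exp(u) + (x0 - u) * np.exp(u)   # f(u) + (x0 - u) f'(u)
best = float(tangents.max())
```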
7. QUASILINEARIZATION
We are now ready to employ quasilinearization for analytic and computational purposes. We shall first use this technique to obtain quadratic convergence, and illustrate this by means of some examples.
Let us begin, for the sake of illustration, with the equation
(7.1) u'' = f(u), u(0) = u(b) = 0.
Let u_0(x) be some initial approximation and consider the sequence {u_n} determined by the recurrence relation

(7.2) u_{n+1}'' = f(u_n) + (u_{n+1} - u_n) f'(u_n), u_{n+1}(0) = u_{n+1}(b) = 0.
Each function u_n(x) is a solution of a linear equation, a very important characteristic of this algorithm.
This is an application of the Newton-Raphson-Kantorovich approximation method in function space. On the basis of what has gone before, we can expect quadratic convergence, where convergence occurs.
8. DISCUSSION
Since each solution is determined by a two-point boundary condition, it is not evident a priori that the sequence {Un} as defined by (7.2) actually exists.
We shall first demonstrate that the sequence is well defined, provided that we restrict x to a sufficiently small interval [O,b], and then that the sequence converges quadratically. In the following chapter, we shall consider the proof of monotone convergence, which requires some more sophisticated ideas, and use this result to enlarge the interval of convergence of the sequence of approximations. We shall present a variety of approaches to this very important and interesting problem area of monotonicity and positivity.
9. EXISTENCE AND BOUNDEDNESS
To begin with, we shall establish the existence and uniform boundedness of the sequence {un(x)} for b sufficiently small. Starting with the equation of (7.1), namely
(9.1) u_{n+1}'' = f(u_n) + (u_{n+1} - u_n) f'(u_n), u_{n+1}(0) = u_{n+1}(b) = 0,
we obtain, as illustrated in Secs. 4 and 5, the linear integral equation
(9.2) u_{n+1} = ∫_0^b K(x,y) [f(u_n) + (u_{n+1} - u_n) f'(u_n)] dy,
where K(x,y) is the Green's function
(9.3) K(x,y) = x(y - b)/b, 0 ≤ x ≤ y ≤ b,
             = (x - b)y/b, 0 ≤ y ≤ x ≤ b.

To obtain (9.2), we regard (9.1) as an equation of the form u'' = r(t), u(0) = u(b) = 0, and apply the result stated in Sec. 5. Observe that
(9.4) max_{x,y} |K(x,y)| = b/4,
where the maximization is over the region 0 ≤ x, y ≤ b. Let

(9.5) m = max_{|u| ≤ 1} max(|f(u)|, |f'(u)|),

assuming that m < ∞, and choose u_0(x) so that |u_0(x)| ≤ 1 for 0 ≤ x ≤ b. Turning to (9.2), we have
(9.6) |u_{n+1}| ≤ ∫_0^b |K(x,y)| [|f(u_n)| + |u_n||f'(u_n)| + |f'(u_n)||u_{n+1}|] dy.

Hence, writing m_1 = max_{0 ≤ x ≤ b} |u_1(x)|, we have, for n = 0,

(9.7) m_1 ≤ b^2 m/2 + (b^2 m/4) m_1.
Provided, therefore, that b^2 m/4 < 1, we obtain the bound

(9.8) m_1 ≤ (b^2 m/2) / (1 - b^2 m/4).
This upper bound is itself at most 1 if b^2 ≤ 4/(3m), a constraint that can be met by taking the interval [0,b] to be sufficiently small. Thus, under these conditions, m_1 ≤ 1.
It is clear that this procedure can be continued inductively, with the result that we can assert that |u_n(x)| ≤ 1 for 0 ≤ x ≤ b, provided that b^2 ≤ 4/(3m). We have thus demonstrated that the inductive definition of the sequence {u_n(x)} is meaningful.
10. CONVERGENCE
The convergence proof follows standard lines, with the difference that the convergence will turn out to be quadratic.
Returning to the recurrence relation of (9.1), let us subtract the nth equation from the (n + 1)st,

(10.1) (u_{n+1} - u_n)'' = f(u_n) - f(u_{n-1}) - (u_n - u_{n-1}) f'(u_{n-1}) + f'(u_n)(u_{n+1} - u_n).
Regarding this as a differential equation for u_{n+1} - u_n, and converting into an integral equation as before, we have

(10.2) u_{n+1} - u_n = ∫_0^b K(x,y) {f(u_n) - f(u_{n-1}) - (u_n - u_{n-1}) f'(u_{n-1}) + f'(u_n)(u_{n+1} - u_n)} dy.
The mean-value theorem tells us that

(10.3) f(u_n) - f(u_{n-1}) - (u_n - u_{n-1}) f'(u_{n-1}) = (u_n - u_{n-1})^2 f''(θ)/2,

where θ lies between u_{n-1} and u_n. Hence, letting
(10.4) k = max_{|u| ≤ 1} |f''(u)|,
we have, very much as before,

(10.5) |u_{n+1} - u_n| ≤ (b/4) ∫_0^b [(k/2)(u_n - u_{n-1})^2 + m |u_{n+1} - u_n|] dy.
Hence, using the previous argument (9.7) and (9.8),

(10.6) max_x |u_{n+1} - u_n| ≤ [(b^2 k/8) / (1 - b^2 m/4)] (max_x |u_n - u_{n-1}|)^2.
This shows that there is quadratic convergence if there is convergence at all.
If the elements of a sequence {u_n}, n = 0, 1, 2, ..., satisfy the relation

(10.7) max_x |u_{n+1} - u_n| ≤ k_1 (max_x |u_n - u_{n-1}|)^2, n = 1, 2, ...,

a simple induction shows that

(10.8) max_x |u_{n+1} - u_n| ≤ (1/k_1)(k_1 max_x |u_1 - u_0|)^{2^n}.
The convergence will depend therefore upon the quantity

(10.9) k_1 max_x |u_1 - u_0| = [(b^2 k/8) / (1 - b^2 m/4)] max_x |u_1 - u_0|,
which by the standard procedures used above can be shown to be less than one for b sufficiently small.
It is interesting to note that the proof shows that it is sufficient for convergence that max_x |u_{n+1} - u_n| be small enough for some n. Consequently, even if the interval [0,b] appears to be too large initially, we can always hope that a judicious initial approximation u_0(x) will keep |u_1(x) - u_0(x)| sufficiently small. Here, physical and engineering experience and intuition can be of great service.
If, then, the quantity in (10.9) is less than one, we have uniform convergence of the sequence {u_n(x)} to a function u(x) satisfying the equation
(10.10) u = ∫_0^b K(x,y) f(u) dy.
From this we deduce that u satisfies the original differential equation. Furthermore, the foregoing techniques show that
(10.11) max_x |u - u_n| ≤ k_2 (max_x |u - u_{n-1}|)^2,

from which uniqueness follows in the usual fashion.
We have gone into careful detail as to the existence and convergence of the sequence of approximations in order to indicate what the necessary steps are. Henceforth, in treating the more complex functional equations of the later chapters, we shall omit the proofs of existence and convergence. These require no additional concepts or techniques, apart from some small knowledge of vector-matrix or function space notation. Notice that the convexity assumption, of importance in the representation theorem, plays no role in the quadratic convergence.
11. CONVERGENCE OF THE PICARD ALGORITHM
The usual approach based upon the recurrence relation
(11.1) v_{n+1}'' = f(v_n), v_{n+1}(0) = v_{n+1}(b) = 0,
also yields a convergent sequence with very much the same restrictions on f and b. The convergence, however, will in general only be geometric, which is to say

(11.2) max_x |v_{n+1} - v_n| ≤ k max_x |v_n - v_{n-1}|,

where k < 1, or |v_n - v_{n-1}| ≤ k^n c_0.
The advantage of quadratic convergence, of course, lies in rapidity
of convergence. We shall, moreover, show subsequently, in Chapter Three, that quasilinearization also provides the maximum interval of convergence of successive approximations in a number of important cases.
12. A NUMERICAL EXAMPLE
The equation
(12.1) u_xx + u_yy = e^u
is of current interest in connection with magnetohydrodynamics, and its numerical solution will be considered in later pages using quasilinearization. Let us here discuss the one-dimensional version
(12.2) u'' = e^u, u(0) = u(b) = 0,
which possesses the considerable merit of having a unique and explicit analytic solution. For b = 1, this has the form
(12.3) u(x) = -ln 2 + 2 ln [c sec{c(x - 1/2)/2}],
where c is the root of
(12.4) √2 = c sec(c/4)
which lies between 0 and π/2, namely c = 1.3360557, to eight figures.
Let us recall that we solve (12.2) by means of multiplication by u' and integration of both sides. This procedure yields u' in terms of u, and a further integration achieves the desired explicit solution of (12.3).
Applying the techniques presented above, we consider the sequence of approximations defined by

(12.5) u_{n+1}'' = e^{u_n} + (u_{n+1} - u_n) e^{u_n}, u_{n+1}(0) = u_{n+1}(1) = 0.
Taking a most obvious initial approximation u_0(x) = 0, we compute the functions u_1(x) and u_2(x) as described above, using a Runge-Kutta integration procedure.

Table 2.1
THE APPROACH OF u_n(x) TO u(x)

  x     u_0(x)     u_1(x)      u_2(x)       u(x)
 0.0   0.000000   0.000000    0.000000     0.000000
 0.1   0.000000  -0.041285   -0.041436    -0.041436
 0.2   0.000000  -0.072974   -0.0732683   -0.0732686
 0.3   0.000000  -0.095386   -0.095800    -0.095800
 0.4   0.000000  -0.108743   -0.109238    -0.109238
 0.5   0.000000  -0.113181   -0.113704    -0.113704
 0.6   0.000000  -0.108743   -0.109238    -0.109238
 0.7   0.000000  -0.095386   -0.095800    -0.095800
 0.8   0.000000  -0.072974   -0.0732683   -0.0732686
 0.9   0.000000  -0.041285   -0.041436    -0.041436
 1.0   0.000000   0.000000    0.000000     0.000000

Table 2.1 indicates the rapidity of convergence. The digits of u_n(x) in agreement with those of u(x) are italicized.
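The values of Table 2.1 are easy to reproduce. The sketch below is a finite-difference version of the linearized recurrence, with a uniform grid and a dense linear solve as our simplifications (the book integrates the linear equations by Runge-Kutta); the first iterate at x = 1/2 recovers -0.113181 and the converged value recovers -0.113704.

```python
import numpy as np

# Finite-difference quasilinearization for u'' = e^u, u(0) = u(1) = 0,
# following Sec. 12.  Grid and dense solve are our simplifications.
N = 100
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

u = np.zeros(N + 1)          # u_0(x) = 0, the initial approximation
mids = []                    # value of each iterate at x = 1/2
for _ in range(5):
    w = u.copy()
    ew = np.exp(w[1:-1])
    # linearized equation: u'' - e^w u = e^w (1 - w)
    m = N - 1
    A = (np.diag(-2.0 / h**2 - ew)
         + np.diag(np.full(m - 1, 1.0 / h**2), 1)
         + np.diag(np.full(m - 1, 1.0 / h**2), -1))
    rhs = ew * (1.0 - w[1:-1])
    u = np.zeros(N + 1)
    u[1:-1] = np.linalg.solve(A, rhs)
    mids.append(float(u[N // 2]))
```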
13. GENERAL SECOND-ORDER NONLINEAR DIFFERENTIAL EQUATIONS
There is no difficulty in applying the same technique to the study of the equation
(13.1) u'' = f(u',u,x),
with nonlinear boundary conditions of the form
(13.2) g_1(u(0),u'(0)) = 0, g_2(u(b),u'(b)) = 0,
or, even more generally,

(13.3) g_1(u(0),u'(0),u(b),u'(b)) = 0,
       g_2(u(0),u'(0),u(b),u'(b)) = 0.
We now apply quasilinearization to both the equation and the boundary conditions. Thus, in the case of the equation of (13.1) subject to the conditions (13.2), we generate the sequence {u_n(x)} by means of the equation

(13.4) u_{n+1}'' = f(u_n',u_n,x) + (u_{n+1} - u_n) f_u(u_n',u_n,x) + (u_{n+1}' - u_n') f_{u'}(u_n',u_n,x),
with the linearized boundary conditions
(13.5) g_1(u_n(0),u_n'(0)) + g_{1u}(u_n(0),u_n'(0))(u_{n+1}(0) - u_n(0))
       + g_{1u'}(u_n(0),u_n'(0))(u_{n+1}'(0) - u_n'(0)) = 0,
and a similar equation derived from g_2 for the point x = b.
There is no difficulty in establishing existence and uniqueness and quadratic convergence for sufficiently small b, under obvious conditions on the quantities f_u, f_{u'}, g_{iu}, g_{iu'}, i = 1, 2.
14. A NUMERICAL EXAMPLE
As a second example, consider the equation
(14.1) -u'' = 1 + a^2 (u')^2, u(0) = u(b) = 0,
which arises in the study of the finite deflections of an elastic string under a transverse load.* The explicit analytic solution for a^2 = 0.49 and b = 1 is
(14.2) u(x) = (1/a^2) ln [cos a(x - 1/2) / cos(a/2)].
* This equation was suggested to us by W. Prager.
Using the recurrence relation
(14.3) -u_{n+1}'' = 1 - 0.49 (u_n')^2 + 2(0.49) u_n' u_{n+1}', u_{n+1}(0) = u_{n+1}(b) = 0,
with the initial approximation uo(x) = 0, we obtain the values shown in Table 2.2.
Table 2.2
THE CONVERGENCE OF u_n(x) TO u(x)

  x     u_0(x)    u_1(x)    u_2(x)     u(x)
 0.0   0.000000  0.000000  0.000000   0.000000
 0.1   0.000000  0.045000  0.046570   0.046571
 0.2   0.000000  0.080000  0.082302   0.082304
 0.3   0.000000  0.105000  0.107571   0.107573
 0.4   0.000000  0.120000  0.122632   0.122635
 0.5   0.000000  0.125000  0.127636   0.127639
 0.6   0.000000  0.120000  0.122632   0.122635
 0.7   0.000000  0.105000  0.107571   0.107573
 0.8   0.000000  0.080000  0.082302   0.082304
 0.9   0.000000  0.045000  0.046570   0.046571
 1.0   0.000000  0.000000  0.000000   0.000000
15. CALCULUS OF VARIATIONS
A natural and most prolific source of nonlinear differential equations with two-point boundary-value conditions is the calculus of variations. Consider the problem of maximizing the functional
(15.1) J(u) = ∫_0^b g(u,u') dt,
subject to the initial condition u(O) = c. The vanishing of the first variation produces the Euler equation
(15.2) ∂g/∂u - d/dt(∂g/∂u') = 0,
with the terminal condition
(15.3) ∂g/∂u' |_{t=b} = 0.
Fig. 2.2. The path of a light ray through an inhomogeneous medium, from P_0 = (x_0, y_0) to P_1 = (x_1, y_1).
If g is a quadratic form in u and u' plus a linear term in u, the Euler equation is linear, and the two-point boundary condition can be resolved in a straightforward fashion. In general, we face the usual difficulties.
To illustrate the procedure of quasilinearization, consider the following problem arising from an application of Fermat's principle. Let the (x,y)-plane represent an optically inhomogeneous medium, with the velocity of light at (x,y) denoted by v(x,y). A light particle, initially at the point P_0, passes through the point P_1. What path does it take?
Pursuant to Fermat's dictum of minimum time, our task is to minimize the integral
(15.4) J(y) = ∫_{x_0}^{x_1} ds / v(x,y) = ∫_{x_0}^{x_1} √(1 + y'^2) / v(x,y) dx.
(See Fig. 2.2.) The Euler equation for this problem is
(15.5) ∂/∂y [√(1 + y'^2)/v] - d/dx {∂/∂y' [√(1 + y'^2)/v]} = 0.
Let us take v(x,y) = y. This reduces the equation to
(15.6) 1 + y'^2 + y y'' = 0,
subject to the two-point boundary condition
(15.7) y(x_0) = y_0, y(x_1) = y_1.
It is known that the optimizing curves are arcs of circles with their centers on the axis. We can use this to check the accuracy of our numerical procedures.
Table 2.3
THE NUMERICAL RESULTS

   x     z_0(x)   z_1(x)     z_2(x)     z_3(x)
 1.00    1.00    1.000000   1.000000   1.000000
 1.25    1.25    1.386195   1.391934   1.391941
 1.50    1.50    1.651987   1.658305   1.658312
 1.75    1.75    1.848866   1.854642   1.854050
 2.00    2.00    2.000000   2.000000   2.000000
Write the equation in the form y'' = -(1 + y'^2)/y, and take as an initial approximation the function z_0(x) obtained by drawing a straight line joining P_0 and P_1. The general recurrence relation is then
(15.8) z_{n+1}'' = -(1 + z_n'^2)/z_n + [(1 + z_n'^2)/z_n^2](z_{n+1} - z_n) - (2 z_n'/z_n)(z_{n+1}' - z_n'),
with z_{n+1}(x_0) = y_0, z_{n+1}(x_1) = y_1.

Consider the particular problem where y(1) = 1, y(2) = 2. The initial approximation is z_0(x) = x. The numerical results obtained in this way are shown in Table 2.3. (The program is given in Appendix 1.)
The results for z_3(x) are in agreement with the exact solution to six figures.
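The computation above can be reproduced with a short finite-difference sketch (an illustrative reconstruction, not the Appendix 1 program; the grid size and iteration count are arbitrary choices). Each sweep solves the linear two-point problem (15.8) by tridiagonal elimination:

```python
import math

def solve_tridiag(a, b, c, d):
    """Thomas algorithm: a sub-, b main, c super-diagonal, d right side."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    z = [0.0] * n
    z[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        z[i] = dp[i] - cp[i] * z[i + 1]
    return z

def quasilinearize_ray(x0=1.0, x1=2.0, y0=1.0, y1=2.0, n=201, iters=6):
    """Quasilinearization for y'' = -(1 + y'^2)/y, y(x0)=y0, y(x1)=y1."""
    h = (x1 - x0) / (n - 1)
    xs = [x0 + i * h for i in range(n)]
    # z0: the straight line joining P0 and P1
    z = [y0 + (y1 - y0) * (xi - x0) / (x1 - x0) for xi in xs]
    for _ in range(iters):
        # centered first derivative of the current approximation
        zp = [(z[min(i + 1, n - 1)] - z[max(i - 1, 0)])
              / ((min(i + 1, n - 1) - max(i - 1, 0)) * h) for i in range(n)]
        g  = [-(1 + zp[i] ** 2) / z[i] for i in range(n)]       # g(z, z')
        gz = [ (1 + zp[i] ** 2) / z[i] ** 2 for i in range(n)]  # dg/dz
        gp = [-2 * zp[i] / z[i] for i in range(n)]              # dg/dz'
        # linear BVP (15.8): z'' - gp*z' - gz*z = g - gz*z_n - gp*z_n'
        a, b, c, d = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
        b[0], d[0], b[-1], d[-1] = 1.0, y0, 1.0, y1
        for i in range(1, n - 1):
            a[i] = 1 / h ** 2 + gp[i] / (2 * h)
            b[i] = -2 / h ** 2 - gz[i]
            c[i] = 1 / h ** 2 - gp[i] / (2 * h)
            d[i] = g[i] - gz[i] * z[i] - gp[i] * zp[i]
        z = solve_tridiag(a, b, c, d)
    return xs, z
```

Against the exact circular arc y = \sqrt{5 - (x - 3)^2}, the iterates settle to finite-difference accuracy within a few sweeps, in line with Table 2.3.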
16. QUASILINEARIZATION
We can apply the quasilinearization technique in two ways. The Euler equation in (15.2) is a nonlinear differential equation of a type
we have already encountered. Applying the customary linearization, we obtain the linear equations
(16.1) g_u(u_n,u_n',t) + g_{uu}(u_n,u_n',t)(u_{n+1} - u_n) + g_{uu'}(u_n,u_n',t)(u_{n+1}' - u_n')
\qquad - \frac{d}{dt}\left[g_{u'}(u_n,u_n',t) + g_{u'u}(u_n,u_n',t)(u_{n+1} - u_n) + g_{u'u'}(u_n,u_n',t)(u_{n+1}' - u_n')\right] = 0,
with the initial condition u_{n+1}(0) = c and the linearized terminal condition
(16.2) g_{u'}(u_n,u_n',b) + (u_{n+1} - u_n)g_{u'u}(u_n,u_n',b) + (u_{n+1}' - u_n')g_{u'u'}(u_n,u_n',b) = 0.

Alternatively, we can proceed directly in the following manner.
Let u_n be an approximation to the solution of the optimization problem and consider the problem of minimizing the functional

(16.3) J_n = \int_0^b h_2(u,u_n)\,dt,
where h_2(u,u_n) is the expansion of g(u,u',t) around the point (u_n, u_n'),
up to and including second-order terms. Thus,
(16.4) h_2(u,u_n) = g(u_n,u_n',t) + g_u(u_n,u_n',t)(u - u_n) + g_{u'}(u_n,u_n',t)(u' - u_n')
\qquad + \tfrac{1}{2}\left[g_{uu}(u_n,u_n',t)(u - u_n)^2 + 2g_{uu'}(u_n,u_n',t)(u - u_n)(u' - u_n') + g_{u'u'}(u_n,u_n',t)(u' - u_n')^2\right].
A straightforward calculation shows that the Euler equation associated with J_n is precisely that appearing in (16.1). We see then that an approximate treatment of the exact variational equation is equivalent, in this case, to an exact treatment of the approximate variational problem. One advantage of the second approach lies in the fact that dynamic programming and invariant imbedding techniques can be applied to avoid two-point boundary-value problems
and the consequent necessity for solving linear systems of algebraic equations. We shall discuss this point in detail below in Sec. 20 and again in Chapter Seven.
17. UPPER AND LOWER BOUNDS VIA QUASILINEARIZATION
Let us now briefly indicate how to apply quasilinearization to obtain upper and lower bounds for certain classes of variational problems. Consider, for illustrative purposes, the problem of minimizing the functional
(17.1) J(u) = \int_0^1 \left(u'^2 + \varphi(u)\right) dt,
where u is subject to a condition such as u(0) = c. To simplify, let us here take

(17.2) u(0) = c, \quad u'(1) = 0,

where the second condition is precisely the free boundary condition. Assume that \varphi(u) is convex so that we can write
(17.3) \varphi(u) = \max_v\,\left[\varphi(v) + (u - v)\varphi'(v)\right].

Then
(17.4) \min_u J(u) = \min_u \int_0^1 \left(u'^2 + \varphi(u)\right) dt = \min_u \max_v \int_0^1 \left[u'^2 + \varphi(v) + (u - v)\varphi'(v)\right] dt.
It can be shown on the basis of quite general results in the theory of games that the minimization and maximization operations can be interchanged. Thus,
(17.5) \min_u J(u) = \max_v \min_u \int_0^1 \left(u'^2 + \varphi(v) + (u - v)\varphi'(v)\right) dt.
The simplest proof of the validity of the interchange is a direct one, based upon a direct calculation.
Fig. 2.3
The whole point of the interchange is to obtain a lower bound for J(u), namely
(17.6) \min_u J(u) \ge \min_u \int_0^1 \left[u'^2 + \varphi(v) + (u - v)\varphi'(v)\right] dt

for all functions v(t). This minimum can now be determined explicitly, as we proceed to do below.
Before carrying out the calculation, let us indicate briefly the role of duality, which we have been tacitly employing, in converting minimization problems into maximization problems. Consider the general problem of finding the minimum distance from a point P_0 to a convex set S (see Fig. 2.3). Consider the tangents to the boundary curve B and the distance d from P_0 to a typical tangent. Then it is clear geometrically that
(17.7) P_0P_1 = minimum distance from P_0 to S
            = maximum distance from P_0 to a tangent
            = max d.
Furthermore, since d is itself a minimum, namely the minimum length of a line joining P_0 to a point on the tangent, we see how a max-min enters.
18. SOLUTION OF INNER PROBLEM
The Euler equation of the minimization problem on the right-hand side of (17.6) is
(18.1) u'' - \varphi'(v)/2 = 0, \quad u(0) = c, \quad u'(1) = 0.
Writing u = c + w, we have
(18.2) u = c + \frac{1}{2}\int_0^1 k(s,t)\,\varphi'(v(s))\,ds,
where k(s,t) is the Green's function of
(18.3) u'' = r, \quad u(0) = u'(1) = 0.
A simple calculation shows that the minimum value is
(18.4) K(v) = \frac{1}{4}\int_0^1\!\!\int_0^1 k(s,t)\,\varphi'(v(s))\,\varphi'(v(t))\,ds\,dt + \int_0^1 \left[(c - v)\varphi'(v) + \varphi(v)\right] dt.
We can thus write
(18.5) \min_u J(u) = \max_v K(v).
The first part yields immediate upper bounds; the second part yields lower bounds. Judicious choices of trial functions u and v will often yield quite narrow bounds.
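For a concrete, hypothetical instance, take \varphi(u) = e^u and c = 0 on 0 \le t \le 1. For the boundary conditions u(0) = 0, u'(1) = 0 the Green's function of (18.3) is k(s,t) = -\min(s,t), so \int_0^1\!\int_0^1 \min(s,t)\,ds\,dt = 1/3. The sketch below scans simple one-parameter trial families for u and v (the trial shapes are arbitrary choices):

```python
import math

def trapz(f, a, b, n=400):
    """Composite trapezoidal rule for f on [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

phi, dphi, c = math.exp, math.exp, 0.0   # phi(u) = e^u is convex; phi' = e^u

def upper_bound(alpha):
    """J at the trial u(t) = alpha*t*(2 - t), which has u(0) = 0, u'(1) = 0."""
    return trapz(lambda t: (alpha * (2 - 2 * t)) ** 2
                 + phi(alpha * t * (2 - t)), 0.0, 1.0)

def lower_bound(v0):
    """K(v) of (18.4) at the constant trial v = v0; uses ∫∫min = 1/3."""
    quad = 0.25 * (-1.0 / 3.0) * dphi(v0) ** 2   # k(s,t) = -min(s,t)
    lin = (c - v0) * dphi(v0) + phi(v0)
    return quad + lin

ub = min(upper_bound(a / 100.0) for a in range(-60, 1))
lb = max(lower_bound(v / 100.0) for v in range(-100, 50))
```

Even with these crude one-parameter families the two values bracket the minimum tightly, illustrating how judicious trial functions yield narrow bounds.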
19. FURTHER USES OF QUASILINEARIZATION
The same idea can be used to treat unconventional problems in control theory and in variational theory. Consider, for example, the problem of minimizing the nonanalytic functional
(19.1) J(f) = \int_0^1 |1 - u(t)|\,dt,
over all f subject to
(19.2) (a) 0 \le f(t) \le M, \quad 0 \le t \le 1,
       (b) \int_0^1 f\,dt \le a,
where u and f are connected by the relation
(19.3) u' = -u + f, \quad u(0) = c.
Writing

(19.4) |1 - u| = \max_{-1 \le \varphi \le 1}\,(1 - u)\varphi,
and using a min-max theorem, we can interchange min and max and obtain an explicit analytic solution of the problem. The details will be found in the monograph cited in the bibliography at the end of this chapter.
20. DYNAMIC PROGRAMMING
Let us briefly indicate the use of dynamic programming and, more generally, invariant imbedding to avoid two-point boundary-value problems. This is an important consideration since a major source of error in the application of quasilinearization arises from the solution of the linear set of algebraic equations associated with the boundary-value problem. This is no particular problem here, but it can be significant in higher-dimensional problems.
To illustrate how dynamic programming circumvents two-point boundary-value problems, consider the problem of minimizing the quadratic functional
(20.1) J(u) = \int_a^T \left[u'^2 + b(t_1)u^2\right] dt_1,

over all u for which the integral exists, satisfying the initial condition u(a) = c.
The Euler equation is
(20.2) u'' - b(t)u = 0, \quad u(a) = c, \quad u'(T) = 0.
Let us, however, approach the minimization in a different fashion. Write
(20.3) \min_u J(u) = f(a,c).
Then the principle of optimality yields the nonlinear partial differential equation
(20.4) -\frac{\partial f}{\partial a} = \min_v \left[v^2 + b(a)c^2 + v\,\frac{\partial f}{\partial c}\right],
with the initial condition f(T,c) = 0. Carrying out the minimization, we have
(20.5) -\frac{\partial f}{\partial a} = b(a)c^2 - \frac{1}{4}\left(\frac{\partial f}{\partial c}\right)^2.

To solve this, we observe that u, by virtue of (20.2), depends linearly on c, and hence f(a,c) is a quadratic function of c,
(20.6) f(a,c) = r(a)c^2.
Substituting in (20.5), we see that r(a) satisfies the Riccati equation
(20.7) -r'(a) = b(a) - r(a)^2, \quad r(T) = 0.

This is an initial-value problem.
In this case, there is no difficulty using either method. In general, in multidimensional variational processes, the important point is that the dynamic programming approach leads to an ordinary system of differential equations with initial conditions which can easily be resolved computationally.
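The initial-value character of (20.7) makes the numerical treatment trivial: integrate r backwards from r(T) = 0 with any marching scheme. A sketch follows; the choice b(t) ≡ 1 is only for checking, since then r(a) = \tanh(T - a) in closed form:

```python
import math

def riccati_r0(b, T=1.0, n=1000):
    """Integrate -r'(a) = b(a) - r(a)^2, r(T) = 0, backwards to a = 0
    via classical Runge-Kutta in the variable s = T - a."""
    h = T / n
    f = lambda s, r: b(T - s) - r * r    # dr/ds = b(T - s) - r^2
    s, r = 0.0, 0.0
    for _ in range(n):
        k1 = f(s, r)
        k2 = f(s + h / 2, r + h * k1 / 2)
        k3 = f(s + h / 2, r + h * k2 / 2)
        k4 = f(s + h, r + h * k3)
        r += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        s += h
    return r    # r(a = 0); then min J = f(0, c) = r(0) c^2
```

For b ≡ 1 the computed r(0) agrees with tanh(1) to many figures; no two-point problem and no linear algebraic system ever arises.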
21. INVARIANT IMBEDDING
Alternatively, we can regard the missing initial condition in (20.2), u'(a), as a function of the parameter a. It turns out that this function also satisfies an ordinary differential equation of Riccati type. This approach is useful in the more general case when the linear differential equation under consideration is not the Euler equation of a variational problem.
22. COMBINATION OF DYNAMIC PROGRAMMING AND QUASILINEARIZATION
The general problem of determining the minimum of the functional
(22.1) J(u) = \int_a^T g(u',u,t)\,dt,
over all functions subject to u(a) = c, may also be approached by means of dynamic programming. Setting
(22.2) \min_u J(u) = f(a,c),
we obtain the nonlinear partial differential equation
(22.3) -\frac{\partial f}{\partial a} = \min_v \left[g(v,c,a) + v\,\frac{\partial f}{\partial c}\right],

with the initial condition f(T,c) = 0. This may be solved computationally with no difficulty. This approach possesses certain advantages over quasilinearization, and many other techniques, in that the length of the interval [a,T] is of slight importance and the presence of constraints actually simplifies the computational solution.
In higher dimensions, however, the computational solution of (22.3) is obstructed by rapid-access storage requirements. We can, to overcome this obstacle, use a combination of local quasilinearization plus global dynamic programming. We shall discuss this in Chapter Seven.
Chapter Two
COMMENTS AND BIBLIOGRAPHY
§1. We have resolutely ignored the very difficult problem area of the existence and uniqueness of solutions of two-point boundary-value problems. For this, the reader is referred to
L. COLLATZ, The Numerical Treatment of Differential Equations, Springer, Berlin, 1960.
§4. For an introduction to the application of vector-matrix techniques to linear systems of differential equations, see
R. BELLMAN, Introduction to Matrix Analysis, McGraw-Hill Book Company, Inc., New York, 1960.
§5. For a discussion of Green's functions, see
E. CODDINGTON and N. LEVINSON, Theory of Ordinary Differential Equations, McGraw-Hill Book Company, Inc., New York, 1955.
§6. A discussion of some of the roles of convexity in analysis is given in
E. F. BECKENBACH and R. BELLMAN, Inequalities, Springer, Berlin, 1961,
together with many additional references.
§7-§10. We follow the presentation of
R. KALABA, "On Nonlinear Differential Equations, the Maximum Operation, and Monotone Convergence," J. Math. Mech., Vol. 8, 1959, pp. 519-574.
§16. See
R. BELLMAN and R. KALABA, "Dynamic Programming, Invariant Imbedding and Quasilinearization: Comparisons and Interconnections," in Computing Methods in Optimization Problems, A. Balakrishnan and L. Neustadt (eds.), Academic Press Inc., New York, 1964, pp. 135-145.
§17. Here we follow
R. BELLMAN, "Quasi-linearization and Upper and Lower Bounds for Variational Problems," Quart. Appl. Math., Vol. 19, 1962, pp. 349-350.
Further results may be found in
M. A. HANSON, "Bounds for Functionally Convex Optimal Control Problems," J. Math. Anal. Appl., Vol. 8, 1964, pp. 84-89.
R. BELLMAN, "Functional Equations and Successive Approximations in Linear and Nonlinear Programming," Naval Res. Logist. Quart., Vol. 7, 1960, pp.63-83.
---, "An Approximate Procedure in Control Theory Based on Quasilinearization," Bull. Univ. Jassi, Vol. X(XIV), 1964, fasc. 3-4.
---, "An Application of Dynamic Programming to Location-allocation Problems," SIAM Rev., Vol. 7, 1965, pp. 126-128.
We are using the concept of duality in a way closely related to that of Friedrichs; for an application, see
T. IKEBE and T. KATO, "Applications of Variational Method to the Thomas-Fermi Equation," J. Phys. Soc. Japan, Vol. 12, 1957, pp. 201-204.
§19. For a detailed discussion, together with some further applications, see
R. BELLMAN, I. GLICKSBERG, and O. GROSS, Some Aspects of the Mathematical Theory of Control Processes, The RAND Corporation, R-313, 1958.
---, "Some Nonclassical Problems in the Calculus of Variations," Proc. Amer. Math. Soc., Vol. 7, 1956, pp. 87-94.
§20. For various applications of this approach, see
R. BELLMAN and S. DREYFUS, Applied Dynamic Programming, Princeton University Press, Princeton, New Jersey, 1962.
R. BELLMAN, Adaptive Control Processes: A Guided Tour, Princeton University Press, Princeton, New Jersey, 1961.
§21. For invariant imbedding, see
R. BELLMAN, R. KALABA, and G. M. WING, "Invariant Imbedding and Mathematical Physics-I: Particle Processes," J. Math. Phys., Vol. 1, 1960, pp. 280-308.
R. BELLMAN, R. KALABA, and M. PRESTRUD, Invariant Imbedding and Radiative Transfer in Slabs of Finite Thickness, American Elsevier Publishing Company, Inc., New York, 1963.
R. BELLMAN, H. KAGIWADA, R. KALABA, and M. PRESTRUD, Invariant Imbedding and Time-dependent Transport Processes, American Elsevier Publishing Company, Inc., New York, 1964.
R. KALABA, "Invariant Imbedding and the Analysis of Processes," Chapter 10 in Views of General Systems Theory, ed. by M. Mesarovic, John Wiley & Sons, New York, 1964.
Chapter Three
MONOTONE BEHAVIOR AND DIFFERENTIAL INEQUALITIES
1. MONOTONICITY
Let us now focus our attention upon the monotone properties of the sequence of approximations obtained using quasilinearization. The investigation is significant for several reasons. In the first place, the results are of analytic importance, since they provide new techniques for establishing the boundedness and convergence of the approximations and, indeed, as we have noted in connection with the Riccati equation, occasionally yield the maximum interval of convergence. Secondly, the monotone convergence is quite useful computationally in providing bounds on the solution and in furnishing a check on the numerical results.
The study of the monotone character of the approximations for second-order nonlinear differential equations is equivalent to the study of differential inequalities of the form
(1.1) u'' + p(t)u' + q(t)u \ge 0.

The study of when this implies that u \ge 0 or u \le 0 leads us into
a number of classical areas of analysis, some still well tended and some allowed to fall into neglect. We shall resist the temptation to make any extensive incursion and restrain our attention to those results specifically required at the moment. Many references to further results will be found at the end of the chapter.
In the course of applying quasilinearization to the Riccati equation, we made repeated use of the fact that we possessed a simple explicit solution of the first-order linear equation
(1.2) u' = a(t)u + b(t).
This solution in terms of quadratures and exponentials enabled us to obtain all the information we desired concerning the differential inequality
(1.3) u' - a(t)u - b(t) \ge 0.

In studying (1.1), we do not possess this luxury and, therefore,
must use some ingenuity. In this chapter we present an assortment of artifices, devices, and stratagems, ranging from the calculus of variations to partial differential equations, useful in this area.
Let us note, to begin with, that it is sufficient, in general, to study the inequality
(1.4) u'' + q(t)u \ge 0,
since the change of variable u = e^{-\frac{1}{2}\int p\,dt}\,v, which preserves sign, converts (1.1) into the form
(1.5) v'' + \left(q(t) - \frac{p'}{2} - \frac{p^2}{4}\right)v \ge 0.
2. AN ELEMENTARY APPROACH
Prior to recourse to some high-powered techniques, let us present a direct elementary method for the study of the inequality
(2.1) u'' - a(t)u \ge 0, \quad 0 \le t \le b,
which can be applied to the study of nonlinear differential equations,
Fig. 3.1
elliptic and parabolic partial differential equations, and other types of functional equations as well.
Suppose that a(t) > 0 in [0,b] and that u(0) \ge 0, u'(0) > 0. We wish to show that under these assumptions u(t) \ge 0 for 0 < t \le b. Suppose that this result were not true, so that u(t) had a zero nearest the origin, say at t = t_1, as indicated in Fig. 3.1. There would then necessarily be an intermediate point t_2 at which u(t) possesses a relative maximum. At this point, however,
(2.2) u''(t_2) \ge a(t_2)u(t_2) > 0,
a contradiction to the statement that u has a local maximum at t = t_2.
This establishes the fact that u(0) \ge 0, u'(0) > 0 implies that u(t) > 0 for 0 < t \le b.
3. RELATIONS BETWEEN SOLUTIONS AND COEFFICIENTS
To lay the foundations for our first deeper discussion of the differential inequality of (2.1), we shall consider some relations between solutions and coefficients which are analogues of those relating the roots and coefficients of polynomial equations.
Let u_1 and u_2 be two linearly independent solutions of
(3.1) u" + p(t)u' + q(t)u = 0,
and consider the determinantal equation

(3.2) \begin{vmatrix} u'' & u' & u \\ u_1'' & u_1' & u_1 \\ u_2'' & u_2' & u_2 \end{vmatrix} = 0.
Since this is a linear second-order differential equation which possesses the two linearly independent solutions u = u_1 and u = u_2, it must be equivalent to the original equation.
Introducing the notation

(3.3) W(u_1,u_2) = \begin{vmatrix} u_1 & u_1' \\ u_2 & u_2' \end{vmatrix} = u_1u_2' - u_2u_1'
(the Wronskian of u_1 and u_2), we may write (3.1) in the form
(3.4) u'' + u'\,\frac{\begin{vmatrix} u_1'' & u_1 \\ u_2'' & u_2 \end{vmatrix}}{W(t)} + u\,\frac{\begin{vmatrix} u_1' & u_1'' \\ u_2' & u_2'' \end{vmatrix}}{W(t)} = 0
(we write, as before, W(t) \equiv W(u_1,u_2)), a relation which upon comparison with (3.1) yields the expressions we want for p(t) and q(t) in terms of u_1 and u_2.
Since the presence of W(t) in the denominator is disturbing as far as possible zero values are concerned, let us now establish the result of Abel,
(3.5) W(t) = W(0)\,e^{-\int_0^t p(s)\,ds}.
From (3.3) we have, upon differentiation,
(3.6) \frac{dW}{dt} = \frac{d}{dt}\,(u_1u_2' - u_2u_1') = u_1u_2'' - u_2u_1''
= -u_1\left(p(t)u_2' + q(t)u_2\right) + u_2\left(p(t)u_1' + q(t)u_1\right)
= -p(t)(u_1u_2' - u_2u_1') = -p(t)W.
This is equivalent to the equation in (3.5). From this it follows that linear independence of u_1 and u_2 at one point, say t = 0, implies linear independence everywhere. If we take u_1 and u_2 to be principal solutions, u_1(0) = 1, u_1'(0) = 0, u_2(0) = 0, u_2'(0) = 1, we have W(u_1(0),u_2(0)) = 1. Hence W(t) is positive for t \ge 0, and, in particular, not equal to zero, provided that the function \int_0^t p(s)\,ds is finite for t \ge 0. We have already used this result in Sec. 3 of Chapter Two.
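Abel's identity is easy to confirm numerically. The sketch below marches the two principal solutions of u'' + p(t)u' + q(t)u = 0 side by side and compares the Wronskian at t = 1 with W(0)e^{-\int_0^1 p\,ds}; the coefficients p(t) = 1/(1+t), q(t) = t are arbitrary test choices, for which the exponential factor is exactly 1/2:

```python
import math

def wronskian_at(p, q, T=1.0, n=2000):
    """March the principal solutions u1, u2 of u'' + p u' + q u = 0
    with RK4 and return W(T) = u1*u2' - u2*u1'."""
    h = T / n
    y = [1.0, 0.0, 0.0, 1.0]        # (u1, u1', u2, u2') at t = 0
    def deriv(t, y):
        u1, v1, u2, v2 = y
        return [v1, -p(t) * v1 - q(t) * u1,
                v2, -p(t) * v2 - q(t) * u2]
    t = 0.0
    for _ in range(n):
        k1 = deriv(t, y)
        k2 = deriv(t + h / 2, [y[i] + h * k1[i] / 2 for i in range(4)])
        k3 = deriv(t + h / 2, [y[i] + h * k2[i] / 2 for i in range(4)])
        k4 = deriv(t + h, [y[i] + h * k3[i] for i in range(4)])
        y = [y[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
             for i in range(4)]
        t += h
    u1, v1, u2, v2 = y
    return u1 * v2 - u2 * v1
```

Here W(0) = 1 and \int_0^1 dt/(1+t) = \ln 2, so (3.5) predicts W(1) = 1/2.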
4. A USEFUL FACTORIZATION OF THE SECOND-ORDER LINEAR DIFFERENTIAL OPERATOR
Let us now perform some simple manipulations which will provide us with a very useful decomposition of the equation
(4.1) u'' + p(t)u' + q(t)u = 0,
or, equivalently, of the operator D2 + p(t)D + q(t), where D = d/dt. Begin with the relation
(4.2) \frac{d}{dt}\left(\frac{u}{u_1}\right) = \frac{W(u_1,u)}{u_1^2},
and the similarly derived relation
(4.3) \frac{d}{dt}\left(\frac{W(u_1,u)}{W(u_1,u_2)}\right) = \frac{\begin{vmatrix} u_1 & u_1'' \\ u & u'' \end{vmatrix} W(u_1,u_2) - W(u_1,u)\begin{vmatrix} u_1 & u_1'' \\ u_2 & u_2'' \end{vmatrix}}{W(u_1,u_2)^2}.
We suppose that u_1 and u_2 are such that W(u_1,u_2) \ne 0. Now let us invoke some simple determinantal results. Let A be a 3 \times 3 matrix
(4.4) A = (a_{ij}), \quad i, j = 1, 2, 3,

with the inverse

(4.5) A^{-1} = \left(\frac{A_{ji}}{|A|}\right),
where A_{ij} is the cofactor of a_{ij} in the determinantal expansion, and |A| denotes the determinant of A. Since (A^{-1})^{-1} = A, we have the identity
(4.6) a_{11} = \frac{1}{|A^{-1}|}\begin{vmatrix} A_{22}/|A| & A_{32}/|A| \\ A_{23}/|A| & A_{33}/|A| \end{vmatrix},

or

(4.7) a_{11}|A| = \begin{vmatrix} A_{22} & A_{32} \\ A_{23} & A_{33} \end{vmatrix}.
Consider now the determinant

(4.8) W(u_1,u_2,u) = \begin{vmatrix} u_1 & u_1' & u_1'' \\ u_2 & u_2' & u_2'' \\ u & u' & u'' \end{vmatrix}.
Letting A_{ij} denote the cofactor of the ijth element, and using (4.7), we have
(4.9) u_1\,W(u_1,u_2,u) = \begin{vmatrix} A_{22} & A_{32} \\ A_{23} & A_{33} \end{vmatrix}.
Identifying these cofactors with the terms on the right-hand side of (4.3), we obtain the relation
(4.10) \frac{d}{dt}\left(\frac{W(u_1,u)}{W(u_1,u_2)}\right) = \frac{u_1\,W(u_1,u_2,u)}{W(u_1,u_2)^2}.
Hence,
(4.11) \frac{W(u_1,u_2,u)}{W(u_1,u_2)} = \frac{W(u_1,u_2)}{u_1}\,\frac{d}{dt}\left(\frac{W(u_1,u)}{W(u_1,u_2)}\right) = \frac{W(u_1,u_2)}{u_1}\,\frac{d}{dt}\left(\frac{u_1^2}{W(u_1,u_2)}\,\frac{d}{dt}\left(\frac{u}{u_1}\right)\right),
or, finally, the desired decomposition
(4.12) u'' + p(t)u' + q(t)u = \frac{W(u_1,u_2)}{u_1}\,\frac{d}{dt}\left(\frac{u_1^2}{W(u_1,u_2)}\,\frac{d}{dt}\left(\frac{u}{u_1}\right)\right).
In the following chapter, we will indicate the corresponding result for the Nth-order linear differential equation.
5. A POSITIVITY RESULT
From the final result of the foregoing section, (4.12), we can easily demonstrate that a sufficient condition that the relation
(5.1) u'' + p(t)u' + q(t)u \ge 0, \quad u(0) = c_1, \quad u'(0) = c_2,
in 0 \le t \le b, implies that u \ge v, where v satisfies the differential equation
(5.2) v'' + p(t)v' + q(t)v = 0, \quad v(0) = c_1, \quad v'(0) = c_2,
is that there exist a solution of the homogeneous equation
w'' + p(t)w' + q(t)w = 0
which is positive in [0,b]. Call this solution u_1(t), and let u_2(t) be a linearly independent solution. Then W(u_1,u_2) is nonzero, and we can choose u_2 so that it is positive. As we know, this involves only a choice of u_2(0) and u_2'(0).
From (5.1) and (5.2), we conclude that
(5.3) (u - v)'' + p(t)(u - v)' + q(t)(u - v) = r(t),
where r(t) \ge 0 for 0 \le t \le b. Applying (4.12), we may write (5.3) in the form
(5.4) \frac{d}{dt}\left(\frac{u_1^2}{W(u_1,u_2)}\,\frac{d}{dt}\left(\frac{u - v}{u_1}\right)\right) = \frac{u_1 r(t)}{W(u_1,u_2)},
where U1 and U2 are as indicated above. Hence, integrating between o and t,
(5.5) \frac{u_1^2}{W(u_1,u_2)}\,\frac{d}{dt}\left(\frac{u - v}{u_1}\right) = \int_0^t \frac{u_1 r(s)}{W(u_1,u_2)}\,ds \ge 0,
since, by hypothesis, u_1 and r are nonnegative and W is positive. There is no constant of integration, since u'(0) = v'(0). Integrating once again,
(5.6) \frac{u - v}{u_1} = \int_0^t \frac{W(u_1,u_2)}{u_1^2}\left(\int_0^s \frac{u_1 r(s_1)}{W(u_1,u_2)}\,ds_1\right) ds \ge 0.
Again there is no constant of integration, since u(0) = v(0). From this we see that u \ge v, the desired result. Furthermore, we have a lower bound for u - v in terms of L(u) - L(v), where L is the second-order linear operator. Finally, let us note that the existence of u_1(t) is a characteristic value condition.
6. A RELATED PARABOLIC PARTIAL DIFFERENTIAL EQUATION
We now wish to study the positivity, or, more precisely, nonnegativity, of the solution of
(6.1) u'' + q(x)u + f(x) = 0, \quad 0 < x < b,
with the boundary conditions u(O) = u(b) = 0, by means of the behavior of the solution of the associated heat equation
(6.2) u_t = u_{xx} + q(x)u + f(x)
with the initial condition
(6.3) u(x,0) = h(x), \quad 0 \le x \le b,
and the boundary conditions
(6.4) u(0,t) = u(b,t) = 0, \quad t > 0.
We have changed variables in order to keep x as a "space variable" and t as a "time variable," for the benefit of our intuition.
The motivation for the introduction of (6.2) is the following. Under appropriate conditions, which we shall examine below, the limiting value of u(x,t), as t \to \infty, is the solution of (6.1). Hence, if we can establish the nonnegativity of the solution of (6.2), for t \ge 0, we will have established the desired nonnegativity for the solutions of (6.1). We are using an imbedding technique.
Let us suppose that we have established the requisite existence and uniqueness theorems and that we are presently only concerned with the positivity of the solution.
Consider, to begin with, the equation
(6.5) u_t = u_{xx} + f(x),
and return to the diffusion, or random walk, model which generates this equation. The corresponding difference scheme is
(6.6) u\!\left(x,\, t + \frac{\Delta^2}{2}\right) = \frac{u(x + \Delta, t) + u(x - \Delta, t)}{2} + f(x)\,\frac{\Delta^2}{2},

where x = 0, \Delta, 2\Delta, \ldots, and t = 0, \Delta^2/2, \Delta^2, \ldots. We define a function u(x,t) \equiv u(x,t,\Delta) for all x, t \ge 0 by agreeing to use linear interpolation between grid points.
It is immediate from the recurrence relation that u(x,t) \ge 0 for all x, and t \ge 0, if f(x) \ge 0 and u(x,0) \ge 0. Assuming that we can establish that the limit of the solution of (6.6), the function u(x,t,\Delta),
as \Delta \to 0, is the solution of (6.5), we have demonstrated the nonnegativity of the solution of (6.5).
To handle the more general equation
(6.7) u_t = u_{xx} + q(x,t)u + f(x),
in the case where q(x,t) is nonnegative for x, t ~ 0, we write in place of (6.6) the relation
(6.8) u\!\left(x,\, t + \frac{\Delta^2}{2}\right) = \frac{u(x + \Delta, t) + u(x - \Delta, t)}{2} + q(x,t)\,u(x,t)\,\frac{\Delta^2}{2} + f(x)\,\frac{\Delta^2}{2}.
Once again, the nonnegativity of u(x,t) is immediate. However, interestingly enough, the restriction that q(x,t) \ge 0 is inessential, since we can always perform a change of variable
(6.9) v(x,t) = e^{kt}u(x,t),
leading to the equation
(6.10) v_t = v_{xx} + (q(x,t) + k)v + f(x)e^{kt}.
Hence, if we can find a constant k such that q(x,t) + k \ge 0 for all x and t, the proof goes through as before.
Assuming, as above, that there is no difficulty in establishing that the limit of the solution of the difference equation is the solution of the differential equation, we have demonstrated:
If
(a) u_t - u_{xx} - q(x,t)u \ge 0, \quad 0 < x < 1, \quad t > 0,
(b) u(x,0) \ge 0, \quad 0 \le x \le 1,
(c) u(0,t) = u(1,t) = 0, \quad t > 0,
(d) q(x,t) \ge -k(T) > -\infty, \quad 0 \le x \le 1, \quad 0 \le t \le T, for any T > 0,
then u(x,t) \ge 0, 0 \le x \le 1, t \ge 0.

A great deal of work has been done in connection with the convergence of the discrete process to the continuous one.
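The random-walk scheme makes the positivity argument directly executable. The sketch below implements (6.8) on [0,1]; the choices q ≡ 1, f ≡ 1, zero initial data, and the mesh are arbitrary, and since q = 1 lies below the smallest characteristic value \pi^2 the iteration settles toward the solution of u'' + u + 1 = 0, u(0) = u(1) = 0:

```python
def heat_march(n=20, steps=800,
               q=lambda x, t: 1.0, f=lambda x: 1.0, h0=lambda x: 0.0):
    """Explicit scheme (6.8): the mesh ratio dt = dx^2/2 turns the Laplacian
    step into an average of neighbors, so nonnegativity is inherited."""
    dx = 1.0 / n
    dt = dx * dx / 2.0
    u = [h0(i * dx) for i in range(n + 1)]
    u[0] = u[n] = 0.0
    t = 0.0
    for _ in range(steps):
        new = [0.0] * (n + 1)      # boundary values stay 0
        for i in range(1, n):
            new[i] = (0.5 * (u[i + 1] + u[i - 1])
                      + q(i * dx, t) * u[i] * dt + f(i * dx) * dt)
        u = new
        t += dt
    return u
```

Every term on the right is nonnegative whenever q, f, and the current u are, so u \ge 0 is preserved step by step; after enough steps the profile approximates the steady two-point solution.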
7. CHARACTERISTIC VALUES AGAIN
The interesting question now is to determine why we have an unrestricted positivity result for the partial differential equation, and yet a restricted result for the ordinary differential equation. As we shall see, this corresponds to the actual state of affairs, and is not merely a reflection on our analytic approach.
Let us suppose that
(7.1) u'' + q(x)u + f(x) = 0, \quad u(0) = u(1) = 0,
possesses a unique solution u = u_0(x), and let u(x,t) denote the solution of
(7.2) u_t - u_{xx} - q(x)u - f(x) = 0,

u(0,t) = u(1,t) = 0, \quad u(x,0) = h(x), \quad 0 \le x \le 1.
If we set
(7.3) u(x,t) = u_0(x) + v(x,t),
we see that v satisfies the homogeneous equation
(7.4) v_t - v_{xx} - q(x)v = 0,

v(0,t) = v(1,t) = 0, \quad v(x,0) = h(x) - u_0(x).
For u(x,t) to converge to u_0(x) as t \to \infty, it is necessary that v(x,t) \to 0 as t \to \infty. Using a Sturm-Liouville expansion of the solution, write
(7.5) v(x,t) = \sum_{k=1}^{\infty} e^{\lambda_k t}\,v_k(x).
Each pair, \lambda_k and v_k(x), is determined by the Sturm-Liouville equation
(7.6) v_k'' + q(x)v_k = \lambda_k v_k, \quad v_k(0) = v_k(1) = 0,

with the v_k(x) normalized by the condition \int_0^1 v_k^2(x)\,dx = 1. If all
characteristic values of (7.6) are negative, \lambda_k < 0, then v(x,t) \to 0 as t \to \infty. Only in this case does u(x,t) \to u_0(x) as t \to \infty, for all forcing functions f(x), and therefore only under this constraint on the characteristic values can we conclude the nonnegativity of u_0(x) from that of u(x,t).
8. VARIATIONAL APPROACH
The study of the differential inequality
(8.1) u'' + q(t)u \le 0, \quad 0 \le t \le b, \quad u(0) = u(b) = 0,
is equivalent to the study of the differential equation
(8.2) u'' + q(t)u + f(t) = 0, \quad 0 \le t \le b, \quad u(0) = u(b) = 0,
for an arbitrary nonnegative function f(t).*

Let us now present a method, entirely different from any of the three preceding, of studying this equation, based upon the fact that (8.2) is the Euler equation associated with the problem of minimizing the functional
(8.3) J(u) = \int_0^b \left[u'^2 - q(t)u^2 - 2f(t)u\right] dt
over functions u(t) satisfying the constraints u(O) = u(b) = 0, and for which the integral exists.
It follows that to demonstrate that the solution of (8.2) is nonnegative whenever f(t) \ge 0, it suffices to show that the function which minimizes J(u) is nonnegative whenever f(t) \ge 0.
To begin with, let us require that

(8.4) q(t) \le \frac{\pi^2}{b^2} - \epsilon, \quad \epsilon > 0
(a characteristic-value condition; see Sec. 9). This condition ensures that J(u) has a finite lower bound. Standard convexity arguments show that the lower bound is attained by a unique function. References to these background results will be found at the end of the chapter.
To show that the minimizing function is nonnegative, let us proceed by contradiction. Suppose u(t) < 0 in an interval [a_1,b_1] lying
* We have switched back to t as the independent variable.
Fig. 3.2
within [0,b], as in Fig. 3.2. Consider then a new function w(t) equal to -u(t) in this interval and to u(t) elsewhere. This introduces a possible discontinuity into w'(t), but does not affect L^2-integrability. This change does not affect the quadratic terms, but diminishes the contribution of the term involving f(t). This contradicts the fact that u(t) is the minimizing function.
Note that this argument shows that there is no need for a solution with negative values, even if we have not previously established uniqueness of solution.
9. DISCUSSION
The equation
(9.1) u'' + ku + \sin(\pi t/b) = 0, \quad u(0) = u(b) = 0,
with the solution

(9.2) u = \frac{-\sin(\pi t/b)}{k - \pi^2/b^2},
shows that the result is best possible as far as a uniform bound on q(t) is concerned, which is to say, one independent of f(t). If k > \pi^2/b^2, u has a sign opposite to that of \sin(\pi t/b).
The quantity \pi^2/b^2 enters as the smallest characteristic value of the Sturm-Liouville equation
(9.3) u'' + \lambda u = 0, \quad u(0) = u(b) = 0,
exhibiting once again the link with characteristic values.

The foregoing argument can be used to show the more general result that u cannot have more changes of sign than f(t), a variation-diminishing property.
The solution of (8.2) may be written in the form

(9.4) u = \int_0^b k(t,s)f(s)\,ds,
where k is the Green's function. The preceding arguments establish nonnegativity of the Green's function, and indeed its variation-diminishing property. For extensive results concerning variation-diminishing properties of Green's functions, see the references at the end of the chapter.
Observe that the argument in Sec. 8 can be applied to a general variational problem of the form
(9.5) J(u) = \int_0^b \left[g(u,u') - 2f(t)u\right] dt,
where g is even in u and u'. Finally, let us again point out that we have consistently avoided
the more general form
(9.6) u" + p(t)u' + q(t)u = 0,
since the change of variable
(9.7) u = e^{-\frac{1}{2}\int^t p(s)\,ds}\,v
converts this into the equation
(9.8) v'' + \left(q(t) - \frac{p'(t)}{2} - \frac{p^2(t)}{4}\right)v = 0.
10. IMPROVEMENT OF CONVERGENCE ESTIMATES
What is really desired in the theory of differential equations, and functional equations in general, is a method of successive approximations which yields rapid convergence to the solution whenever the solution exists, and throughout its whole domain of existence. Let us show that the monotonicity results established above enable us to establish this property for quasilinearization in some simple cases. A full investigation would take us too far afield into the general area of differential inequalities.
Consider, for example, the equation discussed in Sec. 12 of Chapter Two,
(10.1) u'' = e^u, \quad u(0) = u(b) = 0.
The sequence of approximations is generated by the relation
(10.2) u_{n+1}'' = e^{u_n} + (u_{n+1} - u_n)e^{u_n}, \quad u_{n+1}(0) = u_{n+1}(b) = 0,

with u_0 conveniently taken as 0. Since e^{u_n} > 0 for all n, we see from Sec. 2 that we have monotonicity,
(10.3) u_0 \ge u_1 \ge u_2 \ge \cdots.
Assuming that we already know that a unique solution exists, we deduce that the sequence {un}, as a monotone bounded sequence, necessarily converges. Furthermore, we know from a previous discussion that it converges to u. Since u is continuous and the convergence is monotone, the convergence must be uniform (Dini's theorem), a point we have also discussed before.
The positivity of the operator thus provides us with a new technique for establishing convergence, together with improved estimates for the domain of convergence.
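The monotone decrease can be observed directly in a discrete sketch of the iteration (10.2) (b = 1, the grid, and the iteration count are arbitrary choices); the tridiagonal systems inherit the positivity property, so the iterates decrease pointwise to the discrete solution:

```python
import math

def monotone_iterates(b=1.0, n=51, iters=6):
    """Quasilinearization (10.2) for u'' = e^u, u(0) = u(b) = 0:
    each sweep solves u'' - e^{u_k} u = e^{u_k}(1 - u_k) by the
    Thomas algorithm; returns the successive approximations."""
    h = b / (n - 1)
    u = [0.0] * n                 # u0 = 0
    seq = [u[:]]
    for _ in range(iters):
        a, diag, c, d = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
        diag[0] = diag[-1] = 1.0  # boundary rows: u = 0
        for i in range(1, n - 1):
            e = math.exp(u[i])
            a[i], diag[i], c[i] = 1 / h ** 2, -2 / h ** 2 - e, 1 / h ** 2
            d[i] = e * (1 - u[i])
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / diag[0], d[0] / diag[0]
        for i in range(1, n):
            m = diag[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        w = [0.0] * n
        w[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            w[i] = dp[i] - cp[i] * w[i + 1]
        u = w
        seq.append(u[:])
    return seq
```

Each sweep costs one linear two-point problem; the sequence decreases pointwise (u_0 \ge u_1 \ge \cdots) while the successive differences collapse quadratically, giving both a bound on the solution and a check on the computation.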
Generally, given the equation
(10.4) u" = g(u,x), u(O) = u(b) = 0,
with the associated linear equations yielding the approximations
(10.5) u_{n+1}'' = g(u_n,x) + (u_{n+1} - u_n)g_u(u_n,x),
we see that monotonicity depends upon the establishment of a bound of the form

(10.6) g_u(u_n,x) \ge 0.
If g_u is automatically positive, as above, or if it is monotone, convex, or concave, so that we can draw some conclusions from an examination of the expression -g_u(u_1,x) or -g_u(u,x), then more general results can be stated.
We have wished merely to indicate some of the possibilities of this approach without delving too deeply into the analytic aspects.
11. DISCUSSION
We have employed three distinct types of approach to study the behavior of the Green's function of the second-order linear differential equation
(11.1) u'' + p(t)u' + q(t)u + f(t) = 0, \quad u(0) = u(b) = 0.
The first, contained in Secs. 2-5, has been direct. The second and third were indirect approaches, based upon associations of the equation above with other processes and their associated equations. Examining these processes in detail, we deduced the required positivity, and more.
Observe then that we have used imbedding techniques. The original problem was imbedded within a larger family of problems which were actually easier to resolve. This idea is fundamental in dynamic programming and invariant imbedding, techniques which we discussed briefly in Chapter Two. We shall discuss dynamic programming again in Chapter Seven.
Chapter Three
COMMENTS AND BIBLIOGRAPHY
§1. The study of differential inequalities in a systematic fashion dates back to the work of Caplygin. See, for example,
S. A. CAPLYGIN, New Methods in the Approximate Integration of Differential Equations, Gostekhizdat, Moscow, 1950. (This is a set of reprints of the original papers.)
These investigations lead directly into the study of generalized convexity, an area investigated by Beckenbach, Bing, Valiron, Bonsall, Peixoto, Motzkin, Tornheim, Curtis, Reid, Hartman, Redheffer, and others. References may be found on page 160 of
E. F. BECKENBACH and R. BELLMAN, Inequalities, Springer, Berlin, 1961.
See also the books
M. A. KRASNOSELSKII, Positive Solutions of Operator Equations, Gostekhizdat, Moscow, 1962.
L. COLLATZ, Numerical Treatment of Differential Equations, Springer, Berlin, 1960.
§2. Further examples of what can be done by arguments of this type may be found in Chapters 6 and 7 of
R. BELLMAN, Stability Theory of Differential Equations, McGraw-Hill Book Company, Inc., New York, 1954.
For an application of similar reasoning to harmonic functions, that is, solutions of u_xx + u_yy = 0, see
E. C. TITCHMARSH, Theory of Functions, Oxford University Press, London, 1939, p. 167.
§3. The corresponding results for the nth-order linear differential equations are given in
G. POLYA, "On the Mean-value Theorem Corresponding to a Given Linear Homogeneous Differential Equation," Trans. Amer. Math. Soc., Vol. 24, 1922, pp. 312-324.
L. SCHLESINGER, Handbuch der Theorie der linearen Differentialgleichungen, Vol. 1, 1895, p. 52.
§5. This result has been rederived by many authors. It appears to go back to Caplygin. See, for example,
J. E. WILKINS, "The Converse of a Theorem of Tchaplygin on Differential Equations," Bull. Amer. Math. Soc., Vol. 53, 1947, pp. 126-129.
§6. For a discussion of conditions under which the discrete scheme converges to the continuous, see
G. E. FORSYTHE and W. WASOW, Finite Difference Methods for Partial Differential Equations, John Wiley & Sons, Inc., New York, 1960.
There are many extensions of these techniques and associated applications. For example, we can replace (6.6) by
u(x, t + Δ) = Σ_{i=1}^{N} a_i [u(x + w_i Δ, t) + u(x - w_i Δ, t)] + ... ,
and by judicious choice of the a_i and w_i obtain a much higher degree of accuracy. Furthermore, we can apply the same idea to nonlinear partial difference equations and establish nonnegativity in this fashion. Thus u_t = uu_x can be approximated by
u(x, t + Δ) = u(x + u(x,t)Δ, t).
See
R. BELLMAN, I. CHERRY, and G. M. WING, "A Note on the Numerical Integration of a Class of Nonlinear Hyperbolic Equations," Quart. Appl. Math., Vol. 16, 1958, pp. 181-183.
R. BELLMAN, "Some Questions Concerning Difference Approximations to Partial Differential Equations," Boll. d'Unione Mate., Vol. 17, 1962, pp. 188-190.
R. BELLMAN, R. KALABA, and B. KOTKIN, "On a New Approach to the Computational Solution of Partial Differential Equations," Proc. Nat. Acad. Sci. USA, Vol. 48, 1962, pp. 1325-1327.
T. MOTZKIN and W. WASOW, "On the Approximation of Linear Elliptic Partial Differential Equations by Difference Equations with Positive Coefficients," J. Math. Phys., Vol. 31, 1953, pp. 253-259.
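The nonnegativity of the scheme u(x, t + Δ) = u(x + u(x,t)Δ, t) discussed above is easy to confirm numerically. The following sketch (Python; the grid and initial function are our own illustrations, not from the text) advances the scheme by linear interpolation: since each new value is an interpolated sample of old values, the solution stays within the range of the initial data, and in particular remains nonnegative.

```python
import numpy as np

def step(x, u, dt):
    """One step of the characteristic-sampling scheme
    u(x, t + dt) = u(x + u(x,t)*dt, t) for u_t = u*u_x.
    New values are linear interpolations of old grid values,
    so they lie between existing values of u."""
    return np.interp(x + u * dt, x, u)

x = np.linspace(0.0, 1.0, 201)
u = 0.5 * (1.0 + np.sin(2 * np.pi * x))   # nonnegative initial data in [0, 1]
for _ in range(50):
    u = step(x, u, 0.002)

# nonnegativity (and the full range bound) is preserved
assert u.min() >= 0.0
```

Note that `np.interp` clamps queries outside the grid to the endpoint values, which here plays the role of a crude boundary treatment; a serious computation would handle the boundary explicitly.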
Let us note, in passing, an application of quasilinearization to the study of solutions of the heat equation,
R. BELLMAN, "Some Properties of Summation Kernels," Duke Math. J., Vol. 15, 1948, pp. 1013-1019.
Here we use the result

max_{a ≤ x ≤ b} |u(x)| = lim_{n→∞} (∫_a^b u(x)^{2n} dx)^{1/2n}.
§8. This approach was first given in
R. BELLMAN, "On the Nonnegativity of Green's Functions," Boll. d' Unione Mate., Vol. 12, 1957, pp. 411-413.
For the proof of existence of the minimum, see
R. BELLMAN, I. GLICKSBERG, and O. GROSS, Some Aspects of the Mathematical Theory of Control Processes, The RAND Corporation, R-313, 1958.
§9. See
R. BELLMAN, "On Variation-diminishing Properties of Green's Functions," Boll. d'Unione Mate., Vol. 16, 1961, pp. 164-166.
For detailed discussion of variation-diminishing transformations and the concept of total positivity, see
S. KARLIN, "Total Positivity and Convexity Preserving Transformations," Proc. Symp. Pure Math., Vol. VII, 1963, pp. 329-347,
and other references given in the book cited in §1 above.
§10. For generalizations of Dini's theorem to L^p-spaces, and even more general spaces, see
K. VALA, "On Compact Sets of Compact Operators," Ann. Acad. Sci. Fenn. Ser. A, Vol. 351, 1946, pp. 3-8.
W. MLAK, "Note on Abstract Differential Inequalities and Chaplighin Method," Ann. Polon. Math., Vol. X, 1961, pp. 253ff.
---, "Note on Maximum Solutions of Differential Equations," Contributions to Differential Equations, Vol. 1, 1963, pp. 461-465.
Some papers on differential inequalities of particular interest are
L. COLLATZ, "Aufgaben monotoner Art," Arch. Math., Vol. 3, 1952, pp. 366-376.
---, "Applications of the Theory of Monotonic Operators to Boundary Value Problems," Boundary Problems in Differential Equations, University of Wisconsin Press, Madison, Wisconsin, 1960.
R. M. REDHEFFER, "Die Collatzsche Monotonie bei Anfangswertproblemen," Arch. Rational Mech. Anal., Vol. 14, 1963, pp. 196-212.
A. I. AVERBUKH, "The Connection Between S. A. Chaplygin's Theorem and the Theory of Optimal Processes," Avtomat. i Telemeh., Vol. 22, 1961, pp. 1309-1313.
G. S. JONES, "Fundamental Inequalities for Discrete and Discontinuous Functional Equations," J. Soc. Indust. Appl. Math., Vol. 12, 1964, pp. 43-57.
Chapter Four
SYSTEMS OF DIFFERENTIAL EQUATIONS, STORAGE AND DIFFERENTIAL APPROXIMATION
1. INTRODUCTION
In this chapter, we wish to discuss the application of our techniques to the study of higher-order differential equations and systems subject to two-point and multipoint boundary-value problems. As before, we shall first present the quasilinearization procedure which yields quadratic convergence, and then consider questions of monotonicity.
The computational aspects, particularly the problem of economizing on rapid-access storage, lead us to develop a novel way of treating successive approximations. This, in turn, leads us to a new technique for obtaining the numerical solution of differential-difference equations and of more general functional differential equations of the type
(1.1) u'(t) = g(t, u(t), u(h(t))).
We also apply quasilinearization in connection with differential approximation, a method which permits us to reduce quite complicated functional equations to ordinary differential equations. Examples of the efficacy of this procedure will be given.
The nonlinear differential equation
(1.2) u^(N) = g(u, u', ..., u^(N-1))
may be treated directly by means of quasilinearization. It is usually more convenient, for both analytic and computational purposes, to convert it into a system of first-order equations by means of the
introduction of the new variables u_1, u_2, ..., u_N, where u_1 = u, u_2 = u', ..., u_N = u^(N-1). Then (1.2) becomes
(1.3) u_i' = u_{i+1}, i = 1, 2, ..., N - 1,
      u_N' = g(u_1, u_2, ..., u_N).
As far as questions of monotonicity are concerned, as we shall see below, there are advantages sometimes in using one form and sometimes the other.
We shall omit all proofs of convergence, since they follow the common pattern which we have already established in Chapter Two.
2. QUASILINEARIZATION APPLIED TO SYSTEMS
Let us then consider the nonlinear system
(2.1) dx_i/dt = g_i(x_1, x_2, ..., x_N), i = 1, 2, ..., N,
where the values of the x_i are specified by various conditions at t = 0 and t = b.
We leave out any t-dependence on the right-hand side merely to simplify the notation. We can either write
(2.2) dx_i/dt = g_i(x_1, x_2, ..., x_N, t), i = 1, 2, ..., N,
or introduce t as a new dependent variable, denoted by x_{N+1}, and write
(2.3) dx_i/dt = g_i(x_1, x_2, ..., x_N, x_{N+1}), i = 1, 2, ..., N,
      dx_{N+1}/dt = 1.
Since this is a system of the original type of one dimension higher, we may as well keep the original form.
It is convenient to use vector-matrix notation and write
(2.4) dx/dt = g(x),
where x is the N-dimensional vector with components x_1, x_2, ..., x_N, and g(x) is the N-dimensional vector with components g_i(x_1, x_2, ..., x_N), i = 1, 2, ..., N. Let the boundary conditions have the form
(2.5) (x(0), a_i) = b_i, i = 1, 2, ..., k,
      (x(b), a_i) = b_i, i = k + 1, ..., N,
where the a_i are given vectors, the b_i are given scalars, and (x,y), as usual, denotes the vector inner product
(2.6) (x,y) = Σ_{i=1}^{N} x_i y_i.
There is no difficulty in applying the same general technique to the case where there are nonlinear boundary conditions.
Quasilinearization is readily applied as before. We generate a sequence of vectors {xn} by means of the linear equations
(2.7) dx_{n+1}/dt = g(x_n) + J(x_n)(x_{n+1} - x_n), n = 0, 1, 2, ...,
and the boundary conditions
(2.8) (x_{n+1}(0), a_i) = b_i, i = 1, 2, ..., k,
      (x_{n+1}(b), a_i) = b_i, i = k + 1, ..., N,
with x_0 an initial guess. Here J(x_n) is the Jacobian matrix defined by
(2.9) J(x_n) = (∂g_i/∂x_j) evaluated at x = x_n, i, j = 1, 2, ..., N.
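The tangent character of the linearized right-hand side in (2.7) is easy to verify numerically: g(x_n) + J(x_n)(x - x_n) agrees with g(x) to second order in the step. A small sketch (Python, with an illustrative g of our own, not from the text):

```python
import numpy as np

def g(x):
    # an illustrative nonlinear right-hand side (not from the text)
    return np.array([x[1], x[0]**2 + np.sin(x[1])])

def J(x):
    # the Jacobian matrix (dg_i/dx_j) of (2.9), here evaluated analytically
    return np.array([[0.0, 1.0],
                     [2.0 * x[0], np.cos(x[1])]])

xn = np.array([0.3, -0.7])

def lin_error(h):
    # norm of g(xn + d) minus its linearization about xn, for step size h
    d = h * np.array([1.0, 1.0])
    return np.linalg.norm(g(xn + d) - (g(xn) + J(xn) @ d))

# halving the step cuts the error by about four: second-order agreement
e1, e2 = lin_error(1e-2), lin_error(5e-3)
```

This quadratic local agreement is exactly what produces the quadratic convergence of the sequence {x_n}.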
Prior to the presentation of a numerical example, let us discuss what is involved in the solution of the linear system.
3. SOLUTION OF LINEAR SYSTEM
Consider the equation
(3.1) dx/dt = A(t)x,
subject to the boundary conditions
(3.2) (x(0), a_i) = b_i, i = 1, 2, ..., k,
      (x(b), a_i) = b_i, i = k + 1, k + 2, ..., N.
If X(t) represents the solution of the matrix equation
(3.3) dX/dt = A(t)X, X(0) = I,
where I is the identity matrix, then the general solution of (3.1) has the form
(3.4) x = Xc,
where c is a vector. Substituting in (3.2) in order to determine c, we obtain a system of linear equations
(3.5) (X(0)c, a_i) = b_i, i = 1, 2, ..., k,
      (X(b)c, a_i) = b_i, i = k + 1, ..., N,
or
(3.6) (c, a_i) = b_i, i = 1, 2, ..., k,
      (c, X(b)'a_i) = b_i, i = k + 1, ..., N
(where X' denotes the transpose of X), a set of linear algebraic equations for the components of c.
The condition for a unique solution of this system is, as is to be expected, of characteristic value type. One way to ensure that it holds is to take the interval [0,b] to be sufficiently small.
Let us strongly emphasize that the reduction of the computational solution of the original problem to the numerical solution of a system of linear algebraic equations by no means ends the matter. Serious questions of inaccuracy and instability can easily arise. We shall discuss some of these below. Furthermore, we shall briefly sketch some alternative approaches which avoid the solution of linear algebraic systems. One of these is dynamic programming.
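The fundamental-matrix reduction of this section is short to carry out in code. The sketch below (Python; the test problem u'' = u with u(0) = 1, u(1) = 2 is our own illustration, not the book's) integrates the matrix equation (3.3) by a Runge-Kutta method and then solves the algebraic system (3.6) for the vector c.

```python
import numpy as np

def fundamental_matrix(A, b, steps=2000):
    """Integrate dX/dt = A X, X(0) = I by classical RK4 (constant A assumed)."""
    X = np.eye(A.shape[0])
    h = b / steps
    f = lambda X: A @ X
    for _ in range(steps):
        k1 = f(X); k2 = f(X + h/2*k1); k3 = f(X + h/2*k2); k4 = f(X + h*k3)
        X = X + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return X

# u'' = u on [0, 1], written as x' = A x with x = (u, u')
A = np.array([[0.0, 1.0], [1.0, 0.0]])
b = 1.0
Xb = fundamental_matrix(A, b)

# boundary conditions (x(0), a1) = 1 and (x(b), a2) = 2 with a1 = a2 = e1,
# i.e. u(0) = 1, u(1) = 2; by (3.6) the rows of the algebraic system are
# a1' and a2' X(b)
a1 = np.array([1.0, 0.0]); a2 = np.array([1.0, 0.0])
M = np.vstack([a1, a2 @ Xb])
c = np.linalg.solve(M, np.array([1.0, 2.0]))

# x(0) = c satisfies the left condition, x(b) = X(b) c the right one
assert abs(c[0] - 1.0) < 1e-9
assert abs((Xb @ c)[0] - 2.0) < 1e-6
```

For time-dependent A(t) one would evaluate A inside the Runge-Kutta stages; the algebraic step (3.6) is unchanged.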
4. A NUMERICAL EXAMPLE
As an example of the quasilinearization method applied to a higher-order equation, let us discuss the numerical solution of
(4.1)
with the boundary conditions
(4.2) u(0) = u'(0) = 0, u''(1) = u'''(1) = 0.
Using the recurrence relation
(4.3)
and the initial approximation u_0(x) = 0, and proceeding as above, we generate Table 4.1. Here k = 6, and an Adams-Moulton integration procedure was used. The entire calculation consumed about eight seconds on an IBM-7090.
Table 4.1
x        u_1(x)               u_2(x)               u_3(x)               u_4(x)
0.0      0.0                  0.0                  0.0                  0.0
0.20312  -0.375572 x 10^-1    -0.394922 x 10^-1    -0.394976 x 10^-1    -0.394976 x 10^-1
0.40625  -0.128544            -0.135598            -0.135617            -0.135617
0.60156  -0.242775            -0.256792            -0.256831            -0.256831
0.8469   -0.373308            -0.395724            -0.395786            -0.395786
1.0      -0.501782            -0.532645            -0.532730            -0.532730
5. MULTIPOINT BOUNDARY-VALUE PROBLEMS
The technique we have used in the preceding section is equally applicable to the computational solution of equations of the form
(5.1) dx/dt = g(x),
where conditions are imposed at many t-points, for example,
(5.2) (x(t_i), a_i) = b_i, i = 1, 2, ..., N,
where t_1 ≤ t_2 ≤ ... ≤ t_N. Problems of this type arise in a very natural way in connection with the identification of systems.
In more general fashion, we can treat side conditions of the form
(5.3) ∫_0^b (x(t), dk_i(t)) = b_i, i = 1, 2, ..., N,
where the integrals are of Riemann-Stieltjes type, and nonlinear versions of these relations. Some examples of the computational treatment of multipoint boundary-value problems will be found in Chapter Six. Relatively little has been done in connection with the analytic study of these equations by comparison with their scientific importance.
6. SUCCESSIVE APPROXIMATIONS AND STORAGE
In the previous pages, we discussed the solution of
(6.1) dx/dt = g(x),
      (x(0), a_i) = b_i, i = 1, 2, ..., k,
      (x(b), a_i) = b_i, i = k + 1, ..., N,
by means of the successive approximations generated by the relation
(6.2) dx_{n+1}/dt = g(x_n) + J(x_n)(x_{n+1} - x_n),
      (x_{n+1}(0), a_i) = b_i, i = 1, 2, ..., k,
      (x_{n+1}(b), a_i) = b_i, i = k + 1, ..., N.
We also pointed out how quasilinearization could be used directly in connection with variational problems.
Let us now inquire into the actual computational aspects of the foregoing algorithm. In order to use this equation to obtain the values of x_{n+1}(t), given the values of x_n(t) at a grid of t-points, t = 0, Δ, 2Δ, ..., MΔ, it is necessary to store the entire set of values {x_n(kΔ)}, k = 0, 1, 2, ..., M. This is the usual procedure when the method of successive approximations is employed, and is the one used in the preceding calculations.
This is no serious matter if we are computing the solution of a single equation of the type shown above in Sec. 4, or even the solution of a low-dimensional system. But if we wish to treat a system of dimension 50 or 100 where some accuracy is desired, which means that each function may require the storage of 1000 values, then lack of sufficient rapid-access storage can easily block our proposed solution.
In the next section, we shall present a method which enables us to overcome this difficulty to some extent. In this connection, we encounter very interesting and difficult problems in the storage and retrieval of information, which have not been studied to any extent.
7. SIMULTANEOUS CALCULATION OF APPROXIMATIONS
Let us discuss the general problem of determining the numerical solution of the vector equation
(7.1) dx/dt = g(x)
subject to the linear boundary conditions
(7.2) (x(0), a_i) = b_i, i = 1, 2, ..., k,
      (x(b), a_i) = b_i, i = k + 1, ..., N.
We replace, in the usual way, (7.1) by the sequence of equations
(7.3) dx_{n+1}/dt = g(x_n) + J(x_n)(x_{n+1} - x_n), n = 0, 1, 2, ...,
where each x_n is subject to the boundary conditions of (7.2). Let x_0(t) be an initial approximation which requires no uncomfortable amount of storage, say a vector whose components are all constants, polynomials in t, or exponentials. Let x_1(t) be determined from (7.2) and (7.3) with n = 0.
Briefly, let us recall how x_1 is determined. If X_1 is the matrix solution of
(7.4) dX_1/dt = J(x_0)X_1, X_1(0) = I,
and p_1 is the vector solution of

(7.5) dp_1/dt = J(x_0)p_1 + g(x_0) - J(x_0)x_0, p_1(0) = 0,
then the solution of (7.3) has the form
(7.6) x_1 = X_1 c_1 + p_1,
where the vector c_1 is determined by the system of N linear equations
(7.7) (c_1, a_i) = b_i, i = 1, 2, ..., k,
      (X_1(b)c_1 + p_1(b), a_i) = b_i, i = k + 1, ..., N.
Assuming, as is the case if b is sufficiently small, that this system has a unique solution, c_1 is determined. This yields the full initial value
(7.8) x_1(0) = c_1.
Carrying out these operations requires the simultaneous solution of N^2 + N first-order differential equations and the solution of a set of N simultaneous algebraic equations.
Turning to the equation for x_2, we must solve the system
(7.9) dX_2/dt = J(x_1)X_2, X_2(0) = I,
      dp_2/dt = J(x_1)p_2 + g(x_1) - J(x_1)x_1, p_2(0) = 0,
where x_1 is the vector determined previously. Instead of storing the values of x_1(t), we adjoin to (7.9) the equations
(7.10) dx_1/dt = J(x_0)x_1 + g(x_0) - J(x_0)x_0, x_1(0) = c_1,
where c_1 is the initial value determined by (7.8). We must now solve (N^2 + N) + N simultaneous differential equations plus N algebraic equations in order to determine the constant vector c_2 in the representation x_2 = X_2 c_2 + p_2, the full initial value required to determine x_2.
Continuing in this way, we see that at the Rth step we solve simultaneously the differential equations
(7.11) dX_R/dt = J(x_{R-1})X_R, X_R(0) = I,
       dp_R/dt = J(x_{R-1})p_R + g(x_{R-1}) - J(x_{R-1})x_{R-1}, p_R(0) = 0,
       dx_i/dt = J(x_{i-1})x_i + g(x_{i-1}) - J(x_{i-1})x_{i-1}, x_i(0) = c_i, i = 1, 2, ..., R - 1.
This entails the simultaneous solution of N^2 + RN differential equations with initial conditions, but no storage of previous approximations. At each step, we must solve linear algebraic systems of order N.
8. DISCUSSION
Since the convergence is quadratic, we can expect that R will seldom exceed 5 or 10 at most. Hence, if N = 2, we must handle at
most 24 simultaneous equations; if N = 4, at most 56 equations; if N = 10, at most 200 equations. All of these are quite modest demands in terms of the capacities of current computers.
On the other hand, if N is quite large, say 100, then we modify our procedure in the following ways. To begin with, in place of solving the matrix equation
(8.1) dX_{n+1}/dt = J(x_n)X_{n+1}, X_{n+1}(0) = I,
at one stroke, an operation requiring the integration of N^2 simultaneous equations, together with the N equations for the determination of x_n, we determine the columns of X_{n+1} one after the other by solving the equations
(8.2) dy_j/dt = J(x_n)y_j, y_j(0) = e_j,
where e_j is the vector with a 1 in the jth component and zeros elsewhere,
(8.3) e_j = (0, ..., 0, 1, 0, ..., 0)'.
This requires the solution of N + RN simultaneous equations N times. As usual, we are engaging in a trade of time, which we possess, for storage, which we do not always possess. We shall discuss another technique for reducing the growth of the order of the simultaneous system of differential equations below in connection with differential approximation.
9. BACK AND FORTH INTEGRATION
Let us now briefly mention another approach to the solution of two-point boundary-value problems which avoids both storage and the solution of systems of linear algebraic equations.
To illustrate the idea in simplest terms, consider the scalar equation
(9.1) u'' = g(u,u',t), u(0) = u(b) = 0,
and suppose that we proceed in the following fashion. Guess a value for u'(0) and integrate (9.1) numerically with the initial values u(0) = 0, u'(0) = c_1. In this way we obtain values u(b), u'(b). If u(b) = 0, to a desired degree of accuracy, we stop. If not, we integrate backwards from t = b with the initial values u(b) = 0, u'(b) = d_1, where d_1 is the value of u'(b) obtained from the previous calculation. In this way, we obtain values of u(0), u'(0). If u(0) = 0, to a desired degree of accuracy, we stop. If not, we integrate forward with the initial values u(0) = 0, u'(0) = c_2, where c_2 is the value of u'(0) obtained from the previous calculation.
It can be shown that this method yields a sequence {c_n} which converges to the correct value of u'(0) in a number of cases. What is needed is a more sophisticated approach, based perhaps upon a summability method, which enlarges the domain of applicability.
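In code the back-and-forth sweep is only a few lines. The sketch below (Python) applies it to the illustrative linear problem u'' + u = 1, u(0) = u(1) = 0 — a case of our own choosing in which the slope iteration does converge — and checks the limiting slope against the closed-form solution u = 1 - cos t + c* sin t with c* = (cos 1 - 1)/sin 1.

```python
import numpy as np

def g(u, up, t):              # illustrative right-hand side: u'' = 1 - u
    return 1.0 - u

def integrate(t0, t1, u0, up0, steps=1000):
    """RK4 for u'' = g(u, u', t) from t0 to t1 (t1 < t0 integrates backwards)."""
    h = (t1 - t0) / steps
    t, y = t0, np.array([u0, up0])
    f = lambda t, y: np.array([y[1], g(y[0], y[1], t)])
    for _ in range(steps):
        k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
        k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    return y

b, c = 1.0, 0.0               # initial guess for u'(0)
for _ in range(25):
    ub, d = integrate(0.0, b, 0.0, c)    # forward sweep: record u'(b)
    u0, c = integrate(b, 0.0, 0.0, d)    # backward sweep: record u'(0)

assert abs(c - (np.cos(1.0) - 1.0) / np.sin(1.0)) < 1e-4
```

For this problem each round trip contracts the slope error by the factor cos^2(1) ≈ 0.29; for u'' = u - 1 the corresponding factor is cosh^2(b) > 1 and the same iteration diverges, which illustrates why the method works only "in a number of cases."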
10. DIFFERENTIAL·DIFFERENCE EQUATIONS
Physical processes involving time delays and retardation effects lead to differential-difference equations rather than the traditional differential equations. A typical equation is
(10.1) u'(t) = g(u(t), u(t - 1)), t ≥ 1,
with the function u(t) now specified in an initial interval
(10.2) u(t) = u_0(t), 0 ≤ t ≤ 1.
There is no difficulty in applying any of a number of standard numerical methods available for ordinary differential equations to the
computational solution of equations of this nature. In so doing, however, we must store the values of the function over an interval of length 1. Can we avoid this? Once again, there is no difficulty if we are dealing with systems of low order, but high-order systems can produce storage difficulties.
11. REDUCTION TO DIFFERENTIAL EQUATIONS
Let us begin by introducing the sequence of functions
(11.1) u_n(t) = u(t + n), 0 ≤ t ≤ 1,
for n = 1, 2, .... Then the single differential-difference equation of (10.1) may be written as a system of equations
(11.2) u_n'(t) = g(u_n(t), u_{n-1}(t))
for n = 1, 2, ..., with u_0(t) given. The initial conditions are
(11.3) u_n(0) = u_{n-1}(1).
There are two difficulties to overcome in the simultaneous computational solution of (11.2). The initial values are unknown, and the function u_0(t) must be stored.
Ignoring the storage of u_0(t), a point we shall return to below, let us indicate how the unknown values of u_n(0) may be determined. Consider the equation
(11.4) u_1'(t) = g(u_1(t), u_0(t)), u_1(0) = u_0(1).
This may be integrated numerically to yield the value u_1(1) = u_2(0). We now start all over again and integrate the system
(11.5) u_1'(t) = g(u_1(t), u_0(t)), u_1(0) = u_0(1),
       u_2'(t) = g(u_2(t), u_1(t)), u_2(0) = u_1(1),
which yields the value u_2(1) = u_3(0).
Continuing in this fashion, at the kth stage we solve the system consisting of the k equations
(11.6) u_i'(t) = g(u_i(t), u_{i-1}(t)), u_i(0) = b_i, i = 1, 2, ..., k,
where the b_i are known values. The determination of the solution over [0,N] thus requires that a total of 1 + 2 + ... + N = N(N + 1)/2 equations be integrated altogether. If N is not too large, this is a fairly routine operation requiring a minimum of storage and not too much time. As we shall see in the section devoted to differential approximation, we possess methods for reducing this effort.
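The repeated integration of (11.6) is easily sketched in code. The example below (Python) applies it to the illustrative equation u'(t) = -u(t - 1) with u = 1 on [0,1] — our own test case, not from the text — whose exact solution is piecewise polynomial, so the stage values can be checked exactly: u(2) = 0 and u(3) = -1/2.

```python
import numpy as np

def solve_dde(g, u0, N, steps=400):
    """Repeated integration of (11.6) for u'(t) = g(u(t), u(t-1)),
    with u(t) = u0(t) on [0,1]; returns [u(1), u(2), ..., u(N+1)]."""
    b = [u0(1.0)]                          # u_1(0) = u0(1)
    h = 1.0 / steps
    for k in range(1, N + 1):
        y = np.array(b[:k], dtype=float)   # stage k: k equations at once
        def f(t, y):
            prev = np.empty_like(y)
            prev[0] = u0(t)                # u_0(t) is known analytically
            prev[1:] = y[:-1]              # u_{i-1} rides along with u_i
            return g(y, prev)
        t = 0.0
        for _ in range(steps):             # classical RK4 over [0,1]
            k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
            k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
            y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
        b.append(y[-1])                    # u_k(1) = u_{k+1}(0)
    return b

# u'(t) = -u(t-1), u = 1 on [0,1]: exactly u(2) = 0 and u(3) = -1/2
vals = solve_dde(lambda u, prev: -prev, lambda t: 1.0, 2)
```

Only the list of initial values b_i is carried between stages; no function values on [0,1] are stored, which is exactly the point of the repetitive scheme.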
12. FUNCTIONAL DIFFERENTIAL EQUATIONS
Let us now apply the same general idea to the functional differential equation
(12.1) U'(t) = g(t,u(t),u(h(t))).
Equations of this type arise in the construction of realistic mathematical models in a number of fields, ranging from electromagnetic theory and control theory to respiratory physiology and neurophysiology.
We suppose that h(t) ≤ t for t ≥ 0, so that the future is determined by the past. Furthermore, assume that h'(t) > 0 for t ≥ 0 and let h^{-1}(t), the inverse function, be denoted by H(t). The function u(t) is taken to be known in some initial interval [0,t_1], where t_1 = H(0). Let the sequence {t_n} be defined recursively by the relation
(12.2) t_n = H(t_{n-1}), n = 2, 3, ....
Let H^(n)(t) denote the nth iterate of H(t), H^(n) = H(H^(n-1)), n = 2, 3, .... By virtue of our hypotheses, we observe that the function H(t) maps the interval [t_{n-1}, t_n] onto [t_n, t_{n+1}] in a one-to-one fashion, and H^(k)(t) maps [t_{n-1}, t_n] onto [t_{n-1+k}, t_{n+k}].
Consider the function
(12.3) u_n(s) = u(H^(n)(s)), n = 0, 1, 2, ...,
where s is restricted to the interval [0,t_1], and H^(0)(s) ≡ s. Thus the values of u_n(s), for 0 ≤ s ≤ t_1, are the values of u(t) for t_n ≤ t ≤ t_{n+1}. We have
(12.4) u_n'(s) = u'(H^(n)(s)) (d/ds) H^(n)(s),
and the derivative of H(n)(s) can easily be evaluated recursively by the formula
(12.5) (d/ds) H^(n)(s) = H'(H^(n-1)(s)) (d/ds) H^(n-1)(s).
Now we set t = H^(n)(s), where 0 ≤ s ≤ t_1. Then from the equation in (12.1) we get
(12.6) u'(H^(n)(s)) = g(u(H^(n)(s)), u(H^(n-1)(s)))
       = g(u_n(s), u_{n-1}(s)), n = 1, 2, ....
Referring to (12.4), we see that (12.6) may be written
(12.7) u_n'(s) = [(d/ds) H^(n)(s)] g(u_n(s), u_{n-1}(s)), n = 1, 2, ....
Thus (12.1) has been replaced by a system of ordinary differential equations where s now ranges over a fixed interval [0,t_1]. We now proceed to obtain the numerical solution, using the repetitive scheme of Sec. 11.
13. DIFFERENTIAL APPROXIMATION
If we want the solution over a large t-interval, the foregoing method can become unwieldy because of the large number of differential equations that require simultaneous solution. What we would like to do is start the method all over again at some time t = k, with the function u_k(t) serving as a new initial function. The rub is that we now have to store the functional values, something we tried to avoid at the outset.
One way to avoid this problem is to approximate to the function u_k(t) by a function of simple analytic structure, such as a polynomial, an exponential polynomial, or a polygonal function. Thus, for
example, if we approximate to u_k(t) by means of a polynomial

(13.1) u_k(t) ≅ Σ_{n=0}^{M} a_n t^n,
to reproduce the values of u_k(t) over an interval [0,T], we need only store the M + 1 coefficients (a_0, a_1, ..., a_M) plus a rule for forming the polynomial given the coefficients. A similar comment applies if we use the approximation
(13.2) u_k(t) ≅ Σ_{n=1}^{M} a_n e^{λ_n t}.
Both of these approximations are particular cases of approximating to a function u(t) by means of a solution v of the linear differential equation
(13.3) v^(N) + a_1 v^(N-1) + ... + a_N v = 0, v^(i)(0) = c_i, i = 0, 1, ..., N - 1.
One way of obtaining this approximation is to ask for the values of a_i and c_i which minimize the quadratic functional
(13.4) ∫_0^T (v - u)^2 dt.
This nonlinear optimization problem, a particular case of a general control and design problem, can be approached by means of quasilinearization. This will be discussed in Chapter Six, as a particular case of the general design and control problem. Here we will present a simpler approximation technique which often yields satisfactory results.
14. SIMPLE VERSION OF DIFFERENTIAL APPROXIMATION
Suppose that u(t) satisfies an ordinary differential equation of the form
(14.1) u^(m) = h(u, u', ..., u^(m-1), t), u^(i)(0) = c_i, i = 0, 1, ..., m - 1,
and we wish to find an approximating linear differential equation of the form
(14.2) v^(N) + a_1 v^(N-1) + ... + a_N v = 0.
Let us determine the coefficients a_i by the requirement that they minimize the quadratic expression
(14.3) ∫_0^T (u^(N) + a_1 u^(N-1) + ... + a_N u)^2 dt,
where u is the function determined by (14.1). The minimization of the expression in (14.3) leads to the system of
N simultaneous linear equations
(14.4) ∫_0^T u^(j) (u^(N) + a_1 u^(N-1) + ... + a_N u) dt = 0, j = 0, 1, 2, ..., N - 1.
For moderate values of N, that is, N ≤ 20, the computational solution provides no difficulty once we have evaluated the integrals appearing as coefficients. We could, if we so desired, integrate by parts and reduce the evaluation of these integrals to the evaluation of the integrals ∫_0^T (u^(j))^2 dt. For moderate size N, however, it is more convenient to proceed directly as follows. Introduce new variables u_ij, i, j = 0, 1, ..., N - 1, defined by the equations
(14.5) du_ij/dt = u^(N-i) u^(j), u_ij(0) = 0,
and solve these differential equations simultaneously with the original equation for u, namely,
(14.6) u^(N) = g(u, u', ..., u^(N-1), t).
The values u_ij(T) are the desired coefficients for (14.4). Having determined the coefficients a_i by means of the foregoing procedures, we now wish to determine the approximating function
v(t) as a solution of (14.2). A first approach is to use the initial values
(14.7) v^(i)(0) = u^(i)(0), i = 0, 1, 2, ..., N - 1,
and indeed this is what we do below with some success. In general, however, we would proceed in the following fashion.
Let v_1, v_2, ..., v_N be the N principal solutions of (14.2), the solutions determined by the condition that the matrix whose columns are (v_1(0), v_1'(0), ..., v_1^(N-1)(0)), etc., is the identity matrix.
Every solution can then be written in the form

(14.8) v(t) = Σ_{i=1}^{N} c_i v_i(t),
where the c_i are scalars. Let us choose these c_i so as to minimize the expression
(14.9) ∫_0^T (u - Σ_{i=1}^{N} c_i v_i)^2 dt.
The equations for the c_i are

(14.10) ∫_0^T u v_i dt - Σ_{j=1}^{N} c_j ∫_0^T v_i v_j dt = 0, i = 1, 2, ..., N.
To determine the various integrals, we introduce as above the variables w_i and w_ij by means of the relations

(14.11) dw_i/dt = u v_i, w_i(0) = 0,
        dw_ij/dt = v_i v_j, w_ij(0) = 0,
adjoin the equations for the v_i (Eq. (14.2) with appropriate boundary conditions) and the equation for u, and integrate.
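The normal equations (14.4) amount to a small linear least-squares problem. As a sketch (Python, with a manufactured test function of our own, not from the text), we sample u(t) = e^{-t} + e^{-2t}, which satisfies u'' + 3u' + 2u = 0 exactly, and recover a_1 = 3, a_2 = 2; a discrete inner product stands in for the integrals.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)
u  = np.exp(-t) + np.exp(-2*t)           # satisfies u'' + 3u' + 2u = 0
u1 = -np.exp(-t) - 2.0*np.exp(-2*t)      # u'
u2 = np.exp(-t) + 4.0*np.exp(-2*t)       # u''

ip = lambda f, g: float(f @ g)           # discrete stand-in for the integral

# normal equations (14.4): ip(u^{(j)}, u'' + a1 u' + a2 u) = 0, j = 0, 1
G = np.array([[ip(u1, u1), ip(u1, u)],
              [ip(u,  u1), ip(u,  u)]])
rhs = -np.array([ip(u1, u2), ip(u, u2)])
a1, a2 = np.linalg.solve(G, rhs)
# since the linear relation holds pointwise, the recovery is exact:
# a1 = 3, a2 = 2 up to roundoff
```

In practice u would not satisfy any linear equation exactly, and the computed a_i would be the best-fit coefficients in the sense of (14.3); the linear algebra is identical.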
15. A RENEWAL EQUATION
Let us now discuss the equation of renewal type
(15.1) u(t) = f(t) + ∫_0^t e^{-(t-s)^2} u(s) ds.
Taking 0 ≤ t ≤ 1, we obtain as a third-order differential approximation to the function e^{-t^2} a solution of the equation
(15.2) u^(3) + 2.740299 u^(2) + 7.9511452 u^(1) + 5.7636455 u = 0.

Using the initial values obtained from e^{-t^2}, namely
(15.3) u(0) = 1, u'(0) = 0, u''(0) = -2,
we found such excellent agreement between u(t), the solution of (15.2), and e^{-t^2} over 0 ≤ t ≤ 1, that there was no need to follow the procedure of Sec. 14.
Consider the expression
(15.4) w(t) = ∫_0^t k(t - s) u(s) ds.
Differentiating repeatedly, and adding with the coefficients obtained above, we have
(15.5) w^(3) + 2.740299 w^(2) + 7.9511452 w^(1) + 5.7636455 w
       = k(0)u''(t) + k'(0)u'(t) + k''(0)u(t)
       + 2.740299[k(0)u'(t) + k'(0)u(t)] + 7.9511452 k(0)u(t)
       + ∫_0^t u(s)[k'''(t - s) + 2.740299 k''(t - s) + 7.9511452 k'(t - s) + 5.7636455 k(t - s)] ds.
Taking k(t) = e^{-t^2} and assuming that the term under the integral sign is negligible, we obtain a third-order linear differential equation for w = u - f.
Let us take f = 1 - ∫_0^t e^{-s^2} ds, so that the equation

(15.6) u(t) = 1 - ∫_0^t e^{-s^2} ds + ∫_0^t e^{-(t-s)^2} u(s) ds
has the solution u(t) = 1. The function f(t) as given above satisfies the linear differential equation

(15.7) f''' + 2t f'' + 2f' = 0,

with f(0) = 1, f'(0) = -1, f''(0) = 0.
Solving (15.7) together with the approximate linear equation for u obtained from (15.5), we obtain the values for u(t) shown in Table 4.2.
Table 4.2
t      u(t)        u'(t)             u''(t)
0.1    0.999999    -                 -
0.2    0.999999    -0.146 x 10^-3    -0.148 x 10^-2
0.3    0.999969    -                 -
0.4    0.999937    -                 -
0.5    0.999909    -                 -
0.6    0.999898    -                 -
0.7    0.999909    0.299 x 10^-3     0.174 x 10^-3
0.8    0.999938    0.330 x 10^-3     0.167 x 10^-3
0.9    0.999970    0.272 x 10^-3     0.135 x 10^-2
1.0    0.999989    0.919 x 10^-4     0.189 x 10^-2
As we can see, the agreement with the desired solution, u(t) = 1, is excellent.
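The identity solution of (15.6) can also be confirmed by direct time-stepping of the Volterra equation, without any differential approximation. A sketch (Python, trapezoidal weights; grid size is our own choice):

```python
import numpy as np

n = 200
h = 1.0 / n
t = np.linspace(0.0, 1.0, n + 1)
k = lambda x: np.exp(-x**2)

# f(t) = 1 - ∫_0^t e^{-s^2} ds, computed by the trapezoidal rule
e = np.exp(-t**2)
f = 1.0 - np.concatenate(([0.0], np.cumsum((e[:-1] + e[1:]) * h / 2)))

# march u = f + ∫_0^t k(t-s) u(s) ds forward in t, solving for u_i
# at each step; the implicit trapezoid term uses k(0) = 1
u = np.empty(n + 1)
u[0] = f[0]
for i in range(1, n + 1):
    w = k(t[i] - t[:i]) * h       # weights for s = t_0, ..., t_{i-1}
    w[0] /= 2                      # halve the left trapezoid endpoint
    u[i] = (f[i] + w @ u[:i]) / (1.0 - h / 2)
```

By the symmetry of the kernel the discrete quadratures of f and of the integral term cancel exactly, so the computed u is 1 up to roundoff, mirroring the agreement in Table 4.2.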
16. DISCUSSION
Very much the same technique can be used when u(t) satisfies a differential-difference equation. Alternatively, we can take a set of t_i values and ask that the expression
(16.1) Σ_{i=1}^{M} (u(t_i) - v(t_i))^2
be minimized over the choice of coefficients and initial values. We will carry this procedure out in detail in dealing with some identification problems in Chapter Six.
Let us also point out that a combination of differential approximation and quasilinearization can be used to reduce to manageable terms an integro-differential equation such as
(16.2) u'' + g(u) + ∫_0^1 k(x - x_1) h(u) dx_1 + r(x) = 0, u(0) = u(1) = 0.
Assume, as is often the case, that k(x) = k(-x), and suppose, for the sake of simplicity, that the solution of the linear equations of differential approximation yields an approximation of the form (no multiple roots)
(16.3) k(x) ≅ Σ_{k=1}^{M} α_k e^{λ_k x}, 0 ≤ x ≤ 1.

Let

(16.5) v_k = e^{λ_k x} ∫_0^x e^{-λ_k x_1} h(u) dx_1, w_k = e^{-λ_k x} ∫_x^1 e^{λ_k x_1} h(u) dx_1.

Then

(16.6) v_k' = λ_k v_k + h(u), v_k(0) = 0,
       w_k' = -λ_k w_k - h(u), w_k(1) = 0,

and

(16.7) u'' + g(u) + Σ_{k=1}^{M} α_k (v_k + w_k) + r(x) = 0, u(0) = u(1) = 0.
We can now apply the standard quasilinearization technique to the solution of this two-point boundary-value problem.
The reader may, at this point, ask why it is that we did not start with an approximation of the type appearing in (16.3), rather than the more complex approach of (13.3). The answer is that there are well-known difficulties associated with determining the coefficients and exponents in (16.3). More precisely, simple examples show that the λ_k are unstable functionals of k(x). Essentially, this is due to the strain on the approximation caused by multiple roots of the characteristic equation of (13.3),
(16.8) λ^N + a_1 λ^{N-1} + ... + a_N = 0.
On the other hand, the a_i, the elementary symmetric functions, are stable functionals. Consequently, it is better to attempt to determine the a_i first, and then the solution of the differential equation.
17. STORAGE AND MEMORY
The preceding discussion indicates quite clearly that we must be very precise when we speak of storage requirements for the computational solution of various types of problems. What is always meant, but, unfortunately, often not explicitly stated, is that a particular method requires a certain quantity of storage. A more ingenious method may greatly reduce the requirements.
This is pertinent to the study of the riddle of the human memory, or memories. When various people remark that the human mind would require so much energy or so many cells of certain types to perform various feats, all they mean is that these requirements are apparently necessary in the light of the few such procedures we understand or think we understand at the moment. It is quite possible, and indeed plausible, that in the trial and error process of evolution, very much more ingenious techniques for storage and retrieval of information, for pattern recognition, and for decision making have been developed by living organisms. The discovery of these techniques is an outstanding challenge, and it is certainly rather presumptuous to suppose that a few years' effort by a few people will resolve some of the conundrums of the human mind.
18. MONOTONICITY FOR SYSTEMS
Let us now examine the possibility of obtaining some results corresponding to those in Chapter Three for systems of differential inequalities of the form
(18.1) dx/dt ≥ A(t)x, x(0) = c,
96 QUASILINEARIZATION AND NONLINEAR BOUNDARY-VALUE PROBLEMS
where x is an N-dimensional vector. Here x ≥ y for two vectors is shorthand notation for x_i ≥ y_i, i = 1, 2, ..., N. We wish to impose simple conditions on A(t) which will ensure that x ≥ y for t ≥ 0, where y is determined by the equation
(18.2) dy/dt = A(t)y, y(0) = c.
If A(t) is a constant matrix A, we can easily show that a necessary and sufficient condition for this to be true is that

(18.3) a_ij ≥ 0, i ≠ j.

The expansion

(18.4) x(t) = e^{At}c = (I + At + A²t²/2! + ⋯)c

establishes the necessity, as we see upon examining the right-hand side in the immediate vicinity of t = 0.
Let us establish the sufficiency in the more general case of variable coefficients by means of the following simple argument. Make the change of variable
(18.5) y_i = exp(∫_0^t a_ii(s) ds) z_i, i = 1, 2, ..., N,
where the y_i are the components of y and the z_i are the components of a new vector z. Then (18.2) takes the form
(18.6) dz/dt = B(t)z, z(0) = c,

where

(18.7) B(t) = (b_ij(t)),

with

(18.8) b_ii(t) = 0,
       b_ij(t) = a_ij(t) exp(∫_0^t [a_jj(s) - a_ii(s)] ds), i ≠ j.
If a_ij(t) ≥ 0 for i ≠ j, it follows that all components b_ij(t) are nonnegative, and thus that z(t) ≥ z(0) for t ≥ 0. Hence, if c ≥ 0, y(t) ≥ 0 for t ≥ 0.
Since (18.1) may be written

(18.9) dx/dt = A(t)x + f(t), x(0) = c,

with f(t) ≥ 0, we see that z = x - y satisfies the equation

(18.10) dz/dt = A(t)z + f(t), z(0) = 0.
The same argument as above in (18.5)-(18.8) shows that z ≥ 0 for t ≥ 0.
A result of this type is extremely useful in the study of the system

(18.11) dx/dt = max_q [A(q,t)x + b(q)],

an equation which is basic in the study of Markovian decision processes, an important class of dynamic programming processes of stochastic type.
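The sufficiency mechanism is easy to check numerically. The following sketch (a hypothetical 2 × 2 example of ours, not one from the text) integrates (18.9) and (18.2) from the same initial vector by Euler's method, with nonnegative off-diagonal elements a_ij and nonnegative forcing f, and verifies the componentwise ordering x ≥ y.

```python
# Euler integration of dx/dt = Ax + f and dy/dt = Ay, both from y(0) = x(0) = c,
# with nonnegative off-diagonal a_ij and f >= 0; the text asserts x(t) >= y(t).

A = [[-2.0, 1.0], [0.5, -3.0]]        # hypothetical matrix: a_ij >= 0 for i != j
f = [0.1, 0.0]                        # nonnegative forcing
c = [1.0, 0.5]

def step(v, forcing, h):
    return [v[i] + h * (sum(A[i][j] * v[j] for j in range(2)) + forcing[i])
            for i in range(2)]

x, y, h = list(c), list(c), 1e-3
for _ in range(5000):                 # integrate to t = 5
    x = step(x, f, h)
    y = step(y, [0.0, 0.0], h)

assert all(x[i] >= y[i] for i in range(2))
```

The discrete analogue of the argument is visible here: for small h the Euler step matrix I + hA has nonnegative entries precisely when (18.3) holds, so the difference z = x - y can never leave the nonnegative orthant.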
19. MONOTONICITY FOR Nth-ORDER LINEAR DIFFERENTIAL EQUATIONS
Consider the linear differential expression
(19.1) L(u) = u^(N) + p_1(t)u^(N-1) + p_2(t)u^(N-2) + ... + p_N(t)u.
Introduce the Wronskian of order n,

(19.2) W_n(t) = | f_1(t)         f_2(t)         ...  f_n(t)         |
                | f_1'(t)        f_2'(t)        ...  f_n'(t)        |
                |   .              .                   .            |
                | f_1^(n-1)(t)   f_2^(n-1)(t)   ...  f_n^(n-1)(t)   |,

the determinant of the matrix whose columns are f_1, f_2, ..., f_n and their derivatives through order n - 1.
Then the analogue of the decomposition for the second-order linear differential expression, used in Chapter Three, Sec. 4, is valid, namely

(19.3) L(u) = (W_N/W_{N-1}) d/dt { (W_{N-1}²/(W_N W_{N-2})) d/dt ( ... d/dt [ (W_1²/(W_0 W_2)) d/dt (u/W_1) ] ... ) }, W_0 = 1.
Here u_1, u_2, ..., u_N constitute a set of linearly independent solutions of L(u) = 0, and W_k denotes the Wronskian of u_1, u_2, ..., u_k.
Let us now introduce what Polya called "property W." A set of solutions u_1, u_2, ..., u_{N-1} possesses property W in an open interval (0,b) if

(19.4) W_1 > 0, W_2 > 0, ..., W_{N-1} > 0

in this open interval. It is clear, using the decomposition of (19.3), that if property W
holds, then
(19.5) L(u) ≥ 0, L(v) = 0,

with u^(i)(0) = v^(i)(0), i = 0, 1, ..., N - 1, implies u ≥ v in (0,b). This is an extensive generalization of the Caplygin result established in Chapter Three, Sec. 5.
20. DISCUSSION
We have given a few results which enable us to determine when a sequence of approximations converges monotonically. These results are not as elegant or as usable as the corresponding results for the second-order equation, as is to be expected.
In the higher-order case, we are primarily concerned with quadratic convergence and accuracy. In the next section, we discuss a method which has proved useful in providing a numerical solution of a linear differential equation of sufficient accuracy. The importance of this as far as the effective application of quasilinearization is concerned cannot be overemphasized.
21. USE OF THE GRAM-SCHMIDT ORTHOGONALIZATION PROCEDURE IN THE NUMERICAL SOLUTION OF LINEAR BOUNDARY-VALUE PROBLEMS
There are many pitfalls in the numerical solution of systems of linear algebraic equations. No method which involves the solution
of such a system can be considered quite satisfactory. In an earlier section we showed how we can use a back-and-forth integration procedure to avoid the numerical solution of linear algebraic systems during the solution of a linear two-point boundary-value problem. Alternatively, we may choose the complementary functions (a name we shall employ for the solutions of the homogeneous differential equation) in such a manner as to make the solution of the linear equations for the multipliers as accurate as possible. In this section we shall discuss one possibility in which the Gram-Schmidt orthogonalization procedure is employed.
Consider the fourth-order differential equation
(21.1) u^(4) - 24u^(3) - 169u^(2) - 324u' - 180u = 0.
Its general solution is of the form

(21.2) u = c_1 e^{-t} + c_2 e^{-2t} + c_3 e^{-3t} + c_4 e^{30t}.

Let us focus our attention on the particular solution

(21.3) u(t) = e^{-t} - 2e^{-2t} + e^{-3t},
which satisfies the initial conditions
(21.4) u(0) = 0,
u'(0) = 0,
u^(2)(0) = 2,
u^(3)(0) = -12.0.
In addition it satisfies the conditions, at t = 1,
(21.5) u(1) = e^{-1} - 2e^{-2} + e^{-3} = 0.146996,
u'(1) = -e^{-1} + 4e^{-2} - 3e^{-3} = 0.241005 × 10^{-1}.

Let us attempt to determine the function u(t) given in (21.3) as the solution of (21.1) subject to the boundary conditions u(0) = 0, u'(0) = 0, u(1) = 0.146996, u'(1) = 0.0241005.
If we employ the method of complementary functions in a straightforward fashion using a Runge-Kutta integration procedure with an
integration interval of 0.01, we obtain disappointing results. Let w(t) be the solution of (21.1) subject to the initial conditions

(21.6) w(0) = 0, w'(0) = 0, w^(2)(0) = 1, w^(3)(0) = 0,

and let z(t) be the solution for which

(21.7) z(0) = 0, z'(0) = 0, z^(2)(0) = 0, z^(3)(0) = 1.
Numerical integration yields the values

(21.8) w(1) = 0.195694 × 10^{10},
w'(1) = 0.587083 × 10^{11},
z(1) = 0.326157 × 10^{9},
z'(1) = 0.978472 × 10^{10}.
Using this information to solve the linear algebraic equations

(21.9) w(1)c_1 + z(1)c_2 = u(1),
w'(1)c_1 + z'(1)c_2 = u'(1),

we find

(21.10) c_1 = -0.609136 × 10^{-2},
c_2 = 0.365481 × 10^{-1},

since c_1 = u^(2)(0), c_2 = u^(3)(0). In view of the conditions u^(2)(0) = 2, u^(3)(0) = -12.0, of (21.4), we see that we have not succeeded in our task of finding the initial values of the function u(t).
The difficulty lies in the fact that the characteristic values associated with the differential operator in (21.1) (-1, -2, -3, 30) differ greatly in their real parts. This means that one of the complementary functions will dominate the others and make it difficult to determine what linear combination of complementary functions satisfies the given boundary conditions. We are encountering the phenomenon of "ill-conditioning."
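The growth and the cancellation can be reproduced in a short computation. The sketch below (our own, assuming only a standard fourth-order Runge-Kutta routine) integrates the two complementary functions of (21.6)-(21.7) in double precision: both are dominated at t = 1 by the e^{30t} mode, and the determinant of (21.9) survives only as a tiny residue of two nearly equal products of order 10^{19}.

```python
# RK4 integration of (21.1) for the complementary functions w and z;
# the determinant of (21.9) illustrates the severe cancellation.

def deriv(s):
    u, u1, u2, u3 = s
    return [u1, u2, u3, 24.0*u3 + 169.0*u2 + 324.0*u1 + 180.0*u]

def rk4(s, h, steps):
    for _ in range(steps):
        k1 = deriv(s)
        k2 = deriv([s[i] + 0.5*h*k1[i] for i in range(4)])
        k3 = deriv([s[i] + 0.5*h*k2[i] for i in range(4)])
        k4 = deriv([s[i] + h*k3[i] for i in range(4)])
        s = [s[i] + h*(k1[i] + 2.0*k2[i] + 2.0*k3[i] + k4[i])/6.0
             for i in range(4)]
    return s

w = rk4([0.0, 0.0, 1.0, 0.0], 0.01, 100)   # initial conditions (21.6)
z = rk4([0.0, 0.0, 0.0, 1.0], 0.01, 100)   # initial conditions (21.7)

# w(1), z'(1) are of order 1e9-1e10 although the desired u(1) is about 0.147
product = w[0] * z[1]
determinant = w[0] * z[1] - z[0] * w[1]
assert w[0] > 1.0e8 and z[1] > 1.0e9
assert abs(determinant) < 1.0e-6 * abs(product)   # many digits cancel
```

In double precision enough digits survive the cancellation that the solve may still succeed; with the roughly eight-decimal-digit word presumably behind the figures in (21.8) and (21.10), none do.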
To remedy the situation we shall pay more attention to which complementary functions we calculate. Consider the two vectors α and β,

(21.11) α = (w, w', w^(2), w^(3))^T, β = (z, z', z^(2), z^(3))^T,

where the functions w, z, and their derivatives are as above; that is, they are solutions of the homogeneous equation (21.1) satisfying the initial conditions in (21.6) and (21.7). Using the Runge-Kutta integration procedure five times with a grid-size of 0.01, we produce the values of the vectors α(0.05) and β(0.05). Next we carry out the Gram-Schmidt orthogonalization procedure on these vectors, leading to the new orthonormal vectors γ(0.05) and δ(0.05),
(21.12) γ(0.05) = α(0.05)/|α(0.05)|,

and

(21.13) δ(0.05) = [β(0.05) - (β(0.05),γ(0.05))γ(0.05)] / |β(0.05) - (β(0.05),γ(0.05))γ(0.05)|,
where (α,β) and |α| represent the usual inner product and length. By carrying out the same linear transformation on the initial vectors (0, 0, 1, 0)^T and (0, 0, 0, 1)^T, we find the new initial vectors which lead to the orthogonal and normalized vectors γ(0.05) and δ(0.05) via integration of (21.1). These initial conditions are

(21.14) (0, 0, 1, 0)^T / |α(0.05)|
and

(21.15) [(0, 0, 0, 1)^T - (β(0.05),γ(0.05)) (0, 0, 1, 0)^T/|α(0.05)|] / |β(0.05) - (β(0.05),γ(0.05))γ(0.05)|.
Next we integrate (21.1) from t = 0.05 to t = 0.10, using as conditions at t = 0.05 the vectors γ(0.05) and δ(0.05). The vectors so obtained are orthogonalized, and so are the corresponding conditions at t = 0. Continuing in this way we eventually find two initial vectors, each with vanishing first two components, which are initial conditions for (21.1) leading to two orthonormal vectors at time t = 1. In this way the two-space at t = 1, spanned by solutions of (21.1) for which u(0) = u'(0) = 0, is accurately determined. In some instances this will help us to determine the missing initial values u^(2)(0) and u^(3)(0) with precision.
Using this method on the problem (21.1), with the boundary conditions as before, u(0) = 0, u'(0) = 0, u(1) = 0.146996, u'(1) = 0.0241005, we found

(21.16) u^(2)(0) = 1.9999997,
u^(3)(0) = -12.000003.

For the equation

(21.17) u^(4) - 9u^(3) - 79u^(2) - 159u' - 90u = 0,

for which the general solution is

(21.18) u = c_1 e^{-t} + c_2 e^{-2t} + c_3 e^{-3t} + c_4 e^{15t},

use of the simple method of complementary functions gave the results u^(2)(0) = 2.0013081, u^(3)(0) = -12.007849. The boundary conditions
are as above, and the solution is that of (21.3). When the Gram-Schmidt procedure was used twenty times on the interval from t = 0 to t = 1, we obtained u^(2)(0) = 1.9999995, u^(3)(0) = -12.000003. Again the exact results are given by u^(2)(0) = 2, u^(3)(0) = -12.
22. DYNAMIC PROGRAMMING
Let us now indicate how dynamic programming may be used to avoid two-point boundary-value problems, and the resultant solution of linear algebraic systems, in treating minimization problems involving quadratic functionals. We have already discussed the scalar version of this in a previous chapter. Let us here consider the vector version.
To keep the algebra as simple as possible, let us suppose that we want to minimize the functional
(22.1) J(x) = ∫_0^T [(x',x') + (x,Ax)] dt,

subject to the condition x(0) = c. Let A be positive definite so that no characteristic value problems arise.
The Euler equation is
(22.2) x'' - Ax = 0, x(0) = c, x'(T) = 0,

a two-point boundary-value problem. Let us proceed in a quite different fashion. Write

(22.3) f(c,T) = min_x J(x).
Then the usual dynamic programming procedure yields the nonlinear partial differential equation
(22.4) ∂f/∂T = min_v [(v,v) + (c,Ac) + (v, grad f)],

where grad f is the vector whose components are ∂f/∂c_1, ∂f/∂c_2, ..., ∂f/∂c_N, and where c_1, c_2, ..., c_N are the components of c.
104 QUASILINEARIZATION AND NONLINEAR BOUNDARY-VALUE PROBLEMS
It is easy to see that the minimizing value of v is given by

(22.5) v = -(grad f)/2.

This is the missing initial condition in (22.2). To make effective use of (22.5), we observe that f(c,T) is a quadratic function of c. Indeed,
(22.6) f(c,T) = (c,R(T)c).
Using this, we see that
(22.7) v = -R(T)c,
where, using (22.4), R(T) satisfies the Riccati differential equation

(22.8) dR/dT = A - R², R(0) = 0.

This is an initial value problem which can be solved computationally in a routine fashion. There are no difficulties with R(T) increasing in magnitude, since we know that R(T) approaches a constant as T → ∞.
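In the scalar case N = 1 with A = a > 0, (22.8) reads dR/dT = a - R², R(0) = 0, whose explicit solution R(T) = √a tanh(√a T) indeed approaches the constant √a. A few lines of Euler integration confirm both the transient and the limit.

```python
import math

a = 4.0                        # scalar positive-definite "matrix" A
R, h = 0.0, 1.0e-4
for _ in range(10000):         # integrate (22.8) to T = 1 by Euler's method
    R += h * (a - R * R)
assert abs(R - math.sqrt(a) * math.tanh(math.sqrt(a) * 1.0)) < 1e-2

for _ in range(90000):         # continue to T = 10
    R += h * (a - R * R)
assert abs(R - math.sqrt(a)) < 1e-6    # R(T) settles at the constant sqrt(a)
```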
A similar result is obtained, with a bit more arithmetic, for the case where we wish to minimize
(22.9) J(y) = ∫_0^T [(x,Ax) + (y,By)] dt

over all y, where

(22.10) x' = Cx + Dy, x(0) = c.
23. INVARIANT IMBEDDING
In many cases we encounter systems of linear differential equations of the form
(23.1) x' = Ax + By,
y' = Cx + Dy,
x(0) = c,
y(a) = d.
If these arise from a variational problem of the type appearing in
(22.9) and (22.10), we know how to avoid the two-point boundary-value problem. If not, we still possess powerful techniques for finding the missing initial value, y(0), without solving linear algebraic equations. These techniques are furnished by the theory of invariant imbedding. They show that if we write
(23.2) y(O) = R(a)c + T(a)d,
then Rand T, as functions of a, also satisfy Riccati-type differential equations with initial values.
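The affine structure of (23.2) is easy to confirm on a hypothetical scalar instance of (23.1) (coefficients chosen by us purely for illustration): computing the missing value y(0) by superposition of two basis solutions exhibits coefficients R(a) and T(a) independent of c and d.

```python
# Hypothetical instance of (23.1): x' = -x + 2y, y' = x - y,
# x(0) = c, y(a) = d with a = 1.  The missing value y(0) is obtained by
# superposition, and the affine representation (23.2) is verified.

def deriv(s):
    x, y = s
    return [-x + 2.0 * y, x - y]

def propagate(s, h=1e-3, steps=1000):   # RK4 from t = 0 to t = 1
    for _ in range(steps):
        k1 = deriv(s)
        k2 = deriv([s[i] + 0.5*h*k1[i] for i in range(2)])
        k3 = deriv([s[i] + 0.5*h*k2[i] for i in range(2)])
        k4 = deriv([s[i] + h*k3[i] for i in range(2)])
        s = [s[i] + h*(k1[i] + 2.0*k2[i] + 2.0*k3[i] + k4[i])/6.0
             for i in range(2)]
    return s

s1 = propagate([1.0, 0.0])              # solution with x(0) = 1, y(0) = 0
s2 = propagate([0.0, 1.0])              # solution with x(0) = 0, y(0) = 1

def y0(c, d):                           # missing initial value from the data
    return (d - c * s1[1]) / s2[1]

R = -s1[1] / s2[1]                      # R(1)
T = 1.0 / s2[1]                         # T(1)
for c, d in [(1.0, 0.0), (0.0, 1.0), (2.0, -3.0)]:
    assert abs(y0(c, d) - (R * c + T * d)) < 1e-12
```

That R(a) and T(a) themselves satisfy initial-value Riccati-type equations in a is the additional content of invariant imbedding, developed in the references below.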
Chapter Four
COMMENTS AND BIBLIOGRAPHY
§1. Not much has been done in the area of multipoint problems. For some recent results and further references see
T. S. VASHAKMADZE, "Multiple-point linear Boundary Problems," AN Gruz SSR Soobshcheniya, Vol. 35, 1964, pp. 29-36.
§2-§4. We follow
R. KALABA, "On Nonlinear Differential Equations, the Maximum Operation, and Monotone Convergence," J. Math. Mech., Vol. 8, 1959, pp. 519-574.
§5. See
R. BELLMAN, H. KAGIWADA, and R. KALABA, "Orbit Determination as a Multipoint Boundary-value Problem and Quasilinearization," Proc. Nat. Acad. Sci. USA, Vol. 48, 1962, pp. 1327-1329.
§7. These results were given in
R. BELLMAN, "Successive Approximations and Computer Storage Problems in Ordinary Differential Equations," Comm. ACM, Vol. 4, 1961, pp. 222-223.
R. BELLMAN, R. KALABA, and B. KOTKIN, Some Numerical Results Using Quasilinearization for Nonlinear Two-point Boundary-value Problems, The RAND Corporation, RM-3113-PR, April 1962.
§9. See
R. BELLMAN, "On the Iterative Solution of Two-point Boundary-value Problems," Boll. d'Unione Mate., Vol. 16, 1961, pp. 145-149.
R. BELLMAN and T. A. BROWN, "On the Computational Solution of Two-point Boundary-value Problems," Boll. d'Unione Mate., Vol. 19, 1964, pp. 121-123.
§1O. See
R. BELLMAN, "On the Computational Solution of Differential-difference Equations," J. Math. Anal. Appl., Vol. 2, 1961, pp. 108-110.
R. BELLMAN and B. KOTKIN, "On the Numerical Solution of a Differentialdifference Equation Arising in Analytic Number Theory," Math. Comput., Vol. 16, 1962, pp. 473-475.
For a discussion of how differential-difference equations enter into chemotherapy, see
R. BELLMAN, "From Chemotherapy to Computers to Trajectories," Mathematical Problems in the Biological Sciences, American Mathematical Society, Providence, Rhode Island, 1962, pp. 225-232,
where further references may be found. For a systematic discussion of differentialdifference equations, see
R. BELLMAN and K. L. COOKE, Differential-difference Equations, Academic Press Inc., New York, 1963.
§12. We follow
R. BELLMAN and K. L. COOKE, "On the Computational Solution of a Class of Functional Differential Equations," J. Math. Anal. Appl., to appear.
R. BELLMAN, J. BUELL, and R. KALABA, Numerical Integration of a Differential-difference Equation with a Decreasing Time-lag, The RAND Corporation, RM-4375-NIH, December 1964.
R. BELLMAN, J. BUELL, and R. KALABA, Mathematical Experimentation in Timelag Modulation, The RAND Corporation, RM-4432-NIH, February 1965.
§13. For further discussion of differential approximation, see
R. BELLMAN, R. KALABA, and B. KOTKIN, "Differential Approximation Applied to the Solution of Convolution Equations," Math. Comput., Vol. 18, 1964, pp. 487-491.
R. BELLMAN, B. GLUSS, and R. ROTH, "On the Identification of Systems and the Unscrambling of Data: Some Problems Suggested by Neurophysiology," Proc. Nat. Acad. Sci. USA, Vol. 52, 1964, pp. 1239-1240.
---, "Segmental Differential Approximation and the 'Black Box' Problem," J. Math. Anal. Appl., to appear.
A. A. KARDASHOV, "An Analysis of the Quality of an Automatic Control System by the Method of Lowering the Order of the Differential Equation," Avtomat. i Telemeh., Vol. 28, 1962, pp. 1073-1083.
Further applications of differential approximation may be found in the book
R. BELLMAN, Perturbation Techniques in Mathematics, Physics, and Engineering, Holt, Rinehart and Winston, Inc., New York, 1964,
and in the papers
R. BELLMAN and J. M. RICHARDSON, "On Some Questions Arising in the Approximate Solution of Nonlinear Differential Equations," Quart. Appl. Math., Vol. 20, 1963, pp. 333-339.
---, "Renormalization Techniques and Mean-square Averaging-I: Deterministic Equations," Proc. Nat. Acad. Sci. USA, Vol. 47, 1961, pp. 1191-1194.
---, "Perturbation Techniques," Symposium on Nonlinear Oscillations, Kiev, USSR, 1961.
J. M. RICHARDSON and L. C. LEVITT, "Systematic Linear Closure Approximation for Classical Equilibrium Statistical Mechanics-I: General Method," Bull. Amer. Phys. Soc., Series II, Vol. 9, 1964, p. 255.
L. C. LEVITT, J. M. RICHARDSON, and E. R. COHEN, "Systematic Linear Closure Approximation for Classical Equilibrium Statistical Mechanics-II: Application to the Calculation of Corrections to the Debye-Huckel Theory of an Electron Gas," Bull. Amer. Phys. Soc., Series II, Vol. 9, 1964, p. 277.
Some further papers of interest are
L. KH. LIBERMAN, "Problems in the Theory of Approximate Solutions of Differential Operator Equations in Hilbert Space," IVUZ Matematika, No. 3, 1964, pp. 88-92.
E. D. ZAIDENBERG, "A Third Method for the Statistical Linearization of a Class of Nonlinear Differential Equations," Avtomat. i Telemeh., Vol. 25, 1964, pp. 195-200.
J. W. CULVER and M. D. MESAROVIC, "Dynamic Statistical Linearization," AIEE Joint Automatic Control Conference, New York, June 27-29, 1962.
For a general discussion of the reproduction of functional values, see the expository article
V. M. TIKHOMIROV, "Kolmogorov's Work on ε-entropy of Functional Classes and the Superposition of Functions," Russian Math. Surveys, Vol. 18, 1963, pp. 51-88,
where many further references may be found.
§15. This follows
R. BELLMAN, R. KALABA, and B. KOTKIN, "Differential Approximation Applied to the Solution of Convolution Equations," Math. Comput., Vol. 18, 1964, pp. 487-491.
R. BELLMAN and B. KOTKIN, A Numerical Approach to the Convolution Equations of a Mathematical Model of Chemotherapy, The RAND Corporation, RM-3716-NIH, July 1963.
§16. For examples of the dangers of exponential approximation, see
C. LANCZOS, Applied Analysis, Prentice-Hall Inc., Englewood Cliffs, New Jersey, 1956.
§18. This result is contained in
R. BELLMAN, I. GLICKSBERG, and O. GROSS, "On Some Variational Problems Occurring in the Theory of Dynamic Programming," Rend. Circ. Mat. Palermo, Ser. II, 1954, pp. 1-35.
It is interesting to point out that the study of the equation of (18.11), fundamental in the theory of Markovian decision processes of continuous type, focussed attention on this type of functional equation. From this, the results of
R. BELLMAN, "Functional Equations in the Theory of Dynamic Programming-V: Positivity and Quasi-linearity," Proc. Nat. Acad. Sci. USA, Vol. 41, 1955, pp. 743-746,
developed, and these, in turn, led to
R. KALABA, "On Nonlinear Differential Equations, the Maximum Operation, and Monotone Convergence," J. Math. Mech., Vol. 8, 1959, pp. 519-574.
Finally, let us point out that the study of equations such as (18.10) led to a natural extension of semigroup theory, embracing the calculus of variations; see the remarks in
R. BELLMAN, Dynamic Programming, Princeton University Press, Princeton, New Jersey, 1957, p. 115.
§19. See
G. POLYA, "On the Mean-value Theorem Corresponding to a Given Linear Homogeneous Differential Equation," Trans. Amer. Math. Soc., Vol. 24, 1922, pp. 312-324.
L. SCHLESINGER, Handbuch der Theorie der Linearen Differentialgleichungen, Vol. 1, 1895, p. 52, Formula 14.
For extensions to partial differential equations, based upon the work of Polya and others, see
P. HARTMAN and A. WINTNER, "Mean Value Theorems and Linear Operators," Amer. Math. Monthly, Vol. 62, 1955, pp. 217-221.
Further references will be found in
E. F. BECKENBACH and R. BELLMAN, Inequalities, Springer, Berlin, 1961.
§22. For further discussion of these techniques, see
R. BELLMAN, Introduction to Matrix Analysis, McGraw-Hill Book Company, New York, 1960, Chapter 9,
where many other references may be found. They have been extensively used in modern control theory by the authors and Adorno, Beckwith, Freimer, Kalman, Koepcke, and others.
§23. For a discussion and application of invariant imbedding, see
R. BELLMAN, R. KALABA, and M. PRESTRUD, Invariant Imbedding and Radiative Transfer in Slabs of Finite Thickness, American Elsevier Publishing Company, Inc., New York, 1963.
R. BELLMAN, H. KAGIWADA, R. KALABA, and M. PRESTRUD, Invariant Imbedding and Time-dependent Transport Processes, American Elsevier Publishing Company, Inc., New York, 1964.
R. BELLMAN, R. KALABA, and G. M. WING, "Invariant Imbedding and Mathematical Physics-I: Particle Processes," J. Math. Phys., Vol. 1, 1960, pp. 280-308.
R. KALABA, "Invariant Imbedding and the Analysis of Processes," Chapter 10 in Views on General Systems Theory, M. Mesarovic (ed.), John Wiley & Sons, Inc., New York, 1964.
G. M. WING, An Introduction to Transport Theory, John Wiley & Sons, Inc., New York, 1962.
For an application of dynamic programming to the numerical solution of ill-conditioned systems, see
R. BELLMAN, R. KALABA, and J. LOCKETT, "Dynamic Programming and Illconditioned Linear Systems," J. Math. Anal. Appl., Vol. 10, 1965, pp. 206-215.
---, "Dynamic Programming and Ill-conditioned Linear Systems-II," J. Math. Anal. Appl., to appear.
These methods are useful in connection with the numerical inversion of the Laplace transform.
Chapter Five
PARTIAL DIFFERENTIAL EQUATIONS
1. INTRODUCTION
Let us now turn our attention to partial differential equations. As in the previous chapters, we shall first consider some of the computational aspects and then briefly discuss questions of monotonicity of approximation and positivity. This chapter is on a different level from any of the preceding or subsequent chapters, since it presumes a knowledge of the highly developed and complex art of solving partial differential equations with a digital computer. The reader unversed in this cabalistic computing is advised to skip this chapter.
Our first numerical example will be a nonlinear parabolic partial differential equation and our second a nonlinear elliptic partial differential equation.
The computational situation here is far more complex than in the case of ordinary differential equations. In obtaining the numerical solution of ordinary differential equations, subject to initial value conditions, we can use methods of such guaranteed accuracy that we can be quite confident that the results reflect the efficiency of the technique employed. It is therefore meaningful to compare different methods. In solving partial differential equations, the problem is that of trying to determine the contribution of the actual numerical procedure. As a consequence of the effects of truncation errors, round-off errors, of the word-length and actual computing procedures of the computer that is used, of the use of various crude bounds on higher-order derivatives in the choice of techniques, and of the labor involved in carrying out higher-order difference techniques, it is not easy to obtain valid a priori estimates of the advantages of higher-order methods as opposed to lower-order methods, nor to compare
different procedures used by different people. It follows that it is essential to do a certain amount of experimentation.
2. PARABOLIC EQUATIONS
We shall begin with a discussion of parabolic partial differential equations, since we have already discussed them to some extent in connection with the positivity of linear differential operators of the second order.
Given the problem of obtaining a numerical solution of a partial differential equation of the form

(2.1) u_t = u_xx + g(u,u_x),
with initial values of the form
(2.2) u(x,0) = h(x), 0 < x < 1,
and boundary conditions of the form
(2.3) u(0,t) = u(1,t) = 0,
following our usual quasilinearization approach, we set up the approximation scheme

(2.4) (u_{n+1})_t = (u_{n+1})_xx + g(u_n,(u_n)_x) + (u_{n+1} - u_n) g_u(u_n,(u_n)_x) + ((u_{n+1})_x - (u_n)_x) g_{u_x}(u_n,(u_n)_x).
Starting with an assigned value of u_0(x,t), we have a linear equation for each u_n, n ≥ 1.
Let us then proceed directly to the actual numerical aspects.
3. A SPECIFIC EQUATION
Let us consider the equation
(3.1) u_t - u_xx = (1 + u²)(1 - 2u),
which was constructed so as to have the simple explicit solution
(3.2) u(x,t) = tan (x + t).
This solution has the additional advantage of becoming infinite as x + t → π/2 = 1.57+, thus putting numerical algorithms to a good test. The usual Picard iteration method

(3.3) (u_{n+1})_t - (u_{n+1})_xx = (1 + u_n²)(1 - 2u_n)

was compared with quasilinearization over the two triangles 0 ≤ t ≤ 1 - x, 0 ≤ x ≤ 1, and 0 ≤ t ≤ 1.5 - x, 0 ≤ x ≤ 1.5.
The numerical values were obtained by using conventional difference approximation techniques which we will describe below. As mentioned before, other computational algorithms for the solution of partial differential equations could yield different estimates of the value of one approach as compared with the other.
4. DIFFERENCE APPROXIMATION
The first question is that of deciding upon a difference approximation to the terms u_t - u_xx. The increments Δx = Δt = 0.01 were used, and the Crank-Nicholson difference operator was employed to obtain a discrete system.
Letting u_{m,n} denote the solution of the difference-equation analogue of (3.1) at the lattice point (mΔx, nΔt), we replace (3.1) by

(4.1) (u^{(k+1)}_{m,n+1} - u^{(k_n)}_{m,n})/Δt - (1/(2(Δx)²))[(u^{(k+1)}_{m-1,n+1} - 2u^{(k+1)}_{m,n+1} + u^{(k+1)}_{m+1,n+1}) + (u^{(k_n)}_{m-1,n} - 2u^{(k_n)}_{m,n} + u^{(k_n)}_{m+1,n})] = [f(u^{(k)}_{m,n+1}) + f(u^{(k_n)}_{m,n})]/2,

where f(u) = (1 + u²)(1 - 2u), and k_n denotes the final number of iterations required to obtain an acceptable approximation to the value of u_{m,n} at the grid-points on the line t = nΔt.
The criterion for acceptance of an approximation was a commonly used one, namely that the maximum relative change per line be less than a prescribed amount before passing to the next line. In this case, we required that

(4.2) max_m |u^{(k)}_{m,n} - u^{(k-1)}_{m,n}| / |u^{(k)}_{m,n}| ≤ 10^{-8}.
5. COMPARISON BETWEEN PICARD AND QUASILINEARIZATION
Table 5.1 gives the number of iterations required to achieve the criterion of (4.2) before continuing to the next t-line.
The significant observation was that while the Picard procedure and the quasilinearization procedure required about the same amount
Table 5.1

Base of Triangle   Method   Number of Iterations   t-interval
1.0                Newton   2                      (0.01, 0.98)
                            1                      (0.98, 0.99)
1.0                Picard   4                      (0.01, 0.68)
                            3                      (0.68, 0.93)
                            2                      (0.93, 0.99)
1.5                Newton   3                      (0.01, 0.23)
1.5                Picard   28                     (0.01, 0.50)
of work in the case of the triangle 0 ≤ t ≤ 1 - x, 0 ≤ x ≤ 1, about nine times the number of iterations required for the quasilinearization method were needed to obtain comparable accuracy by the Picard procedure when the region of interest was the larger triangle 0 ≤ t ≤ 1.5 - x, 0 ≤ x ≤ 1.5.
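The experiment can be repeated in miniature. The sketch below is our reconstruction, not the authors' code: it works on the strip 0 ≤ x ≤ 1, 0 ≤ t ≤ 0.25, takes boundary values from the known solution tan(x + t), and iterates the Crank-Nicholson system on each t-line with either the Picard or the quasilinearized (Newton) right-hand side, stopping at a relative change of 10^{-8}.

```python
import math

def f(u):  return (1.0 + u*u) * (1.0 - 2.0*u)
def fp(u): return 2.0*u - 6.0*u*u - 2.0          # derivative of f

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0]*n, [0.0]*n
    cp[0], dp[0] = c[0]/b[0], d[0]/b[0]
    for i in range(1, n):
        denom = b[i] - a[i]*cp[i-1]
        cp[i] = c[i]/denom
        dp[i] = (d[i] - a[i]*dp[i-1])/denom
    x = [0.0]*n
    x[-1] = dp[-1]
    for i in range(n-2, -1, -1):
        x[i] = dp[i] - cp[i]*x[i+1]
    return x

def solve(newton, dx=0.01, dt=0.01, t_max=0.25):
    M = round(1.0/dx)
    u = [math.tan(m*dx) for m in range(M+1)]     # initial line u(x,0) = tan x
    r = dt/(2.0*dx*dx)
    total_iters = 0
    for n in range(1, round(t_max/dt) + 1):
        t = n*dt
        v = u[:]                                 # initial guess: previous line
        v[0], v[M] = math.tan(t), math.tan(1.0 + t)
        for _ in range(200):
            a = [-r]*(M-1); b = [1.0 + 2.0*r]*(M-1); c = [-r]*(M-1); d = []
            for m in range(1, M):
                rhs = u[m] + r*(u[m-1] - 2.0*u[m] + u[m+1]) + 0.5*dt*f(u[m])
                if newton:                       # quasilinearized source term
                    b[m-1] -= 0.5*dt*fp(v[m])
                    rhs += 0.5*dt*(f(v[m]) - fp(v[m])*v[m])
                else:                            # Picard source term
                    rhs += 0.5*dt*f(v[m])
                d.append(rhs)
            d[0] += r*v[0]; d[-1] += r*v[M]      # fold in boundary values
            w = thomas(a, b, c, d)
            total_iters += 1
            change = max(abs(w[m-1] - v[m]) / max(abs(w[m-1]), 1e-30)
                         for m in range(1, M))
            v = [v[0]] + w + [v[M]]
            if change < 1e-8:
                break
        u = v
    err = max(abs(u[m] - math.tan(m*dx + t_max)) for m in range(M+1))
    return total_iters, err

newton_iters, newton_err = solve(True)
picard_iters, picard_err = solve(False)
assert newton_err < 0.02 and picard_err < 0.02
assert newton_iters <= picard_iters
```

Even on this reduced region the quasilinearized iteration needs no more line iterations than the Picard iteration, in line with Table 5.1; the gap widens as the solution steepens.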
6. GRADIENT TECHNIQUES
In Chapter One we indicated that one approach to the determination of a solution of the scalar equation f(x) = 0 is to use the Newton-Raphson approximation method. In place of the original static process, we consider a discrete dynamic process

(6.1) x_{n+1} = x_n - f(x_n)/f'(x_n),
a nonlinear difference equation. The limit of the sequence {x_n}, under suitable conditions, is the required root. In place of a discrete process, we can consider a continuous process
based upon the differential equation
(6.2) dx/dt = f(x),
or, perhaps, in closer analogy with (6.1), the differential equation
(6.3) dx/dt = -f(x)/f'(x).
It is clear that there are arbitrarily many differential equations whose limiting form as t → ∞ is the static equation f(x) = 0. Which one we choose depends upon considerations of accuracy, stability, time, and type of computation available.
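For instance (an illustration of ours), applying (6.3) to f(x) = x² - 2 drives the trajectory to the root √2; along the exact flow, f(x(t)) = f(x(0))e^{-t}.

```python
import math

# Continuous analogue (6.3) of the Newton-Raphson process for f(x) = x^2 - 2:
# dx/dt = -f(x)/f'(x).  The trajectory approaches the root sqrt(2).

def f(x):  return x*x - 2.0
def fp(x): return 2.0*x

x, h = 1.0, 1e-3
for _ in range(20000):            # Euler integration to t = 20
    x += h * (-f(x) / fp(x))

assert abs(x - math.sqrt(2.0)) < 1e-4
```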
The same idea can be used to study general functional equations. Consider the two-point boundary-value problem

(6.4) u'' = g(u,u',x), u(0) = u(b) = 0.
Associate with this equation a time-dependent process
(6.5) u_t = u_xx - g(u,u_x,x),
u(0,t) = u(b,t) = 0,
u(x,0) = w(x), 0 < x < b,
where w(x) is chosen in some expeditious fashion. Applying the quasilinearization approach of the preceding sections,
we have a way of studying the solution of (6.4) in a different fashion, one that may or may not involve the solution of linear systems of ordinary differential equations, or linear systems of algebraic equations.
Let us point out that it may be preferable to use a different approach to (6.5) than that described above. For example, polynomial approximation techniques appear to be quite effective.
Secondly, let us note that we are not particularly interested in the complete time history of u(x,t), but only its "steady-state,"
(6.6) u(x) = lim_{t→∞} u(x,t),
hopefully the solution of (6.4). Since we expect this approach to be exponential, that is,

(6.7) u(x,t) ≈ u(x) + u_1(x)e^{λ_1 t} + u_2(x)e^{λ_2 t} + ⋯,

we can profitably use nonlinear summability to obtain excellent extrapolation of u(x) from u(x,t) for small t.
Finally, let us note that (6.5) affords only one method for imbedding (6.4) within a continuous family of processes. The theory of invariant imbedding, using transport theory models, furnishes alternative approaches.
These are techniques which would take us too far afield at the moment. In subsequent volumes we shall return to them. The important point to stress is that a mixture of techniques is highly desirable, and it is unwise and unscientific to attempt to attack significant problems with only a single method. References to the preceding approaches will be found at the end of the chapter.
7. AN ELLIPTIC EQUATION
Corresponding experiments were carried out for the elliptic equation

(7.1) u_xx + u_yy = e^u,
an equation of interest in magnetohydrodynamics. The boundary conditions will be described below.
We shall compare the solution obtained by means of the recurrence relation
(7.2) (u_{k+1})_xx + (u_{k+1})_yy = e^{u_k},
the Picard approximation technique, with that obtained from
(7.3) (u_{k+1})_xx + (u_{k+1})_yy - e^{u_k} u_{k+1} = e^{u_k}(1 - u_k),
the quasilinearization technique.
There is no difficulty in supplying the analogues of the monotonicity and quadratic convergence proofs, following the lines indicated in earlier sections. The required positivity is easily obtained by following the method of Chapter Three, Sec. 8, the variational approach.
8. COMPUTATIONAL ASPECTS
We shall consider the rectangular region 0 ≤ x ≤ 1/2, 0 ≤ y ≤ 1, for two sets of boundary conditions, u = 0 on the border of the rectangle, and u = 10. As is to be expected, the quasilinearization approach appears in a most favorable light for u = 10, where larger gradients occur.
The conventional approach was followed. The Laplacian is replaced by the difference expression
(8.1) (u_{m+1,n} + u_{m,n+1} + u_{m-1,n} + u_{m,n-1} - 4u_{m,n})/h²,

where u_{m,n} is the discrete analogue of u(mΔx, nΔy) and h = Δx = Δy, which in this case was taken to be 1/32.

If we order the points (m,n) so that (m,n) precedes (m',n') if n < n' or if n = n' and m < m', and denote the resulting set of 15 × 31 = 465 numbers {u_{mn,k}} by u_k, both the Picard and quasilinearization techniques lead to the problem of solving large systems of linear algebraic equations.
Normalizing these equations by dividing through by the negative of the coefficient of the central term u_{mn,k}, we obtain in each case a system which has the form

(8.2) (I - L_k - U_k)u_{k+1} = b_k.

Here I, L_k, and U_k are 465 × 465 matrices; I is the identity matrix, while L_k and U_k are respectively lower and upper triangular matrices appropriate to the approximation method used. In our problem, they are transposes of each other and have only two diagonal lines of nonzero elements; in the Picard iteration these elements are identically equal to 1/4, while in the Newton procedure they are equal to

(4 + h² exp u_k)^{-1},
where the approximation u_k is evaluated at the central point (m,n) of the star of points indicated in (8.1). The components of the vector b_k in (8.2) are equal to the appropriate values of the right-hand sides of (7.2) and (7.3), respectively.
Since in many realistic studies the size of the problem is so huge as to preclude the use of Gaussian elimination to solve the algebraic systems (8.2), we chose the successive over-relaxation method of Young, primarily because it is the simplest iterative method which is substantially better than the Gauss-Seidel method, although it is recognized that faster-converging block relaxation and alternating direction methods are not much more complicated than successive over-relaxation.
Applying the successive over-relaxation method directly to the systems (8.2) for a fixed value of k would yield the iterative formula
(8.3) (I - ωL_k)u_{r+1,k+1} = [ωU_k - (ω - 1)I]u_{r,k+1} + ωb_k,   r = 0, 1, 2, ...,
where ω is the over-relaxation parameter and the component equations are solved successively in the order that produces the components of the vector u_{r+1,k+1} consecutively in the order indicated above. However, it is clear that not much effort should be expended in obtaining a very accurate solution of (8.2) if u_{k+1} is not a good approximation to the solution of (7.1). Similarly, there is not much point to iterating on the index k if the approximations given by (8.3) are too rough as a consequence of terminating the iteration on r too soon. Thus there exists the open problem as to when one should iterate on k and when on r at each step. We avoided this relatively difficult analysis and simply iterated on both simultaneously, using the formula
(8.4) (I - ωL_k)u_{k+1} = [ωU_k - (ω - 1)I]u_k + ωb_k,   k = 0, 1, 2, ...,
the component equations being solved in the order indicated above. Thus we see a strong interplay between the numerical method used for solving either (8.2) or (8.4) and the rate of convergence of the actual solutions of (8.2) or of (8.4), the difficulty of whose analysis
dictates experimentation. We also note that this blend of iterative procedures for the solution of the linear systems (8.2) and the Newton or Picard iterations to solve the original nonlinear problem has the advantage that one does not have to go to the full-scale effort of solving accurately the system (8.2) for each value of k.
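The blended Newton/SOR sweep just described can be sketched in modern terms. The routine below is an illustrative reconstruction, not the program actually used: it assumes, consistently with the Newton coefficients (4 + h² exp u_k)⁻¹ quoted above, that (7.1) is u_xx + u_yy = exp(u), and it uses the grid of our experiments (15 × 31 interior points, h = 1/64) with each point updated from the Newton linearization about the current iterate.

```python
import math

def newton_sor(nx=15, ny=31, h=1/64, bc=0.0, omega=1.74,
               tol=1e-6, max_sweeps=5000):
    # Blended Newton/SOR sweeps for the discrete analogue of
    # u_xx + u_yy = exp(u), with u = bc on the boundary of the
    # rectangle and nx x ny interior points of mesh width h.
    u = [[bc] * (ny + 2) for _ in range(nx + 2)]
    for sweep in range(1, max_sweeps + 1):
        max_rel = 0.0
        for m in range(1, nx + 1):
            for n in range(1, ny + 1):
                up = u[m][n]
                e = math.exp(up)  # Newton linearization about current iterate
                # Gauss-Seidel value of the linearized equation:
                gs = (u[m + 1][n] + u[m - 1][n] + u[m][n + 1] + u[m][n - 1]
                      - h * h * e * (1.0 - up)) / (4.0 + h * h * e)
                new = up + omega * (gs - up)      # over-relaxation
                u[m][n] = new
                if new != 0.0:
                    max_rel = max(max_rel, abs((new - up) / new))
        if max_rel <= tol:       # stopping criterion described below
            return u, sweep
    return u, max_sweeps
```

With bc = 0 the computed value at the center point (1/8, 1/4) reproduces the -0.007071 reported in Table 5.2.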
As in the parabolic case, the criterion for stopping the iterations was that the maximum of the absolute value of the relative change in the functional values between consecutive iterations be no greater than 10⁻⁶. If this were truly a bound on the relative error between the numerical solution and the solution of the nonlinear equation (7.1) with the Laplacian replaced by the expression (8.1), then this criterion would be about two orders of magnitude too stringent to be justified by the truncation errors. However, some measure of stringency is dictated by the heuristic character of the stopping criterion when one knows neither the true solution nor effective bounds on it.
In the case of Picard iteration, where L_k and U_k are independent of k, if one were to ignore the fact that b_k depends on k, the theoretically optimal value of ω for the fastest convergence is 1.732. For the Newton iteration the value would be a little smaller, again ignoring the changes in L_k, U_k, and b_k from iteration to iteration, which we clearly cannot do. However, in the case of zero boundary conditions for the Picard iteration, where L_k and U_k are constant, the best value (or values) of ω are undoubtedly near the theoretical value noted above, because the change in u_k over the region is very small (the value at the center of the region for the final solution being -0.007071 to four significant figures). Consequently, for this case we experimented only with the values ω = 1.70 and 1.74. No such confidence existed for the Picard iteration in the situation where u is equal to 10 on the boundary, because of the large variations in u_k over the region, which would undoubtedly introduce large changes at a point from iteration to iteration. Of course, in the Newton iterations, since L_k and U_k change from iteration to iteration, one has even less of an idea as to the optimal value of ω. Consequently, in the remaining cases a number of runs for different values of ω
were made to estimate an optimal value of w. Table 5.2 summarizes the results.
From the table we observe that for the case of zero boundary conditions no advantage was obtained from the use of the quadratically converging Newton procedure over the results obtained
Table 5.2

    Value of ω    Number of Iterations    Value of u_k at (1/8, 1/4)

    u ≡ 0 (Picard Method)
    1.74           64                     -0.007071
    1.70           77                     -0.007071

    u ≡ 0 (Newton Method)
    1.95          258                     -0.007071
    1.74           64                     -0.007071
    1.70           77                     -0.007071
    1.50          154                     -0.007071
    1.00          426                     -0.007071

    u ≡ 10 (Picard Method)
    1.55           63                      5.65995
    1.50           53                      5.65995
    1.40           66                      5.65994
    1.35           71                      5.65993
    1.30           76                      5.65993
    1.20           78                      5.65992
    1.10          106                      5.65997
    1.00          134                      5.65998

    u ≡ 10 (Newton Method)
    1.74           58                      5.65995
    1.70           51                      5.65995
    1.65           46                      5.65995
    1.60           42                      5.65995
    1.55           41                      5.65995
    1.50           50                      5.65995
    1.40           66                      5.65996
    1.00          148                      5.65998
from the first-order converging Picard procedure. In fact, the same values of ω gave the most rapid convergence in both cases. On the other hand, when the boundary conditions were raised to 10, implying rapid changes in u near the boundary, the Newton procedure was more advantageous, requiring some 41 iterations as compared with 53 for Picard's. We note further that, curiously, the optimal value of ω was slightly lower for Picard's procedure.
As an additional note, we included the case of Gauss-Seidel relaxation, given by ω = 1, for comparison with successive over-relaxation. As expected, the successive over-relaxation method ranged from two and one-half to almost seven times faster than Gauss-Seidel relaxation.
Another incidental observation was that the points at which the maximum relative change in u_k took place were invariably very close to the origin. In most cases it was at the point (1/64, 1/64).
9. POSITIVITY AND MONOTONICITY
Any detailed discussion of positivity and monotonicity for partial differential operators would enmesh us too deeply in the theory of partial differential equations. Let us merely point out a few simple consequences of the techniques we have already applied.
The method of finite differences, used in Chapter Three, Sec. 6, readily establishes the desired nonnegativity of the solution of

(9.1) u_t = u_xx + f(x,t),   t > 0,
      u(x,0) = g(x),         0 ≤ x ≤ 1,
      u(0,t) = u(1,t) = 0,   t > 0,
where f, g ≥ 0, and for similar equations in higher dimensions. To obtain the corresponding result for elliptic partial differential equations such as

(9.2) u_xx + u_yy + q(x,y)u = f(x,y),
122 QUASILINEARIZATION AND NONLINEAR BOUNDARY·VALUE PROBLEMS
we can either use the parabolic partial differential equation as in Chapter Three, Sec. 6, or we can start from the variational problem of minimizing the functional
(9.3) ∫ [u_x² + u_y² + q(x,y)u² - 2f(x,y)u] dA
and proceed as before. In this fashion, we can obtain not only the nonnegativity of the Green's function, but also analogues of the variation-diminishing property.
10. THE HOPF-LAX EQUATION
Consider the conservation relation
(10.1) u_t + f(u)_x = 0,
with the initial condition u(x,0) = g(x), 0 ≤ t ≤ T. The case f(u) = u²/2 is particularly interesting, since the equation is a limiting case of the Bateman-Burgers equation

(10.2) u_t + uu_x = εu_xx,
an equation which plays an important role in various simplified mathematical models of turbulence.
Let us assume that f(u) is strictly convex. Write (10.1) in the form

(10.3) u_t + f′(u)u_x = 0,

and introduce the function U(x,t) by means of the relation U_x = u.
Then equation (10.3) becomes

(10.4) U_xt + f′(U_x)U_xx = 0,

or

(10.5) U_t + f(U_x) = 0,
upon integrating with respect to x and choosing appropriate initial conditions.
The associated linear equation is
(10.6) w_t + f(v) + (w_x - v)f′(v) = 0,   w(x,0) = G(x).

Using the easily demonstrated positivity of the operator ∂/∂t + f′(v) ∂/∂x, we have an explicit solution of (10.5) in the form
(10.7) U(x,t) = min over v of w(x,t,v),
where w is the solution of (10.6).
Taking v to be constant, we obtain a simple explicit solution: the representation found previously by Lax in the general case, and by Hopf in the particular case f(u) = u²/2, the equation of (10.2). In this case, the equation can be reduced to the linear heat equation and solved explicitly.
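For constant v, the solution of (10.6) is w(x,t,v) = G(x - f′(v)t) + t[vf′(v) - f(v)], and for f(u) = u²/2 minimizing over v is equivalent to minimizing over the point y = x - vt, which gives the Hopf-Lax representation U(x,t) = min over y of [G(y) + (x - y)²/2t]. A numerical sketch of this representation (the initial datum in the example is an illustrative choice):

```python
import numpy as np

def hopf_lax(G, x, t, y=None):
    # U(x,t) = min over y of [ G(y) + (x - y)^2 / (2 t) ],
    # the representation (10.7) specialized to f(u) = u^2 / 2.
    if y is None:
        y = np.linspace(-5.0, 5.0, 200001)  # candidate minimizers
    return float(np.min(G(y) + (x - y) ** 2 / (2.0 * t)))
```

For G(y) = y²/2, so that the initial datum is g(x) = x, the exact solution of (10.5) is U = x²/2(1 + t), and u = U_x = x/(1 + t) solves (10.1).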
Chapter Five
COMMENTS AND BIBLIOGRAPHY
§1. The numerical results were obtained in conjunction with M. Juncosa. See
R. BELLMAN, M. JUNCOSA, and R. KALABA, "Some Numerical Experiments Using Newton's Method for Nonlinear Parabolic and Elliptic Boundary-Value Problems," Comm. ACM, Vol. 4, 1961, pp. 187-191,
where further references are given.
§6. For a detailed discussion of gradient techniques, see
P. ROSENBLOOM, "The Method of Steepest Descent," Proc. Symp. Appl. Math., Vol. VI, 1956, pp. 127-176.
For a brief account of the application of nonlinear summability techniques, see
R. BELLMAN and R. KALABA, "A Note on Nonlinear Summability Techniques in Invariant Imbedding," J. Math. Anal. Appl., Vol. 6, 1963, pp. 465-472.
For a discussion of the use of polynomial approximation in the numerical solution of partial differential equations, see
R. BELLMAN, R. KALABA, and B. KOTKIN, "On a New Approach to the Computational Solution of Partial Differential Equations," Proc. Nat. Acad. Sci. USA, Vol. 48, 1962, pp. 1325-1327.
R. BELLMAN, "Some Questions Concerning Difference Approximations to Partial Differential Equations," Boll. d'Unione Mate., Vol. 17, 1962, pp. 188-190.
§9. For a number of references to positivity properties of the solutions of partial differential equations, see
E. F. BECKENBACH and R. BELLMAN, Inequalities, Springer, Berlin, 1961.
These properties are, in turn, closely associated with "maximum principles," and any detailed discussion would plunge us into the middle of the modern theory of partial differential equations.
§1O. See
P. LAX, "Hyperbolic Systems of Conservation Laws. II," Comm. Pure Appl. Math., Vol. 10, 1957, pp. 537-566,
R. KALABA, "On Nonlinear Differential Equations, the Maximum Operation, and Monotone Convergence," J. Math. Mech., Vol. 8, 1959, pp. 519-574.
R. BELLMAN, J. M. RICHARDSON, and S. AZEN, "On New and Direct Computational Approaches to Some Mathematical Models of Turbulence," Quart. Appl. Math., to appear.
In order to avoid extensive excursions, we omitted any discussion of the connection between the minimization of functionals of the form
J(u) = ∫₀^T h(u,u′) dt

and partial differential equations of the form u_t = g(u,u_x,t). See
E. D. CONWAY and E. HOPF, "Hamilton's Theory and Generalized Solutions of the Hamilton-Jacobi Equation," J. Math. Mech., Vol. 13, 1964, pp. 939-986.
This in turn would lead to the use of approximation in policy space in the calculus of variations.
For an application of dynamic programming to the study of parabolic partial differential equations, see
W. H. FLEMING, "The Cauchy Problem for Degenerate Parabolic Equations," J. Math. Mech., Vol. 13, 1964, pp. 987-1008. See also Chapter 4 of
R. BELLMAN, Adaptive Control Processes: A Guided Tour, Princeton University Press, Princeton, New Jersey, 1961.
Chapter Six
APPLICATIONS IN PHYSICS, ENGINEERING, AND BIOLOGY
1. INTRODUCTION
In this chapter we wish to sketch some typical applications of quasilinearization to characteristic problems in physics, engineering, and biology. We shall discuss in turn problems in optimal design and control, inverse problems in radiative transfer, questions of orbit determination, system identification and adaptive control, periodogram analysis, and, finally, an inverse problem in cardiology.
2. OPTIMAL DESIGN AND CONTROL
A basic problem in modern control theory is that of minimizing the scalar functional
(2.1) I(y) = ∫₀^T g(x,y) dt,
over M-dimensional vector functions y(t), where x, an N-dimensional vector, and y are connected by the differential equation
(2.2) dx/dt = h(x,y),   x(0) = c.
Frequently, the problem is a more general one of minimizing the scalar functional
(2.3) I(y,a) = ∫₀^T g(x,y,a) dt + φ(a)
over M-dimensional vector functions y(t) and K-dimensional vectors a, where x, y, and a are connected by the differential equation
(2.4) dx/dt = h(x,y,a),   x(0) = c.
These entail questions of design since a may be viewed as a vector of design parameters. We shall not discuss any of the still more general problems involving functional differential and partial differential equations.
To treat this variational process by means of quasilinearization, we can proceed, as we know, in two different fashions. We can begin with an initial approximation y₀, a₀, and generate x₀ by means of (2.4),

(2.5) dx₀/dt = h(x₀,y₀,a₀),   x₀(0) = c.
To obtain x₁, y₁, a₁, we consider the equation

(2.6) dx₁/dt = h₁(x₁,y₁,a₁; x₀,y₀,a₀),   x₁(0) = c,
where h₁ is the expansion of h(x,y,a) around x₀, y₀, a₀ up to linear terms, and pose the problem of minimizing

(2.7) ∫₀^T g₂(x₁,y₁,a₁,x₀,y₀,a₀) dt

over y₁ and a₁, where g₂ is the expansion of g(x,y,a) around x₀, y₀, a₀ up to quadratic terms.
It is easy to see that the variational equations are linear in x₁, y₁, and a₁.
Alternatively, we can use standard variational techniques to obtain the Euler equation and then apply quasilinearization to this equation. To unify the treatment, we can dispense with the fact that a is a constant by taking it to be a solution of the differential equation
(2.8) da/dt = 0,
with an unknown initial condition. Introducing the Lagrange multipliers λ(t) and μ(t), we then consider the new criterion functional

(2.9) ∫₀^T [g(x,y,a) + (λ(t), dx/dt - h(x,y,a)) + (μ(t), da/dt)] dt + φ(a(0)),
and write down in standard manner the appropriate differential equations and boundary conditions.
3. AN EXAMPLE
Consider, for example, the problem of minimizing the functional
(3.1) J(y,a) = (1/2) ∫₀¹ (x² + y²) dt + a²/2,
over the function y and the parameter a, where
(3.2) dx/dt = -ax + y,   x(0) = c,   da/dt = 0.
The Euler equations and boundary conditions are
(3.3) dx/dt = -ax + y,   x(0) = c,
      dλ/dt = x + λa,    λ(1) = 0,
      da/dt = 0,         μ(0) = a(0),
      dμ/dt = λx,        μ(1) = 0,
      y = λ.
The relation y = λ enables us to eliminate λ from consideration. As our initial approximation, we set

(3.4) x₀(t) = 1,   y₀(t) = 0,   μ₀(t) = 0,   a₀ = 0.
Then either of the two procedures discussed in Sec. 2 yields the following scheme of successive approximations:

      dx_{n+1}/dt = -a_n x_{n+1} + y_{n+1} - x_n(a_{n+1} - a_n),   x_{n+1}(0) = c,
      dy_{n+1}/dt = x_{n+1} + a_n y_{n+1} + y_n(a_{n+1} - a_n),    y_{n+1}(1) = 0,
      da_{n+1}/dt = 0,                                             μ_{n+1}(0) = a_{n+1}(0),
      dμ_{n+1}/dt = y_n x_{n+1} + x_n y_{n+1} - x_n y_n,           μ_{n+1}(1) = 0.
There is no difficulty in applying the standard techniques discussed previously to the numerical determination of the sequence {x_n, y_n, a_n}.
In the following section, we describe some specific calculations.
Table 6.1

SOME COMPUTATIONAL RESULTS

Iteration 1 (a = 0.283436)

    t      x          y            μ
    0.0    1.000000   -0.662196    0.283436
    0.1    0.910758   -0.566718    0.223168
    0.2    0.830158   -0.479745    0.170912
    0.3    0.757867   -0.400410    0.126965
    0.4    0.693161   -0.327919    0.0906022
    0.5    0.635393   -0.261546    0.0611771
    0.6    0.583983   -0.200628    0.0381112
    0.7    0.538418   -0.144555    0.0208900
    0.8    0.498242   -0.0927650   0.00905750
    0.9    0.463053   -0.0447402   0.00221157
    1.0    0.432498    0.167638 × 10⁻⁷   0.125729 × 10⁻⁷

Iteration 2 (a = 0.232000)

    t      x          y            μ
    0.0    1.000000   -0.640197    0.232000
    0.1    0.917906   -0.557950    0.174601
    0.2    0.845465   -0.481894    0.128792
    0.3    0.781899   -0.410979    0.0925063
    0.4    0.726531   -0.344449    0.0640778
    0.5    0.678772   -0.281596    0.0421680
    0.6    0.638114   -0.221749    0.0257069
    0.7    0.604125   -0.164271    0.0138468
    0.8    0.576444   -0.108548    0.00592470
    0.9    0.554777   -0.0539856   0.00143373
    1.0    0.538896    0.223517 × 10⁻⁷   0.814907 × 10⁻⁸

Iteration 3 (a = 0.232393)

    t      x          y            μ
    0.0    1.000000   -0.640414    0.232393
    0.1    0.917788   -0.558586    0.175557
    0.2    0.845306   -0.482596    0.129699
    0.3    0.781742   -0.411697    0.0933488
    0.4    0.726424   -0.345141    0.0648322
    0.5    0.678770   -0.282227    0.0428066
    0.6    0.638276   -0.222290    0.0262037
    0.7    0.604515   -0.164697    0.0141850
    0.8    0.577132   -0.108842    0.00610581
    0.9    0.555837   -0.0541356   0.00148800
    1.0    0.540406    0.558794 × 10⁻⁸   0.325963 × 10⁻⁸

Iteration 4 (a = 0.232504)

    t      x          y            μ
    0.0    1.000000   -0.640448    0.232504
    0.1    0.917827   -0.558566    0.175554
    0.2    0.845336   -0.482579    0.129696
    0.3    0.781763   -0.411682    0.0933461
    0.4    0.726438   -0.345128    0.0648299
    0.5    0.678777   -0.282216    0.0428047
    0.6    0.638277   -0.222281    0.0262024
    0.7    0.604510   -0.164690    0.0141842
    0.8    0.577121   -0.108838    0.00610543
    0.9    0.555821   -0.0541333   0.00148790
    1.0    0.540384    0.335276 × 10⁻⁷   0.791624 × 10⁻⁸

Iteration 5 (a = 0.232506)

    t      x          y            μ
    0.0    1.000000   -0.640446    0.232506
    0.1    0.917826   -0.558566    0.175554
    0.2    0.845336   -0.482578    0.129696
    0.3    0.781763   -0.411681    0.0933459
    0.4    0.726438   -0.345128    0.0648297
    0.5    0.678777   -0.282215    0.0428046
    0.6    0.638276   -0.222280    0.0262024
    0.7    0.604510   -0.164690    0.0141842
    0.8    0.577121   -0.108838    0.00610542
    0.9    0.555820   -0.0541333   0.00148790
    1.0    0.540383    0.428408 × 10⁻⁷   0.162981 × 10⁻⁷
4. NUMERICAL RESULTS

Taking c = 1, and x₀, y₀, μ₀, and a₀ as in (3.4), we obtain the results in Table 6.1. We see that the optimal value of a is given by

(4.1) a ≈ 0.2325.

(The program is given in Appendix 2.) To test this conclusion, we set a = 0.2325 in the variational problem described by (3.1) and (3.2) and solved the resultant variational problem. This is easily done for the analytic structure we have carefully chosen. Doing the same for other nearby values of a, we obtain the results illustrated graphically in Fig. 6.1, which show that the value of a given in (4.1) is indeed optimal.
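This check can be reproduced directly. For fixed a, setting y = λ = -px reduces the Euler system (3.3) to the scalar Riccati equation p′ = -1 + 2ap + p², p(1) = 0, and then J(a) = p(0)/2 + a²/2 for c = 1. The following sketch is our own reformulation (not the program of Appendix 2) and traces the cost curve of Fig. 6.1:

```python
import numpy as np

def riccati_p0(a, steps=200):
    # p(0) for p' = -1 + 2 a p + p^2, p(1) = 0, integrated backward
    # from t = 1 to t = 0 by classical RK4.
    def f(q):
        return -1.0 + 2.0 * a * q + q * q
    p, dt = 0.0, 1.0 / steps
    for _ in range(steps):
        k1 = f(p)
        k2 = f(p - 0.5 * dt * k1)
        k3 = f(p - 0.5 * dt * k2)
        k4 = f(p - dt * k3)
        p -= dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return p

def cost(a):
    # J(a) of (3.1) with c = 1: minimal integral term p(0)/2 plus a^2/2.
    return 0.5 * riccati_p0(a) + 0.5 * a * a

a_grid = np.arange(0.0, 0.40005, 0.0005)
a_opt = a_grid[int(np.argmin([cost(a) for a in a_grid]))]
```

A grid search over 0 ≤ a ≤ 0.40 locates the minimum near a = 0.2325, in agreement with (4.1); for a = 0 the cost reduces to tanh(1)/2 ≈ 0.3808, at the upper end of the range plotted in Fig. 6.1.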
[Figure: the cost J plotted against the system design parameter a, for a between 0 and 0.40; the plotted cost ranges from about 0.340 to 0.370, with a minimum near a = 0.23.]

Fig. 6.1. Cost as a function of design parameter
5. AN INVERSE PROBLEM IN RADIATIVE TRANSFER
Let us now consider a particular version of what are called "inverse problems." In the two previous volumes in this series, we discussed the problem of determining the reflected flux from an inhomogeneous, plane-parallel, nonemitting, and isotropically scattering atmosphere of finite optical thickness τ₁, where the optical properties depend only upon τ, the optical height. (See Fig. 6.2.) Let parallel rays of light of net flux π per unit area normal to their direction of propagation be incident on the upper surface in the
[Figure: flux incident on the upper surface of the slab and diffusely reflected flux emerging from it.]

Fig. 6.2. The physical situation
direction characterized by μ₀, where μ₀ is, as usual, the cosine of the angle measured from the inward normal to the surface. The bottom surface, for simplicity, is taken to be completely absorbing.
Formulating the process in invariant imbedding terms, let r(μ,μ₀,τ₁) denote the intensity of the diffusely reflected light in the direction cos⁻¹ μ, and set R(μ,μ₀,τ₁) = 4μ r(μ,μ₀,τ₁). Then R satisfies the integrodifferential equation
(5.1) ∂R/∂τ₁ = -(μ⁻¹ + μ₀⁻¹)R
        + λ(τ₁)[1 + (1/2) ∫₀¹ R(μ,μ′,τ₁) dμ′/μ′][1 + (1/2) ∫₀¹ R(μ′,μ₀,τ₁) dμ′/μ′],

with the initial condition R(μ,μ₀,0) = 0. The function λ(τ) is the albedo for single scattering.
To obtain a numerical solution of (5.1) via digital computers, we apply Gauss quadrature of order N to the integrals, and treat the resulting system of ordinary differential equations

(5.2) dR_ij/dτ₁ = -(μ_i⁻¹ + μ_j⁻¹)R_ij
        + λ(τ₁)[1 + (1/2) Σ_{k=1}^{N} R_ik(τ₁) w_k/μ_k][1 + (1/2) Σ_{k=1}^{N} R_kj(τ₁) w_k/μ_k],
      R_ij(0) = 0,

where

(5.3) R_ij(τ₁) = R(μ_i,μ_j,τ₁),

the μ_k being the quadrature abscissas and the w_k the corresponding weights.
Suppose, on the other hand, that we are given the values R_ij(τ₁), i,j = 1, 2, ..., N, and are asked to determine the function λ(τ), 0 ≤ τ ≤ τ₁. This is the type of question that arises naturally in astrophysics in connection with various types of experimental work.
6. ANALYTIC FORMULATION
In order to tackle a problem of this nature, we must impose some analytic structure on λ(τ), which is to say, we must add some further information concerning the nature of the physical process. To illustrate, consider the case where the medium consists essentially of two layers, each with constant albedo, separated by a thin zone of rapid transition from one value of the albedo to the other.* Let the albedo have the form

(6.1) λ(τ) = a + b tanh [10(τ - c)],

where a, b, and c are constants to be determined. We see that λ₁ ≈ a - b in Layer 1, and λ₂ ≈ a + b in Layer 2.
Given the N² values b_ij = r_ij(τ₁), we would like to determine the three parameters a, b, and c, and τ₁, the thickness. To this end, we ask for the values of a, b, c, and τ₁ which minimize the mean-square error

(6.2) S = Σ_{i,j} [r_ij(τ₁) - b_ij]²,

where R_ij(τ₁) = 4μ_i r_ij(τ₁), and the R_ij(τ₁) satisfy the equation of (5.2). This variational problem is treated below by means of the quasilinearization techniques we have previously discussed.
7. NUMERICAL RESULTS
We begin by generating "observations" by choosing a = 0.5, b = 0.1, c = 0.5, and integrating to a thickness of τ₁ = 1.0, using N = 7. We now carry out three types of numerical experiments:

(a) Given a, b, and τ₁, determine c, the altitude of the interface.
(b) Determine τ₁, the over-all thickness, given a, b, and c.
(c) Determine a, b, and c, given τ₁.
* The importance of these problems was pointed out to the authors by S. Ueno of Kyoto University, Japan.
As will be seen from the following brief tabulation of the results for cases (a) and (c) in Tables 6.2 and 6.3, the quasilinearization technique works well. The success of the method depends crucially upon the initial approximation. In other words, we can only expect

Table 6.2

SUCCESSIVE APPROXIMATIONS OF c, THE LEVEL OF THE INTERFACE

    Approximation    Run 1       Run 2       Run 3
    0                0.2         0.8         0.0
    1                0.62        0.57
    2                0.5187      0.5024      No
    3                0.500089    0.499970    convergence
    4                0.499990    0.499991

    True Value       0.5         0.5         0.5
Table 6.3

SUCCESSIVE APPROXIMATIONS OF λ₁, λ₂, AND c

    Approximation    λ₁ = a - b    λ₂ = a + b    c
    0                0.51          0.69          0.4
    1                0.4200        0.6052        0.5038
    2                0.399929      0.599995      0.499602
    3                0.399938      0.599994      0.499878

    True Value       0.4           0.6           0.5
the method to be successful if initially we have at least a crude idea as to the nature of the actual physical process. (The programs are given in Appendix 3.)
The quasilinearization technique in case (c), with N = 7, involved the integration of 124 linear differential equations, and the solution of a system of linear algebraic equations of order 3. The calculations took about two minutes on an IBM-7044.
Subsequently, we shall indicate how a combination of dynamic programming and quasilinearization will lead to an improved technique for determining the positions of the interfaces in a multilayered medium.
8. THE VAN DER POL EQUATION
One of the most famous of all nonlinear differential equations is the Van der Pol equation,
(8.1) x″ + λ(x² - 1)x′ + x = 0.
Although a nonlinear equation equivalent to this had been studied by Rayleigh, this particular equation achieved prominence only in connection with the development of modern electronics. The equation provides an excellent description of the behavior of a multivibrator, and thus has application to many other scientific fields, such as cardiology or electroencephalography, in which similar "flip-flop" phenomena are observed. This was pointed out by Van der Pol himself in his original paper.
It is of some interest, then, to ask whether or not observation of the solution can yield an effective determination of the parameter λ. Assume that the following three observations of the displacement x have been made:
(8.2) x(4) = -1.80843,
x(6) = -1.63385,
x(8) = -1.40456.
We wish to determine both λ and u(4) = x′(4). It is not necessary to use least squares, since only three observations are given. To obtain an initial approximation, we observe that
(8.3) [x(6) - x(4)]/2 ≈ 0.087 ≈ x′(4),
      [x(8) - x(6)]/2 ≈ 0.114 ≈ x′(6),
      [x′(6) - x′(4)]/2 ≈ 0.014 ≈ x″(4).
Hence, referring to (8.1), we obtain
(8.4) λ₀ = 7,

as an initial guess for λ. We next integrate (8.1), or the equivalent system
(8.5) x′ = u,
      u′ = -λ(x² - 1)u - x,
      λ′ = 0,
subject to the initial conditions
(8.6) x(4) = -1.80843,
      u(4) = +0.08,
      λ(4) = 7.0.
To obtain the (n + l)st approximation, after having calculated the nth, we use the linearized relations
(8.7) x′_{n+1} = u_{n+1},
      u′_{n+1} = -λ_n(x_n² - 1)u_n - x_n + (x_{n+1} - x_n)[-2λ_n x_n u_n - 1]
                 + (u_{n+1} - u_n)[-λ_n(x_n² - 1)]
                 + (λ_{n+1} - λ_n)[-(x_n² - 1)u_n],
      λ′_{n+1} = 0,
together with the three-point boundary conditions of (8.2). The results of a numerical experiment are summarized in Table 6.4.
A second experiment was carried out with poorer estimates of the initial velocity and system parameter. The results are presented in Table 6.5. A third trial in which the initial estimate of the system parameter was taken to be 20 resulted in an overflow, so that no results are available.
The data in (8.2) were generated by integrating Van der Pol's equation with

(8.8) λ = 10.0,
      x(0) = 1.0,
      x′(0) = 0.0.
A graph is presented in Fig. 6.3. (The programs are given in Appendix 4.)
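The generation of the observations from (8.8) can be sketched as follows. This is an illustrative reconstruction, not the program of Appendix 4; the step size is an arbitrary choice:

```python
import numpy as np

def vdp_x_at_integers(lam=10.0, x0=1.0, u0=0.0, t_end=8.0, dt=1e-3):
    # Classical RK4 for the system x' = u, u' = -lam (x^2 - 1) u - x,
    # recording the displacement x at integer times.
    def f(s):
        x, u = s
        return np.array([u, -lam * (x * x - 1.0) * u - x])
    s = np.array([x0, u0])
    out = {0: x0}
    per_unit = int(round(1.0 / dt))
    for i in range(1, int(round(t_end / dt)) + 1):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        if i % per_unit == 0:
            out[i // per_unit] = float(s[0])
    return out
```

At times t = 4, 6, 8 the integration reproduces the three observations of (8.2); between these times the solution is on the slow branch of the relaxation oscillation, drifting from about -1.8 toward -1.4, consistent with the crude slope estimates of (8.3).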
Table 6.4

THE FIRST NUMERICAL EXPERIMENT

            Initial     Iteration    Iteration    Iteration    True
            Approx.     2            3            4            Values
    x(4)    -1.80843    -1.80843     -1.80843     -1.80843     -1.8084322
    u(4)    +0.08       +0.0564454   +0.0794911   +0.079366    0.079366909
    λ(4)    +7.0        9.91541      10.0004      10.00000     10.00000
Table 6.5

THE SECOND NUMERICAL EXPERIMENT

            Initial     Iteration    Iteration    Iteration    True
            Approx.     2            4            6            Values
    x(4)    -1.80843    -1.80843     -1.80843     -1.80843     -1.8084322
    u(4)    0.1000      -2.0758      0.599288     0.0792063    0.079366909
    λ(4)    5.000       3.87992      11.4091      9.99956      10.000
[Figure: x plotted against t for 0 ≤ t ≤ 14, a relaxation oscillation ranging between about -2.0 and 2.0.]

Fig. 6.3. Solution of the Van der Pol equation for λ = 10.0, x(0) = 1.0, x′(0) = 0.0
9. ORBIT DETERMINATION AS A MULTIPOINT BOUNDARY-VALUE PROBLEM
The general problem of solving the vector equation

(9.1) x′ = f(x,t),   0 ≤ t ≤ b,

subject to the multipoint conditions

(9.2) Σ_{j=1}^{N} a_{ij} x(t_j) = b_i,   i = 1, 2, ..., N,

where 0 ≤ t₁ ≤ t₂ ≤ ... ≤ t_N ≤ b, has been mentioned previously in connection with quasilinearization. Let us now consider a particular version of some interest occurring in orbit determination. Suppose that the equation of motion of a heavenly body is, in normalized units,
(9.3) x′ = u,   y′ = v,
      u′ = -x/(x² + y²)^{3/2},   v′ = -y/(x² + y²)^{3/2}.
The motion takes place in the (x,y)-plane, the plane of the ecliptic. At times t_i, i = 1, 2, 3, 4, we are given the quantities θ(t_i) = θ_i, which yield the conditions

(9.4) y(t_i) = (x(t_i) - 1) tan θ_i,   i = 1, 2, 3, 4.
(See Fig. 6.4.)
[Figure: the heavenly body H at (x,y), with the sun at the origin and the line of sight from the earth making the angle θ with the x-axis.]

Fig. 6.4. The physical situation
We wish to determine the orbit in convenient form by obtaining the values of x, y, u, v at time t = 2.5. For simplicity, we are assuming that the earth is stationary at (1,0), and that the sole force on the heavenly body is the gravitational attraction of the sun.
10. NUMERICAL RESULTS
Consider the case for which the observational data are (in radians)

(10.1) θ(0.5) = 0.251297,   θ(1.0) = 0.510240,
       θ(1.5) = 0.783690,   θ(2.0) = 1.076540.
These data were generated by assuming that at time zero we have x(0) = 2.0, y(0) = 0, u(0) = 0, v(0) = 0.5. The situation, with a stationary earth, is as shown in Fig. 6.5. As an initial approximation to the trajectory, we assume that the observed body is at the point (1.0, 0) at time zero and that it moves in a circular orbit with a period equal to 3.664, a very poor initial approximation. Then, using a
[Figure: the four observed directions to the heavenly body, fanning out from the fixed earth on the sun-earth line.]

Fig. 6.5. Approximate directions to the heavenly body at four times of observation from a fixed earth
Runge-Kutta integration method and the quasilinearization procedure, we found the results shown in Table 6.6.
The calculation took about one and one-half minutes,* no attempt having been made to streamline the calculations. It produced, of course, not merely the results of Table 6.6 but an ephemeris for 0 ≤ t ≤ 2.5 at intervals of 0.01. (The programs are given in Appendix 5.)
Table 6.6

THE APPROXIMATIONS TO THE DISPLACEMENTS AND VELOCITIES AT TIME 2.5

              Initial      First        Second       Third        Fourth       Precise Value
              Approx.      Approx.      Approx.      Approx.      Approx.      to 6 Figures
    x(2.5)    -0.455175    1.554360     1.169660     1.190170     1.193560     1.193610
    u(2.5)    -0.873320    -1.638830    -0.495101    -0.656686    -0.664207    -0.664263
    y(2.5)    0.975790     2.982690     0.865184     1.043930     1.060540     1.060700
    v(2.5)    -0.408690    1.807620     0.201206     0.237695     0.247433     0.247499
Perhaps even more surprising is the fact that if, as an initial approximation to the orbit, we consider the heavenly body to coincide with the position of the earth over all time, we still obtain the same rapid convergence to the correct orbit.
The method is readily extended to cases where more than four observations are available, through use of the method of least squares. Furthermore, perturbations due to the motion of the earth and the presence of other planets are readily incorporated, since they merely result in modifications of equations (9.3).
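The observational data (10.1) can be checked by integrating (9.3) forward from the stated initial values and forming θ = arctan[y/(x - 1)]. A sketch, with an arbitrary step size:

```python
import math

def orbit_theta(t_end, dt=1e-3):
    # RK4 for (9.3) from x(0) = 2, y(0) = 0, u(0) = 0, v(0) = 0.5;
    # returns the observation angle theta = arctan[ y / (x - 1) ] at
    # time t_end, the earth being fixed at (1, 0).
    def f(s):
        x, y, u, v = s
        r3 = (x * x + y * y) ** 1.5
        return (u, v, -x / r3, -y / r3)
    s = (2.0, 0.0, 0.0, 0.5)
    for _ in range(int(round(t_end / dt))):
        k1 = f(s)
        k2 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k1)))
        k3 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k2)))
        k4 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k3)))
        s = tuple(a + (dt / 6.0) * (p + 2.0 * q + 2.0 * r + w)
                  for a, p, q, r, w in zip(s, k1, k2, k3, k4))
    return math.atan2(s[1], s[0] - 1.0)
```

Evaluated at t = 0.5, 1.0, 1.5, 2.0, this reproduces the four angles of (10.1).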
11. PERIODOGRAM ANALYSIS
A basic problem in the scientific world is that of detecting hidden periodicities. Indeed, it is difficult to think of a domain where questions of this nature do not arise in important ways. Analytically, the problem takes a very simple form. Given the information that a
* On an IBM 7090.
function f(t) has the structure

(11.1) f(t) = Σ_{i=1}^{R} α_i cos ω_i t,

we wish to determine the amplitudes and frequencies, and R, from a set of observations, b_i = f(t_i), i = 1, 2, ..., M. We shall consider the simpler problem where R is assumed known, and take M ≥ 2R.
Our approach, which is different from any of the standard ones, is to convert this into a variational problem involving multipoint boundary values and then to employ quasilinearization. We begin by writing
(11.2) f(t) = Σ_{i=1}^{R} u_i(t),

where the u_i are solutions of

(11.3) u_i″ + ω_i² u_i = 0,   u_i(0) = α_i,   u_i′(0) = 0.

Our aim is to determine the α_i and the ω_i so as to minimize the quantity
(11.4) Σ_{i=1}^{M} (f(t_i) - b_i)².

To treat this, we employ quasilinearization as in the preceding pages, starting with initial approximations (α_i⁽⁰⁾, ω_i⁽⁰⁾) which produce the initial functions u_i⁽⁰⁾(t). In the next section, we present some numerical results.
12. NUMERICAL RESULTS
Let us consider a simple case where
(12.1) f(t) = Σ_{i=1}^{3} α_i cos ω_i t,

with

(12.2) α₁ = 1.000000,   ω₁ = 1.11000,
       α₂ = 0.500000,   ω₂ = 2.03000,
       α₃ = 0.100000,   ω₃ = 3.42000,
APPLICATIONS IN PHYSICS, ENGINEERING, AND BIOLOGY
and six observations:

    t       f(t)
    0.00    1.60000
    1.00    0.126895
    2.00    -0.823201
    3.00    -0.558708
    4.00    -0.356338
    5.00    0.351082
The results of the successive approximations are given in Table 6.7.

Table 6.7

Approximation   i      α_i        ω_i
     0          1   0.900000   1.00000
                2   0.600000   2.00000
                3   0.200000   3.00000
     1          1   0.943582   1.09000
                2   0.486676   1.92583
                3   0.169742   2.45651
     2          1   1.00218    1.11113
                2   0.476529   2.01352
                3   0.121287   2.62338
     3          1   1.00002    1.11001
                2   0.503119   2.03179
                3   0.096864   2.80801
     4          1   0.999988   1.11000
                2   0.499537   2.02981
                3   0.100475   2.85931
   True         1   1.000000   1.11000
                2   0.500000   2.03000
                3   0.100000   3.42000
The case of seven observations,

    t        f(t)
  0.00     1.60000
  0.83     0.452413
  1.67    -0.679691
  2.50    -0.820313
  3.33    -0.367504
  4.17    -0.381874
  5.00     0.351082

yielded the results shown in Table 6.8.

Table 6.8

Approximation   i      α_i        ω_i
     0          1   0.900000   1.00000
                2   0.600000   2.00000
                3   0.200000   3.00000
     1          1   0.984727   1.14378
                2   0.535234   2.07572
                3   0.057545   3.29235
     2          1   0.998859   1.11124
                2   0.491950   2.02810
                3   0.107930   3.39973
     3          1   1.00904    1.11003
                2   0.500095   2.03018
                3   0.099857   3.41930
     4          1   1.000000   1.11000
                2   0.500000   2.03000
                3   0.100000   3.42000
   True         1   1.000000   1.11000
                2   0.500000   2.03000
                3   0.100000   3.42000
The case of eight observations,

    t        f(t)
  0.00     1.60000
  0.71     0.694147
  1.43    -0.484599
  2.14    -0.849519
  2.86    -0.649072
  3.57    -0.302561
  4.29    -0.378655
  5.00     0.351082

produced the results given in Table 6.9.

Table 6.9

Approximation   i      α_i        ω_i
     0          1   0.900000   1.00000
                2   0.600000   2.00000
                3   0.200000   3.00000
     1          1   0.981747   1.13890
                2   0.531127   2.06075
                3   0.056783   3.22805
     2          1   0.998520   1.11046
                2   0.492646   2.02508
                3   0.105177   3.41224
     3          1   1.00006    1.11003
                2   0.500003   2.03016
                3   0.099888   3.41981
     4          1   1.000000   1.11000
                2   0.500000   2.03000
                3   0.100000   3.42000
   True         1   1.000000   1.11000
                2   0.500000   2.03000
                3   0.100000   3.42000
13. DISCUSSION
It was found that a reasonably good initial approximation is required to obtain convergence. For example, when experiments similar to those described in Tables 6.7 and 6.8 were carried out with the initial values
(13.1)
it was found that the successive approximations did not converge.
14. PERIODIC FORCING TERMS
Similar techniques can be used to determine the α_i and ω_i in the forcing term when the function under observation, u(t), is given as a solution of a linear differential equation of the form
(14.1) u'' + b_1 u' + b_2 u = α_1 cos ω_1 t + α_2 cos ω_2 t,

       u(0) = c_1,  u'(0) = c_2.
Actually, we can do very much better. Given a set of observations, {u(t_i)}, we can determine the initial conditions c_1, c_2, the system parameters b_1, b_2, and the parameters in the forcing function.
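To score any trial set of parameters against the observations, one integrates (14.1) forward and compares with the data; quasilinearization then adjusts the trial values. A minimal forward integration using a classical fourth-order Runge-Kutta step (the step size and the test values below are illustrative assumptions):

```python
import math

def simulate(b1, b2, a1, w1, a2, w2, c1, c2, tmax, h=0.001):
    """Integrate u'' + b1*u' + b2*u = a1*cos(w1*t) + a2*cos(w2*t),
    u(0) = c1, u'(0) = c2, by the classical Runge-Kutta method,
    and return u(tmax)."""
    def acc(t, u, v):                      # u'' from (14.1)
        return a1*math.cos(w1*t) + a2*math.cos(w2*t) - b1*v - b2*u
    u, v = c1, c2
    n = round(tmax / h)
    for i in range(n):
        t = i * h
        k1u, k1v = v, acc(t, u, v)
        k2u, k2v = v + h/2*k1v, acc(t + h/2, u + h/2*k1u, v + h/2*k1v)
        k3u, k3v = v + h/2*k2v, acc(t + h/2, u + h/2*k2u, v + h/2*k2v)
        k4u, k4v = v + h*k3v, acc(t + h, u + h*k3u, v + h*k3v)
        u += h/6*(k1u + 2*k2u + 2*k3u + k4u)
        v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
    return u

# with no forcing or damping and c1 = 1, c2 = 0, the solution is cos t
u1 = simulate(0.0, 1.0, 0.0, 1.0, 0.0, 2.0, 1.0, 0.0, 1.0)
```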
15. ESTIMATION OF HEART PARAMETERS
Recently it has been found possible to account for the potentials which are observed on the skin of a patient and which are due to ventricular depolarization. The basic idea is to divide the ventricles into a number of segments, each of which contains a dipole source of current. The dipole is located in the center of the segment, and its axis is orthogonal to the surface of its segment. The dipoles have time-varying moments which are dependent upon the depolarization wave which sweeps over the ventricles. These concepts are illustrated in Figs. 6.6 and 6.7. The postulated sequence of depolarization events
[Fig. 6.6-Myocardial segments. Time sequence of human myocardial depolarization, in milliseconds, across the right ventricle, septum, and left ventricle: R. V. apex, L. V. apex, body of R. V., body of L. V., septum L-R.]
is obtained via an extrapolation of the well-known experimental results of A. Scher for canine hearts. The body is viewed as a volume conductor. Surface potential time histories can be produced, in this fashion, which are in good agreement with those obtained from both normal and abnormal hearts, as has been demonstrated by numerous high-speed computer runs.
[Fig. 6.7-Typical dipole moment function F_i(t): current dipole moment plotted against time.]
The aim of the present discussion is to show that it is also possible to solve the inverse problem; that is, given the time histories of the surface potentials we can deduce the time histories of the individual dipole moments. From the mathematical viewpoint we treat this as a nonlinear multipoint boundary-value problem. It is resolved using quasilinearization following the lines laid down in the previous pages.
16. BASIC ASSUMPTIONS
Introduce standard orthogonal reference axes, x, y, z, and consider the dipole located in the ith segment at the point (x_i, y_i, z_i) and having (l_i, m_i, n_i) as the direction cosines of its axis. This produces a potential w at the point (a_j, b_j, c_j) which depends upon the assumptions made about the body and its surrounding medium as volume conductors. If, for simplicity, we consider the body to be homogeneous and of infinite extent, then we may write
(16.1) w = k F_i(t) (cos θ_ij) r_ij^{-2},

where

(16.2) (a) k = const,
       (b) F_i(t) = moment of dipole in ith segment at time t,
       (c) cos θ_ij = [l_i(a_j - x_i) + m_i(b_j - y_i) + n_i(c_j - z_i)] r_ij^{-1},
       (d) r_ij² = (a_j - x_i)² + (b_j - y_i)² + (c_j - z_i)².
Points infinitely far from the dipoles are at zero potential. Let us introduce a reference point (a_0, b_0, c_0) and assume that at N surface points the potentials with respect to that at the reference point are determined. In addition, we assume that the ventricles are divided into M segments.
Now we must consider the dipole moment functions F_i(t), i = 1, 2, ..., M, in more detail. One is pictured in Fig. 6.7. We have found that these curves are well represented as solutions of the
ordinary differential equations
(16.3) 0 ≤ t ≤ 80 msec,  i = 1, 2, ..., M.

The parameters t_i are the times at which the maxima of the moments occur, the k_i are related to the broadness of the curves, and the h_i are related to the maxima of the moments. The analytical solution of these equations is of no interest to us since, in more refined studies, (16.3) will be replaced by much more complex differential equations.
We denote the potential produced at time t at the jth observation point by V_j(t), j = 1, 2, ..., N, with 0 ≤ t ≤ 80 msec. We have

(16.4) V_j(t) = k Σ_{i=1}^{M} F_i(t) (cos θ_ij) r_ij^{-2} - k Σ_{i=1}^{M} F_i(t) (cos θ_i) r_i^{-2},

j = 1, 2, ..., N, where

(16.5) (a) cos θ_i = [l_i(a_0 - x_i) + m_i(b_0 - y_i) + n_i(c_0 - z_i)] r_i^{-1},
       (b) r_i² = (a_0 - x_i)² + (b_0 - y_i)² + (c_0 - z_i)²,

i = 1, 2, ..., M.
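Under the homogeneous, infinite-medium assumption, (16.4)-(16.5) amount to a superposition of dipole terms of the form k F_i(t) cos θ / r², evaluated at the observation point and at the reference point. A sketch of that computation (the geometry and moment values below are hypothetical):

```python
import math

def dipole_term(point, dip, moment, k):
    """Contribution k * F * cos(theta) / r**2 of one dipole at `point`.
    dip = (x, y, z, l, m, n): location and axis direction cosines."""
    x, y, z, l, m, n = dip
    a, b, c = point
    r2 = (a - x)**2 + (b - y)**2 + (c - z)**2
    cos_theta = (l*(a - x) + m*(b - y) + n*(c - z)) / math.sqrt(r2)
    return k * moment * cos_theta / r2

def surface_potential(obs, ref, dipoles, moments, k=1.0):
    """V_j of (16.4): potential at the observation point minus the
    potential at the reference point, summed over all segments."""
    return sum(dipole_term(obs, d, m, k) - dipole_term(ref, d, m, k)
               for d, m in zip(dipoles, moments))

# one hypothetical dipole at the origin, axis along z, unit moment
dip = [(0.0, 0.0, 0.0, 0.0, 0.0, 1.0)]
V = surface_potential((0.0, 0.0, 2.0), (0.0, 0.0, 100.0), dip, [1.0])
```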
17. THE INVERSE PROBLEM
Assume that a heart has been observed for 0 ~ t ~ 80 msec and that the surface potential time series are recorded; that is,
bj(t) = observed surface potential at time t
at the jth observation position,
j = 1, 2, ..., N and t = 0, 1, 2, ..., 80 msec.
We wish to determine the current dipole moment functions F_i(t), i = 1, 2, ..., M, which produce surface potentials that are in best agreement with the observed potentials. Let us put this in more precise mathematical language. We wish to determine the 3M constants k_i, t_i, and h_i in the differential equations of (16.3) which
are such that the sum of the squares of the differences between the potentials produced at the observation points and the observed potentials is as small as possible. If
(17.1) S = Σ_{r=0}^{80} Σ_{j=1}^{N} (V_j(r) - b_j(r))²,

where the functions V_j(t) are determined from (16.4), and the F_i(t) are determined from (16.3), we wish to minimize S through an appropriate choice of the heart parameters t_i, k_i, and h_i, i = 1, 2, ..., M.
18. NUMERICAL EXPERIMENTS
Earlier computational work led us to believe that we could handle the inverse problem in the manner described in the previous sections. Some numerical experiments were conducted to verify this. First, we chose M, the number of ventricular segments, equal to five and N, the number of observation points, equal to three. Then various values were assigned to the parameters t_i, k_i, and h_i, i = 1, 2, ..., 5, to simulate a normal heart and abnormal hearts exhibiting hypertrophies and infarctions. Values were also assigned to (x_i, y_i, z_i), (a_j, b_j, c_j), and (l_i, m_i, n_i). By integrating (16.3) with these values, and using (16.4) and (16.5), we produced sets of hypothetical surface potentials corresponding to a variety of hearts. We used segments 1, 8, 11, 14, and 20, and these five segments of the heart produced good simulation of potentials measured by three orthogonal leads in normal human subjects. (See Fig. 6.8.)
We assumed that the values t_i, i = 1, 2, ..., 5, are known, which we take to represent a heart with a normal conduction system. We also assumed a set of surface potential measurements b_j(r), j = 1, 2, 3; r = 0, 1, 2, ..., 80. Then use of the method of quasilinearization did produce the "missing" values of the heart parameters, k_i and h_i, i = 1, 2, ..., 5. It should be borne in mind that the computational load is considerable. We are required to solve 110 simultaneous linear differential equations and to solve linear algebraic equations
'" -c: :J
g>., O~--P~----------:::0001 ~ g
o 20 40 60 80
t, millisec
149
Fig. 6.S-Potentials produced by five-segment normal heart model
of order ten at each stage in the calculation. Much remains to be done to investigate the range of convergence of the method, to determine the effects of errors in the observed surface potentials, and to make the computational program as efficient as possible. These preliminary experiments merely indicate feasibility of this approach to the formulation and numerical solution of inverse problems in cardiology. (Some programs are given in Appendix 6.)
Chapter Six
COMMENTS AND BIBLIOGRAPHY
§2. See
R. BELLMAN, H. KAGIWADA, and R. KALABA, "A Computational Procedure for Optimal System Design and Utilization," Proc. Nat. Acad. Sci. USA, Vol. 48, 1962, pp. 1524-1528.
§5. This material was first presented in
R. BELLMAN, H. KAGIWADA, R. KALABA, and S. UENO, Inverse Problems in Radiative Transfer: Layered Media, The RAND Corporation, RM-4281-ARPA, December 1964 (to appear in Icarus).
R. BELLMAN, H. KAGIWADA, R. KALABA, and S. UENO, On the Identification of Systems and the Unscrambling of Data-II: An Inverse Problem in Radiative Transfer, The RAND Corporation, RM-4332-ARPA, November 1964.
The values of the reflection matrix for N = 7 and the albedo function λ(τ) = 0.5 + 0.1 tanh(100(τ - 0.5)) for a slab of thickness τ_1 = 1.0 are given below in the table, "The Measurements {b_ij}."
THE MEASUREMENTS {b_ij}
j=1 2 3 4 5 6 7
i = 1 0.079914 0.028164 0.014304 0.009104 0.006707 0.005515 0.004970 2 0.143038 0.091522 0.058437 0.040826 0.031405 0.026378 0.023989 3 0.167000 0.134331 0.099653 0.075106 0.060044 0.051445 0.047248 4 0.178898 0.157955 0.126408 0.099392 0.081253 0.070435 0.065042 5 0.185284 0.170817 0.142072 0.114229 0.094495 0.082423 0.076332 6 0.188723 0.177733 0.150791 0.122665 0.102104 0.089349 0.082870 7 0.190354 0.180898 0.154995 0.126773 0.105829 0.092748 0.086083
It is interesting to compare them with the corresponding values tabulated in Vol. I of this series for homogeneous slabs of thickness 1.0 and albedos 0.4, 0.5, and 0.6. There are significant differences introduced by the inhomogeneities.
§8. Our discussion is based on
R. BELLMAN, H. KAGIWADA, and R. KALABA, Quasilinearization, System Identification, and Prediction, The RAND Corporation, RM-3812-PR, August 1963.
An interesting application of these techniques is to the field of electroencephalography. For the formulation, see
E. M. DEWAN, "Nonlinear Oscillations and Electroencephalography," J. Theoret. Biol., Vol. 7, 1964, pp. 141-159.
§9. This approach was sketched in
R. BELLMAN, H. KAGIWADA, and R. KALABA, "Orbit Determination as a Multipoint Boundary-Value Problem and Quasilinearization," Proc. Nat. Acad. Sci. USA, Vol. 48, 1962, pp. 1327-1329.
An interesting programming advance in the determination of partial derivatives is given in
R. BELLMAN, H. KAGIWADA, and R. KALABA, Wengert's Numerical Method for Partial Derivatives, Orbit Determination, and Quasilinearization, The RAND Corporation, RM-4354-PR, November 1964 (to appear in Comm. ACM).
For another application, see
A. LAVI and J. C. STRAUSS, Parameter Identification in Continuous Systems, to appear.
§11. This section is based on
R. BELLMAN, H. KAGIWADA, and R. KALABA, On the Identification of Systems and the Unscrambling of Data-I: Hidden Periodicities, The RAND Corporation, RM-4285-PR, September 1964.
§15. The mathematical model is presented in
R. SELVESTER, C. COLLIER, and R. PETERSON, "Analogue Computer Model of the Vectorcardiogram," Circulation Research, to appear.
This section is based on joint research reported in
R. BELLMAN, C. COLLIER, H. KAGIWADA, R. KALABA, and R. SELVESTER, "Estimation of Heart Parameters Using Skin Potential Measurements," Comm. ACM, Vol. 7, 1964, pp. 666-668.
We avoided any discussion of recent applications of a combination of dynamic programming and quasilinearization to system identification. See
R. BELLMAN, B. GLUSS, and R. ROTH, Identification of Differential Systems with Time-varying Coefficients, The RAND Corporation, RM-4288-PR, November 1964.
R. BELLMAN, B. GLUSS, and R. ROTH, "Segmental Differential Approximation and the 'Black Box' Problem," J. Math. Anal. Appl., to appear.
R. BELLMAN, B. GLUSS, and R. ROTH, "On the Identification of Systems and the Unscrambling of Data: Some Problems Suggested by Neurophysiology," Proc. Nat. Acad. Sci. USA, Vol. 52, 1964, pp. 1239-1240.
For the application of Laplace transform methods, see
R. BELLMAN, H. KAGIWADA, and R. KALABA, "Identification of Linear Systems Using Numerical Inversion of Laplace Transforms," IEEE Trans., to appear.
Chapter Seven
DYNAMIC PROGRAMMING AND QUASILINEARIZATION
1. INTRODUCTION
In this chapter, we wish to discuss some of the connections between dynamic programming and the theories of positive operators and quasilinearization. To begin with, we trace the origin of quasilinearization to the concept of approximation in policy space. Following this, we will indicate how a combination of quasilinearization and dynamic programming can be used to provide an approximation method for the solution of multidimensional variational problems which greatly increases the scope of the individual theories. Furthermore, we show how to combine dynamic programming and quasilinearization to treat some complicated identification problems.
References to detailed discussions will, as in previous chapters, be found at the end of the chapter.
2. A BASIC FUNCTIONAL EQUATION
Let us consider a multistage decision process taking place in a state space S, where p ∈ S represents a typical state, T(p,q) is an element of a family of transformations on S characterized by q, the policy vector, with q ∈ D, the policy or decision space, and g(p,q) represents the single-stage return.
If we denote by f(p) the total return from an unbounded process, obtained using an optimal policy, the principle of optimality yields the functional equation

(2.1) f(p) = max_q [g(p,q) + f(T(p,q))].
A question of immediate interest is that of determining the existence and uniqueness of solutions of this equation under various assumptions concerning the functions g and T and the spaces S and D.
A variety of methods in classical analysis can be called upon for this purpose, such as fixed-point theorems, approximation by discrete processes, and the method of successive approximations. Thus, for example, using the last method named, we can study (2.1) by means of the sequence {fN(P)} generated by
(2.2) f_N(p) = max_q [g(p,q) + f_{N-1}(T(p,q))],  N ≥ 1,

with f_0(p) chosen in some adroit fashion. This is a meaningful approximation technique because it is equivalent to considering the unbounded process as a limit of bounded processes.
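For a finite state space S and decision space D, the successive approximations of (2.2) can be carried out directly. In the sketch below, the tiny three-state process and the discount factor β (introduced so that the unbounded return is finite, a case not spelled out in the text) are illustrative assumptions:

```python
def successive_approximations(states, decisions, T, g, beta=0.9, sweeps=500):
    """Iterate f_N(p) = max_q [ g(p,q) + beta*f_{N-1}(T(p,q)) ],
    starting from the adroit choice f_0 = 0; this mirrors (2.2) with
    a discount beta < 1 added so the iteration converges."""
    f = {p: 0.0 for p in states}
    for _ in range(sweeps):
        f = {p: max(g(p, q) + beta * f[T(p, q)] for q in decisions)
             for p in states}
    return f

# illustrative three-state process: move toward state 2, where "stay" pays 1
states, decisions = [0, 1, 2], ["stay", "move"]
T = lambda p, q: p if q == "stay" else min(p + 1, 2)
g = lambda p, q: 1.0 if (p == 2 and q == "stay") else 0.0
f = successive_approximations(states, decisions, T, g)
```

Here f(2) converges to 1/(1 - β) and the other values follow by one discounted step per state.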
3. APPROXIMATION IN POLICY SPACE
One of the advantages possessed by the theory of dynamic programming is that it permits, and indeed encourages, a type of approximation, called approximation in policy space, which does not exist in classical analysis. The basic idea is quite simple. Instead of approximating to the function f(p), we approximate to the policy function q(p). From the standpoint of applications, this is a very natural approach. As we shall see in a moment, it possesses an important analytic advantage.
Let us guess an initial policy q_0(p) and determine the corresponding return function f_0(p) by means of the functional equation

(3.1) f_0(p) = g(p,q_0) + f_0(T(p,q_0)).

To obtain an improvement of q_0(p), we ask for the function q_1(p) which maximizes the expression

(3.2) g(p,q) + f_0(T(p,q)).
Using this policy, we generate the function f_1(p) as the solution of

(3.3) f_1(p) = g(p,q_1) + f_1(T(p,q_1)).

As in (3.1), the solution is generated by straightforward iteration,

(3.4) f_1^{(n+1)}(p) = g(p,q_1) + f_1^{(n)}(T(p,q_1)),  n ≥ 0.

In this fashion, we generate a sequence of policies {q_N(p)} and return functions {f_N(p)}.
Let us now examine what is required to justify the statement that q_1(p) is an improvement on q_0(p). We have
(3.5) f_0(p) = g(p,q_0) + f_0(T(p,q_0))
             ≤ g(p,q_1) + f_0(T(p,q_1)),

by virtue of the way in which q_1 was chosen. Comparing this with the equation of (3.4) for f_1(p), we see that the inequality f_0 ≤ f_1 is now a consequence of a positivity relation for the operator
(3.6) T(f) = f(p) - f(T(p,q)).
In this way, then, approximation in policy space leads us to the area of functional inequalities. Once having seen the advantages that accrue, it is a simple step to the procedures described in the previous chapters for converting descriptive processes into decision processes.
Assuming that T(f) satisfies the requisite positivity properties, let us show that each function f_n(p) is bounded by f(p). We have
(3.7) f_n(p) = g(p,q_n) + f_n(T(p,q_n)),

      f(p) = max_q [g(p,q) + f(T(p,q))],
from which the desired inequality follows. This monotonicity of convergence furnishes us with a very flexible
way of establishing the existence of solutions of (2.1).
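On the same kind of finite model, approximation in policy space is equally brief: evaluate the guessed policy by iterating (3.1), improve it by maximizing (3.2), and repeat; the returns then increase monotonically toward f(p). The toy model and the discount factor below are illustrative assumptions, not the text's:

```python
def approximation_in_policy_space(states, decisions, T, g, beta=0.9,
                                  rounds=10, evals=400):
    """Guess a policy q_0(p), evaluate its return f_0 by iterating (3.1),
    improve the policy by maximizing g(p,q) + beta*f_0(T(p,q)) as in (3.2),
    and repeat; a discount beta < 1 is assumed for convergence."""
    policy = {p: decisions[0] for p in states}
    for _ in range(rounds):
        f = {p: 0.0 for p in states}
        for _ in range(evals):                 # evaluate the current policy
            f = {p: g(p, policy[p]) + beta * f[T(p, policy[p])]
                 for p in states}
        policy = {p: max(decisions,            # improve the policy
                         key=lambda q, p=p: g(p, q) + beta * f[T(p, q)])
                  for p in states}
    return policy, f

# illustrative three-state process: move toward state 2, where "stay" pays 1
states, decisions = [0, 1, 2], ["stay", "move"]
T = lambda p, q: p if q == "stay" else min(p + 1, 2)
g = lambda p, q: 1.0 if (p == 2 and q == "stay") else 0.0
policy, f = approximation_in_policy_space(states, decisions, T, g)
```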
4. DYNAMIC PROGRAMMING AND QUASILINEARIZATION
The principal difficulty in the application of the theory of quasilinearization lies in the restrictive conditions required to ensure convergence of the sequence of successive approximations. The examples given in Chapter Six, however, illustrate the fact that these methods often work better in practice than might be expected. Nevertheless, this is not a very satisfactory state of affairs and it is essential to develop foolproof (or should we say divergence-proof) techniques.
Suppose, for instance, that we wish to apply quasilinearization to the minimization of the functional
(4.1) J(u) = ∫_0^T g(u,u') dt,

where u is a scalar function constrained by the terminal conditions u(0) = c_0, u(T) = c'. If we take u_0(t), the initial approximation, to be the straight line joining (0,c_0) and (T,c'), then to guarantee convergence we must put a bound on T. If the initial approximation were more adroitly chosen, we could afford a larger value of T. Unfortunately, in many applications, we do not possess sufficient information to do better initially than some simple choice such as the foregoing.
To overcome this difficulty, let us use a blend of quasilinearization and dynamic programming. Essentially, we intend to apply quasilinearization for local optimization, and dynamic programming to obtain global optimization.
To illustrate the combination of techniques, let us return to the variational problem posed above. In the (c,T)-plane, we construct a grid of points as indicated in Fig. 7.1.
We have kept the number of points small (and the example one-dimensional) in order to present the ideas clearly without complicating the diagrams or notation. In practice, we could very easily accommodate grids of several thousand points in phase space in the treatment of multidimensional variational problems.
[Fig. 7.1-The grid in the (c,T)-plane: c_0 at the origin; columns of points c_1, c_2, c_3; c_4, c_5, c_6; c_7, c_8, c_9; and c_10, c_11, c_12 at T = δ, 2δ, 3δ, 4δ; and c' at T = 5δ.]
Let us suppose that from c_0 one can go only to one of the three points c_1, c_2, c_3; from c_1, c_2, c_3 to one of the points c_4, c_5, c_6; and so on; from c_10, c_11, or c_12, one must go to c'. The allowable paths from one grid point to another are those that minimize the functional
(4.2) J(u) = ∫_{kδ}^{(k+1)δ} g(u,u') dt,

with u(kδ) = c_i, u((k+1)δ) = c_j, where c_i and c_j are "nearest neighbors" in the foregoing sense. Call these paths geodesics and the corresponding value of J(u) a geodesic length. Let d_ij denote this length.
We have previously indicated how the geodesic distances can be determined via quasilinearization if δ is sufficiently small. The bound on δ can be determined a priori in terms of bounds on g and its partial derivatives. Let us now show how to use dynamic programming to piece these geodesic arcs together to find an excellent approximation to the actual minimizing function.
5. FUNCTIONAL EQUATIONS
The optimization problem posed in the preceding section is a special case of a more general routing problem for which very
effective algorithms exist. In this case, the structure of the mesh makes the calculation particularly simple.
Introduce the function
(5.1) f_i = the minimum total geodesic length from c_i to c',
defined for i = 0, 1, ..., 12. Then the principle of optimality yields the functional equation

(5.2) f_i = min_{j≠i} [d_ij + f_j],

connecting the f_i. In this case, this equation reduces to a recurrence relation, since f_i is immediately determined for the points c_10, c_11, c_12.
For these, we have

(5.3) f_i = d_{ic'},  i = 10, 11, 12.
Using these values in (5.2), we can calculate f_i for c_7, c_8, c_9. Continuing in this fashion, we calculate the values for c_4, c_5, c_6, then for c_1, c_2, c_3, and finally f_0, the desired value.
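Once the d_ij are known, the backward sweep implied by (5.2) and (5.3) takes only a few lines; the grid and the geodesic lengths below are hypothetical stand-ins for values computed by quasilinearization:

```python
def minimal_total_length(layers, d):
    """Sweep (5.2) backward through the columns of the grid of Fig. 7.1:
    f_i = min over nearest neighbours j of [ d_ij + f_j ], starting
    from f = 0 at the terminal point c'."""
    f = {layers[-1][0]: 0.0}
    for layer, nxt in zip(reversed(layers[:-1]), reversed(layers[1:])):
        f = {p: min(d[p, q] + f[q] for q in nxt) for p in layer}
    return f[layers[0][0]]

# hypothetical geodesic lengths on a tiny two-column grid
layers = [["c0"], ["c1", "c2"], ["c'"]]
d = {("c0", "c1"): 1.0, ("c0", "c2"): 4.0,
     ("c1", "c'"): 5.0, ("c2", "c'"): 1.0}
total = minimal_total_length(layers, d)    # best path is c0 -> c2 -> c'
```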
6. DISCUSSION
Once the quantities d_ij have been determined, the determination of the f_i requires negligible time. The time required to obtain the d_ij depends directly, of course, upon their number, which, in turn, depends on δ. There are, however, many techniques that can be used to reduce the time of computation.
Having determined the optimal "polygonal" path in this fashion, we can, if we wish, use it as the initial approximation in an attack on the original problem via quasilinearization. In many cases, the "polygonal" path will provide a sufficiently accurate solution.
7. DYNAMIC PROGRAMMING AND DIFFERENTIAL APPROXIMATION
Let us now rapidly sketch ways in which we can combine dynamic programming and differential approximation. Suppose that the
problem we are considering is that of determining the parameters a and c so that the solution of
(7.1) dv/dt = g(v,a),  v(0) = c,
is close to the given function u(t) in the sense of minimizing the expression

(7.2) ∫_0^T (u(t) - v(t))² dt.
As in the minimization problem discussed in the previous sections, the quasilinearization technique can fail if the initial approximation is not sufficiently close.
One reason for this failure may be that u(t) is really composed of parts of a number of distinct processes. Thus, if we write v(t) = v(t,a,c) (denoting explicitly the dependence on a and c), it may well be that

(7.3) u(t) = v(t,a_1,c_1),  0 ≤ t ≤ t_1,
           = v(t,a_2,c_2),  t_1 ≤ t ≤ t_2,
           ..................
           = v(t,a_N,c_N),  t_{N-1} ≤ t ≤ t_N.
Knowing that u(t) has this structure, a quite typical situation in many cases, how do we determine the quantities a_i, c_i, and t_i? This is a system identification problem.
8. DYNAMIC PROGRAMMING AND THE IDENTIFICATION OF SYSTEMS
To indicate the way in which we intend to apply dynamic programming, let us consider perhaps the simplest version of the foregoing problem, that of obtaining an optimal polygonal approximation to a given function u(t) over an interval [0, T]. Let us suppose that we allow a polygonal curve with (N + 1) line segments, as indicated in
[Fig. 7.2-Polygonal approximation to u(t) over [0, T], with breakpoints t_1, ..., t_N.]
Fig. 7.2. We wish to determine the N values t_i and the 2N parameters a_i and b_i so as to minimize the quantity

(8.1) D_N = Σ_{i=0}^{N} ∫_{t_i}^{t_{i+1}} (u(t) - a_i - b_i t)² dt.

Here t_0 = 0, t_{N+1} = T. Write
(8.2) φ(s_1,s_2) = min_{a,b} ∫_{s_1}^{s_2} (u(t) - a - bt)² dt,

for any two numbers s_1, s_2 satisfying 0 ≤ s_1 ≤ s_2 ≤ T. The function φ(s_1,s_2) is readily obtained by means of the two linear algebraic equations obtained by taking partial derivatives with respect to a and b. Introduce the functions
(8.3) f_N(a) = min_{a_i, b_i, t_i} D_N,

defined for N = 0, 1, 2, ..., a ≥ 0, the minimization being taken over polygonal approximations on the interval [0, a]. Then the principle of optimality readily yields the recurrence relation

(8.4) f_N(a) = min_{0 ≤ t_N ≤ a} [f_{N-1}(t_N) + φ(t_N,a)],  N ≥ 1,

      f_0(a) = φ(0,a).

Once the functions φ(t_N,a) have been calculated, the determination of the sequence {f_N(a)} is quite routine.
9. DISCUSSION
The quantity φ(s_1,s_2) arises in connection with the approximation of u(t) by means of a solution of

(9.1) d²v/dt² = 0,  v(0) = a,  v'(0) = b.

This is a very simple form of differential approximation. If we wish instead to use the solution of (7.1) for curve fitting, which is to say, to engage in more sophisticated differential approximation, we require the services of quasilinearization as described in Chapter Six. Once again, there are a number of techniques we can use to decrease the time required to compute φ(s_1,s_2) for a grid of (s_1,s_2)-values in 0 ≤ s_1 ≤ s_2 ≤ T.
Let us note finally that we could use a Čebyšev criterion,

(9.2) min_{a_i, b_i, t_i} max_t |u(t) - a_i - b_i t|,

in place of the mean-square criterion without in any way increasing the computational burden.
Chapter Seven
COMMENTS AND BIBLIOGRAPHY
§1-§3. For an introduction to the theory of dynamic programming, with applications to numerous areas, see
R. BELLMAN, Dynamic Programming, Princeton University Press, Princeton, New Jersey, 1957.
R. BELLMAN and S. DREYFUS, Applied Dynamic Programming, Princeton University Press, Princeton, New Jersey, 1962.
R. BELLMAN, Adaptive Control Processes: A Guided Tour, Princeton University Press, Princeton, New Jersey, 1961.
For a discussion of positive operators, and many further references, see
W. WALTER, Differential- und Infegral-Ungleichungen, Springer, Berlin, 1964.
E. F. BECKENBACH and R. BELLMAN, Inequalities, Springer, Berlin, 1961.
§4. These techniques have been developed in some joint work with S. Azen.
§5. For a detailed discussion of routing problems, see the book by Bellman and Dreyfus referred to above, as well as
R. KALABA, "On Some Communication Network Problems," Combinatorial Analysis, American Mathematical Society, Providence, R. I., 1960.
M. POLLACK and N. WIEBENSON, "Solutions of the Shortest-route Problem-A Review," Operations Res., Vol. 8, 1960, pp. 224-230.
D. L. BENTLEY and K. L. COOKE, "Convergence of Successive Approximations in the Shortest Route Problem," J. Math. Anal. Appl., to appear.
§8. See
R. BELLMAN, "On the Approximation of Curves by Line Segments Using Dynamic Programming," Comm. ACM, Vol. 4, 1961, p. 284.
R. BELLMAN, B. GLUSS, and R. ROTH, "On the Identification of Systems and the Unscrambling of Data: Some Problems Suggested by Neurophysiology," Proc. Nat. Acad. Sci. USA, Vol. 52, 1964, pp. 1239-1240.
--, "Segmental Differential Approximation and the 'Black Box' Problem," J. Math. Anal. Appl., to appear.
R. BELLMAN, H. KAGIWADA, and R. KALABA, Dynamic Programming and an Inverse Problem in Neutron Transport Theory, The RAND Corporation, RM-4495-PR, March 1965.
Appendix One
MINIMUM TIME PROGRAM
C C
C
C
260514 TRAJECTORY
C     FOR EULER EQUATION 1. + (Y')**2 + Y*Y''
      COMMON T,N,C,K
      DIMENSION T(500),C(10)
   20 READ 10,XA,YA,XB,YB,H,NHP,NFP,L
   10 FORMAT(5E12.4,3I3)
      PRINT 100,XA,YA,XB,YB,H,NHP,NFP,L
O.
100 FORMATIIH15X5HINPUTI IH05X23HXA,YA,XB,YB,H,NHP,NFP,L I 1 IH05X 211P2E14.4,3X),El4.4, OP3I6)
C
C C C C C
"IH= I XA-XB) IH NN=NH/NHP
L=HIGHEST APPROXIMATION IXA,YA), INITIAL POINT IXB,Y8), FI"IAL POINT
MAX=L+l Cll)=IYB-YA)/{XB-XA)
KM=O PRINT 103,KM,CIl)
103 FORMATIIH048X,I2,lOX 2HC=,lPi14.6) C
C
DO 3 K=2,MAX N=2*K+4
DO 30 1=1,500 30 TIll=O.O
T(2)=XA TI 3) =t-j
KM=K-l DLl 1 I=l,KM
J=2*I+2 TIJ)=YA
1 TIJ+ll=CII) 6 TIN-2)=0.
C
TI N-l) =0. TIN) =0. TlN+l)=l. TIN+2)=1. TIN+3)=0.
PRINT 104 104 FORMATIIH142XIHX,20X4HUIX))
C
C
9 8
106 C
CALL INT(T,Nol,O.,O.,O.,O.,O.,O.)
DO 8 I=l,NN 00 9 J=l,NHP CALL INTM
PRINT l06,TIZ),TIN-4) FORMATI29XIP2E21.6)
C
260<;15 TRAJECTORY
CIK)=IYB-TIN-2)-YA*TIN+2»/TIN) 3 PRINT 103,KM,CIK)
C FINAL APPROXIMATION AND EXACT SOLUTION XI=0.S*IXH+XA+IYB**2-YA**2)/IXB-XA)J RS=YB**Z+IXB-XI)**2
PRINT 105, XI,RS lOS FORMATIIHI28XIPlE21.6)
C
C
K=MAX+l N=2*K+4
OU 31 1=1,500 31 TlI)=O.O
Tl 2)=XA T(3)=H DO 11 I=I,MAX
J=2*1+2 TlJJ=YA
II TIJ+I)=CIl)
C
TIN-Z)=O. TlN-lJ=O. T( NJ=O. Tl N+I )=1. TIN+2)=I. TIN+3)=0.
PRINT 101 101 FORMATIIHI42XIHX,ZOX4HUIXJ,15X8HEXACT UJ
C
C
19
2 102
C
107 C
CALL INTIT,~,I,O.,O.,O.,O.,O.,O.)
)(=XA F=NFP O=F*H "lNN=NH/NHP
DO 2 I = I, NNN 00 19 J=I,NFP
CALL INTM )(=X+O Y=SQRTFIABSFIRS-I)(-XIJ**2JJ
PRINT 102,TI2J,TIN-4),Y FURMATI29X,IP3E21.6J
CIKJ=IYB-TIN-2)-YA*TIN+Z) J/TIN) CA={XI-)(A)/YA PRINT 103,MAX,CIKI PRINT 107.CA FORMATIIH060X2HC=,IPEI4.6)
GO TO 20 ENOII,I,O,O,O,O,l,O,O,O,O,O,O,O,O)
C
260'>13 OAUX
SUilROUTI:,E DAUX Cl)Mi~O,~ T,\j,C,K DlHENSION TISOO),CI10)
TlN+4)=Cll) f(,'H5):0.O
      DO 1 I=2,K
      J=2*I+2
      M=N+J
      T(M)=T(J+1)
      AA=-2.*T(J-1)/T(J-2)
      BB=(1.+T(J-1)**2)/T(J-2)**2
      CC=-2./T(J-2)
      T(M+1)=AA*T(J+1)+BB*T(J)+CC
      T(M+2)=T(J+3)
      T(M+3)=AA*T(J+3)+BB*T(J+2)
      T(M+4)=T(J+5)
    1 T(M+5)=AA*T(J+5)+BB*T(J+4)
      RETURN
ENDII,I,O,O,O,O,I,O,O,O,O,O,O,O,O)
Appendix Two
DESIGN AND CONTROL PROGRAM
260550 DESIGN PROBLEM
COMMON NI,KMAX,NP,NMAX,HGRID,CX,T,X,y,U,V,W,PREV,H,P,A,8.C.HE DIMENSION T{25Dl,WI~,SOOI,HI~,~,500I,PI~,SOOI,AI2,2I,8121,CI.
PREVIIlI C X. STATE VARIA8LE C Y, CONTROL VARIA8LE C UI=MU1, LAGRANGE MULTIPLIER C VI=AI, DESIGN PARAMETER C
C
I CALL INPUT 2 CALL START
C K ITERATIONS C
3 00 19 K=I,KMAX C INITIALIZE
00 5 1=1,250 5 TI I 1"'0.
Tl21:0. Tt 31=HGRID Tt~I"'l. Tt91z 1. TIIIlI:I. Tt 19J"'I. NzO PRINT III, ITILI,L=~,~31 NEQ=20 CALL INTIT,NEQ,Nl,O.,O.,O.,D.,O.,O.1
C INTEGRATE P'S AND H'S, STORING GRID VALUES
C
C
N .. O 8 00 II M= I , NP
NaN+1 )(zWCI,NI ,(=WI2,NI UzWC3,NJ VzWCll,NI CAll INTM L=3 00 9 1=1,1l
00 9 Jzl,~ lzL+I
9 HII,J,NI=TILI 00 10 JzI.1I
L=L+I 10 P(J.N)=TILI II CONTINUE
6 PRINT II"N,ITtL),L=Il,~31 PRIHT .... .1.1..1--.t-l--t (.HI t. JLNI. J'" 1.4J ... jzl. 111 .. 1 PI J.N). J=I. III
III FORMATIIHOI9XII0,IIE20.6/130X~E20.611 IFIN-NMAXI8.7.7
7 PRINT IID.K ~O FORMATIIHO/6SX9HITERATION.131
C EVALUATE CONSTANTS C
C
C
NaNMAX AII.1J"'HI2.2,NI AI2.1,=HI2,3,NI Al1,21"'HI3,2,NI + HI4,2,NI A12,2J"'H13,3,NI + HI4,3,NI
BII'=-PI2,NI - CX*HII.2,NI 8121=-pel,NI - CX*HII,3.NI
OET-AI 1,11*A12,21 - AI2,II*AI 1,21 G=ABSFIOETI IFIG-.0000011200,200, 12
C 200 201
PRINT 201,OET.IIAII,JI.J=I,2I,8IJJ,I=1.21 FORMATIIH020X12HOETERMJNANT=,EI6.6/r40X3E20.611
CALL EXIT C
c
12 ClI'=CX CI21-18111*AI2,21 - B12J*A11,211/0ET Cl31=18121*AII,11 - 81 II*AI2,ll Ii0ET C(41=Cl31 GRIO=O. PRINT 45, GRIO,ICIJI,J=l,41
45 FORMATrlH026X4HGRIO,I4X1HX,19X1HY,ISX2HMU,19XIHAI 20XF10.4,4E20.61
C NEW VALUES OF VARlA8LES
C
N=O 13 DO 14 Mal,NP
N=N+1 DO 14 1= 1,4 WI hNI"'PI I,NI 00 14 J-I,4
14~_ILL~=WIJ.NI + CIJI*HIJ,I,NI FN=N GRJD=FN*HGRJO PRINT 50, GRIO,IWII.NI,J=I.41 IFIN-NMAXI13,15.15
C COMPARE C'S, PRESENT AND PREVIOUS
C
c
15 DO 16 lal.4 G=A8SFICII'-PREVIIIJ I fiG';:~oooOO 1116,16,11
16 CONTINUE GO TO I
11 DO IS I = I. 4 18 PREvel'=ClII
19 CONTINUE GO TO 1
50 FORMATI20XFI0.4,4E20.61 ENDII,I,O,O,O,O,I,O,O,O,O,O,O,O,OI
169
260552 INPUT - DESIGN
      SUBROUTINE INPUT
      COMMON N1,KMAX,NP,NMAX,HGRID,CX,T,X,Y,U,V,W,PREV,H,P,A,B,C
      DIMENSION T(250),W(4,500),H(4,4,500),P(4,500),A(2,2),B(2),C(4),
     1 PREV(4)
C Nl. INTEGRATION UPTlOJIi C KMAX, NUMBER OF ITERATIONS C NP. NUMBERQf GRIDS BEFORE PRINT-QUT C NMAX, TOTAL NUMBER OF INTEGRATIONS IN INTERVAL C HGRID. GRID SIZE C ex, X(O) C
      READ 110,N1,KMAX,NP,NMAX,HGRID,CX
      PRINT 10,N1,KMAX,NP,NMAX,HGRID,CX
  110 FORMAT(4I4,2E8.2)
   10 FORMAT(1H120X14HDESIGN PROBLEM//
     1 48X2HN1,6X4HKMAX,8X2HNP,6X4HNMAX,13X5HHGRID,16X4HX(0)/
     2 30X4I10,2E20.6)
      RETURN
      END(1,1,0,0,0,0,1,0,0,0,0,0,0,0,0)
260551 START (FOR 260550)
      SUBROUTINE START
      COMMON N1,KMAX,NP,NMAX,HGRID,CX,T,X,Y,U,V,W,PREV,H,P,A,B,C
      DIMENSION T(250),W(4,500),H(4,4,500),P(4,500),A(2,2),B(2),C(4),
     1 PREV(4)
C INITIALLY T=0, X(T)=CX, Y(T)=0, MU(T)=0, A(T)=0
      DO 1 N=1,NMAX
      W(1,N)=CX
      W(2,N)=0.
      W(3,N)=0.
    1 W(4,N)=0.
      DO 2 I=1,4
    2 PREV(I)=W(I,1)
      K=0
      PRINT 40,K
      GRID=0.
      PRINT 45,GRID,(W(I,1),I=1,4)
   40 FORMAT(1H0/65X9HITERATION,I3)
   45 FORMAT(1H026X4HGRID,14X1HX,19X1HY,18X2HMU,19X1HA/20XF10.4,4E20.6/
     1 1H019X22HFOR ALL VALUES OF TIME)
      RETURN
      END(1,1,0,0,0,0,1,0,0,0,0,0,0,0,0)
260553 DAUX (FOR DESIGN)
      SUBROUTINE DAUX
      COMMON N1,KMAX,NP,NMAX,HGRID,CX,T,X,Y,U,V,W,PREV,H,P,A,B,C,NEQ
      DIMENSION T(250),W(4,500),H(4,4,500),P(4,500),A(2,2),B(2),C(4),
     1 PREV(4)
      L=20
      DO 1 I=1,4
      L=L+4
      T(L)=-V*T(L-20) + T(L-19) - X*T(L-17)
      T(L+1)=T(L-20) + V*T(L-19) + Y*T(L-17)
      T(L+2)=Y*T(L-20) + X*T(L-19)
    1 T(L+3)=0.
      T(40)=-V*T(20) + T(21) - X*T(23) + V*X
      T(41)=T(20) + V*T(21) + Y*T(23) - V*Y
      T(42)=Y*T(20) + X*T(21) - X*Y
      T(43)=0.
      RETURN
      END(1,1,0,0,0,0,1,0,0,0,0,0,0,0,0)
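The design-problem deck above implements the quasilinearization recurrence described in the text: given the k-th approximation, the (k+1)-st iterate solves the linear equation obtained by expanding the right-hand side about the k-th. As a modern illustration (not part of the original RAND deck; the grid size, iteration count, and test equation are arbitrary choices), the following Python sketch applies the same recurrence to the scalar problem y' = y&#178;, y(0) = 1, whose exact solution y(t) = 1/(1&#8722;t) gives y(0.5) = 2.

```python
# Quasilinearization sketch for y' = y**2, y(0) = 1 on [0, 0.5].
# Each iterate solves the *linear* ODE
#     y_{k+1}' = f(y_k) + f'(y_k)*(y_{k+1} - y_k) = 2*y_k*y_{k+1} - y_k**2
# by forward Euler on a fixed grid, using the previous iterate's grid values.

def quasilinearize(t_end=0.5, n=1000, iters=8):
    h = t_end / n
    prev = [1.0] * (n + 1)          # initial approximation y_0(t) = 1
    for _ in range(iters):
        y = [1.0] * (n + 1)         # boundary condition y(0) = 1 each sweep
        for i in range(n):
            y[i + 1] = y[i] + h * (2.0 * prev[i] * y[i] - prev[i] ** 2)
        prev = y
    return prev[-1]                  # approximation to y(0.5) = 2

print(quasilinearize())
```

The iterates converge quadratically (the book's Chapter One theme), so a handful of sweeps reaches the fixed point of the discretization; the remaining error is the Euler truncation error.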
Appendix Three
PROGRAM FOR RADIATIVE TRANSFER-INVERSE PROBLEM FOR TWO SLABS, THREE CONSTANTS
$JOB 2609,STRAT3,HK0160,5,0,20,P
$PAUSE
$IBJOB STRAT2 MAP
$IBFTC RTINV
C
      COMMON N,RT(7),WT(7),WR(7),AR(7,7),NPRNT,M1MAX,KMAX,DELTA,XTAU,
     1 XLAM(3), B2(7,7),R2(7,7),IFLAG,R(28,101),T(1491),SIG,
     2 P(28,101),H(28,3,101),PLAM(3),HLAM(3,3),P2(7,7),
     3 H2(7,7,3),CONST(3),NEQ
C
C PHASE I
C
      READ 1000,N
      PRINT 899
      PRINT 900,N
      READ 1001,(RT(I),I=1,N)
      PRINT 901,(RT(I),I=1,N)
      READ 1001,(WT(I),I=1,N)
      PRINT 901,(WT(I),I=1,N)
      DO 2 I=1,N
      WR(I)=WT(I)/RT(I)
      DO 2 J=1,N
    2 AR(I,J)= 1.0/RT(I) + 1.0/RT(J)
  899 FORMAT(1H146X36HRADIATIVE TRANSFER - INVERSE PROBLEM /)
 1000 FORMAT(6I12)
  900 FORMAT(6I20)
 1001 FORMAT(6E12.8)
  901 FORMAT(6E20.8)
      READ 1000,NPRNT,M1MAX,KMAX
      PRINT 900,NPRNT,M1MAX,KMAX
      READ 1001,DELTA
      PRINT 901,DELTA
      READ 1001,XTAU,(XLAM(I),I=1,3)
      PRINT 902
      PRINT 903,XTAU,(XLAM(I),I=1,3)
  902 FORMAT(1H123HPHASE I - TRUE SOLUTION //)
  903 FORMAT(1H0
     1 1X11HTHICKNESS =, F10.4 /
     2 1X11HALBEDO(X) =, 20HA + B*TANH(10*(X-C)) //
     3 1X3HA =, E16.8, 10X3HB =, E16.8, 10X3HC =, E16.8 //)
      CALL NONLIN
      DO 3 I=1,N
      DO 3 J=1,N
    3 B2(I,J)=R2(I,J)
C
C PHASE II
C
    4 READ 1001,XTAU,(XLAM(I),I=1,3)
      K=0
      PRINT 904,K
      PRINT 903,XTAU,(XLAM(I),I=1,3)
CALL NONLIN
C
  904 FORMAT(1H1 13HAPPROXIMATION, I3/)
C
C QUASILINEARIZATION ITERATIONS
C
      DO 5 K1=1,KMAX
      PRINT 904,K1
      CALL PANDH
      CALL LINEAR
    5 CONTINUE
      READ 1000,IGO
      GO TO (1,4),IGO
      END
$IBFTC DAUX LIST
      SUBROUTINE DAUX
      DIMENSION V2(7,7),X(3),F(7),G(7)
     1 ,VLAM(3)
      COMMON N,RT(7),WT(7),WR(7),AR(7,7),NPRNT,M1MAX,KMAX,DELTA,XTAU,
     1 XLAM(3), B2(7,7),R2(7,7),IFLAG,R(28,101),T(1491),SIG,
     2 P(28,101),H(28,3,101),PLAM(3),HLAM(3,3),P2(7,7),
     3 H2(7,7,3),CONST(3),NEQ
      GO TO (1,2),IFLAG
C
C NONLINEAR
C
    1 L=3
      DO 4 I=1,N
      DO 4 J=1,I
      L=L+1
    4 V2(I,J)=T(L)
      DO 5 I=1,N
      DO 5 J=I,N
    5 V2(I,J)=V2(J,I)
      DO 51 I=1,3
      L=L+1
   51 VLAM(I)=T(L)
      SIG=T(2)
      Y=XTAU*SIG
      DO 52 I=1,3
   52 X(I)=VLAM(I)
      CALL ALBEDO(Y,X,Z)
      ZLAMDA=Z
      DO 6 I=1,N
      F(I)=0.0
      DO 7 K=1,N
    7 F(I)=F(I) + WR(K)*V2(I,K)
    6 F(I)=0.5*F(I) + 1.0
      DO 8 I=1,N
      DO 8 J=1,I
C
      L=L+1
      DR=-AR(I,J)*V2(I,J) + ZLAMDA*F(I)*F(J)
    8 T(L)=DR
      DO 9 I=1,3
      L=L+1
    9 T(L)=0.0
      RETURN
C
C LINEAR
C
    2 SIG=T(2)
      Y=XTAU*SIG
      DO 21 I=1,3
   21 X(I)=XLAM(I)
      CALL ALBEDO(Y,X,Z)
      ZLAMDA=Z
      DO 16 I=1,N
      F(I)=0.0
      DO 17 K=1,N
   17 F(I)=F(I) + WR(K)*R2(I,K)
   16 F(I)=0.5*F(I) + 1.0
C P'S
C
      L=3
      DO 14 I=1,N
      DO 14 J=1,I
      L=L+1
   14 V2(I,J)=T(L)
      DO 15 I=1,N
      DO 15 J=I,N
   15 V2(I,J)=V2(J,I)
      DO 18 I=1,3
      L=L+1
   18 VLAM(I)=T(L)
      DO 10 I=1,N
      G(I)=0.0
      DO 10 K=1,N
   10 G(I)=G(I) + (V2(I,K)-R2(I,K))*WR(K)
      ARG=10.0*(Y-XLAM(3))
      TARG=TANH(ARG)
      XTANX=-10.0*XLAM(2)*(1.0-TARG**2)
      M=3 + NEQ
      DO 12 I=1,N
      DO 12 J=1,I
      FIJ=F(I)*F(J)
      CAPF=-AR(I,J)*R2(I,J) + ZLAMDA*FIJ
      T1=CAPF
      T2=-AR(I,J)*(V2(I,J)-R2(I,J))
     1 + 0.5*ZLAMDA*(F(I)*G(J) + F(J)*G(I))
      T3=(VLAM(1)-XLAM(1))*FIJ
(
      T4=(VLAM(2)-XLAM(2))*TARG*FIJ
      T5=(VLAM(3)-XLAM(3))*XTANX*FIJ
      M=M+1
   12 T(M)=T1+T2+T3+T4+T5
      DO 19 I=1,3
      M=M+1
   19 T(M)=0.0
C H'S
C
      DO 100 K=1,3
      DO 24 I=1,N
      DO 24 J=1,I
      L=L+1
   24 V2(I,J)=T(L)
      DO 25 I=1,N
      DO 25 J=I,N
   25 V2(I,J)=V2(J,I)
      DO 26 I=1,3
      L=L+1
   26 VLAM(I)=T(L)
      DO 20 I=1,N
      G(I)=0.0
      DO 20 J=1,N
   20 G(I)=G(I) + V2(I,J)*WR(J)
      DO 22 I=1,N
      DO 22 J=1,I
      FIJ=F(I)*F(J)
      T1=0.0
      T2=-AR(I,J)*V2(I,J) + 0.5*ZLAMDA*(F(I)*G(J) + F(J)*G(I))
      T3=VLAM(1)*FIJ
      T4=VLAM(2)*TARG*FIJ
      T5=VLAM(3)*XTANX*FIJ
      M=M+1
   22 T(M)=T1+T2+T3+T4+T5
      DO 29 I=1,3
      M=M+1
   29 T(M)=0.0
  100 CONTINUE
      RETURN
      END
$IBFTC NONLI N SUBROUTINE NONLIN
177
COMMON N.RT(7),WTI7).WRI7),ARI7.7).NPRNT.M1MAX,KMAX.DELTA.XTAU 1 XLAM(3), 6217,7) ,R217,7),IFLAG.RI28.1Cl).TI1491).SIG. 2 P 1 28 • 1 0 1 ) , H 1 28.3, 101 ) , P LA ~1 1 3 ) • H LA" 1 3 • 3) • P 2 1 7 • 7) • , H217.7.3l.CONSTI,ltNEQ
C NONLINEAR D.E. FOR TRUE SOLUTION OR FOR INITIAL APPROX.
C
IFLAG=l
C
      T(2)=0.0
      T(3)=DELTA
      M=1
      L1=0
      L3=3
      DO 1 I=1,N
      DO 1 J=1,I
      L1=L1+1
      L3=L3+1
      R2(I,J)=0.0
      R(L1,M)=R2(I,J)
    1 T(L3)=R2(I,J)
      DO 2 I=1,3
      L3=L3+1
    2 T(L3)=XLAM(I)
      NEQ=(N*(N+1))/2 + 3
      CALL INTS(T,NEQ,2,0,0,0,0,0,0)
      SIG=T(2)
      CALL OUTPUT
      DO 5 M1=1,M1MAX
      DO 4 M2=1,NPRNT
      CALL INTM
      M=M+1
      L1=0
      L3=3
      DO 3 I=1,N
      DO 3 J=1,I
      L1=L1+1
      L3=L3+1
      R2(I,J)=T(L3)
    3 R(L1,M)=R2(I,J)
    4 SIG=T(2)
    5 CALL OUTPUT
      RETURN
      END
$IBFTC PANDH
      SUBROUTINE PANDH
      COMMON N,RT(7),WT(7),WR(7),AR(7,7),NPRNT,M1MAX,KMAX,DELTA,XTAU,
     1 XLAM(3), B2(7,7),R2(7,7),IFLAG,R(28,101),T(1491),SIG,
     2 P(28,101),H(28,3,101),PLAM(3),HLAM(3,3),P2(7,7),
     3 H2(7,7,3),CONST(3),NEQ
      IFLAG=2
      T(2)=0.0
      T(3)=DELTA
      M=1
      L1=0
      L3=3
      DO 1 I=1,N
      DO 1 J=1,I
      L1=L1+1
      L3=L3+1
      P(L1,M)=0.0
    1 T(L3)=P(L1,M)
      DO 2 I=1,3
      L3=L3+1
      PLAM(I)=0.0
    2 T(L3)=PLAM(I)
C
C H'S
C
      DO 7 K=1,3
      L1=0
      DO 3 I=1,N
      DO 3 J=1,I
      L1=L1+1
      L3=L3+1
      H(L1,K,M)=0.0
    3 T(L3)=H(L1,K,M)
      DO 7 I=1,3
      L3=L3+1
      HLAM(I,K)=0.0
      IF(I-K)7,6,7
    6 HLAM(I,K)=1.0
    7 T(L3)=HLAM(I,K)
      L=0
      DO 8 I=1,N
      DO 8 J=1,I
      L=L+1
    8 R2(I,J)=R(L,M)
      DO 9 I=1,N
      DO 9 J=I,N
    9 R2(I,J)=R2(J,I)
      NEQ=4*((N*(N+1))/2 + 3)
      CALL INTS(T,NEQ,2,0,0,0,0,0,0)
      LMAX=(N*(N+1))/2
      PRINT 52,T(2),(P(L,M),H(L,1,M),L=1,LMAX)
   52 FORMAT(1H0F9.4,5E20.8/(10X5E20.8))
      DO 51 M1=1,M1MAX
      DO 50 M2=1,NPRNT
      CALL INTM
      M=M+1
C PREV. APPROX. R(I,J)
      L1=0
      DO 10 I=1,N
      DO 10 J=1,I
      L1=L1+1
   10 R2(I,J)=R(L1,M)
      DO 11 I=1,N
      DO 11 J=I,N
   11 R2(I,J)=R2(J,I)
      L1=0
      L3=3
      DO 12 I=1,N
      DO 12 J=1,I
      L1=L1+1
      L3=L3+1
   12 P(L1,M)=T(L3)
      L3=L3+3
      DO 13 K=1,3
      L1=0
      DO 14 I=1,N
      DO 14 J=1,I
      L1=L1+1
      L3=L3+1
   14 H(L1,K,M)=T(L3)
   13 L3=L3+3
   50 CONTINUE
   51 PRINT 52,T(2),(P(L,M),H(L,1,M),L=1,LMAX)
      RETURN
      END
$IBFTC LINEAR
      SUBROUTINE LINEAR
      DIMENSION CHKI(3)
      DIMENSION A(49,3),B(49),EMAT(50,50),PIVOT(50),INDEX(50,2)
     1,IPIVOT(50),FVEC(50,1)
      COMMON N,RT(7),WT(7),WR(7),AR(7,7),NPRNT,M1MAX,KMAX,DELTA,XTAU,
     1 XLAM(3), B2(7,7),R2(7,7),IFLAG,R(28,101),T(1491),SIG,
     2 P(28,101),H(28,3,101),PLAM(3),HLAM(3,3),P2(7,7),
     3 H2(7,7,3),CONST(3),NEQ
C BOUNDARY CONDITIONS
      MLAST=NPRNT*M1MAX + 1
      DO 1 K=1,3
      L=0
      DO 2 I=1,N
      DO 2 J=1,I
      L=L+1
    2 H2(I,J,K)=H(L,K,MLAST)
      DO 1 I=1,N
      DO 1 J=I,N
    1 H2(I,J,K)=H2(J,I,K)
      L=0
      DO 3 I=1,N
      DO 3 J=1,I
      L=L+1
    3 P2(I,J)=P(L,MLAST)
      DO 4 I=1,N
      DO 4 J=I,N
    4 P2(I,J)=P2(J,I)
C LEAST SQUARES
      DO 5 K=1,3
      L=0
      DO 5 I=1,N
C
      DO 5 J=1,N
      L=L+1
    5 A(L,K)=H2(I,J,K)
      L=0
      DO 6 I=1,N
      DO 6 J=1,N
      L=L+1
    6 B(L)=B2(I,J) - P2(I,J)
      LMAX=N**2
      PRINT 60
   60 FORMAT(1H0)
      DO 61 L=1,LMAX
   61 PRINT 82,(A(L,K),K=1,3),B(L)
      DO 8 I=1,3
      DO 7 J=1,3
      SUM=0.0
      DO 9 L=1,LMAX
    9 SUM=SUM + A(L,I)*A(L,J)
    7 EMAT(I,J)=SUM
      SUM=0.0
      DO 10 L=1,LMAX
   10 SUM=SUM + A(L,I)*B(L)
    8 FVEC(I,1)=SUM
      PRINT 60
      DO 81 I=1,3
   81 PRINT 82,(EMAT(I,J),J=1,3),FVEC(I,1)
   82 FORMAT(10X6E20.8)
      CALL MATINV(EMAT,3,FVEC,1,DETERM,PIVOT,INDEX,IPIVOT)
      DO 11 I=1,3
   11 CONST(I)=FVEC(I,1)
      DO 20 I=1,3
   20 XLAM(I)=CONST(I)
      PRINT 903,XTAU,(XLAM(I),I=1,3)
  903 FORMAT(1H0
     1 1X11HTHICKNESS =, E16.8 /
     2 1X11HALBEDO(X) =, 20HA + B*TANH(10*(X-C)) //
     3 1X3HA =, E16.8, 10X3HB =, E16.8, 10X3HC =, E16.8 //)
C NEW APPROXIMATION
C
      M=1
      L=0
      DO 12 I=1,N
      DO 12 J=1,I
      L=L+1
      SUM=P(L,M)
      DO 13 K=1,3
   13 SUM=SUM + CONST(K)*H(L,K,M)
C
   12 R(L,M)=SUM
      L=0
      DO 14 I=1,N
      DO 14 J=1,I
      L=L+1
   14 R2(I,J)=R(L,M)
      SIG=0.0
      CALL OUTPUT
      DO 50 M1=1,M1MAX
      DO 18 M2=1,NPRNT
      M=M+1
      L=0
      DO 15 I=1,N
      DO 15 J=1,I
      L=L+1
      SUM=P(L,M)
      DO 16 K=1,3
   16 SUM=SUM + CONST(K)*H(L,K,M)
   15 R(L,M)=SUM
      L=0
      DO 17 I=1,N
      DO 17 J=1,I
      L=L+1
   17 R2(I,J)=R(L,M)
   18 SIG=SIG + DELTA
   50 CALL OUTPUT
      RETURN
      END
$IBFTC OUTPUT
      SUBROUTINE OUTPUT
      DIMENSION X(3)
      COMMON N,RT(7),WT(7),WR(7),AR(7,7),NPRNT,M1MAX,KMAX,DELTA,XTAU,
     1 XLAM(3), B2(7,7),R2(7,7),IFLAG,R(28,101),T(1491),SIG,
     2 P(28,101),H(28,3,101),PLAM(3),HLAM(3,3),P2(7,7),
     3 H2(7,7,3),CONST(3),NEQ
      DO 1 I=1,N
      DO 1 J=I,N
    1 R2(I,J)=R2(J,I)
      Y=XTAU*SIG
      DO 3 I=1,3
    3 X(I)=XLAM(I)
      CALL ALBEDO(Y,X,Z)
      PRINT 100, SIG,Y,Z
  100 FORMAT(1H0 7HSIGMA =,F6.2, 4X5HTAU =, F6.2, 4X8HALBEDO =,F6.2)
      DO 2 J=1,N
    2 PRINT 101,J, (R2(I,J),I=1,N)
  101 FORMAT(I10, 7F10.6)
      RETURN
      END
$IBFTC ALBEDO
      SUBROUTINE ALBEDO(Y,X,Z)
      DIMENSION X(3)
      COMMON N,RT(7),WT(7),WR(7),AR(7,7),NPRNT,M1MAX,KMAX,DELTA,XTAU,
     1 XLAM(3), B2(7,7),R2(7,7),IFLAG,R(28,101),T(1491),SIG,
     2 P(28,101),H(28,3,101),PLAM(3),HLAM(3,3),P2(7,7),
     3 H2(7,7,3),CONST(3),NEQ
      ARG=10.0*(Y-X(3))
      Z=X(1) + X(2)*TANH(ARG)
      RETURN
      END
$ENTRY RTINV
7
25446046E-0112923441E-0029707742E-0050000000E 0070292258E 008707655 97455396E 00 64742484E-Cl13985269E-0019091502E-0020897958E-0019091502E-001398526 64742484E-Jl
llJ 10 4 0.01
1.0 1.0
0.5 r.6
0.1 .09
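Subroutine LINEAR above fits the three albedo constants by least squares: it forms the normal equations EMAT = A&#7488;A, FVEC = A&#7488;b and solves them with MATINV. The following Python sketch reproduces that normal-equations step on synthetic data (illustrative only; the 3x3 solver, the test matrix, and the constants are not from the book).

```python
# Least-squares fit of 3 constants from an overdetermined system A c = b
# via the normal equations (A^T A) c = A^T b, as in subroutine LINEAR.

def solve3(m, v):
    """Solve a 3x3 system m x = v by Gauss-Jordan elimination with pivoting."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(3):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][3] / a[i][i] for i in range(3)]

def normal_equations(A, b):
    E = [[sum(A[l][i] * A[l][j] for l in range(len(A))) for j in range(3)]
         for i in range(3)]
    f = [sum(A[l][i] * b[l] for l in range(len(A))) for i in range(3)]
    return solve3(E, f)

# Synthetic check: data generated exactly by c_true is recovered.
c_true = [0.5, -1.25, 2.0]
A = [[1.0, float(l), float(l * l)] for l in range(6)]   # 6 rows, 3 columns
b = [sum(A[l][k] * c_true[k] for k in range(3)) for l in range(6)]
c = normal_equations(A, b)
print(c)
```

As the text notes elsewhere, normal equations can be ill-conditioned; the program prints EMAT and FVEC precisely so that such trouble is visible in the output.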
Appendix Four
VAN DER POL EQUATION PROGRAM
CVDP021 VAN DER POL - PHASES II, III
      COMMON SCRACH,T,N1,NMAX,KMAX,HGRID,NTIME,XOBS,W,H,P,A,B,X,U,Z,TI,
     1 PREV,NOBS,IFLAG
      DIMENSION SCRACH(64),T(150),NTIME(100),XOBS(100),W(3,1001),
     1 H(3,3,1001),P(3,1001),A(16,16),B(16,1),PREV(3)
C
C INPUT NOBS OBSERVATIONS OF X AT KNOWN TIMES
C DETERMINE SYSTEM PARAMETER LAMBDA (Z IN FORTRAN)
C
C INPUT AND START
C
      CALL INSTRT
C K ITERATIONS
      DO 99 K=1,KMAX
      DO 2 I=1,150
    2 T(I)=0.0
      T(2)=TI
      T(3)=HGRID
      T(4)=1.0
      T(8)=1.0
      T(12)=1.0
      X=PREV(1)
      U=PREV(2)
      Z=PREV(3)
      CALL INT(T,12,N1,0.,0.,0.,0.,0.,0.)
      N=1
      L=3
      DO 21 I=1,3
      DO 21 J=1,3
      L=L+1
   21 H(I,J,N)=T(L)
      DO 22 I=1,3
      L=L+1
   22 P(I,N)=T(L)
C INTEGRATE P'S AND H'S
C
      DO 4 N=2,NMAX
      X=W(1,N)
      U=W(2,N)
      Z=W(3,N)
      CALL INTM
      L=3
      DO 3 I=1,3
      DO 3 J=1,3
      L=L+1
    3 H(I,J,N)=T(L)
      DO 4 I=1,3
      L=L+1
    4 P(I,N)=T(L)
C DETERMINE CONSTANTS, OR INITIAL VALUES
      CALL CNSTNT
      TIME=TI
      PRINT 50, K,TIME,(B(I,1),I=1,3)
C
C NEW VARIABLES
      DO 7 N=2,NMAX
      DO 6 I=1,3
      W(I,N)=P(I,N)
      DO 6 J=1,3
    6 W(I,N)=W(I,N) + B(J,1)*H(J,I,N)
      FN=N-1
      TIME=FN*HGRID+TI
    7 PRINT 70, TIME,(W(I,N),I=1,3)
C
C COMPARE CONSTANTS
      DO 8 I=1,3
      G=ABSF(B(I,1)-PREV(I))
      IF(G-.000001)8,8,9
    8 CONTINUE
      GO TO 1
    9 DO 10 I=1,3
   10 PREV(I)=B(I,1)
   99 CONTINUE
      GO TO 1
C
   50 FORMAT(1H0//59X9HITERATION,I3//38X1HT,13X1HX,19X1HU,17X6HSYSTEM//
     1 30XF10.2,3E20.6)
   70 FORMAT(30XF10.2,3E20.6)
      END
CVDP022 INPUT-START, VAN DER POL PHASE II,III
      SUBROUTINE INSTRT
      COMMON SCRACH,T,N1,NMAX,KMAX,HGRID,NTIME,XOBS,W,H,P,A,B,X,U,Z,TI,
     1 PREV,NOBS,IFLAG
      DIMENSION SCRACH(64),T(150),NTIME(100),XOBS(100),W(3,1001),
     1 H(3,3,1001),P(3,1001),A(16,16),B(16,1),PREV(3)
C INPUT
C C
C
C
      READ 110,N1,NOBS,KMAX,NMAX,HGRID,(NTIME(I),XOBS(I),I=1,NOBS)
      IF(NMAX)9,9,1
    1 PRINT 10,N1,NOBS,KMAX,NMAX,HGRID,(NTIME(I),XOBS(I),I=1,NOBS)
C START
      IFLAG=1
      READ 120,TI,X,U,Z
      DO 2 I=1,150
    2 T(I)=0.0
      T(2)=TI
      T(3)=HGRID
      T(4)=X
      T(5)=U
      T(6)=Z
      CALL INT(T,3,N1,0.,0.,0.,0.,0.,0.)
      K=0
      PRINT 50, K,T(2),(T(I),I=4,6)
      DO 4 N=2,NMAX
      CALL INTM
      DO 3 I=1,3
    3 W(I,N)=T(I+3)
    4 PRINT 70, T(2),(T(I),I=4,6)
      IFLAG=2
      PREV(1)=X
      PREV(2)=U
      PREV(3)=Z
C
      RETURN
    9 CALL EXIT
   10 FORMAT(1H130X
     1 58HVAN DER POL - PHASE II - DETERMINATION OF SYSTEM PARAMETER//
     2 20X4I10,F10.4//(8XI12,E16.6,I12,E16.6,I12,E16.6,I12,E16.6))
   50 FORMAT(1H0//59X9HITERATION,I3//38X1HT,13X1HX,19X1HU,17X6HSYSTEM//
     1 30XF10.2,3E20.6)
   70 FORMAT(30XF10.2,3E20.6)
  110 FORMAT(4I5,F10.2,2(I4,E11.6)/(4(I4,E11.6)))
  120 FORMAT(4E12.6)
END
CVDP023 DAUX - VAN DER POL - PHASE II,III
C
      SUBROUTINE DAUX
      COMMON SCRACH,T,N1,NMAX,KMAX,HGRID,NTIME,XOBS,W,H,P,A,B,X,U,Z,TI,
     1 PREV,NOBS,IFLAG
      DIMENSION SCRACH(64),T(150),NTIME(100),XOBS(100),W(3,1001),
     1 H(3,3,1001),P(3,1001),A(16,16),B(16,1),PREV(3)
      GO TO (1,2),IFLAG
C NONLINEAR EQUATIONS
    1 T(7)=T(5)
      T(8)=-(T(4)**2-1.)*T(5)*T(6)-T(4)
      T(9)=0.
      RETURN
C LINEAR EQUATIONS
    2 XX=X**2
      AA=-2.*X*U*Z-1.
      BB=Z*(1.-XX)
      CC=U*(1.-XX)
      L=13
      DO 3 I=1,4
      L=L+3
      T(L)=T(L-11)
      T(L+1)=AA*T(L-12)+BB*T(L-11)+CC*T(L-10)
    3 T(L+2)=0.0
      T(L+1)=T(L+1)+U*Z*(3.*XX-1.)
      RETURN
      END
CVDP024 CNSTNT - PHASE II
C
      SUBROUTINE CNSTNT
      COMMON SCRACH,T,N1,NMAX,KMAX,HGRID,NTIME,XOBS,W,H,P,A,B,X,U,Z,TI,
     1 PREV,NOBS,IFLAG
      DIMENSION SCRACH(64),T(150),NTIME(100),XOBS(100),W(3,1001),
     1 H(3,3,1001),P(3,1001),A(16,16),B(16,1),PREV(3)
      DO 1 I=1,3
      N=NTIME(I)
      B(I,1)=XOBS(I)-P(1,N)
      DO 1 J=1,3
    1 A(I,J)=H(J,1,N)
      CALL MATINV(A,3,B,1,DET)
      DO 2 I=1,3
    2 W(I,1)=B(I,1)
      RETURN
      END
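In subroutine DAUX above, the linearized Van der Pol system treats the state as (x, u, z) with x' = u, u' = z(1&#8722;x&#178;)u &#8722; x, z' = 0, and the coefficients AA, BB, CC are the partial derivatives of u' with respect to x, u, z. The following Python sketch checks those formulas against finite differences (a verification aid, not part of the original deck; the sample point is arbitrary).

```python
# Check the linearization coefficients used in the Van der Pol DAUX:
#   u' = f(x, u, z) = z*(1 - x**2)*u - x
#   AA = df/dx = -2*x*u*z - 1
#   BB = df/du = z*(1 - x**2)
#   CC = df/dz = u*(1 - x**2)

def f(x, u, z):
    return z * (1.0 - x * x) * u - x

def analytic(x, u, z):
    return (-2.0 * x * u * z - 1.0, z * (1.0 - x * x), u * (1.0 - x * x))

def numeric(x, u, z, h=1e-6):
    # central differences in each of the three arguments
    return ((f(x + h, u, z) - f(x - h, u, z)) / (2 * h),
            (f(x, u + h, z) - f(x, u - h, z)) / (2 * h),
            (f(x, u, z + h) - f(x, u, z - h)) / (2 * h))

x, u, z = 0.7, -1.3, 2.1        # arbitrary test point
for a, n in zip(analytic(x, u, z), numeric(x, u, z)):
    print(a, n)
```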
Appendix Five
ORBIT DETERMINATION PROGRAM
C260582 CELESTIAL MECHANICS - PHASE II
C
      COMMON SCRACH,T,NEQ,N1,NMAX,KMAX,HGRID,A2,A3,A4,A5,A6,A7,
     1 NGRID,THETA,W,H,P,A,B,X,U,Y,V
      DIMENSION SCRACH(64),T(250),NGRID(4),THETA(4),W(4,500),H(4,4,500),
     1 P(4,500),A(16,16),B(16,1)
    1 CALL INPUT
    2 CALL START(W)
C K ITERATIONS
      DO 19 K=1,KMAX
      NEQ=20
    4 T(2)=TZERO
      T(3)=HGRID
      DO 5 I=4,43
    5 T(I)=0.
      T(4)=1.
      T(9)=1.
      T(14)=1.
      T(19)=1.
    6 CALL INT(T,NEQ,N1,A2,A3,A4,A5,A6,A7)
C INTEGRATE OVER RANGE
    7 DO 11 N=1,NMAX
      X=W(1,N)
      U=W(2,N)
      Y=W(3,N)
      V=W(4,N)
      CALL INTM
C STORE P'S AND H'S
C
    8 L=3
      DO 9 I=1,4
      DO 9 J=1,4
      L=L+1
    9 H(I,J,N)=T(L)
      DO 10 J=1,4
      L=L+1
   10 P(J,N)=T(L)
   11 CONTINUE
C
C COMPUTE CONSTANTS
   12 DO 14 I=1,4
      N=NGRID(I)
      THET=THETA(I)
      STHET=SINF(THET)
      CTHET=COSF(THET)
      DO 13 J=1,4
   13 A(I,J)=H(J,1,N)*STHET - H(J,3,N)*CTHET
      B(I,1)=(1.-P(1,N))*STHET + P(3,N)*CTHET
   14 PRINT 114, (A(I,JJ),JJ=1,4),B(I,1)
  114 FORMAT(1H09X5E20.6)
   15 CALL MATINV(A,4,B,1,DET)
      PRINT 115,(B(I,1),I=1,4)
  115 FORMAT(1H09X4E20.6)
C
C COMPUTE NEW W'S
   16 PRINT 40,K
   40 FORMAT(1H0/65X 9HITERATION,I3//26X4HGRID,14X1HX,19X2HX',18X1HY,
     1 19X2HY')
      DO 18 N=1,NMAX
      DO 17 I=1,4
      W(I,N)=P(I,N)
      DO 17 J=1,4
   17 W(I,N)=W(I,N) + B(J,1)*H(J,I,N)
   50 FORMAT(20X,I10,4E20.6)
   18 PRINT 50,N,(W(I,N),I=1,4)
C
   19 CONTINUE
C
      GO TO 1
      END
C260585 INPUT
C
      SUBROUTINE INPUT
      COMMON SCRACH,T,NEQ,N1,NMAX,KMAX,HGRID,A2,A3,A4,A5,A6,A7,
     1 NGRID,THETA,W,H,P,A,B,X,U,Y,V
      DIMENSION SCRACH(64),T(250),NGRID(4),THETA(4),W(4,500),H(4,4,500),
     1 P(4,500),A(16,16),B(16,1)
      READ 110,N1,NMAX,KMAX,HGRID,A2,A3,A4,A5,A6,A7
  110 FORMAT(3I3,7E8.1)
      READ 120,(NGRID(I),THETA(I),I=1,4)
  120 FORMAT(4(I3,E12.4))
      PRINT 10,N1,NMAX,KMAX,HGRID,A2,A3,A4,A5,A6,A7,(NGRID(I),THETA(I),
     1 I=1,4)
   10 FORMAT(1H120X 8HINPUT ...//
     1 18X2HN1,7X4HNMAX,7X4HKMAX,9X5HHGRID,12X2HA2,12X2HA3,12X2HA4,
     2 12X2HA5,12X2HA6,12X2HA7/15X3I5,7E14.4//
     3 21X4HN(1),10X8HTHETA(1),6X4HN(2),10X8HTHETA(2),
     4 6X4HN(3),10X8HTHETA(3),6X4HN(4),10X8HTHETA(4)/15X4(I10,E18.6))
      RETURN
      END
C260584 DAUX
C
      SUBROUTINE DAUX
      COMMON SCRACH,T,NEQ,N1,NMAX,KMAX,HGRID,A2,A3,A4,A5,A6,A7,
     1 NGRID,THETA,W,H,P,A,B,X,U,Y,V
      DIMENSION SCRACH(64),T(250),NGRID(4),THETA(4),W(4,500),H(4,4,500),
     1 P(4,500),A(16,16),B(16,1)
      XX=X**2
      YY=Y**2
      R=XX+YY
      S=R**3
      D=SQRTF(R**2*S)
      E=SQRTF(S)
      AA=3.*X*Y/D
      BB=(2.*XX-YY)/D
      CC=(2.*YY-XX)/D
      N=23
      DO 1 I=1,4
      N=N+1
      T(N)=T(N-19)
      T(N+1)=T(N-20)*BB + T(N-18)*AA
      T(N+2)=T(N-17)
      T(N+3)=T(N-20)*AA + T(N-18)*CC
    1 N=N+3
      T(40)=T(21)
      T(41)=T(20)*BB + T(22)*AA - 3.*X/E
      T(42)=T(23)
      T(43)=T(20)*AA + T(22)*CC - 3.*Y/E
      RETURN
      END
C260583 START(W)
      SUBROUTINE START(W)
      COMMON SCRACH,T,NEQ,N1,NMAX,KMAX,HGRID,A2,A3,A4,A5,A6,A7,
     1 NGRID,THETA,W,H,P,A,B,X,U,Y,V
      DIMENSION SCRACH(64),T(250),NGRID(4),THETA(4),W(4,500),H(4,4,500),
     1 P(4,500),A(16,16),B(16,1)
C INITIALLY X=1., X'=0., Y=0., Y'=0.
C
      K=0
      PRINT 40,K
   40 FORMAT(1H1 65X 9HITERATION,I3//26X4HGRID,14X1HX,19X2HX',18X1HY,
     1 19X2HY')
      DO 2 N=1,NMAX
      W(1,N)=1.
      DO 1 I=2,4
    1 W(I,N)=0.
    2 PRINT 50,N,(W(I,N),I=1,4)
   50 FORMAT(20X,I10,4E20.6)
      RETURN
      END
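The orbit-determination DAUX linearizes the planar two-body equations x'' = &#8722;x/r&#179;, y'' = &#8722;y/r&#179; (r&#178; = x&#178; + y&#178;); with D = r&#8309; and E = r&#179;, the coefficients AA = 3xy/D, BB = (2x&#178;&#8722;y&#178;)/D, CC = (2y&#178;&#8722;x&#178;)/D are the entries of the Jacobian of the accelerations, and &#8722;3x/E, &#8722;3y/E are the particular-solution forcing terms. A finite-difference verification in Python (illustrative, not part of the original deck; the sample point is arbitrary):

```python
# Verify the two-body Jacobian entries used in the orbit DAUX:
#   ax = -x/r**3,  ay = -y/r**3,  r**2 = x**2 + y**2
#   d(ax)/dx = (2x**2 - y**2)/r**5   (BB)
#   d(ax)/dy = d(ay)/dx = 3xy/r**5   (AA)
#   d(ay)/dy = (2y**2 - x**2)/r**5   (CC)

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def coeffs(x, y):
    d = (x * x + y * y) ** 2.5                         # r**5
    return 3*x*y/d, (2*x*x - y*y)/d, (2*y*y - x*x)/d   # AA, BB, CC

def fd(x, y, h=1e-6):
    # central finite differences of the acceleration components
    axp, _ = accel(x + h, y)
    axm, _ = accel(x - h, y)
    axy, ayp = accel(x, y + h)
    axym, aym = accel(x, y - h)
    return ((axy - axym) / (2*h),        # d(ax)/dy -> AA
            (axp - axm) / (2*h),         # d(ax)/dx -> BB
            (ayp - aym) / (2*h))         # d(ay)/dy -> CC

x, y = 0.8, 0.6
print(coeffs(x, y), fd(x, y))
```

Note also that the particular forcing &#8722;3x/r&#179; in the listing is exactly f &#8722; J&#183;(x, y) for f = &#8722;x/r&#179;, the standard quasilinearization particular-equation term.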
Appendix Six
CARDIOLOGY PROGRAM
CMAIN2
C
C HEART MODEL - INVERSE PROBLEM - 2
C
      COMMON NMAX,DELTA,TAU(5),SIGMA(5),ZKAPPA(5),DCOS(5,3),XCHAMB(5,3),
     1 XOBSVR(4,3),ADISTN(3,5),F(5,81),BOBS(3,81),TIME(81),T(1323)
      COMMON KMAX, ASIGMA(5),AKAPPA(5),P(5,81),H(5,10,81),PS(5),
     1 HS(5,10),FPREV(5),AMATRX(243,10),BVECTR(243),EMATRX(50,50),
     2 FVECTR(50),C(10),V(3,81),NCHANG,ICHANG(5),FCHANG(5),IFLAG,
     3 SCHANG(5)
C DISTANCE MODEL
C 5 DIPOLES OR CHAMBERS
C GAUSSIAN FUNCTION FOR MAGNITUDE OF MOMENTS
C
C TAU,SIGMA,ZKAPPA = TIME OF MAX., HALF-WIDTH, MAX. MOMENT
C DCOS(I,J)   = DIREC. COSINE OF DIPOLE I, AXIS J
C XCHAMB(I,J) = LOCATION OF DIPOLE I, COORDINATE J
C XOBSVR(I,J) = LOCATION OF OBSERVER I, COORDINATE J
C
C J=1  X-AXIS OR X-DIRECTION
C J=2  Y-AXIS OR Y-DIRECTION
C J=3  Z-AXIS OR Z-DIRECTION
      CALL FPST
      READ 1000,NMAX
      PRINT 899
      PRINT 900,NMAX
      READ 1001,DELTA
      PRINT 901,DELTA
C INPUT DIPOLE PARAMETERS
C
      DO 2 N=1,5
    2 READ 1002,I,TAU(I),SIGMA(I),ZKAPPA(I)
      DO 3 N=1,5
    3 READ 1003,I,(DCOS(I,J),J=1,3),(XCHAMB(I,J),J=1,3)
      PRINT 902
      DO 4 I=1,5
    4 PRINT 903,I,TAU(I),SIGMA(I),ZKAPPA(I),(XCHAMB(I,J),J=1,3),
     1 (DCOS(I,J),J=1,3)
C INPUT OBSERVER LOCATIONS
C COMPUTE DIPOLE-OBSERVER COEFFICIENTS
C
      DO 5 N=1,4
    5 READ 1003,I,(XOBSVR(I,J),J=1,3)
      DO 6 I=1,5
      CONUM=0.0
      CODEN=0.0
      DO 61 L=1,3
      YX=XOBSVR(4,L) - XCHAMB(I,L)
      CONUM=CONUM + DCOS(I,L)*YX
   61 CODEN=CODEN + YX**2
C
      CODEN=SQRT(CODEN**3)
      CZEROI=CONUM/CODEN
      DO 6 J=1,3
      CJNUM=0.0
      CJDEN=0.0
      DO 62 L=1,3
      YX=XOBSVR(J,L) - XCHAMB(I,L)
      CJNUM=CJNUM + DCOS(I,L)*YX
   62 CJDEN=CJDEN + YX**2
      CJDEN=SQRT(CJDEN**3)
    6 ADISTN(J,I)= CJNUM/CJDEN - CZEROI
      PRINT 904
      DO 7 J=1,3
    7 PRINT 905,J,(XOBSVR(J,L),L=1,3),(ADISTN(J,I),I=1,5)
      J=4
      PRINT 905,J,(XOBSVR(J,L),L=1,3)
C INPUT CHANGES (SIGMA, KAPPA)
C
   70 PRINT 908
      READ 1000,NCHANG
      PRINT 900,NCHANG
      IF(NCHANG.LE.0)GO TO 80
      READ 1000,(ICHANG(I),I=1,NCHANG)
      PRINT 900,(ICHANG(I),I=1,NCHANG)
      READ 1001,(SCHANG(I),FCHANG(I),I=1,NCHANG)
      PRINT 901,(SCHANG(I),FCHANG(I),I=1,NCHANG)
C STORE CHANGES
C
      DO 71 I=1,NCHANG
      L=ICHANG(I)
      SSIG=SIGMA(L)
      SIGMA(L)=SCHANG(I)
      SCHANG(I)=SSIG
      SKAP=ZKAPPA(L)
      ZKAPPA(L)=FCHANG(I)
   71 FCHANG(I)=SKAP
C INITIAL INTEGRATION STEP
C
   80 IFLAG=1
      K=1
      L=3
      DO 8 I=1,5
      L=L+1
    8 T(L)=ZKAPPA(I)*EXP(-0.5*(TAU(I)/SIGMA(I))**2)
      DO 81 I=1,5
      L=L+1
   81 T(L)=SIGMA(I)
      T(2)=0.0
      T(3)=DELTA
      CALL INTS(T,10,2,0.0,0.0,0.0,0.0,0.0,0.0)
      L=3
      DO 82 I=1,5
      L=L+1
   82 F(I,K)=T(L)
C OBSERVATION
C
      DO 83 J=1,3
      BOBS(J,K)=0.0
      DO 83 I=1,5
   83 BOBS(J,K)=BOBS(J,K) + ADISTN(J,I)*F(I,K)
      TIME(K)=T(2)
      PRINT 906
      PRINT 907,TIME(K),(F(I,K),I=1,5),(BOBS(J,K),J=1,3)
C INTEGRATE
      DO 100 K=2,NMAX
      CALL INTM
      L=3
      DO 91 I=1,5
      L=L+1
   91 F(I,K)=T(L)
      DO 92 J=1,3
      BOBS(J,K)=0.0
      DO 92 I=1,5
   92 BOBS(J,K)=BOBS(J,K) + ADISTN(J,I)*F(I,K)
      TIME(K)=T(2)
      PRINT 907,TIME(K),(F(I,K),I=1,5),(BOBS(J,K),J=1,3)
  100 CONTINUE
C
C
C PHASE II - DETERMINATION OF HEART PARAMETERS
C
      CALL PHASE2
      IF(NCHANG.LE.0) GO TO 102
C RESTORE SIGMA, ZKAPPA
      DO 101 I=1,NCHANG
      L=ICHANG(I)
      SIGMA(L)=SCHANG(I)
  101 ZKAPPA(L)=FCHANG(I)
C
C NEXT PROBLEM
  102 READ 1000,IGO
      PRINT 900,IGO
      GO TO (1,70),IGO
(
C
C FORMATS
C
 1000 FORMAT(6I12)
 1001 FORMAT(6E12.6)
  899 FORMAT(1H1 4X38HHEART MODEL - INVERSE PROBLEM - PHASE I //)
  900 FORMAT(6I20)
  901 FORMAT(6E20.6)
 1002 FORMAT(I10,3F10.4)
 1003 FORMAT(I10,6F10.4)
  902 FORMAT(7H0DIPOLE,3X,7X3HTAU,5X5HSIGMA,5X5HKAPPA,5X,3X7HX-COORD,
     1 3X7HY-COORD,3X7HZ-COORD,5X,4X6HX-DCOS,4X6HY-DCOS,
     2 4X6HZ-DCOS //)
  903 FORMAT(I5,5X3F10.4,5X3F10.4,5X3F10.4)
  904 FORMAT(10H0OBSERVER ,5X,3X7HX-COORD,3X7HY-COORD,3X7HZ-COORD,5X,
     1 11X1H1,11X1H2,11X1H3,11X1H4,11X1H5//)
  905 FORMAT(I10,5X3F10.4,5X5F12.6)
  906 FORMAT(1H08X1HT,5X,5X5HF1(T),5X5HF2(T),5X5HF3(T),5X5HF4(T),
     1 5X5HF5(T),5X,7X5HV1(T),7X5HV2(T),7X5HV3(T)//)
  907 FORMAT(F10.4,5X5F10.4,5X3F12.6)
  908 FORMAT(1H1 4X13HABNORMALITIES //)
      END
$IBFTC PHASE2
C
      SUBROUTINE PHASE2
      COMMON NMAX,DELTA,TAU(5),SIGMA(5),ZKAPPA(5),DCOS(5,3),XCHAMB(5,3),
     1 XOBSVR(4,3),ADISTN(3,5),F(5,81),BOBS(3,81),TIME(81),T(1323)
      COMMON KMAX, ASIGMA(5),AKAPPA(5),P(5,81),H(5,10,81),PS(5),
     1 HS(5,10),FPREV(5),AMATRX(243,10),BVECTR(243),EMATRX(50,50),
     2 FVECTR(50),C(10),V(3,81),NCHANG,ICHANG(5),FCHANG(5),IFLAG
C SOLVE THE INVERSE PROBLEM
C
C ASIGMA   APPROX. TO SIGMA
C AKAPPA   APPROX. TO ZKAPPA
C P(I,K)   PARTICULAR SOLU. DIPOLE I, TIME K (MOMENT)
C H(I,J,K)=HOMOGENEOUS SOLU. J, ...
C PS(I)    PARTICULAR SOLU. DIPOLE I (SIGMA COMPONENT)
C HS(I,J)= HOMOGENEOUS SOLU. J, ...
C V(J,K) = POTENTIAL DIFFERENCE OBSERVED BY ELECTRODE J
C          AT TIME K, AS COMPUTED IN PHASE II.  COMPARE
C          WITH BOBS(J,K) OF PHASE I.
C
C INPUT
      PRINT 899
      READ 1000,KMAX
      PRINT 900,KMAX
      PRINT 904
      DO 1 N=1,5
      READ 1002,I,ASIGMA(I),AKAPPA(I)
    1 PRINT 905,I,ASIGMA(I),AKAPPA(I)
C INITIAL APPROXIMATION
      K1=0
      PRINT 909,K1
      CALL APPRX1
C
C HIGHER APPROXIMATIONS
C
      IFLAG=2
      DO 200 K1=1,KMAX
      K=1
      DO 2 I=1,5
      P(I,1)=0.0
      PS(I)=0.0
      DO 3 J=1,10
      H(I,J,1)=0.0
    3 HS(I,J)=0.0
      H(I,I,1)=1.0
      I5=I+5
    2 HS(I,I5)=1.0
      L=3
      DO 4 I=1,5
      L=L+1
    4 T(L)=P(I,1)
      DO 5 I=1,5
      L=L+1
    5 T(L)=PS(I)
      DO 6 J=1,10
      DO 7 I=1,5
      L=L+1
    7 T(L)=H(I,J,1)
      DO 6 I=1,5
      L=L+1
    6 T(L)=HS(I,J)
      T(2)=0.0
      T(3)=DELTA
      DO 8 I=1,5
    8 FPREV(I)=F(I,K)
C
      CALL INTS(T,110,2,0.0,0.0,0.0,0.0,0.0,0.0)
C
      DO 9 K=2,NMAX
      CALL INTM
      L=3
      DO 10 I=1,5
      L=L+1
   10 P(I,K)=T(L)
      L=L+5
      DO 11 J=1,10
      DO 12 I=1,5
      L=L+1
   12 H(I,J,K)=T(L)
   11 L=L+5
      DO 9 I=1,5
    9 FPREV(I)=F(I,K)
C COMPUTE NEW APPROXIMATION
C
      PRINT 909,K1
  200 CALL APPRX2
C
      RETURN
  899 FORMAT(1H1 4X 8HPHASE II //)
 1000 FORMAT(6I12)
  900 FORMAT(6I20)
 1002 FORMAT(I10,10X,5F10.4)
  905 FORMAT(I10,2F10.4)
  904 FORMAT(1H0,8X1HI,5X5HSIGMA,5X5HKAPPA//)
  909 FORMAT(1H156X13HAPPROXIMATION,I3//)
END $ I BFTC APPRX 1
SUBROUTINE APPRX1
      COMMON NMAX,DELTA,TAU(5),SIGMA(5),ZKAPPA(5),DCOS(5,3),XCHAMB(5,3),
     1 XOBSVR(4,3),ADISTN(3,5),F(5,81),BOBS(3,81),TIME(81),T(1323)
      COMMON KMAX, ASIGMA(5),AKAPPA(5),P(5,81),H(5,10,81),PS(5),
     1 HS(5,10),FPREV(5),AMATRX(243,10),BVECTR(243),EMATRX(50,50),
     2 FVECTR(50),C(10),V(3,81),NCHANG,ICHANG(5),FCHANG(5),IFLAG
C
C PRODUCE INITIAL APPROXIMATION
C INTEGRATE NONLINEAR D.E.
      IFLAG=1
      K=1
      L=3
      DO 8 I=1,5
      L=L+1
    8 T(L)=AKAPPA(I)*EXP(-0.5*(TAU(I)/ASIGMA(I))**2)
      DO 81 I=1,5
      L=L+1
   81 T(L)=ASIGMA(I)
      T(2)=0.0
      T(3)=DELTA
      CALL INTS(T,10,2,0.0,0.0,0.0,0.0,0.0,0.0)
      L=3
      DO 82 I=1,5
      L=L+1
   82 F(I,K)=T(L)
C POTENTIAL COMPUTED
C
      DO 83 J=1,3
      V(J,K)=0.0
      DO 83 I=1,5
   83 V(J,K)=V(J,K) + ADISTN(J,I)*F(I,K)
      PRINT 906
      PRINT 907,TIME(K),(F(I,K),I=1,5),(V(J,K),J=1,3)
C INTEGRATE
      DO 100 K=2,NMAX
      CALL INTM
      L=3
      DO 91 I=1,5
      L=L+1
   91 F(I,K)=T(L)
200 QUASILINEARIZATION AND NONLINEAR BOUNDARY-VALUE PROBLEMS
C
      DO 92 J=1,3
      V(J,K)=0.0
      DO 92 I=1,5
   92 V(J,K)=V(J,K) + ADISTN(J,I)*F(I,K)
      PRINT 907,TIME(K),(F(I,K),I=1,5),(V(J,K),J=1,3)
  100 CONTINUE
      RETURN
  906 FORMAT(1H08X1HT,5X,5X5HF1(T),5X5HF2(T),5X5HF3(T),5X5HF4(T),
     1 5X5HF5(T),5X,7X5HV1(T),7X5HV2(T),7X5HV3(T)//)
  907 FORMAT(F10.4,5X5F10.4,5X3F12.6)
      END
$IBFTC APPRX2
      SUBROUTINE APPRX2
C
C APPLY BOUNDARY CONDITIONS
C SOLVE LINEAR EQUATIONS
C COMPUTE NEW APPROXIMATION
C
      DIMENSION PIVOT(50),INDEX(50,2),IPIVOT(50),EM(50,50),FV(50,1)
      COMMON NMAX,DELTA,TAU(5),SIGMA(5),ZKAPPA(5),DCOS(5,3),XCHAMB(5,3),
     1 XOBSVR(4,3),ADISTN(3,5),F(5,81),BOBS(3,81),TIME(81),T(1323)
      COMMON KMAX, ASIGMA(5),AKAPPA(5),P(5,81),H(5,10,81),PS(5),
     1 HS(5,10),FPREV(5),AMATRX(243,10),BVECTR(243),EMATRX(50,50),
     2 FVECTR(50),C(10),V(3,81),NCHANG,ICHANG(5),FCHANG(5),IFLAG
C
C SET UP MATRIX, VECTOR
C
      I=0
      DO 3 J=1,3
      DO 3 K=1,81
      I=I+1
      DO 1 L=1,10
      AMATRX(I,L)=0.0
      DO 1 M=1,5
    1 AMATRX(I,L)=AMATRX(I,L) + ADISTN(J,M)*H(M,L,K)
      BVECTR(I)=BOBS(J,K)
      DO 2 M=1,5
    2 BVECTR(I)=BVECTR(I) - ADISTN(J,M)*P(M,K)
    3 CONTINUE
      DO 6 I=1,10
      DO 4 J=1,10
      EMATRX(I,J)=0.0
      DO 4 L=1,243
    4 EMATRX(I,J)=EMATRX(I,J) + AMATRX(L,I)*AMATRX(L,J)
      FVECTR(I)=0.0
      DO 5 L=1,243
    5 FVECTR(I)=FVECTR(I) + AMATRX(L,I)*BVECTR(L)
    6 CONTINUE
C INVERT MATRIX
      DO 61 I=1,50
      FV(I,1)=FVECTR(I)
      DO 61 J=1,50
   61 EM(I,J)=EMATRX(I,J)
      CALL MATINV(EM,10,FV,1,DETERM,PIVOT,INDEX,IPIVOT)
      DO 62 I=1,50
   62 FVECTR(I)=FV(I,1)
C
C NEW APPROXIMATION
C
      PRINT 904
      DO 7 I=1,10
    7 C(I)=FVECTR(I)
      DO 8 I=1,5
      F(I,1)=C(I)
      ASIGMA(I)=C(I+5)
      AKAPPA(I)=F(I,1)*EXP(0.5*(TAU(I)/ASIGMA(I))**2)
    8 PRINT 905,I,ASIGMA(I),AKAPPA(I)
      PRINT 906
      K=1
      DO 83 J=1,3
      V(J,K)=0.0
      DO 83 I=1,5
   83 V(J,K)=V(J,K) + ADISTN(J,I)*F(I,K)
      PRINT 907,TIME(K),(F(I,K),I=1,5),(V(J,K),J=1,3)
      DO 100 K=2,NMAX
      DO 9 I=1,5
      F(I,K)=P(I,K)
      DO 9 J=1,10
    9 F(I,K)=F(I,K) + C(J)*H(I,J,K)
      DO 93 J=1,3
      V(J,K)=0.0
      DO 93 I=1,5
   93 V(J,K)=V(J,K) + ADISTN(J,I)*F(I,K)
  100 PRINT 907,TIME(K),(F(I,K),I=1,5),(V(J,K),J=1,3)
      RETURN
  904 FORMAT(1H0,8X1HI,5X5HSIGMA,5X5HKAPPA//)
  905 FORMAT(I10,2F10.4)
  906 FORMAT(1H08X1HT,5X,5X5HF1(T),5X5HF2(T),5X5HF3(T),5X5HF4(T),
     1 5X5HF5(T),5X,7X5HV1(T),7X5HV2(T),7X5HV3(T)//)
  907 FORMAT(F10.4,5X5F10.4,5X3F12.6)
      END
$IBFTC DAUX
C
      SUBROUTINE DAUX
      COMMON NMAX,DELTA,TAU(5),SIGMA(5),ZKAPPA(5),DCOS(5,3),XCHAMB(5,3),
     1 XOBSVR(4,3),ADISTN(3,5),F(5,81),BOBS(3,81),TIME(81),T(1323)
      COMMON KMAX, ASIGMA(5),AKAPPA(5),P(5,81),H(5,10,81),PS(5),
     1 HS(5,10),FPREV(5),AMATRX(243,10),BVECTR(243),EMATRX(50,50),
     2 FVECTR(50),C(10),V(3,81),NCHANG,ICHANG(5),FCHANG(5),IFLAG
      DIMENSION XF(5),XSG(5)
      GO TO (1,2),IFLAG
C
C NONLINEAR EQUATIONS
C
    1 L=3
      M=13
      DO 10 I=1,5
      L=L+1
      M=M+1
   10 T(M)=-(T(2)-TAU(I))*T(L)/(T(L+5)**2)
      DO 11 I=1,5
      M=M+1
   11 T(M)=0.0
RETURN
C
C LINEAR EQS. FOR P, H
C
C PARTICULAR
C
    2 L=3
      DO 3 I=1,5
      L=L+1
    3 XF(I)=T(L)
      DO 4 I=1,5
      L=L+1
    4 XSG(I)=T(L)
      M=113
      DO 5 I=1,5
      M=M+1
      TX=-(T(2)-TAU(I))/(ASIGMA(I)**2)
    5 T(M)=TX*(XF(I) + XSG(I)*(-2.0*FPREV(I)/ASIGMA(I)) + 2.0*FPREV(I))
      DO 6 I=1,5
      M=M+1
    6 T(M)=0.0
C
C HOMOGENEOUS
C
      DO 13 J=1,10
      DO 7 I=1,5
      L=L+1
    7 XF(I)=T(L)
      DO 8 I=1,5
      L=L+1
    8 XSG(I)=T(L)
      DO 9 I=1,5
      M=M+1
      TX=-(T(2)-TAU(I))/(ASIGMA(I)**2)
    9 T(M)=TX*(XF(I) + XSG(I)*(-2.0*FPREV(I)/ASIGMA(I)))
      DO 12 I=1,5
      M=M+1
   12 T(M)=0.0
   13 CONTINUE
      RETURN
      END
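The heart-model DAUX integrates each dipole moment as the solution of f'(t) = &#8722;(t&#8722;&#964;)f/&#963;&#178;, the differential equation satisfied by the Gaussian pulse f(t) = &#954; exp(&#8722;&#189;((t&#8722;&#964;)/&#963;)&#178;) used for the moments, starting from f(0) = &#954; exp(&#8722;&#189;(&#964;/&#963;)&#178;) exactly as the initialization cards do. The following Python sketch confirms that the ODE reproduces the closed form (illustrative parameter values, not from the book).

```python
import math

# The Gaussian pulse f(t) = kappa * exp(-0.5*((t - tau)/sigma)**2)
# satisfies f'(t) = -(t - tau)*f(t)/sigma**2, the equation integrated
# by the heart-model DAUX.  Integrate it with classical RK4 and compare.

def pulse(t, tau, sigma, kappa):
    return kappa * math.exp(-0.5 * ((t - tau) / sigma) ** 2)

def integrate(tau=0.4, sigma=0.1, kappa=1.5, t_end=0.8, h=0.001):
    def fprime(t, f):
        return -(t - tau) * f / sigma ** 2
    t, f = 0.0, pulse(0.0, tau, sigma, kappa)   # same start as the listing
    while t < t_end - 1e-12:
        k1 = fprime(t, f)
        k2 = fprime(t + h / 2, f + h / 2 * k1)
        k3 = fprime(t + h / 2, f + h / 2 * k2)
        k4 = fprime(t + h, f + h * k3)
        f += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return f

print(integrate(), pulse(0.8, 0.4, 0.1, 1.5))
```

Differentiating the linear equations in DAUX with respect to f and &#963; gives the TX and &#8722;2f/&#963; factors visible in the particular and homogeneous cards above.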
INDEX
Abel, N., 60 Adjoint equation, 33 Adorno, D., 108 Approximation in policy space, 19,
124, 154 Averbukh, A. I., 74 Azen, S., 124
Back and forth integration, 85 Bateman-Burgers equation, 122 Beckenbach, E. F., 2, 24, 54, 72,
108, 124, 162 Beckwith, R., 108 Bellman, R., 23, 24, 53, 54, 55, 72,
73, 105, 106, 107, 108, 109, 123, 124, 150, 151, 161, 162
Bentley, D. L., 162 Bihari, I., 24 Bing, R. H., 72 Blood, J., 6 Bonsall, F. F., 2, 72 Brown, T. A., 6, 105 Buell, J., 6, 106
Calculus of variations, 44, 54 Calogero, F., 24
Caplygin, S. A., 2, 22, 24, 71, 72 Characteristic values, 66 Chemotherapy, 106, 107 Cherry, I., 73 Coddington, E., 54 Cohen, E. R., 107 Collatz, L., 2, 53, 72, 74 Collier, C., 151 Control, 125 Control processes, 54, 73 Control theory, 54 Convergence, 38 Convexity, 34 Convolution equations, 107 Conway, E. D., 124 Cooke, K. L., 106, 162 Culver, J. W., 107 Curtis, P. C., 72
Dewan, E. M., 150 Difference approximations, 73, 1 Differential approximation, 75, 106, 158 Differential-difference equations, 106
Differential inequalities, 57, 71, 74 Dini, D., 21 Dini's theorem, 70, 74 Dreyfus, S., 55, 162 Duality, 20 Dynamic programming, 2, 19, 23, 51, 53, 54, 71, 79, 103, 124, 153
Electroencephalography, 150 Elliptic equation, 116 Euler equation, 44 Existence and boundedness, 37 Existence and uniqueness, 53
Factorization, 60 Fenchel, W., 2 Fermat's principle, 45 Finite difference methods, 72 First-order linear equation, 15 Fleming, W. H., 124 Forsythe, G. E., 72 Freimer, M., 108 Friedrichs, K. O., 2, 54 Functional differential equations, 87 Fundamental lemma, 22
Gauss-Seidel method, 118 Glicksberg, I., 54, 73, 108 Gluss, B., 106, 151, 162 Gradient techniques, 114 Gram-Schmidt orthogonalization, 98 Green's function, 34, 54, 69 Gross, O., 6, 54, 73, 108
Hammer, P. C., 13, 23 Hanson, M. A., 54 Hardy, G. H., 23 Hartman, P., 72, 108 Heron of Alexandria, 12
Hopf, E., 124 Hopf-Lax equation, 122
Identification of systems, 106, 159 Ikebe, T., 54 Ill-conditioned linear systems, 109 Ill-conditioning, 100 Inequalities, 54, 72 Inhomogeneous equation, 29 Integro-differential equation, 93 Invariant imbedding, 52, 55, 71, 104,
108, 109, 123 Inverse problem, 130, 146
Jacobian matrix, 77 Jones, G. S., 74 Juncosa, M., 6, 123
Kagiwada, H., 6, 55, 105, 109, 150, 151, 162
Kalaba, R., 23, 24, 25, 54, 55, 73, 105, 106, 107, 108, 109, 123, 124, 150, 151, 162
Kalman, R. E., 108 Kantorovich, L. V., 22, 23 Kardashov, A. A., 106 Karlin, S., 74 Kato, T., 54 Koepcke, R. W., 108 Kolmogorov, A., 107 Kotkin, B., 6, 73, 105, 106, 107, 123 Krasnoselskii, M. A., 72 Krylov, V. I., 23 Kuhn, H., 2 Kumar, K. S. P., 25
Lagrange, J., 30 Lagrange multipliers, 126 Lanczos, C., 107 Langenhop, C. E., 24
Laplace transform, 109 Lavi, A., 151 Lax, P., 124 Levinson, N., 54 Levitt, L. C., 107 Liberman, L. Kh., 107 Linear closure, 107 Linearization, 25 Liouville, J., 23 Lockett, J., 109
Magnetohydrodynamics, 41 Markovian decision processes, 108 Matrix analysis, 53 Maximum operation, 15 Mean-value theorem, 72, 108 Memory, 95 Mesarovic, M. D., 55, 107 Method of steepest descent, 123 Minkowski, A., 2 Mlak, W., 74 Monotone behavior, 57 Monotonicity, 20, 21, 95, 97, 121 Moser, J., 24 Motzkin, T., 72, 73 Multiple-point linear boundary problems, 105 Multipoint boundary-value problems, 80
Nemytski, V. V., 25 Neurophysiology, 106 Neustadt, L., 54 Neutron transport theory, 162 Newton-Raphson-Kantorovich approximation, 19 Newton-Raphson method, 7, 22 Nonlinear oscillations, 150 Nonlinear problems, 25 Nonlinear summability, 116, 123
Numerical integration, 73 Numerical treatment, 72
Optimal design, 125 Orbit determination, 105, 137 Ostrowski, A., 23
Parabolic equations, 112 Parabolic partial differential equation, 63 Partial differential equations, 111 Peixoto, M., 2, 72 Periodogram analysis, 139 Perturbation techniques, 106, 107 Peterson, R., 151 Picard algorithm, 40 Picard iteration, 117 Pollack, M., 162 Polya, G., 72, 108 Positivity, 34, 62, 108, 121 Prager, W., 43 Prestrud, M., 55, 108, 109 Property W, 98
Quadratic convergence, 9, 11, 21 Quasilinearization, 2, 13, 36
Radbill, J. R., 25 Radiative transfer, 130, 150 Random walk, 64 Rayleigh, J. W. S., 134 Redheffer, R. M., 2, 72, 74 Reid, W. T., 72 Renewal equation, 91 Renormalization techniques, 107 Riccati differential equation, 104 Riccati equation, 7, 14, 24 Richardson, J. M., 5, 107, 124 Ritt, J. F., 23 Rosenbloom, P., 123 Roth, R., 106, 151, 162
Savenko, S. S., 23
Scattering theory, 24
Scher, A., 145
Schlesinger, L., 72, 108
Schrödinger equation, 7
Schwimmer, H. S., 25
Selvester, R., 151
Shortest-route problem, 162
Simultaneous calculation, 81
Smith, M. C., 25
Square roots, 12
Sridhar, R., 6, 25
Stepanov, V. V., 25
Storage, 75, 80, 95
Strauss, J. C., 151
Sturm-Liouville equation, 29, 69
Sturm-Liouville expansion, 66
Successive approximations, 19, 54, 105
Sylvester, J. J., 13, 23
Systems, 76
Temple, G., 25
Thomas-Fermi equation, 54
Tikhomirov, V. M., 107
Titchmarsh, E. C., 72
Tornheim, L., 72
Total positivity, 74
Tucker, A. W., 2
Tukey, J., 23
Turbulence, 124
Two-point boundary-value problems, 27, 105
Ueno, S., 132, 150
Vainberg, M. M., 23
Vala, K., 74
Valiron, G., 72
Van der Pol, B., 134
Van der Pol equation, 134
Variation-diminishing, 73
Variation of parameters, 30
Vashakmadze, T. S., 105
von Kármán, T., 25
Walter, W., 162
Wasow, W., 72, 73
Watson, G. N., 23
Weibenson, N., 162
Wengert's numerical method, 151
Whiteside, D. T., 22
Wilkins, J. E., 72
Wing, G. M., 55, 73, 109
Wintner, A., 108
Wronskian, 30, 60, 97
Wu, C. S., 25
Zaidenberg, E. D., 107
Selected RAND Books
BELLMAN, RICHARD. Adaptive Control Processes: A Guided Tour. Princeton, N.J.: Princeton University Press, 1961.
BELLMAN, RICHARD. Dynamic Programming. Princeton, N.J.: Princeton University Press, 1957.
BELLMAN, RICHARD. Introduction to Matrix Analysis. New York: McGraw-Hill Book Company, Inc., 1960.
BELLMAN, RICHARD (ed.). Mathematical Optimization Techniques. Berkeley and Los Angeles: University of California Press, 1963.
BELLMAN, RICHARD, and KENNETH L. COOKE. Differential-Difference Equations. New York: Academic Press, 1963.
BELLMAN, RICHARD, and STUART E. DREYFUS. Applied Dynamic Programming. Princeton, N.J.: Princeton University Press, 1962.
BELLMAN, RICHARD E., HARRIET H. KAGIWADA, ROBERT E. KALABA, and MARCIA C. PRESTRUD. Invariant Imbedding and Time-Dependent Transport Processes, Modern Analytic and Computational Methods in Science and Mathematics, Vol. 2. New York: American Elsevier Publishing Company, Inc., 1964.
BELLMAN, RICHARD E., ROBERT E. KALABA, and MARCIA C. PRESTRUD. Invariant Imbedding and Radiative Transfer in Slabs of Finite Thickness, Modern Analytic and Computational Methods in Science and Mathematics, Vol. 1. New York: American Elsevier Publishing Company, Inc., 1963.
BUCHHEIM, ROBERT W., and the Staff of The RAND Corporation. New Space Handbook: Astronautics and Its Applications. New York: Vintage Books, A Division of Random House, Inc., 1963.
DANTZIG, G. B. Linear Programming and Extensions. Princeton, N.J.: Princeton University Press, 1963.
DRESHER, MELVIN. Games of Strategy: Theory and Applications. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1961.
DREYFUS, STUART. Dynamic Programming and the Calculus of Variations. New York: Academic Press, 1965.
DUBYAGO, A. D. The Determination of Orbits. Translated by R. D. Burke, G. Gordon, L. N. Rowell, and F. T. Smith. New York: The Macmillan Company, 1961.
EDELEN, DOMINIC G. B. The Structure of Field Space: An Axiomatic Formulation of Field Physics. Berkeley and Los Angeles: University of California Press, 1962.
FORD, L. R., JR., and D. R. FULKERSON. Flows in Networks. Princeton, N.J.: Princeton University Press, 1962.
HARRIS, THEODORE E. The Theory of Branching Processes. Berlin, Germany: Springer-Verlag, 1963; Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1964.
HITCH, CHARLES J., and ROLAND McKEAN. The Economics of Defense in the Nuclear Age. Cambridge, Mass.: Harvard University Press, 1960.
JUDD, WILLIAM R. (ed.). State of Stress in the Earth's Crust. New York: American Elsevier Publishing Company, Inc., 1964.
QUADE, EDWARD S. (ed.). Analysis for Military Decisions. Chicago: Rand McNally & Company. Amsterdam: North-Holland Publishing Company, 1964.
SELIN, IVAN. Detection Theory. Princeton, N.J.: Princeton University Press, 1965.
WILLIAMS, J. D. The Compleat Strategyst: Being a Primer on the Theory of Games of Strategy. New York: McGraw-Hill Book Company, Inc., 1954.