
Nonlinear Analysis, Theory, Methods & Applications, Vol. 7, No. 2, pp. 177-207, 1983. 0362-546X/83/020177-31 $03.00/0

Printed in Great Britain. © 1983 Pergamon Press Ltd.

BIFURCATION AND OPTIMAL STOCHASTIC CONTROL

P. L. LIONS

CEREMADE, Université Paris IX-Dauphine, Place de Lattre de Tassigny, 75775 Paris Cedex 16, France

(Received 24 May 1982)

Key words and phrases: Bifurcation, semilinear elliptic problems, Hamilton-Jacobi-Bellman equations, optimal stochastic control.

INTRODUCTION

WE CONSIDER here two questions concerning Bifurcation Theory and Optimal Stochastic Control.

The first one concerns the interpretation in terms of Optimal Stochastic Control of a bifurcation (in semilinear second-order elliptic equations). Let us give a typical example: let O be a bounded, connected, smooth domain in ℝ^N. We consider nonnegative solutions of

-Δu + λu^p = λu in O, u ∈ C²(Ō), u ≥ 0 in O, u = 0 on ∂O, (1)

where λ > 0, p > 1. It is well known (see for example Rabinowitz [43], Berestycki [6], Lions [37]) that, if we denote by λ₁ the first eigenvalue of -Δ (with Dirichlet boundary conditions), we have:

(i) for 0 < λ ≤ λ₁, the unique solution of (1) is u = 0;
(ii) for λ > λ₁, there exist exactly two solutions of (1): 0 and u_λ, where u_λ(x) > 0 in O.

In other words, at λ = λ₁ there is bifurcation of the curve (λ, u_λ) from the trivial branch of solutions (λ, 0) (this is, by the way, an immediate consequence of the general result concerning bifurcation from a simple eigenvalue; see Crandall & Rabinowitz [11]).

To give a stochastic interpretation of the solutions of (1), we introduce the following Optimal Stochastic Control problems:

u_λ^1(x) = inf_{ξ ∈ K₀} E[∫₀^{τ_x} λ(p - 1)ξ^p(t, ω) exp{λt - λp ∫₀^t ξ^{p-1}(s, ω) ds} dt], (2)

u_λ^2(x) = inf_{ξ ∈ K₁} E[∫₀^{τ_x} λ(p - 1)ξ^p(t, ω) exp{λt - λp ∫₀^t ξ^{p-1}(s, ω) ds} dt], (3)

where (Ω, F, F_t, P) denotes a probability space with a right-continuous filtration of complete sub-σ-algebras F_t of F, and E denotes the expectation; where the state of the system is given by the process x + B_t, where B_t/√2 is a Brownian motion with respect to F_t; where ξ(t, ω) is the control, required to lie in K₀ or K₁, which are detailed below; and where τ_x is the first exit time of x + B_t from O (or Ō). Finally, K₀ (resp. K₁) is the set of bounded progressively measurable processes ξ such that: 0 ≤ ξ a.e. in ℝ₊ × Ω (resp. 0 < δ ≤ ξ a.e. in ℝ₊ × Ω, for some δ > 0 depending on ξ).

Remark. These optimal stochastic control problems are explained below in more detail. Let us point out for the moment that the quantity minimized in (2) or (3) is not necessarily finite


but takes its values in [0, +∞], and that nevertheless u_λ^1, u_λ^2 are finite for every x ∈ Ō and for every λ > 0.

Our main result on this simple example states that we have:

(i) If 0 < λ ≤ λ₁, then u_λ^1 = u_λ^2 = 0;
(ii) If λ > λ₁, then 0 = u_λ^1 < u_λ^2 = u_λ in O.

Of course, formally, (1) is the Hamilton-Jacobi-Bellman equation associated with the control problems (2) or (3) (see Fleming & Rishel [20], Bensoussan & Lions [5], and Krylov [26] for a general presentation of Hamilton-Jacobi-Bellman equations); thus it is natural that u_λ^1, u_λ^2 are solutions of (1). A more interesting phenomenon is that, since K₀ is in some sense the closure of K₁, when the bifurcation occurs the cost function [that is, the quantity minimized in (2) or (3)] becomes highly sensitive to the values of the control ξ(t, ω).

The second question that we consider below is concerned with the existence of analogues of eigenvalues and eigenfunctions for the nonlinear operator of Hamilton-Jacobi-Bellman equations, namely:

𝒜φ = sup_{i≥1} (A_i φ), for φ ∈ C²(Ō),

where A_i = -a^i_{kl}(x) ∂_{kl} + b^i_k(x) ∂_k + c^i(x)* is a sequence of uniformly elliptic operators with smooth coefficients. We denote by λ₁(A_i) the first eigenvalue of the operator A_i with Dirichlet boundary conditions (corresponding to a unique, up to a multiplicative constant, positive eigenfunction).

We introduce here two constants λ₁, λ̄₁ such that:

(i) λ₁ ≤ inf_{i≥1} λ₁(A_i) ≤ sup_{i≥1} λ₁(A_i) ≤ λ̄₁;

(ii) if λ < λ₁, and if (f_i(x))_{i≥1} is a sequence of smooth functions, then there exists a unique solution u ∈ W^{2,∞}(O) of:

sup_{i≥1} (A_i u - f_i) = λu a.e. in O, u = 0 on ∂O; (4)

(iii) if λ < λ̄₁, and if (f_i(x))_{i≥1} is a sequence of nonnegative smooth functions, then there exists a unique nonnegative solution u ∈ W^{2,∞}(O) of (4);

(iv) there exist φ₁, ψ₁ ∈ W^{2,∞}(O) satisfying:

𝒜φ₁ = sup_{i≥1} (A_i φ₁) = λ₁φ₁ a.e. in O, φ₁ < 0 in O, φ₁ = 0 on ∂O; (5)

𝒜ψ₁ = sup_{i≥1} (A_i ψ₁) = λ̄₁ψ₁ a.e. in O, ψ₁ > 0 in O, ψ₁ = 0 on ∂O; (6)

(v) let (φ, λ) ∈ W^{2,∞}(O) × ℝ satisfy:

𝒜φ = λφ a.e. in O, φ = 0 on ∂O.

If φ ≤ 0 then λ = λ₁ and φ = θφ₁ for some θ ≥ 0, and if φ ≥ 0 then λ = λ̄₁ and φ = θψ₁ for some θ ≥ 0.

From this list of results, it is clear that λ₁, λ̄₁ play the role of eigenvalues, and we call them

* In everything that follows, we use the implicit summation convention.


demi-eigenvalues (in particular because of some relation with a result of Berestycki [7] concerning nonlinear Sturm-Liouville problems).

Let us also give a simple example showing the relevance of λ₁, λ̄₁ for bifurcation problems: consider the equation

𝒜u + λ|u|^{p-1}u = λu a.e. in O, u ∈ W^{2,∞}(O), u = 0 on ∂O. (7)

We prove by a simple application of the above results:

(i) If λ ≤ λ₁, the unique solution of (7) is u = 0.
(ii) If λ₁ < λ ≤ λ̄₁, the only solutions of (7) with constant sign are u = 0 and u_λ, where u_λ is a negative solution of (7).
(iii) If λ > λ̄₁, there are exactly three solutions of (7) with constant sign, namely: u = 0, u_λ the negative solution of (7), and ū_λ the positive solution of (7).

Finally, let us mention that λ₁, λ̄₁ have very natural stochastic interpretations (in terms of Optimal Stochastic Control) and that claims (i)-(v) above extend the results on the solvability of Hamilton-Jacobi-Bellman equations obtained by Lions [31] and Evans & Lions [17], while relying heavily on these works (for the derivation of a priori estimates).
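As a quick numerical illustration of these demi-eigenvalues (a sketch with arbitrary one-dimensional data, not taken from the paper), consider O = (0, 1) with the two operators A₁u = -u'' and A₂u = -2u''. Since the operators are proportional, ψ = sin(πx) > 0 satisfies sup_i(A_iψ) = 2π²ψ while φ = -sin(πx) < 0 satisfies sup_i(A_iφ) = π²φ, so here λ₁ = π² and λ̄₁ = 2π². The code below recovers both values by inverse power iteration, using a policy-iteration HJB solver on a finite-difference grid.

```python
import numpy as np

# Demi-eigenvalues of the 1-D HJB operator sup_i(A_i u), with
# A_1 u = -u'' and A_2 u = -2u'' on O = (0,1).  Illustrative sketch only:
# for these proportional operators, lambda_1 = pi^2 (negative eigenfunction)
# and lambda_bar_1 = 2*pi^2 (positive eigenfunction).

n = 201
h = 1.0 / (n - 1)
a = (1.0, 2.0)                         # diffusion coefficients of A_1, A_2
x = np.linspace(h, 1.0 - h, n - 2)     # interior grid points

def hjb_solve(rhs):
    """Policy iteration for sup_i(-a_i u'') = rhs with u = 0 at both ends."""
    pol = np.zeros(n - 2, dtype=int)
    u = np.zeros(n - 2)
    for _ in range(50):
        A = np.zeros((n - 2, n - 2))
        for k in range(n - 2):
            A[k, k] = 2.0 * a[pol[k]] / h**2
            if k > 0:
                A[k, k - 1] = -a[pol[k]] / h**2
            if k < n - 3:
                A[k, k + 1] = -a[pol[k]] / h**2
        u = np.linalg.solve(A, rhs)
        w = np.concatenate(([0.0], u, [0.0]))
        d2 = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / h**2
        new = np.stack([-a[0] * d2, -a[1] * d2]).argmax(axis=0)
        if np.array_equal(new, pol):
            break
        pol = new
    return u

def demi_eig(sign, iters=5):
    """Inverse power iteration started from a profile of the given sign."""
    v = sign * np.sin(np.pi * x)
    lam = 0.0
    for _ in range(iters):
        w = hjb_solve(v)
        lam = np.abs(v).max() / np.abs(w).max()
        v = w / np.abs(w).max()
    return lam

lam_bar = demi_eig(+1.0)   # positive eigenfunction: close to 2*pi^2
lam_1 = demi_eig(-1.0)     # negative eigenfunction: close to pi^2
print(lam_bar, lam_1)
```

Note that the sign of the iterate selects which operator attains the sup, which is exactly the mechanism behind the two distinct demi-eigenvalues.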

I. OPTIMAL STOCHASTIC CONTROL PROBLEMS ASSOCIATED WITH BIFURCATIONS

I.1. An example

Let O be a bounded, connected, smooth domain in ℝ^N. Let us consider the following equation:

-Δu + λu^p = λu in O, u ∈ C²(Ō), u ≥ 0 in O, u = 0 on ∂O, (1)

where λ > 0, p > 1. We recall that for λ ≤ λ₁ (= λ₁(-Δ)), (1) has a unique solution u = 0, and that for λ > λ₁, (1) has exactly two solutions: u = 0 and u_λ, which is the unique positive solution of (1) (see for example Berestycki [6], Amann & Laetsch [3], Rabinowitz [42]).

We now introduce the optimal stochastic control problems that are associated with (1): this is based upon the remark that (1) is equivalent to

sup_{ξ ≥ 0} [-Δu + λ(pξ^{p-1} - 1)u - λ(p - 1)ξ^p] = 0 in O, u ≥ 0 in O, u ∈ C²(Ō), u = 0 on ∂O; (1')

indeed, remark that if u solves (1), then 0 ≤ u ≤ 1 and (1') follows from the convexity of (ξ ↦ ξ^p); next, if u solves (1'), then the supremum is attained at each point x of O for ξ = u(x), and this yields (1).

Let (Ω, F, F_t, P) be a probability space with a right-continuous filtration of complete sub-σ-algebras F_t of F and with some adapted Brownian motion W_t. We set B_t = √2 W_t. The state of the system we want to control is given by x + B_t (for x ∈ Ō). Let ξ be a bounded nonnegative progressively measurable process that we will call the control; for such a control ξ we introduce the following cost function J(x, ξ) ∈ [0, +∞]:

J(x, ξ) = E[∫₀^{τ_x} λ(p - 1)ξ^p(t, ω) exp{λt - λp ∫₀^t ξ^{p-1}(s, ω) ds} dt],

where τ_x is the first exit time of x + B_t from O (or Ō).
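The convexity argument behind the equivalence of (1) and (1') can be spelled out in one line (the tangent-line inequality for ξ ↦ ξ^p):

```latex
% Tangent-line inequality for the convex function g(\xi)=\xi^p, p>1:
u^p \;\ge\; \xi^p + p\,\xi^{p-1}(u-\xi) \;=\; p\,\xi^{p-1}u-(p-1)\,\xi^p,
\qquad u,\xi\ge 0,\quad\text{with equality at }\xi=u.
% Hence, if $-\Delta u+\lambda u^p=\lambda u$, then for every $\xi\ge 0$
-\Delta u+\lambda\bigl(p\,\xi^{p-1}-1\bigr)u-\lambda(p-1)\,\xi^p
\;\le\; -\Delta u+\lambda u^p-\lambda u \;=\;0,
% and the supremum over $\xi\ge 0$ is attained (with value $0$) at $\xi=u(x)$.
```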


Next, let K₀ be the set of bounded nonnegative progressively measurable processes ξ, and let K₁ be the subset of K₀ consisting of processes ξ satisfying:

ξ(t, ω) ≥ δ a.e. in ℝ₊ × Ω

for some δ > 0 depending possibly on ξ. Then we introduce, for all x ∈ Ō:

u_λ^1(x) = inf_{ξ ∈ K₀} J(x, ξ), (2')

u_λ^2(x) = inf_{ξ ∈ K₁} J(x, ξ). (3')

Of course, u_λ^i ≥ 0 in O, u_λ^i = 0 on ∂O (i = 1, 2), and in view of the heuristic dynamic programming principle (see [20] for example) one would expect that u_λ^i solves (1) (i = 1, 2), provided these functions are at least finite. In fact, we prove:

THEOREM I.1.
(i) If 0 < λ ≤ λ₁, then u_λ^1(x) = u_λ^2(x) = 0 for all x ∈ Ō.
(ii) If λ > λ₁, then u_λ^1(x) = 0 and u_λ^2(x) = u_λ(x) for all x ∈ Ō (we recall that u_λ is the unique positive solution of (1)).

Remark 1.1. As will be clear from the proof, in the definition of u_λ^2 we may replace K₁ by

K₂ = {ξ ∈ K₀ : ∀α > 0, ∃δ > 0, ξ(t ∧ τ_x^α(ω), ω) ≥ δ a.e. in ℝ₊ × Ω},

where τ_x^α is the first exit time of x + B_t from O^α = {x ∈ O, dist(x, ∂O) > α}.

We will give, after the proof of theorem I.1, a few remarks on the existence of optimal Markovian controls for (3) [0 is an optimal control for (2)] and on the Cauchy problem associated with (1).

Proof of theorem I.1. Since J(x, 0) = 0, it is clear that u_λ^1 = 0, ∀λ > 0. Now, to prove (i), we need to prove that u_λ^2 = 0 if λ ≤ λ₁. We recall first the well-known stochastic characterization of λ₁ (see Lions [32] for more general results).

LEMMA I.1. The first eigenvalue λ₁ is given by:

λ₁ = sup{λ > 0 : sup_{x∈O} E[e^{λτ_x}] < +∞}. ∎
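Lemma I.1 can be probed numerically (an illustrative sketch with arbitrary parameters, not from the paper): for O = (0, 1) and the state x + B_t with B_t = √2 W_t, the generator is d²/dx², so λ₁ = π²; for λ < π², the function v(x) = E[e^{λτ_x}] solves v'' + λv = 0 with v = 1 on the boundary, i.e. v(x) = cos(√λ(x - 1/2))/cos(√λ/2), and a Monte Carlo estimate should stay finite and roughly match this closed form.

```python
import numpy as np

# Monte Carlo check of Lemma I.1 in dimension one (illustrative only; the
# domain, time step and path count are arbitrary choices).  State: x + B_t
# with B_t = sqrt(2) W_t on O = (0,1); generator d^2/dx^2, so lambda_1 = pi^2
# and E[exp(lam*tau_x)] is finite for lam < pi^2.

def mc_exp_exit(x, lam, n_paths=2000, dt=2e-4, seed=0):
    """Euler simulation of E[exp(lam * tau_x)], tau_x = exit time from (0,1)."""
    rng = np.random.default_rng(seed)
    pos = np.full(n_paths, float(x))
    tau = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    while alive.any():
        m = int(alive.sum())
        pos[alive] += np.sqrt(2.0 * dt) * rng.standard_normal(m)
        tau[alive] += dt
        alive &= (pos > 0.0) & (pos < 1.0)
    return float(np.exp(lam * tau).mean())

def exact(x, lam):
    """Feynman-Kac: v'' + lam v = 0, v(0) = v(1) = 1, valid for lam < pi^2."""
    r = np.sqrt(lam)
    return float(np.cos(r * (x - 0.5)) / np.cos(r / 2.0))

est = mc_exp_exit(0.5, 4.0)
print(est, exact(0.5, 4.0))   # the two values should agree to a few percent
```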

Now, if we take the constant control ξ(t, ω) ≡ ε > 0, we have:

J(x, ε) = λ(p - 1)ε^p E[∫₀^{τ_x} exp{λt(1 - pε^{p-1})} dt],

and for ε small enough (so that pε^{p-1} < 1), we find:

J(x, ε) = [(p - 1)ε^p/(1 - pε^{p-1})] E[exp{λ(1 - pε^{p-1})τ_x} - 1].

Now, in view of lemma I.1, if λ < λ₁ the expectation is bounded independently of ε, and we


conclude since:

0 ≤ u_λ^2(x) ≤ J(x, ε) → 0 as ε → 0.

If λ = λ₁, we remark that by Itô's formula we have:

J(x, ε) = λ₁(p - 1)ε^p v_α(x),

where α = pε^{p-1} and v_α is the solution of:

-Δv_α = 1 + λ₁(1 - α)v_α in O, v_α ∈ C²(Ō), v_α = 0 on ∂O. (9)

But in view of the following lemma, we conclude, since we have

0 ≤ u_λ^2(x) ≤ J(x, ε) ≤ Cε ‖αv_α‖_{L^∞(O)} < ∞,

and αv_α remains bounded as α → 0.

LEMMA I.2. Let v_α be the solution of (9); as α goes to 0, αv_α converges in C²(Ō) to θφ₁, where φ₁ is the normalized eigenfunction associated with λ₁:

-Δφ₁ = λ₁φ₁ in O, φ₁ ∈ C²(Ō), φ₁ = 0 on ∂O, ‖φ₁‖_{L²(O)} = 1, φ₁ > 0 in O,

and where θ is given by: θ = λ₁^{-1} ∫_O φ₁ dx.

Proof of lemma I.2. If we set w_α = αv_α, we have

-Δw_α = α + λ₁(1 - α)w_α in O, w_α > 0 in O, w_α ∈ C²(Ō), w_α = 0 on ∂O.

Multiplying this equation by φ₁ and integrating by parts over O, we find:

λ₁ ∫_O w_α φ₁ dx = α ∫_O φ₁ dx + λ₁(1 - α) ∫_O w_α φ₁ dx,

or

λ₁ ∫_O w_α φ₁ dx = ∫_O φ₁ dx. (10)

This proves in particular that w_α is bounded in L¹_loc(O); more precisely, we have ∫_O w_α φ₁ dx ≤ C (independent of α). We may now (for example) apply the method of Brezis & Turner [10] to obtain

‖w_α‖_{L^∞(O)} ≤ C (independent of α),

and by L^p and Schauder estimates this yields

‖w_α‖_{C^{2,β}(Ō)} ≤ C (0 < β < 1).

Now, if w_α converges (along a subsequence) in C²(Ō) to some w, obviously

-Δw = λ₁w in O, w ≥ 0 in O, w = 0 on ∂O, w ∈ C²(Ō);

and from (10) we deduce λ₁ ∫_O wφ₁ dx = ∫_O φ₁ dx. Therefore w = θφ₁, and the lemma is proved. ∎


We next turn to the proof of (ii): we first prove that we have

u_λ(x) ≤ u_λ^2(x) = inf_{ξ ∈ K₁} J(x, ξ).

Indeed, let ξ ∈ K₁ be such that J(x, ξ) < +∞ (x is fixed in O). Because of the definition of K₁ (ξ ≥ δ > 0), this implies:

E[∫₀^{τ_x} exp{λt - λp ∫₀^t ξ^{p-1}(s, ω) ds} dt] < +∞.

Therefore there exists t_n → +∞ such that:

E[1_{(τ_x > t_n)} exp{λt_n - λp ∫₀^{t_n} ξ^{p-1}(s, ω) ds}] → 0 as n → ∞.

We now apply Itô's formula to u_λ(x + B_t) exp{λt - λp ∫₀^t ξ^{p-1}(s, ω) ds} between 0 and τ_x ∧ t_n, and we obtain

u_λ(x) = E[u_λ(x + B_{τ_x∧t_n}) exp{λ(τ_x ∧ t_n) - λp ∫₀^{τ_x∧t_n} ξ^{p-1}(s, ω) ds}]
  + E[∫₀^{τ_x∧t_n} λ(pξ^{p-1}u_λ - u_λ^p)(x + B_t) exp{λt - λp ∫₀^t ξ^{p-1}(s, ω) ds} dt].

But pξ^{p-1}u_λ - u_λ^p ≤ (p - 1)ξ^p, and we deduce

u_λ(x) ≤ J(x, ξ) + E[u_λ(x + B_{τ_x∧t_n}) exp{λ(τ_x ∧ t_n) - λp ∫₀^{τ_x∧t_n} ξ^{p-1}(s, ω) ds}].

But the second term may be bounded by

‖u_λ‖_{L^∞} E[1_{(τ_x > t_n)} exp{λt_n - λp ∫₀^{t_n} ξ^{p-1}(s, ω) ds}]

and this goes to 0 as n goes to ∞. Now, for α small enough, there exists u_λ^α solution of

-Δu_λ^α + λ(u_λ^α)^p = λu_λ^α in O^α, u_λ^α > 0 in O^α, u_λ^α = 0 on ∂O^α

(indeed, the first eigenvalue of -Δ in O^α converges to λ₁ as α goes to 0). In addition, it is easy to show that, extending u_λ^α to O by 0, we have:

u_λ^α → u_λ in C(Ō) as α → 0.

But the above proof shows that

u_λ^α(x) ≤ inf_{ξ ∈ K₁} J^α(x, ξ), ∀x ∈ O^α,

where

J^α(x, ξ) = E[∫₀^{τ_x^α} λ(p - 1)ξ^p(t, ω) exp{λt - λp ∫₀^t ξ^{p-1}(s, ω) ds} dt].


Now, if ξ ∈ K₂, then ξ(t ∧ τ_x^α(ω), ω) ∈ K₁, and we deduce from the above inequality:

u_λ^α(x) ≤ J(x, ξ)

for all x ∈ O^α; on the other hand, if x ∉ O^α, this inequality is trivially true. Thus, taking α → 0, we obtain:

u_λ(x) ≤ inf_{ξ ∈ K₂} J(x, ξ), ∀x ∈ Ō. (11)

We now prove that:

u_λ^2(x) ≤ u_λ(x), ∀x ∈ Ō, (12)

using the Markovian control ξ_x(t, ω) = u_λ(x + B_{t∧τ_x}).

We first remark that λ₁(-Δ - λ + λp u_λ^{p-1}) > 0. Indeed, from the equation (1) we deduce:

λ₁(-Δ - λ + λu_λ^{p-1}) = 0,

and this yields the above inequality in view of well-known comparison principles for eigenvalues. But this implies (by an extension of lemma I.1 which can be found, for example, in [32]):

sup_{x∈O, θ stopping time} E[exp{(δ + λ)(θ ∧ τ_x) - λp ∫₀^{θ∧τ_x} u_λ^{p-1}(x + B_s) ds}] < ∞ (13)

for some δ > 0. Thus, applying Itô's formula to u_λ(x + B_t) exp{λt - λp ∫₀^t ξ_x^{p-1}(s, ω) ds} between 0 and T ∧ τ_x, we find:

u_λ(x) = E[∫₀^{T∧τ_x} λ(p - 1)ξ_x^p(t, ω) exp{λt - λp ∫₀^t ξ_x^{p-1}(s, ω) ds} dt]
  + E[u_λ(x + B_{T∧τ_x}) exp{λ(T ∧ τ_x) - λp ∫₀^{T∧τ_x} ξ_x^{p-1}(s, ω) ds}].

Therefore

|u_λ(x) - J(x, ξ_x)| ≤ E[∫_{T∧τ_x}^{τ_x} λ(p - 1)ξ_x^p exp{λt - λp ∫₀^t ξ_x^{p-1} ds} dt]
  + E[u_λ(x + B_{T∧τ_x}) exp{λ(T ∧ τ_x) - λp ∫₀^{T∧τ_x} ξ_x^{p-1} ds}],

and the first term may be bounded by:

C E[∫_T^∞ 1_{(τ_x > t)} exp{λt - λp ∫₀^t ξ_x^{p-1} ds} dt] ≤ C ∫_T^∞ e^{-δt} dt = (C/δ) e^{-δT} [because of (13)],

while the second term is bounded by C e^{-δT}.


Letting T → ∞, we find: u_λ(x) = J(x, ξ_x), ∀x ∈ Ō. Now, to obtain (12), we argue as follows: let α > 0; we introduce

ξ^α(t, ω) = ξ_x(t, ω) if t ≤ τ_x^α(ω), ξ^α(t, ω) = (1/p)^{1/(p-1)} if t > τ_x^α(ω).

Obviously, ξ^α ∈ K₁; now, by similar computations as above (cutting again at a time T), we bound |J(x, ξ^α) - u_λ(x)| by two terms. We estimate the first term by

C E[|τ_x - τ_x^α|^{q'}]^{1/q'} E[exp{λq(T ∧ τ_x) - λpq ∫₀^{T∧τ_x} ξ_x^{p-1}(s, ω) ds}]^{1/q},

where q' = q/(q - 1) and q ∈ (1, +∞) is determined such that:

E[exp{λq τ_x - λpq ∫₀^{τ_x} ξ_x^{p-1}(s, ω) ds}] ≤ C (indep. of x, α);

this is possible because of (13), choosing q - 1 small enough. Now, since E[e^{μτ_x}] < ∞ for μ < λ₁ and since τ_x^α(ω) ≤ τ_x(ω) with τ_x^α ↑ τ_x as α → 0, we see that:

E[|τ_x - τ_x^α|^{q'}]^{1/q'} → 0 as α → 0.

Next, we estimate the second term by:

(C/δ) e^{-δT} [in view of (13)].

And we have shown: lim_{α→0} J(x, ξ^α) = u_λ(x). This proves (12) and completes the proof of theorem I.1. ∎

Actually, we proved more than theorem I.1; namely, we proved the

COROLLARY I.1. If λ > λ₁, then we have:

u_λ^2(x) = u_λ(x) = J(x, ξ_x), ∀x ∈ Ō,

where ξ_x is the optimal control given by: ξ_x(t, ω) = u_λ(x + B_t(ω)).


Remark 1.2. A feedback control like ξ_x is called a Markovian control; thus we have proved the existence of an optimal Markovian control.

Remark 1.3. We would like to make a few comments on the Cauchy problem associated with (1), namely:

∂u/∂t - Δu + λu^p = λu in O × (0, +∞), u ∈ C²(O × (0, +∞)),
u(x, 0) = u₀(x) in Ō, u ∈ C(Ō × [0, +∞)), u = 0 on ∂O × [0, +∞), (14)

where u₀ ∈ C₀(Ō) = {v ∈ C(Ō), v = 0 on ∂O} and u₀ ≥ 0. It is well known that there exists a unique solution of (14), and it is a simple exercise on Itô's formula to check that we have: ∀x ∈ Ō, ∀t ≥ 0,

u(x, t) = inf_{ξ ∈ K₀} J(x, t, ξ),

where

J(x, t, ξ) = E[∫₀^{τ_x∧t} λ(p - 1)ξ^p(s, ω) exp{λs - λp ∫₀^s ξ^{p-1}(σ, ω) dσ} ds
  + u₀(x + B_t) 1_{(t < τ_x)} exp{λt - λp ∫₀^t ξ^{p-1}(s, ω) ds}].

Now, if λ > λ₁ and if u₀ ≢ 0, then it is well known (see for example [6]) that u(·, t) → u_λ in C²(Ō) as t → +∞. Therefore, in view of theorem I.1, if ξ ∈ K₀ but ξ ∉ K₁ (or K₂), J(x, t, ξ) need not converge, as t → +∞, to the stationary cost J(x, ξ) (indeed, formally, this is the case when ξ vanishes 'a lot', and in this case the term E[exp{λt - λp ∫₀^t ξ^{p-1}(s) ds}] becomes large).

Remark 1.4. Everything we said in this section remains trivially valid if we replace -Δ by a general uniformly elliptic second-order operator

A = -a_{ij}(x) ∂_{ij} + b_i(x) ∂_i + c(x),

where a_{ij} ∈ C(Ō), b_i, c ∈ L^∞(O), c ≥ 0, and Σ a_{ij}(x)ξ_iξ_j ≥ ν|ξ|², ∀x ∈ Ō, ∀ξ ∈ ℝ^N, for some ν > 0. Then we just have to replace x + B_t by the diffusion process associated with A. We can treat as well Neumann boundary conditions, or even more general ones such as

∂u/∂n + γ(x)u = 0 on ∂O,

where γ ∈ C(∂O), γ ≥ 0, and n is the unit outward normal to ∂O at the point x of ∂O.

I.2. Interpretation of solutions of semilinear elliptic equations

Our goals in this section are first to extend the results of the previous section and second to give a stochastic interpretation of some solutions of semilinear elliptic equations. But as we will speak here only of optimal stochastic control problems and not of differential games problems, the only nonlinearities which we can treat here are either convex or concave (we hope to come back on this point in a future study). To simplify, we will look for solutions of the following three types of equations:

-Δu + λf(u) = λu in O, u ∈ C²(Ō), u ≥ 0 in O, u = 0 on ∂O, (15)

where λ > 0, f(0) = f'(0) = 0, f ∈ C¹(ℝ), f is strictly convex, and lim_{t→+∞} f(t)/t = +∞;

-Δu = λf(u) in O, u ∈ C²(Ō), u ≥ 0 in O, u = 0 on ∂O, (16)

where λ > 0, f(0) > 0, f ∈ C¹(ℝ), and f satisfies: either f is concave and lim_{t→+∞} f(t)t^{-1} < 1, or f is convex.

The first case [equation (15)] is very similar to the case treated in the preceding section. It is known [see [3, 6]] that for λ ≤ λ₁ the only solution of (15) is u = 0, while for λ > λ₁ there are exactly two solutions of (15), 0 and u_λ, and u_λ is the unique positive solution of (15). With the same notations as in the preceding section, we introduce, for ξ ∈ K₀:

J(x, ξ) = E[∫₀^{τ_x} λ[f'(ξ(t, ω))ξ(t, ω) - f(ξ(t, ω))] exp{λt - λ ∫₀^t f'(ξ(s, ω)) ds} dt];

since f'(t)t - f(t) ≥ 0 for t ≥ 0, we see that 0 ≤ J(x, ξ) ≤ +∞. Exactly as in the preceding section, we find:
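The cost J above comes from the same tangent-line device as in Section I.1, now for a general convex f:

```latex
% f convex, f(0)=f'(0)=0: for u,\xi\ge 0,
f(u)\;\ge\; f(\xi)+f'(\xi)(u-\xi),\qquad\text{with equality at }\xi=u,
% so that (15) is formally equivalent to the Bellman equation
\sup_{\xi\ge 0}\Bigl[-\Delta u+\lambda\bigl(f'(\xi)-1\bigr)u
  -\lambda\bigl(f'(\xi)\xi-f(\xi)\bigr)\Bigr]=0\quad\text{in }O,
% whose running cost $\lambda\,(f'(\xi)\xi-f(\xi))$ and discount factor
% $\exp\{\lambda t-\lambda\int_0^t f'(\xi)\,ds\}$ are those appearing in $J(x,\xi)$.
```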

THEOREM I.2. If f ∈ C¹(ℝ), f(0) = f'(0) = 0, f is strictly convex, and lim_{t→+∞} f(t)/t = +∞, then we have:

(i) If 0 < λ ≤ λ₁: inf_{ξ∈K₀} J(x, ξ) = inf_{ξ∈K₁} J(x, ξ) = 0, ∀x ∈ Ō.

(ii) If λ > λ₁: inf_{ξ∈K₀} J(x, ξ) = 0 and inf_{ξ∈K₁} J(x, ξ) = u_λ(x) = J(x, ξ_x), ∀x ∈ Ō,

where ξ_x is the optimal control given by: ξ_x(t, ω) = u_λ(x + B_t(ω)). ∎

We skip the proof of this result since it is absolutely identical to the proof of theorem I.1. Let us also mention that remarks 1.2-1.4 are still valid here.

We now turn to the case of (16) when f is assumed to satisfy:

f ∈ C¹(ℝ), f(0) > 0, f is concave, and lim_{t→+∞} f(t)t^{-1} < 1. (17)

Then it is well known (see Berestycki [6], Lions [37], Amann & Laetsch [3]) that (16), in this case, has a unique solution u (which is positive in O). We denote by g(t) = f(t) - f(0) and by h(t) = g(t) - g'(t)t. We keep the notations of Section I.1 and we introduce, for ξ ∈ K₀, the following cost function:

J(x, ξ) = E[∫₀^{τ_x} λ[f(0) + h(ξ(t, ω))] exp{λ ∫₀^t g'(ξ(s, ω)) ds} dt];

then, by a proof identical to the proof of theorem I.1 (even simpler), one finds:

THEOREM I.3. Under assumption (17), we have:

u(x) = inf_{ξ ∈ K₀} J(x, ξ) = J(x, ξ_x), ∀x ∈ Ō,

where ξ_x is the optimal control defined by: ξ_x(t, ω) = u(x + B_t(ω)). ∎

Let us also mention that analogues of remarks 1.2-1.4 are still valid here. We finally consider the case of (16) when f is assumed to satisfy:

f ∈ C¹(ℝ), f(0) > 0, f'(0) ≥ 0, f is convex. (18)

Then it is well known (see Crandall & Rabinowitz [12], Gelfand [23], Joseph & Lundgren [24], Bandle [4], Mignot & Puel [39], Lions [37]) that there exists a constant λ̄ ∈ (0, +∞] such that:

(i) If λ < λ̄, then (16) has a minimum positive solution u_λ. In addition, we have:

λ₁(-Δ - λf'(u_λ)) > 0; (19)

(ii) If λ > λ̄, then (16) has no solution;
(iii) If λ̄ < ∞, λ = λ̄, and if (16) has a solution, then (16) has a unique positive solution u_λ̄. In addition, we have:

λ₁(-Δ - λ̄f'(u_λ̄)) = 0, u_λ̄ = lim_{λ↑λ̄} u_λ. (20)

In addition, in [12], [39] sufficient conditions are given insuring that λ̄ < ∞ and that (16) has a solution for λ = λ̄. Finally, let us mention that in [12] and in de Figueiredo, Nussbaum & Lions [18, 19], various conditions on f are given insuring the existence, for λ < λ̄, of a solution different from u_λ. Nevertheless, u_λ is the only 'stable' solution of (16) (the precise meaning of the stability is explained in Fujita [21] and Lions [35, 36]).

We keep the notations of the preceding section, and we consider again the set K₀ of bounded progressively measurable processes ξ such that:

ξ(t, ω) ≥ 0 a.e. in ℝ₊ × Ω.

But here, in order to be able to define the cost function, we have to restrict our controls ξ to the set (depending on x):

K₀^x = {ξ ∈ K₀ : E[∫₀^{τ_x} exp{λ ∫₀^t f'(ξ(s, ω)) ds} dt] < +∞}.

Then we define the cost function J(x, ξ) for ξ ∈ K₀^x:

J(x, ξ) = E[∫₀^{τ_x} λ{f(ξ(t, ω)) - f'(ξ(t, ω))ξ(t, ω)} exp{λ ∫₀^t f'(ξ(s, ω)) ds} dt];


obviously, J(x, ξ) is well defined since ξ is bounded and ξ ∈ K₀^x. Finally, we look for

u(x) = sup_{ξ ∈ K₀^x} J(x, ξ).

We have:

THEOREM I.4. Under assumption (18), and if λ < λ̄, then we have:

u_λ(x) = u(x) = sup_{ξ ∈ K₀^x} J(x, ξ) = J(x, ξ_x), ∀x ∈ Ō,

where ξ_x is the optimal control (in K₀^x) defined by: ξ_x(t, ω) = u_λ(x + B_t(ω)). In addition, if λ̄ < ∞ and if (16) has a solution for λ = λ̄, then we have:

u_λ̄(x) = u(x) = sup_{ξ ∈ K₀^x} J(x, ξ) = lim_{λ'↑λ̄} J(x, ξ_x^{λ'}), ∀x ∈ Ō,

where ξ_x^{λ'} is the control (in K₀^x) defined by: ξ_x^{λ'}(t, ω) = u_{λ'}(x + B_t(ω)). ∎

We see that if λ < λ̄, ξ_x is an optimal Markovian control, while for λ = λ̄ [if (16) has a solution] the ξ_x^{λ'} define so-called ε-optimal Markovian controls. Let us also mention that analogues of remarks 1.2-1.4 hold here.

Proof of theorem I.4. We first show that u_λ(x) ≥ u(x), ∀x ∈ Ō. Indeed, let ξ ∈ K₀^x; just as in the proof of theorem I.1, there exists t_n → +∞ such that:

E[1_{(τ_x > t_n)} exp{λ ∫₀^{t_n} f'(ξ(s, ω)) ds}] → 0 as n → ∞.

Now, applying Itô's formula to u_λ(x + B_t) exp{∫₀^t λf'(ξ(s, ω)) ds} between 0 and τ_x ∧ t_n, we find:

u_λ(x) = E[u_λ(x + B_{t_n}) 1_{(τ_x > t_n)} exp{∫₀^{t_n} λf'(ξ(s, ω)) ds}]
  + E[∫₀^{τ_x∧t_n} λ{f(u_λ) - f'(ξ(s, ω))u_λ}(x + B_s) exp{∫₀^s λf'(ξ(σ, ω)) dσ} ds].

But the first term goes to 0, because of the choice of t_n. And the second term is larger than:

E[∫₀^{τ_x∧t_n} λ{f(ξ(s, ω)) - f'(ξ(s, ω))ξ(s, ω)} exp{∫₀^s λf'(ξ(σ, ω)) dσ} ds],

and this quantity converges, as n → ∞, to J(x, ξ), since ξ ∈ K₀^x. Next, we define ξ_x(t, ω) = u_λ(x + B_t(ω)); since we have:

λ₁(-Δ - λf'(u_λ)) > 0,

a trivial application of Itô's formula shows (i) that ξ_x ∈ K₀^x, and (ii) that u_λ(x) = J(x, ξ_x). This proves the case λ < λ̄ of theorem I.4.

Now, if λ = λ̄ < ∞ and if u_λ̄ exists, then clearly:

u_λ̄(x) = lim_{λ'↑λ̄} J(x, ξ_x^{λ'}) = lim_{λ'↑λ̄} u_{λ'}(x),


and ξ_x^{λ'} ∈ K₀^x (for λ = λ̄), since λ₁(-Δ - λ̄f'(u_{λ'})) > λ₁(-Δ - λ̄f'(u_λ̄)) = 0. Thus:

u_λ̄(x) ≤ u(x), ∀x ∈ Ō.

But, on the other hand, if ξ ∈ K₀^x for λ = λ̄, then ξ ∈ K₀^x for every λ < λ̄; thus:

J(x, ξ) ≤ lim_{λ↑λ̄} u_λ(x) = u_λ̄(x), and therefore u(x) = u_λ̄(x);

and this completes the proof of theorem I.4. ∎

We would like to conclude this section with a general remark: we have seen, in the three above cases, a stochastic interpretation of some solutions of semilinear elliptic equations. It seems that, in general, one can give a stochastic interpretation of stable solutions of semilinear elliptic equations (in particular if one has λ₁(-Δ - f'(u)) > 0 for the solution u).

II. DEMI-EIGENVALUES FOR THE HAMILTON-JACOBI-BELLMAN OPERATOR

II.1. Notations and assumptions

Let O be a bounded, connected, smooth domain in ℝ^N. Let (A_i)_{i≥1} be a sequence of uniformly elliptic second-order operators:

A_i = -a^i_{kl}(x) ∂_{kl} + b^i_k(x) ∂_k + c^i(x),

where a^i_{kl}, b^i_k, c^i satisfy:

a^i_{kl}, b^i_k, c^i are smooth on Ō, with norms bounded uniformly in i; (21)

∃ν > 0, ∀i ≥ 1, ∀x ∈ Ō, ∀ξ ∈ ℝ^N: a^i_{kl}(x) ξ_k ξ_l ≥ ν|ξ|². (22)

We will be concerned with the following type of equations:

sup_{i≥1} {A_i u - f_i} = 0 a.e. in O, u ∈ W^{2,∞}(O), u = 0 on ∂O; (23)

where (f_i) are given functions satisfying:

sup_{i≥1} ‖f_i‖_{W^{2,∞}(O)} < +∞. (24)

This problem arises in connection with the general problem of Optimal Control of solutions of stochastic differential equations via the argument of Dynamic Programming: these equations are known as Hamilton-Jacobi-Bellman equations.

Let us briefly describe the associated Optimal Stochastic Control problem: we define an admissible system A as the collection of (i) a probability space (Ω, F, F_t, P) with a right-continuous filtration of complete sub-σ-algebras F_t of F, (ii) a Brownian motion B_t adapted to F_t, (iii) a bounded progressively measurable process i(t, ω) with values in ℕ* = {1, 2, ...}, and (iv) a family (y_x(t))_{x∈Ō} of solutions of the equation:

dy_x(t) = σ^{i(t)}(y_x(t)) dB_t - b^{i(t)}(y_x(t)) dt, t ≥ 0, y_x(0) = x, (25)

where σ^i(x) = √2 (a^i(x))^{1/2} (for example). For each admissible system A, we define a cost


function J(x, A):

J(x, A) = E[∫₀^{τ_x} f_{i(t)}(y_x(t)) exp{-∫₀^t c^{i(s)}(y_x(s)) ds} dt],

where τ_x is the first exit time from Ō of the process (y_x(t)). Finally, we minimize J(x, A) over all admissible systems A:

u(x) = inf_A J(x, A). (26)

Let us recall briefly a few known results: (1) If c^i(x) ≥ 0 (∀x ∈ Ō, ∀i ≥ 1) and if there exists u ∈ W^{2,∞}(O) solution of (23), then u is given by (26) (and in addition one can define ε-optimal Markovian controls); (2) If c^i(x) ≥ 0 and u given by (26) belongs to W^{2,∞}(O), then u solves (23); for the proofs of these two facts, see Krylov [26], Nisio [40] and Bensoussan & Lions [5]; (3) If c^i(x) ≥ 0, then there exists u ∈ W^{2,∞}(O) solution of (23), and thus u is given by (26); see Lions [31], Evans & Lions [17] for the proof of this result; in Krylov [25], Brezis & Evans [9], Lions [29], Evans & Friedman [16] and Lions & Menaldi [38] some previous results concerning the solution of (23) were obtained. Finally, let us mention that the most general results concerning the solution of (23) are given in Lions [30, 33, 34], including the case when the operators A_i degenerate.
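For intuition (a numerical aside with made-up one-dimensional data, not from the paper), the HJB equation (23) can be approximated on a grid and solved by policy ("Howard") iteration: at each grid point pick the index i achieving the sup, solve the resulting linear Dirichlet problem, and repeat; at convergence the discrete residual sup_i(A_i u - f_i) vanishes.

```python
import numpy as np

# Policy ("Howard") iteration for a 1-D discrete version of (23):
# sup_i(A_i u - f_i) = 0 in (0,1), u(0) = u(1) = 0, A_i u = -a_i u'' + c_i u.
# Illustrative sketch with arbitrary data.

n = 101
h = 1.0 / (n - 1)
a = np.array([1.0, 2.0])               # second-order coefficients a_i
c = np.array([0.0, 1.0])               # zeroth-order coefficients c_i >= 0
x = np.linspace(0.0, 1.0, n)
f = np.vstack([np.ones(n), 2.0 - x])   # data f_i(x)

def solve_policy(pol):
    """Solve A_{pol(x)} u = f_{pol(x)} with a 3-point finite-difference scheme."""
    A = np.zeros((n, n)); rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0          # Dirichlet rows: u(0) = u(1) = 0
    for k in range(1, n - 1):
        i = pol[k]
        A[k, k - 1] = A[k, k + 1] = -a[i] / h**2
        A[k, k] = 2.0 * a[i] / h**2 + c[i]
        rhs[k] = f[i, k]
    return np.linalg.solve(A, rhs)

def residuals(u):
    """The two residuals A_i u - f_i at the interior grid points."""
    d2 = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    return np.stack([-a[i] * d2 + c[i] * u[1:-1] - f[i, 1:-1] for i in range(2)])

pol = np.zeros(n, dtype=int)
for _ in range(30):
    u = solve_policy(pol)
    new_pol = np.zeros(n, dtype=int)
    new_pol[1:-1] = residuals(u).argmax(axis=0)   # index attaining the sup
    if np.array_equal(new_pol, pol):
        break
    pol = new_pol

res = residuals(u).max(axis=0)   # sup_i(A_i u - f_i): ~0 at every interior point
print(float(np.abs(res).max()))
```

Each policy solve is a linear problem with the chosen operator frozen pointwise; with finitely many indices and a monotone scheme, the iteration terminates in a few sweeps.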

Remark II.1. In these references, sometimes, instead of 'control' processes i(t, ω) with values in ℕ*, one takes controls v(t, ω) with values in a closed set V of ℝ^m (for example). In this case the only additional assumption is that a(x, v), b(x, v), c(x, v), f(x, v) are continuous with respect to v, and everything we say below remains valid in this case (remark that by taking a dense family (v_i)_{i≥1} in V, one can reduce this case to the preceding one).

Before concluding this section, we want to mention the method of proof used in [17, 31] in order to solve (23): one considers the following penalized system (for each m ≥ 1 fixed):

A_1 u_ε^1 + β_ε(u_ε^1 - u_ε^2) = f_1 in O, u_ε^1 ∈ C²(Ō), u_ε^1 = 0 on ∂O;
A_2 u_ε^2 + β_ε(u_ε^2 - u_ε^3) = f_2 in O, u_ε^2 ∈ C²(Ō), u_ε^2 = 0 on ∂O;
...
A_m u_ε^m + β_ε(u_ε^m - u_ε^1) = f_m in O, u_ε^m ∈ C²(Ō), u_ε^m = 0 on ∂O; (25')

where

β_ε(t) = (1/ε)β(t), β ∈ C^∞(ℝ), β(t) = 0 if t ≤ 0, β'(t) > 0 if t > 0, β''(t) > 0 if t > 0.

In [17, 31] it is proved that, if c^i ≥ 0 (∀i ≥ 1) and if we assume (21), (22), (24), then ‖u_ε^i‖_{W^{2,∞}(O)} ≤ C (indep. of m and of ε); and, as ε goes to 0, u_ε^i converges to u_m ∈ W^{2,∞}(O), which is the solution of

sup_{1≤i≤m} (A_i u_m - f_i) = 0 a.e. in O, u_m ∈ W^{2,∞}(O), u_m = 0 on ∂O. (23-m)

Since u_m is bounded in W^{2,∞}(O), taking m → ∞, one obtains the solution u = lim_m u_m of (23).

Remark II.2. But an easy examination of the proof of [17, 31] shows that if, for ε small enough, there exists (u_ε^i)_{1≤i≤m} solution of (25'), if ‖u_ε^i‖_{L^∞(O)} ≤ C (indep. of ε, i, m), and if we assume (21), (22) and

sup_{i≥1} ‖f_i‖_{W^{1,∞}(O)} < ∞; ∃C > 0, ∀e ∈ ℝ^N with |e| = 1: ∂²f_i/∂e² ≤ C in 𝒟'(O) (∀i ≥ 1), (24')

then we have ‖u_ε^i‖_{W^{2,∞}(O)} ≤ C (indep. of ε, i, m), and we obtain in this case, as in [17, 31], a solution of (23-m) for all m and a solution of (23). We will use this remark many times in what follows.

Remark II.3. In Evans [14, 15], it is proved that if (21), (22) and (24) are satisfied, then any solution u of (23) satisfies u ∈ C^{2,α}(O) for some 0 < α < 1 [depending only on the bounds in (21), (22) and (24)].

In what follows, we show in Section II.2 that, roughly speaking, if (23) possesses a pair of ordered sub- and supersolutions, then there exists a solution of (23) between these two functions. This auxiliary result will be one of the key ingredients which enables us to prove, in Section II.3, the existence of demi-eigenvalues as indicated in the Introduction. Finally, in Section II.4, we present various applications of and comments on these demi-eigenvalues.

II.2. An auxiliary result

Let f_i(x, t) be given functions on Ō × ℝ satisfying:

sup_{i≥1} ‖f_i(x, t)‖_{W^{2,∞}(O×B_R)} < +∞, ∀R < ∞; (26')

where B_R = {t ∈ ℝ, |t| < R}. We will study in this section the equation:

sup_{i≥1} {A_i u(x) - f_i(x, u(x))} = 0 a.e. in O, u ∈ W^{2,∞}(O), u = 0 on ∂O. (23')

We will assume that there exist u̲, ū, respectively subsolution and supersolution of (23'), that is, satisfying:

sup_{i≥1} {A_i u̲(x) - f_i(x, u̲(x))} ≤ 0 a.e. in O,

sup_{i≥1} {A_i ū(x) - f_i(x, ū(x))} ≥ 0 a.e. in O, (27)

u̲, ū ∈ W^{2,∞}(O), u̲ ≤ 0 ≤ ū on ∂O, u̲ ≤ ū in O.

Then we have the

THEOREM II.1. Under assumptions (21), (22), (26'), and if there exist u̲, ū satisfying (27), then there exists u solution of (23') which satisfies in addition u̲ ≤ u ≤ ū in Ō. ∎

This result will be very useful in the following sections and is, of course, an extension of the well-known result corresponding to the case A_i = A (∀i ≥ 1), f_i = f (∀i ≥ 1) (see Amann [1, 2] for this special case).
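In the special case A_i = -Δ, f_i = f just mentioned, the existence proof reduces to the classical monotone iteration between the ordered sub- and supersolutions. The sketch below (illustrative parameters, not from the paper) runs this iteration for the model problem (1) of Section I with p = 2 on O = (0, 1), starting from the supersolution ū ≡ 1 (the subsolution is 0), and converges downward to the positive solution u_λ.

```python
import numpy as np

# Monotone iteration between ordered sub/supersolutions, in the special case
# A_i = -Delta, f_i = f of theorem II.1, applied to the model problem (1)
# with p = 2:  -u'' + lam*u^2 = lam*u on O = (0,1), u = 0 at the ends.
# Illustrative sketch; grid size, lam and the shift K are arbitrary choices.

n = 101
h = 1.0 / (n - 1)
lam = 20.0                      # lam > lambda_1 = pi^2, so u_lam > 0 exists

def f(t):
    return lam * (t - t * t)    # f(t) = lam*t - lam*t^2

K = lam                         # K >= sup|f'| on [0,1]: f(t) + K*t nondecreasing

# Matrix of (-d^2/dx^2 + K) with identity rows at both boundary nodes
A = np.zeros((n, n))
A[0, 0] = A[-1, -1] = 1.0
for k in range(1, n - 1):
    A[k, k - 1] = A[k, k + 1] = -1.0 / h**2
    A[k, k] = 2.0 / h**2 + K

u = np.ones(n)
u[0] = u[-1] = 0.0              # start from the supersolution u_bar = 1
for _ in range(2000):
    rhs = f(u) + K * u
    rhs[0] = rhs[-1] = 0.0
    v = np.linalg.solve(A, rhs) # one monotone step: the iterates decrease
    done = np.max(np.abs(v - u)) < 1e-13
    u = v
    if done:
        break

res = -(u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2 - f(u[1:-1])  # -u'' - f(u)
print(float(np.abs(res).max()), float(u.max()))   # residual should be ~0
```

The shift K plays the same role as the constant K introduced at the start of the proof below: it makes f(t) + Kt monotone, so each linear solve preserves the ordering between successive iterates.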


Proof of theorem II.1. We first make some preliminary substitutions: there exists K > 0 such that c^i(x) + K ≥ 1 in Ō (∀i) and f_i(x, t) + Kt is increasing for t ∈ [-C₀, C₀] (where C₀ is some fixed constant larger than max(‖u̲‖_{L^∞(O)}, ‖ū‖_{L^∞(O)})). We will denote by f̃_i(x, t) the functions defined by:

f̃_i(x, t) = f_i(x, t ∧ ū(x)) + K(t ∧ ū(x));

and we denote by Ã_i the operator:

Ã_i = -a^i_{kl}(x) ∂_{kl} + b^i_k(x) ∂_k + (c^i(x) + K).

We first claim that it is enough to show the existence of u satisfying:

sup_{i≥1} {Ã_i u(x) - f̃_i(x, u(x))} = 0 a.e. in O, u ∈ W^{2,∞}(O), u = 0 on ∂O, u ≥ u̲ in Ō. (23'')

Indeed, if this is the case, remarking that we have:

f̃_i(x, u(x)) ≤ f_i(x, ū(x)) + Kū(x) in O,

we deduce from the definition of f̃_i:

sup_{i≥1} {Ã_i ū(x) - f̃_i(x, ū(x))} ≥ 0 a.e. in O, ū ∈ W^{2,∞}(O), ū ≥ 0 on ∂O,

and we conclude u ≤ ū in O from the following lemma:

LEMMA II.1. We assume (21), (22), and let (f_i)_{i≥1}, (g_i)_{i≥1} be two sequences of functions satisfying:

sup_{i≥1} {‖f_i‖_{L^∞(O)} + ‖g_i‖_{L^∞(O)}} < ∞; f_i ≥ g_i a.e. in O, ∀i ≥ 1.

We assume there exist u, v ∈ W^{2,∞}(O) satisfying:

sup_{i≥1} (A_i u - f_i) ≥ 0 a.e. in O, sup_{i≥1} (A_i v - g_i) ≤ 0 a.e. in O, u ≥ v on ∂O.

(i) If u ≥ v in O, then either u ≡ v in Ō, or u > v in O and ∂u/∂n < ∂v/∂n on ∂O (n denotes the unit outward normal).
(ii) If c^i(x) ≥ 0 (∀i ≥ 1, ∀x ∈ O), then u ≥ v in Ō.

(The proof of this lemma will be given later on.) Next, to prove the existence of u satisfying (23”), we argue as follows: for each m fixed, we

consider the penalized system:

Ã₁u_ε¹ + β_ε(u_ε¹ − u_ε²) = f̃₁(x, u_ε¹) in O, u_ε¹ ∈ C²(Ō), u_ε¹ = 0 on ∂O

⋮   (25′)

Ã_mu_ε^m + β_ε(u_ε^m − u_ε¹) = f̃_m(x, u_ε^m) in O, u_ε^m ∈ C²(Ō), u_ε^m = 0 on ∂O.

Obviously u is a subsolution of (25′) since:

∀i ≥ 1, Ã_iu ≤ f̃_i(x, u(x)) in O, u ∈ W^{2,∞}(O), u ≤ 0 on ∂O.


Next, recall that, in view of the results of Evans & Friedman [16], if f¹, …, f^m ∈ C(Ō), there exists a unique solution U = KF of:

Ã₁u¹ + β_ε(u¹ − u²) = f¹ in O, u¹ ∈ W^{2,p}(O) (p < ∞), u¹ = 0 on ∂O

⋮   (25″)

Ã_mu^m + β_ε(u^m − u¹) = f^m in O, u^m ∈ W^{2,p}(O) (p < ∞), u^m = 0 on ∂O,

where U = (u¹, …, u^m), F = (f¹, …, f^m). In addition K is a compact mapping from C(Ō) into C(Ō) and: KF¹ ≥ KF² if F¹ ≥ F² (where F¹ ≥ F² means f¹_i ≥ f²_i for 1 ≤ i ≤ m).

Now, if u ∈ C(Ō), u ≥ u in Ō, then ||f̃_i(x, u)||_{L∞(O)} ≤ C (indep. of ε, i, m) and thus (see for example [16]):

sup_{1≤i≤m} ||K_i(u)||_{L∞(O)} ≤ C₁ (indep. of ε, i, m)

where K_i(u) is the solution of (25″) corresponding to f_i = f̃_i(x, u). Then, if 𝒞 = {u ∈ C(Ō), u ≥ u in Ō, ||u||_{L∞(O)} ≤ C₁}, the map u ↦ (K₁(u), …, K_m(u)) is a compact continuous map from the convex set 𝒞 into 𝒞, and thus by the Schauder fixed point theorem there exists (u_ε¹, …, u_ε^m) solution of (25′); in addition we have:

u_ε^i ≥ u in Ō, ||u_ε^i||_{L∞(O)} ≤ C₁.

But this last bound enables us to obtain the following estimate, by the same method as mentioned in remark II.2:

||u_ε^i||_{W^{2,∞}(O)} ≤ C (indep. of ε, i, m).

And passing to the limit (ε → 0, m → ∞) exactly as in [16, 31], we prove the existence of u satisfying (23″). This completes the proof of theorem II.1. ∎
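The existence proof above proceeds by penalization and a fixed point argument. Purely as a discrete illustration (this is not the paper's method), a Bellman equation sup_i(A_iu − f_i) = 0 with finitely many operators can also be solved by policy (Howard) iteration; in the sketch below the two one-dimensional operators and right-hand sides are made up for the demo:

```python
import numpy as np

# Discrete analogue of  sup_i (A_i u - f_i) = 0  with two operators
# A_i = -a_i d^2/dx^2 + c_i  on (0, 1), Dirichlet conditions, solved by
# policy (Howard) iteration: solve the linear problem for the current
# pointwise choice of operator, then update the choice greedily.
n = 60
h = 1.0 / (n + 1)
D2 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h**2  # discrete -d^2/dx^2

x = np.linspace(h, 1.0 - h, n)
ops = [1.0 * D2 + 1.0 * np.eye(n), 2.0 * D2 + 0.5 * np.eye(n)]  # A_1, A_2
rhs = [np.ones(n), 1.0 + np.sin(np.pi * x)]                      # f_1, f_2

policy = np.zeros(n, dtype=int)
for _ in range(50):
    # linear solve for the current policy: row k comes from A_{policy[k]}
    A = np.array([ops[policy[k]][k] for k in range(n)])
    b = np.array([rhs[policy[k]][k] for k in range(n)])
    u = np.linalg.solve(A, b)
    # greedy update: index achieving the sup of A_i u - f_i at each point
    residuals = np.array([ops[i] @ u - rhs[i] for i in range(len(ops))])
    new_policy = np.argmax(residuals, axis=0)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

bellman_residual = np.max(residuals, axis=0)
print(np.max(np.abs(bellman_residual)))  # ~ 0 at convergence
```

Since the matrices are M-matrices, the iterates decrease monotonically and the loop stops after finitely many policy changes.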

Proof of lemma II.1. We will only prove part (i) of lemma II.1, since part (ii) is obtained by exactly the same argument as in the proof of uniqueness in Lions [31].

Now, to prove (i), we first claim that we may assume without loss of generality that c_i(x) ≥ α > 0 (for some α > 0). Indeed, we have for all μ > 0:

sup_{i≥1} (A_iu + μu − (f_i + μu)) ≥ 0 a.e. in O,

sup_{i≥1} (A_iv + μv − (g_i + μv)) ≤ 0 a.e. in O,

and choosing μ large enough, we are done. Thus we assume c_i(x) ≥ α > 0 (∀i ≥ 1, ∀x ∈ Ō).

Next, we remark that we have:

sup_{i≥1} A_i(u − v) ≥ 0 a.e. in O, u − v ∈ W^{2,∞}(O), u − v ≥ 0 in O.

And thus there exist α^{kl}, β_k, γ ∈ L∞(O), with α^{kl}(x)ξ_kξ_l ≥ ν|ξ|² (∀ξ ∈ R^N) a.e. in O and γ ≥ α > 0, satisfying:

−α^{kl} ∂_{kl}(u − v) + β_k ∂_k(u − v) + γ(u − v) ≥ 0 a.e. in O.


We may now apply Bony's maximum principle [8] to show that either u ≡ v or u > v in O. And if u > v in O, following the proof of Protter & Weinberger [41] concerning the Hopf maximum principle, one obtains easily: ∂u/∂n < ∂v/∂n on ∂O.

II.3. Existence and properties of demi-eigenvalues

We will keep the notations of the preceding sections (II.1 and II.2) and we introduce the nonlinear operator 𝒜:

𝒜φ = sup_{i≥1}(A_iφ) ∈ L∞(O), if φ ∈ W^{2,∞}(O).

Let us recall that we denote by λ₁(A_i) the first (or lowest) eigenvalue of the operator A_i (with Dirichlet boundary conditions).

Our main result is the following:

THEOREM II.2. Under assumptions (21), (22), there exist two constants λ₁, λ̄₁ such that:

(i) the following inequalities are satisfied:

λ₁ ≤ inf_{i≥1} λ₁(A_i) ≤ sup_{i≥1} λ₁(A_i) ≤ λ̄₁;   (28)

if c_i(x) ≥ 0 in Ō for all i ≥ 1, then λ₁ > 0.   (29)

(ii) If λ < λ₁ and if (f_i(x))_{i≥1} is a sequence of functions satisfying (24), then there exists a unique solution u ∈ W^{2,∞}(O) of

sup_{i≥1}(A_iu − f_i) = λu a.e. in O, u = 0 on ∂O.   (4)

(iii) If λ < λ̄₁ and if (f_i(x))_{i≥1} is a sequence of nonnegative functions satisfying (24), then there exists a unique nonnegative solution u ∈ W^{2,∞}(O) of (4).

(iv) There exist φ₁, ψ₁ ∈ W^{2,∞}(O) satisfying:

𝒜φ₁ = sup_{i≥1}(A_iφ₁) = λ₁φ₁ a.e. in O, φ₁ < 0 in O, φ₁ = 0 on ∂O   (5)

𝒜ψ₁ = sup_{i≥1}(A_iψ₁) = λ̄₁ψ₁ a.e. in O, ψ₁ > 0 in O, ψ₁ = 0 on ∂O.   (6)

(v) Let (φ, λ) ∈ W^{2,∞}(O) × R satisfy:

𝒜φ = λφ a.e. in O, φ = 0 on ∂O.

If φ < 0 in O, then λ = λ₁ and φ = θφ₁ for some θ ≥ 0. And if φ > 0 in O, then λ = λ̄₁ and φ = θψ₁ for some θ ≥ 0.

Remark II.4. In view of this result, it is clear that λ₁, λ̄₁ play the role of the first eigenvalue for the nonlinear operator 𝒜. And we see that the nonlinearity (of the max form and thus, roughly speaking, typically Lipschitz) creates a pair of what we call demi-eigenvalues. This phenomenon was first observed by Berestycki [7] in a totally different setting, namely bifurcation theory with nondifferentiable mappings: in [7] was considered the case:

−u″ = λu⁺ − μu⁻ in (0, 1), u ∈ C²([0, 1]), u(0) = u(1) = 0


where λ, μ > 0. But this is actually a particular case of the above setting: indeed, set μ = λα and remark that the above equation is equivalent, if α ≥ 1, to

max(−u″, −(1/α)u″) = λu in (0, 1), u ∈ C²([0, 1]), u(0) = u(1) = 0

and, if α < 1, to

min(−u″, −(1/α)u″) = λu in (0, 1), u ∈ C²([0, 1]), u(0) = u(1) = 0.

Since in this specific context it is possible to show the existence of infinitely many demi-eigenvalues, it would be interesting to see if this remains true in the general context of Hamilton-Jacobi-Bellman equations.
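For α ≥ 1 the first pair of demi-eigenvalues of this max operator on (0, 1) can be written down explicitly: where the eigenfunction is positive (so −u″ ≥ 0) the max selects −u″ and sin(πx) gives λ̄₁ = π², while for the negative eigenfunction −sin(πx) the max selects −u″/α and gives λ₁ = π²/α. A finite-difference sanity check (α and the grid are illustrative choices):

```python
import numpy as np

# Demi-eigenvalues of A(u) = max(-u'', -u''/alpha), alpha >= 1, on (0, 1):
# sin(pi x) > 0 is an eigenfunction with eigenvalue pi^2, and
# -sin(pi x) < 0 is an eigenfunction with eigenvalue pi^2 / alpha.
alpha = 2.0
n = 400
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]

def second_diff(u):
    # centred approximation of u'' at the interior nodes
    return (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2

def demi_operator(u):
    # A(u) = max(-u'', -u''/alpha) at the interior nodes
    d2 = second_diff(u)
    return np.maximum(-d2, -d2 / alpha)

psi = np.sin(np.pi * x)      # positive demi-eigenfunction
phi = -np.sin(np.pi * x)     # negative demi-eigenfunction

res_pos = demi_operator(psi) - np.pi**2 * psi[1:-1]
res_neg = demi_operator(phi) - (np.pi**2 / alpha) * phi[1:-1]

print(np.max(np.abs(res_pos)), np.max(np.abs(res_neg)))
```

Both residuals are of the size of the O(h²) discretization error, and the gap π²/α < π² is exactly the split between the two branches of the bifurcation diagram.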

Remark II.5. Let us mention that, using the results of Evans [14, 15], one can prove φ₁, ψ₁ ∈ C^{2,α}(Ō) [for some α ∈ (0, 1)]. In addition, from lemma II.1 one deduces: ∂φ₁/∂n > 0 on ∂O, ∂ψ₁/∂n < 0 on ∂O. Finally, let us mention that part (ii) of lemma II.1 remains valid if λ₁ > 0, while it remains valid if λ̄₁ > 0 and if f_i, g_i, u, v are nonnegative.

It is possible to give a purely analytical proof of theorem II.2 without any help from probability theory, but we prefer to give a simpler proof which uses both partial differential equations and probabilistic techniques. This will enable us to prove at the same time (we keep the notations of Section II.1):

THEOREM II.3. Under assumptions (21), (22), the two constants λ₁, λ̄₁ of theorem II.2 satisfy:

(i) We have

λ₁ = sup{λ ∈ R : sup_{x∈Ō} sup_A E[∫₀^{τ_x} exp(λt − ∫₀^t c^{i(s)}(y_x(s)) ds) dt] < ∞}
  = sup{λ ∈ R : sup_{x∈Ō} sup_A E[exp(λτ_x − ∫₀^{τ_x} c^{i(t)}(y_x(t)) dt)] < ∞},   (30)

λ̄₁ = sup{λ ∈ R : sup_{x∈Ō} inf_A E[∫₀^{τ_x} exp(λt − ∫₀^t c^{i(s)}(y_x(s)) ds) dt] < ∞}
  = sup{λ ∈ R : sup_{x∈Ō} inf_A E[exp(λτ_x − ∫₀^{τ_x} c^{i(t)}(y_x(t)) dt)] < ∞};   (31)

(ii) If λ < λ₁ and if (f_i(x))_{i≥1} is a sequence of functions satisfying (24), then the unique solution u in W^{2,∞}(O) of (4) is given by:

u(x) = inf_A E[∫₀^{τ_x} f_{i(t)}(y_x(t)) exp(λt − ∫₀^t c^{i(s)}(y_x(s)) ds) dt];   (32)

(iii) If λ < λ̄₁ and if (f_i(x))_{i≥1} is a sequence of nonnegative functions satisfying (24), then the unique nonnegative solution u of (4) in W^{2,∞}(O) is given by (32).

We may now turn to the proof of theorems II.2 and II.3, which is divided into several steps; we first need to introduce a few notations. We will denote by ℬ the nonlinear operator defined by:

ℬφ = inf_{i≥1}(A_iφ) ∈ L∞(O) for φ ∈ W^{2,∞}(O).


And we introduce the sets I and J:

I = {λ ∈ R : ∃u_λ ∈ W^{2,∞}(O), 𝒜u_λ = λu_λ + 1 a.e. in O, u_λ ≥ 0 in O, u_λ = 0 on ∂O}

J = {λ ∈ R : ∃v_λ ∈ W^{2,∞}(O), ℬv_λ = λv_λ + 1 a.e. in O, v_λ ≥ 0 in O, v_λ = 0 on ∂O};

in view of [17], if λ ≤ −sup_{i≥1} ||c_i||_{L∞(O)}, then λ ∈ I ∩ J.

Our proof consists of six steps. Step 1: there exist λ₁, λ̄₁ ∈ R ∪ {+∞} such that I = ]−∞, λ̄₁[, J = ]−∞, λ₁[ and λ₁ ≤ λ̄₁; Step 2: λ₁ (resp. λ̄₁) is less than the constant defined in (30) [resp. (31)] and they are finite; Step 3: proof of parts (ii), (iii) of theorem II.2 and of parts (ii), (iii) of theorem II.3; Step 4: ||u_λ||_{L∞} (resp. ||v_λ||_{L∞}) → +∞ as λ → λ̄₁ (resp. λ → λ₁) and proof of (30) and (31); Step 5: proof of parts (iv), (v) of theorem II.2; Step 6: proof of part (i) of theorem II.2.

Step 1: There exist λ₁, λ̄₁ ∈ R ∪ {+∞} such that I = ]−∞, λ̄₁[, J = ]−∞, λ₁[ and λ₁ ≤ λ̄₁.

We first prove that if λ ∈ I (resp. λ ∈ J) then μ ∈ I (resp. μ ∈ J) for all μ ≤ λ. Indeed, let us take for example the case of I: if λ ∈ I, then for μ ≤ λ:

𝒜u_λ ≥ μu_λ + 1 a.e. in O, u_λ ≥ 0 in O, u_λ = 0 on ∂O

while obviously 𝒜0 = 0. Thus, by theorem II.1, there exists u_μ ∈ W^{2,∞}(O) satisfying:

𝒜u_μ = μu_μ + 1 a.e. in O, u_λ ≥ u_μ ≥ 0 in O, u_μ = 0 on ∂O;

and μ ∈ I. Next, to prove that, for example, I is open, we argue as follows: let λ ∈ I; we need to prove that there exists ε > 0 such that λ + ε ∈ I. But, if k ≥ 1, we have:

𝒜(ku_λ) = λ(ku_λ) + k = (λ + ε)ku_λ + 1 + {k − 1 − εku_λ} a.e. in O.

Thus, choosing ε small enough that ε||u_λ||_{L∞(O)} ≤ ½, and k ≥ 2, we obtain:

𝒜(ku_λ) ≥ (λ + ε)(ku_λ) + 1 a.e. in O, ku_λ = 0 on ∂O.

Since 𝒜0 = 0, we deduce from theorem II.1 the existence of u (= u_{λ+ε}) such that:

𝒜u = (λ + ε)u + 1 a.e. in O, u ≥ 0 in O, u = 0 on ∂O.

Finally, if λ ∈ J, we have for μ ≤ λ:

ℬv_λ = λv_λ + 1 ≥ μv_λ + 1 a.e. in O, v_λ ≥ 0 in O, v_λ = 0 on ∂O

and, since ℬ0 = 0, we conclude that μ ∈ J from theorem II.1; the openness of J is proved as for I.

Step 2: λ₁ (resp. λ̄₁) is less than the constant defined in (30) [resp. (31)], and they are finite.

More precisely, we denote by μ₁, μ̄₁:

μ₁ = sup{λ ∈ R : sup_{x∈Ō} sup_A E[∫₀^{τ_x} exp(λt − ∫₀^t c^{i(s)}(y_x(s)) ds) dt] < ∞},

μ̄₁ = sup{λ ∈ R : sup_{x∈Ō} inf_A E[∫₀^{τ_x} exp(λt − ∫₀^t c^{i(s)}(y_x(s)) ds) dt] < ∞}.


It is a simple exercise, which we skip, to check that μ₁, μ̄₁ are also given by:

μ₁ = sup{λ ∈ R : sup_{x∈Ō} sup_A E[exp(λτ_x − ∫₀^{τ_x} c^{i(s)}(y_x(s)) ds)] < ∞}

μ̄₁ = sup{λ ∈ R : sup_{x∈Ō} inf_A E[exp(λτ_x − ∫₀^{τ_x} c^{i(s)}(y_x(s)) ds)] < ∞}.

Next, let λ ∈ J (for example) and let us prove that λ ≤ μ₁. Let v_λ be such that: ℬv_λ = λv_λ + 1 a.e. in O, v_λ ≥ 0 in O, v_λ = 0 on ∂O. Let A be an admissible system; we apply Itô's formula to v_λ(y_x(t)) exp(λt − ∫₀^t c^{i(s)}(y_x(s)) ds) between 0 and T ∧ τ_x, and we find:

v_λ(x) = E[v_λ(y_x(T ∧ τ_x)) exp(λ(T ∧ τ_x) − ∫₀^{T∧τ_x} c^{i(t)}(y_x(t)) dt)]
  + E[∫₀^{T∧τ_x} {A_{i(t)}v_λ(y_x(t)) − λv_λ(y_x(t))} exp(λt − ∫₀^t c^{i(s)}(y_x(s)) ds) dt]

and this yields:

v_λ(x) ≥ E[∫₀^{T∧τ_x} exp(λt − ∫₀^t c^{i(s)}(y_x(s)) ds) dt]

for all T < ∞. Taking T → ∞, we conclude. It just remains to prove that μ̄₁ is finite (i.e. μ̄₁ < +∞). Without loss of generality, we may assume that, for some δ > 0, Q = ]0, δ[^N ⊂ O. And clearly μ̄₁ ≤ μ, where μ is given by:

μ = sup{λ ∈ R : sup_{x∈Q̄} inf_A E[∫₀^{τ̃_x} exp(λt − ∫₀^t c^{i(s)}(y_x(s)) ds) dt] < ∞}

where τ̃_x is the first exit time from Q̄ of the process y_x(t). Now consider the one-dimensional control problem:

dy_x(t) = σ(t, ω) dW_t + b(t, ω) dt, y_x(0) = x ∈ [0, δ]

where σ, b are any progressively measurable processes such that:

√ν ≤ σ ≤ √C, |b| ≤ C,

where C is such that: ν ≤ a_i^{11}(x) ≤ C, ||b_i^1||_{L∞(O)} ≤ C (∀i ≥ 1). Therefore, it is clear that μ ≤ μ₀, given by:

μ₀ = sup{λ ∈ R : sup_{x∈[0,δ]} inf_A E[∫₀^{σ_x} exp{(λ − λ₀)t} dt] < ∞}

where A is any admissible system corresponding to the one-dimensional problem, where σ_x is the first exit time from [0, δ] of y_x(t) and where λ₀ = sup_{i≥1} ||c_i||_{L∞(O)}. We claim now that μ₀ is also given by:

sup{λ ∈ R : ∃u_λ ∈ C²([0, δ]), −νu″_λ + |u′_λ| = ((λ − λ₀)/C) u_λ + 1 in ]0, δ[, u_λ ≥ 0, u_λ(0) = u_λ(δ) = 0}.


Indeed, if we denote by μ̃₀ this last constant, remarking that (after a rescaling) the equation

−νu″_λ + |u′_λ| = ((λ − λ₀)/C) u_λ + 1 in ]0, δ[, u_λ ≥ 0, u_λ(0) = u_λ(δ) = 0

is equivalent to the Bellman equation of the one-dimensional control problem above, we already know that if μ̃₀ ≥ λ₀ then μ̃₀ ≤ μ₀. But it is very easy to show that μ̃₀ > λ₀, and thus we know that μ̃₀ ≤ μ₀.

Next, we show that if μ̃₀ < ∞ then ||u_λ||_{L∞} → +∞ as λ → μ̃₀. Indeed, if this were not the case, there would exist u_{μ̃₀} solution of:

−νu″_{μ̃₀} + |u′_{μ̃₀}| = ((μ̃₀ − λ₀)/C) u_{μ̃₀} + 1 in ]0, δ[, u_{μ̃₀} ∈ C²([0, δ]),

u_{μ̃₀}(0) = u_{μ̃₀}(δ) = 0, u_{μ̃₀} ≥ 0 in [0, δ].

But since the set of λ such that there exists u_λ as above is open (see Step 1), we have a contradiction with the definition of μ̃₀. Thus ||u_λ||_{L∞} → +∞ as λ → μ̃₀, if μ̃₀ is finite. But, by the same argument as above (applying Itô's formula), we see that for λ < μ̃₀:

u_λ(x) = C inf_A E[∫₀^{σ_x} exp{(λ − λ₀)t} dt]

(since λ < μ̃₀, this is a verification result totally similar to those introduced in Krylov [25, 26]). Since max_{[0,δ]} u_λ(x) → +∞ as λ → μ̃₀, one sees immediately that

sup_{x∈[0,δ]} inf_A E[∫₀^{σ_x} exp{(μ̃₀ − λ₀)t} dt] = +∞; thus μ₀ = μ̃₀.

We now conclude by proving that μ̃₀ < ∞. Indeed, let λ < μ̃₀ and let u_λ ∈ C²([0, δ]) satisfy:

−νu″_λ + |u′_λ| = ((λ − λ₀)/C) u_λ + 1 in ]0, δ[, u_λ ≥ 0, u_λ(0) = u_λ(δ) = 0.

As indicated before, u_λ(x) = C inf_A E[∫₀^{σ_x} exp{(λ − λ₀)t} dt] and thus u_λ is unique; therefore u_λ(x) = u_λ(δ − x) for x ∈ [0, δ]. In particular u′_λ(δ/2) = 0. In addition it is easy to prove that:

u′_λ ≥ 0 on [0, δ/2] (one may use for example the general results of Gidas, Ni & Nirenberg [22]).

Now, from the equation one sees that u_λ ∈ W^{3,∞}(0, δ) and v = u′_λ satisfies:

−νv″ + v′ = ((λ − λ₀)/C) v a.e. in ]0, δ/2[, v ≥ 0 in [0, δ/2], v(δ/2) = 0,

and thus (λ − λ₀)/C is less than the lowest eigenvalue λ̃₁ of the operator −ν d²/dx² + d/dx on the domain ]0, δ/2[ with Dirichlet boundary conditions. Therefore μ̃₀ ≤ λ₀ + Cλ̃₁ and we conclude. We could also argue directly by simple ODE methods.
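The lowest Dirichlet eigenvalue of −ν d²/dx² + d/dx invoked here is explicit: the substitution v = e^{x/(2ν)}w turns −νv″ + v′ = λv into −νw″ + (1/(4ν))w = λw, so on an interval of length L the first eigenvalue is νπ²/L² + 1/(4ν). A quick finite-difference check (ν, L and the grid are illustrative choices):

```python
import numpy as np

# First Dirichlet eigenvalue of  L v = -nu v'' + v'  on (0, L):
# the substitution v = exp(x/(2 nu)) w symmetrizes L and gives
# lambda_1 = nu pi^2 / L^2 + 1/(4 nu).  We compare with the smallest
# eigenvalue of the centred finite-difference matrix.
nu, L, n = 1.0, 1.0, 400
h = L / (n + 1)

main = (2.0 * nu / h**2) * np.ones(n)
upper = (-nu / h**2 + 1.0 / (2.0 * h)) * np.ones(n - 1)
lower = (-nu / h**2 - 1.0 / (2.0 * h)) * np.ones(n - 1)
A = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)

lam_min = np.min(np.linalg.eigvals(A).real)
lam_exact = nu * np.pi**2 / L**2 + 1.0 / (4.0 * nu)
print(lam_min, lam_exact)
```

The drift contributes the constant shift 1/(4ν), which is why the eigenvalue stays bounded away from zero even on long intervals.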


Step 3: Proof of parts (ii) and (iii) of theorems II.2 and II.3.

Let λ ∈ J (i.e. λ < λ₁); from the definition of J, there exists v_λ ∈ W^{2,∞}(O) such that:

𝒜v_λ ≥ ℬv_λ = λv_λ + 1 a.e. in O, v_λ ≥ 0 in O, v_λ = 0 on ∂O

𝒜(−v_λ) = λ(−v_λ) − 1 a.e. in O, −v_λ ≤ 0 in O, −v_λ = 0 on ∂O.

And thus, if (f_i(x))_{i≥1} is a sequence of functions satisfying (24), then for k large enough we have:

sup_{i≥1}[A_i(kv_λ) − f_i] ≥ λ(kv_λ) a.e. in O, kv_λ ≥ 0 in O

sup_{i≥1}[A_i(−kv_λ) − f_i] ≤ λ(−kv_λ) a.e. in O, −kv_λ ≤ 0 in O

and we deduce from theorem II.1 the existence of u ∈ W^{2,∞}(O) solution of:

sup_{i≥1}[A_iu − f_i] = λu a.e. in O, u = 0 on ∂O.

In the same way, if λ < λ̄₁ and if (f_i(x))_{i≥1} is a sequence of nonnegative functions satisfying (24), we prove the existence of a nonnegative solution u of the same problem (in this case take 0 as a subsolution, since f_i ≥ 0, ∀i ≥ 1).

Finally, to prove uniqueness of such solutions u, it is enough to show the stochastic representation (32). But this is a simple remake of the arguments introduced by Krylov [25, 26], using the fact that if λ < λ₁ (resp. λ < λ̄₁) then λ < μ₁ (resp. λ < μ̄₁).

Step 4: ||u_λ||_{L∞(O)} (resp. ||v_λ||_{L∞(O)}) → +∞ as λ → λ̄₁ (resp. λ → λ₁) and proof of (30) and (31).

Remark first that, in view of the representation proved above, this implies immediately: λ₁ = μ₁, λ̄₁ = μ̄₁; that is part (i) of theorem II.3. Let us now prove that, for example, ||u_λ||_{L∞(O)} → +∞ as λ → λ̄₁. If this were not the case, there would exist λ_n ↑ λ̄₁ such that ||u_{λ_n}||_{L∞} ≤ C (independent of n). We are going to prove that this would imply: ||u_{λ_n}||_{W^{2,∞}(O)} ≤ C (independent of n). But this would show that there exists u_{λ̄₁} ∈ W^{2,∞}(O) solution of

𝒜u_{λ̄₁} = λ̄₁u_{λ̄₁} + 1 a.e. in O, u_{λ̄₁} ≥ 0 in O, u_{λ̄₁} = 0 on ∂O

(pass to the limit, as n → ∞, as in [13, 16]). And this would contradict the definition of λ̄₁, since I is open (Step 1).

Therefore, we need to prove: ||u_{λ_n}||_{W^{2,∞}(O)} ≤ C (indep. of n) as soon as ||u_{λ_n}||_{L∞} is bounded. To simplify the notations, we denote u^n = u_{λ_n}. Without loss of generality we may assume that c_i ≥ 0, ∀i ≥ 1 (if this is not the case, add a constant to both c_i and λ). As in the proof


of theorem II.1, we know there exists u_ε^{n,i} solution of (m ≥ 1 fixed):

A₁u_ε^{n,1} + β_ε(u_ε^{n,1} − u_ε^{n,2}) = λ_n(u_ε^{n,1} ∧ C₀) + 1 in O

⋮

A_mu_ε^{n,m} + β_ε(u_ε^{n,m} − u_ε^{n,1}) = λ_n(u_ε^{n,m} ∧ C₀) + 1 in O

with u_ε^{n,i} ∈ C²(Ō), u_ε^{n,i} ≥ 0 in Ō, u_ε^{n,i} = 0 on ∂O and where C₀ ≥ ||u^n||_{L∞(O)} (∀n ≥ 1). In addition

||u_ε^{n,i}||_{L∞(O)} ≤ C₁ for some constant C₁ (indep. of m, n, i, ε). But this implies, as in remark II.2 and in the proof of theorem II.1:

||u_ε^{n,i}||_{W^{2,∞}(O)} ≤ C₂ (indep. of m, n, i, ε).

Now, taking ε → 0, m → ∞, we obtain the existence of ũ^n solution of:

𝒜ũ^n = λ_n(ũ^n ∧ C₀) + 1 in O, ũ^n ∈ W^{2,∞}(O), ||ũ^n||_{W^{2,∞}(O)} ≤ C₂, ũ^n ≥ 0 in O, ũ^n = 0 on ∂O.

To conclude, we are going to prove that ũ^n = u^n. But, rewriting the above equation as

𝒜ũ^n + λ_n(ũ^n − C₀)⁺ = λ_nũ^n + 1 in O, ũ^n ∈ W^{2,∞}(O),

ũ^n ≥ 0 in O, ũ^n = 0 on ∂O;

we prove as above that any solution of the preceding equation is given by

ũ^n(x) = inf_A E[∫₀^{τ_x} (1 + ξ(t)λ_nC₀) exp(λ_nt − λ_n∫₀^t ξ(s) ds − ∫₀^t c^{i(s)}(y_x(s)) ds) dt]

where ξ(t, ω) is any progressively measurable process with values in (0, 1) (the infimum being taken over ξ as well). Now, since ||u^n||_{L∞(O)} ≤ C₀, u^n is also a solution of the preceding equation, and thus u^n = ũ^n.
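The blow-up mechanism of Step 4 is already visible for a single operator: with A = −d²/dx² on (0, 1), so that λ̄₁ = λ₁(A) = π², the solution of −u″ = λu + 1 exists for λ < π² and its sup-norm must explode as λ ↑ π². A finite-difference illustration (the grid size and the sampled values of λ are arbitrary choices):

```python
import numpy as np

# Single-operator model: u_lambda solves -u'' = lambda u + 1 on (0, 1),
# u(0) = u(1) = 0, which exists for lambda < pi^2; its sup-norm blows up
# as lambda -> pi^2 = lambda_1.
n = 300
h = 1.0 / (n + 1)
D = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2   # discrete -d^2/dx^2

lam1 = np.pi**2
norms = []
for lam in [0.5 * lam1, 0.9 * lam1, 0.99 * lam1, 0.999 * lam1]:
    u = np.linalg.solve(D - lam * np.eye(n), np.ones(n))
    norms.append(np.max(np.abs(u)))

print(norms)   # strictly increasing, blowing up as lam -> pi^2
```

Normalizing u_λ by its sup-norm, as done in Step 5, is what extracts the eigenfunction in the limit.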

Step 5: Proofs of parts (iv) and (v) of theorem II.2.

From Step 4, we know that ||u_λ||_{L∞(O)} → +∞ as λ → λ̄₁ and ||v_λ||_{L∞(O)} → +∞ as λ → λ₁. We then define ψ_λ = u_λ/||u_λ||_{L∞(O)} and φ_λ = −v_λ/||v_λ||_{L∞(O)}; obviously we have ψ_λ, φ_λ ∈ W^{2,∞}(O),

𝒜φ_λ = λφ_λ − ||v_λ||⁻¹_{L∞(O)} a.e. in O, φ_λ ≤ 0 in O, φ_λ = 0 on ∂O

(and similarly for ψ_λ), and ||φ_λ||_{L∞(O)} = ||ψ_λ||_{L∞(O)} = 1. Now, exactly as in the proof of Step 4, we obtain:

||ψ_λ||_{W^{2,∞}(O)} ≤ C₀, ||φ_λ||_{W^{2,∞}(O)} ≤ C₀

where C₀ does not depend on λ (for λ ≥ λ₁ − 1). And passing to the limit as λ → λ₁ or λ → λ̄₁ (in the same way as in [13, 16, 31]), we obtain part (iv) of theorem II.2.

We next prove part (v) of theorem II.2. For example, let (ψ, λ) ∈ W^{2,∞}(O) × R be such that:

𝒜ψ = λψ a.e. in O, ψ ≥ 0 in O, ψ = 0 on ∂O.


We first show that λ = λ̄₁; indeed, if λ < λ̄₁, it is then trivial to deduce from the stochastic representation (or the uniqueness) that ψ ≡ 0 in O. On the other hand, if λ > λ̄₁, we argue as follows: first, we remark that lemma II.1 implies that if ψ ≢ 0, there exist α, β > 0 such that: βψ₁ ≥ ψ ≥ αψ₁ > 0 in O. Now, by the same verification method as the one introduced by Krylov [25, 26], one obtains easily:

ψ₁(x) = inf_A E[ψ₁(y_x(T ∧ τ_x)) exp(λ̄₁(T ∧ τ_x) − ∫₀^{T∧τ_x} c^{i(t)}(y_x(t)) dt)]

ψ(x) = inf_A E[ψ(y_x(T ∧ τ_x)) exp(λ(T ∧ τ_x) − ∫₀^{T∧τ_x} c^{i(t)}(y_x(t)) dt)]

for all T < ∞. Therefore:

ψ(x) ≥ α inf_A E[ψ₁(y_x(T ∧ τ_x)) exp(λ(T ∧ τ_x) − ∫₀^{T∧τ_x} c^{i(t)}(y_x(t)) dt) 1_{(T<τ_x)}] ≥ e^{(λ−λ̄₁)T} αψ₁(x) in O,

and choosing T large enough, we obtain a contradiction; this proves λ = λ̄₁.

We now show that, necessarily, ψ = θψ₁ for some θ ≥ 0. Indeed, let us first remark that, by a simple application of lemma II.1, we have (if ψ ≢ 0):

αψ₁ ≤ ψ ≤ βψ₁ in O, for some α, β > 0.

Thus, if ψ ≠ θψ₁ (for any θ ≥ 0), and if we denote by

θ̄ = sup{μ ≥ 0, μψ₁ ≤ ψ in O}

then necessarily: ψ ≥ θ̄ψ₁ in O and ψ ≢ θ̄ψ₁. But let Λ > 0 be such that Λ + λ̄₁ > 0; we have:

𝒜ψ + Λψ = (Λ + λ̄₁)ψ in O, 𝒜(θ̄ψ₁) + Λθ̄ψ₁ = (Λ + λ̄₁)θ̄ψ₁ in O.

And applying lemma II.1, we deduce: ψ − θ̄ψ₁ > 0 in O and ∂/∂n(ψ − θ̄ψ₁) < 0 on ∂O. Thus there exists ε > 0 such that ψ ≥ (θ̄ + ε)ψ₁ in O, and this contradicts the definition of θ̄. This proves our claim. (Let us mention that the above argument is an adaptation of a device due to Laetsch [27].)

Step 6: Proof of part (i) of theorem II.2.

We first prove (29): indeed, in view of [17], if c_i ≥ 0 (∀i ≥ 1) then there exists a solution v₀ ∈ W^{2,∞}(O) of

ℬv₀ = 1 in O, v₀ = 0 on ∂O

and thus 0 ∈ J; since J is open, this yields (29).


From (5), we deduce:

A_iφ₁ ≤ 𝒜φ₁ = λ₁φ₁ a.e. in O, φ₁ < 0 in O, φ₁ = 0 on ∂O;

but this implies: λ₁(A_i) ≥ λ₁ (∀i ≥ 1), and we obtain the first part of (28). Finally, to prove the second part, let λ < sup_{i≥1} λ₁(A_i); there exists i such that λ₁(A_i) > λ. Thus it is well-known that there exists ū_λ ∈ C²(Ō) satisfying:

A_iū_λ = 1 + λū_λ in O, ū_λ ≥ 0 in O, ū_λ = 0 on ∂O;

and therefore: 𝒜ū_λ ≥ 1 + λū_λ in O, ū_λ ≥ 0 in O. Since we have 𝒜0 = 0 ≤ 1, applying theorem II.1 we obtain the existence of u_λ (between 0 and ū_λ) satisfying:

𝒜u_λ = 1 + λu_λ a.e. in O, u_λ ≥ 0 in O, u_λ = 0 on ∂O.

Thus λ ∈ I and λ ≤ λ̄₁. This proves (28) and completes the proof of theorems II.2 and II.3. ∎

Remark II.6. As we will see in the next section, λ₁ and λ̄₁ possess many of the properties of the lowest eigenvalue of a second-order uniformly elliptic operator. For the moment, let us just mention that, obviously, λ₁ = λ₁(A_i) (for some i ≥ 1) if and only if, denoting by u₁ the eigenfunction corresponding to λ₁(A_i), we have:

A_ju₁ − λ₁u₁ ≥ 0 in O, for all j ≠ i.

In this case we have in addition φ₁ = −θu₁, for some θ > 0. Indeed, if λ₁ = λ₁(A_i), we have:

A_iφ₁ − λ₁(A_i)φ₁ ≤ 0 in O, φ₁ < 0 in O, φ₁ = 0 on ∂O

and this implies: φ₁ = −θu₁ for some θ > 0; and we conclude. In the same way, λ̄₁ = λ₁(A_i) if and only if we have:

A_ju₁ − λ̄₁u₁ ≤ 0 in O, for all j ≠ i.

In this case we have in addition: ψ₁ = θu₁, for some θ > 0.

II.4. Applications and properties of demi-eigenvalues

We first give a very simple bifurcation result, which has only the value of an example. We keep the notations of the preceding sections and we consider the equation:

𝒜u + λ|u|^{p−1}u = λu a.e. in O, u ∈ W^{2,∞}(O), u = 0 on ∂O   (33)

and we take λ > 0; we assume (21), (22) and c_i ≥ 0 (∀i ≥ 1) for simplicity. Finally, let p > 1. We will consider here only the existence of solutions with constant sign.

We then have:

THEOREM II.4. Under assumptions (21), (22) and if c_i ≥ 0 (∀i ≥ 1), then we have:

(i) If λ < λ₁, the only solution of (33) is u = 0.
(ii) If λ ≤ λ̄₁, the only nonnegative solution of (33) is u = 0.
(iii) If λ > λ₁, there exists a unique negative solution u_λ of (33).
(iv) If λ > λ̄₁, there exists a unique positive solution ū_λ of (33).

In other words, we have the following bifurcation diagram for the equation (33) (and for solutions of constant sign):


Fig. 1

Remark II.7. It is possible to give, as in part I, the stochastic interpretation of u_λ, ū_λ: we will not do it here. It is also possible to show that u_λ, ū_λ are continuous with respect to λ (in the space C^{1,α}(Ō) for any α < 1) and that u_λ → 0 as λ → λ₁ and ū_λ → 0 as λ → λ̄₁.

Remark II.8. On some simple examples, it is possible to show that between λ₁ and λ̄₁ there may be bifurcation of continua of solutions without constant sign.

Remark II.9. This type of split bifurcation diagram (and the existence of demi-eigenvalues) is intimately connected with the Lipschitzian character of the nonlinearity arising in 𝒜: we will give below a striking example explaining this claim (see also [7]).

Proof of theorem II.4. We first show parts (i) and (ii). Now, if λ < λ₁, using theorem II.3 (and the fact that d(x) = λ|u|^{p−1} ≥ 0 in Ō), the stochastic interpretation of:

𝒜u + du = λu a.e. in O, u ∈ W^{2,∞}(O), u = 0 on ∂O,

immediately yields: u = 0.

Next, if λ ≤ λ̄₁, the claim follows obviously from the following lemma, proved below:

LEMMA II.2. We assume (21), (22); and let (d^i)_{i≥1} be a bounded sequence of nonnegative functions in L∞(O). We assume there exists (ψ, λ) ∈ W^{2,∞}(O) × R such that:

sup_{i≥1}(A_iψ + d^iψ) = λψ a.e. in O, ψ = 0 on ∂O.   (34)

(i) If ψ > 0 in O, then λ ≥ λ̄₁, and λ = λ̄₁ if and only if ψ = θψ₁ (for some θ > 0); in particular we have then inf_i d^i = 0 a.e.

(ii) If ψ < 0 in O, then λ ≥ λ₁, and λ = λ₁ if and only if ψ = θφ₁ (for some θ > 0); in particular we have then sup_i d^i = 0 a.e.

Proof of lemma II.2. We will only prove part (i), since parts (i) and (ii) are totally similar. We first show: λ ≥ λ̄₁. Indeed, if λ < λ̄₁, from theorem II.3 and using the stochastic representations, it is easy to conclude: ψ ≡ 0. Now, if λ = λ̄₁, let θ̄ be defined by:

θ̄ = sup{μ > 0, μψ₁ ≤ ψ}.


If ψ = θ̄ψ₁, we are done, since we have then:

sup_{i≥1}(A_iψ₁ + d^iψ₁) = λ̄₁ψ₁ = 𝒜ψ₁ a.e. in O, and thus inf_{i≥1} d^i = 0 a.e. in O.

If ψ ≢ θ̄ψ₁, we argue as follows: we first observe that

𝒜ψ ≤ sup_{i≥1}(A_iψ + d^iψ) = λ̄₁ψ a.e. in O.

On the other hand there exist α^{kl}, β_k, γ ∈ L∞(O) satisfying:

α^{kl}(x)ξ_kξ_l ≥ ν|ξ|², ∀ξ ∈ R^N, a.e. in O

and such that:

sup_{i≥1}(A_iψ) − sup_{i≥1}(A_i(θ̄ψ₁)) = −α^{kl}∂_{kl}(ψ − θ̄ψ₁) + β_k∂_k(ψ − θ̄ψ₁) + γ(ψ − θ̄ψ₁) a.e. in O.

And this yields:

−α^{kl}∂_{kl}(ψ − θ̄ψ₁) + β_k∂_k(ψ − θ̄ψ₁) + (γ − λ̄₁)(ψ − θ̄ψ₁) ≥ −(sup_{i≥1} d^i)ψ a.e. in O,

ψ − θ̄ψ₁ ∈ W^{2,∞}(O), ψ − θ̄ψ₁ ≥ 0 in O.

And using Bony's maximum principle [8] as in the proof of lemma II.1, we deduce: ψ − θ̄ψ₁ > 0 in O, ∂/∂n(ψ − θ̄ψ₁) < 0 on ∂O, and this contradicts the definition of θ̄. This completes the proof of the lemma. ∎

We now turn to the proof of (iv) in theorem II.4 [since the proof of (iii) is identical, we will skip it]. Let λ > λ̄₁ and let us show the existence of ū_λ. We first remark that for ε small enough, we have:

𝒜(εψ₁) + λ(εψ₁)^p = λ̄₁εψ₁ + λε^pψ₁^p ≤ λεψ₁ a.e. in O,

εψ₁ ∈ W^{2,∞}(O), εψ₁ = 0 on ∂O.

On the other hand, there exists K large enough such that K ≥ ε||ψ₁||_{L∞} and K ≥ 1; thus

𝒜K + λK^p ≥ λK in O, K ≥ εψ₁ in O.

Then, applying theorem II.1, the existence of ū_λ is proved.

Next, let u_λ, v_λ be two positive solutions of (33); we may assume without loss of generality that u_λ ≰ v_λ, and we are going to use an argument due to Amann & Laetsch [3]. Let k = sup{μ ∈ (0, 1), μu_λ ≤ v_λ}; necessarily k < 1, and we have v_λ ≥ ku_λ in O. Now we have:

𝒜(ku_λ) + λku_λ^p = λku_λ, and thus 𝒜(ku_λ) + λk^pu_λ^p ≤ λku_λ;

therefore:

𝒜v_λ − 𝒜(ku_λ) + {λpw̃^{p−1} − λ}(v_λ − ku_λ) ≥ 0 in O

(for some function w̃ between ku_λ and v_λ), and from lemma II.1 we deduce: v_λ − ku_λ > 0 in O, ∂/∂n(v_λ − ku_λ) < 0 on ∂O, which contradicts the definition of k. This proves the uniqueness and we conclude. ∎

Remark II.10. We only used the fact that f(t) = λt − λt^p satisfies (for λ > λ̄₁): f ∈ W^{1,∞}_{loc}(R), f(t)t⁻¹ is strictly decreasing on R⁺, f′(0) > λ̄₁ and lim_{t→+∞} f(t)t⁻¹ < λ̄₁.


We now conclude with a result, announced in remark II.9, showing that the existence of demi-eigenvalues and of split bifurcation diagrams is mainly a consequence of the Lipschitz character (and non-differentiability) of the nonlinearity arising in the operator 𝒜. The example that we give below can be interpreted in terms of optimal stochastic control, but we will not do it here.

We first remark that, if we take for A_i:

A_i = −Δ + ξ_i · ∇,

where (ξ_i)_i is a dense family of S^{N−1} (|ξ_i| = 1), then:

𝒜φ = −Δφ + |∇φ|, ∀φ ∈ C²(Ō).

Thus theorem II.2 yields in particular: ∃λ̄₁ > 0 such that

(i) If λ < λ̄₁, there exists a unique solution u_λ ∈ C²(Ō) of

−Δu_λ + |∇u_λ| = 1 + λu_λ in O, u_λ ≥ 0 in O, u_λ = 0 on ∂O.

(ii) As λ → λ̄₁, u_λ/||u_λ||_{L∞} converges in C¹(Ō) [and thus in C²(O)] to the unique solution ψ₁ of:

−Δψ₁ + |∇ψ₁| = λ̄₁ψ₁ in O, ψ₁ ≥ 0 in O, ||ψ₁||_{L∞} = 1, ψ₁ = 0 on ∂O.

(iii) Finally, if ψ ∈ C²(Ō) satisfies:

−Δψ + |∇ψ| = λψ in O, ψ > 0 in O, ψ = 0 on ∂O

then λ = λ̄₁ and ψ = θψ₁ for some θ > 0.
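In one space dimension S⁰ = {−1, +1}, so this dense-family construction reduces to exactly two operators: 𝒜u = max(−u″ + u′, −u″ − u′) = −u″ + |u′|. The equation of (i) can then be solved numerically by iterating on the sign of u′ (a sketch; λ = 2, which must stay below λ̄₁, and the grid are illustrative choices):

```python
import numpy as np

# In 1D,  A u = -u'' + |u'| = max over xi in {-1, +1} of (-u'' + xi u'),
# so  -u'' + |u'| = 1 + lambda u  on (0, 1) with Dirichlet conditions is
# a two-operator Bellman equation; iterate on the sign of u'.
n, lam = 200, 2.0
h = 1.0 / (n + 1)
D2 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h**2
D1 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)

sign = np.ones(n)  # current guess for the maximizing drift xi(x)
for _ in range(100):
    A = D2 + np.diag(sign) @ D1 - lam * np.eye(n)
    u = np.linalg.solve(A, np.ones(n))
    new_sign = np.where(D1 @ u >= 0, 1.0, -1.0)  # xi maximizing xi * u'
    if np.array_equal(new_sign, sign):
        break
    sign = new_sign

residual = D2 @ u + np.abs(D1 @ u) - 1.0 - lam * u
print(np.max(np.abs(residual)))  # ~ 0 at convergence
```

The computed u is positive and symmetric about x = 1/2, with the maximizing drift pointing away from the maximum, in line with the interpretation of |∇u| as a worst-case drift.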

We now consider a somewhat related problem, namely:

−Δu_λ + |∇u_λ|^β = 1 + λu_λ, u_λ ≥ 0 in O, u_λ = 0 on ∂O;   (35)

−Δū_λ + |∇ū_λ|^β = λū_λ, ū_λ > 0 in O, ū_λ = 0 on ∂O;   (36)

where β > 1.

PROPOSITION II.1.

(i) For all λ > 0, there exists a unique solution u_λ of (35), and u_λ is continuous with respect to λ [for example in the space C²(Ō)].

(ii) If λ ≤ λ₁, there is no solution of (36); while if λ > λ₁ there exists a unique positive solution ū_λ of (36), and ū_λ is continuous with respect to λ [for example in the space C²(Ō)].

Proof of proposition II.1. Since this proposition is not essential for our concern here, we will only indicate the main lines of its proof. We first show that if λ is bounded, then solutions of (35), (36) are a priori bounded in W^{1,∞}(O) [and thus in C²(Ō)]: the existence can then be obtained by the techniques of Lions [28]. We will then prove the uniqueness of u_λ, ū_λ.

The proofs of the a priori bounds for u_λ, ū_λ are totally similar and we will do it only for u_λ. We argue as follows: first remark that we have:

∫_O |∇u_λ|² dx + ∫_O |∇u_λ|^β u_λ dx ≤ ∫_O u_λ dx + λ ∫_O u_λ² dx


but

∫_O |∇u_λ|^β u_λ dx ≥ C ||u_λ||^θ_{L^θ(O)}

for some constant C (= C(β)) > 0 and where

θ = β + 1 > 2,

and this shows: ||u_λ||_{L^θ(O)} ≤ C. Therefore u_λ is bounded in L^{2N/(N−2)}(O) (in L^p(O), p < ∞, if N ≤ 2) and, remarking that we have −Δu_λ ≤ 1 + λu_λ in O, u_λ ≥ 0, we then deduce by a straightforward bootstrap argument:

||u_λ||_{L^∞(O)} ≤ C.

Using this bound, it is easy to show that, on a convenient neighborhood of the boundary Γ_ε = {x ∈ O, dist(x, ∂O) < ε}, we have:

−Δ(μd) + |∇(μd)|^β ≥ 1 + λu_λ in Γ_ε, μd ≥ u_λ on ∂Γ_ε

for some large enough constant μ > 0 and where d(x) = dist(x, ∂O). This implies:

||∂u_λ/∂n||_{L^∞(∂O)} ≤ C.

And using the results of Lions [28], one then obtains:

||u_λ||_{W^{1,∞}(O)} ≤ C.

We conclude by giving the proof of the uniqueness of u_λ (the same argument works for ū_λ): again we will use the device of Laetsch [27]. If u_λ, v_λ are two solutions and if u_λ ≰ v_λ, we denote by k = sup{μ ∈ (0, 1), μu_λ ≤ v_λ in O}. Then k < 1 and ku_λ ≤ v_λ. Next, we have:

−Δ(ku_λ) + |∇(ku_λ)|^β ≤ k(1 + λu_λ) < 1 + λ(ku_λ) ≤ −Δv_λ + |∇v_λ|^β

and from the maximum principle, this yields:

ku_λ < v_λ in O, ∂/∂n(ku_λ − v_λ) > 0 on ∂O

which contradicts the definition of k. ∎

Acknowledgement-The author would like to thank A. Bensoussan for discussions on the first question treated here.

REFERENCES

1. AMANN H., Fixed point equations and nonlinear eigenvalue problems in ordered Banach spaces, SIAM Rev. 18, 620-709 (1976).
2. AMANN H., Nonlinear operators in ordered Banach spaces and some applications to nonlinear boundary value problems, in Nonlinear Operators and the Calculus of Variations, Bruxelles, 1975, Springer, Berlin (1976).
3. AMANN H. & LAETSCH T., Positive solutions of convex nonlinear eigenvalue problems, Indiana Univ. Math. J. 25, 259-270 (1976).
4. BANDLE C., Existence theorems, qualitative results and a priori bounds for a class of nonlinear Dirichlet problems, Archs ration. Mech. Analysis 58, 219-238 (1975).
5. BENSOUSSAN A. & LIONS J. L., Applications des Inéquations Variationnelles en Contrôle Stochastique, Dunod, Paris (1978).
6. BERESTYCKI H., Le nombre de solutions de certains problèmes semi-linéaires, J. funct. Analysis (1981).
7. BERESTYCKI H., On some nonlinear Sturm-Liouville problems, J. diff. Eqns 26, 375-390 (1977).
8. BONY J. M., Principe du maximum dans les espaces de Sobolev, C.r. hebd. Séanc. Acad. Sci., Paris 265, 333-336 (1967).
9. BRÉZIS H. & EVANS L. C., A variational approach to the Bellman-Dirichlet equation for two elliptic operators, Archs ration. Mech. Analysis 71, 1-14 (1979).
10. BRÉZIS H. & TURNER R. E. L., On a class of superlinear elliptic problems, Communs P.D.E. 2(6), 601-614 (1977).
11. CRANDALL M. G. & RABINOWITZ P. H., Bifurcation, perturbation of simple eigenvalues and linearized stability, Archs ration. Mech. Analysis 52, 161-180 (1973).
12. CRANDALL M. G. & RABINOWITZ P. H., Some continuation and variational methods for positive solutions of nonlinear elliptic eigenvalue problems, Archs ration. Mech. Analysis 58, 207-218 (1975).
13. EVANS L. C., A convergence theorem for solutions of nonlinear second order elliptic equations, Indiana Univ. math. J. 27, 875-887 (1978).
14. EVANS L. C., Classical solutions of fully nonlinear, convex second-order elliptic equations, Communs pure appl. Math.
15. EVANS L. C., Classical solutions of the Hamilton-Jacobi-Bellman equation for uniformly elliptic operators, to appear.
16. EVANS L. C. & FRIEDMAN A., Stochastic optimal switching and the Dirichlet problem for Bellman's equation, Trans. Am. math. Soc. 253, 365-379 (1979).
17. EVANS L. C. & LIONS P. L., Résolution des équations de Hamilton-Jacobi-Bellman, C.r. hebd. Séanc. Acad. Sci., Paris 290, 1049-1052 (1980).
18. DE FIGUEIREDO D. G., LIONS P. L. & NUSSBAUM R. D., Estimations a priori pour les solutions positives de problèmes elliptiques semilinéaires, C.r. hebd. Séanc. Acad. Sci., Paris 290, 211-220 (1980).
19. DE FIGUEIREDO D. G., LIONS P. L. & NUSSBAUM R. D., A priori estimates for positive solutions of semilinear elliptic equations, J. math. pures Applic. 61, 41-63 (1982).
20. FLEMING W. H. & RISHEL R., Deterministic and Stochastic Optimal Control, Springer, Berlin (1975).
21. FUJITA H., On the nonlinear equations Δu + e^u = 0 and ∂v/∂t = Δv + e^v, Bull. Am. math. Soc. 75, 132-135 (1969).
22. GIDAS B., NI W. M. & NIRENBERG L., Symmetry and related properties via the maximum principle, Communs math. Phys. 68, 209-243 (1979).
23. GELFAND I. M., Some problems in the theory of quasilinear equations, Am. math. Soc. Transl. (Ser. 2) 29, 295-381 (1963).
24. JOSEPH D. D. & LUNDGREN T. S., Quasilinear Dirichlet problems driven by positive sources, Archs ration. Mech. Analysis 49, 241-269 (1973).
25. KRYLOV N. V., On control of the solution of a stochastic integral equation, Theory Probab. Applic. 17, 114-132 (1972).
26. KRYLOV N. V., Controlled Diffusion Processes, Springer, Berlin (1980).
27. LAETSCH T., A uniqueness theorem for elliptic quasivariational inequalities, J. funct. Analysis 12, 226-287 (1975).
28. LIONS P. L., Résolution de problèmes quasilinéaires, Archs ration. Mech. Analysis 74, 335-354 (1980).
29. LIONS P. L., Some problems related to the Bellman-Dirichlet equation for two operators, Communs P.D.E. 5(7), 753-771 (1980).
30. LIONS P. L., Control of diffusion processes in R^N, Communs pure appl. Math. 34, 121-147 (1981).
31. LIONS P. L., Résolution analytique des problèmes de Bellman-Dirichlet, Acta Mathematica 146, 151-166 (1981).
32. LIONS P. L., Problèmes elliptiques du deuxième ordre non sous forme divergence, Proc. R. Soc. Edinb. 84A, 263-271 (1979).
33. LIONS P. L., Equations de Hamilton-Jacobi-Bellman dégénérées, C.r. hebd. Séanc. Acad. Sci., Paris 289, 329-332 (1979); and to appear.
34. LIONS P. L., Equations de Hamilton-Jacobi-Bellman, in Séminaire Goulaouic-Schwartz 1979-1980.
35. LIONS P. L., Asymptotic behaviour of some nonlinear heat equations, Nonlinear Phenomena (to appear).
36. LIONS P. L., Structure of the set of steady-state solutions and asymptotic behaviour of semilinear heat equations, J. diff. Eqns (to appear).
37. LIONS P. L., On the existence of positive solutions of semilinear elliptic equations, MRC Report No. 2209, University of Wisconsin-Madison; SIAM Review (to appear).
38. LIONS P. L. & MENALDI J. L., Optimal control of stochastic integrals and Hamilton-Jacobi-Bellman equations, SIAM J. Control Optim. 20, 58-95 (1982).
39. MIGNOT F. & PUEL J. P., Sur une classe de problèmes non linéaires avec non-linéarité positive, croissante, convexe, Communs P.D.E.
40. NISIO M., Some remarks on stochastic optimal controls, Japan J. Math. 1, 159-183 (1975).
41. PROTTER M. H. & WEINBERGER H., Maximum Principles in Differential Equations, Prentice-Hall, New York.
42. RABINOWITZ P. H., A note on a nonlinear eigenvalue problem for a class of differential equations, J. diff. Eqns 9, 536-548 (1971).
43. RABINOWITZ P. H., Variational methods for nonlinear eigenvalue problems, in Eigenvalues of Nonlinear Problems, C.I.M.E., Ediz. Cremonese, Rome (1979).