
Existence of Global Minima for Constrained Optimization 1

A. E. Ozdaglar 2 and P. Tseng 3

Communicated by A. Miele

1 We thank Professor Dimitri Bertsekas for his comments and support in the writing of this paper.

2 Assistant Professor, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts.

3 Professor, Department of Mathematics, University of Washington, Seattle, Washington.


Abstract. We present a unified approach to establishing the existence of global minima of a (non)convex constrained optimization problem. Our results unify and generalize previous existence results for convex and nonconvex programs, including the Frank-Wolfe theorem, and for (quasi)convex quadratically constrained quadratic programs and convex polynomial programs. For example, instead of requiring the objective/constraint functions to be constant along certain recession directions, we only require them to linearly recede along these directions. Instead of requiring the objective/constraint functions to be convex polynomials, we only require the objective function to be a (quasi)convex polynomial over a polyhedral set and the constraint functions to be convex polynomials or the composition of coercive functions with linear mappings.

Key Words. Solution existence, global minima, constrained optimization,recession directions, convex polynomial functions.


1 Introduction

Consider the constrained optimization problem

    min f0(x)
    s.t. fi(x) ≤ 0, i = 1, ..., r,        (P)

where fi : ℝn → (−∞, ∞], i = 0, 1, ..., r, are proper lower semicontinuous (lsc) functions. We are interested in conditions on f0, f1, ..., fr under which a global minimum of (P) exists. In what follows, we denote by D the feasible set of (P), i.e.,

    D = dom f0 ∩ C,  C = ∩ri=1 Ci,  Ci = {x | fi(x) ≤ 0},  i = 0, 1, ..., r,        (1)

with dom f = {x | f(x) < ∞} for any f : ℝn → (−∞, ∞].

The existence question has been studied extensively in the case where (P) is a convex program, i.e., f0, f1, ..., fr are convex (Refs. 1–5, in particular, Section 2.3.1 of Ref. 4). In particular, Luo and Zhang (Ref. 5, Theorem 3) showed that (P) has a global minimum whenever f1, ..., fr are convex quadratic and f0 is quasiconvex quadratic over a polyhedral set. By adapting the proof idea of Luo and Zhang, Auslender (Ref. 2, Theorem 2) proved existence of a global minimum of (P) when f0, f1, ..., fr are convex, have the same effective domain, and each belongs to the class F defined in Ref. 2. This result is further expanded in Refs. 3–4. The result of Luo and Zhang was also extended by Belousov and Klatte (Ref. 6) to the case where f0, f1, ..., fr are convex polynomials. However, for nonconvex programs, the results have been less complete (Refs. 2–3, 7–9, in particular, Section 3.4 of Ref. 3). Auslender (Ref. 2, Theorem 3; also see Ref. 3, Corollary 3.4.3) showed that if f0 belongs to F, the feasible set C is asymptotically linear, and the horizon function of f0 is nonnegative on the horizon cone of C, then (P) has a global minimum. However, the assumption of C being asymptotically linear excludes cases such as when f1, ..., fr are convex quadratic. Thus, these existence results for nonconvex programs do not generalize the aforementioned results for convex programs.

In this paper, we present an approach to establishing the existence of a global minimum of (P) that unifies and generalizes some of the approaches in Refs. 2 and 4–6, in particular, Section 2.3 of Ref. 4, for convex programs, and the approaches in Refs. 2–3, 7–8, for nonconvex programs. In particular, in Section 3, we prove our main result, which asserts the existence of a global minimum under key properties of the asymptotic behavior of the functions fi along “directions of unboundedness” of the corresponding level sets. In the following sections, we use this result to prove the existence of global minima for many different classes of nonconvex and convex programs. In particular, we consider nonconvex programs in which the functions fi have the form

    fi(x) = hi(Ai x) + biT x + ci,        (2)

where hi is an lsc function having certain coercivity properties. The above functions may be viewed as nonconvex generalizations of convex quadratic functions. We next consider several problem classes studied in Refs. 5–6, 8 and prove existence under more general assumptions. In particular, instead of requiring f0 to be constant along certain recession directions as in Ref. 8, Theorem 21, we only require f0 to linearly recede along these directions. Instead of requiring f0 to be quadratic over a polyhedral set and f1, ..., fr to be quadratic as in Ref. 5, we only require f0 to be a quasiconvex polynomial over a polyhedral set and f1, ..., fr to have the form (2); see Section 5. Instead of requiring f0 to be a convex polynomial as in Ref. 6, we only require f0 to be a quasiconvex polynomial over a polyhedral set; see Sections 4 and 5. We also use our main result to deduce, as a direct consequence, the Frank-Wolfe theorem (Ref. 10); see Section 6 of this paper. The notion of a function retracting along “asymptotically nonpositive” directions will play a key role in our analysis.

Regarding notation, all vectors are viewed as column vectors, and xT y denotes the inner product of the vectors x and y. We denote the 2-norm ‖x‖ = (xT x)1/2 and the ∞-norm ‖x‖∞ = maxi |xi|, where xi denotes the ith component of x. For any proper lsc function f : ℝn → (−∞, ∞], we denote, for each γ ∈ ℝ, the γ-level set as

    levf (γ) = {x | f(x) ≤ γ}.

We write γk ↓ 0 when the sequence γk approaches 0 monotonically from above.

2 Functions Retracting along Asymptotically Nonpositive Directions

The existence of a global minimum of (P) is closely related to properties of the level sets of the functions fi and “directions of unboundedness” for these sets. In particular, when (P) has a finite minimum value, which we assume without loss of generality to be zero, there exists a sequence xk ∈ C, k = 1, 2, ..., with f0(xk) ↓ 0. Existence of global minima amounts to f0 having certain asymptotic linearity properties along each direction d that is a cluster point of {xk/‖xk‖} when ‖xk‖ → ∞. We define these notions below. The first two definitions play a key role in the analysis of Auslender, among others (Refs. 2–3, 7–8).

Definition 2.1 For any set S ⊆ ℝn, its horizon cone S∞ (Ref. 11, Section 3B), also called the asymptotic cone in Ref. 3, is defined to be

    S∞ = {d | ∃ xk ∈ S, tk → ∞, with xk/tk → d}.

When S is a closed convex set, we have

    S∞ = {d | ∃ x ∈ S, with x + αd ∈ S ∀ α ≥ 0},

and moreover S∞ is convex.
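To make Definition 2.1 concrete, the following small numerical sketch (ours, not the paper's; the set S and all code names are illustrative) checks the convex-set characterization above for the closed convex set S = {(x, y) | y ≥ x2}, whose horizon cone is the vertical ray {(0, β) | β ≥ 0}:

```python
import numpy as np

def in_S(p):
    # S = {(x, y) : y >= x^2}, a closed convex set
    x, y = p
    return y >= x**2 - 1e-12

def ray_stays_in_S(x0, d, alphas):
    # convex-set characterization of the horizon cone:
    # d is in S_inf iff x0 + alpha*d stays in S for all alpha >= 0
    return all(in_S(x0 + a * d) for a in alphas)

x0 = np.array([0.0, 1.0])            # a point of S
alphas = np.linspace(0.0, 100.0, 201)
d_up = np.array([0.0, 1.0])          # vertical direction: in the horizon cone
d_side = np.array([1.0, 0.0])        # horizontal direction: not in the horizon cone

print(ray_stays_in_S(x0, d_up, alphas))    # True
print(ray_stays_in_S(x0, d_side, alphas))  # False: the ray leaves the parabola's epigraph
```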

Definition 2.2 For any function f : ℝn → (−∞, ∞], let f∞ be the horizon function (Ref. 11, p. 88), also called the asymptotic function in Ref. 3, defined by

    f∞(d) = lim inf_{h→d, t→∞} f(th)/t.

We say that d ∈ ℝn is a recession direction of f (Ref. 8, p. 7) if

    f∞(d) ≤ 0.

The recession cone of f is defined to be Rf = {d | f∞(d) ≤ 0}.

We next introduce the notion of a direction along which a function asymptotically approaches below zero. Such directions will play a key role in our analysis.
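As a quick illustration of Definition 2.2 (again ours, not the paper's), one can estimate f∞(d) along the fixed ray h = d by evaluating f(td)/t for growing t; for an affine f this tends to bT d, while for a squared norm it grows without bound:

```python
import numpy as np

def horizon_value_along(f, d, ts):
    # crude estimate of f_inf(d): f(t*d)/t for growing t
    # (the true definition also lets h -> d; we fix h = d for illustration)
    return [f(t * np.asarray(d)) / t for t in ts]

b = np.array([2.0, -1.0])
affine = lambda x: b @ x + 5.0    # f(x) = b^T x + 5, with f_inf(d) = b^T d
quad = lambda x: float(x @ x)     # f(x) = ||x||^2, with f_inf(d) = +inf for d != 0

ts = [10.0, 100.0, 1000.0]
d = np.array([0.0, 1.0])
print(horizon_value_along(affine, d, ts))  # approaches b^T d = -1
print(horizon_value_along(quad, d, ts))    # grows like t: horizon value +inf
```

So d = (0, 1) is a recession direction of the affine function (horizon value −1 ≤ 0) but not of the squared norm.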

Definition 2.3 For any function f : ℝn → (−∞, ∞], we say that d ∈ ℝn is an asymptotically nonpositive direction (AND) of f if there exists a sequence xk ∈ dom f, k = 1, 2, ..., satisfying

    lim sup_{k→∞} f(xk) ≤ 0,  ‖xk‖ → ∞,  and  xk/‖xk‖ → d.


We refer to {xk} as an unbounded sequence (US) associated with d.

Notice that every AND is a recession direction of f with unit norm. If f is convex and infx f(x) ≤ 0, then the converse is also true. This is because there exists {yk} ⊆ ℝn with lim sup_{k→∞} f(yk) ≤ 0, so that, for any recession direction d of f with unit norm, the sequence xk = yk + tk d, with tk = ‖yk‖2 + k, is a US associated with d.

We will develop conditions for each fi, based on its behavior along any AND d, which collectively will ensure existence of global minima for (P). Thus, the question of existence is decomposed into questions about whether each fi has certain properties along an AND d. In particular, the following two properties along an AND will play a key role in our analysis.

Definition 2.4 For any function f : ℝn → (−∞, ∞] and any d ∈ ℝn, we say that f recedes below 0 along d on a set S ⊆ dom f if, for each x ∈ S, there exists ᾱ ≥ 0 such that

    f(x + αd) ≤ 0  ∀ α ≥ ᾱ.

Definition 2.5 For any function f : ℝn → (−∞, ∞] and any AND d of f, we say that f retracts along d on a set S ⊆ dom f if, for any US {xk} ⊆ S associated with d, there exists k̄ such that

    f(xk − d) ≤ max{0, f(xk)}  ∀ k ≥ k̄.        (3)

In the case of S = dom f, we say that f retracts along d. We define f to retract strongly along d in the same way, except that “max{0, f(xk)}” is replaced by “f(xk)” in (3).

Roughly speaking, the above definition says that one can retract from xk along d and stay in the level set whenever xk is sufficiently far from the origin. Notice that any AND d of f is an AND of max{0, f} and vice versa. Moreover,

    f retracts strongly along d ⇒ f retracts along d ⇔ max{0, f} retracts strongly along d.

The next two examples show that neither of the preceding two notions implies the other.


Example 2.1 Let

    f(x, y) = { 0 if x ≥ 0 and y = 0,
                0 if x = 0 and y = 1,
                1 otherwise.

Then f is proper, lsc, and d = (1, 0) is an AND. Moreover, f retracts along d, but f does not recede below 0 along d on levf (0).
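The two claims of Example 2.1 can be checked numerically (our sketch, not part of the paper): relation (3) holds along the US {(k, 0)}, while from the level-set point (0, 1) the ray in direction d never reaches the 0-level set.

```python
def f(x, y):
    # the function of Example 2.1
    if x >= 0 and y == 0:
        return 0
    if x == 0 and y == 1:
        return 0
    return 1

# d = (1, 0) is an AND: the US xk = (k, 0) has f(xk) = 0 and ||xk|| -> inf;
# retraction (3) holds along this US:
us = [(k, 0) for k in range(1, 50)]
retract_ok = all(f(x - 1, y) <= max(0, f(x, y)) for x, y in us)
print(retract_ok)  # True

# but f does not recede below 0 along d starting from (0, 1) in lev_f(0):
values_along_ray = [f(0 + a, 1) for a in (1, 2, 5, 10)]
print(values_along_ray)  # stays at 1, never <= 0
```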

Example 2.2 Let

    f(x, y) = { 0 if x ≥ 0 and |y| ≤ √x,
                1 otherwise.

Then f is proper, lsc, and d = (1, 0) is the only AND of f. f recedes below 0 along d on dom f; however, f does not retract along d. In particular, the sequence {(k, √k)} is a US associated with (1, 0) for which the relation (3) does not hold for any k̄.

The notion of function retraction is closely related to the class F of functions defined in Ref. 2, Definition 7, also called asymptotically level stable functions in Ref. 3, p. 94. Specifically, a proper lsc function f : ℝn → (−∞, ∞] belongs to F if, for any α > 0, any convergent sequence {εk} ⊂ ℝ, any sequence {xk} ⊆ ℝn, and any d ∈ ℝn satisfying

    xk ∈ levf (εk),  ‖xk‖ → ∞,  xk/‖xk‖ → d,  f∞(d) = 0,

there exists k̄ such that

    xk − αd ∈ levf (εk)  ∀ k ≥ k̄.

Various examples of functions in F, such as convex piecewise linear-quadratic functions and asymptotically linear functions, are given in Ref. 2. The following lemma shows that f ∈ F implies f retracts along any AND whose horizon function value is 0.

Lemma 2.1 If f ∈ F, then, for any AND d of f, either f retracts along d or f∞(d) < 0.


Proof. Fix any AND d of f . Then d 6= 0 and f∞(d) ≤ 0. If f∞(d) < 0,the proof is complete. Suppose that f∞(d) = 0. For any US {xk} associatedwith d, let εk = max{0, f(xk)}. Then xk ∈ levf (ε

k) for all k and εk → 0, sof ∈ F implies there exists k such that, for all k ≥ k, xk − d ∈ levf (ε

k) or,equivalently, f(xk − d) ≤ max{0, f(xk)}.

In general, f∞(d) < 0 does not imply that f retracts along d. In his earlier work (Ref. 7, p. 777 and Ref. 8, Theorem 13), Auslender considered a larger class of functions which, roughly speaking, allows a different α for different k. However, this class is too large to apply to the problem (P), since one needs a common α to work for the objective function and the constraint functions. Hence the class F was introduced in Ref. 2 and, analogously, we introduce the notion of a function retracting along an AND (which uses a common α = 1). We will discuss the class F further in Section 7.

3 Main Existence Result

Define

    L(γ) = levf0 (γ) ∩ C,

where C is given by (1). Below, we prove our main result on the existence of a global minimum of (P). This amounts to showing that L(γ) ≠ ∅ for all γ > 0 implies L(0) ≠ ∅. The proof, by induction on r, uses similar ideas as in the proof of Theorem 3 in Ref. 5 and Theorem 1 in Ref. 2 (also see Proposition 1.5.7 in Ref. 4), but we deal with more general functions than convex quadratic or convex functions belonging to F. The following key assumption on the functions will be used:

(A1) (a) For each i ∈ {0, 1, ..., r} and each AND d of fi,
either (i) fi recedes below 0 along d on dom fi,
or (ii) fi retracts along d and, in case r ≠ 0, fi recedes below 0 along d on Ci.

(b) In case r ≠ 0, C0 ⊆ dom fi for i = 1, ..., r.

Proposition 3.1 (Existence of Global Minima of (P)). Suppose that fi : ℝn → (−∞, ∞], i = 0, 1, ..., r, are proper, lsc, and satisfy Assumption A1. Also, suppose that L(γ) ≠ ∅ for all γ > 0. Then L(0) ≠ ∅.


Proof. Take any sequence of scalars {γk}k=1,2,... ↓ 0. By assumption, L(γk) ≠ ∅ for all k. Since fi is lsc for all i, L(γk) is closed. Let

    xk ∈ arg min{‖x‖ | x ∈ L(γk)}.

If {xk} has a convergent subsequence, then the limit point of this subsequence would, by lsc of fi for all i, be in L(0). Thus, it suffices to consider the case of ‖xk‖ → ∞.

Since {xk/‖xk‖} is bounded, it has a subsequence converging to some limit d ≠ 0. By passing to a subsequence if necessary, we can assume that xk/‖xk‖ → d. Since f0(xk) ≤ γk while fi(xk) ≤ 0 for i = 1, ..., r, this implies that d is an AND of fi for i = 0, 1, ..., r. Then, by Assumption A1(a), for each i ∈ {0, 1, ..., r}, either fi recedes below 0 along d on dom fi or fi retracts along d. Moreover, if r ≠ 0, then fi recedes below 0 along d on Ci for i ∈ {0, 1, ..., r}.

If f0 recedes below 0 along d on dom f0, then for any x ∈ D ≠ ∅, we have x ∈ dom f0, so that f0(x + αd) ≤ 0 for all α sufficiently large. In case r ≠ 0, for each i ∈ {1, ..., r} we have x ∈ Ci and hence fi(x + αd) ≤ 0 for all α sufficiently large. Thus, x + αd ∈ L(0) for all α sufficiently large, implying L(0) ≠ ∅. Thus, it remains to consider the case of f0 retracting along d. If fi retracts along d for all i = 1, ..., r, then for each i ∈ {0, 1, ..., r}, there exists k̄i such that

    xk − d ∈ levfi (γk,i)  ∀ k ≥ k̄i,

where γk,0 = γk and γk,i = 0 for i = 1, ..., r. Then, for all k ≥ maxi=0,1,...,r k̄i, the point xk − d would belong to L(γk). Then, by xk/‖xk‖ → d, we would have dT xk/‖xk‖ → ‖d‖2, which implies dT xk → ∞ and hence

    ‖xk − d‖2 = ‖xk‖2 − 2dT xk + 1 < ‖xk‖2

for all k sufficiently large, contradicting xk being an element of L(γk) of least 2-norm. Thus, either L(0) ≠ ∅ or else r ≥ 1 and fī recedes below 0 along d on dom fī for some ī ∈ {1, ..., r}.

We now complete the proof by induction on r. The above argument shows that Proposition 3.1 is true when r = 0. Suppose Proposition 3.1 is true for r = 0, 1, ..., r̄ for some r̄ ≥ 0. Consider r = r̄ + 1. We showed above that either L(0) ≠ ∅ or there exists some AND d of f0, f1, ..., fr such that fī recedes below 0 along d on dom fī for some ī ∈ {1, ..., r}. Let us drop fī from L(γ) to obtain

    L̄(γ) = levf0 (γ) ∩ ( ∩_{i=1,...,r; i ≠ ī} Ci ).

Then L̄(γk) ⊃ L(γk) ≠ ∅ for all k. Also, Assumption A1 is still satisfied upon dropping fī. Thus, by the induction hypothesis, L̄(0) ≠ ∅. For any x ∈ L̄(0), since fi recedes below 0 along d on Ci for all i ∈ {0, 1, ..., r} \ {ī}, we have x + αd ∈ L̄(0) for all α ≥ 0 sufficiently large. Moreover, Assumption A1(b) implies x ∈ levf0 (0) = C0 ⊆ dom fī, so fī receding below 0 along d on dom fī implies fī(x + αd) ≤ 0 for all α sufficiently large, yielding x + αd ∈ L(0).

Proposition 3.1 shows that, when f0, f1, ..., fr satisfy Assumption A1 and the minimum value of (P) is finite, which we assume without loss of generality to be zero, a global minimum of (P) exists. We remark on Proposition 3.1 and its assumptions in more detail below.

(i) It can be seen from its proof that Proposition 3.1 still holds if we relax Assumption A1(b) so that C0 ⊆ dom fi holds only for those i ∈ {1, ..., r} for which there exists some AND d such that fi does not retract along d. For instance, if every constraint function fi is polyhedral, then it can be shown that fi retracts along every AND of fi, so that Assumption A1(b) can be dropped altogether. In general, Assumption A1(b) cannot be dropped, as we will show in Example 4.1.

(ii) Proposition 3.1 still holds if we relax the assumption on f0 within Assumption A1(a) to:

For each AND d of f0, there exists F ⊆ dom f0 such that
(i) f0 recedes below 0 along d on dom f0 \ F,
(ii) f0 retracts along d on F,
(iii) in case r ≠ 0, f0 recedes below 0 along d on C0.

[The assumption on f0 within Assumption A1(a) corresponds to F = ∅ or F = dom f0.] This is because in the proof we can analogously divide into two cases: If D \ F ≠ ∅, then for any x ∈ D \ F we have f0(x + αd) ≤ 0 for all α sufficiently large. Otherwise D ⊆ F, and the proof proceeds as before. This seemingly technical generalization of Proposition 3.1 is needed for Sections 5 and 6.

The following example shows that the assumption on the constraint functions f1, ..., fr cannot be relaxed in the same way.

Example 3.1 Let

    f0(x, y) = { 1/(|y| + 1) − 1/2 if |y| < 1 and x ≥ 0,
                 0 if |y| ≥ 1 and x ≥ 0,
                 ∞ otherwise,

    f1(x, y) = { 0 if |y| ≤ 1 − e−x and x ≥ 0,
                 1 if |y| > 1 − e−x and x ≥ 0,
                 ∞ otherwise.

Then both f0 and f1 are proper, lsc. Any d with ‖d‖ = 1 and d1 ≥ 0 is an AND of f0. It can be seen that, for any AND d of f0, f0 retracts along d, and also f0 recedes below 0 along d on C0. The only AND of f1 is d = (1, 0), and f1 does not recede below 0 along d on dom f1. Moreover, the sequence {(k, 1 − e−k)} is a US associated with d = (1, 0) which does not satisfy (3); hence f1 does not retract along d. But f1 does satisfy the following relaxed assumption (which is analogous to the relaxed assumption on f0): Let F1 = {(x, y) | x ≥ 0, |y| ≥ 1}. Then f1 recedes below 0 along d on dom f1 \ F1, f1 retracts along d on F1, and f1 recedes below 0 along d on C1. Although the point (xk, yk) = (k, 1 − e−k) satisfies f1(xk, yk) = 0 and f0(xk, yk) → 0, there is no solution to f1(x, y) ≤ 0 and f0(x, y) ≤ 0.

(iii) If, for each i = 1, ..., r and each AND d of the constraint function fi, fi retracts along d and recedes below 0 along d on Ci, then Proposition 3.1 still holds if the second part of Assumption A1(a)(ii) on the objective function f0, namely “in case r ≠ 0, f0 recedes below 0 along d on C0”, is dropped. In fact, in this case, (P) can be reduced to the case of r = 0 by working with the extended objective function

    f̄0(x) = { f0(x) if x ∈ D,
              ∞ otherwise.

It can be shown that, for this equivalent problem, Assumption A1(a) is satisfied. This device will be used in the proof of Lemma 6.1.


The following example shows that the second part of Assumption A1(a)(ii) on f0 cannot be dropped in general.

Example 3.2 Let

    f0(x, y) = { 1/(|y| + 1) if x > 0 and |y| < 2√x,
                 0 otherwise,

    f1(x, y) = { 0 if x ≥ 1 and |y| ≤ √x,
                 1 otherwise.

Then both f0 and f1 are proper, lsc. Moreover, any d with ‖d‖ = 1 is an AND of f0, and f0 retracts along d. The only AND of f1 is d = (1, 0), and f1 recedes below 0 along d on dom f1. Also, the point xk = k, yk = √k satisfies f1(xk, yk) = 0 and f0(xk, yk) = 1/(√k + 1) → 0. However, there is no solution to f1(x, y) ≤ 0 and f0(x, y) ≤ 0. The problem is that f0 does not recede below 0 along d = (1, 0) on C0.

If the objective function f0 retracts strongly along each AND d of f0, f1, ..., fr and each constraint function fi retracts along d, then the existence of a global minimum of (P) can be deduced without assuming that its minimum value is finite. This result generalizes an existence result of Auslender (Ref. 7, Theorem 2.4); see Proposition 7.3.

Proposition 3.2 Consider the problem (P). Suppose that D ≠ ∅ and, for each AND d of f0, f1, ..., fr, we have that f0 retracts strongly along d and fi retracts along d for i = 1, ..., r. Then (P) has a global minimum.

Proof. Let γ∗ denote the minimum value of (P), which we assume without loss of generality to be either 0 or −∞. Then, for any sequence γk ↓ γ∗, L(γk) ≠ ∅ for all k. Let xk be an element of L(γk) of minimum 2-norm. We claim that {xk} has a cluster point, which would be a global minimum of (P). We argue this by contradiction. Suppose that ‖xk‖ → ∞. By passing to a subsequence if necessary, we assume that xk/‖xk‖ converges to some d. Then d is an AND of f0, f1, ..., fr. Since f0 retracts strongly along d, f0(xk − d) ≤ f0(xk) ≤ γk for all k. For each i ∈ {1, ..., r}, since fi retracts along d and xk ∈ Ci, fi(xk − d) ≤ max{0, fi(xk)} ≤ 0 for all k sufficiently large. Thus xk − d ∈ L(γk) for all k sufficiently large. But also ‖xk − d‖2 < ‖xk‖2 for all k sufficiently large (see the proof of Proposition 3.1), contradicting xk being of minimum 2-norm.


4 Linearly Receding Functions

In this section, we introduce an important class of functions that satisfy Assumption A1(a). We say that f linearly recedes along d if there exists θ ∈ ℝ (depending on d) such that θ ≤ 0 and

    f(x + αd) = f(x) + αθ  ∀ α ∈ ℝ, ∀ x ∈ dom f.        (4)

We say that f is constant along d if we can take θ = 0 in the above expression. Consider the following assumption on a proper lsc f : ℝn → (−∞, ∞]:

(A2) For any AND d of f, f linearly recedes along d.

Assumption A2 says that f is affine on each line intersecting dom f and parallel to an AND of f. This assumption is weaker than f belonging to the class F1 defined in Ref. 2, Definition 3, namely, f is convex and f linearly recedes along any recession direction of f. [Recall that any AND of f is a recession direction of f.] This assumption is also weaker than f being asymptotically directionally constant (Ref. 3, p. 86), namely, f is constant along any recession direction of f. We will show in Lemma 4.3 that there is a large class of nonconvex functions f that satisfy Assumption A2 but are neither asymptotically directionally constant nor in F1.
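A numerical sketch of linear recession (ours; the particular function is an illustrative instance of the form (2), not one from the paper): take f(x, y) = x4 − x2 − y, i.e., h(u) = u4 − u2 composed with A = [1, 0] plus bT(x, y) with b = (0, −1). Along d = (0, 1) we have Ad = 0 and bT d = −1, so (4) should hold with θ = −1.

```python
def f(x, y):
    # nonconvex instance of the form (2): h(u) = u^4 - u^2 (coercive), b = (0, -1)
    return x**4 - x**2 - y

d = (0.0, 1.0)   # satisfies A d = 0 and b^T d = -1 <= 0
theta = -1.0

# check f(x + alpha*d) = f(x) + alpha*theta at several points and step sizes
checks = []
for (x, y) in [(-2.0, 3.0), (0.5, -1.0), (10.0, 0.0)]:
    for alpha in (0.5, 2.0, 7.0):
        lhs = f(x + alpha * d[0], y + alpha * d[1])
        rhs = f(x, y) + alpha * theta
        checks.append(abs(lhs - rhs) < 1e-9)
print(all(checks))  # True: f linearly recedes along d with theta = -1
```

Note that this f is neither convex nor constant along d, yet it satisfies (4), illustrating why A2 is weaker than membership in F1 or asymptotic directional constancy.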

The following lemma shows that it suffices to verify Assumption A2 for each fi in lieu of Assumption A1(a).

Lemma 4.1 If f0, f1, ..., fr each satisfy Assumption A2, then they satisfy Assumption A1(a).

Proof. Consider any proper lsc f : ℝn → (−∞, ∞] satisfying Assumption A2. Fix any d ∈ ℝn that is an AND of f. By Assumption A2, f linearly recedes along d, i.e., there exists θ ∈ ℝ such that θ ≤ 0 and (4) holds. If θ < 0, then for any x ∈ dom f, we have from (4) that f(x + αd) → −∞ as α → ∞. Thus f recedes below 0 along d on dom f. If θ = 0, then for any x ∈ dom f, we have from (4) that f(x − αd) = f(x) for all α ∈ ℝ. It follows that f retracts along d and f recedes below 0 along d on levf (0).

Applying the above result to f0, f1, ..., fr, we see that they satisfy Assumption A1(a).

Using Lemma 4.1, we give below an example showing that Proposition 3.1 is false if Assumption A1(b) is dropped.


Example 4.1 Define

    φ(t) = { − log t − log(1 − t) if 0 < t < 1,
             ∞ otherwise.

Let

    f0(x1, x2) = x1,  f1(x1, x2) = φ(x1) − x2.

Then f0 is linear, so f0 satisfies Assumption A2. Since φ is lsc, f1 is lsc and, in fact, convex. Since φ has bounded effective domain, it is not difficult to see that, for any d = (d1, d2) ∈ ℝ2 that is an AND of f1, we have d1 = 0 and d2 ≥ 0. Then f1 linearly recedes along d. Thus f0, f1 each satisfy Assumption A2. By Lemma 4.1, they satisfy Assumption A1(a).

As x1 → 0+ with x2 = φ(x1), we have that f0(x1, x2) → 0 and f1(x1, x2) ≤ 0. However, there is no x ∈ ℝ2 satisfying f0(x) = 0 and f1(x) ≤ 0 (since f1(x) ≤ 0 implies x1 > 0). Here, the problem is that dom f1 is not contained in the 0-level set of f0 (in fact, the two sets are disjoint), thus violating Assumption A1(b).4

The following lemma shows that the assumption θ ≤ 0, implicit in Assumption A2, is redundant whenever f does not tend to −∞ at a superlinear rate. In particular, convex functions have the latter property.

Lemma 4.2 Suppose that f : ℝn → (−∞, ∞] is proper and

    f∞(d) > −∞  ∀ d ∈ ℝn.        (5)

If an AND d of f and θ ∈ ℝ satisfy (4), then θ ≤ 0.

Proof. Fix any AND d of f and θ ∈ ℝ that satisfy (4). If θ = 0, then clearly θ ≤ 0. If θ ≠ 0, then fix any x ∈ dom f, which exists since f is proper. Since d is an AND of f, there exists an associated US {xk}. For each k, let tk = ‖xk − x‖ and dk = (xk − x)/‖xk − x‖. Then dk → d and

    xk = yk + tk d,  with yk = x + tk(dk − d).

4 We can make these two sets have nonempty intersection by redefining φ(t) to have the value 1 (instead of ∞) for all t ≤ −1, say. However, the resulting f1 is no longer convex. If f0, f1, ..., fr are convex, can Assumption A1(b) be relaxed to C0 ∩ (∩ri=1 dom fi) ≠ ∅? This is an open question.


For each k, since xk ∈ dom f, (4) implies yk ∈ dom f and

    f(xk) = f(yk + tk d) = f(yk) + tk θ.

Thus

    θ = (f(xk) − f(yk))/tk.        (6)

Notice that {f(xk)} is bounded above and, by ‖dk − d‖ → 0 and f satisfying (5), we have

    lim inf_{k→∞} f(yk)/tk = lim inf_{k→∞, dk≠d} [f(x + tk(dk − d)) / (tk ‖dk − d‖)] ‖dk − d‖ ≥ 0.

If dk = d for k along a subsequence, then yk = x, so f(yk)/tk → 0 along this subsequence. Then the limit superior of the right-hand side of (6) as k → ∞ is nonpositive, implying θ ≤ 0.

If f : ℝn → (−∞, ∞] is convex and infx f(x) ≤ 0, then we saw in Section 2 that

    d is an AND of f ⇔ d ∈ Rf.

In this case, the assumption that f is constant along any AND d of f (i.e., θ can be taken equal to 0 in (4)) is equivalent to Rf ⊆ Lf, where Lf = Rf ∩ (−Rf) denotes the constancy space of f. Since Lf ⊆ Rf, this is equivalent to

    Rf = Lf.

Similarly, it can be seen that f linearly recedes along d at rate θ if and only if (d, θ) ∈ Lepi f, where epi f denotes the epigraph of f and LS = S∞ ∩ (−S∞) denotes the lineality space of a set S ⊆ ℝn+1. Thus, Assumption A2 is equivalent to

    Rf ⊆ Projℝn Lepi f,

where Projℝn (x, ζ) = x for all (x, ζ) ∈ ℝn × ℝ. Some related conditions are given in Ref. 4, Proposition 2.3.4.

We now give examples of functions that satisfy Assumption A2. Consider the following assumption on f.

(A3) f(x) = h(Ax) + bT x + c, where A ∈ ℝm×n, b ∈ ℝn, c ∈ ℝ, and h : ℝm → (−∞, ∞] is a function satisfying (5) and

    either (i) b = 0, lim inf_{‖y‖→∞} h(y) = ∞, or (ii) b ≠ 0, lim inf_{‖y‖→∞} h(y)/‖y‖ = ∞.


Assumption A3 is satisfied by, for example, convex quadratic functions and the function g(Ax − b) in Ref. 8, p. 13, with g coercive (i.e., g∞(d) > 0 for all d ≠ 0). We show below that Assumption A3 implies Assumption A2.

Lemma 4.3 Let f : ℝn → (−∞, ∞] be a proper lsc function that satisfies Assumption A3. Then f satisfies Assumption A2.

Proof. First, we claim that

    d is an AND of f ⇒ Ad = 0, bT d ≤ 0.        (7)

To see this, fix any AND d of f and let {xk} be an associated US.

If b = 0, then f(xk) = h(Axk) + c is bounded above, so that ‖Axk‖ is bounded. Hence ‖Axk‖/‖xk‖ → 0, implying Ad = 0. Since b = 0, we also have bT d ≤ 0.

If b ≠ 0, then we have

    lim sup_{k→∞} [h(Axk)/‖xk‖ + (bT xk + c)/‖xk‖] = lim sup_{k→∞} f(xk)/‖xk‖ ≤ 0.

Thus h(Axk)/‖xk‖ is bounded from above. If ‖Axk‖ → ∞ along a subsequence, then boundedness from above of

    h(Axk)/‖xk‖ = [h(Axk)/‖Axk‖] · [‖Axk‖/‖xk‖]

and h(Axk)/‖Axk‖ → ∞ imply ‖Axk‖/‖xk‖ → 0 along this subsequence. If ‖Axk‖ is bounded along a subsequence, then ‖xk‖ → ∞ implies ‖Axk‖/‖xk‖ → 0 along this subsequence. In either case, we see that ‖Axk‖/‖xk‖ → 0 and hence Ad = 0. Also, we have from {f(xk)} being bounded above (by, say, κ) that

    bT xk/‖xk‖ ≤ −h(Axk)/‖xk‖ + (κ − c)/‖xk‖,

which, together with

    lim inf_{k→∞} h(Axk)/‖xk‖ ≥ 0 (see footnote 5),

5 Why? If ‖Axk‖ is bounded along a subsequence, then h(Axk) is bounded below along this subsequence (since h is lsc and nowhere equal to −∞) and hence h(Axk)/‖xk‖ → 0. If ‖Axk‖ → ∞ along a subsequence, then the assumption of h∞(d) > −∞ for all d ∈ ℝm implies h(Axk)/‖Axk‖ is bounded below along this subsequence, which together with ‖Axk‖/‖xk‖ → 0 implies lim inf_{k→∞} h(Axk)/‖xk‖ = lim inf_{k→∞} [h(Axk)/‖Axk‖] · [‖Axk‖/‖xk‖] ≥ 0.


yields

    lim sup_{k→∞} bT xk/‖xk‖ ≤ 0,

so that bT d ≤ 0. Thus d satisfies

    Ad = 0,  bT d ≤ 0.

It follows from f(x) = h(Ax) + bT x + c and (7) that f satisfies Assumption A2.

Thus, if each of f0, f1, ..., fr satisfies Assumption A3, then it satisfies Assumption A2. By Lemma 4.1, they collectively satisfy Assumption A1(a). So, if they in addition satisfy Assumption A1(b), then Proposition 3.1 yields that a global minimum of (P) exists. To our knowledge, this existence result is new.
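The implication (7) can be observed numerically on a convex quadratic of the form (2) (our illustrative instance, not from the paper): with h = ‖·‖2, take A of rank 1 and a sequence growing along the null direction of A; the sequence is a US, and the limit direction indeed satisfies Ad = 0 and bT d ≤ 0.

```python
import numpy as np

A = np.array([[1.0, 1.0]])     # rank 1: null space spanned by (1, -1)/sqrt(2)
b = np.array([-1.0, 0.0])
c = 2.0
f = lambda x: float(np.sum((A @ x)**2) + b @ x + c)  # form (2) with h = ||.||^2

d = np.array([1.0, -1.0]) / np.sqrt(2.0)
xs = [100.0 * k * d for k in range(1, 6)]  # unbounded sequence along d

print([round(f(x), 6) for x in xs])  # f(xk) = b^T xk + c decreases to -inf: d is an AND
print(float(np.linalg.norm(A @ d)), float(b @ d))  # A d = 0 and b^T d < 0, as (7) predicts
```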

5 Quasiconvex Polynomial Functions over Polyhedral Sets

Consider the following polynomial assumption on f, generalizing the quadratic assumption of Luo and Zhang (Ref. 5, Section 4).

(A4) f(x) = { g(x) if x ∈ X,
              ∞ otherwise,

with g : ℝn → ℝ a polynomial function and

    X = {x | Bx ≤ b},

for some B ∈ ℝm×n, b ∈ ℝm. Moreover, g is assumed to be quasiconvex on X (Ref. 12, Chap. 9).

Any convex polynomial f satisfies this assumption. A nonconvex example is g(x1, x2) = x1x2, X = [0, ∞) × (−∞, 0]. The following lemma shows that f satisfies the relaxed assumption in Remark (ii) following Proposition 3.1.
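The nonconvex example above can be probed numerically (our sketch): random convex combinations within X never violate the quasiconvexity inequality g(tx + (1 − t)y) ≤ max{g(x), g(y)}, while a single midpoint check exhibits the failure of convexity.

```python
import random

random.seed(0)
g = lambda x: x[0] * x[1]   # g(x1, x2) = x1 * x2 on X = [0, inf) x (-inf, 0]

def quasiconvex_on_X(trials=2000):
    # sampled check of g(t*x + (1-t)*y) <= max(g(x), g(y)) for x, y in X
    for _ in range(trials):
        x = (random.uniform(0, 10), random.uniform(-10, 0))
        y = (random.uniform(0, 10), random.uniform(-10, 0))
        t = random.random()
        z = (t * x[0] + (1 - t) * y[0], t * x[1] + (1 - t) * y[1])
        if g(z) > max(g(x), g(y)) + 1e-9:
            return False
    return True

print(quasiconvex_on_X())  # True: no sampled violation of quasiconvexity on X

# g is not convex on X: at the midpoint of (0, 0) and (2, -2),
# g(1, -1) = -1 lies strictly above the chord value -2
nonconvex_witness = g((1.0, -1.0)) > 0.5 * (g((0.0, 0.0)) + g((2.0, -2.0)))
print(nonconvex_witness)  # True
```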


15

Page 18: Existence of Global Minima for Constrained Optimization 1 ......global minima for many different classes of nonconvex and convex programs. In particular, we consider nonconvex programs
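The nonconvex example above can be spot-checked numerically. The sketch below (an illustration, not part of the paper's argument) samples point pairs in X = [0,∞) × (−∞, 0] and verifies the quasiconvexity inequality g(λx + (1−λ)y) ≤ max{g(x), g(y)}, then exhibits a pair violating convexity.

```python
import random

def g(x):
    # The paper's nonconvex example: g(x1, x2) = x1 * x2.
    return x[0] * x[1]

def combo(x, y, lam):
    # Convex combination lam*x + (1 - lam)*y.
    return (lam * x[0] + (1 - lam) * y[0], lam * x[1] + (1 - lam) * y[1])

random.seed(0)
for _ in range(10000):
    # Sample x, y in X = [0, inf) x (-inf, 0] and a mixing weight lam.
    x = (random.uniform(0, 10), random.uniform(-10, 0))
    y = (random.uniform(0, 10), random.uniform(-10, 0))
    lam = random.random()
    # Quasiconvexity on X: g at a convex combination never exceeds the max.
    assert g(combo(x, y, lam)) <= max(g(x), g(y)) + 1e-9

# Convexity fails on X: the midpoint of (2, -2) and (0, 0) has
# g = -1, which exceeds the average value (-4 + 0)/2 = -2.
x, y = (2.0, -2.0), (0.0, 0.0)
mid = combo(x, y, 0.5)
assert g(mid) > 0.5 * g(x) + 0.5 * g(y)
```

The check passes because, on X, the level set {g ≤ c} ∩ X is the convex region above a hyperbola for c < 0 and all of X for c ≥ 0.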

Lemma 5.1 Let f : <n → (−∞,∞] satisfy Assumption A4. Then, for any AND d of f, there exists F ⊆ X such that (i) f recedes below 0 along d on X \ F, (ii) f retracts along d on F, (iii) f recedes below 0 along d on levf(0).

Proof. Fix any AND d of f. Let {xk} be a US associated with d. Let

γk = supℓ≥k max{0, f(xℓ)} ∀ k.

Then {γk} ↓ 0 and, for each k, xℓ ∈ levf(γk) for all ℓ ≥ k. Since g is lsc quasiconvex on X, the level sets of f are closed convex. Since levf(γk) is closed convex and xℓ/‖xℓ‖ → d, d is in the horizon cone of levf(γk) (Ref. 13). Thus, d is in the intersection of the horizon cones of levf(γk) for all k. Since the horizon cones of levf(γk), k = 1, 2, ..., are nested, this implies that d is in the horizon cone of levf(γ) for all γ > 0.

Fix any x ∈ X. Then x ∈ levf(γ) for some γ > 0, so that x + αd ∈ levf(γ) for all α ≥ 0. Thus, B(x + αd) ≤ b and g(x + αd) ≤ γ for all α ≥ 0. The former implies Bd ≤ 0, i.e., d ∈ X∞. Since g(x + αd) is a polynomial function of α, the latter implies that

either g(x + αd) → −∞ as α → ∞ or g(x + αd) = g(x) ∀ α ∈ <. (8)

Define

F = {x ∈ X | g(x + αd) = g(x) ∀ α ∈ <}.

Then, for any x ∈ X \ F, we have from Bd ≤ 0 that x + αd ∈ X for all α ≥ 0, so that (8) yields

f(x + αd) = g(x + αd) → −∞ as α → ∞.

This shows that f recedes below 0 along d on X \ F. Fix any US {xk} ⊆ F associated with d. For each i ∈ {1, ..., m}, let Bi denote the ith row of B and bi the ith component of b. If Bid = 0, then Bi(xk − d) = Bixk ≤ bi for all k. If Bid < 0, then it follows from Bixk/‖xk‖ → Bid and ‖xk‖ → ∞ that there exists ki such that

Bi(xk − d) ≤ bi ∀ k ≥ ki.

Let k̄ = max{i | Bid<0} ki. Since xk ∈ F, this yields

f(xk − d) = g(xk − d) = g(xk) = f(xk) ∀ k ≥ k̄,


implying that f retracts along d on F. By Bd ≤ 0 and (8), for any x ∈ levf(0), there exists ᾱ ≥ 0 such that

x + αd ∈ X, g(x + αd) ≤ g(x) ≤ 0 ∀ α ≥ ᾱ.

Thus f(x + αd) ≤ 0 for all α ≥ ᾱ, so f recedes below 0 along d on levf(0).

We note that the above proof generalizes to any continuous function g that is either constant or tends to ∞ or −∞ on each line. In the case where g is quadratic, i.e., g(x) = (1/2)xT Qx + qT x for some symmetric Q ∈ <n×n and q ∈ <n, it can be seen that, for any AND d of f, either dT Qd < 0, in which case F = ∅, or else dT Qd = 0 and (Qx + q)T d ≤ 0, in which case F is the set of maxima of the linear program

max (Qx + q)T d s.t. x ∈ X.

Thus, F is a face of X in this case. In the special case where f is a convex polynomial function on <n, Lemma 5.1 can be further sharpened, as is shown below.
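In the quadratic case just discussed, when dT Qd = 0 the map α ↦ g(x + αd) is affine with slope (Qx + q)T d. The sketch below verifies this on hypothetical data (Q, q, d, x are my own choices, not from the paper); here the slope is negative for every x, which corresponds to the case F = ∅.

```python
# Illustrative sketch: for quadratic g(x) = 0.5 x^T Q x + q^T x and a
# direction d with d^T Q d = 0, alpha -> g(x + alpha d) is affine with
# slope (Qx + q)^T d.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def g(x, Q, q):
    Qx = [dot(row, x) for row in Q]
    return 0.5 * dot(x, Qx) + dot(q, x)

# Hypothetical data: Q = diag(1, 0), q = (0, -1), d = (0, 1).
Q = [[1.0, 0.0], [0.0, 0.0]]
q = [0.0, -1.0]
d = [0.0, 1.0]
assert dot(d, [dot(row, d) for row in Q]) == 0.0  # d^T Q d = 0

x = [3.0, 5.0]
slope = dot([dot(row, x) for row in Q], d) + dot(q, d)  # (Qx + q)^T d
for alpha in [0.0, 1.0, 2.5, 10.0]:
    shifted = [x[0] + alpha * d[0], x[1] + alpha * d[1]]
    # Affine behavior along d: g(x + alpha d) = g(x) + alpha * slope.
    assert abs(g(shifted, Q, q) - (g(x, Q, q) + alpha * slope)) < 1e-9
```

Since (Qx + q)T d = −1 < 0 for every x here, g recedes to −∞ along d at every point, matching the lemma's case (i).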

Lemma 5.2 Let f : <n → < be a convex polynomial function. Then, for any AND d of f, either (i) f recedes below 0 along d on <n or (ii) f retracts along d on <n and f recedes below 0 along d on levf(0).

Proof. Fix any AND d of f. Then d is a recession direction of f. Since f is convex, this implies that, for any x ∈ <n, f(x + αd) ≤ f(x) for all α ≥ 0 (Refs. 4 and 13). Since f(x + αd) is a polynomial function of α, this implies that

either limα→∞ f(x + αd) = −∞ or f(x + αd) = f(x) ∀ α ∈ <.

If limα→∞ f(x + αd) = −∞ for every x ∈ <n, then clearly f recedes below 0 along d on <n. Thus, it remains to consider the case where, for some x ∈ <n, f(x + αd) = f(x) for all α ∈ <. Since f is convex, this implies d ∈ Lf and hence, for every x ∈ <n,

f(x + αd) = f(x) ∀ α ∈ <.

Thus, f retracts along d on <n and f recedes below 0 along d on levf(0) (which may be empty).
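The dichotomy in Lemma 5.2 can be seen on two small convex polynomials (hypothetical examples of my own, not from the paper), both with recession direction d = (0, 1):

```python
# Two convex polynomials illustrating the dichotomy of Lemma 5.2
# along the recession direction d = (0, 1).

def f_recede(x):
    # Convex; f(x + alpha d) = f(x) - alpha -> -inf as alpha -> inf.
    return x[0] ** 2 - x[1]

def f_const(x):
    # Convex; constant along d on all of R^2.
    return x[0] ** 2

x = (1.0, 0.0)
vals_recede = [f_recede((x[0], x[1] + a)) for a in range(5)]
vals_const = [f_const((x[0], x[1] + a)) for a in range(5)]

# Case (i): values decrease without bound along d.
assert vals_recede == [1.0, 0.0, -1.0, -2.0, -3.0]
# Case (ii): values are constant along d, so f retracts trivially.
assert vals_const == [1.0, 1.0, 1.0, 1.0, 1.0]
```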


Suppose that f0 satisfies Assumption A4 and f1, ..., fr each satisfy Assumption A3. By Lemma 5.1, f0 satisfies the relaxed assumption in Remark (ii) following Proposition 3.1. By Lemmas 4.3 and 4.1, f0, f1, ..., fr satisfy Assumption A1(a), subject to the above modification. Thus, if they also satisfy Assumption A1(b), then by Proposition 3.1 and Remark (ii) following it, (P) has a global minimum whenever its minimum value is finite. This result generalizes Corollary 2 and Theorem 3 in Ref. 5, which further assume that g and f1, ..., fr are quadratic. Even in the special case of f0 being a convex polynomial and f1, ..., fr being convex quadratic, our result appears to be new. Unlike the proof in Ref. 5, our proof does not rely on the canonical form of a quasiconvex quadratic function over a convex set.

Suppose that f0 satisfies Assumption A4 and f1, ..., fr are convex polynomial functions on <n. By Lemma 5.1, f0 satisfies the relaxed assumption in Remark (ii) following Proposition 3.1. By Lemma 5.2, f0, f1, ..., fr satisfy Assumption A1, subject to the above modification. Thus, by Proposition 3.1 and Remark (ii) following it, (P) has a global minimum whenever its minimum value is finite. This result generalizes Theorem 3 in Ref. 6, which further assumes f0 to be convex on <n.

Notice that, because g is quasiconvex only on X, we cannot treat Bx ≤ b as constraints but, rather, must incorporate them into the objective function. We then exploit the fact that a recession direction for a polyhedral set can be retracted from points in the set that are sufficiently far out.

6 The Frank-Wolfe Theorem

In this section we assume that f0 is a quadratic function over a polyhedral set, as was studied by Frank and Wolfe (Ref. 10) and many others; see Refs. 5-6 for more detailed discussions.

(A5) f(x) = { g(x) if x ∈ X; ∞ otherwise },

with g(x) = (1/2)xT Qx + qT x for some symmetric Q ∈ <n×n and q ∈ <n, and

X = {x | Bx ≤ b},

for some B ∈ <m×n, b ∈ <m.

The following lemma shows that f0 satisfies the relaxed assumption in Remark (ii) following Proposition 3.1 for r = 0.


Lemma 6.1 Let f : <n → (−∞,∞] satisfy Assumption A5. Then, for any AND d of f, there exists F ⊆ X such that (i) f recedes below 0 along d on X \ F, (ii) f retracts along d on F.

Proof. Fix any AND d of f. Let {xk} be a US associated with d. Then xk ∈ X for all k and lim supk→∞ g(xk) ≤ 0, so

0 ≥ lim supk→∞ g(xk)/‖xk‖² = lim supk→∞ [(1/2)(xk)T Qxk + qT xk]/‖xk‖² = (1/2) dT Qd.

Thus dT Qd ≤ 0. Also, d ∈ X∞ (see the proof of Lemma 5.1).

Define

F = {x ∈ X | ∇g(x)T d ≥ 0}.

For any x ∈ X \ F, since dT Qd ≤ 0, we have

g(x + αd) = g(x) + α∇g(x)T d + (1/2)α² dT Qd → −∞ as α → ∞.

Since d ∈ X∞, we also have x + αd ∈ X for all α ≥ 0. Thus f recedes below 0 along d on X \ F.

Also, we have using dT Qd ≤ 0 that

g(x − d) = g(x) − ∇g(x)T d + (1/2) dT Qd ≤ g(x) ∀ x ∈ F.

Since X is polyhedral, we have that xk − d ∈ X for all k sufficiently large (see the proof of Lemma 5.1). This implies that if {xk} ⊆ F, then f(xk − d) ≤ f(xk) for all k sufficiently large. Thus, f retracts along d on F.
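The exact second-order expansion used twice in this proof can be spot-checked on hypothetical data (any symmetric Q, q, and d with dT Qd ≤ 0 would do; the instance below is my own):

```python
# Spot-check of the expansion g(x + alpha d) = g(x) + alpha grad^T d
# + 0.5 alpha^2 d^T Q d used in the proof of Lemma 6.1.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(Q, x):
    return [dot(row, x) for row in Q]

def g(x, Q, q):
    return 0.5 * dot(x, matvec(Q, x)) + dot(q, x)

Q = [[1.0, 0.0], [0.0, -2.0]]   # symmetric, indefinite (hypothetical)
q = [1.0, 3.0]
d = [0.0, 1.0]
dQd = dot(d, matvec(Q, d))
assert dQd <= 0.0               # here d^T Q d = -2

x = [2.0, 1.0]
grad = [a + b for a, b in zip(matvec(Q, x), q)]   # grad g(x) = Qx + q

for alpha in [0.5, 1.0, 4.0]:
    lhs = g([x[0] + alpha * d[0], x[1] + alpha * d[1]], Q, q)
    rhs = g(x, Q, q) + alpha * dot(grad, d) + 0.5 * alpha ** 2 * dQd
    assert abs(lhs - rhs) < 1e-9  # the expansion is exact for quadratics

# With d^T Q d < 0, the alpha^2 term dominates and g(x + alpha d) -> -inf.
assert g([x[0] + 100 * d[0], x[1] + 100 * d[1]], Q, q) < -1e3
```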

Suppose that f0 satisfies Assumption A5 and r = 0. By Lemma 6.1, f0 satisfies the relaxed assumption in Remark (ii) following Proposition 3.1. Thus, by Proposition 3.1 and Remark (ii) following it, (P) has a global minimum whenever its minimum value is finite. This is the classical Frank-Wolfe theorem (Ref. 10).

In Section 1 of Ref. 6, it is mentioned that Andronov et al. (Ref. 14) had extended the Frank-Wolfe theorem to the case of a cubic function over a polyhedral set. It is also known that this result does not extend to polynomials of degree 4 or higher (Ref. 10). Can this result be deduced from Proposition 3.1 in the same way as the Frank-Wolfe theorem?
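A standard quartic counterexample from the literature (stated here as an illustration, not taken from this paper) is f(x, y) = x² + (xy − 1)²: its infimum over <2 (a polyhedral set) is 0, yet no minimizer exists, which is why the theorem cannot extend to degree 4.

```python
# f(x, y) = x^2 + (x*y - 1)^2 has infimum 0 over R^2 but attains no minimum.

def f(x, y):
    return x ** 2 + (x * y - 1.0) ** 2

# f is strictly positive everywhere: x = 0 forces f = 1, and x != 0
# forces x^2 > 0.
samples = [(0.0, 7.0), (1.0, 1.0), (-2.0, 0.5), (0.1, 10.0)]
assert all(f(x, y) > 0.0 for x, y in samples)

# But f(1/k, k) = 1/k^2 -> 0: the infimum 0 is approached, never attained.
vals = [f(1.0 / k, float(k)) for k in (1, 10, 100, 1000)]
expected = [1.0, 1e-2, 1e-4, 1e-6]
assert all(abs(v - e) < 1e-12 for v, e in zip(vals, expected))
assert min(vals) > 0.0
```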


7 Further Applications

In this section, we present further applications of Proposition 3.1 and Lemma 4.1 and indicate their connection to existing results.

Following Refs. 2-3, we say that a closed set S ⊆ <n is asymptotically linear if

δS ∈ F,

where δS is the indicator function for S, i.e.,

δS(x) = { 0 if x ∈ S; ∞ otherwise }.

An example of such a set S is the Minkowski sum of a compact set with a finite collection of polyhedral sets. The level set of a convex quadratic function is generally not asymptotically linear. We have as a corollary of Proposition 3.1 and Lemma 2.1 the following refinement of Ref. 2, Theorem 3 and Ref. 3, Corollary 3.4.3.
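To see why a convex quadratic level set is generally not asymptotically linear, consider the hypothetical instance S = {(x1, x2) : x2 ≥ x1²} (the 0-level set of g(x1, x2) = x1² − x2). Boundary points escape to infinity in the horizon direction d = (0, 1), yet subtracting d from them always exits S, so the retraction property behind membership in F fails:

```python
import math

# S = {(x1, x2) : x2 >= x1^2}: d = (0, 1) is a horizon direction of S,
# yet x_k - d leaves S for every boundary point x_k = (k, k^2).

def in_S(p):
    return p[1] >= p[0] ** 2

d = (0.0, 1.0)
for k in range(1, 50):
    xk = (float(k), float(k) ** 2)
    assert in_S(xk)
    norm = math.hypot(*xk)
    unit = (xk[0] / norm, xk[1] / norm)   # x_k / ||x_k|| -> d
    # Subtracting d drops below the parabola: k^2 - 1 < k^2.
    assert not in_S((xk[0] - d[0], xk[1] - d[1]))
assert abs(unit[0]) < 0.05 and unit[1] > 0.99  # direction approaches (0, 1)
```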

Proposition 7.1 Consider any proper lsc f : <n → (−∞,∞] and any closed asymptotically linear set S ⊆ <n such that S ∩ domf ≠ ∅. Suppose that infx∈S f(x) = 0 and

f∞(d) ≥ 0 ∀ d ∈ S∞.

Suppose also that either f ∈ F or f is constant along any AND of f. Then there exists an x∗ ∈ S with f(x∗) = 0.

Proof. Define

f0(x) = f(x) + δS(x) ∀ x ∈ <n.

Then f0 is proper, lsc, and infx f0(x) = 0. Moreover, (f0)∞(d) ≥ f∞(d) for d ∈ S∞ and (f0)∞(d) = ∞ otherwise. Thus, our assumption on f∞(d) implies

(f0)∞(d) ≥ 0 ∀ d ∈ <n. (9)

Also, for any ε ∈ <, x ∈ levf0(ε) if and only if x ∈ levf(ε) and x ∈ S. For any nonzero d ∈ <n, (f0)∞(d) = 0 implies f∞(d) = (δS)∞(d) = 0.

Suppose that f ∈ F. Since f and δS are in F, the above observations show that f0 is in F. Then, by Lemma 2.1, (9), and Proposition 3.1 with r = 0, there exists an x∗ ∈ <n with f0(x∗) = 0.


Suppose instead that f is constant along any AND of f. Fix any AND d of f0 and any associated US {xk}. Then d is an AND of f, so that f is constant along d. Also, xk ∈ S for all k, implying (δS)∞(d) = 0. Since δS ∈ F, this implies that there exists a k̄ such that xk − d ∈ S for all k ≥ k̄. Also, f(xk − d) = f(xk) for all k. Thus f0(xk − d) = f0(xk) for all k ≥ k̄. This shows that f0 retracts along any AND of f0. By Proposition 3.1 with r = 0, there exists an x∗ ∈ <n with f0(x∗) = 0.

If S is further assumed to be asymptotically multipolyhedral, then instead of f being constant along any AND of f, it suffices that f linearly recedes along any AND of f (see Section 4). The next lemma shows that if f0, f1, ..., fr are convex and in F, then they satisfy Assumption A1(a).

Lemma 7.1 Consider any proper lsc f : <n → (−∞,∞] such that f ∈ F and f is convex. Then, for each AND d of f,

either (i) f recedes below 0 along d on domf

or (ii) f retracts along d and f recedes below 0 along d on levf (0).

Proof. Fix any AND d of f. Then f∞(d) ≤ 0. Since f is convex, this implies that f recedes below 0 along d on levf(0). If f∞(d) < 0, then the convexity of f implies that f recedes below 0 along d on domf. If f∞(d) = 0, then Lemma 2.1 implies that f retracts along d.

Lemma 7.1 and Proposition 3.1 together yield the following existence result of Auslender (Ref. 2, Theorem 2).

Proposition 7.2 Suppose that fi : <n → (−∞,∞], i = 0, 1, ..., r, are proper, lsc, convex, and belong to F. Also, suppose that domf0 = domfi, i = 1, ..., r, and L(γ) ≠ ∅ for all γ > 0. Then L(0) ≠ ∅.

If a proper lsc function f : <n → (−∞,∞] satisfies Assumption A2 and (5), then the indicator of its 0-level set,

f◦(x) = { 0 if f(x) ≤ 0; ∞ otherwise },

satisfies (5) (since (f◦)∞(d) ≥ 0 for all d ∈ <n) but may not satisfy Assumption A2. In particular, take f(x) = ‖Ax‖² + bT x for some A, b with b ≠ 0. Then, we know from Section 4 that f satisfies Assumption A2 and (5). However, for any d ∈ <n such that Ad = 0 and bT d < 0, it is easily seen that d is an AND of f◦, but f◦ does not linearly recede along d (since f◦(x + αd) = 0 for all α ≥ 0 while f◦(x + αd) = ∞ for all α sufficiently negative). Thus, we cannot incorporate constraints into the objective by means of an indicator function and still satisfy Assumption A2 for the objective function f0.
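A concrete instance of this behavior (my own hypothetical choice of A and b, consistent with the conditions Ad = 0 and bT d < 0 above) is A = [1, 0] and b = (0, −1), so f(x) = x1² − x2 and d = (0, 1):

```python
import math

# f(x) = (x1)^2 - x2 arises from A = [[1, 0]], b = (0, -1); the direction
# d = (0, 1) satisfies Ad = 0 and b^T d = -1 < 0.

def f(x):
    return x[0] ** 2 - x[1]

def f_circ(x):
    # Indicator of the 0-level set of f.
    return 0.0 if f(x) <= 0.0 else math.inf

d = (0.0, 1.0)
x = (1.0, 1.0)          # boundary point: f(x) = 0
for alpha in (0.0, 1.0, 5.0):
    # Moving forward along d stays in the level set ...
    assert f_circ((x[0] + alpha * d[0], x[1] + alpha * d[1])) == 0.0
for alpha in (-0.5, -2.0):
    # ... but any backward move exits it, so f_circ cannot linearly
    # recede along d (that would require finiteness for all alpha).
    assert f_circ((x[0] + alpha * d[0], x[1] + alpha * d[1])) == math.inf
```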

In Ref. 7, Theorem 2.4 (also see Ref. 8, Theorem 21), Auslender considered the following problem

min g(x) s.t. x ∈ X, (10)

where X is an asymptotically multipolyhedral set in <n and g : <n →(−∞,∞] is a proper lsc function. Auslender showed that if

domg ∩ X ≠ ∅, g∞(d) ≥ 0 ∀ d ∈ X∞, (11)

and either (i) g is constant along each d ∈ X∞ ∩ Rg or (ii) epig is an asymptotically multipolyhedral set or (iii) g is weakly coercive, then (10) has a global minimum. Notice that (iii) implies (i). This is because g being weakly coercive means that g∞(d) ≥ 0 for all d ∈ <n and, for any d ∈ Rg, g is constant along d.

Auslender's existence result generalizes one of Rockafellar (Ref. 13, Theorem 27.3) for the case where X is a polyhedral set, g is convex, and g is constant along each d ∈ X∞ ∩ Rg. In the case where g and X are convex, it can be seen that (10) having a finite minimum value implies (11), but not conversely (e.g., g(x) = −log(x) if x > 0 and g(x) = ∞ otherwise satisfies g∞(d) ≥ 0 for all d ∈ <). Thus, in this case, assumption (11) is weaker than assuming that (10) has a finite minimum value. In general, neither implies the other. We show below that Auslender's result may be viewed as a special case of Proposition 3.2.
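The −log example above can be checked numerically: g(t)/t → 0 as t → ∞ (so the recession function vanishes on d > 0 and (11) holds), while g(t) itself decreases without bound (so the minimum value is not finite).

```python
import math

# g(x) = -log(x) for x > 0: g_inf(d) = lim_{t->inf} g(td)/t = 0 for d > 0,
# yet inf g = -inf.

def g(x):
    return -math.log(x)

ratios = [g(t) / t for t in (1e2, 1e4, 1e8)]
values = [g(t) for t in (1e2, 1e4, 1e8)]

# The slope at infinity is zero (hence nonnegative, as (11) requires) ...
assert all(abs(r) < 0.1 for r in ratios)
assert abs(ratios[-1]) < 1e-6
# ... while g itself decreases without bound.
assert values[0] > values[1] > values[2]
assert values[2] < -18.0
```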

Proposition 7.3 Consider the problem (10). Suppose that (11) holds and either (i) g linearly recedes along each d ∈ X∞ ∩ Rg or (ii) epig is an asymptotically multipolyhedral set. Then (10) may be reformulated in the form of (P) while satisfying the assumptions of Proposition 3.2. Hence (10) has a global minimum.

Proof. Suppose that g linearly recedes along each d ∈ X∞ ∩ Rg. Since X is asymptotically multipolyhedral, X = S + K, where S is a compact set in <n and K = K1 ∪ ... ∪ Kℓ with each Kj being a polyhedral cone. Then, x ∈ X if and only if x = x1 + x2 with x1 ∈ S and x2 ∈ K, so (10) is equivalent to

min f0(x1, x2) s.t. f1(x1, x2) ≤ 0,


where

f0(x1, x2) = { g(x1 + x2) if x1 ∈ S; ∞ otherwise },

f1(x1, x2) = { 0 if x2 ∈ K; ∞ otherwise }.

Notice that f0 and f1 are proper and lsc. We now verify that this problem satisfies the assumptions of Proposition 3.2, and hence has a global minimum. By (11), domg ∩ X ≠ ∅, so that D = domf0 ∩ C1 ≠ ∅. Fix any AND (d1, d2) of f0, f1. Then this is a recession direction of fi, i.e., (fi)∞(d1, d2) ≤ 0, for i = 0, 1. We have

(f0)∞(d1, d2) = lim infd′1→d1, d′2→d2, t→∞ f0(td′1, td′2)/t

= { lim infd′1→0, d′2→d2, t→∞ g(t(d′1 + d′2))/t if d1 = 0; ∞ otherwise }

= { g∞(d2) if d1 = 0; ∞ otherwise }.

Similarly,

(f1)∞(d1, d2) = { 0 if d2 ∈ K; ∞ otherwise }.

This implies d1 = 0, d2 ∈ K, and g∞(d2) ≤ 0. Thus, d2 ∈ X∞ ∩ Rg and our assumption on g implies g∞(d2) ≥ 0 and that g linearly recedes along d2. The latter implies that there exists θ ≤ 0 such that

g(x + αd2) = g(x) + αθ ∀ α ∈ <, ∀ x ∈ domg.

Since g∞(d2) ≥ 0, then θ = 0. This and d1 = 0 imply that

f0((x1, x2) + α(d1, d2)) = g(x1 + x2 + αd2) = g(x1 + x2) = f0(x1, x2) ∀ α ∈ <, ∀ (x1, x2) ∈ domf0.

Thus f0 is constant (and hence retracts strongly) along (d1, d2). Since (d1, d2) is an AND of f1, there exists a US {(xk1, xk2)} such that xk2 ∈ K for all k, ‖(xk1, xk2)‖ → ∞, and (xk1, xk2)/‖(xk1, xk2)‖ → (d1, d2). Since d1 = 0, then ‖xk2‖ → ∞ and xk2/‖xk2‖ → d2. Since d2 ∈ K and K is the union of polyhedral cones, there exists k̄ ≥ 0 such that

xk2 − d2 ∈ K ∀ k ≥ k̄

(see the proof of Lemma 5.1). Then

f1((xk1, xk2) − (d1, d2)) = f1(xk1, xk2 − d2) = 0 ∀ k ≥ k̄.

Thus f1 retracts along (d1, d2).

Suppose that epig is an asymptotically multipolyhedral set. Then (10) is equivalent to

min h(x, µ) s.t. (x, µ) ∈ Y,

where h(x, µ) = µ and Y = epig ∩ (X × <). The assumption (11) implies domh ∩ Y ≠ ∅ and h∞(d, δ) = δ ≥ 0 for all (d, δ) ∈ Y∞. Also, δ ≤ 0 for any (d, δ) ∈ Rh, which implies δ = 0, so that h is constant along (d, δ). It can be shown that the intersection of two asymptotically multipolyhedral sets is also asymptotically multipolyhedral. Thus case (ii) reduces to case (i).

References

1. AUSLENDER, A., COMINETTI, R., and CROUZEIX, J. P., Convex Functions with Unbounded Level Sets and Applications to Duality Theory, SIAM Journal on Optimization, Vol. 3, pp. 669-687, 1993.

2. AUSLENDER, A., Existence of Optimal Solutions and Duality Results under Weak Conditions, Mathematical Programming, Vol. 88, pp. 45-59, 2000.

3. AUSLENDER, A. and TEBOULLE, M., Asymptotic Cones and Functions in Optimization and Variational Inequalities, Springer-Verlag, New York, NY, 2003.

4. BERTSEKAS, D. P. with NEDIC, A. and OZDAGLAR, A. E., Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003.

5. LUO, Z.-Q. and ZHANG, S., On the Extension of Frank-Wolfe Theorem, Computational Optimization and Applications, Vol. 13, pp. 87-110, 1999.

6. BELOUSOV, E. G. and KLATTE, D., A Frank-Wolfe Type Theorem for Convex Polynomial Programs, Computational Optimization and Applications, Vol. 22, pp. 37-48, 2002.

7. AUSLENDER, A., Noncoercive Optimization Problems, Mathematics of Operations Research, Vol. 21, pp. 769-782, 1996.

8. AUSLENDER, A., How to Deal with the Unbounded in Optimization: Theory and Algorithms, Mathematical Programming, Vol. 79, pp. 3-18, 1997.

9. BAIOCCHI, C., BUTTAZZO, G., GASTALDI, F., and TOMARELLI, F., General Existence Theorems for Unilateral Problems in Continuum Mechanics, Archive for Rational Mechanics and Analysis, Vol. 100, pp. 149-189, 1988.

10. FRANK, M. and WOLFE, P., An Algorithm for Quadratic Programming, Naval Research Logistics Quarterly, Vol. 3, pp. 95-110, 1956.

11. ROCKAFELLAR, R. T. and WETS, R. J.-B., Variational Analysis, Springer-Verlag, New York, NY, 1998.

12. MANGASARIAN, O. L., Nonlinear Programming, McGraw-Hill, New York, NY, 1969.

13. ROCKAFELLAR, R. T., Convex Analysis, Princeton University Press, Princeton, NJ, 1970.

14. ANDRONOV, V. G., BELOUSOV, E. G., and SHIRONIN, V. M., On Solvability of the Problem of Polynomial Programming, Izvestija Akademii Nauk SSSR, Tekhnicheskaja Kibernetika, No. 4, pp. 194-197, 1982 (in Russian). English translation in News of the Academy of Science of USSR, Department of Technical Sciences, Technical Cybernetics, No. 4, pp. 194-197, 1982.
