
J Optim Theory Appl. DOI 10.1007/s10957-013-0377-6

Necessary and Sufficient Conditions for Efficiency Via Convexificators

Do Van Luu

Received: 27 January 2012 / Accepted: 17 July 2013. © Springer Science+Business Media New York 2013

Abstract Based on the extended Ljusternik Theorem by Jiménez and Novo, necessary conditions for a weak Pareto minimum of multiobjective programming problems involving inequality, equality and set constraints are established in terms of convexificators. Under assumptions on generalized convexity, these necessary conditions become sufficient conditions.

Keywords Fritz John and Karush–Kuhn–Tucker necessary and sufficient conditions for weak Pareto minimum · Upper and lower convexificators · Upper regular convexificators · Quasiconvex functions · Scalarly K-asymptotic pseudoconvex functions

1 Introduction

The notion of convexificator has provided good calculus rules for establishing necessary optimality conditions in nonsmooth optimization. In 1994, Demyanov [1] introduced the notion of a convex and compact convexificator, which was further developed by Demyanov and Jeyakumar (see [2]). This notion is a generalization of the notions of upper convex and lower concave approximations (see [3]). The notions of nonconvex closed convexificator and approximate Jacobian were introduced by Jeyakumar and Luc in [4] and [5], respectively. Making use of these notions, different Lagrange multiplier rules for efficiency were established (see, e.g., [5–12], and references therein). The notion of convexificator is a generalization of some notions of known subdifferentials, such as the subdifferentials of Clarke [13], Michel-Penot

Communicated by Vladimir F. Dem’yanov.

D. Van Luu, Institute of Mathematics, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet Road, 10307 Hanoi, Vietnam. e-mail: [email protected]


[14], and Mordukhovich [15]. A Fritz John type necessary condition for a weak minimum of constrained multiobjective programming without set constraint via approximate Jacobians was derived by Luc [7]. Necessary optimality conditions of Fritz John type for weak minima in terms of upper semi-regular and upper convexificators were established by Dutta and Chandra [8] for multiobjective optimization problems with inequality constraints. Necessary optimality conditions for Pareto minima and weak Pareto minima were established by Luu [10]; for a weak Pareto minimum, however, they apply only to problems without a set constraint and under a linear independence condition on the gradients of the equality constraint functions.

In this paper, under the regularity condition (RC) introduced in [16], which is more general than the linear independence condition mentioned above, we develop further necessary conditions and sufficient conditions for a weak Pareto minimum of multiobjective programming problems involving inequality, equality and set constraints in terms of convexificators. After some preliminaries, Sect. 3 is devoted to developing Fritz John necessary conditions for weak Pareto minima of multiobjective optimization problems involving inequality, equality and set constraints, based on the extended Ljusternik Theorem by Jiménez and Novo in [16]. Section 4 deals with constraint qualifications and Karush–Kuhn–Tucker necessary conditions for weak Pareto minima. In Sect. 5, under assumptions on generalized convexity, the necessary conditions for a weak Pareto minimum become sufficient conditions.

2 Preliminaries

Let $f$ be an extended-real-valued function defined on $\mathbb{R}^n$. Recall that the lower (resp. upper) Dini directional derivative of $f$ at $\bar{x} \in \mathbb{R}^n$ in a direction $v \in \mathbb{R}^n$ is defined by

$$f^-(\bar{x};v) := \liminf_{t\downarrow 0} \frac{f(\bar{x}+tv)-f(\bar{x})}{t}, \qquad \Big(\text{resp. } f^+(\bar{x};v) := \limsup_{t\downarrow 0} \frac{f(\bar{x}+tv)-f(\bar{x})}{t}\Big).$$

If $f^+(\bar{x};v) = f^-(\bar{x};v)$, their common value is called the directional derivative of $f$ at $\bar{x}$ in the direction $v$ and is denoted by $f'(\bar{x};v)$. The function $f$ is called directionally differentiable at $\bar{x}$ iff its directional derivative at $\bar{x}$ exists in all directions. If $f$ is Fréchet differentiable at $\bar{x}$ with Fréchet derivative $\nabla f(\bar{x})$, then $f'(\bar{x};v) = \langle \nabla f(\bar{x}), v\rangle$.

Following [4], the function $f$ is said to have an upper (resp. lower) convexificator $\partial^* f(\bar{x})$ (resp. $\partial_* f(\bar{x})$) at $\bar{x} \in \mathbb{R}^n$ iff $\partial^* f(\bar{x})$ (resp. $\partial_* f(\bar{x})$) $\subseteq \mathbb{R}^n$ is closed and

$$f^-(\bar{x};v) \leq \sup_{\xi \in \partial^* f(\bar{x})} \langle \xi, v\rangle \quad (\forall v \in \mathbb{R}^n), \qquad \Big(\text{resp. } f^+(\bar{x};v) \geq \inf_{\xi \in \partial_* f(\bar{x})} \langle \xi, v\rangle \quad (\forall v \in \mathbb{R}^n)\Big).$$

A closed set $\partial f(\bar{x}) \subseteq \mathbb{R}^n$ is said to be a convexificator of $f$ at $\bar{x}$ iff it is both an upper and a lower convexificator of $f$ at $\bar{x}$.
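The Dini derivatives underlying these definitions can be estimated numerically along a sequence $t_k \downarrow 0$. A minimal sketch (the helper `dini`, the dyadic step sizes and the tail cutoff are illustrative choices, not from the paper), using $f(x) = |x|$, which is directionally differentiable at $0$ with $f'(0;v) = |v|$:

```python
# Numerical sketch of the Dini directional derivatives
#   f^-(x; v) = liminf_{t->0+} (f(x + t v) - f(x)) / t,
#   f^+(x; v) = limsup_{t->0+} (f(x + t v) - f(x)) / t,
# estimated along the dyadic sequence t_k = 2^(-k) (an illustrative choice).

def dini(f, x, v, k_min=10, k_max=50):
    """Return (lower, upper) estimates of the Dini derivatives of f at x along v."""
    quotients = [(f(x + 2.0 ** (-k) * v) - f(x)) / 2.0 ** (-k)
                 for k in range(k_min, k_max)]
    tail = quotients[len(quotients) // 2:]  # approximate liminf/limsup by the tail
    return min(tail), max(tail)

# f(x) = |x| is directionally differentiable at 0 with f'(0; v) = |v|:
lower, upper = dini(abs, 0.0, -3.0)
print(lower, upper)  # both equal 3.0
```

For a function with distinct lower and upper Dini derivatives the two returned values differ, which is exactly the situation the convexificator inequalities above are designed to capture.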


Adapting the definition in [9], the function $f$ is said to have an upper semi-regular convexificator $\partial^* f(\bar{x})$ at $\bar{x}$ iff $\partial^* f(\bar{x}) \subseteq \mathbb{R}^n$ is closed and

$$f^+(\bar{x};v) \leq \sup_{\xi \in \partial^* f(\bar{x})} \langle \xi, v\rangle \quad (\forall v \in \mathbb{R}^n). \tag{1}$$

Example 2.1 Let $f : \mathbb{R} \to \mathbb{R}$ be defined as

$$f(x) := \begin{cases} x, & \text{if } x \in \mathbb{Q} \cap [0,+\infty[, \\ x^4 - 4x^3 + 4x^2, & \text{if } x \in \mathbb{Q}\,\cap\,]-\infty,0], \\ 0, & \text{otherwise}, \end{cases}$$

where $\mathbb{Q}$ is the set of rationals. Then

$$f^+(0;v) = \begin{cases} v, & \text{if } v \geq 0, \\ 0, & \text{if } v < 0, \end{cases} \qquad f^-(0;v) = 0 \quad (\forall v \in \mathbb{R}).$$

The set $\{0, 1\}$ is an upper semi-regular convexificator of $f$ at $\bar{x} = 0$, and so it is an upper convexificator of $f$ at $\bar{x}$. The set $\{0\}$ is a lower convexificator of $f$ at $\bar{x}$.

Following [4], if equality holds in (1), then $\partial^* f(\bar{x})$ is called an upper regular convexificator. It can be seen that, for a locally Lipschitz function $f$, the Clarke subdifferential and the Michel-Penot subdifferential are convexificators of $f$ at $\bar{x}$ (see [4] for details). Moreover, for a locally Lipschitz function which is regular in the sense of Clarke [13], the Clarke subdifferential is an upper regular convexificator (see [8]). Note that, if a function $f$ admits an upper regular convexificator at $\bar{x}$, then it also admits an upper semi-regular convexificator at $\bar{x}$, and hence an upper convexificator at $\bar{x}$.
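Upper regularity can be checked directly on a concrete Clarke-regular function. A minimal numerical sketch (the choice $f(x) = \max(x,0)$, the sample directions and the endpoint representation of $[0,1]$ are illustrative, not from the paper): the Clarke subdifferential of $f$ at $0$ is $[0,1]$, and upper regularity means $f^+(0;v) = \sup_{\xi\in[0,1]}\xi v = \max(v,0)$ in every direction $v$:

```python
# Sketch: f(x) = max(x, 0) is locally Lipschitz and Clarke regular, and its
# Clarke subdifferential at 0 is [0, 1].  Upper regularity of this set as a
# convexificator means  f^+(0; v) = sup_{xi in [0,1]} xi * v = max(v, 0).

def f(x):
    return max(x, 0.0)

def upper_dini(f, x, v, k_min=10, k_max=40):
    # f is positively homogeneous here, so the difference quotient is constant in t
    return max((f(x + 2.0 ** (-k) * v) - f(x)) / 2.0 ** (-k)
               for k in range(k_min, k_max))

for v in (-2.0, -0.5, 0.0, 1.0, 3.0):
    support = max(0.0 * v, 1.0 * v)  # the sup over [0,1] is attained at an endpoint
    assert abs(upper_dini(f, 0.0, v) - support) < 1e-12
print("upper regularity of [0,1] for max(x,0) holds on the sampled directions")
```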

Example 2.2 Let us consider the function $f : \mathbb{R} \to \mathbb{R}$ given as follows:

$$f(x) = \begin{cases} x^2 \big|\cos\frac{\pi}{x}\big|, & x \neq 0, \\ 0, & x = 0. \end{cases}$$

It can be seen that $f^+(0;v) = f^-(0;v) = 0$ $(\forall v \in \mathbb{R})$. The Clarke subdifferential and the Michel-Penot subdifferential of $f$ at $x = 0$ are $[-\pi,\pi]$ and $\{0\}$, respectively. The sets $\{0\}$, $[-\pi,\pi]$ and $\{-\pi,\pi\}$ are convexificators of $f$ at $x = 0$. The set $\{0\}$ is an upper regular convexificator of $f$ at $x = 0$.
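The vanishing of both Dini derivatives in Example 2.2 follows from the squeeze $|f(tv)/t| \leq t v^2$. A minimal numerical sketch (the sample directions and step sizes are illustrative choices):

```python
import math

# Sketch check for Example 2.2: with f(x) = x^2 |cos(pi/x)| and f(0) = 0,
# the bound |f(t v) / t| <= t v^2 squeezes every difference quotient to 0,
# so f^+(0; v) = f^-(0; v) = 0 in every direction v.

def f(x):
    return x * x * abs(math.cos(math.pi / x)) if x != 0.0 else 0.0

ok = True
for v in (-3.0, -1.0, 0.5, 2.0):
    for k in range(3, 9):
        t = 10.0 ** (-k)
        quotient = (f(t * v) - f(0.0)) / t
        # the quotient obeys the squeeze bound (up to rounding slack)
        ok = ok and abs(quotient) <= t * v * v * (1.0 + 1e-9)
print("difference quotients squeezed to 0:", ok)
```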

Recall [17] that an extended-real-valued function $f$, defined on a set $Q \subseteq \mathbb{R}^n$, is said to be quasiconvex at $\bar{x} \in Q$ with respect to $Q$ iff, for each $x \in Q$,

$$f(x) \leq f(\bar{x}) \implies \forall t \in\,]0,1[,\ f\big(tx + (1-t)\bar{x}\big) \leq f(\bar{x}).$$

$f$ is said to be quasiconvex on $Q$ iff it is quasiconvex at each $\bar{x} \in Q$. $f$ is called quasilinear at $\bar{x} \in Q$ with respect to $Q$ iff $\pm f$ are quasiconvex at $\bar{x}$ with respect to $Q$. Yang [18] showed that, if $f$ is continuous and quasiconvex and admits a convex lower convexificator on a convex set $Q$, then, for any $x, y \in Q$,

$$f(x) \leq f(y) \implies \exists \xi^{(n)} \in \partial_* f(y),\ \lim_{n\to\infty} \big\langle \xi^{(n)}, x - y\big\rangle \leq 0.$$

However, if $f$ admits an upper regular convexificator at $\bar{x}$, we have the following.

Proposition 2.1 Assume that $f$ admits an upper regular convexificator $\partial^* f(\bar{x})$ at $\bar{x}$ and is quasiconvex at $\bar{x} \in Q$ with respect to a convex set $Q$. Then

$$\forall x \in Q,\quad f(x) \leq f(\bar{x}) \implies \forall \xi \in \partial^* f(\bar{x}),\ \langle \xi, x - \bar{x}\rangle \leq 0.$$

Proof Since $f$ is quasiconvex at $\bar{x}$ with respect to $Q$, for each $x \in Q$ satisfying $f(x) \leq f(\bar{x})$, we have

$$f^+(\bar{x}; x - \bar{x}) \leq 0.$$

In view of the upper regularity of the convexificator $\partial^* f(\bar{x})$, for each $x \in Q$ satisfying $f(x) \leq f(\bar{x})$, it follows that

$$\sup_{\xi \in \partial^* f(\bar{x})} \langle \xi, x - \bar{x}\rangle = f^+(\bar{x}; x - \bar{x}) \leq 0.$$

Hence, the conclusion follows. $\square$
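Proposition 2.1 can be illustrated on a smooth instance. A minimal grid sketch (the choices $f(x) = x^2$, $Q = [0,2]$, $\bar{x} = 1$ and the grid are illustrative, not from the paper): $f$ is quasiconvex on $Q$ and, being differentiable, has the upper regular convexificator $\{f'(\bar{x})\} = \{2\}$ at $\bar{x} = 1$, so $f(x) \leq f(1)$ must force $\langle 2, x - 1\rangle \leq 0$:

```python
# Grid check of Proposition 2.1 for f(x) = x^2 (quasiconvex on Q = [0, 2])
# with the upper regular convexificator {f'(xbar)} = {2} at xbar = 1:
#   f(x) <= f(xbar)  ==>  <2, x - xbar> <= 0  for all x in Q.

def f(x):
    return x * x

xbar, xi = 1.0, 2.0                            # xi = f'(xbar) = 2 * xbar
grid = [2.0 * k / 100.0 for k in range(101)]   # sample points of Q = [0, 2]
violations = [x for x in grid
              if f(x) <= f(xbar) and xi * (x - xbar) > 0.0]
print("violations:", violations)  # expected: none
```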

Following [18], an extended-real-valued function $f$ that admits a convex lower convexificator $\partial_* f(x)$ on $Q$ is said to be lower asymptotic pseudoconvex on $Q$ iff, for every $x, y \in Q$, we have

$$\exists \xi^{(n)} \in \partial_* f(x),\ \lim_{n\to\infty} \big\langle \xi^{(n)}, y - x\big\rangle \geq 0 \implies f(y) \geq f(x).$$

Adapting the above definition by Yang in [18], an extended-real-valued function $f$ that admits a convexificator $\partial^* f(\bar{x})$ at $\bar{x}$ is called asymptotic pseudoconvex at $\bar{x}$ with respect to $Q$ iff, for every $x \in Q$, we have

$$\exists \xi^{(n)} \in \operatorname{conv} \partial^* f(\bar{x}),\ \lim_{n\to\infty} \big\langle \xi^{(n)}, x - \bar{x}\big\rangle \geq 0 \implies f(x) \geq f(\bar{x}),$$

where $\operatorname{conv}$ indicates the convex hull.

Example 2.3 Let $f, g : \mathbb{R} \to \mathbb{R}$ be defined as

$$f(x) := \begin{cases} x, & \text{if } x \leq 0, \\ \frac{1}{2}x, & \text{if } x > 0, \end{cases} \qquad g(x) := \begin{cases} x, & \text{if } x \in \mathbb{Q}, \\ 2x, & \text{if } x \in (\mathbb{R}\setminus\mathbb{Q})\,\cap\,]-\infty,0], \\ \frac{1}{2}x, & \text{if } x \in (\mathbb{R}\setminus\mathbb{Q}) \cap [0,\infty[. \end{cases}$$

Then a convexificator of $f$ at $0$ is $\partial^* f(0) = \{\frac{1}{2}, 1\}$, and $f$ is asymptotic pseudoconvex at $0$ with respect to $Q = \mathbb{R}$, while a lower convexificator of $g$ at $0$ is $\partial_* g(0) = \{\frac{1}{2}, 2\}$, and $g$ is lower asymptotic pseudoconvex at $0$ with respect to $Q = \mathbb{R}$.

Let $K$ be a closed and convex cone in $\mathbb{R}^n$, and let

$$K^* := \{\xi \in \mathbb{R}^n : \langle \xi, x\rangle \geq 0,\ \forall x \in K\}$$

be its nonnegative polar cone. Let $f : Q \subseteq \mathbb{R}^n \to \mathbb{R}^m$, so that $f = (f_1, \dots, f_m)$, and let each $f_k$ admit a convexificator $\partial^* f_k(\bar{x})$ at $\bar{x}$. The function $f$ is called scalarly $K$-asymptotic pseudoconvex at $\bar{x}$ with respect to $Q$ iff, for any $\lambda \in K^*$, the function $\lambda^T f$ is asymptotic pseudoconvex at $\bar{x}$ with respect to $Q$.

Recall [19] that the Bouligand and Clarke tangent cones to a set $C \subseteq \mathbb{R}^n$ at a point $\bar{x} \in C$ are defined, respectively, by

$$K(C;\bar{x}) := \{v \in \mathbb{R}^n : \exists v_n \to v,\ \exists t_n \downarrow 0 \text{ such that } \bar{x} + t_n v_n \in C,\ \forall n\},$$

$$T(C;\bar{x}) := \{v \in \mathbb{R}^n : \forall x_n \in C,\ x_n \to \bar{x},\ \forall t_n \downarrow 0,\ \exists v_n \to v \text{ such that } x_n + t_n v_n \in C,\ \forall n\}.$$

The cone of attainable directions to $C$ at $\bar{x} \in C$ is

$$A(C;\bar{x}) = \Big\{v \in \mathbb{R}^n : \exists \delta > 0,\ \exists \gamma : [0,\delta] \to \mathbb{R}^n \text{ such that } \gamma(0) = \bar{x},\ \gamma(t) \in C\ \forall t \in\,]0,\delta],\ \gamma'(0) = \lim_{t\downarrow 0} \frac{\gamma(t)-\gamma(0)}{t} = v\Big\}.$$

The Clarke normal cone to $C$ at $\bar{x}$ is

$$N(C;\bar{x}) = \{\xi \in \mathbb{R}^n : \langle \xi, v\rangle \leq 0\ \forall v \in T(C;\bar{x})\}.$$

Note that the cones $T(C;\bar{x})$ and $N(C;\bar{x})$ are nonempty, closed and convex, $N(C;\bar{x}) = -T^*(C;\bar{x})$, and $T(C;\bar{x}) \subseteq K(C;\bar{x})$. In case $C$ is convex, $T(C;\bar{x}) = K(C;\bar{x})$.

3 Fritz John Necessary Conditions for Weak Pareto Minima

The present section is devoted to developing Fritz John necessary conditions for local weak minima in terms of upper and upper semi-regular convexificators. We shall be concerned with the following multiobjective programming problem (P):

$$\min f(x) \quad \text{s.t.} \quad x \in M := \{x \in C : g(x) \leq 0,\ h(x) = 0\},$$

where $f, g, h$ are maps from $\mathbb{R}^n$ into $\mathbb{R}^r$, $\mathbb{R}^m$, $\mathbb{R}^\ell$, respectively, and $C$ is a subset of $\mathbb{R}^n$. Then $f = (f_1,\dots,f_r)$, $g = (g_1,\dots,g_m)$, $h = (h_1,\dots,h_\ell)$, where $f_1,\dots,f_r$, $g_1,\dots,g_m$, $h_1,\dots,h_\ell$ are extended-real-valued functions defined on $\mathbb{R}^n$. For $x, y \in \mathbb{R}^n$, we write $x \leq y$ iff $x_i \leq y_i$ $(i = 1,\dots,n)$. Thus $g(x) \leq 0$ means that $g_i(x) \leq 0$ $(i = 1,\dots,m)$, and $h(x) = 0$ means that $h_j(x) = 0$ $(j = 1,\dots,\ell)$. For the sake of simplicity, we set $I = \{1,\dots,m\}$, $J = \{1,\dots,r\}$, $L = \{1,\dots,\ell\}$. Note that necessary conditions in terms of convexificators for this type of problem with a set constraint or inequality constraints were studied by Dutta and Chandra [8], and without set constraints by Luu [10]. We set

$$I(\bar{x}) := \{i \in I : g_i(\bar{x}) = 0\}, \qquad H := \{x \in \mathbb{R}^n : h(x) = 0\}.$$

The following extension of the classical Ljusternik theorem was proved by Jiménez and Novo in [16].

Proposition 3.1 [16] Assume that

(a) $C$ is convex and $\bar{x} \in H \cap C$;
(b) $h$ is continuous in a neighborhood of $\bar{x}$ and Fréchet differentiable at $\bar{x}$ with Fréchet derivative $\nabla h(\bar{x})$;
(c) the following regularity condition (RC) holds:

$$0 \in \sum_{j\in L} \gamma_j \nabla h_j(\bar{x}) + N(C;\bar{x}) \implies \gamma_1 = \cdots = \gamma_\ell = 0.$$

Then

$$A(H\cap C;\bar{x}) = T(H\cap C;\bar{x}) = \big(\ker \nabla h(\bar{x})\big) \cap T(C;\bar{x}) = \operatorname{cl}\big[\big(\ker\nabla h(\bar{x})\big) \cap \operatorname{cone}(C-\bar{x})\big],$$

where $\operatorname{cl}$ indicates the closure.

Remark 3.1 If $C = \mathbb{R}^n$, $h$ is of class $C^1$ on a neighborhood of $\bar{x}$ and $\nabla h_1(\bar{x}),\dots,\nabla h_\ell(\bar{x})$ are linearly independent, then Proposition 3.1 becomes the classical Ljusternik theorem. In fact, under these assumptions the mapping $\nabla h(\bar{x})$ is surjective, $T(C;\bar{x}) = \mathbb{R}^n$, the regularity condition (RC) holds, and the conclusion reduces to $A(H;\bar{x}) = T(H;\bar{x}) = \ker\nabla h(\bar{x})$.

The regularity condition (RC) will be illustrated by the following example.

Example 3.1 Let $h : \mathbb{R}^3 \to \mathbb{R}^2$ and $C \subset \mathbb{R}^3$ be given by $h := (h_1, h_2)$ with

$$h_1(x,y,z) = x + 2y + z, \qquad h_2(x,y,z) = 2x + 4y - z,$$

$$C := \{(x,y,z) : -1 \leq x \leq 0,\ -1 \leq y, z \leq 1\},$$

and let $\bar{x} = (0,0,0)$. Then $\nabla h_1(0,0,0) = (1,2,1)$, $\nabla h_2(0,0,0) = (2,4,-1)$, $T(C;0) = -\mathbb{R}_+ \times \mathbb{R} \times \mathbb{R}$, and $N(C;0) = \mathbb{R}_+ \times \{0\} \times \{0\}$. It can be seen that the condition (RC) is satisfied. In fact, if

$$0 \in \gamma_1 \nabla h_1(0) + \gamma_2 \nabla h_2(0) + N(C;0),$$

which means that

$$(0,0,0) \in (\gamma_1 + 2\gamma_2,\ 2\gamma_1 + 4\gamma_2,\ \gamma_1 - \gamma_2) + \mathbb{R}_+ \times \{0\} \times \{0\},$$

then $\gamma_1 = \gamma_2 = 0$. Hence, the condition (RC) is satisfied.

Recall that a point $\bar{x} \in M$ is said to be a local weak Pareto minimum of Problem (P) iff there exists a number $\delta > 0$ such that there is no $x \in M \cap B(\bar{x};\delta)$ satisfying

$$f_k(x) < f_k(\bar{x}) \quad (\forall k \in J),$$

where $B(\bar{x};\delta)$ stands for the open ball of radius $\delta$ around $\bar{x}$.

The following assumption is needed to derive necessary conditions for weak efficiency.

Assumption 3.1 There exists an index $s \in J$ such that $f_s$ admits an upper convexificator $\partial^* f_s(\bar{x})$ at $\bar{x}$. For every $k \in J$, $k \neq s$, and $i \in I(\bar{x})$, the functions $f_k$ and $g_i$ admit upper semi-regular convexificators $\partial^* f_k(\bar{x})$ and $\partial^* g_i(\bar{x})$ at $\bar{x}$, respectively. All the functions $g_i$ $(i \notin I(\bar{x}))$ are continuous at $\bar{x}$.

Based on the extended Ljusternik Theorem by Jiménez and Novo in [16], we establish the following necessary condition for a local weak Pareto minimum of (P).

Theorem 3.1 Let $\bar{x}$ be a local weak Pareto minimum of (P). Assume that all the hypotheses of Proposition 3.1 and Assumption 3.1 are fulfilled. Suppose, in addition, that all the functions $f_k$ $(k \in J)$ and $g_i$ $(i \in I(\bar{x}))$ are locally Lipschitz at $\bar{x}$. Then the following system is impossible:

$$\sup_{\xi_k \in \operatorname{conv}\partial^* f_k(\bar{x})} \langle \xi_k, v\rangle < 0 \quad (\forall k \in J), \tag{2}$$

$$\sup_{\zeta_i \in \operatorname{conv}\partial^* g_i(\bar{x})} \langle \zeta_i, v\rangle < 0 \quad (\forall i \in I(\bar{x})), \tag{3}$$

$$\langle \nabla h_j(\bar{x}), v\rangle = 0 \quad (\forall j \in L), \tag{4}$$

$$v \in T(C;\bar{x}). \tag{5}$$

Proof Let us show that the following system has no solution $v \in \mathbb{R}^n$:

$$f_s^-(\bar{x};v) < 0, \tag{6}$$

$$f_k^+(\bar{x};v) < 0 \quad (\forall k \in J,\ k \neq s), \tag{7}$$

$$g_i^+(\bar{x};v) < 0 \quad (\forall i \in I(\bar{x})), \tag{8}$$

$$\langle \nabla h_j(\bar{x}), v\rangle = 0 \quad (\forall j \in L), \tag{9}$$

$$v \in T(C;\bar{x}). \tag{10}$$

Assume, to the contrary, that the system (6)–(10) has a solution $v_0 \in \mathbb{R}^n$. Then $v_0 \in (\ker\nabla h(\bar{x})) \cap T(C;\bar{x})$. We invoke Proposition 3.1 to deduce that

$$\big(\ker\nabla h(\bar{x})\big) \cap T(C;\bar{x}) = A(H\cap C;\bar{x}),$$

which implies that there exist $\delta > 0$ and $\gamma : [0,\delta] \to \mathbb{R}^n$ such that

$$\gamma(0) = \bar{x},\quad \gamma(t) \in H\cap C\ (\forall t \in\,]0,\delta]),\quad \gamma'(0) = \lim_{t\downarrow 0}\frac{\gamma(t)-\gamma(0)}{t} = v_0. \tag{11}$$

Thus,

$$\gamma(t) \in C \quad\text{and}\quad h\big(\gamma(t)\big) = 0 \quad (\forall t \in\,]0,\delta]). \tag{12}$$

It follows from (11) that $v_0 = \frac{\gamma(t)-\gamma(0)}{t} + \frac{o(t)}{t}$, where $o(t)/t \to 0$ as $t \downarrow 0$; in other words, $\gamma(t) = \bar{x} + tv_0 + o(t)$. Since $f_s$ is locally Lipschitz at $\bar{x}$, it follows from [20, p. 286] that

$$f_s^-(\bar{x};v_0) = \liminf_{t\downarrow 0} \frac{f_s(\bar{x}+tv_0) - f_s(\bar{x})}{t} = \liminf_{t\downarrow 0} \frac{f_s\big(\bar{x} + t\big(\frac{\gamma(t)-\gamma(0)}{t} + \frac{o(t)}{t}\big)\big) - f_s(\bar{x})}{t} = \liminf_{t\downarrow 0} \frac{f_s\big(\bar{x} + (\gamma(t)-\bar{x})\big) - f_s(\bar{x})}{t} = \liminf_{t\downarrow 0} \frac{f_s(\gamma(t)) - f_s(\bar{x})}{t} < 0.$$

Therefore, for every natural number $p$, there exists $t_p \in\,]0,\frac{1}{p}[$ $(\frac{1}{p} \leq \delta)$ such that

$$\liminf_{t\downarrow 0} \frac{f_s(\gamma(t)) - f_s(\bar{x})}{t} = \lim_{p\to+\infty} \frac{f_s(\gamma(t_p)) - f_s(\bar{x})}{t_p} < 0.$$

Consequently, there is a natural number $N_1$ such that, for all $p \geq N_1$,

$$f_s\big(\gamma(t_p)\big) < f_s(\bar{x}). \tag{13}$$

Since $f_k$ is locally Lipschitz at $\bar{x}$ for all $k \in J$, $k \neq s$, it follows from (11) that

$$f_k^+(\bar{x};v_0) = \limsup_{t\downarrow 0} \frac{f_k\big(\bar{x} + t\big(\frac{\gamma(t)-\gamma(0)}{t} + \frac{o(t)}{t}\big)\big) - f_k(\bar{x})}{t} = \lim_{p\to+\infty} \sup_{t\in\,]0,\frac{1}{p}[} \frac{f_k(\gamma(t)) - f_k(\bar{x})}{t} < 0.$$

Hence, there exists a natural number $N_2$ $(\geq N_1)$ such that, for all $p \geq N_2$ and $t \in\,]0,\frac{1}{p}[$, $f_k(\gamma(t)) < f_k(\bar{x})$, and so, for all $k \in J$, $k \neq s$,

$$f_k\big(\gamma(t_p)\big) < f_k(\bar{x}). \tag{14}$$

Analogously, there exists a natural number $N_3$ $(\geq N_2)$ such that, for all $i \in I(\bar{x})$ and $p \geq N_3$,

$$g_i\big(\gamma(t_p)\big) < 0. \tag{15}$$

In view of the continuity of $g_i$ $(i \notin I(\bar{x}))$, there exists a natural number $N_4$ $(\geq N_3)$ such that, for all $i \notin I(\bar{x})$ and $p \geq N_4$,

$$g_i\big(\gamma(t_p)\big) < 0. \tag{16}$$

Combining (12)–(16) yields, for all $p \geq N_4$,

$$f_k\big(\gamma(t_p)\big) < f_k(\bar{x}) \quad (\forall k \in J), \qquad g_i\big(\gamma(t_p)\big) < 0 \quad (\forall i \in I), \qquad h\big(\gamma(t_p)\big) = 0, \qquad \gamma(t_p) \in C.$$

This conflicts with the fact that $\bar{x}$ is a local weak Pareto minimum of (P). By virtue of Assumption 3.1, we deduce that the system (2)–(5) is also impossible. The proof is complete. $\square$

We set

$$D(\bar{x}) := \bigcup \Big\{\sum_{k\in J} \lambda_k \operatorname{conv}\partial^* f_k(\bar{x}) + \sum_{i\in I(\bar{x})} \mu_i \operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L} \gamma_j \nabla h_j(\bar{x}) + N(C;\bar{x}) : \lambda_k \geq 0\ (\forall k \in J),\ \mu_i \geq 0\ (\forall i \in I(\bar{x})),\ \gamma_j \in \mathbb{R}\ (\forall j \in L),\ (\lambda,\mu,\gamma) \neq (0,0,0)\Big\},$$

where $\lambda = (\lambda_k)_{k\in J}$, $\mu = (\mu_i)_{i\in I(\bar{x})}$, $\gamma = (\gamma_j)_{j\in L}$.

A Fritz John type asymptotic necessary condition for a local weak Pareto minimum of Problem (P) in terms of convexificators can be stated as follows.

Theorem 3.2 Assume that $\bar{x}$ is a local weak Pareto minimum of (P) and all the hypotheses of Theorem 3.1 are fulfilled. Then there exist $\bar\lambda_k \geq 0$ $(\forall k \in J)$, $\bar\mu_i \geq 0$ $(\forall i \in I(\bar{x}))$ with $(\bar\lambda, \bar\mu) \neq (0,0)$ and $\bar\gamma \in \mathbb{R}^\ell$ such that

$$0 \in \operatorname{cl}\Big(\sum_{k\in J} \bar\lambda_k \operatorname{conv}\partial^* f_k(\bar{x}) + \sum_{i\in I(\bar{x})} \bar\mu_i \operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L} \bar\gamma_j \nabla h_j(\bar{x}) + N(C;\bar{x})\Big). \tag{17}$$


Proof Let us show that

$$0 \in \operatorname{cl} D(\bar{x}). \tag{18}$$

Contrary to (18), suppose that $0 \notin \operatorname{cl} D(\bar{x})$. It can be seen that the set $D(\bar{x})$ is nonempty and convex. Applying a strong separation theorem for the disjoint convex sets $D(\bar{x})$ and $\{0\}$ (see [21], Corollary 11.4.2) yields the existence of $v_0 \in \mathbb{R}^n$, $v_0 \neq 0$, such that

$$\sup_{\xi \in D(\bar{x})} \langle \xi, v_0\rangle < 0. \tag{19}$$

According to Theorem 3.1, the system (2)–(5) is impossible. For each $k \in J$, by taking $\lambda_k = 1$, $\lambda_p = 0$ $(\forall p \in J,\ p \neq k)$, $\mu_i = 0$ $(\forall i \in I(\bar{x}))$, $\gamma = 0$, and $0 = \zeta \in N(C;\bar{x})$, it follows from (19) that

$$\sup_{\xi_k \in \operatorname{conv}\partial^* f_k(\bar{x})} \langle \xi_k, v_0\rangle < 0 \quad (\forall k \in J). \tag{20}$$

In the same way as above, one gets

$$\sup_{\eta_i \in \operatorname{conv}\partial^* g_i(\bar{x})} \langle \eta_i, v_0\rangle < 0 \quad (\forall i \in I(\bar{x})). \tag{21}$$

Let us see that

$$\langle \nabla h_j(\bar{x}), v_0\rangle = 0 \quad (\forall j \in L). \tag{22}$$

In fact, if (22) were false, then $\langle \nabla h_{j_0}(\bar{x}), v_0\rangle \neq 0$ for some $j_0 \in L$. By taking $\xi_s \in \partial^* f_s(\bar{x})$, $\lambda_s = 1$, $\lambda_k = 0$ $(\forall k \in J,\ k \neq s)$, $\mu_i = 0$ $(\forall i \in I(\bar{x}))$, $\gamma_j = 0$ $(\forall j \in L,\ j \neq j_0)$, $0 = \zeta \in N(C;\bar{x})$, it follows from (19) that

$$\langle \xi_s, v_0\rangle + \gamma_{j_0}\big\langle \nabla h_{j_0}(\bar{x}), v_0\big\rangle < 0. \tag{23}$$

Observe that $|\langle \xi_s, v_0\rangle| < +\infty$ and $|\langle \nabla h_{j_0}(\bar{x}), v_0\rangle| < +\infty$. Taking $\gamma_{j_0} > 0$ sufficiently large if $\langle \nabla h_{j_0}(\bar{x}), v_0\rangle > 0$, and $\gamma_{j_0} < 0$ with $|\gamma_{j_0}|$ large enough if $\langle \nabla h_{j_0}(\bar{x}), v_0\rangle < 0$, we arrive at a contradiction with (23). Hence (22) holds.

Next let us show that

$$v_0 \in T(C;\bar{x}). \tag{24}$$

Indeed, if it were not so, there would exist $\eta_0 \in N(C;\bar{x})$ such that $\langle \eta_0, v_0\rangle > 0$. Taking $\lambda_k = 0$ $(\forall k \in J,\ k \neq s)$, $\lambda_s > 0$, $\xi_s \in \partial^* f_s(\bar{x})$, $\mu_i = 0$ $(\forall i \in I(\bar{x}))$, $\gamma = 0$, we have $\alpha\eta_0 \in N(C;\bar{x})$ for every $\alpha > 0$, and

$$\lambda_s\langle \xi_s, v_0\rangle + \alpha\langle \eta_0, v_0\rangle < 0. \tag{25}$$

Since $\langle \eta_0, v_0\rangle > 0$, for $\alpha$ sufficiently large one gets a contradiction with (25). Hence, $\langle \eta, v_0\rangle \leq 0$ $(\forall \eta \in N(C;\bar{x}))$. Observing that $T(C;\bar{x})$ is a closed and convex cone, we obtain $v_0 \in N^0(C;\bar{x}) = T^{00}(C;\bar{x}) = T(C;\bar{x})$.


Combining (20)–(22) and (24) yields that $v_0$ is a solution of the system (2)–(5), which conflicts with Theorem 3.1. Consequently, (18) holds, and thus there exist $\lambda_k^{(n)} \geq 0$, $\xi_k^{(n)} \in \operatorname{conv}\partial^* f_k(\bar{x})$ $(\forall k \in J)$, $\mu_i^{(n)} \geq 0$, $\eta_i^{(n)} \in \operatorname{conv}\partial^* g_i(\bar{x})$ $(\forall i \in I(\bar{x}))$, $\gamma_j^{(n)} \in \mathbb{R}$ $(\forall j \in L)$, and $\zeta^{(n)} \in N(C;\bar{x})$ with $(\lambda^{(n)}, \mu^{(n)}_{I(\bar{x})}, \gamma^{(n)}) \neq (0,0,0)$ such that

$$0 = \lim_{n\to\infty}\Big(\sum_{k\in J}\lambda_k^{(n)}\xi_k^{(n)} + \sum_{i\in I(\bar{x})}\mu_i^{(n)}\eta_i^{(n)} + \sum_{j\in L}\gamma_j^{(n)}\nabla h_j(\bar{x}) + \zeta^{(n)}\Big), \tag{26}$$

where $\lambda^{(n)} = (\lambda_k^{(n)})_{k\in J}$, $\mu^{(n)}_{I(\bar{x})} = (\mu_i^{(n)})_{i\in I(\bar{x})}$, $\gamma^{(n)} = (\gamma_j^{(n)})_{j\in L}$. Since $(\lambda^{(n)}, \mu^{(n)}_{I(\bar{x})}, \gamma^{(n)}) \neq (0,0,0)$, they can be chosen so that $\|(\lambda^{(n)}, \mu^{(n)}_{I(\bar{x})}, \gamma^{(n)})\| = 1$ $(\forall n)$. Without loss of generality, we may assume that $(\lambda^{(n)}, \mu^{(n)}_{I(\bar{x})}, \gamma^{(n)}) \to (\bar\lambda, \bar\mu_{I(\bar{x})}, \bar\gamma)$ with $\bar\lambda \geq 0$, $\bar\mu_{I(\bar{x})} \geq 0$, $\bar\gamma \in \mathbb{R}^\ell$ and $\|(\bar\lambda, \bar\mu_{I(\bar{x})}, \bar\gamma)\| = 1$. Since $\operatorname{cl} A + \operatorname{cl} B \subseteq \operatorname{cl}(A+B)$, it follows from (26) that

$$0 \in \sum_{k\in J} \bar\lambda_k \operatorname{cl}\operatorname{conv}\partial^* f_k(\bar{x}) + \sum_{i\in I(\bar{x})} \bar\mu_i \operatorname{cl}\operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L} \bar\gamma_j \nabla h_j(\bar{x}) + N(C;\bar{x}) \subseteq \operatorname{cl}\Big(\sum_{k\in J} \bar\lambda_k \operatorname{conv}\partial^* f_k(\bar{x}) + \sum_{i\in I(\bar{x})} \bar\mu_i \operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L} \bar\gamma_j \nabla h_j(\bar{x}) + N(C;\bar{x})\Big). \tag{27}$$

It can be seen that (27) holds with $(\bar\lambda, \bar\mu) \neq (0,0)$. In fact, if $(\bar\lambda, \bar\mu) = (0,0)$, then $\bar\gamma \neq 0$; but this, together with (27), yields a contradiction with the regularity condition (RC). Hence $(\bar\lambda, \bar\mu) \neq (0,0)$. The proof is complete. $\square$

Corollary 3.1 Assume that $C = \mathbb{R}^n$ and all the hypotheses of Theorem 3.2 are fulfilled, in which the regularity condition (RC) of Proposition 3.1 is replaced by the assumption that $\nabla h_1(\bar{x}),\dots,\nabla h_\ell(\bar{x})$ are linearly independent. Then there exist $\bar\lambda_k \geq 0$ $(\forall k \in J)$, $\bar\mu_i \geq 0$ $(\forall i \in I(\bar{x}))$ with $(\bar\lambda, \bar\mu) \neq (0,0)$ and $\bar\gamma_j \in \mathbb{R}$ $(\forall j \in L)$ such that (17) holds.

Proof For $C = \mathbb{R}^n$, $N(C;\bar{x}) = \{0\}$. Hence, if $\nabla h_1(\bar{x}),\dots,\nabla h_\ell(\bar{x})$ are linearly independent, then the regularity condition (RC) is satisfied. Taking account of Theorem 3.2, the conclusion follows. $\square$

In case the set $D(\bar{x})$ is closed, we get the following direct consequence of Theorem 3.2, in which the closure in (17) can be removed.

Corollary 3.2 Let $C = \mathbb{R}^n$. Assume that all the hypotheses of Theorem 3.2 are fulfilled and the set $D(\bar{x})$ is closed. Then there exist $\bar\lambda_k \geq 0$ $(\forall k \in J)$, $\bar\mu_i \geq 0$ $(\forall i \in I(\bar{x}))$ with $(\bar\lambda, \bar\mu) \neq (0,0)$ and $\bar\gamma_j \in \mathbb{R}$ $(\forall j \in L)$ such that

$$0 \in \sum_{k\in J} \bar\lambda_k \operatorname{conv}\partial^* f_k(\bar{x}) + \sum_{i\in I(\bar{x})} \bar\mu_i \operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L} \bar\gamma_j \nabla h_j(\bar{x}) + N(C;\bar{x}).$$

Remark 3.2 In case $C = \mathbb{R}^n$, as in Remark 3.1 of [22], if $\partial^* f_k(\bar{x})$ $(k \in J)$ and $\partial^* g_i(\bar{x})$ $(i \in I(\bar{x}))$ are bounded and the following condition is fulfilled:

$$0 \notin \operatorname{conv}\Big(\bigcup_{k\in J}\operatorname{conv}\partial^* f_k(\bar{x}) \cup \bigcup_{i\in I(\bar{x})}\operatorname{conv}\partial^* g_i(\bar{x})\Big) + \operatorname{lin}\{\nabla h_j(\bar{x}) : j \in L\},$$

where $\operatorname{lin}$ stands for the linear hull, then $D(\bar{x})$ is closed. In fact, it can be seen that $\operatorname{conv}\partial^* f_k(\bar{x})$ $(k \in J)$ and $\operatorname{conv}\partial^* g_i(\bar{x})$ $(i \in I(\bar{x}))$ are compact, and so is

$$\operatorname{conv}\Big(\bigcup_{k\in J}\operatorname{conv}\partial^* f_k(\bar{x}) \cup \bigcup_{i\in I(\bar{x})}\operatorname{conv}\partial^* g_i(\bar{x})\Big).$$

This implies that the set

$$E(\bar{x}) := \operatorname{conv}\Big(\bigcup_{k\in J}\operatorname{conv}\partial^* f_k(\bar{x}) \cup \bigcup_{i\in I(\bar{x})}\operatorname{conv}\partial^* g_i(\bar{x})\Big) + \operatorname{lin}\{\nabla h_j(\bar{x}) : j \in L\}$$

is closed. Moreover, since $0 \notin E(\bar{x})$ and $D(\bar{x}) = \operatorname{cone} E(\bar{x})$, where $\operatorname{cone} E(\bar{x})$ is the cone generated by $E(\bar{x})$, it follows that $D(\bar{x})$ is closed.

Remark 3.3 Necessary conditions for efficiency obtained in Theorem 3.2, Corollaries 3.1 and 3.2, and some other optimality conditions in this paper, which are expressed in terms of $\partial^* f(\bar{x})$, may be sharper than those expressed in terms of generalized subdifferentials such as the Clarke, Mordukhovich and Michel-Penot subdifferentials. This is shown in Example 2.1 of Jeyakumar and Luc [4]. Some discussion of the advantages of the nonconvexity of convexificators is given by Jeyakumar and Luc [4, p. 604].

4 Constraint Qualifications and Karush–Kuhn–Tucker Necessary Conditions for Efficiency

In Sect. 3, we derived the Fritz John necessary condition for efficiency (17) under Assumption 3.1 and the hypotheses of Proposition 3.1, where the regularity condition (RC) holds:

$$0 \in \sum_{j\in L} \gamma_j\nabla h_j(\bar{x}) + N(C;\bar{x}) \implies \gamma_1 = \cdots = \gamma_\ell = 0.$$

In case $C = \mathbb{R}^n$, this condition states that $\nabla h_1(\bar{x}),\dots,\nabla h_\ell(\bar{x})$ are linearly independent. Note that the necessary condition (17) only guarantees $(\bar\lambda, \bar\mu) \neq (0,0)$. To obtain necessary conditions for efficiency in which $\bar\lambda \neq 0$, we introduce the following constraint qualification, called (CQ1): there exist $s \in J$, $d_0 \in T(C;\bar{x})$, and numbers $a_k > 0$ $(k \in J,\ k \neq s)$, $b_i > 0$ $(i \in I(\bar{x}))$ such that

(i) $\langle \xi_k, d_0\rangle \leq -a_k$ $(\forall \xi_k \in \operatorname{conv}\partial^* f_k(\bar{x}),\ \forall k \in J,\ k \neq s)$; $\langle \eta_i, d_0\rangle \leq -b_i$ $(\forall \eta_i \in \operatorname{conv}\partial^* g_i(\bar{x}),\ \forall i \in I(\bar{x}))$;
(ii) $\langle \nabla h_j(\bar{x}), d_0\rangle = 0$ $(\forall j \in L)$.

We also introduce another constraint qualification, called (CQ2): for every $\lambda_k \geq 0$ $(k \in J,\ k \neq s)$ and $\mu_i \geq 0$ $(\forall i \in I(\bar{x}))$, not all zero, and every $\gamma_j \in \mathbb{R}$ $(\forall j \in L)$, we have

$$0 \notin \operatorname{cl}\Big(\sum_{k\in J, k\neq s} \lambda_k \operatorname{conv}\partial^* f_k(\bar{x}) + \sum_{i\in I(\bar{x})} \mu_i \operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L} \gamma_j\nabla h_j(\bar{x}) + N(C;\bar{x})\Big).$$

In the sequel, we give a relationship between (CQ1) and (CQ2).

Proposition 4.1 Assume that $f_k$ and $g_i$ admit upper convexificators $\partial^* f_k(\bar{x})$ and $\partial^* g_i(\bar{x})$ at $\bar{x}$, respectively, for all $k \in J$, $k \neq s$, $i \in I(\bar{x})$, that $h$ is Fréchet differentiable at $\bar{x}$, and that $C$ is convex. Then (CQ1) implies (CQ2) (for $s \in J$).

Proof Assume, to the contrary, that (CQ1) holds but (CQ2) is false. This means that there exist $\lambda_k \geq 0$ $(\forall k \in J,\ k \neq s)$, $\mu_i \geq 0$ $(\forall i \in I(\bar{x}))$ with $(\lambda^{(s)}, \mu) \neq (0,0)$ (where $\lambda^{(s)} = (\lambda_k)_{k\in J, k\neq s}$, $\mu = (\mu_i)_{i\in I(\bar{x})}$), $\xi_k^{(n)} \in \operatorname{conv}\partial^* f_k(\bar{x})$ $(\forall k \in J,\ k \neq s)$, $\eta_i^{(n)} \in \operatorname{conv}\partial^* g_i(\bar{x})$ $(\forall i \in I(\bar{x}))$, $\gamma_j \in \mathbb{R}$ $(\forall j \in L)$ and $\zeta^{(n)} \in N(C;\bar{x})$ such that

$$\lim_{n\to\infty}\Big(\sum_{k\in J, k\neq s}\lambda_k\xi_k^{(n)} + \sum_{i\in I(\bar{x})}\mu_i\eta_i^{(n)} + \sum_{j\in L}\gamma_j\nabla h_j(\bar{x}) + \zeta^{(n)}\Big) = 0.$$

Let $d_0 \in T(C;\bar{x})$ be as in (CQ1). Pairing the above limit with $d_0$ and using condition (ii) together with $\langle \zeta^{(n)}, d_0\rangle \leq 0$ yields

$$0 = \lim_{n\to\infty}\Big(\sum_{k\in J, k\neq s}\lambda_k\big\langle\xi_k^{(n)}, d_0\big\rangle + \sum_{i\in I(\bar{x})}\mu_i\big\langle\eta_i^{(n)}, d_0\big\rangle + \sum_{j\in L}\gamma_j\big\langle\nabla h_j(\bar{x}), d_0\big\rangle + \big\langle\zeta^{(n)}, d_0\big\rangle\Big) \leq \lim_{n\to\infty}\Big(\sum_{k\in J, k\neq s}\lambda_k\big\langle\xi_k^{(n)}, d_0\big\rangle + \sum_{i\in I(\bar{x})}\mu_i\big\langle\eta_i^{(n)}, d_0\big\rangle\Big). \tag{28}$$

Since $(\lambda^{(s)}, \mu) \neq (0,0)$, it follows from condition (i) in (CQ1) that

$$\lim_{n\to\infty}\Big(\sum_{k\in J, k\neq s}\lambda_k\big\langle\xi_k^{(n)}, d_0\big\rangle + \sum_{i\in I(\bar{x})}\mu_i\big\langle\eta_i^{(n)}, d_0\big\rangle\Big) \leq -\sum_{k\in J, k\neq s}\lambda_k a_k - \sum_{i\in I(\bar{x})}\mu_i b_i < 0.$$

This conflicts with (28). Hence the conclusion follows. $\square$

A Karush–Kuhn–Tucker necessary condition for efficiency can be stated as follows.


Theorem 4.1 Let $\bar{x}$ be a local weak Pareto minimum of (P). Assume that all the hypotheses of Theorem 3.2 are fulfilled. Suppose also that the constraint qualification (CQ1) or (CQ2) (for $s \in J$) holds. Then there exist $\bar\lambda_s > 0$, $\bar\lambda_k \geq 0$ $(\forall k \in J,\ k \neq s)$, $\bar\mu_i \geq 0$ $(\forall i \in I(\bar{x}))$ and $\bar\gamma_j \in \mathbb{R}$ $(\forall j \in L)$ such that

$$0 \in \operatorname{cl}\Big(\sum_{k\in J}\bar\lambda_k\operatorname{conv}\partial^* f_k(\bar{x}) + \sum_{i\in I(\bar{x})}\bar\mu_i\operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L}\bar\gamma_j\nabla h_j(\bar{x}) + N(C;\bar{x})\Big). \tag{29}$$

Proof Applying Theorem 3.2 yields the existence of $\bar\lambda_k \geq 0$ $(\forall k \in J)$, $\bar\mu_i \geq 0$ $(\forall i \in I(\bar{x}))$ with $(\bar\lambda, \bar\mu) \neq (0,0)$ (where $(\bar\lambda, \bar\mu) = ((\bar\lambda_k)_{k\in J}, (\bar\mu_i)_{i\in I(\bar{x})})$) and $\bar\gamma \in \mathbb{R}^\ell$ such that (29) holds. If $\bar\lambda_s = 0$, then from (CQ1) or (CQ2) we arrive at a contradiction with (29); hence $\bar\lambda_s > 0$. $\square$

Remark 4.1 We introduce the following constraint qualification (CQ3), which is weaker than (CQ1): there exist $d_0 \in T(C;\bar{x})$ and numbers $b_i > 0$ $(i \in I(\bar{x}))$ such that condition (ii) in (CQ1) and the following condition hold:

(i′) $\langle \eta_i, d_0\rangle \leq -b_i$ $(\forall \eta_i \in \operatorname{conv}\partial^* g_i(\bar{x}),\ \forall i \in I(\bar{x}))$.

We also introduce the following constraint qualification (CQ4), which is weaker than (CQ2): for every $\mu_i \geq 0$ $(\forall i \in I(\bar{x}))$, not all zero, and every $\gamma_j \in \mathbb{R}$ $(\forall j \in L)$,

$$0 \notin \operatorname{cl}\Big(\sum_{i\in I(\bar{x})}\mu_i\operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L}\gamma_j\nabla h_j(\bar{x}) + N(C;\bar{x})\Big).$$

By an argument analogous to that used in the proof of Proposition 4.1, we deduce that (CQ3) implies (CQ4). Moreover, if in Theorem 4.1 (CQ1) and (CQ2) (for some $s \in J$) are replaced by (CQ3) and (CQ4), respectively, we also obtain the necessary condition (29) with the condition $\sum_{k\in J}\bar\lambda_k > 0$, which is weaker than the condition $\bar\lambda_s > 0$.

The following assumption is needed to derive necessary conditions for efficiency in which the Lagrange multipliers corresponding to all the components of the objective are positive.

Assumption 4.1 For every $k \in J$ and $i \in I(\bar{x})$, the functions $f_k$ and $g_i$ admit upper semi-regular convexificators $\partial^* f_k(\bar{x})$ and $\partial^* g_i(\bar{x})$ at $\bar{x}$, respectively. All the functions $g_i$ $(i \notin I(\bar{x}))$ are continuous at $\bar{x}$.

Hereafter, we derive a Karush–Kuhn–Tucker necessary condition for a local weak Pareto minimum with positive Lagrange multipliers corresponding to all the components of the objective.

Theorem 4.2 Let $\bar{x}$ be a local weak Pareto minimum of (P). Assume that all the hypotheses of Theorem 3.2 are fulfilled, in which Assumption 3.1 is replaced by Assumption 4.1. Suppose also that the constraint qualification (CQ1) or (CQ2) (for all $s \in J$) holds. Then there exist $\bar\lambda_k > 0$ $(\forall k \in J)$, $\bar\mu_i \geq 0$ $(\forall i \in I(\bar{x}))$ and $\bar\gamma_j \in \mathbb{R}$ $(\forall j \in L)$ such that

$$0 \in \operatorname{cl}\Big(\sum_{k\in J}\bar\lambda_k\operatorname{conv}\partial^* f_k(\bar{x}) + \sum_{i\in I(\bar{x})}\bar\mu_i\operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L}\bar\gamma_j\nabla h_j(\bar{x}) + N(C;\bar{x})\Big).$$

Proof For each $s \in J$, we apply Theorem 4.1 to deduce that there exist $\lambda_s^{(s)} > 0$, $\lambda_k^{(s)} \geq 0$ $(\forall k \in J,\ k \neq s)$, $\mu_i^{(s)} \geq 0$ $(\forall i \in I(\bar{x}))$ and $\gamma_j^{(s)} \in \mathbb{R}$ $(\forall j \in L)$ such that

$$0 \in \operatorname{cl}\Big(\sum_{k\in J}\lambda_k^{(s)}\operatorname{conv}\partial^* f_k(\bar{x}) + \sum_{i\in I(\bar{x})}\mu_i^{(s)}\operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L}\gamma_j^{(s)}\nabla h_j(\bar{x}) + N(C;\bar{x})\Big). \tag{30}$$

It can be seen that $\operatorname{cl} A + \operatorname{cl} B \subseteq \operatorname{cl}(A+B)$. Taking $s = 1,\dots,r$ in (30) and adding up both sides of the resulting inclusions, we arrive at

$$0 \in \sum_{s\in J}\operatorname{cl}\Big(\sum_{k\in J}\lambda_k^{(s)}\operatorname{conv}\partial^* f_k(\bar{x}) + \sum_{i\in I(\bar{x})}\mu_i^{(s)}\operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L}\gamma_j^{(s)}\nabla h_j(\bar{x}) + N(C;\bar{x})\Big) \subseteq \operatorname{cl}\Big(\sum_{k\in J}\bar\lambda_k\operatorname{conv}\partial^* f_k(\bar{x}) + \sum_{i\in I(\bar{x})}\bar\mu_i\operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L}\bar\gamma_j\nabla h_j(\bar{x}) + N(C;\bar{x})\Big),$$

where $\bar\lambda_k = \lambda_k^{(k)} + \sum_{s\in J, s\neq k}\lambda_k^{(s)} > 0$ $(\forall k \in J)$, $\bar\mu_i = \sum_{s\in J}\mu_i^{(s)} \geq 0$ $(\forall i \in I(\bar{x}))$ and $\bar\gamma_j = \sum_{s\in J}\gamma_j^{(s)} \in \mathbb{R}$ $(\forall j \in L)$. This completes the proof. $\square$

5 Sufficient Conditions for Weak Pareto Minima

In what follows, we show that the Karush–Kuhn–Tucker necessary conditions for efficiency become sufficient under some hypotheses on generalized convexity.

Theorem 5.1 Let $\bar{x}$ be a feasible point of (P). Assume that $f_k$ admits a convexificator $\partial^* f_k(\bar{x})$ at $\bar{x}$ for all $k \in J$, $g_i$ admits an upper regular convexificator $\partial^* g_i(\bar{x})$ at $\bar{x}$ for all $i \in I(\bar{x})$, and $h$ is Fréchet differentiable at $\bar{x}$. Suppose, furthermore, that

(i) there exist $\bar\lambda_k \geq 0$ $(\forall k \in J)$ with $\bar\lambda = (\bar\lambda_k)_{k\in J} \neq 0$, $\bar\mu_i \geq 0$ $(\forall i \in I(\bar{x}))$ and $\bar\gamma_j \in \mathbb{R}$ $(\forall j \in L)$ such that

$$0 \in \operatorname{cl}\Big(\sum_{k\in J}\bar\lambda_k\operatorname{conv}\partial^* f_k(\bar{x}) + \sum_{i\in I(\bar{x})}\bar\mu_i\operatorname{conv}\partial^* g_i(\bar{x}) + \sum_{j\in L}\bar\gamma_j\nabla h_j(\bar{x}) + N(C;\bar{x})\Big); \tag{31}$$

(ii) $f$ is scalarly $\mathbb{R}^r_+$-asymptotic pseudoconvex at $\bar{x}$ with respect to $C$; $g_i$ is quasiconvex and $h_j$ is quasilinear at $\bar{x}$ with respect to $C$ $(\forall i \in I(\bar{x}),\ j \in L)$; $C$ is convex.

Then $\bar{x}$ is a weak Pareto minimum of (P).


Proof It follows from (31) that there exist ξ(n)k ∈ conv ∂∗fk(x) (∀k ∈ J ), η

(n)i ∈

conv ∂∗gi(x) (∀i ∈ I (x)) and ζ (n) ∈ N(C; x) such that

limn→∞

(∑k∈J

λkξ(n)k +

∑i∈I (x)

μi η(n)i +

∑j∈L

γj∇hj (x) + ζ (n)

)= 0. (32)

Since $g_i$ admits an upper regular convexificator $\partial^* g_i(\bar{x})$ at $\bar{x}$ with respect to $M$ for all $i \in I(\bar{x})$, it follows from Proposition 2.1 that, for any $x \in M$,

\[
\langle \eta_i, x - \bar{x} \rangle \leq 0 \quad \bigl(\forall \eta_i \in \partial^* g_i(\bar{x})\bigr),
\]

as $g_i(x) - g_i(\bar{x}) \leq 0$ $(\forall x \in M)$. This leads to the following:

\[
\langle \eta_i, x - \bar{x} \rangle \leq 0 \quad \bigl(\forall \eta_i \in \mathrm{conv}\,\partial^* g_i(\bar{x})\bigr).
\]

Hence,

\[
\bigl\langle \eta_i^{(n)}, x - \bar{x} \bigr\rangle \leq 0 \quad (\forall x \in M). \tag{33}
\]
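Indeed, every $\eta \in \mathrm{conv}\,\partial^* g_i(\bar{x})$ is a finite convex combination $\eta = \sum_m t_m \eta_m$ with $\eta_m \in \partial^* g_i(\bar{x})$, $t_m \geq 0$ and $\sum_m t_m = 1$, so

\[
\langle \eta, x - \bar{x} \rangle = \sum_m t_m \langle \eta_m, x - \bar{x} \rangle \leq 0,
\]

each term in the sum being nonpositive.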

Since $h_j$ $(j \in L)$ are quasilinear at $\bar{x}$ with respect to $C$, we have

\[
\bigl\langle \nabla h_j(\bar{x}), x - \bar{x} \bigr\rangle = 0 \quad (\forall x \in M). \tag{34}
\]

In view of the convexity of $C$, we have $x - \bar{x} \in T(C;\bar{x})$. Hence,

\[
\bigl\langle \zeta^{(n)}, x - \bar{x} \bigr\rangle \leq 0 \quad (\forall x \in M). \tag{35}
\]

Combining (32)–(35) yields

\[
\lim_{n\to\infty} \Bigl\langle \sum_{k\in J} \lambda_k \xi_k^{(n)}, x - \bar{x} \Bigr\rangle \geq 0. \tag{36}
\]
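Indeed, pairing the expression in (32) with $x - \bar{x}$ and subtracting the contributions controlled by (33)–(35) gives

\[
\Bigl\langle \sum_{k\in J} \lambda_k \xi_k^{(n)}, x - \bar{x} \Bigr\rangle
= \Bigl\langle \sum_{k\in J} \lambda_k \xi_k^{(n)} + \sum_{i\in I(\bar{x})} \mu_i \eta_i^{(n)} + \sum_{j\in L} \gamma_j \nabla h_j(\bar{x}) + \zeta^{(n)},\, x - \bar{x} \Bigr\rangle
- \sum_{i\in I(\bar{x})} \mu_i \bigl\langle \eta_i^{(n)}, x - \bar{x} \bigr\rangle - \sum_{j\in L} \gamma_j \bigl\langle \nabla h_j(\bar{x}), x - \bar{x} \bigr\rangle - \bigl\langle \zeta^{(n)}, x - \bar{x} \bigr\rangle;
\]

the first inner product on the right tends to zero by (32), the $\eta_i^{(n)}$ and $\zeta^{(n)}$ terms are subtracted nonpositive quantities by (33) and (35) (recall $\mu_i \geq 0$), and the $h_j$ terms vanish by (34), so the left-hand side is asymptotically nonnegative.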

Since $f_k$ admits a convexificator $\partial^* f_k(\bar{x})$ at $\bar{x}$ and $\lambda_k \geq 0$ for all $k \in J$, by Rules 4.1 and 4.2 in [4], it follows that $\lambda^T f$ admits $\sum_{k\in J} \lambda_k \partial^* f_k(\bar{x})$ as a convexificator at $\bar{x}$. Making use of the scalarly $\mathbb{R}^n_+$-asymptotic pseudoconvexity of $f$, we deduce that $\lambda^T f$ is asymptotic pseudoconvex at $\bar{x}$ with respect to $M$. Hence, in view of (36), we get

\[
\bigl\langle \lambda, f(x) \bigr\rangle \geq \bigl\langle \lambda, f(\bar{x}) \bigr\rangle \quad (\forall x \in M).
\]

It is easy to see that $\bar{x}$ is a weak Pareto minimum of (P), as was to be shown. $\square$
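Remark The last step can be made explicit: if $\bar{x}$ were not a weak Pareto minimum, there would exist $x_0 \in M$ with $f_k(x_0) < f_k(\bar{x})$ for all $k \in J$; since $\lambda_k \geq 0$ for all $k \in J$ and $\lambda \neq 0$, this would give

\[
\langle \lambda, f(x_0) \rangle = \sum_{k\in J} \lambda_k f_k(x_0) < \sum_{k\in J} \lambda_k f_k(\bar{x}) = \langle \lambda, f(\bar{x}) \rangle,
\]

contradicting the inequality just obtained.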

6 Concluding Remarks

Under the regularity condition of Jiménez and Novo [16], which is more general than the linear independence condition, Fritz John necessary conditions for weak Pareto minima in terms of convexificators are established based on the extended

Ljusternik Theorem by Jiménez and Novo. Karush–Kuhn–Tucker necessary conditions for efficiency are derived under one of the constraint qualifications (CQ1)–(CQ4), where (CQ3) and (CQ4) are weaker than (CQ1) and (CQ2), respectively. Under some hypotheses on generalized convexity, sufficient conditions for weak Pareto minima are obtained.

Acknowledgements The author thanks the referees for their valuable comments and suggestions. This research was supported by the National Foundation for Science and Technology Development of Vietnam.

References

1. Demyanov, V.F.: Convexification and concavification of a positively homogeneous function by the same family of linear functions. Università di Pisa, Report 3, 208, 802 (1994)
2. Demyanov, V.F., Jeyakumar, V.: Hunting for a smaller convex subdifferential. J. Glob. Optim. 10, 305–326 (1997)
3. Demyanov, V.F., Rubinov, A.M.: Constructive Nonsmooth Analysis. Verlag Peter Lang, Frankfurt (1995)
4. Jeyakumar, V., Luc, D.T.: Nonsmooth calculus, minimality, and monotonicity of convexificators. J. Optim. Theory Appl. 101, 599–621 (1999)
5. Jeyakumar, V., Luc, D.T.: Approximate Jacobian matrices for nonsmooth continuous maps and C1-optimization. SIAM J. Control Optim. 36, 1815–1832 (1998)
6. Jeyakumar, V., Luc, D.T., Schaible, S.: Characterizations of generalized monotone nonsmooth continuous maps using approximate Jacobians. J. Convex Anal. 5, 119–132 (1998)
7. Luc, D.T.: A multiplier rule for multiobjective programming problems with continuous data. SIAM J. Optim. 13, 168–178 (2002)
8. Dutta, J., Chandra, S.: Convexifactors, generalized convexity and vector optimization. Optimization 53, 77–94 (2004)
9. Dutta, J., Chandra, S.: Convexifactors, generalized convexity and optimality conditions. J. Optim. Theory Appl. 113, 41–65 (2002)
10. Luu, D.V.: Convexificators and necessary conditions for efficiency. Optimization (2013, accepted for publication). doi:10.1080/02331934.2011.648636
11. Kohli, B.: Optimality conditions for optimistic bilevel programming problem using convexifactors. J. Optim. Theory Appl. 152, 632–651 (2012)
12. Suneja, S.K., Kohli, B.: Optimality and duality results for bilevel programming problem using convexifactors. J. Optim. Theory Appl. 150, 1–19 (2011)
13. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley Interscience, New York (1983)
14. Michel, P., Penot, J.-P.: Calcul sous-différentiel pour des fonctions lipschitziennes et non lipschitziennes. C. R. Math. Acad. Sci. 12, 269–272 (1984)
15. Mordukhovich, B.S., Shao, Y.: On nonconvex subdifferential calculus in Banach spaces. J. Convex Anal. 2, 211–228 (1995)
16. Jiménez, B., Novo, V.: A finite dimensional extension of Lyusternik theorem with applications to multiobjective optimization. J. Math. Anal. Appl. 270, 340–356 (2002)
17. Mangasarian, O.L.: Nonlinear Programming. McGraw-Hill, New York (1969)
18. Yang, X.Q.: Continuous generalized convex functions and their characterizations. Optimization 54, 495–506 (2005)
19. Aubin, J.P., Frankowska, H.: Set-valued Analysis. Birkhäuser, Boston (1990)
20. Aubin, J.P., Cellina, A.: Differential Inclusions. Springer, Berlin (1984)
21. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
22. Luu, D.V.: Necessary conditions for efficiency in terms of the Michel–Penot subdifferentials. Optimization 61, 1099–1117 (2012)