
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, VOL. 24, 1865-1877 (1987)

INTERACTIVE SOLUTION TO MULTIOBJECTIVE OPTIMIZATION PROBLEMS

A. DIAZ

SUMMARY

Sensitivity analysis is presented as a natural addition to interactive multiobjective optimization methods based on compromise programming. It is shown that useful information regarding trade-offs in the objectives can be generated effectively by means of an analysis of the sensitivity of solutions to variations in preference structures. An implementation based on sequential quadratic programming is provided and examples are given for illustration.

INTRODUCTION

Most applications of multiobjective optimization in structural and mechanical design have been based on the so-called prior articulation methods,1 that is, methods that require an ordering of the objectives before the optimization takes place. There are instances, however, when the information available to the designer before taking any optimization step is not sufficient to support the statement of preferences regarding the objectives. When this is the case, methods that rely on preference information generated interactively, during the optimization, may be more suitable. These methods require the solution of several single objective optimization problems, which can become impractical in engineering applications that require extensive calculations as part of the analysis, such as those involving finite element approximations. This paper presents a procedure to reduce the number of solutions required by interactive optimization methods based on compromise programming.

Multiobjective optimization procedures that interact with the decision maker have received much attention in the management sciences and in policy decision making. Comparatively, developments and applications of multiobjective optimization in general have been much fewer in engineering. The contributions of Nakayama and Furukawa,2 who introduced the so-called satisficing trade-off method, and the works of Bendsoe et al.,3 Eschenauer,4 Koski,5 Koski and Silvennoinen6 and Osyczka7 are relevant examples of work in structural design. Surveys by Stadler8 and Osyczka and Koski9 contain relatively few additional examples of applications of multiobjective optimization in engineering.

In this paper the incorporation of sensitivity analysis in multiobjective optimization is proposed as a way to improve the efficiency of schemes that require interaction with the designer. Solutions to the multiobjective problem are characterized using the main ideas of compromise programming, introduced by Zeleny in Reference 10 and studied among others by Gearhart,11 Sawaragi et al.,12 Freimer and Yu,13 Bowman14 and Zeleny.15 The objective is to provide information that can reduce the number of interactions needed before a satisfactory solution is found. The procedure proposed should be easily implemented using existing single objective optimization algorithms and it should require little additional computational effort. The main features of the method are outlined and illustrated in the following sections.

THE MULTIOBJECTIVE PROBLEM

The efficient set

In the standard multiobjective optimization problem (MOP) the objective is expressed as a vector-valued function f(x) = (f_1(x), f_2(x), ..., f_m(x))^T, where m is an integer greater than 1. The vector x represents the design and is restricted to lie within a prescribed set X ⊂ R^n, the feasible set. Borrowing the notation from single objective optimization, the MOP is expressed formally as

    minimize    f(x)                                                          (1)
    subject to  x ∈ X

In multiple objective optimization, however, the meaning of 'minimize' is not immediately apparent since, in general, a solution x that minimizes all individual objectives f_i(x) simultaneously does not exist. In order to characterize the solutions of problem (1) it is necessary to introduce the concept of efficiency.

Definition. The set of efficient solutions S associated with problem (1) is defined by

    S = { x ∈ X : there is no y ∈ X such that f_i(y) ≤ f_i(x), i = 1, 2, ..., m, and f(x) ≠ f(y) }        (2)

The image of S under f is called the efficient set E, that is,

    E = { z ∈ Z : z = f(x), x ∈ S }                                           (3)

where Z = f(X) ⊂ R^m is the image of X under f.

According to the definition used here, an efficient solution is one in which a reduction in an individual objective f_i can be achieved only at the expense of increasing a competing objective f_j. This is the so-called Pareto efficiency. The set S is often called the Pareto set or the set of non-inferior solutions. For simplicity this is the only form of efficiency used here, although other definitions are possible.

The definition of efficiency introduces an ordering that gives meaning to the minimization statement in the MOP (1): to 'solve' (1) simply means to devise a procedure that computes elements of the set of efficient solutions S. One such procedure is based on the concept of compromise solutions.
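As a small illustration of the definition above (not part of the original paper), the sketch below filters a finite sample of outcome vectors and keeps only those that are non-dominated in the sense of (2); the function name and the sample data are assumptions made for the example.

```python
import numpy as np

def pareto_filter(F):
    """Return indices of the non-dominated rows of F (all objectives minimized).

    A row y dominates a row x if f_i(y) <= f_i(x) for every i and f(y) != f(x),
    exactly the situation excluded in definition (2).
    """
    keep = []
    for i in range(F.shape[0]):
        dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Three candidate outcomes; the third is dominated by the first.
F = np.array([[1.0, 4.0], [3.0, 2.0], [2.0, 5.0]])
print(pareto_filter(F))   # -> [0, 1]
```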

The compromise set

The compromise set in multiobjective optimization is defined with the help of a so-called ideal solution f*. This is a vector in R^m, not necessarily an achievable outcome, that is used as a reference against which a feasible point f(x) is compared. Here it will be assumed that each f_i and f_i* satisfy the condition

    min_{x ∈ X} f_i(x) > f_i* > -∞,    i = 1, 2, ..., m                       (4)

From (4) it is clear that feasible outcomes that are in some sense closest to f* are likely candidates to be efficient solutions. A point f^0 ∈ Z whose distance to f* is in some sense minimum is called a compromise solution. Specifically, f^0 = f(x^0) is the outcome associated with the solution of the following problem.

Problem P_α(w)

    minimize    d_{w,α}(f(x) - f*)                                            (5a)
    subject to  x ∈ X                                                         (5b)

where d_{w,α} is a mapping from R^m to R^1 defined, for a weight vector w ∈ W and α > 0, by

    d_{w,α}(f - f*) = ( Σ_{i=1}^{m} w_i |f_i - f_i*|^α )^{1/α}                (5c)

and

    d_{w,T}(f - f*) = max_{i=1,...,m} w_i |f_i - f_i*|                        (5d)

The measures in (5c) and (5d) are the well-known weighted l_α and weighted Tchebyshev norms in R^m. The following theorem is useful in establishing the connection between elements in the efficient set E and compromise solutions, i.e. solutions to P_α(w). For details, see Reference 11.

Theorem. If X is closed and f* satisfies (4), P_α(w) has a solution x^0. Furthermore, under the same conditions, for any y ∈ E there exist an element w ∈ W and a scalar α* such that f(x^0) is arbitrarily close to y whenever α > α*.

These results are the basis of an operationally useful procedure to solve the MOP. If P_α(w) is solved several times, allowing w to vary over W, it is possible to construct a set of solutions that is arbitrarily close to E. Such a set is called the set of compromise solutions C.

The approximation of E by means of the compromise set replaces the problem of finding E by the problem of finding C. In most interactive problems only a few elements of C need to be computed until a solution that satisfies the designer's personal set of values is found. This search should be carefully planned, however, since each compromise solution can be obtained only after solving a potentially costly single objective optimization problem. In the following a procedure is presented whereby the efficiency of the search in C can be improved.
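A minimal sketch of this construction (not from the paper): the measures (5c) and (5d) are coded directly and P_α(w) is solved repeatedly over a grid of weights for a small, hypothetical two-objective problem. The test functions, the ideal point and the weight grid are illustrative assumptions; any single objective optimizer could play the role of the P_α(w) solver.

```python
import numpy as np
from scipy.optimize import minimize

def d_w_alpha(f, f_star, w, alpha):
    """Weighted l_alpha measure (5c)."""
    return np.sum(w * np.abs(f - f_star) ** alpha) ** (1.0 / alpha)

def d_w_T(f, f_star, w):
    """Weighted Tchebyshev measure (5d)."""
    return np.max(w * np.abs(f - f_star))

# Hypothetical two-objective problem on the box 0 <= x_i <= 1 (illustration only).
def f(x):
    return np.array([x[0] ** 2 + x[1] ** 2,
                     (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2])

f_star = np.array([-0.01, -0.01])      # ideal solution, strictly below each min f_i as in (4)
alpha = 2.0
bounds = [(0.0, 1.0), (0.0, 1.0)]

compromise_set = []                    # numerical approximation of C
for w1 in np.linspace(0.1, 0.9, 9):    # let w sweep over W
    w = np.array([w1, 1.0 - w1])
    res = minimize(lambda x: d_w_alpha(f(x), f_star, w, alpha),
                   x0=np.array([0.5, 0.5]), bounds=bounds)
    compromise_set.append(f(res.x))
```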

SENSITIVITY IN THE COMPROMISE SET

Suppose that at some point during the optimization process a compromise solution f^0 = f(x^0) has been obtained after solving P_α(w^0). If f^0 is satisfactory to the designer no further search in C is necessary and the process ends there. If, on the other hand, (x^0, f^0) is not a satisfactory answer a new element of C must be produced, that is, P_α(w) must be solved again for a w ≠ w^0. Hence P_α(w) can be interpreted as a mapping P_α: W → C for some fixed value of α. Often, however, the designer has a better understanding of C than of W and finds it easier to state what kind of solution is satisfactory than to identify parameters w that would produce satisfactory solutions. This suggests the need to investigate procedures that would help the designer in the selection of elements in W that produce a desired outcome in C, that is, procedures that resemble an inverse mapping of P_α.

One way to accomplish this is to study the sensitivity of elements in C to variations in w. Let the feasible set X be of the form

    X = { x ∈ R^n : g_j(x) ≤ 0, j = 1, 2, ..., p; h_k(x) = 0, k = 1, 2, ..., q }        (6)

Here it will be assumed that all functions f_i, g_j and h_k are twice continuously differentiable in a neighbourhood of x^0 in R^n large enough to contain all solutions of P_α(w) for w near w^0. It is also assumed that x^0 satisfies the following second order sufficiency conditions.

(i) There exist Lagrange multipliers u^0 ∈ R^p and v^0 ∈ R^q that satisfy

    ∇_x L(x^0, u^0, v^0) = 0
    u_j^0 g_j(x^0) = 0,    j = 1, 2, ..., p                                   (7)
    h_k(x^0) = 0,          k = 1, 2, ..., q

with u^0 ≥ 0. Here L is the Lagrangian function

    L = d_{w,α} + u^T g + v^T h

(ii) y^T ∇²_{xx} L(x^0, u^0, v^0) y > 0 for all y ≠ 0 such that

    ∇g_j^T y ≥ 0    for all j such that g_j(x^0) = 0
    ∇g_j^T y = 0    for all j such that u_j^0 > 0
    ∇h_k^T y = 0    for all k

Equations (7) describe the conditions under which x^0 is a solution to P_α(w^0). These equations can be conveniently expressed in the form

    F_w(z) = 0,    F_w ∈ R^{n+p+q}                                            (8)

where z = (x, u, v) and F_w is evaluated at z^0 = (x^0, u^0, v^0). The subscript w is used to indicate that the system arises from one specific member of W, in this case w = w^0. For values of w near w^0 and provided that the conditions of the implicit function theorem apply, there exists a function z(w), differentiable with respect to w, such that

    F_w(z(w)) = 0                                                             (9)

It can be shown (Fiacco16) that equation (9) holds near w^0 provided that

(iii) the gradients of the active constraints at x^0 are linearly independent, and
(iv) strict complementary slackness holds at x^0, that is, u_j^0 > 0 if g_j(x^0) = 0.

This being the case, F_w can be differentiated with respect to w to obtain

    J ∇_w z + ∇_w F_w = 0                                                     (10)

where J is the Jacobian of F_w with respect to z and all quantities are evaluated at z^0. Equation (10) is the basis for the analysis of the sensitivity of compromise solutions with respect to changes in w. For w near w^0, equation (10) provides the following linear estimate of the solution to P_α(w):

    z(w) ≈ ẑ = z^0 - J^{-1} ∇_w F_w (w - w^0)                                 (11)

Given the present solution x^0, the designer can solve the sensitivity equation (11) for different values of w near w^0 and compute an estimate of f(x(w)). The value of w that produces a satisfactory estimate is then selected as the next element in W to be investigated.
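A sketch of how estimate (11) could be evaluated once the solution z^0 of P_α(w^0), the Jacobian J and the derivative of F_w with respect to w are available; the helper name and its calling convention are assumptions, not part of the paper.

```python
import numpy as np

def estimate_z(z0, J, dF_dw, w, w0):
    """Linear estimate (11) of the perturbed solution z(w) = (x, u, v) near w0.

    z0    : KKT point (x0, u0, v0) of P_alpha(w0), stacked as one vector
    J     : Jacobian of F_w with respect to z, evaluated at z0
    dF_dw : derivative of F_w with respect to w, evaluated at z0
    """
    # Solve J * dz = dF_dw * (w - w0) rather than forming J^{-1} explicitly.
    dz = np.linalg.solve(J, dF_dw @ (w - w0))
    return z0 - dz
```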

IMPLEMENTATION OF THE SENSITIVITY EQUATIONS

The implementation of the sensitivity analysis within the multiobjective procedure is justifiable only if the effort required is significantly lower than the effort needed to solve P_α(w). A method of solution of the sensitivity equations is outlined here.

An efficient implementation of the sensitivity equation (11) makes maximum use of the information generated during the solution of the single objective problem P_α(w^0). Consider, for instance, a Newton-based method of solution of the necessary conditions

    F_w(z) = 0

When this method is applied near the solution z^0, an estimate z_{r+1} is produced from z_r through the iterative scheme

    z_{r+1} = z_r - J^{-1} F_w(z_r)                                           (12)

Exact penalty functions or generalized Lagrangian algorithms, for instance, can all be related to Newton's scheme (12). Here the implementation of the sensitivity analysis in multiobjective optimization is illustrated using a sequential quadratic programming (SQP) algorithm based on the Lagrangian function.
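For reference, a bare-bones sketch of scheme (12), with the residual F and its Jacobian J supplied as callables; there is no globalization or step-size control, and the names are illustrative.

```python
import numpy as np

def newton_kkt(F, J, z0, tol=1e-10, max_iter=50):
    """Newton iteration (12) on the necessary conditions F(z) = 0."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(z), F(z))
        z = z - step                       # z_{r+1} = z_r - J^{-1} F(z_r)
        if np.linalg.norm(step) < tol:
            break
    return z
```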

The sequential quadratic programming (SQP) algorithm

In a SQP algorithm x_{r+1} is computed from x_{r+1} = x_r + s δ, where δ ∈ R^n is the solution to the quadratic problem

    minimize    (1/2) δ^T H δ + δ^T ∇d
    subject to  g(x_r) + ∇g^T δ ≤ 0                                           (13)
                h(x_r) + ∇h^T δ = 0

Here H represents the Hessian of the Lagrangian and d stands for the measure d_{w,α} in (5c). (The weighted Tchebyshev norm will be discussed later.) ∇g and ∇h are matrices whose columns are the gradients ∇g_j and ∇h_k, respectively, and all quantities are evaluated at z_r = (x_r, u_r, v_r). Under the second order sufficiency assumptions, if δ = 0 is the solution to the quadratic problem (13), z_r satisfies the Kuhn-Tucker conditions of the original problem P_α(w^0). Different implementations of this procedure differ mainly in the selection of the step size s and in the approximation of H using quasi-Newton updates Ĥ that preserve positive definiteness.
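One possible way to solve subproblem (13), sketched here by delegating the quadratic program to SciPy's SLSQP routine; the argument names are assumptions and no quasi-Newton update or line search is shown.

```python
import numpy as np
from scipy.optimize import minimize

def sqp_subproblem(H, grad_d, g0, G, h0, A):
    """Solve (13) for the search direction delta at the current iterate x_r.

    H      : (approximate) Hessian of the Lagrangian, n x n
    grad_d : gradient of the scalar measure d at x_r
    g0, G  : values g(x_r) and matrix whose columns are the gradients of g_j
    h0, A  : values h(x_r) and matrix whose columns are the gradients of h_k
    """
    n = H.shape[0]
    cons = [{'type': 'ineq', 'fun': lambda d: -(g0 + G.T @ d)},   # g + grad g^T d <= 0
            {'type': 'eq',   'fun': lambda d: h0 + A.T @ d}]      # h + grad h^T d  = 0
    res = minimize(lambda d: 0.5 * d @ H @ d + grad_d @ d,
                   x0=np.zeros(n), method='SLSQP', constraints=cons)
    return res.x
```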

Consider now the sensitivity equations (11). Since x^0 is a solution to P_α(w^0), the quadratic program has a solution δ = 0 when x^0 replaces x_r in (13). The sensitivity equation can be interpreted as a perturbation from this problem. Let this perturbed quadratic problem be

    minimize    (1/2) δ^T H δ + δ^T (∇d + b_0)
    subject to  g(x^0) + ∇g^T δ + b_1 ≤ 0                                     (14)
                h(x^0) + ∇h^T δ + b_2 = 0

where b_0 ∈ R^n, b_1 ∈ R^p and b_2 ∈ R^q are perturbations that depend on the difference (w - w^0). In this problem H, ∇d, ∇g and ∇h are all evaluated at (x^0, u^0, v^0) and hence they were computed during the last stages of the solution of P_α(w^0). The unknown δ is a perturbation from x^0 caused by the change in w, say δ = x - x^0, for some x ∈ X.

To relate the perturbed quadratic problem to the sensitivity equations one can write the necessary conditions associated with (14):

    H δ + ∇d + b_0 + ∇g u + ∇h v = 0
    [D_u] ( g(x^0) + ∇g^T δ + b_1 ) = 0                                       (15)
    h(x^0) + ∇h^T δ + b_2 = 0

where u ∈ R^p and v ∈ R^q are Lagrange multipliers. The symbols [D_(.)] represent diagonal matrices whose entries are [D_(.)]_{ii} = (.)_i. After some algebraic manipulations these equations can be rewritten as

    H δ + ∇g (u - u^0) + ∇h (v - v^0) = -b_0
    [D_u] ∇g^T δ + [D_g] (u - u^0) = -[D_u] b_1                               (16)
    ∇h^T δ = -b_2

using ∇d + ∇g u^0 + ∇h v^0 = 0, since (x^0, u^0, v^0) is a solution to P_α(w^0). Assume now that the set of active inequalities in the perturbed problem is the same as in the original problem P_α(w^0). This assumption is valid for w close enough to w^0 and it follows from the continuity of the implicit functions f(x(w)), g(x(w)), h(x(w)), u(w) and v(w) and the strict complementary slackness assumption. If this is the case, equation (16) is simply

    J (z - z^0) = -(b_0, b_1, b_2)^T                                          (17)

which is the sensitivity equation with z = (x, u, v) and perturbations

    b_0 = ∇_w (∇_x L) (w - w^0)
    b_1 = ∇_w g(x^0) (w - w^0)                                                (18)
    b_2 = ∇_w h(x^0) (w - w^0)

In summary, the estimate of the compromise solution associated with a parameter w near w^0 is available from a perturbed quadratic programming problem constructed at the (known) point (x^0, u^0, v^0).
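Under the fixed active set assumption, system (16)-(17) reduces to a single symmetric linear solve assembled from quantities already available at (x^0, u^0, v^0). The sketch below shows one way to do this; the function name and calling convention are assumptions.

```python
import numpy as np

def perturbed_step(H, G_act, A_eq, b0, b1_act, b2):
    """Solve the linearized system (16)-(17) for (delta, du_active, dv).

    H       : (approximate) Hessian of the Lagrangian at (x0, u0, v0), n x n
    G_act   : n x p_a matrix of gradients of the active inequality constraints
    A_eq    : n x q matrix of gradients of the equality constraints
    b0, b1_act, b2 : perturbations (18) induced by the change w - w0
    With strict complementary slackness the active inequalities behave as equalities.
    """
    n, pa = G_act.shape
    q = A_eq.shape[1]
    K = np.block([[H,       G_act,              A_eq],
                  [G_act.T, np.zeros((pa, pa)), np.zeros((pa, q))],
                  [A_eq.T,  np.zeros((q, pa)),  np.zeros((q, q))]])
    rhs = -np.concatenate([b0, b1_act, b2])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:n + pa], sol[n + pa:]
```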

The Hessian H

So far no assumption has been made about the positive definiteness of the Hessian of the Lagrangian. The second order sufficiency conditions guarantee the existence of J^{-1} but not the positive definiteness of H or the existence of H^{-1}. It is well known, however, that solution algorithms for quadratic problems such as (13) and (14) present difficulties if H is not positive definite. Partly for this reason all SQP algorithms use a positive definite quasi-Newton update Ĥ instead of the true Hessian.

After P_α(w^0) has been solved using a standard SQP algorithm, one approach to obtain z from the sensitivity equations

    J (z - z^0) + ∇_w F_w (w - w^0) = 0

is to compute the true Hessian H(z^0) and use it in the perturbed quadratic program, if it turns out to be positive definite. Otherwise, an inverse of J can still be obtained using a 'reduced' Hessian, computed much in the same way a generalized reduced gradient is computed. Formulae to carry out the computation of J^{-1} are presented in Reference 17.

A different approach would be to use the update Ĥ, rather than H, in the perturbed program (14). Although only 'approximate' second order information would be used in this case, the method has the advantage of not requiring information other than what is already available from the solution of P_α(w^0) via (13). This approach will be used here.

The association of the sensitivity equations with the method of solution of P_α(w^0), in this case a SQP algorithm, brings forth a natural way to deal with interactive multiobjective problems. The solution process of P_α(w^0) generates most of the information needed to explore the region of C in the neighbourhood of the compromise solution x^0. Each point in this immediate region is associated with perturbations (b_0, b_1, b_2) that are easy to calculate and require no additional evaluations of the objective functions f, the constraints g and h or their derivatives. These features make the solution of multiobjective optimization problems that require extensive computations, such as a finite element analysis, more attractive. The process is illustrated in more detail for the Tchebyshev measure d_{w,T} in the following section.

MULTIOBJECTIVE OPTIMIZATION IN THE TCHEBYSHEV NORM

When the distance to the ideal solution is measured in the weighted Tchebyshev norm it is more convenient to formulate the problem P_α(w) in the following form.

Problem P_T(w)

    minimize    (1/2) y^2
    subject to  w_i (f_i - f_i*) ≤ y,    i = 1, 2, ..., m                     (19)
                x ∈ X' = { x ∈ R^n : g(x) ≤ 0; h(x) = 0 }

The equivalence between this problem and (5) with d_{w,T} can be established without difficulties. The advantage of the Tchebyshev norm in this context is that efficient solutions are produced even when the objective set Z = f(X) is non-convex. This feature is obtained with some disadvantages: solutions to P_T(w) may be only weakly efficient, i.e. they may satisfy (2) with '<' replacing '≤'. The problem P_T(w), however, remains one of the more attractive methods to generate solutions to the MOP.

In one implementation of P_T(w), the parameter w_i is replaced using the ratio

    w_i = 1 / (f̄_i - f_i*)                                                    (20)

where f̄ ∈ R^m has the property

    f̄_i > f_i*,    i = 1, 2, ..., m                                           (21)

The vector f̄ is sometimes called the 'aspiration level' of f. Various implementations of P_T(w) have appeared in the literature, under names such as the satisficing trade-off method,2 the STEP method18 and others.

Using f̄ instead of w as the parameter, problem P_T(w) is replaced by

Problem P(f̄)

    minimize    (1/2) y^2
    subject to  φ_i ≡ f_i(x) - f_i* - y (f̄_i - f_i*) ≤ 0,    i = 1, 2, ..., m        (22)
                x ∈ X'

For this problem the Lagrangian function is

    L = (1/2) y^2 + γ^T φ + u^T g + v^T h

where γ_i is the Lagrange multiplier associated with the constraint φ_i ≤ 0 on the objective f_i. If (x^0, y^0) is the solution to (22) for a given aspiration level f̄^0, the perturbed quadratic program for f̄ near f̄^0 is

    minimize    (1/2) d^T H d + (y^0 - γ^{0T} Δf̄) d_{n+1}
    subject to  φ(x^0, y^0) + ∇φ^T d - y^0 Δf̄ ≤ 0
                g(x^0) + ∇g^T d ≤ 0                                           (23)
                h(x^0) + ∇h^T d = 0

Here d = (x - x^0, y - y^0) is an (n+1)-vector, d_{n+1} = y - y^0 is its last component, and all derivatives are taken with respect to (x, y). The vector Δf̄ = (f̄ - f̄^0) represents a change in aspiration level from the original point f̄^0. The quadratic sub-problem used to compute x^0 is similar to (23), with Δf̄ = 0 and interpreting x^0 as the present iterate.

Example

To illustrate the method consider the two-bar truss in Figure 1. This very simple structure is useful to illustrate the essential features of the MOP problem in the weighted Tchebyshev norm, including sensitivity analysis.

Suppose that the maximum stress in the members and the weight of the truss are to be minimized by selecting optimum values of the cross sectional areas of the bars and of the height of the truss, c. No a priori, explicit ranking in importance of the objectives is assumed.

[Figure 2. Compromise set for the two-bar truss (objective f_1: stress).]

For a linearly elastic truss the member stresses are simple functions of the height c and of the cross sectional areas A_1 and A_2 of the members. Taking b = 1 and P = 1 and assuming A_1 = 2A_2 = 2A, the objectives are

    stress:    f_1 = (4 + c^2)^{1/2} / (3 A c σ̄)
    weight:    f_2 = { (4 + c^2)^{1/2} + 2 (1 + c^2)^{1/2} } A / ω̄

where σ̄ and ω̄ are normalization factors. The compromise set for this problem is obtained from the solution to

    minimize     (1/2) y^2
    (A, c) ∈ X

    subject to   { (4 + c^2)^{1/2} + 2 (1 + c^2)^{1/2} } A / ω̄ - y f̄_2 + (y - 1) f_2* ≤ 0        (24a)
                 (4 + c^2)^{1/2} / (3 A c σ̄) - y f̄_1 + (y - 1) f_1* ≤ 0                          (24b)

where

    X = { (A, c) : A_L ≤ A ≤ A_U, c_L ≤ c ≤ c_U }

Letting σ̄ = 0.3, ω̄ = 0.03 and (A_L, A_U, c_L, c_U) = (0.1, 10, 0.1, 10), f ≥ (1, 1)^T for any solution in X and f* = (1, 1)^T satisfies (4).

The aspiration levels (f̄_1, f̄_2) represent outcomes that the designer considers potentially satisfactory. From the constraints (24) it is apparent that, for those constraints φ_i that are active, the solution f_i^0 is a linear combination of f̄_i and f_i* with parameter y^0. If f̄ is feasible, i.e. f̄ ∈ f(X), then y^0 ≤ 1 and f^0 ≤ f̄. If f̄ is not feasible, y^0 > 1 and f^0 > f̄.

Figure 2 shows the compromise set obtained after solving P(f̄) for a range of values of f̄. Consider, for instance, the solution with f̄^0 = (38, 18)^T. The compromise solution was obtained from the initial design (A, c) = (1, 1) using the sequential quadratic programming method of Powell.19 Convergence was achieved after 6 BFGS updates. The compromise set C is then approximated using the perturbed quadratic program (23). The results are shown in Figure 3. All points on the solid line were obtained using only information available at x^0, without evaluations of f. This knowledge of the compromise set near f̄^0 can be very valuable in deciding what aspiration level will be more likely to produce a satisfactory solution in the next step.
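A sketch of this computation for the two-bar truss, using the objective expressions and bounds as reconstructed above and SciPy's SLSQP in place of Powell's algorithm; the setup is an illustration under those assumptions rather than a reproduction of the original implementation.

```python
import numpy as np
from scipy.optimize import minimize

sigma_bar, omega_bar = 0.3, 0.03        # normalization factors quoted in the text
f_star = np.array([1.0, 1.0])           # ideal solution

def objectives(A, c):
    f1 = np.sqrt(4.0 + c ** 2) / (3.0 * A * c * sigma_bar)                        # stress
    f2 = (np.sqrt(4.0 + c ** 2) + 2.0 * np.sqrt(1.0 + c ** 2)) * A / omega_bar    # weight
    return np.array([f1, f2])

def solve_P(f_bar, v0=(1.0, 1.0, 1.0)):
    """Solve (24): minimize (1/2) y^2 subject to f_i - f_i* <= y (f_bar_i - f_i*)."""
    def obj(v):                          # v = (A, c, y)
        return 0.5 * v[2] ** 2
    def cons(v):
        A, c, y = v
        return y * (f_bar - f_star) - (objectives(A, c) - f_star)   # must be >= 0
    res = minimize(obj, x0=np.array(v0), method='SLSQP',
                   bounds=[(0.1, 10.0), (0.1, 10.0), (None, None)],
                   constraints=[{'type': 'ineq', 'fun': cons}])
    A, c, y = res.x
    return objectives(A, c), y

f0, y0 = solve_P(np.array([38.0, 18.0]))   # aspiration level of Figure 2
print(f0, y0)
```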

Finally, suppose that the upper bound on A is decreased so that, when f̄ is chosen sufficiently far from f̄^0, the bound on A will become active. Such changes in active set can be predicted accurately by the solution of the perturbed program. Within the linear approximation, the perturbed program will provide useful information on constraint activity near x^0. When the Tchebyshev norm is used this information reflects changes in the dominance of one objective over another, in view of the formulation (22), in which objectives appear as constraints with moving bounds.

OPTIMIZATION OF A VEHICLE SUSPENSION SYSTEM

Example

[Figure 3. Approximation of the compromise set (objective f_1: stress); exact set and approximation.]

[Figure 4. Model of the vehicle suspension system.20]

In this section an interactive multiobjective optimization procedure with sensitivity analysis is applied to the design of a 5-dof suspension system (Figure 4). The problem has been used by several authors to illustrate different optimization techniques. The present version is based on Haug and Arora.20

The problem consists of finding stiffness and damping parameters k_1, k_2, k_3, c_1, c_2 and c_3 that optimize the design of the system for a given road condition r(t). For instance, one could define a satisfactory design as one in which

(i) the acceleration and the displacement of the driver seat, z̈_1 and z_1, are as small as possible;
(ii) the relative displacement between the wheels and the chassis is as small as possible.

These performance criteria will be measured here by the functions

    f_1 = ‖ z̈_1 ‖,    f_2 = ‖ z_1 ‖
    f_3 = ‖ z_4 - z_2 - 4.0 z_3 ‖                                             (25)

where ‖ u ‖ is the mean-square of u(t) for t ∈ [0, 2]. The state variable z is obtained from the solution of the first order system

    M ż(t) = F(t),    z(0) = 0                                                (26)

where z = { z_1, ..., z_5, ż_1, ..., ż_5 }. The matrix M is a function of the design variables, which are normalized and constrained to be in the set

    X = { x = (k_1, k_2, k_3, c_1, c_2, c_3) : 0 ≤ k_i ≤ 1; 0 ≤ c_i ≤ 1, i = 1, 2, 3 }

Both M and F can be calculated easily using Lagrange's equations of motion. Numerical values for the parameters are as in Reference 20. Notice that although the problem is small in dimension, one function and gradient evaluation involves somewhat lengthy computations, including the solution of the state equations and one adjoint equation for each objective.
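The mean-square measure ‖u‖ used above can be evaluated from a sampled trajectory as sketched below; the state history z(t) itself would come from integrating (26) for the model of Reference 20 (not reproduced here), for example with scipy.integrate.solve_ivp.

```python
import numpy as np

def mean_square(u, t):
    """Mean-square value of a sampled signal u(t) over the interval [t[0], t[-1]]."""
    return np.trapz(u ** 2, t) / (t[-1] - t[0])

# Synthetic signal on [0, 2] for illustration (the actual signals come from solving (26)).
t = np.linspace(0.0, 2.0, 401)
u = np.sin(np.pi * t)
print(mean_square(u, t))   # approximately 0.5 for a unit-amplitude sinusoid
```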

To start the optimization procedure, the initial design is taken as x^0 = (0, 0.5, 0.5, 1, 1.5, 0.5) and the outcomes are normalized so that

    f(x^0) = (1.85, 1.36, ...)

and the ideal solution, chosen as f* = (1, 1, 1, 1)^T, satisfies (4). With this information, the decision-making process can be initiated:

Step: Optimization. The aspiration level is set, arbitrarily, to f̄ = f(x^0) and the problem P(f̄) is solved using a sequential quadratic programming algorithm. A compromise solution is found at

    x^1 = (0, 0.551, 0, 1, 1, 0.778)
    f = (1.82, 1.34, 1.57, 2.09)

The level of f_4 is assumed to be unsatisfactory.

Step: Sensitivity analysis. With the goal of reducing f_4, the effect of changes in the aspiration level f̄_4 is studied using the perturbed SQP problem (23). The predicted outcomes are shown in Figure 5 (for comparison, the exact compromise solution for the given aspiration level is also shown). The study predicts that in this region of the compromise set the desired reduction in f_4 can be achieved without significantly affecting the other objectives. The outcome f_4 = 1.81 predicted by the aspiration level f̄_4 = 1.80 is selected as appropriate.

Step: Optimization. The aspiration level of f_4 is set to 1.8 and P(f̄) is solved again. The resulting compromise solution is

    f = (1.84, 1.34, 1.59, 1.78)

which compares favourably with the predicted outcome (Figure 5).

Step: Sensitivity analysis. A new modification of the design is studied by varying the aspiration level f̄_1 (Figure 6). It is concluded from this study that the second objective is relatively insensitive to design changes in the neighbourhood of the present solution. When f̄_1 is set to 1.65, the predicted outcome

    f = (1.72, 1.34, 1.65, 1.86)

is considered acceptable. Similar investigations can be performed by varying one or several aspiration levels simultaneously.

[Figure 5. Predicted effect of changes in f̄_4: exact and approximate outcomes against the aspiration level f̄_4.]

[Figure 6. Predicted effect of changes in f̄_1: exact and approximate outcomes against the aspiration level f̄_1.]

Step: Optimization. The aspiration level f̄_1 is set to 1.65 and P(f̄) is solved once more. The new compromise solution is

    f = (1.71, 1.34, 1.65, 1.87)

The procedure is continued if this solution is not acceptable.
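The steps above follow a simple loop that can be organized as in the skeleton below; every callable is a placeholder for the pieces described in the paper (the P(f̄) solver, the perturbed-program predictor and the designer's judgement), and all names are assumptions made for the sketch.

```python
def interactive_search(solve_P, predict, is_satisfactory, choose_aspiration,
                       f_bar0, max_rounds=10):
    """Skeleton of the interactive procedure: optimize, inspect sensitivities, update f_bar.

    solve_P(f_bar)         -> (x, f): compromise solution for aspiration level f_bar
    predict(x, f, f_bar)   -> cheap linear estimates of f for nearby aspiration levels,
                              obtained from the perturbed quadratic program (23)
    is_satisfactory(f)     -> designer's judgement of the current outcome
    choose_aspiration(est) -> next aspiration level selected from the estimates
    """
    f_bar = f_bar0
    x, f = solve_P(f_bar)                     # optimization step
    for _ in range(max_rounds):
        if is_satisfactory(f):
            break
        estimates = predict(x, f, f_bar)      # sensitivity analysis step
        f_bar = choose_aspiration(estimates)  # designer picks the new aspiration level
        x, f = solve_P(f_bar)                 # optimization step
    return x, f
```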

CONCLUSIONS

Information regarding trade-offs in the objectives can be generated within interactive multiobjective optimization programs by means of a sensitivity analysis in the compromise set. This analysis can be implemented effectively using the results generated during the optimization step. A sequential quadratic programming algorithm has been shown to be particularly suited for this task.

Several issues deserve to be studied with more attention. For instance, although the sensitivity analysis can be instrumented using true second order information (the true Hessian rather than an update), much of the attraction of the method relies on the fact that it requires little additional effort after the optimization step is finished. If second order information is not generated by the optimization algorithm, the use of the update greatly simplifies calculations, and its effect on the analysis can be the subject of additional research. The results presented here indicate that the strong connection between sensitivity analysis, compromise solutions and algorithms inspired on a Newton-like solution of the Kuhn-Tucker necessary conditions can be fruitfully exploited and should be a topic of future investigations.

REFERENCES

1. C. L. Hwang and A. S. M. Masud, Multiple Objective Decision Making: Methods and Applications. A State-of-the-Art Survey, Springer-Verlag, Berlin, 1979.
2. H. Nakayama and K. Furukawa, 'Satisficing trade-off method with an application to multiobjective structural design', Large Scale Systems, 8(1), 47-57 (1985).
3. M. Bendsoe, N. Olhoff and J. Taylor, 'A variational formulation for multicriteria structural optimization', The Danish Center for Applied Mathematics and Mechanics, Report No. 258, The Technical University of Denmark, 1983.
4. H. Eschenauer, 'Vector-optimization in structural design and its application on antenna structures', in H. Eschenauer and N. Olhoff (eds.), Optimization Methods in Structural Design, Bibliographisches Institut, Zurich, 1983.
5. J. Koski, 'Multicriterion optimization in structural design', in E. Atrek et al. (eds.), New Directions in Optimum Structural Design, Wiley, New York, 1984.
6. J. Koski and R. Silvennoinen, 'Pareto optima of isostatic trusses', Computer Methods in Applied Mechanics and Engineering, 31, 265-279 (1982).
7. A. Osyczka, 'An approach to multicriterion optimization for structural design', Proc. Int. Symp. on Optimum Structural Design, University of Arizona, Tucson, Arizona, 1981.
8. W. Stadler, 'A comprehensive bibliography on multicriteria decision making', in M. Zeleny (ed.), MCDM: Past Decades and Future Trends, JAI Press, Greenwich, Conn., 1984.
9. A. Osyczka and J. Koski, 'Selected works related to multicriterion optimization methods for engineering design', in H. Eschenauer and N. Olhoff (eds.), Optimization Methods in Structural Design, Bibliographisches Institut, Zurich, 1983.
10. M. Zeleny, 'Compromise programming', in J. L. Cochrane and M. Zeleny (eds.), Multiple Criteria Decision Making, University of South Carolina Press, Columbia, 1973.
11. W. B. Gearhart, 'Compromise solutions and estimation of the noninferior set', J. Optim. Theory Appl., 28, 29-47 (1979).
12. Y. Sawaragi, H. Nakayama and T. Tanino, Theory of Multiobjective Optimization, Academic Press, Orlando, 1985.
13. M. Freimer and P. L. Yu, 'Some new results on compromise solutions for group decision problems', Management Sci., 22, 688-693 (1976).
14. V. J. Bowman, 'On the relationship of the Tchebyshev norm and the efficient frontier of multiple-criteria objectives', in H. Thiriez and S. Zionts (eds.), Multiple Criteria Decision Making, Springer-Verlag, New York, 1976.
15. M. Zeleny, 'The theory of the displaced ideal', in M. Zeleny (ed.), Multiple Criteria Decision Making, Kyoto 1975, Springer-Verlag, New York, 1976.
16. A. V. Fiacco, 'Sensitivity analysis for nonlinear programming using penalty methods', Math. Prog., 10, 287-311 (1976).
17. A. V. Fiacco, Introduction to Sensitivity and Stability Analysis in Nonlinear Programming, Academic Press, New York, 1983.
18. R. Benayoun, J. de Montgolfier, J. Tergny and O. Laritchev, 'Linear programming with multiple objective functions: STEP method (STEM)', Math. Prog., 1, 366-375 (1971).
19. M. J. D. Powell, 'Algorithms for nonlinear constraints that use Lagrangian functions', Math. Prog., 14, 224-248 (1978).
20. E. Haug and J. Arora, Applied Optimal Design, Wiley, New York, 1979.