
International Journal of Computational Intelligence Research. ISSN 0973-1873, Vol. 2, No. 3 (2006), pp. 287-308. © Research India Publications. http://www.ijcir.info

Multi-Objective Particle Swarm Optimizers: A Survey of the State-of-the-Art

    Margarita Reyes-Sierra and Carlos A. Coello Coello

CINVESTAV-IPN (Evolutionary Computation Group), Electrical Engineering Department, Computer Science Section, Av. IPN No. 2508, Col. San Pedro Zacatenco, México D.F. 07300, MÉXICO

    [email protected]@cs.cinvestav.mx

Abstract: The success of the Particle Swarm Optimization (PSO) algorithm as a single-objective optimizer (mainly when dealing with continuous search spaces) has motivated researchers to extend the use of this bio-inspired technique to other areas. One of them is multi-objective optimization. Despite the fact that the first proposal of a Multi-Objective Particle Swarm Optimizer (MOPSO) is over six years old, a considerable number of other algorithms have been proposed since then. This paper presents a comprehensive review of the various MOPSOs reported in the specialized literature. As part of this review, we include a classification of the approaches, and we identify the main features of each proposal. In the last part of the paper, we list some of the topics within this field that we consider promising areas of future research.

    I. Introduction

Optimization problems that have more than one objective function are rather common in every field or area of knowledge. In such problems, the objectives to be optimized are normally in conflict with respect to each other, which means that there is no single solution for these problems. Instead, we aim to find good trade-off solutions that represent the best possible compromises among the objectives.

Particle Swarm Optimization (PSO) is a heuristic search technique (which is considered an evolutionary algorithm by its authors [18]) that simulates the movements of a flock of birds aiming to find food. The relative simplicity of PSO and the fact that it is a population-based technique have made it a natural candidate to be extended for multi-objective optimization.

Moore and Chapman proposed the first extension of the PSO strategy for solving multi-objective problems in an unpublished manuscript from 1999¹ [41]. After this early attempt, a great interest in extending PSO arose among researchers, but interestingly, the next proposal was not published until 2002. Nevertheless, there are currently over twenty-five different proposals of MOPSOs reported in the specialized literature. This paper provides the first survey of this work, attempting to classify these proposals and to delineate some of the potential research paths that could be followed in the future by researchers in this area.

¹This paper may be found in the EMOO repository located at: http://delta.cs.cinvestav.mx/ccoello/EMOO/

The remainder of this paper is organized as follows. In Section II, we provide some basic concepts from multi-objective optimization required to make the paper self-contained. Section III presents an introduction to the PSO strategy and Section IV presents a brief discussion about extending the PSO strategy for solving multi-objective problems. A complete review of the MOPSO approaches is provided in Section V. We provide a brief discussion about the convergence properties of PSO and MOPSO in Section VI. In Section VII, possible paths of future research are discussed and, finally, we present our conclusions in Section VIII.

    II. Basic Concepts

We are interested in solving problems of the type²:

minimize f(x) := [f_1(x), f_2(x), ..., f_k(x)]    (1)

subject to:  g_i(x) ≤ 0,  i = 1, 2, ..., m    (2)

h_j(x) = 0,  j = 1, 2, ..., p    (3)

where x = [x_1, x_2, ..., x_n]^T is the vector of decision variables, f_i : R^n → R, i = 1, ..., k are the objective functions and g_i, h_j : R^n → R, i = 1, ..., m, j = 1, ..., p are the constraint functions of the problem.

²Without loss of generality, we will assume only minimization problems.


Figure 1: Dominance relation in a bi-objective space.

To describe the concept of optimality in which we are interested, we will next introduce a few definitions.

Definition 1. Given two vectors x, y ∈ R^k, we say that x ≤ y if x_i ≤ y_i for i = 1, ..., k, and that x dominates y (denoted by x ≺ y) if x ≤ y and x ≠ y.

Figure 1 shows a particular case of the dominance relation in the presence of two objective functions.

Definition 2. We say that a vector of decision variables x ∈ X ⊂ R^n is nondominated with respect to X, if there does not exist another x' ∈ X such that f(x') ≺ f(x).

Definition 3. We say that a vector of decision variables x ∈ F ⊂ R^n (F is the feasible region) is Pareto-optimal if it is nondominated with respect to F.

Definition 4. The Pareto Optimal Set P is defined by:

P = {x ∈ F | x is Pareto-optimal}

Definition 5. The Pareto Front PF is defined by:

PF = {f(x) ∈ R^k | x ∈ P}

Figure 2 shows a particular case of the Pareto front in the presence of two objective functions.

We thus wish to determine the Pareto optimal set from the set F of all the decision variable vectors that satisfy (2) and (3). Note, however, that in practice, not all of the Pareto optimal set is normally desirable (e.g., it may not be desirable to have different solutions that map to the same values in objective function space) or achievable.
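As an illustration of Definitions 1-5, the dominance test and the extraction of the nondominated set can be sketched in a few lines of Python (a minimal sketch for minimization; the function names are ours, not the paper's):

```python
def dominates(x, y):
    """True if objective vector x dominates y (Definition 1):
    x is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(x, y)) and x != y

def nondominated(points):
    """Points not dominated by any other point in `points` (Definition 2);
    applied to the whole feasible region, this yields the Pareto optimal set."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among the objective vectors (1, 3), (2, 2), (3, 1) and (2, 3), the first three are mutually nondominated, while (2, 3) is dominated by (1, 3).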

    III. Particle Swarm Optimization

James Kennedy and Russell C. Eberhart [30] originally proposed the PSO algorithm for optimization. PSO is a population-based search algorithm based on the simulation of the social behavior of birds within a flock. Although originally adopted for balancing weights in neural networks [17], PSO soon became a very popular global optimizer, mainly in

Figure 2: The Pareto front of a set of solutions in a two-objective space.

problems in which the decision variables are real numbers³ [32, 19].

According to Angeline [3], we can make two main distinctions between PSO and an evolutionary algorithm:

1. Evolutionary algorithms rely on three mechanisms in their processing: parent representation, selection of individuals and the fine-tuning of their parameters. In contrast, PSO only relies on two mechanisms, since PSO does not adopt an explicit selection function. The absence of a selection mechanism in PSO is compensated by the use of leaders to guide the search. However, there is no notion of offspring generation in PSO as with evolutionary algorithms.

2. A second difference between evolutionary algorithms and PSO has to do with the way in which the individuals are manipulated. PSO uses an operator that sets the velocity of a particle towards a particular direction. This can be seen as a directional mutation operator in which the direction is defined by both the particle's personal best and the global best (of the swarm). If the direction of the personal best is similar to the direction of the global best, the angle of potential directions will be small, whereas a larger angle will provide a larger range of exploration. In contrast, evolutionary algorithms use a mutation operator that can set an individual in any direction (although the relative probabilities for each direction may be different). In fact, the limitations exhibited by the directional mutation of PSO have led to the use of mutation operators similar to those adopted in evolutionary algorithms.

We believe that two key aspects have made PSO so popular:

³It is worth noting that there have been proposals to use alternative encodings with PSO (e.g., binary [31] and integer [26]), but none of them has been as popular as the original proposal, in which the algorithm operates using vectors of real numbers.


1. The main algorithm of PSO is relatively simple (since in its original version, it only adopts one operator for creating new solutions, unlike most evolutionary algorithms) and its implementation is, therefore, straightforward. Additionally, there is plenty of source code of PSO available in the public domain (see for example: http://www.swarmintelligence.org/codes.php).

2. PSO has been found to be very effective in a wide variety of applications, being able to produce very good results at a very low computational cost [32, 20].

In order to establish a common terminology, in the following we provide definitions of several commonly used technical terms:

    Swarm: Population of the algorithm.

Particle: Member (individual) of the swarm. Each particle represents a potential solution to the problem being solved. The position of a particle is determined by the solution it currently represents.

pbest (personal best): Personal best position of a given particle, so far. That is, the position of the particle that has provided the greatest success (measured in terms of a scalar value analogous to the fitness adopted in evolutionary algorithms).

lbest (local best): Position of the best particle member of the neighborhood of a given particle.

gbest (global best): Position of the best particle of the entire swarm.

Leader: Particle that is used to guide another particle towards better regions of the search space.

Velocity (vector): This vector drives the optimization process, that is, it determines the direction in which a particle needs to "fly" (move) in order to improve its current position.

Inertia weight: Denoted by W, the inertia weight is employed to control the impact of the previous history of velocities on the current velocity of a given particle.

Learning factor: Represents the attraction that a particle has toward either its own success or that of its neighbors. Two learning factors are used: C1 and C2. C1 is the cognitive learning factor and represents the attraction that a particle has toward its own success. C2 is the social learning factor and represents the attraction that a particle has toward the success of its neighbors. Both C1 and C2 are usually defined as constants.

Neighborhood topology: Determines the set of particles that contribute to the calculation of the lbest value of a given particle.

In PSO, particles are "flown" through hyperdimensional search space. Changes to the position of the particles within the search space are based on the social-psychological tendency of individuals to emulate the success of other individuals.

The position of each particle is changed according to its own experience and that of its neighbors. Let x_i(t) denote the position of particle p_i at time step t. The position of p_i is then changed by adding a velocity v_i(t) to the current position, i.e.:

x_i(t) = x_i(t−1) + v_i(t)    (4)

The velocity vector reflects the socially exchanged information and, in general, is defined in the following way:

v_i(t) = W·v_i(t−1) + C1·r1·(x_pbest_i − x_i(t)) + C2·r2·(x_leader − x_i(t))    (5)

where r1, r2 ∈ [0, 1] are random values. Particles tend to be influenced by the success of anyone they are connected to. These neighbors are not necessarily particles which are close to each other in parameter (decision variable) space, but instead are particles that are close to each other based on a neighborhood topology that defines the social structure of the swarm [32].
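Equations (4) and (5) amount to the following per-particle update (a hedged sketch; the values for W, C1 and C2 are illustrative assumptions, not prescribed by the paper, and the random factors are drawn per dimension, a common implementation choice):

```python
import random

W, C1, C2 = 0.7, 1.5, 1.5  # illustrative inertia weight and learning factors

def update_particle(x, v, pbest, leader):
    """One flight step: Equation (5) for the velocity, Equation (4) for the position."""
    new_v = [W * vi
             + C1 * random.random() * (pb - xi)   # cognitive component
             + C2 * random.random() * (ld - xi)   # social component
             for xi, vi, pb, ld in zip(x, v, pbest, leader)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```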

Particles can be connected to each other in any kind of neighborhood topology, represented as a graph. In the following, we list some typical neighborhood graphs used in PSO.

Empty graph: In this topology, particles are isolated. Each particle is connected only with itself, and it compares its current position only to its own best position found so far (pbest) [19]. In this case, C2 = 0 in Equation 5.

Local best: In this topology, each particle is affected by the best performance of its k immediate neighbors. Particles are influenced by the best position within their neighborhood (lbest), as well as their own past experience (pbest) [19]. When k = 2, this structure is equivalent to a ring topology such as the one shown in Figure 3. In this case, leader = lbest in Equation 5.

Fully connected graph: This topology is the opposite of the empty graph. The fully connected topology connects all members of the swarm to one another. Each particle uses its history of experiences in terms of its own best solution so far (pbest) but, in addition, the particle uses the position of the best particle from the entire swarm (gbest). This structure is also called star topology in the PSO community [19]. See Figure 4. In this case, leader = gbest in Equation 5.

Star network: In this topology, one particle is connected to all others and they are connected to only that


Figure 3: The ring neighborhood topology that represents the local best scheme, when k = 2. In this common local best case, each particle is affected only by its two immediately adjacent neighbors. Each circle represents a particle.

Figure 4: The fully connected graph represents the fully connected neighborhood topology (each circle represents a particle). All members of the swarm are connected to one another.

Figure 5: The star network topology (each circle represents a particle). The focal particle is connected to all the other particles and they are connected to only that one.

Figure 6: The tree network topology (each circle represents a particle). All particles are arranged in a tree. A particle is influenced by its own best position so far (pbest) and by the best position of the particle that is directly above it in the tree. Here, we show an example of a topology defined by a regular tree with a height equal to 3, degree equal to 4 and a total of 21 particles.

one (called the focal particle) [19]. See Figure 5. Particles are isolated from one another, as all information has to be communicated through the focal particle. The focal particle compares the performances of all particles in the swarm and adjusts its trajectory towards the best of them. That performance is eventually communicated to the rest of the swarm. This structure is also called wheel topology in the PSO community. In this case, leader = focal in Equation 5.

Tree network: In this topology, all particles are arranged in a tree and each node of the tree contains exactly one particle [28]. See Figure 6. A particle is influenced by its own best position so far (pbest) and by the best position of the particle that is directly above it in the tree (its parent). If a particle at a child node has found a solution that is better than the best-so-far solution of the particle at the parent node, both particles are exchanged. In this way, this topology offers a dynamic neighborhood. This structure is also called hierarchical topology in the PSO community. In this case, leader = pbest_parent in Equation 5.
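As a concrete instance of the list above, the local best (ring, k = 2) topology reduces to a simple index computation (a minimal sketch assuming minimization; the function name is ours):

```python
def ring_lbest(fitness, i):
    """Index of the lbest particle for particle i in a ring topology:
    the best (lowest fitness) among i and its two immediate ring neighbors."""
    n = len(fitness)
    neighborhood = [(i - 1) % n, i, (i + 1) % n]
    return min(neighborhood, key=lambda j: fitness[j])
```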

The neighborhood topology is likely to affect the rate of convergence, as it determines how long it takes the


Begin
  Initialize swarm
  Locate leader
  g = 0
  While g < gmax
    For each particle
      Update Position (Flight)
      Evaluation
      Update pbest
    EndFor
    Update leader
    g++
  EndWhile
End

Figure 7: Pseudocode of the general PSO algorithm.

particles to find out about the location of good (better) regions of the search space. For example, since in the fully connected topology all particles are connected to each other, all particles receive the information about the best solution of the entire swarm at the same time. Thus, when using such a topology, the swarm tends to converge more rapidly than when using local best topologies, since in the latter case, the information about the best position of the swarm takes a longer time to be transferred. However, for the same reason, the fully connected topology is also more susceptible to premature convergence (i.e., convergence to local optima) [20].

Figure 7 shows the way in which the general (single-objective) PSO algorithm works. First, the swarm is initialized. This initialization includes both positions and velocities. The corresponding pbest of each particle is initialized and the leader is located (usually the gbest solution is selected as the leader). Then, for a maximum number of iterations, each particle flies through the search space updating its position (using (4) and (5)) and its pbest and, finally, the leader is updated too.
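The loop of Figure 7 can be sketched as a runnable gbest PSO (a hedged sketch for minimization; the initialization range and parameter values are illustrative assumptions, not taken from the paper):

```python
import random

def pso(f, dim, n_particles=20, g_max=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal single-objective PSO following Figure 7, with gbest as leader."""
    # Initialize swarm (positions and velocities) and the pbest memory
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    leader = min(range(n_particles), key=lambda i: pbest_f[i])  # locate leader
    for g in range(g_max):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # Flight: Equations (5) and (4)
            V[i] = [w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
                    for v, x, pb, gb in zip(V[i], X[i], pbest[i], pbest[leader])]
            X[i] = [x + v for x, v in zip(X[i], V[i])]
            fx = f(X[i])                          # evaluation
            if fx < pbest_f[i]:                   # update pbest
                pbest[i], pbest_f[i] = X[i][:], fx
        leader = min(range(n_particles), key=lambda i: pbest_f[i])  # update leader
    return pbest[leader], pbest_f[leader]
```

On a smooth test function such as the sphere (sum of squared variables), this sketch typically drives the best fitness close to zero within a few dozen iterations.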

    IV. Particle Swarm Optimization for Multi-Objective Problems

In order to apply the PSO strategy for solving multi-objective optimization problems, it is obvious that the original scheme has to be modified. As we saw in Section II, the solution set of a problem with multiple objectives does not consist of a single solution (as in global optimization). Instead, in multi-objective optimization, we aim to find a set of different solutions (the so-called Pareto optimal set). In general, when solving a multi-objective problem, there are three main goals to achieve [73]:

    1. Maximize the number of elements of the Pareto optimalset found.

2. Minimize the distance of the Pareto front produced by our algorithm with respect to the true (global) Pareto front (assuming we know its location).

3. Maximize the spread of solutions found, so that we can have a distribution of vectors as smooth and uniform as possible.

Given the population-based nature of PSO, it is desirable to produce several (different) nondominated solutions within a single run. So, as with any other evolutionary algorithm, the three main issues to be considered when extending PSO to multi-objective optimization are [13]:

    1. How to select particles (to be used as leaders) in orderto give preference to nondominated solutions over thosethat are dominated?

2. How to retain the nondominated solutions found during the search process in order to report solutions that are nondominated with respect to all the past populations and not only with respect to the current one? Also, it is desirable that these solutions are well spread along the Pareto front.

    3. How to maintain diversity in the swarm in order to avoidconvergence to a single solution?

As we could see in the previous section, when solving single-objective optimization problems, the leader that each particle uses to update its position is completely determined once a neighborhood topology is established. However, in the case of multi-objective optimization problems, each particle might have a set of different leaders from which just one can be selected in order to update its position. Such a set of leaders is usually stored in a place different from the swarm, which we will call the external archive⁴: this is a repository in which the nondominated solutions found so far are stored. The solutions contained in the external archive are used as leaders when the positions of the particles of the swarm have to be updated. Furthermore, the contents of the external archive are also usually reported as the final output of the algorithm.

Figure 8 shows the way in which a general MOPSO algorithm works. We have marked with italics the processes that make this algorithm different from the general PSO algorithm for single-objective optimization.

First, the swarm is initialized. Then, a set of leaders is also initialized with the nondominated particles from the swarm. As we mentioned before, the set of leaders is usually stored in an external archive. Later on, some sort of quality measure is calculated for all the leaders in order to select (usually) one leader for each particle of the swarm. At each generation, for each particle, a leader is selected and the flight is performed. Most of the existing MOPSOs apply some sort of mutation operator⁵ after performing the flight. Then, the

⁴This external archive is also used by many Multi-Objective Evolutionary Algorithms (MOEAs).

⁵The mutation operators adopted in the PSO literature have also been called turbulence operators.


Begin
  Initialize swarm
  Initialize leaders in an external archive
  Quality (leaders)
  g = 0
  While g < gmax
    For each particle
      Select leader
      Update Position (Flight)
      Mutation
      Evaluation
      Update pbest
    EndFor
    Update leaders in the external archive
    Quality (leaders)
    g++
  EndWhile
  Report results in the external archive
End

Figure 8: Pseudocode of a general MOPSO algorithm.

particle is evaluated and its corresponding pbest is updated. A new particle usually replaces its pbest particle when this pbest is dominated or when both are incomparable (i.e., they are both nondominated with respect to each other). After all the particles have been updated, the set of leaders is updated, too. Finally, the quality measure of the set of leaders is recalculated. This process is repeated for a certain (usually fixed) number of iterations.
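The external-archive update described above can be sketched as follows (a minimal Python sketch over objective vectors, for minimization; the function names are ours and no archive size bound is imposed here):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def update_archive(archive, candidate):
    """Admit `candidate` only if no archive member dominates it; evict any
    members it dominates, so the archive stays mutually nondominated."""
    if any(dominates(a, candidate) for a in archive):
        return archive                            # rejected: dominated
    pruned = [a for a in archive if not dominates(candidate, a)]
    return pruned + [candidate]                   # accepted
```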

As we can see, given the characteristics of the PSO algorithm, the issues that arise when dealing with multi-objective problems are related to two main algorithmic design aspects [64]:

    1. Selection and updating of leaders:

How to select a single leader out of a set of nondominated solutions which are all equally good? Should we select this leader in a random way, or should we use an additional criterion (to promote diversity, for example)?

How to select the particles that should remain in the external archive from one iteration to another?

    2. Creation of new solutions:

How to promote diversity through the two main mechanisms used to create new solutions: the updating of positions (Equations 4 and 5) and the mutation (turbulence) operator?

These issues are discussed in more detail in the next subsections.

    A. Leaders in Multi-Objective Optimization

Since the solution of a multi-objective problem consists of a set of equally good solutions, it is evident that the concept of leader traditionally adopted in PSO has to be changed. A few researchers have avoided the problem of defining a new concept of leader for multi-objective problems by adopting aggregating functions (i.e., weighted sums of the objectives) or approaches that optimize each objective separately. We will briefly discuss these approaches in Section V.

However, it is important to indicate that the majority of the currently proposed MOPSO approaches redefine the concept of leader.

As we mentioned before, the selection of a leader is a key component when designing a MOPSO approach. The most straightforward approach is to consider every nondominated solution as a new leader, from which just one leader then has to be selected. In this way, a quality measure that indicates how good a leader is becomes very important. Obviously, such a measure can be defined in several different ways. As we will see in Section V, there already exist different proposals to deal with this issue.

One possible way of defining such a quality measure is in terms of density measures. Diversity may then be promoted by means of mechanisms based on quality measures that indicate the closeness of the particles within the swarm.

Several authors have proposed leader selection techniques that are based on density measures. In order to help in understanding the specific approaches that are going to be described later on, we present here two of the most important density measures used in the area of multi-objective optimization:

Nearest neighbor density estimator [16]: The nearest neighbor density estimator gives us an idea of how crowded the closest neighbors of a given particle are, in objective function space. This measure estimates the perimeter of the cuboid formed by using the nearest neighbors as the vertices. See Figure 9.

Kernel density estimator [22, 15]: When a particle is sharing resources with others, its fitness is degraded in proportion to the number of, and closeness to, particles that surround it within a certain perimeter. A neighborhood of a particle is defined in terms of a parameter called σ_share that indicates the radius of the neighborhood. Such neighborhoods are called niches. See Figure 10.
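The first of these measures, the nearest neighbor density estimator (crowding distance), can be sketched as follows (a hedged sketch over a list of objective vectors; as is conventional, boundary particles are assigned an infinite value so they are always preferred):

```python
def crowding_distance(front):
    """Per-particle sum, over objectives, of the normalized gap between the
    two neighbors that enclose it when the front is sorted by that objective
    (an estimate of the cuboid perimeter illustrated in Figure 9)."""
    n, k = len(front), len(front[0])
    dist = [0.0] * n
    for m in range(k):
        order = sorted(range(n), key=lambda i: front[i][m])
        dist[order[0]] = dist[order[-1]] = float('inf')  # boundary particles
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        if f_max == f_min:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += ((front[order[j + 1]][m] - front[order[j - 1]][m])
                               / (f_max - f_min))
    return dist
```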

    B. Retaining and Spreading Nondominated Solutions

As we mentioned before, it is important to retain the nondominated solutions found along the entire search process so that, at the end, we can report those solutions that are nondominated with respect to all the previous populations. This is important not only for pragmatic reasons, but also for theoretical ones [54].


Figure 9: The nearest neighbor density estimator for an example with two objective functions. Particles with a larger value of this estimator are preferred.

Figure 10: For each particle, a niche is defined. Particles whose niche is less crowded are preferred.

The most straightforward way of retaining solutions that are nondominated with respect to all the previous populations (or swarms) is to use an external archive. Such an archive will allow the entrance of a solution only if: (a) it is nondominated with respect to the contents of the archive, or (b) it dominates any of the solutions within the archive (in this case, the dominated solutions have to be deleted from the archive).

This approach has, however, the drawback of increasing the size of the archive very quickly. This is an important issue because the archive has to be updated at each generation. Thus, this update may become very expensive, computationally speaking, if the size of the archive grows too much. In the worst case, all members of the swarm may wish to enter the archive at each generation. Thus, the corresponding updating process, at each generation, has a complexity of O(kN²), where N is the size of the swarm and k is the number of objectives. In this way, the complexity of the updating process for the complete run is O(kMN²), where M is the total number of iterations.

Thus, mainly due to practical reasons, archives tend to be bounded [13], which makes necessary the use of an additional criterion to decide which nondominated solutions to retain once the archive is full. In evolutionary multi-objective optimization, researchers have adopted different techniques to prune the archive (e.g., clustering [74] and geographical-based schemes that place the nondominated solutions in cells in order to favor less crowded cells when deleting in-excess nondominated solutions [34]). However, the use of an archive introduces additional issues: for example, do we impose additional criteria to enter the archive instead of just using nondominance (e.g., use the distribution of solutions as an additional criterion)?

Note that, strictly speaking, three archives should be used when extending PSO for multi-objective optimization: one for storing the global best solutions, one for the personal best values and a third one for storing the local best (if applicable). However, in practice, few authors report the use of more than one archive in their MOPSOs.

Besides the use of an external file, it is also possible to use a plus selection in which parents compete with their children, and those which are nondominated (and possibly comply with some additional criterion such as providing a better distribution of solutions) are selected for the following generation. In the case of PSO, a plus selection involves selecting from a merge of two consecutive swarms.

More recently, other researchers have proposed the use of relaxed forms of dominance. The main one adopted in PSO has been ε-dominance [36], which is illustrated in Figure 11. The main use of this concept in multi-objective PSO has been to filter solutions in the external archive. By using ε-dominance, we define a set of boxes of size ε and only one nondominated solution is retained for each box (e.g., the one closest to the lower lefthand corner). This is illustrated in Figure 12, for a bi-objective case. The use of ε-dominance,


Figure 11: To the left, we can see the area that is dominated by a certain solution. To the right, we graphically depict the ε-dominance concept. In this case, the area being dominated has been extended by a value proportional to the parameter ε (which is defined by the user).

Figure 12: An example of the use of ε-dominance in an external archive. Solution 1 dominates solution 2, therefore solution 1 is preferred. Solutions 3 and 4 are incomparable. However, solution 3 is preferred over solution 4, since solution 3 is the closer of the two to the lower lefthand corner of their box, represented by point (2ε, 2ε). Solution 5 dominates solution 6, therefore solution 5 is preferred. Solution 7 is not accepted, since its box, represented by point (2ε, 3ε), is dominated by the box represented by point (2ε, 2ε).

as proposed in [36] and illustrated in Figure 12, guarantees that the retained solutions are nondominated with respect to all solutions generated during the run. It is worth noting, however, that when using ε-dominance, the size of the final external archive depends on the value of ε, which is normally a user-defined parameter [36]. Mostaghim and Teich [43] have found that when comparing ε-dominance against existing clustering techniques for fixing the archive size, the ε-dominance method can find solutions much faster (computationally speaking) than the clustering technique, with comparable (and in some cases even better) convergence and diversity.
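The box mapping behind ε-dominance can be sketched as follows (a minimal sketch for minimization; the function names are ours, and ε is the user-defined parameter mentioned above):

```python
import math

def box(f, eps):
    """Lower lefthand corner (in units of eps) of the box containing
    objective vector f; one nondominated solution is kept per box."""
    return tuple(math.floor(v / eps) for v in f)

def box_dominates(b1, b2):
    """Box b1 dominates box b2 under the usual Pareto relation."""
    return all(x <= y for x, y in zip(b1, b2)) and b1 != b2
```

In the setting of Figure 12, a solution whose box corner is (2, 3) would be rejected because the box with corner (2, 2) dominates it.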

    C. Promoting Diversity while Creating New Solutions

It is well known that one of the most important features of the PSO algorithm is its fast convergence. This is a positive feature as long as we don't have premature convergence (i.e., convergence to a local optimum).

Premature convergence is caused by the rapid loss of diversity within the swarm. So, the appropriate promotion of diversity in PSO is a very important issue in order to control its (normally fast) convergence.

As we mentioned in Section IV-A, when adopting PSO for solving multi-objective optimization problems, it is possible to promote diversity through the selection of leaders. However, this can also be done through the two main mechanisms used for creating new solutions:

    1. Updating of positions. As we mentioned in Section III, the use of different neighborhood topologies determines how fast the information is transferred through the swarm (since a neighborhood determines who the leader particle is in Equation 5). Since in a fully connected topology all particles are connected with each other, the information is transferred faster than in the case of a local best or a tree topology, since in these cases particles have smaller neighborhoods. Under the same argument, a specified neighborhood topology also determines how fast diversity is lost within the swarm. Since in a fully connected topology the transfer of information is fast, when using this topology, diversity within the swarm is also lost rapidly. In this way, topologies that define neighborhoods smaller than the entire swarm for each particle can also preserve diversity within the swarm for a longer time.
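    The contrast between the two topologies can be sketched as follows (an illustrative example with invented function names, assuming minimization):

```python
def star_neighbors(i, n):
    """Fully connected (gbest) topology: every particle sees the whole swarm."""
    return list(range(n))

def ring_neighbors(i, n, k=1):
    """Ring (lbest) topology: particle i only sees its k nearest indices."""
    return [(i + d) % n for d in range(-k, k + 1)]

def leader(fitness, neighborhood):
    """Leader = best (lowest-fitness) particle in the neighborhood."""
    return min(neighborhood, key=lambda j: fitness[j])
```

    With the star topology every particle follows the same global leader, so information spreads (and diversity is lost) in a single step; with the ring, good information must travel around the swarm one small neighborhood at a time.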

    On the other hand, diversity can also be promoted by means of the inertia weight (W in Equation 5). As it was defined in Section III, the inertia weight is employed to control the impact of the previous history of velocities on the current velocity. Thus, the inertia weight influences the trade-off between global (wide-ranging) and local (nearby) exploration abilities [58]. A large inertia weight facilitates global exploration (searching new areas) while a smaller inertia weight tends to facilitate local exploration to fine-tune the current search area. The


    value of the inertia weight may vary during the optimization process. Shi [59] asserted that, by linearly decreasing the inertia weight from a relatively large value to a small one through the course of the PSO run, the PSO tends to have more global search ability at the beginning of the run and more local search ability near the end of the run. On the other hand, Zheng et al. [72] argue that it is a small inertia weight that is associated with either global or local search ability, and that a large inertia weight gives the algorithm more chances to stabilize. In this way, inspired by the process of the simulated annealing algorithm, the authors proposed to use an increasing inertia weight through the PSO run.
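    The two schedules above can be sketched in a few lines; note that the 0.9 to 0.4 range is a commonly used setting rather than one mandated by the survey, and the function names are ours:

```python
def decreasing_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Shi's schedule: linearly decrease w, favoring global search early
    in the run and local fine-tuning near the end."""
    return w_start - (w_start - w_end) * (t / t_max)

def increasing_inertia(t, t_max, w_start=0.4, w_end=0.9):
    """Zheng et al.'s annealing-inspired alternative: w grows over the run."""
    return w_start + (w_end - w_start) * (t / t_max)
```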

    The addition of velocity to the current position to generate the next position is similar to the mutation operator in evolutionary algorithms, except that mutation in PSO is guided by the experience of a particle and that of its neighbors. In other words, PSO performs mutation with a conscience [58].

    2. Through the use of a mutation (or turbulence) operator.

    As mentioned in the previous section, when a particle updates its position, a mutation with conscience occurs. Sometimes, however, some unconsciousness or craziness, as called by Kennedy and Eberhart in the original proposal of PSO [30], is needed. Craziness, also referred to as turbulence, reflects a change in a particle's flight which is out of its control [21].

    In general, when a swarm stagnates, that is, when the velocities of the particles are almost zero, it becomes unable to generate new solutions which might lead the swarm out of this state. This behavior can lead to the whole swarm being trapped in a local optimum from which it becomes impossible to escape. Since the global best individual attracts all members of the swarm, it is possible to lead the swarm away from a current location by mutating a single particle, if the mutated particle becomes the new global best. This mechanism potentially provides a means both of escaping local optima and of

    speeding up the search [62].

    In this way, the use of a mutation operator is very important in order to escape from local optima and to improve the exploratory capabilities of PSO. When a solution is chosen to be mutated, each component is then mutated (randomly changed), or not, with a certain probability. In fact, different mutation operators have been proposed that mutate components of either the position or the velocity of a particle.

    In our experience, the choice of a good mutation operator is a difficult task that has a significant impact on performance. On the other hand, once we have selected a specific mutation operator, another difficult task is to decide how much mutation to apply: with what probability, at which moments of the process, to which specific component of a particle, etc.

    Several proposed approaches have used different mutation operators; however, there are also approaches which do not use any kind of mutation operator and still show good performance. So, the use of mutation is an issue that certainly deserves a more careful study.
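    One simple instance of such a turbulence operator is a uniform reset mutation; this sketch is our own (the uniform-reset choice and the names are assumptions, and published operators differ in whether they perturb the position or the velocity):

```python
import random

def turbulence(position, bounds, prob, rng=random):
    """Reset each coordinate, with probability prob, to a uniform random
    value inside its variable bounds. This is one illustrative way of
    re-injecting 'craziness' into a stagnated swarm."""
    return [rng.uniform(lo, hi) if rng.random() < prob else x
            for x, (lo, hi) in zip(position, bounds)]
```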

    V. A Taxonomy of Approaches

    The taxonomy that we propose to classify the current MOPSOs is the following:

    Aggregating approaches

    Lexicographic ordering

    Sub-Population approaches

    Pareto-based approaches

    Combined approaches

    Other approaches

    We will discuss next each of these types of approaches. Also, Table 1 summarizes all the different approaches and indicates their most important features.

    A. Aggregating Approaches

    Under this category we consider approaches that combine (or aggregate) all the objectives of the problem into a single one. In other words, the multi-objective problem is transformed into a single-objective one. This is not a new idea, since aggregating functions can be derived from the well-known Kuhn-Tucker conditions for nondominated solutions [35].

    Parsopoulos and Vrahatis [50]: This algorithm adopts three types of aggregating functions: (1) a conventional linear aggregating function (where weights are fixed during the run), (2) a dynamic aggregating function (where weights are gradually modified during the run) and (3) the bang-bang weighted aggregation approach (where weights are abruptly modified during the run)6 [29]. In all cases, the authors adopt the fully connected topology.

    Baumgartner et al. [6]: This approach, based on the fully connected topology, uses linear aggregating functions. In this case, the swarm is equally partitioned into n subswarms, each of which uses a different set of weights and evolves in the direction of its own swarm leader. The approach adopts a gradient technique to identify the Pareto optimal solutions.

    6This approach has the peculiarity of being able to generate nonconvex portions of the Pareto front, which is something that traditional linear aggregating functions cannot do [14].
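    For a bi-objective problem, the three weighting strategies can be sketched as follows; the schedules follow the forms reported by Jin et al. [29], but the function names, the frequency parameter and the fixed weight of 0.5 are our own illustrative choices:

```python
import math

def linear_w1(t):
    """Conventional linear aggregation: the weight is fixed during the run."""
    return 0.5

def bang_bang_w1(t, freq=200):
    """Bang-bang weighted aggregation: w1 jumps abruptly between 1 and 0."""
    return 1.0 if math.sin(2 * math.pi * t / freq) >= 0 else 0.0

def dynamic_w1(t, freq=200):
    """Dynamic weighted aggregation: w1 changes gradually during the run."""
    return abs(math.sin(2 * math.pi * t / freq))

def scalar_fitness(f, w1):
    """Bi-objective linear aggregation used as the particle's single fitness."""
    return w1 * f[0] + (1.0 - w1) * f[1]
```

    Cycling the weights over time is what allows a single-objective PSO to sweep out different regions of the Pareto front during one run.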


    B. Lexicographic Ordering

    In this method, the user is asked to rank the objectives in order of importance. The optimum solution is then obtained by minimizing the objective functions separately, starting with the most important one and proceeding according to the assigned order of importance of the objectives [40]. Lexicographic ordering tends to be useful only when few objective functions are used (two or three), and it may be sensitive to the ordering of the objectives [10].
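    The underlying comparison can be sketched as follows (an illustrative helper of our own, assuming minimization and objectives already sorted by importance):

```python
def lex_better(fa, fb, tol=1e-9):
    """Lexicographic comparison: fa beats fb if it is better on the first
    objective (in importance order) where the two differ by more than tol."""
    for a, b in zip(fa, fb):
        if abs(a - b) > tol:
            return a < b
    return False
```

    Because later objectives only break ties, a solution that is vastly better on a secondary objective can never compensate for a small loss on the primary one, which is why the method is sensitive to the chosen ordering.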

    Hu and Eberhart [24]: In this algorithm, only one objective is optimized at a time using a scheme similar to lexicographic ordering [13]. This approach adopts the ring (local best) topology. No external archive is adopted in this case. However, in a further version of this approach [25], the authors incorporate an external archive (called extended memory) and introduce some further improvements to their dynamic neighborhood PSO approach.

    C. Sub-Population Approaches

    These approaches involve the use of several subpopulations as single-objective optimizers. Then, the subpopulations somehow exchange information or recombine among themselves, aiming to produce trade-offs among the different solutions previously generated for the objectives that were separately optimized.

    Parsopoulos et al. [49] studied a parallel version of the Vector Evaluated Particle Swarm (VEPSO) method for multi-objective problems. VEPSO is a multi-swarm variant of PSO, which is inspired on the Vector Evaluated Genetic Algorithm (VEGA) [56, 57]. In VEPSO, each swarm is evaluated using only one of the objective functions of the problem under consideration, and the information it possesses for this objective function is communicated to the other swarms through the exchange of their best experience (gbest particle). The authors argue that this process can lead to Pareto optimal solutions.

    Chow and Tsui [8]: In this paper, the authors use PSO as an autonomous agent response learning algorithm. For that sake, the authors propose to decompose the award function of the autonomous agent into a set of local award functions and, in this way, to model the response extraction process as a multi-objective optimization problem. A modified PSO, called Multi-Species PSO, is introduced by considering each objective function as a species swarm. A communication channel is established between the neighboring swarms for transmitting the information of the best particles, in order to provide guidance for improving their objective values. Also, the authors use the flight formula of the fully connected topology, but include a neighbor swarm reference velocity. Such velocity is directly related with the best particle within each subswarm (similar to lbest).

    D. Pareto-Based Approaches

    These approaches use leader selection techniques based on Pareto dominance. The basic idea of all the approaches considered here is to select as leaders the particles that are nondominated with respect to the swarm. Note, however, that several variations of the leader selection scheme are possible, since most authors adopt additional information to select leaders (e.g., information provided by a density estimator) in order to avoid a random selection of a leader from the current set of nondominated solutions.
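    The common core of all these methods is the dominance test and the resulting leader pool, which can be sketched as follows (an illustrative fragment, assuming minimization; the function names are ours):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def leader_candidates(objs):
    """Indices of swarm members not dominated by any other member:
    the pool from which Pareto-based MOPSOs draw their leaders."""
    return [i for i, fi in enumerate(objs)
            if not any(dominates(fj, fi) for j, fj in enumerate(objs) if j != i)]
```

    The approaches surveyed below differ mainly in which additional criterion (density estimator, sigma value, niche count, etc.) is used to pick one leader out of this pool.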

    Moore and Chapman [41]: This algorithm was presented in an unpublished document and it is based on Pareto dominance. The authors emphasize the importance of performing both an individual and a group search (a cognitive component and a social component). In this approach, the personal best (pbest) of a particle is a list of all the nondominated solutions it has found in its trajectory. When selecting a pbest, a particle from the list is randomly chosen. Since the ring topology is used, when selecting the best particle of the neighborhood, the solutions contained in the pbest lists are compared, and a nondominated solution with respect to the neighborhood is chosen. The authors don't indicate how they choose the lbest particle when more than one nondominated solution is found in the neighborhood.

    Ray and Liew [53]: This algorithm (based on a fully connected topology) uses Pareto dominance and combines concepts of evolutionary techniques with the particle swarm. The approach uses a nearest neighbor density estimator to promote diversity (by means of a roulette selection scheme of leaders based on this value) and a multilevel sieve to handle constraints (for this, the authors adopt the constraint and objective matrices proposed in some of their previous research [52]). The set of leaders maintained by the authors can be considered an external archive.

    Fieldsend and Singh [21]: This approach uses an unconstrained elite external archive (in which a special data structure called dominated tree is adopted) to store the nondominated individuals found along the search process. The archive interacts with the primary population in order to define leaders. The selection of the gbest for a particle in the swarm is based on the structure defined by the dominated tree. First, a composite point of the tree is located based on dominance relations, and then the closest member (in objective function space) of the composite point is chosen as the leader. On the other hand, a set of personal best particles found (nondominated) is also maintained for each swarm member, and the selection is performed uniformly. This approach


    also uses a turbulence operator that is basically a mutation operator that acts on the velocity value used by the PSO algorithm.

    Coello et al. [11, 12]: This proposal is based on the idea of having an external archive in which every particle will deposit its flight experiences after each flight cycle. The updates to the external archive are performed considering a geographically-based system defined in terms of the objective function values of each particle. The search space explored is divided into hypercubes. Each hypercube receives a fitness value based on the number of particles it contains. Thus, in order to select a leader for each particle of the swarm, a roulette-wheel selection using these fitness values is first applied, to select the hypercube from which the leader will be taken. Once a hypercube has been selected, the leader is randomly chosen. This approach also uses a mutation operator that acts both on the particles of the swarm, and on the range of each design variable of the problem to be solved.
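    The grid-based leader selection just described can be sketched as follows; this is our own illustrative reconstruction (the binning scheme, inverse-occupancy weights and function names are assumptions, not the exact implementation of [11, 12]):

```python
import random
from collections import defaultdict

def hypercube_leader(archive, divisions=10, rng=random):
    """Bin archive members into a grid over objective space, give each
    occupied cell a fitness inversely proportional to its crowding, pick a
    cell by roulette wheel, then pick a leader uniformly inside that cell.
    Returns an archive index."""
    m = len(archive[0])
    lo = [min(f[k] for f in archive) for k in range(m)]
    hi = [max(f[k] for f in archive) for k in range(m)]
    cells = defaultdict(list)
    for i, f in enumerate(archive):
        idx = tuple(min(divisions - 1,
                        int((f[k] - lo[k]) / (hi[k] - lo[k] + 1e-12) * divisions))
                    for k in range(m))
        cells[idx].append(i)
    keys = list(cells)
    weights = [1.0 / len(cells[c]) for c in keys]   # sparse cells are favored
    chosen = rng.choices(keys, weights=weights)[0]
    return rng.choice(cells[chosen])
```

    Weighting cells inversely to their occupancy biases the search toward poorly covered regions of the front, which is the diversity-promoting intent of the grid.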

    In more recent work, Toscano and Coello [66] use the concept of Pareto dominance to determine the flight direction of a particle. The authors adopt clustering techniques to divide the population of particles into several swarms. This aims to provide a better distribution of solutions in decision variable space. Each sub-swarm has its own set of leaders (nondominated particles). In each sub-swarm, a PSO algorithm is executed (leaders are randomly chosen) and, at some point, the different sub-swarms exchange information: the leaders of each swarm are migrated to a different swarm in order to vary the selection pressure. Also, this approach does not use an external archive, since elitism in this case is an emergent process derived from the migration of leaders.

    Srinivasan and Hou [61]: This approach, called Particle Swarm Inspired Evolutionary Algorithm (PS-EA), is a hybrid between PSO and an evolutionary algorithm. The main aim is to use EA operators (mutation, for example) to emulate the workings of PSO mechanisms, based on a fully connected topology. Since the authors mention that the final swarm constitutes the final solution (Pareto front), we conclude that a plus selection is performed at each iteration of the algorithm. Also, the authors use a niche count and a Pareto ranking approach in order to assign a fitness value to the particles of the swarm. However, the selection technique used is not described in the paper.

    Mostaghim and Teich [44]: They propose a sigma method in which the leader for each particle is selected in order to improve the convergence and diversity of a MOPSO approach. The idea of the sigma method is similar to compromise programming [13]. In order to select a leader for each particle of the swarm, a sigma value is assigned to each particle of the swarm and of the external archive. Each particle of the swarm selects as its leader the particle of the external archive with the closest sigma value. The use of the sigma values makes the selection pressure of PSO even higher, which may cause premature convergence in some cases. The authors also use a turbulence operator, which is applied on decision variable space. This approach has been successfully applied to the molecular force field parametrization problem [42].
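    For the bi-objective case, the sigma value and the resulting leader assignment can be sketched as follows (an illustrative fragment; the two-objective formula is the one usually quoted for the sigma method, and the helper names are ours):

```python
def sigma(f):
    """Sigma value of a bi-objective vector: points lying on the same line
    through the origin of objective space share the same sigma."""
    f1, f2 = f
    return (f1 ** 2 - f2 ** 2) / (f1 ** 2 + f2 ** 2)

def sigma_leader(particle_f, archive):
    """Index of the archive member whose sigma value is closest to the
    particle's own sigma value."""
    s = sigma(particle_f)
    return min(range(len(archive)), key=lambda i: abs(sigma(archive[i]) - s))
```

    Each particle is thus pulled toward the archive member lying in its own "direction" in objective space, which is precisely what raises the selection pressure noted above.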

    In further work, Mostaghim and Teich [43] studied the influence of ε-dominance [36] on MOPSO methods. ε-dominance is compared with existing clustering techniques for fixing the external archive size, and the solutions are compared in terms of computational time, convergence and diversity. The results show that the ε-dominance method can find solutions much faster than the clustering technique, with a comparable (and even better in some cases) convergence and diversity. The authors suggest a new density measure (sigma method) inspired on their previous work [44]. Also, based on the idea that the initial external archive from which the particles have to select a leader has influence on the diversity of solutions, the authors propose the use of successive improvements adopting a previous external archive of solutions. In this way, in more recent work, Mostaghim and Teich [45] propose a new method called covering MOPSO (cvMOPSO) which retakes this idea. This method works in two phases. In phase 1, a MOPSO algorithm is run with an external archive with restricted size, and the goal is to obtain a good approximation of the Pareto front. In phase 2, the nondominated solutions obtained from phase 1 are considered as the input external archive of the cvMOPSO. The particles in the swarm of the cvMOPSO are divided into subswarms around each nondominated solution after the first generation. The task of the subswarms is to cover the gaps between the nondominated solutions obtained from phase 1. No restrictions on the archive size are imposed in phase 2.

    Bartz et al. [5]: This approach starts from the idea of introducing elitism (through the use of an external archive) into PSO. Different methods for selecting and deleting particles (leaders) from the archive are analyzed to generate a satisfactory approximation of the Pareto front. The deletion methods analyzed are based on the contribution of each particle to the diversity of the Pareto front. Selection methods are either inversely related to the fitness value or based on the previous success of each particle. The authors provide some statistical analysis in order to assess the impact of each of the parameters used by their approach.

    Li [37]: This approach is based on a fully connected topology and incorporates the main mechanisms of the


    NSGA-II [16] to the PSO algorithm. In this approach, once a particle has updated its position, instead of comparing the new position only against the pbest position of the particle, all the pbest positions of the swarm and

    all the new positions recently obtained are combined in just one set (giving a total of 2N solutions, where N is the size of the swarm). Then, the approach selects the best solutions among them to conform the next swarm (by means of nondominated sorting). The author doesn't specify which values are assigned to the velocity of pbest positions, in order to consider them as particles. This approach also selects the leaders randomly from the leaders set (stored in an external archive) among the best of them, based on two different mechanisms: a niche count and a nearest neighbor density estimator. This approach uses a mutation operator that is applied at each iteration step only to the particle with the smallest density estimator value (or the largest niche count).

    Reyes and Coello [60]: This approach is based on Pareto dominance and the use of a nearest neighbor density estimator for the selection of leaders (by means of a binary tournament). This proposal uses two external archives: one for storing the leaders currently used for performing the flight and another for storing the final solutions. The density estimator factor is used to filter out the list of leaders whenever the maximum limit imposed on such list is exceeded. Only the leaders with the best density estimator values are retained. On the other hand, the concept of ε-dominance is used to select the particles that will remain in the archive of final solutions. Additionally, the authors propose a scheme in which they subdivide the population (or swarm) into three different subsets. A different mutation operator is applied to each subset. Note, however, that for all other purposes, a single swarm is considered (e.g., for selecting leaders). This approach is based on a fully connected topology.
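    A typical nearest-neighbor density estimator of the kind used by several of the above approaches is the crowding distance; the following sketch follows the NSGA-II formulation rather than any one MOPSO's exact code:

```python
def crowding_distance(objs):
    """Crowding-distance density estimator: boundary solutions get infinity,
    interior ones the sum, over objectives, of the normalized gap between
    their two sorted neighbors. Larger values mean sparser regions."""
    n, m = len(objs), len(objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objs[i][k])
        span = (objs[order[-1]][k] - objs[order[0]][k]) or 1.0
        dist[order[0]] = dist[order[-1]] = float("inf")
        for r in range(1, n - 1):
            dist[order[r]] += (objs[order[r + 1]][k] - objs[order[r - 1]][k]) / span
    return dist
```

    Filtering a leader list then amounts to keeping the entries with the largest crowding distance, so crowded leaders are the first to be discarded.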

    Alvarez-Benitez et al. [2]: The authors propose methods based exclusively on Pareto dominance for selecting leaders from an unconstrained nondominated (external) archive. Three different selection techniques are presented: one technique that explicitly promotes diversity (called Rounds by the authors), one technique that explicitly promotes convergence (called Random), and finally one technique that is a weighted probabilistic method (called Prob) and forms a compromise between Random and Rounds. Also, the authors propose and evaluate four mechanisms for confining particles to the feasible region, that is, constraint-handling methods. The authors show that probabilistic selection favoring archival particles that dominate few particles provides good convergence towards the Pareto front while properly covering it at the same time. Also, they conclude that allowing particles to explore regions close to the constraint boundaries is important to ensure convergence to the Pareto front. This approach uses a turbulence factor that is added to the position of the particles with certain probability.

    Ho et al. [23]: The authors propose a novel formula for updating the velocity and position of particles, based on three main modifications to the known flight formula for the fully connected topology. First, since the authors argue that the random factors r1 and r2 in Equation 5 are not completely independent, they propose to use r2 = 1 − r1. Second, they propose to incorporate the term (1 − W) in the second and third terms of Equation 5, where W = rnd(0, 1). Third (and last), under the argument of allowing a particle to sometimes fly back, the authors propose to allow the first term of Equation 5 to be negative with a 50% probability. On the other hand, the authors introduce a craziness operator in order to promote diversity within the swarm. This craziness operator is applied (with certain probability) to the velocity vector before updating the position of a particle. Finally, the authors introduce one external archive for each particle and one global external archive for the whole swarm. The archive of each particle stores the latest Pareto solutions found by the particle and the global archive stores the current Pareto optimal set. Every time a particle updates its position, it selects its personal best from its own archive and the global best from the global archive. In both cases, the authors use a roulette selection mechanism based on the fitness values of the particles (assigned using the mechanism originally proposed by Zitzler et al. [74] for the SPEA algorithm) and on an age variable that the authors introduce and that is increased at each generation.
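    The three modifications to the velocity update can be sketched as follows; this is our own reading of the description above (parameter values and names are illustrative, not taken from [23]):

```python
import random

def ho_velocity(v, x, pbest, gbest, c1=2.0, c2=2.0, rng=random):
    """Modified velocity update in the spirit of Ho et al.: r2 is tied to
    r1, the cognitive and social terms are damped by (1 - W) with W drawn
    uniformly at random, and the inertia term flips sign half the time so
    particles can 'fly back'."""
    W = rng.random()                              # W = rnd(0, 1)
    r1 = rng.random()
    r2 = 1.0 - r1                                 # first modification: r2 = 1 - r1
    sign = 1.0 if rng.random() < 0.5 else -1.0    # third: negative inertia 50% of the time
    return [sign * W * vj
            + (1.0 - W) * c1 * r1 * (pb - xj)     # second: (1 - W) damping
            + (1.0 - W) * c2 * r2 * (gb - xj)
            for vj, xj, pb, gb in zip(v, x, pbest, gbest)]
```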

    Villalobos-Arias et al. [68]: The authors propose a new mechanism to promote diversity in multi-objective optimization problems. Although the approach is independent of the search engine adopted, they incorporate it into the MOPSO proposed in [12]. The new approach is based on the use of stripes that are applied on the objective function space. Based on an analysis for a bi-objective problem, the main idea of the approach is that the Pareto front of the problem is similar to the line determined by the minimal points of the objective functions. In this way, several points (that the authors call stripe centers) are distributed uniformly along such a line, and the particles of the swarm are assigned to the nearest stripe center. When using this approach for solving multi-objective problems with PSO, one leader is used in each stripe. Such a leader is selected by minimizing a weighted sum of the minimal points of the objective functions. The authors show that their approach overcomes the drawbacks of other popular mechanisms such as ε-dominance [36] and the sigma method proposed in [44].


    Salazar-Lechuga and Rowe [55]: The main idea of this approach is to use PSO to guide the search with the help of niche counts (applied on objective function space) [22] to spread the particles along the Pareto front. The approach uses an external archive to store the best particles (nondominated particles) found by the algorithm. Since this external archive helps to guide the search, the niche count is calculated for each of the particles in the archive, and the leaders are chosen from this set by means of a stochastic sampling method (roulette wheel). Also, the niche count is used as a criterion to update the external archive. Each time the archive is full and a new particle wants to get in, its niche count is compared with the niche count of the worst solution in the archive. If the new particle is better than the worst particle, then the new particle enters the archive and the worst particle is deleted. Niche counts are updated when inserting or deleting a particle from the archive.
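    A standard niche count with a triangular sharing function can be sketched as follows (an illustrative version; the sharing-function shape and the name sigma_share follow common fitness-sharing practice rather than the exact formulation of [55]):

```python
def niche_counts(objs, sigma_share):
    """Niche count in objective space: every pair of points closer than
    sigma_share contributes 1 - d/sigma_share to both counts. Each point
    also shares with itself (distance 0 contributes 1), so a fully
    isolated point has a count of exactly 1."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [sum(max(0.0, 1.0 - dist(f, g) / sigma_share) for g in objs)
            for f in objs]
```

    Lower counts mark less crowded archive members, which the roulette wheel then favors as leaders.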

    Raquel and Naval [51]: As in [60], this approach incorporates the concept of a nearest neighbor density estimator for selecting the global best particle and also for deleting particles from the external archive of nondominated solutions. When selecting a leader, the archive of nondominated solutions is sorted in descending order with respect to the density estimator, and a particle is randomly chosen from the top part of the list. On the other hand, when the external archive is full, it is again sorted in descending order with respect to the density estimator value and a particle is randomly chosen to be deleted from the bottom part of the list. This approach uses the mutation operator proposed in [12] in such a way that it is applied only during a certain number of generations at the beginning of the process. Finally, the authors adopt the constraint-handling technique from the NSGA-II [16].

    Zhao and Cao [71]: This approach is very similar to the proposal of Coello and Lechuga [11]. However, the authors indicate that they maintain two external archives, although one of them is actually a list that keeps the pbest particle for each member of the swarm. The other external archive stores the nondominated solutions found along the evolutionary process. This truncated archive is similar to the adaptive grid of PAES [34]. The authors apply their approach to solve the economic load dispatch problem. With this aim, they employ a fuzzy-based mechanism to extract the best compromise solution, in which they incorporate the preferences of the decision maker. The approach adopts a linear membership function to represent the goals of each objective function. This membership function is adopted to modify the ranking of the nondominated solutions so as to focus the search on the single solution that attains the maximum membership in the fuzzy set.

    Janson and Merkle [27] proposed a hybrid particle

    swarm optimization algorithm for multi-objective optimization, called ClustMPSO. ClustMPSO combines the PSO algorithm with clustering techniques to divide all particles into several subswarms. For this aim, the authors use the K-means algorithm. Each subswarm has its own nondominated front and the total nondominated front is obtained from the union of the fronts of all the subswarms. Each particle randomly selects its neighborhood best (lbest) particle from the nondominated front of the swarm to which it belongs. Also, a particle only selects a new lbest particle when the current one is no longer a nondominated solution. On the other hand, the personal best (pbest) of each particle is updated based on dominance relations. Finally, the authors define that a subswarm is dominated when none of its particles belongs to the total nondominated front. In this way, when a subswarm is dominated for a certain number of consecutive generations, the subswarm is relocated. The proposed algorithm is tested on an artificial multi-objective optimization function and on a real-world problem from biochemistry, called the molecular docking problem. The authors reformulate the molecular docking problem as a multi-objective optimization problem and, in this case, the updating of the pbest particle is also based on the weighted sum of the objectives of the problem. ClustMPSO outperforms a well-known Lamarckian Genetic Algorithm that had been previously adopted to solve such problem.

    E. Combined Approaches

    Mahfouf et al. [39]: The authors propose an Adaptive Weighted PSO (AWPSO) algorithm, in which the velocity is modified by including an acceleration term that increases as the number of iterations increases. This aims to enhance the global search ability at the end of the run and to help the algorithm jump out of local optima. Also, a weighted aggregating function is introduced within the algorithm for performance evaluation and to guide the selection of the personal and global bests. The authors use dynamic weights to generate Pareto optimal solutions. When the population is losing diversity, a mutation operator is applied to the positions of certain particles and the best of them are retained. Finally, the authors include a nondominated sorting algorithm to select the particles from one iteration to the next. Since plus selection is adopted, an external archive is not necessary in this case. This approach is applied to the optimal design of heat-treated alloy steels.

    Xiao-hua et al. [69]: The authors propose an Intelligent Particle Swarm Optimization (IPSO) algorithm for multi-objective problems based on an Agent-Environment-Rules (AER) model, to provide an appropriate selection pressure to propel the swarm population towards the Pareto optimal front. In this model, the authors modify the fully connected flight formula by including the lbest position of the neighborhood of each particle. The neighborhood of a particle is determined by a lattice-like topology. On the other hand, each particle is taken as an agent particle with the ability of memory, communication, response, cooperation and self-learning. Each particle has its position, velocity and energy, which is related to its fitness. All particles live in a lattice-like environment, which is called an agent lattice, and each particle is fixed on a lattice-point. In order to survive in the system, they compete or cooperate with their neighbors so that they can gain more resources (increase energies). Each particle has the ability of cloning itself, and the number of clones produced depends on the energy of the particle. General agent particles and latency agent particles (those who have smaller energy but contain certain features, e.g., favoring diversity, that make them good candidates to be cloned) will be cloned. The aim of the clonal operator (which is modeled on the clonal selection theory also adopted with artificial immune systems [46]) is to increase the competition among particles, maintain diversity of the swarm and improve the convergence of the process. Also, a clonal mutation operator is used. Leaders are selected based on the energy values of the particles. Finally, this approach adopts an external archive in order to store the nondominated solutions found throughout the run and to provide the final solution set.

    F. Other Approaches

    Here, we consider the approaches that could not fit any of the main categories previously described.

    Li [38]: This author proposes the maximinPSO, which uses a fitness function derived from the maximin strategy proposed by Balling [4] to determine Pareto domination. The author shows that one advantage of this approach is that no additional clustering or niching technique is needed, since the maximin fitness of a solution can tell us not only if a solution is dominated or not, but also if it is clustered with other solutions, i.e., the approach also provides diversity information. In this approach, for each particle, a different leader is selected for each of the decision variables to conform a single global best. Leaders (stored in an external archive) are randomly selected based on the maximin fitness.
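    The maximin fitness itself is compact enough to sketch directly (an illustrative fragment assuming minimization; the function name is ours):

```python
def maximin_fitness(objs):
    """Maximin fitness of each solution i:
        max over j != i of ( min over objectives of f_i - f_j ).
    A value < 0 marks a nondominated solution and > 0 a dominated one,
    while values near 0 indicate clustering with other solutions, which
    is why the measure also carries diversity information."""
    return [max(min(a - b for a, b in zip(fi, fj))
                for j, fj in enumerate(objs) if j != i)
            for i, fi in enumerate(objs)]
```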

    Zhang et al. [70]: This approach (based on a fully connected topology) attempts to improve the selection of gbest and pbest when the velocity of each particle is updated. For each objective function, there exist both a gbest and a pbest for each particle. In order to update the velocity of a particle, the algorithm defines the gbest of a particle as the average of the complete set of gbest particles. Analogously, the pbest is computed using either a random choice or the average from the complete set of pbest values. This choice depends on the dispersion degree between the gbest and pbest values of each particle.

    VI. Convergence Properties of PSO and MOPSO

Recently, some theoretical studies about the convergence properties of PSO have been published. As in the case of many evolutionary algorithms, these studies have concluded that the performance of PSO is sensitive to control parameter choices [20].

Most of the theoretical studies are based on simplified PSO models, in which a swarm consisting of a single one-dimensional particle is studied. The pbest and gbest particles are assumed to be constant throughout the process. Also, the terms φ1 = c1r1 and φ2 = c2r2 (used in Equation 5) are assumed to be constant. Under these conditions, particle trajectories and convergence of the swarm have been analyzed.

In the theoretical studies developed for PSO, convergence has been defined as follows:

Definition 6. Considering the sequence of global best solutions {gbest_t}, t = 0, 1, 2, ..., we say that the swarm converges iff

    lim_{t → ∞} gbest_t = p,

where p is an arbitrary position in the search space.

Since p refers to an arbitrary solution, Definition 6 does not mean convergence to a local or global optimum.

The first studies on the convergence properties of PSO were developed by Ozcan and Mohan [47, 48]. They studied a PSO under the conditions previously described but, in addition, their model did not consider the inertia weight. They concluded that, when 0 < φ < 4, where φ = φ1 + φ2, the trajectory of a particle is a sinusoidal wave whose amplitude and frequency are determined by the initial conditions and parameter choices. They also concluded that the periodic nature of the trajectory may cause a particle to repeatedly search regions of the search space already visited, unless another particle in its neighborhood finds a better solution.

In [67], van den Bergh developed a model of PSO under the same conditions, but considering the inertia weight. Van den Bergh proved that, when w > (c1 + c2)/2 - 1, the particle converges to the point

    (φ1 · pbest + φ2 · gbest) / (φ1 + φ2).

In this way, if c1 = c2, the particle converges to the point

    (pbest + gbest) / 2.
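This fixed point is easy to verify numerically with the simplified model used in the analysis (one particle, one dimension, constant pbest, gbest, φ1 and φ2). The sketch below uses our own names and illustrative parameter values:

```python
def simulate_1d(w, phi1, phi2, pbest, gbest, steps=500):
    """Deterministic one-dimensional PSO with constant guides:
    v <- w*v + phi1*(pbest - x) + phi2*(gbest - x); x <- x + v."""
    x, v = 0.0, 1.0
    for _ in range(steps):
        v = w * v + phi1 * (pbest - x) + phi2 * (gbest - x)
        x += v
    return x
```

With phi1 = phi2 = 0.7 and w = 0.6 (parameters inside the convergence region), the particle settles at (pbest + gbest)/2, e.g. at 3.0 for pbest = 2.0 and gbest = 4.0.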

Since these conclusions were obtained under the assumption of φ1 and φ2 being constants, van den Bergh generalized his model by considering the stochastic nature of φ1 and φ2. In this


Approach                         | Neighborhood topology     | Leader selection based on       | External archive | Dynamic W          | Mutation operator
---------------------------------|---------------------------|---------------------------------|------------------|--------------------|------------------
Aggregating approaches           |                           |                                 |                  |                    |
Parsopoulos and Vrahatis [50]    | fully connected           | single-objective                | no               | yes (1.0 -> 0.4)   | no
Baumgartner et al. [6]           | fully connected           | single-objective                | no               | no                 | no
Lexicographic ordering           |                           |                                 |                  |                    |
Hu and Eberhart [24]             | ring                      | single-objective                | no               | yes, rnd(0.5, 1.0) | no
Hu et al. [25]                   | ring                      | single-objective                | yes              | yes, rnd(0.5, 1.0) | no
Sub-population approaches        |                           |                                 |                  |                    |
Parsopoulos et al. [49]          | fully connected           | single-objective                | yes              | no                 | no
Chow and Tsui [8]                | fully connected           | single-objective                | no               | no                 | no
Pareto-based approaches          |                           |                                 |                  |                    |
Moore and Chapman [41]           | ring                      | dominance                       | no               | no                 | no
Ray and Liew [53]                | fully connected           | density estimator               | yes              | no                 | no
Fieldsend and Singh [21]         | fully connected           | dominance & closeness           | yes              | no                 | yes
Coello et al. [11, 12]           | fully connected           | density of solutions            | yes              | no                 | yes
Toscano and Coello [66]          | fully connected           | random                          | no               | no                 | no
Srinivasan and Hou [61]          | fully connected           | niche count & dominance         | no               | no                 | yes
Mostaghim and Teich [44]         | fully connected           | sigma value                     | yes              | no                 | yes
Mostaghim and Teich [43]         | fully connected           | sigma value                     | yes              | no                 | yes
Mostaghim and Teich [45]         | fully connected           | sigma value                     | yes              | no                 | yes
Bartz et al. [5]                 | fully connected           | density of solutions; success   | yes              | no                 | no
Li [37]                          | fully connected           | niche count; density estimator  | yes              | yes (1.0 -> 0.4)   | yes
Reyes and Coello [60]            | fully connected           | density estimator               | yes              | yes, rnd(0.1, 0.5) | yes
Alvarez-Benitez et al. [2]       | fully connected           | dominance                       | yes              | no                 | yes
Ho et al. [23]                   | fully connected           | fitness & age                   | yes              | yes (proposed)     | yes
Villalobos-Arias et al. [68]     | fully connected           | stripes                         | yes              | no                 | yes
Salazar-Lechuga and Rowe [55]    | fully connected           | niche count                     | yes              | no                 | no
Raquel and Naval [51]            | fully connected           | density estimator               | yes              | no                 | yes
Zhao and Cao [71]                | fully connected           | fuzzy membership                | yes              | no                 | no
Janson and Merkle [27]           | fully connected           | random                          | yes              | no                 | no
Combined approaches              |                           |                                 |                  |                    |
Mahfouf et al. [39]              | fully connected           | single-objective                | no               | yes, rnd(0.15, 1.0)| yes
Xiao-hua et al. [69]             | fully connected & lattice | energy value                    | yes              | yes (0.6 -> 0.2)   | yes
Other approaches                 |                           |                                 |                  |                    |
Li [38]                          | fully connected           | maximin fitness                 | yes              | yes (1.0 -> 0.4)   | no
Zhang et al. [70]                | fully connected           | composite leader                | no               | yes (0.8 -> 0.4)   | no

Table 1: Complete list of the MOPSO proposals reviewed. For each proposal, we indicate the corresponding neighborhood topology adopted, the leader selection scheme used, and whether the approach incorporates an external archive, some dynamic scheme for the inertia weight (W), and a mutation operator.


case, he concluded (assuming uniform distributions) that the particle then converges to the position

    (1 - a) · pbest + a · gbest,

where a = c2 / (c1 + c2). In this way, van den Bergh showed that a particle converges to a weighted average of its personal best and its neighborhood best position.

As we said before, in order to ensure convergence, the condition w > (c1 + c2)/2 - 1 must hold. However, it is possible to choose values of c1, c2 and w such that the condition is violated and the swarm still converges [67]: if

    ratio_φ = φ_crit / (c1 + c2)

is close to 1.0, where φ_crit = sup{φ | φ/2 - 1 < w, φ ∈ (0, c1 + c2]}, the swarm has convergent behavior. This implies that the trajectory of the particle will converge most of the time, occasionally taking divergent steps.
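The sufficient condition can be checked directly; a minimal sketch (the function name is ours):

```python
def satisfies_convergence_condition(w, c1, c2):
    """van den Bergh's sufficient condition for the deterministic model:
    w > (c1 + c2)/2 - 1, together with 0 < w < 1."""
    return 0 < w < 1 and w > (c1 + c2) / 2 - 1
```

The popular setting w = 0.7298, c1 = c2 = 1.49618 satisfies the condition, while w = 0.4 with c1 = c2 = 2.0 violates it (and yet, as just discussed, such a swarm may still converge most of the time).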

The studies developed by Ozcan and Mohan, and by van den Bergh, consider trajectories that are not constricted. In [9], Clerc and Kennedy provide a theoretical analysis of particle behavior in which they introduce a constriction coefficient whose objective is to prevent the velocity from growing out of bounds.

As we have seen, the convergence of PSO has been proved. However, we can only ensure the convergence of PSO to the best position visited by all the particles of the swarm. In order to ensure convergence to a local or global optimum,

    two conditions are necessary:

1. The gbest_{t+1} solution can be no worse than the gbest_t solution (monotonic condition).

    2. The algorithm must be able to generate a solution in theneighborhood of the optimum with nonzero probability,from any solution x of the search space.

In [67], van den Bergh provides a proof to show that the basic PSO is not a local (nor a global) optimizer. This is due to the fact that, although PSO satisfies the monotonic condition indicated above, once the algorithm reaches the state where x = pbest = gbest for all particles in the swarm, no further progress will be made. The problem is that this state may be reached before gbest reaches a minimum, whether local or global. The basic PSO is therefore said to converge prematurely. In this way, the basic PSO algorithm is not a local (global) search algorithm, since it has no guaranteed convergence to a local (global) minimum from an arbitrary initial state.

Also, van den Bergh suggests two ways of extending PSO in order to make it a global search algorithm. The first is related to the generation of new random solutions. In general, the introduction of a mutation operator is useful. Nevertheless, forcing PSO to perform a random search in an area surrounding the global best position, that is, forcing the global best position to change in order to prevent stagnation (by means of a hill-climbing search, for example), is also a suitable mechanism [20]. On the other hand, van den Bergh also proposes a multi-start PSO in which, when the algorithm has converged (under some criterion), it records the best solution found and the particles are randomly reinitialized. To the best of our knowledge, to date there are no

studies about the convergence properties of MOPSOs. From the discussion previously provided, we can conclude that it is possible to ensure convergence by correctly setting the parameters of the flight formula. But, as in the case of single-objective optimization, such a property does not ensure convergence to the true Pareto front. In the case of multi-objective optimization, we may conclude that we still need conditions (1) and (2) to ensure convergence. However, in this case, condition (1) may change to:

1. The solutions contained in the external archive at iteration t + 1 should be nondominated with respect to the solutions generated in all iterations τ, 0 ≤ τ ≤ t + 1, so far (monotonic condition).

The use of ε-dominance based archiving as proposed in [36] ensures this condition, but the normal dominance-based strategies do not, unless they make sure that for any solution discarded from the archive, one with an equal or dominating objective vector is accepted. In this way, given a MOPSO approach, and assuming it satisfies condition (1), it remains to explore whether it satisfies condition (2), to ensure global convergence to the true Pareto front.
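As an illustration of why ε-dominance archiving preserves the monotonic condition, the following is a rough sketch of a box-based archive update in the spirit of [36] (a simplification of the published scheme; minimization is assumed and all identifiers are ours):

```python
import math

def box(f, eps):
    """Grid (box) index of objective vector f at resolution eps."""
    return tuple(math.floor(v / eps) for v in f)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, f, eps):
    """Accept f only if no archived member epsilon-dominates it; on
    acceptance, drop members whose boxes f's box dominates or equals.
    Assumes archived boxes are mutually nondominated and pairwise
    distinct, an invariant this update maintains."""
    bf = box(f, eps)
    for g in archive:
        bg = box(g, eps)
        if dominates(bg, bf) or (bg == bf and not dominates(f, g)):
            return archive  # f rejected: archive quality never degrades
    return [g for g in archive
            if not (dominates(bf, box(g, eps)) or box(g, eps) == bf)] + [f]
```

Because a solution is discarded only when its box is dominated by (or replaced within) the box of an accepted solution, the archive at iteration t + 1 can never be degraded with respect to earlier iterations.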

    VII. Future Research Paths

As we have seen, despite the fact that MOPSOs started to be developed less than ten years ago, the growth of this field has exceeded even the most optimistic expectations. By looking at the papers that we reviewed, the core of the work on MOPSOs has focused on algorithmic aspects, but there is much more to do in this area. In this section, we provide some insights regarding topics that we believe are worth investigating within the next few years:

Emphasis on Efficiency: The current MOPSOs are not particularly complex algorithms (in terms of their data structures, memory management and so on), and are quite effective (more than state-of-the-art multi-objective evolutionary algorithms in some cases). So, why make things more complicated regarding algorithmic design? Is there room for new developments in this regard? We believe that there is, but we have to focus our work in a new direction. For example, few people have tried to exploit the very high convergence rate commonly associated with PSO to design an "ultra-efficient" MOPSO. It would be very useful (for real-world applications) to have a MOPSO that could produce reasonably good approximations of the Pareto front of multi-objective optimization problems with 20


or 30 decision variables with less than 5000 fitness function evaluations. A first attempt to design such a type of MOPSO is reported in [64], but more work in that direction is certainly expected, since this topic has been recently explored with other types of multi-objective evolutionary algorithms as well [33].

Self-Adaptation of Parameters in MOPSOs: The design of MOPSOs with no parameters that have to be fine-tuned by the user is another topic that is worth studying. In evolutionary multi-objective optimization in general, the use of self-adaptation or on-line adaptation mechanisms is scarce (see for example [63, 1, 7]), and we are only aware of one multi-objective evolutionary algorithm which was designed to be parameterless: the microGA2 [65]. The design of a parameterless MOPSO requires a careful study of the velocity update formula adopted in PSO, and an assessment of the impact of each of its components on the performance of a MOPSO. Even the inertia and learning factors, which are normally assumed constant in PSO, may benefit from an on-line adaptation mechanism when dealing with multi-objective optimization problems.7

Theoretical Developments: There is not much theoretical work on PSO in general (see for example [9]) and, therefore, the lack of research on theoretical aspects of MOPSOs is, by no means, surprising. It would be interesting to perform a theoretical study of the run-time and convergence properties of a MOPSO (see Section VI). Other aspects such as the fitness landscapes and dynamics of a MOPSO are also very attractive theoretical research topics.

Applications: Evidently, no algorithm will ever be useful if we cannot find a good application for it. MOPSOs have been used in a few applications (see Section V), but not as extensively as other multi-objective evolutionary algorithms. The reason may be that MOPSOs are younger and less known than, for example, multi-objective genetic algorithms. However, a well-designed MOPSO may be quite useful in real-world applications, mainly if, as we mentioned before, its very fast convergence rate is properly exploited. At some point in the near future, we believe that there will be an important growth in the number of applications that adopt MOPSOs as their search engine.

    VIII. Conclusions

We have reviewed the state-of-the-art regarding extensions of PSO to handle multiple objectives. We have started by providing a short introduction to PSO in which we described its basic algorithm and its main topologies. We have also indicated the main issues that have to be considered when extending PSO to multi-objective optimization, and then we have analyzed each of them in more detail.

7 Readers interested in this topic may be interested in looking at the work of Maurice Clerc, available at: http://clerc.maurice.free.fr/pso/.

    We have also proposed a taxonomy to classify the currenttechniques reported in the specialized literature, and we haveprovided a survey of approaches based on such a taxonomy.

Finally, we have provided some topics that seem (from the authors' perspective) very promising paths for future research in this area. Considering the current rate of growth of this area, we expect a lot more activity within the next few years. Moreover, the switch to new areas different from pure algorithm development may attract newcomers to this field and may contribute to keeping it alive for several more years.

    Acknowledgments

    The authors gratefully acknowledge the comments fromMarco Laumanns and the anonymous reviewers, whichgreatly helped them to improve the contents of this paper.

The first author acknowledges support from CONACyT through a scholarship to pursue graduate studies at the Computer Science Section of the Electrical Engineering Department at CINVESTAV-IPN. The second author acknowledges support from CONACyT project number 45683.

    References

[1] Hussein A. Abbass. The self-adaptive Pareto differential evolution algorithm. In Congress on Evolutionary Computation (CEC'2002), volume 1, pages 831-836, Piscataway, New Jersey, May 2002. IEEE Service Center.

[2] Julio E. Alvarez-Benitez, Richard M. Everson, and Jonathan E. Fieldsend. A MOPSO algorithm based exclusively on Pareto dominance concepts. In Third International Conference on Evolutionary Multi-Criterion Optimization, EMO 2005, pages 459-473, Guanajuato, Mexico, 2005. LNCS 3410, Springer-Verlag.

[3] Peter J. Angeline. Evolutionary optimization versus particle swarm optimization: Philosophy and performance differences. In V.W. Porto, N. Saravanan, D. Waagen, and A.E. Eiben, editors, Evolutionary Programming VII. 7th International Conference, EP'98, pages 601-610. Springer. Lecture Notes in Computer Science Vol. 1447, San Diego, California, USA, March 1998.

[4] Richard Balling. The maximin fitness function; multi-objective city and regional planning. In Carlos M. Fonseca, Peter J. Fleming, Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele, editors, Second International Conference on Evolutionary Multi-Criterion Optimization, EMO 2003, pages 1-15, Faro, Portugal, April 2003.


    Springer. Lecture Notes in Computer Science. Volume2632.

[5] Thomas Bartz-Beielstein, Philipp Limbourg, Konstantinos E. Parsopoulos, Michael N. Vrahatis, Jörn Mehnen, and Karlheinz Schmitt. Particle swarm optimizers for Pareto optimization with enhanced archiving techniques. In Congress on Evolutionary Computation (CEC'2003), volume 3, pages 1780-1787, Canberra, Australia, December 2003. IEEE Press.

[6] U. Baumgartner, Ch. Magele, and W. Renhart. Pareto optimality and particle swarm optimization. IEEE Transactions on Magnetics, 40(2):1172-1175, March 2004.

[7] Dirk Büche, Sibylle Müller, and Petros Koumoutsakos. Self-adaptation for multi-objective evolutionary algorithms. In Carlos M. Fonseca, Peter J. Fleming, Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele, editors, Second International Conference on Evolutionary Multi-Criterion Optimization, EMO 2003, pages 267-281, Faro, Portugal, April 2003. Springer. Lecture Notes in Computer Science. Volume 2632.

[8] Chi-kin Chow and Hung-tat Tsui. Autonomous agent response learning by a multi-species particle swarm optimization. In Congress on Evolutionary Computation (CEC'2004), volume 1, pages 778-785, Portland, Oregon, USA, June 2004. IEEE Service Center.

[9] Maurice Clerc and James Kennedy. The particle swarm: explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1):58-73, February 2002.

[10] Carlos A. Coello Coello. A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowledge and Information Systems. An International Journal, 1(3):269-308, August 1999.

[11] Carlos A. Coello Coello and Maximino Salazar Lechuga. MOPSO: A proposal for multiple objective particle swarm optimization. In Congress on Evolutionary Computation (CEC'2002), volume 2, pages 1051-1056, Piscataway, New Jersey, May 2002. IEEE Service Center.

[12] Carlos A. Coello Coello, Gregorio Toscano Pulido, and Maximino Salazar Lechuga. Handling multiple objectives with particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8(3):256-279, June 2004.

[13] Carlos A. Coello Coello, David A. Van Veldhuizen, and Gary B. Lamont. Evolutionary Algorithms for Solving Multi-Objective Problems. Kluwer Academic Publishers, New York, May 2002.

[14] Indraneel Das and John Dennis. A closer look at drawbacks of minimizing weighted sums of objectives for Pareto set generation in multicriteria optimization problems. Structural Optimization, 14(1):63-69, 1997.

[15] Kalyanmoy Deb and David E. Goldberg. An investigation of niche and species formation in genetic function optimization. In J. David Schaffer, editor, Proceedings of the Third International Conference on Genetic Algorithms, pages 42-50, San Mateo, California, June 1989. George Mason University, Morgan Kaufmann Publishers.

[16] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182-197, April 2002.

[17] Russell C. Eberhart, Roy Dobbins, and Patrick K. Simpson. Computational Intelligence PC Tools. Morgan Kaufmann Publishers, 1996.

[18] Russell C. Eberhart and Yuhui Shi. Comparison between genetic algorithms and particle swarm optimization. In V. W. Porto, N. Saravanan, D. Waagen, and A.E. Eiben, editors, Proceedings of the Seventh Annual Conference on Evolutionary Programming, pages 611-619. Springer-Verlag, March 1998.

[19] Andries P. Engelbrecht, editor. Computational Intelligence: An Introduction. John Wiley & Sons, England, 2002.

[20] Andries P. Engelbrecht. Fundamentals of Computational Swarm Intelligence. John Wiley & Sons, 2005.

[21] Jonathan E. Fieldsend and Sameer Singh. A multi-objective algorithm based upon particle swarm optimisation, an efficient data structure and turbulence. In Proceedings of the 2002 U.K. Workshop on Computational Intelligence, pages 37-44, Birmingham, UK, September 2002.

[22] David E. Goldberg and Jon Richardson. Genetic algorithms with sharing for multimodal function optimization. In Proceedings of the Second International Conference on Genetic Algorithms, pages 41-49. Lawrence Erlbaum Associates, 1987.

[23] S.L. Ho, Yang Shiyou, Ni Guangzheng, Edward W.C. Lo, and H.C. Wong. A particle swarm optimization-based method for multiobjective design optimizations. IEEE Transactions on Magnetics, 41(5):1756-1759, May 2005.

[24] Xiaohui Hu and Russell Eberhart. Multiobjective optimization using dynamic neighborhood particle swarm optimization. In Congress on Evolutionary Computation (CEC'2002), volume 2, pages 1677-1681, Piscataway, New Jersey, May 2002. IEEE Service Center.


[25] Xiaohui Hu, Russell C. Eberhart, and Yuhui Shi. Particle swarm with extended memory for multiobjective optimization. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium, pages 193-197, Indianapolis, Indiana, USA, April 2003. IEEE Service Center.

[26] Xiaohui Hu, Russell C. Eberhart, and Yuhui Shi. Swarm intelligence for permutation optimization: A case study on n-queens problem. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium, pages 243-246, Indianapolis, Indiana, USA, 2003. IEEE.

[27] Stefan Janson and Daniel Merkle. A new multi-objective particle swarm optimization algorithm using clustering applied to automated docking. In María J. Blesa, Christian Blum, Andrea Roli, and Michael Sampels, editors, Hybrid Metaheuristics, Second International Workshop, HM 2005, pages 128-142, Barcelona, Spain, August 2005. Springer. Lecture Notes in Computer Science Vol. 3636.

[28] Stefan Janson and Martin Middendorf. A hierarchical particle swarm optimizer. In Congress on Evolutionary Computation (CEC'2003), pages 770-776, Canberra, Australia, 2003. IEEE Press.

[29] Yaochu Jin, Tatsuya Okabe, and Bernhard Sendhoff. Dynamic weighted aggregation for evolutionary multi-objective optimization: Why does it work and how? In Lee Spector, Erik D. Goodman, Annie Wu, W.B. Langdon, Hans-Michael Voigt, Mitsuo Gen, Sandip Sen, Marco Dorigo, Shahram Pezeshk, Max H. Garzon, and Edmund Burke, editors, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2001), pages 1042-1049, San Francisco, California, 2001. Morgan Kaufmann Publishers.

[30] James Kennedy and Russell C. Eberhart. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, pages 1942-1948, Piscataway, New Jersey, 1995. IEEE Service Center.

[31] James Kennedy and Russell C. Eberhart. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE Conference on Systems, Man, and Cybernetics, pages 4104-4109, Piscataway, New Jersey, 1997. IEEE Service Center.

[32] James Kennedy and Russell C. Eberhart. Swarm Intelligence. Morgan Kaufmann Publishers, San Francisco, California, 2001.

[33] Joshua Knowles and Evan J. Hughes. Multiobjective optimization on a budget of 250 evaluations. In Carlos A. Coello Coello, Arturo Hernández Aguirre, and Eckart Zitzler, editors, Third International Conference on Evolutionary Multi-Criterion Optimization, EMO 2005, pages 176-190, Guanajuato, México, March 2005. Springer. Lecture Notes in Computer Science Vol. 3410.

[34] Joshua D. Knowles and David W. Corne. Approximating the nondominated front using the Pareto Archived Evolution Strategy. Evolutionary Computation, 8(2):149-172, 2000.

[35] Harold W. Kuhn and Albert W. Tucker. Nonlinear programming. In J. Neyman, editor, Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pages 481-492, Berkeley, California, 1951. University of California Press.

[36] Marco Laumanns, Lothar Thiele, Kalyanmoy Deb, and Eckart Zitzler. Combining convergence and diversity in evolutionary multi-objective optimization. Evolutionary Computation, 10(3):263-282, Fall 2002.

[37] Xiaodong Li. A non-dominated sorting particle swarm optimizer for multiobjective optimization. In Erick Cantú-Paz et al., editors, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2003), pages 37-48. Springer. Lecture Notes in Computer Science Vol. 2723, July 2003.

[38] Xiaodong Li. Better spread and convergence: Particle swarm multiobjective optimization using the maximin fitness function. In Kalyanmoy Deb et al., editors, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2004), pages 117-128, Seattle, Washington, USA, June 2004. Springer-Verlag, Lecture Notes in Computer Science Vol. 3102.

[39] Mahdi Mahfouf, Min-You Chen, and Derek Arthur Linkens. Adaptive weighted particle swarm optimisation for multi-objective optimal design of alloy steels. In Parallel Problem Solving from Nature - PPSN VIII, pages 762-771, Birmingham, UK, September 2004. Springer-Verlag. Lecture Notes in Computer Science Vol. 3242.

    [40] Kaisa M