


Control Parameter Sensitivity Analysis of the Multi-guide Particle Swarm Optimization Algorithm

Kyle Erwin

Stellenbosch University, Computer Science Division, Stellenbosch

[email protected]

Andries Engelbrecht
Stellenbosch University

Department of Industrial Engineering, and Computer Science Division, Stellenbosch

[email protected]

ABSTRACT

This paper conducts a sensitivity analysis of the recently proposed multi-objective optimization algorithm, namely the multi-guide particle swarm optimization algorithm (MGPSO). The MGPSO uses subswarms to explore the search space, where each subswarm optimises one of the multiple objectives. A bounded archive is used to share previously found non-dominated solutions between subswarms. A third term, the archive guide, is added to the velocity update equation and represents a randomly selected solution from the archive. The influence of the archive guide on a particle is controlled by the archive balance coefficient, which weights the archive guide relative to the social guide. The original implementation of the MGPSO used static values randomly sampled from a uniform distribution in the range [0,1] for the archive balance coefficient. This paper investigates a number of approaches to dynamically adjust this control parameter. These approaches are evaluated on a variety of multi-objective optimization problems. It is shown that a linearly increasing strategy and stochastic strategies outperformed the standard approach to initializing the archive balance coefficient on two-objective and three-objective optimization problems.

CCS CONCEPTS

• Computing methodologies → Artificial intelligence; Search methodologies; Continuous space search;

KEYWORDS

Particle swarm optimization, multi-objective optimization, multi-guide particle swarm optimization, adaptive control parameters

ACM Reference Format:
Kyle Erwin and Andries Engelbrecht. 2019. Control Parameter Sensitivity Analysis of the Multi-guide Particle Swarm Optimization Algorithm. In Genetic and Evolutionary Computation Conference (GECCO ’19), July 13–17, 2019, Prague, Czech Republic. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3321707.3321739

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
GECCO ’19, July 13–17, 2019, Prague, Czech Republic
© 2019 Association for Computing Machinery.
ACM ISBN 978-1-4503-6111-8/19/07. . . $15.00
https://doi.org/10.1145/3321707.3321739

1 INTRODUCTION

A new particle swarm optimization (PSO) based multi-objective optimization (MOO) algorithm, namely the multi-guide PSO (MGPSO), has been shown to be highly competitive against a number of multi-objective PSO (MOPSO) and multi-objective evolutionary algorithms (MOEAs) [13]. The MGPSO uses multiple swarms, a swarm for each objective, to solve problems. Subswarms move through the search space in an effort to optimize their respective objectives. The MGPSO also makes use of a shared archive to store previously found non-dominated solutions. An archive attractor is added to the velocity update equation to share knowledge about best positions found among the subswarms. The contribution of the archive attractor relative to the social component is weighted by the control parameter 𝜆, referred to as the archive balance coefficient. The current implementation of the MGPSO initializes the archive balance coefficient to a random value sampled from a uniform distribution in the range [0,1] for each particle. The values for the archive balance coefficient are not changed during the search process and remain static.

The MGPSO was extensively compared against a number of state-of-the-art algorithms, namely the non-dominated sorting genetic algorithm II (NSGA-II) [10], the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [18], the strength Pareto evolutionary algorithm 2 (SPEA2) [6], and the Pareto envelope-based selection algorithm II (PESA-II) [4]. It was found to be very competitive on two-objective and three-objective multi-objective optimization problems (MOOPs).

This paper conducts a sensitivity analysis of the archive balance coefficient in order to gain a better understanding of its influence on the performance of the MGPSO. The outcome of this sensitivity analysis is a set of different approaches to initialize and adapt the archive balance coefficient. An empirical analysis of these strategies is conducted to find the best approach to control the archive balance coefficient values. It is shown that stochastic strategies, as well as a linearly increasing strategy, perform better than the standard approach for two- and three-objective problems.

The rest of this paper is organized as follows: Section 2 provides a review of the multi-objective optimization concepts used in this paper, and of PSO and the MGPSO. Section 3 presents different strategies for updating the archive balance coefficients. Section 4 discusses the empirical methodology used. The findings of this paper are presented in Section 5. Lastly, Section 6 concludes the paper.

2 BACKGROUND

This section presents the necessary background for the work presented in this paper. Section 2.1 describes multi-objective optimization. Section 2.2 discusses particle swarm optimization. Lastly, Section 2.3 details multi-guide particle swarm optimization.

2.1 Multi-objective Optimization

MOOPs consist of two or more sub-problems (in this paper, two or three), where solutions are often represented as a set of optimal trade-offs [15]. The Pareto-optimal front (POF) is the set of objective vectors of the non-dominated solutions. A non-dominated solution can be defined as a feasible solution that is not dominated by any other feasible solution that has been found. The goals of a MOOP algorithm are

∙ to find solutions as close to the true POF as possible,
∙ to find as many non-dominated solutions as possible, and
∙ to obtain an even spread of these solutions.

Formally, a multi-objective optimization problem 𝑓 with no constraints can be defined as follows, assuming minimization:

    min f(x) = (f_1(x), f_2(x), ..., f_m(x))    (1)

where 𝑚 is the number of objectives, and 𝑥 is a solution within the defined boundaries of the search space.
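To make the dominance relation above concrete, the following sketch (in Python; the function names and data layout are illustrative, not part of the MGPSO implementation) checks Pareto dominance under minimization and extracts the non-dominated subset of a set of objective vectors.

```python
from typing import Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse than b in every objective and strictly better in at least one."""
    no_worse = all(ai <= bi for ai, bi in zip(a, b))
    strictly_better = any(ai < bi for ai, bi in zip(a, b))
    return no_worse and strictly_better

def non_dominated(objectives: list) -> list:
    """Return the objective vectors that are not dominated by any other vector in the list."""
    return [a for a in objectives
            if not any(dominates(b, a) for b in objectives if b is not a)]
```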

2.2 Particle Swarm Optimization

Particle swarm optimization (PSO) is a stochastic population-based search algorithm based on the simulation of the social behavior of birds within a flock [7][12]. The population, referred to as the swarm, consists of 𝑛 particles initialized within the bounds of a hypercube, where 𝑛 is the swarm size. Particles retain a memory of their personal best and neighborhood best positions found throughout the duration of the search, as well as their velocity, which is initially zero. Information about best positions found by particles is exchanged within a defined neighborhood topology [11][12]. The objective function 𝑓 is used to determine the quality of a particle's position. Particles move through the search space by updating their position with their velocity, 𝑣𝑖, or step size, at each time step. Particles update their positions as follows:

    x_i(t + 1) = x_i(t) + v_i(t + 1)    (2)

Ideally, PSO should balance the time spent between exploring the search space and exploiting promising areas. Given sufficient time, the PSO will reach an equilibrium provided that the PSO control parameters 𝑤, 𝑐1, and 𝑐2 are initialized to satisfy a PSO convergence condition [3].

2.3 Multi-Guide Particle Swarm Optimization

PSO, as discussed in Section 2.2, was developed to solve single-objective optimization problems. The MGPSO utilizes multiple swarms to find solutions to MOOPs, as well as a bounded archive to share non-dominated solutions between subswarms. For an 𝑛𝑚-objective problem, the MGPSO uses 𝑛𝑚 subswarms, where each subswarm optimizes one of the objective functions. An archive guide, weighted by the archive balance coefficient 𝜆𝑖, is added to the velocity update of each particle to attract particles to solutions found within the archive. Velocities are updated using

    v_i(t + 1) = w v_i(t) + c_1 r_1 (y_i(t) − x_i(t)) + λ_i c_2 r_2 (ŷ_i(t) − x_i(t)) + (1 − λ_i) c_3 r_3 (a_i(t) − x_i(t))    (3)

where r_1, r_2, and r_3 are vectors of random values, each sampled from a standard uniform distribution in [0,1]; c_1, c_2, and c_3 are positive constants referred to as the cognitive, social, and archive acceleration coefficients respectively; y_i is the personal best position of particle i, ŷ_i is the best position within the particle's neighborhood, and a_i is the archive guide. Constants c_1 and c_2 are responsible for controlling the amount of influence that the cognitive and social guides have. The inertia weight, w, introduced by Shi and Eberhart [14], controls how much influence the previous velocity has on the step size.

The archive balance coefficient balances the influence of the archive and that of the social component. Smaller values of λ_i will exploit the archive guide while proportionally decreasing the influence of the social guide. Similarly, larger values of λ_i encourage the social guide while discouraging the archive guide. The MGPSO initializes λ_i for each particle by randomly selecting a value from a uniform distribution in the range [0, 1] [13]. The sampled λ_i values remain static throughout the duration of the search. Random λ_i values have been shown to increase convergence to the POF while maintaining diversity in non-dominated solutions [13]. Algorithm 1 contains the pseudocode for the MGPSO.
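As an illustration of equations (2) and (3), the sketch below performs one velocity and position update for a single particle. It is a minimal sketch, not the CIlib implementation; the parameter defaults are the ZDT1 values from Table 2, and all variable names are our own.

```python
import numpy as np

def mgpso_update(x, v, y, y_hat, a, lam, w=0.475, c1=1.80, c2=1.10, c3=1.80, rng=np.random):
    """One MGPSO update for a single particle (equations (3) and (2)).
    x: position, v: velocity, y: personal best, y_hat: neighbourhood best,
    a: archive guide, lam: archive balance coefficient lambda_i."""
    r1 = rng.uniform(0.0, 1.0, size=np.shape(x))  # component-wise random vectors in [0, 1]
    r2 = rng.uniform(0.0, 1.0, size=np.shape(x))
    r3 = rng.uniform(0.0, 1.0, size=np.shape(x))
    v_new = (w * v
             + c1 * r1 * (y - x)                  # cognitive component
             + lam * c2 * r2 * (y_hat - x)        # social component, weighted by lambda_i
             + (1.0 - lam) * c3 * r3 * (a - x))   # archive component, weighted by (1 - lambda_i)
    x_new = x + v_new
    return x_new, v_new
```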

The archive guide is a non-dominated solution selected from the archive using a randomly created tournament of size three, from which the least crowded [2] solution in the tournament is chosen as the archive guide [13]. The least crowded solution will facilitate a more diverse Pareto-optimal front, because the MGPSO will focus more on sparsely populated areas of the objective space.
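A minimal sketch of this selection step, assuming the archive is a list of entries for which a crowding-distance value can be computed (the helper name crowding_distance is hypothetical):

```python
import random

def select_archive_guide(archive, crowding_distance, tournament_size=3, rng=random):
    """Randomly draw a tournament from the archive and return the least crowded
    entry, i.e. the entry with the largest crowding distance [2]."""
    tournament = rng.sample(archive, min(tournament_size, len(archive)))
    return max(tournament, key=crowding_distance)
```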

The archive's maximum capacity is set to the number of particles within the search space; this number does not change over time. Once a non-dominated solution is found, and the solution to be inserted is not dominated by any other solution in the archive according to the dominance relation, the algorithm inserts the non-dominated solution into the archive.


Algorithm 1 Multi-guide Particle Swarm Optimization

 1: t = 0
 2: for each objective k = 1, ..., n_m do
 3:     Create and initialize a swarm, S_k, of n_{s_k} particles uniformly within a predefined hypercube of dimension d
 4:     for each particle i = 1, ..., S_k.n_{s_k} do
 5:         Let f_k be the objective function
 6:         Let S_k.y_i represent the personal best position of particle S_k.x_i, initialized to S_k.x_i(t)
 7:         Let S_k.ŷ_i represent the neighborhood best position of particle S_k.x_i, initialized to the best S_k.x_i in i's neighborhood
 8:         Initialize S_k.v_i(t) to 0
 9:         Initialize S_k.λ_i ~ U(0, 1)
10: repeat
11:     for each objective k = 1, ..., n_m do
12:         for each particle i = 1, ..., S_k.n_{s_k} do
13:             if f_k(S_k.x_i(t)) < f_k(S_k.y_i(t)) then
14:                 S_k.y_i(t + 1) = S_k.x_i(t)
15:             for particles î with particle i in their neighbourhood do
16:                 if f_k(S_k.x_i(t)) < f_k(S_k.ŷ_î(t)) then
17:                     S_k.ŷ_î(t + 1) = S_k.x_i(t)
18:             Update the archive with solution S_k.x_i(t)
19:     for each objective k = 1, ..., n_m do
20:         for each particle i = 1, ..., S_k.n_{s_k} do
21:             Select a solution, a_i(t), from the archive using tournament selection
22:             v_i(t + 1) = w v_i(t) + c_1 r_1 (y_i(t) − x_i(t)) + λ_i c_2 r_2 (ŷ_i(t) − x_i(t)) + (1 − λ_i) c_3 r_3 (a_i(t) − x_i(t))
23:             x_i(t + 1) = x_i(t) + v_i(t + 1)
24:     t = t + 1
25: until stopping condition is true

If the archive is full, any solutions in the archive that are dominated by the newly found solution are removed. If no existing solutions within the archive are dominated, the crowding distance [2] is used to determine the most crowded non-dominated solution within the archive. The most crowded solution is then removed, because the surrounding non-dominated solutions are similar.
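The insertion and pruning logic described in this subsection can be sketched as follows. This is an illustrative reading of the text, not the CIlib code; dominates is the dominance check sketched in Section 2.1 and crowding_distances is assumed to return one crowding-distance value per archive entry.

```python
def try_insert(archive, candidate, capacity, dominates, crowding_distances):
    """Attempt to insert a candidate solution into a bounded archive.
    Returns True if the candidate was added."""
    if any(dominates(member, candidate) for member in archive):
        return False                                  # candidate is dominated, reject it
    # remove archive members that the candidate dominates
    archive[:] = [m for m in archive if not dominates(candidate, m)]
    if len(archive) >= capacity:
        # archive still full: drop the most crowded member (smallest crowding distance)
        distances = crowding_distances(archive)
        most_crowded = min(range(len(archive)), key=lambda i: distances[i])
        archive.pop(most_crowded)
    archive.append(candidate)
    return True
```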

3 CONTROL PARAMETER UPDATE STRATEGIES

The standard approach to deal with the archive balance coefficients, λ_i, is to randomly initialize λ_i per particle as a constant, i.e.,

    λ_i(0) ~ U(0, 1)    (4)

This section presents five alternative approaches to the archive balance coefficient parameter, with the objective to find the approach which will result in a more diverse POF.

3.1 Random Update

The random strategy samples a new archive balance coefficient at every iteration. This value is used by all particles. Thus, every particle will have the same archive balance coefficient at each iteration, i.e.,

    λ_i(t) = λ(t) ~ U(0, 1)    (5)

3.2 Random Update Per Particle

A different λ_i is assigned to every particle, re-sampled at every iteration:

    λ_i(t) ~ U(0, 1)    (6)

3.3 Random Update Per Particle Per Dimension

A different archive balance coefficient, λ_ij, is assigned to each dimension of every particle, re-sampled at every iteration:

    λ_ij(t) ~ U(0, 1)    (7)

3.4 Linearly Decreasing

For every particle, the archive balance coefficient is initialized to 1.0. Thereafter, λ_i is updated as follows:

    λ_i(t + 1) = λ_i(t) − 1.0/n_t    (8)

where n_t is the maximum number of iterations.

3.5 Linearly Increasing

This is similar to the linearly decreasing strategy, but the archive balance coefficient is initialized to 0.0 and increases linearly over time:

    λ_i(t + 1) = λ_i(t) + 1.0/n_t    (9)
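For reference, the five strategies of Sections 3.1 to 3.5 and the standard approach can be summarised in a single sketch. The strategy labels match those introduced in Section 5, the function signature is our own, and n_t is the maximum number of iterations as in equations (8) and (9).

```python
import numpy as np

def archive_balance(strategy, t, n_t, n_particles, n_dims, current=None, rng=np.random):
    """Return the archive balance coefficient(s) for iteration t under a given strategy."""
    if strategy == "STD":   # standard: sampled once at t = 0, static thereafter (eq. 4)
        return rng.uniform(0, 1, n_particles) if current is None else current
    if strategy == "R":     # one value per iteration, shared by all particles (eq. 5)
        return np.full(n_particles, rng.uniform(0, 1))
    if strategy == "RI":    # re-sampled per particle every iteration (eq. 6)
        return rng.uniform(0, 1, n_particles)
    if strategy == "RIJ":   # re-sampled per particle per dimension every iteration (eq. 7)
        return rng.uniform(0, 1, (n_particles, n_dims))
    if strategy == "LD":    # linearly decreasing from 1.0 to 0.0 (eq. 8)
        return np.full(n_particles, 1.0 - t / n_t)
    if strategy == "LI":    # linearly increasing from 0.0 to 1.0 (eq. 9)
        return np.full(n_particles, t / n_t)
    raise ValueError(f"unknown strategy: {strategy}")
```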

3.6 Discussion

A consequence of the standard approach is that there is no guarantee that the λ_i values will balance the search process between exploration and exploitation of the objective space. The proposed random update strategies attempt to correct this by updating the archive balance coefficients at every iteration. These strategies introduce more stochasticity in balancing the influence of the social and archive components. It is expected that these strategies will result in a better balance of the influence of these two components.

The linearly decreasing strategy decreases the archive balance coefficient from a maximum to a minimum over time. Thus, the influence of the archive guide will increase from a minimum to a maximum over time. Particles are expected to explore the search space more effectively early on when compared to the other strategies, because particles will not yet be influenced by the archive guide. Particles will then exploit promising areas through the archive guide as its influence approaches a maximum over time. It is expected that this strategy will lead to a diverse POF.

The linearly increasing strategy is a counter-strategy to the linearly decreasing one. The social guide's influence will start at a minimum; thus, particles will initially be significantly influenced by the archive guide. It is expected that, due to the high attraction of the archive guide, particles will be guided to more favourable areas of the search space early on. As time continues, the influence will transfer from the archive guide to the social guide, guiding particles to the neighbourhood best at each iteration. It is expected that the POF obtained by this strategy will contain similar solutions due to the early exploitation of promising areas.

4 EMPIRICAL ANALYSIS

This section details the empirical process used to assess the proposed archive balance coefficient strategies. This is the same empirical process used by Scheepers [13].

4.1 Implementation

The MGPSO and the archive balance coefficient strategies were implemented using the Computational Intelligence library (CIlib) (github.com/cirg-up/cilib), following the pseudocode given in Algorithm 1.

4.2 Benchmarks

The performance of the MGPSO variants was evaluated on the Zitzler-Deb-Thiele (ZDT) [5] and Walking Fish Group (WFG) [9] benchmark suites. Table 1 details the bounds, dimensionality, and number of objectives specific to each problem in both the ZDT and the WFG benchmark suites.

4.3 Performance Measures

The inverted generational distance (IGD) is a multi-objective performance measure that is used to measure how close an obtained POF is to the true POF [16][17]. Reference points were taken from the jMetal framework [1]. IGD results were tabulated and ranked using the outcome of one-tailed Mann-Whitney U tests with a level of significance of 95%.
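A rough sketch of the IGD computation, using the common definition (the average Euclidean distance from each point of the true POF reference set to its nearest point in the obtained front); the exact conventions of the implementation used in this paper may differ.

```python
import numpy as np

def inverted_generational_distance(reference_front, obtained_front):
    """Average distance from each reference (true POF) point to the nearest obtained
    point; lower values indicate an obtained front closer to the true POF."""
    ref = np.asarray(reference_front, dtype=float)
    obt = np.asarray(obtained_front, dtype=float)
    dists = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)  # |ref| x |obt|
    return dists.min(axis=1).mean()
```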

The porcupine measure [13] is an m-dimensional attainment-surface [8] based quantitative measure. The porcupine measure indicates, for each algorithm over a number of independent runs, the percentage of the POF for which the algorithm performed statistically significantly better than the other algorithms compared with. The porcupine measure was used to show, between two variants of the MGPSO with different archive balance coefficient strategies, which covers more of the POF.

To measure diversity, the average Euclidean distance from each particle to every other particle is calculated. The larger the distance, the more diverse the obtained solutions.
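A direct implementation of this diversity measure might look as follows (illustrative; positions are assumed to be the rows of a NumPy array):

```python
import numpy as np

def average_pairwise_distance(positions):
    """Mean Euclidean distance over all distinct pairs of particles; larger values
    indicate a more diverse set of solutions."""
    p = np.asarray(positions, dtype=float)
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=2)
    n = len(p)
    return d[np.triu_indices(n, k=1)].mean()
```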

Table 1: Details Of Each Benchmark Problem

Problem  Objectives  Dimensions  Bounds
ZDT1     2           30          [0.0, 1.0]
ZDT2     2           30          [0.0, 1.0]
ZDT3     2           30          [0.0, 1.0]
ZDT4     2           10          x_1 ∈ [0.0, 1.0]; x_2, ..., x_10 ∈ [−5.0, 5.0]
ZDT6     2           10          [0.0, 1.0]
WFG      2, 3        24          x_j ∈ [0.0, 2j + 1], j ∈ [1, 24]

Table 2: Control Parameters Used For Each Test Problem

ZDT
Problem  |S1|  |S2|  |S3|  w      c1    c2    c3
ZDT1     33    17    -     0.475  1.80  1.10  1.80
ZDT2     8     42    -     0.075  1.60  1.35  1.90
ZDT3     8     42    -     0.050  1.85  1.90  1.90
ZDT4     5     45    -     0.175  1.85  1.35  1.85
ZDT6     1     49    -     0.600  1.85  1.55  1.80

Two-objective WFG
WFG1     45    5     -     0.275  1.65  1.80  1.75
WFG2     24    26    -     0.750  1.15  1.70  1.05
WFG3     31    19    -     0.600  1.60  1.85  0.95
WFG4     2     48    -     0.100  0.80  1.65  1.70
WFG5     50    0     -     0.600  0.80  1.60  1.85
WFG6     19    31    -     0.525  0.65  0.60  1.65
WFG7     29    21    -     0.450  1.20  1.85  1.55
WFG8     37    13    -     0.750  1.00  1.65  1.05
WFG9     13    37    -     0.275  1.00  0.50  1.70

Three-objective WFG
WFG1     37    4     9     0.125  1.20  1.30  1.75
WFG2     24    25    1     0.275  1.25  1.40  1.70
WFG3     29    10    11    0.525  1.65  1.75  0.75
WFG4     29    21    0     0.275  1.75  0.50  1.05
WFG5     2     48    0     0.575  0.60  1.85  1.75
WFG6     5     30    15    0.300  0.90  0.90  1.90
WFG7     10    22    18    0.425  1.45  1.50  1.40
WFG8     4     23    23    0.425  0.95  1.75  1.85
WFG9     4     45    1     0.275  1.25  0.75  1.50

4.4 Procedure

Each variant of the MGPSO was evaluated on each benchmark problem for 30 independent runs, where each independent run lasted 2000 iterations. The control parameters used for each benchmark, as recommended by Scheepers [13], are listed in Table 2. The performance of the MGPSO variants was quantified using their final archives for each independent run with respect to the aforementioned performance measures. The results obtained using the proposed strategies were compared against the standard approach to see if there is any statistically significant difference between the methods.


Table 3: Inverted Generational Distance Ranking For ZDT1 Through ZDT4 And ZDT6

Strategy  Result       ZDT1  ZDT2  ZDT3  ZDT4  ZDT6  Overall
STD       Wins            1     0     5     1     3        2
          Losses          4     4     0     1     2        2
          Difference     -3    -4     5     0     1        0
          Rank            4     5     1     3     2        2
R         Wins            4     2     1     1     0        2
          Losses          0     0     3     0     3        1
          Difference      4     2    -2     1    -3        0
          Rank            1     2     4     2     3        1
RI        Wins            4     1     1     1     0        1
          Losses          0     2     3     0     3        2
          Difference      4    -1    -2     1    -3        0
          Rank            1     3     4     2     3        2
RIJ       Wins            3     3     0     1     0        1
          Losses          2     0     5     1     3        2
          Difference      1     3    -5     0    -3       -1
          Rank            2     1     5     3     3        2
LD        Wins            0     0     4     0     4        2
          Losses          5     3     1     5     0        3
          Difference     -5    -3     3    -5     4       -1
          Rank            5     4     2     4     1        2
LI        Wins            2     3     3     3     4        3
          Losses          3     0     2     0     0        1
          Difference     -1     3     1     3     4        2
          Rank            3     1     3     1     1        1

5 RESEARCH RESULTS

This section presents a high-level discussion of the results of the paper. For convenience, the following labels are assigned to the MGPSO variants: the standard strategy is represented by STD, R represents the random update strategy, random update per particle is labeled RI, random update per particle per dimension is labeled RIJ, and, lastly, linearly decreasing and linearly increasing are represented by LD and LI respectively.

5.1 ZDT Benchmark Results

As seen in Table 3, LI and R tied for an overall IGD rank of one, outperforming all other strategies. However, on average, LI had more wins than R with the same number of losses. The remaining strategies scored an overall rank of two, indicating that these strategies achieved similar performance. LD could be considered the worst performing strategy for ZDT, as it had the most losses on average. Furthermore, LD was statistically significantly better than STD only 12.28% of the time, whereas STD was 66.40% better than LD (refer to Table 5). Despite R performing well in the IGD rankings, it failed to perform significantly better than the standard approach. Strategies RI, RIJ and LI outperformed STD, with LI having the largest margin of approximately 30%. The other proposed strategies performed poorly. These results suggest that RI, RIJ, and LI were able to find a more diverse POF than STD.

5.2 Two-objective WFG Benchmark Results

For the two-objective WFG benchmark set, LI continued to perform well, achieving the highest overall IGD ranking of one, as shown in Table 4. In fact, LI only lost three times, averaging to zero losses overall. With regards to the porcupine results, the margin between LI and STD widens even further, as LI was statistically significantly better than STD 76.09% of the time, whereas STD was only 11.22% better than LI (refer to Table 5).

Strategies STD, R, RI, and RIJ tied with a score of three (refer to Table 4). As shown in Table 5, RI and RIJ continued to be statistically significantly better than STD, as their margins also increased. R outperformed the other stochastic variants as its margin over STD was the largest, approximately 40%. This is unlike the ZDT results, where R failed to be better than STD.

LD was the worst performing strategy out of all the variants. LD scored the worst overall IGD rank of four and failed to be statistically significantly better than STD (refer to Tables 4 and 5 respectively).

As seen in Figure 1, with regards to diversity, LI (subfigure 1b) takes longer to converge on average for the two-objective WFG6. By comparison, LD did not converge, due to the diversity increasing over time. This indicates that, over time, LI will reach an equilibrium whereas LD does not. The diversity for the stochastic strategies, R, RI and RIJ, looks very similar to that of STD for WFG6. The more stochasticity used by a strategy, the closer the diversity of the subswarms, as indicated by Figure 1.

As stated in Section 3.6, it was expected that LD would obtain a more diverse POF and that LI would obtain a POF that contained very similar solutions. Figures 1c and 1b show that, over time, LD led to a more diverse swarm in the objective space and LI led to a less diverse swarm, respectively. However, as seen in Figure 2, both strategies obtained archives of similar diversity for the two-objective WFG6. LD reached and maintained a higher diversity level early on in the search, whereas with LI the diversity increased over time.

5.3 Three-objective WFG Benchmark Results

The stochastic variants R, RI and RIJ tied with LI for the highest ranking in terms of the IGD results (refer to Table 4). Similarly, all stochastic variants and LI were significantly better than STD with regards to the porcupine results shown in Table 5. However, the stochastic variants were better than STD by a slightly larger margin when compared to the margin between LI and STD. These results make the stochastic strategies a competitive alternative for problems with a higher number of objectives. The STD and LD strategies tied for last place and, on average, had more losses than wins, as shown in Table 4. Table 5 shows that STD managed to be statistically significantly better only than LD, as with the two-objective WFG results; however, the margin between STD and LD is smaller than in previous results.


Figure 1: Objective Diversity Per Subswarm Over Time For The Two-Objective WFG6. Panels: (a) STD, (b) LI, (c) LD, (d) R, (e) RI, (f) RIJ.

Figure 2: Obtained POF Objective Diversity Over Time For The Two-Objective WFG6. Panels: (a) STD, (b) LI, (c) LD.


6 CONCLUSION

This paper proposed a number of strategies to balance the influence of the archive guide in the multi-guide particle swarm optimization algorithm (MGPSO). Five strategies were proposed and tested on the Zitzler-Deb-Thiele (ZDT) [5] and Walking Fish Group (WFG) [9] benchmark suites.

The linearly increasing strategy significantly outperformed the other strategies for the two-objective problems. With this strategy, the MGPSO was able to reach high levels of diversity early on and to then exploit these areas as the social guide's influence increased. Furthermore, LI obtained a more diverse POF than the standard approach.

For three-objective problems, the linearly increasing strategy, as well as the approaches that made use of stochastic updates, performed better than all other strategies. Further research will explore the reasons as to why this is the case.


Table 4: Inverted Generational Distance Ranking For The Two-Objective And Three-Objective WFG1 Through WFG9

Two-objective WFG
Strategy  Result       WFG1  WFG2  WFG3  WFG4  WFG5  WFG6  WFG7  WFG8  WFG9  Overall
STD       Wins            4     1     2     1     5     1     0     3     1        2
          Losses          1     1     3     3     0     4     4     1     3        2
          Difference      3     0    -1    -2     5    -3    -4     2    -2        0
          Rank            2     2     3     4     1     4     3     2     5        3
R         Wins            1     5     0     2     1     3     2     3     2        2
          Losses          2     0     4     2     3     1     1     1     1        2
          Difference     -1     5    -4     0    -2     2     1     2     1        0
          Rank            3     1     4     2     4     2     2     2     3        3
RI        Wins            1     1     3     1     1     2     2     1     1        1
          Losses          2     1     1     2     3     3     1     3     2        2
          Difference     -1     0     2    -1    -2    -1     1    -2    -1       -1
          Rank            3     2     2     3     4     3     2     3     4        3
RIJ       Wins            0     1     3     4     0     3     2     0     3        2
          Losses          2     4     1     0     5     1     1     5     1        2
          Difference     -2    -3     2     4    -5     2     1    -5     2        0
          Rank            4     4     2     1     5     2     2     4     2        3
LD        Wins            0     0     0     0     4     0     0     1     0        1
          Losses          4     2     4     5     1     5     4     3     5        4
          Difference     -4    -2    -4    -5     3    -5    -4    -2    -5       -3
          Rank            5     3     4     5     2     5     3     3     6        4
LI        Wins            5     1     5     4     3     5     5     5     5        4
          Losses          0     1     0     0     2     0     0     0     0        0
          Difference      5     0     5     4     1     5     5     5     5        4
          Rank            1     2     1     1     3     1     1     1     1        1

Three-objective WFG
Strategy  Result       WFG1  WFG2  WFG3  WFG4  WFG5  WFG6  WFG7  WFG8  WFG9  Overall
STD       Wins            1     0     2     3     2     2     1     0     1        1
          Losses          4     2     1     1     0     3     3     3     0        2
          Difference     -3    -2     1     2     2    -1    -2    -3     1       -1
          Rank            3     4     3     2     1     3     4     2     2        2
R         Wins            2     1     0     1     2     3     1     3     1        2
          Losses          2     1     2     3     0     1     1     0     1        1
          Difference      0     0    -2    -2     2     2     0     3     0        0
          Rank            2     3     4     3     1     2     3     1     3        1
RI        Wins            2     3     0     1     2     3     2     3     1        2
          Losses          2     1     3     3     0     1     1     0     0        1
          Difference      0     2    -3    -2     2     2     1     3     1        1
          Rank            2     2     5     3     1     2     2     1     2        1
RIJ       Wins            4     0     3     0     0     5     2     3     1        2
          Losses          0     2     1     5     4     0     1     0     0        1
          Difference      4    -2     2    -5    -4     5     1     3     1        1
          Rank            1     4     2     4     2     1     2     1     2        1
LD        Wins            0     0     5     3     0     0     5     0     0        1
          Losses          5     3     0     1     4     5     0     3     5        3
          Difference     -5    -3     5     2    -4    -5     5    -3    -5       -1
          Rank            4     5     1     2     2     5     1     2     4        2
LI        Wins            4     5     0     5     2     1     0     0     2        2
          Losses          0     0     3     0     0     4     5     3     0        2
          Difference      4     5    -3     5     2    -3    -5    -3     2        0
          Rank            1     1     5     1     1     4     5     2     1        1

7 ACKNOWLEDGEMENTS

The authors acknowledge the Centre for High Performance Computing (CHPC), South Africa, for providing computational resources to this research project.

REFERENCES

[1] J.J. Durillo, A.J. Nebro, and M. Vergne. 2015. Redesigning the jMetal Multi-Objective Optimization Framework. Proceedings of the Companion Publication of the 2015 Genetic and Evolutionary Computation Conference (2015). https://doi.org/10.1145/2739482.2768462
[2] A.H. Aguirre, C.A. Coello Coello, and E. Zitzler (Eds.). 2005. Evolutionary Multi-Criterion Optimization. Vol. 3410. Springer. https://doi.org/10.1007/b106458
[3] C.W. Cleghorn and A.P. Engelbrecht. 2014. Particle swarm convergence: An empirical investigation. In Proceedings of the IEEE Congress on Evolutionary Computation (2014), 2524–2530.
[4] J.D. Knowles, D.W. Corne, N. Jerram, and M.J. Oates. 2001. PESA-II: Region-based Selection in Evolutionary Multiobjective Optimization. In Proceedings of the Genetic and Evolutionary Computation Conference. 283–290.
[5] K. Deb, E. Zitzler, and L. Thiele. 2000. Scalable Test Problems for Evolutionary Multiobjective Optimization. Advanced Information and Knowledge Processing: Evolutionary Multiobjective Optimization (2000), 105–145. https://doi.org/10.1007/1-84628-137-7_6
[6] M. Laumanns, E. Zitzler, and L. Thiele. 2001. SPEA2: Improving the Strength Pareto Evolutionary Algorithm. Technical report, Swiss Federal Institute of Technology (ETH).
[7] A.P. Engelbrecht. 2007. Computational Intelligence: An Introduction (2nd ed.). Wiley.
[8] Carlos M. Fonseca and Peter J. Fleming. 1996. On the Performance Assessment and Comparison of Stochastic Multiobjective Optimizers. In Proceedings of the 4th International Conference on Parallel Problem Solving from Nature. Springer-Verlag, London, UK, 584–593. http://dl.acm.org/citation.cfm?id=645823.670856
[9] S. Huband, P. Hingston, L. Barone, and L. While. 2006. A review of multiobjective test problems and a scalable test problem toolkit. IEEE Transactions on Evolutionary Computation 10, 5 (2006), 477–506. https://doi.org/10.1109/tevc.2005.861417
[10] A. Pratap, K. Deb, S. Agrawal, and T. Meyarivan. 2000. A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II. Parallel Problem Solving from Nature (2000), 849–858.
[11] J. Kennedy and R.C. Eberhart. 1995. A New Optimizer using Particle Swarm Theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science. 39–43.
[12] J. Kennedy and R.C. Eberhart. 1995. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Vol. 4. 1942–1948.
[13] C. Scheepers. 2018. Multi-guide Particle Swarm Optimization: A Multi-swarm Multi-objective Particle Swarm Optimizer. Ph.D. Dissertation. Department of Computer Science, University of Pretoria.
[14] Y. Shi and R.C. Eberhart. 1998. A Modified Particle Swarm Optimizer. In Proceedings of the IEEE International Conference on Evolutionary Computation. IEEE Computer Society, Washington, DC, USA, 69–73. https://doi.org/10.1109/ICEC.1998.699146
[15] M.G. Castillo Tapia and C.A. Coello Coello. 2007. Applications of multi-objective evolutionary algorithms in economics and finance: A survey. In Proceedings of the IEEE Congress on Evolutionary Computation. 532–539. https://doi.org/10.1109/CEC.2007.4424516
[16] D.A. Van Veldhuizen and G.B. Lamont. 1999. Multiobjective Evolutionary Algorithm Research: A History and Analysis. Evolutionary Computation 8 (May 1999).
[17] D.A. Van Veldhuizen and G.B. Lamont. 2000. On Measuring Multiobjective Evolutionary Algorithm Performance. In Proceedings of the IEEE Congress on Evolutionary Computation. 204–211.
[18] Q. Zhang and H. Li. 2007. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Transactions on Evolutionary Computation (2007).


Table 5: Porcupine Measure Results For Each Archive Balance Coefficient Strategy Compared Against The Standard Approach. Each entry [a, b] gives the percentage of the POF on which STD (a) and the listed strategy (b) performed statistically significantly better.

Problem  Objectives  STD vs R         STD vs RI        STD vs RIJ       STD vs LD        STD vs LI
ZDT1     2           [0.00, 55.16]    [0.03, 93.39]    [0.07, 85.10]    [100.00, 0.00]   [0.00, 98.78]
ZDT2     2           [0.00, 19.32]    [0.00, 68.58]    [0.00, 92.39]    [55.99, 0.18]    [22.71, 77.29]
ZDT3     2           [100.00, 0.00]   [100.00, 0.00]   [100.00, 0.00]   [88.46, 0.01]    [89.01, 0.00]
ZDT4     2           [0.00, 4.26]     [0.00, 1.37]     [0.07, 6.55]     [87.53, 0.00]    [0.00, 9.01]
ZDT6     2           [45.47, 0.01]    [16.50, 0.00]    [36.17, 0.01]    [0.00, 61.19]    [0.52, 77.55]
WFG1     2           [0.00, 88.00]    [0.00, 90.87]    [8.49, 67.41]    [100.00, 0.00]   [0.00, 100.00]
WFG2     2           [0.00, 98.54]    [0.00, 91.51]    [30.07, 0.14]    [0.00, 2.86]     [0.00, 21.41]
WFG3     2           [6.52, 0.00]     [0.60, 97.86]    [0.64, 97.04]    [15.31, 0.00]    [0.00, 100.00]
WFG4     2           [0.00, 41.24]    [0.95, 1.25]     [4.02, 77.92]    [96.85, 0.00]    [1.57, 69.00]
WFG5     2           [100.00, 0.00]   [100.00, 0.00]   [100.00, 0.00]   [62.08, 0.00]    [90.88, 4.72]
WFG6     2           [0.00, 100.00]   [0.00, 100.00]   [0.00, 100.00]   [90.91, 2.50]    [0.00, 100.00]
WFG7     2           [0.76, 95.95]    [0.50, 96.83]    [1.79, 94.15]    [12.70, 0.00]    [0.00, 100.00]
WFG8     2           [20.03, 35.54]   [55.04, 0.00]    [96.10, 0.00]    [83.43, 0.00]    [0.00, 100.00]
WFG9     2           [0.56, 38.91]    [8.72, 7.37]     [6.72, 75.24]    [90.69, 0.00]    [8.56, 89.64]
WFG1     3           [0.00, 94.68]    [0.00, 92.70]    [0.00, 96.37]    [73.98, 0.00]    [0.00, 99.88]
WFG2     3           [0.00, 79.73]    [0.00, 77.35]    [0.00, 83.53]    [2.99, 25.14]    [0.00, 94.53]
WFG3     3           [0.40, 42.02]    [12.66, 43.51]   [0.28, 43.38]    [0.00, 50.84]    [6.99, 40.68]
WFG4     3           [50.77, 4.37]    [38.72, 2.26]    [59.87, 6.10]    [33.38, 0.69]    [3.88, 43.32]
WFG5     3           [2.44, 11.46]    [9.73, 1.08]     [66.26, 0.00]    [29.97, 0.62]    [56.83, 28.42]
WFG6     3           [1.79, 62.64]    [0.08, 70.20]    [2.16, 78.98]    [81.72, 0.00]    [8.33, 25.77]
WFG7     3           [22.81, 49.86]   [14.61, 41.61]   [11.79, 49.24]   [4.26, 31.61]    [57.86, 27.66]
WFG8     3           [3.54, 67.12]    [7.24, 61.49]    [5.75, 69.06]    [46.97, 4.47]    [7.88, 52.82]
WFG9     3           [20.46, 6.03]    [6.47, 3.68]     [8.87, 14.73]    [47.97, 0.00]    [3.02, 9.99]
ZDT      2           [29.09, 15.75]   [23.31, 32.67]   [27.26, 36.81]   [66.40, 12.28]   [22.45, 52.53]
WFG      2           [14.21, 55.35]   [18.42, 53.97]   [27.54, 56.88]   [61.33, 0.60]    [11.22, 76.09]
WFG      3           [11.36, 46.43]   [9.95, 43.76]    [17.22, 49.04]   [35.69, 12.60]   [16.09, 47.01]
Overall  -           [18.22, 39.18]   [17.22, 43.47]   [24.01, 47.58]   [54.47, 8.49]    [16.59, 58.54]

