15
Research Article An Adaptive Shrinking Grid Search Chaotic Wolf Optimization Algorithm Using Standard Deviation Updating Amount DongxingWang, 1,2 XiaojuanBan , 1 LinhongJi, 3 XinyuGuan , 3 KangLiu, 2 andXuQian 2 1 Beijing Advanced Innovation Center for Materials Genome Engineering, University of Science &Technology Beijing, Beijing 100083, China 2 School of Mechanical Electronic & Information Engineering, China University of Mining & Technology-Beijing, Beijing 100083, China 3 Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China Correspondence should be addressed to Xiaojuan Ban; [email protected] and Xinyu Guan; [email protected] Received 4 December 2019; Revised 16 March 2020; Accepted 6 May 2020; Published 18 May 2020 Academic Editor: Friedhelm Schwenker Copyright © 2020 Dongxing Wang et al. is is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. To improve the optimization quality, stability, and speed of convergence of wolf pack algorithm, an adaptive shrinking grid search chaotic wolf optimization algorithm using standard deviation updating amount (ASGS-CWOA) was proposed. First of all, a strategy of adaptive shrinking grid search (ASGS) was designed for wolf pack algorithm to enhance its searching capability through which all wolves in the pack are allowed to compete as the leader wolf in order to improve the probability of finding the global optimization. Furthermore, opposite-middle raid method (OMR) is used in the wolf pack algorithm to accelerate its convergence rate. Finally, “Standard Deviation Updating Amount” (SDUA) is adopted for the process of population regeneration, aimed at enhancing biodiversity of the population. e experimental results indicate that compared with traditional genetic algorithm (GA), particle swarm optimization (PSO), leading wolf pack algorithm (LWPS), and chaos wolf optimization algorithm (CWOA), ASGS-CWOA has a faster convergence speed, better global search accuracy, and high robustness under the same conditions. 1. Introduction 1.1.LiteratureReview. e metaheuristic search technology based on swarm intelligence has been increasing in popu- larity due to its ability to solve a variety of complex scientific and engineering problems [1]. e technology models the social behavior of certain living creatures, in which each individual is simple and has limited cognitive capability, but the swarm can act in a coordinated way without a coordi- nator or an external commander and yield intelligent be- havior to obtain global optima as a whole [2]. In [3], Yang Cuicui et al. adopted bacterial foraging optimization to optimize the structural learning of Bayesian networks. In [4] and [5], swarm intelligent algorithm is used for functional module detection in protein-protein interaction networks to help biologists to find some novel biological insights. In [6], Ji et al. performed a systematic comparison of three typical methods based on ant colony optimization, artificial bee colony algorithm, and bacterial foraging optimization re- garding how to accurately and robustly learn a network structure for a complex system. In [7], the authors utilize the artificial immune algorithm to infer the effective connec- tivity between different brain regions. In [8], researchers used an ant colony optimization algorithm for learning brain effective connectivity network from fMRI data. Particle swarm optimization (PSO) [9] algorithm was proposed through the observation and study of the bird group’s flapping behavior. Ant colony optimization (ACO) [10] algorithm was proposed under the principle of simulating ant social division of labor and cooperative foraging. Fish swarm algorithm (FSA) [11] was proposed to simulate the behavior of foraging and clustering in the fish group. In [12], Hindawi Computational Intelligence and Neuroscience Volume 2020, Article ID 7986982, 15 pages https://doi.org/10.1155/2020/7986982

AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

  • Upload
    others

  • View
    2

  • Download
    0

Embed Size (px)

Citation preview

Page 1: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

Research ArticleAn Adaptive Shrinking Grid Search Chaotic Wolf OptimizationAlgorithm Using Standard Deviation Updating Amount

DongxingWang12 XiaojuanBan 1 Linhong Ji3 XinyuGuan 3KangLiu2 andXuQian2

1Beijing Advanced Innovation Center for Materials Genome Engineering University of Science ampTechnology BeijingBeijing 100083 China2School of Mechanical Electronic amp Information Engineering China University of Mining amp Technology-BeijingBeijing 100083 China3Department of Mechanical Engineering Tsinghua University Beijing 100084 China

Correspondence should be addressed to Xiaojuan Ban banxjustbeducn and Xinyu Guan guanxinyumailtsinghuaeducn

Received 4 December 2019 Revised 16 March 2020 Accepted 6 May 2020 Published 18 May 2020

Academic Editor Friedhelm Schwenker

Copyright copy 2020 Dongxing Wang et al is is an open access article distributed under the Creative Commons AttributionLicense which permits unrestricted use distribution and reproduction in any medium provided the original work isproperly cited

To improve the optimization quality stability and speed of convergence of wolf pack algorithm an adaptive shrinking grid searchchaotic wolf optimization algorithm using standard deviation updating amount (ASGS-CWOA) was proposed First of all astrategy of adaptive shrinking grid search (ASGS) was designed for wolf pack algorithm to enhance its searching capabilitythrough which all wolves in the pack are allowed to compete as the leader wolf in order to improve the probability of finding theglobal optimization Furthermore opposite-middle raid method (OMR) is used in the wolf pack algorithm to accelerate itsconvergence rate Finally ldquoStandard Deviation Updating Amountrdquo (SDUA) is adopted for the process of population regenerationaimed at enhancing biodiversity of the population e experimental results indicate that compared with traditional geneticalgorithm (GA) particle swarm optimization (PSO) leading wolf pack algorithm (LWPS) and chaos wolf optimization algorithm(CWOA) ASGS-CWOA has a faster convergence speed better global search accuracy and high robustness under thesame conditions

1 Introduction

11 Literature Review e metaheuristic search technologybased on swarm intelligence has been increasing in popu-larity due to its ability to solve a variety of complex scientificand engineering problems [1] e technology models thesocial behavior of certain living creatures in which eachindividual is simple and has limited cognitive capability butthe swarm can act in a coordinated way without a coordi-nator or an external commander and yield intelligent be-havior to obtain global optima as a whole [2] In [3] YangCuicui et al adopted bacterial foraging optimization tooptimize the structural learning of Bayesian networks In [4]and [5] swarm intelligent algorithm is used for functionalmodule detection in protein-protein interaction networks tohelp biologists to find some novel biological insights In [6]

Ji et al performed a systematic comparison of three typicalmethods based on ant colony optimization artificial beecolony algorithm and bacterial foraging optimization re-garding how to accurately and robustly learn a networkstructure for a complex system In [7] the authors utilize theartificial immune algorithm to infer the effective connec-tivity between different brain regions In [8] researchersused an ant colony optimization algorithm for learning braineffective connectivity network from fMRI data Particleswarm optimization (PSO) [9] algorithm was proposedthrough the observation and study of the bird grouprsquosflapping behavior Ant colony optimization (ACO) [10]algorithm was proposed under the principle of simulatingant social division of labor and cooperative foraging Fishswarm algorithm (FSA) [11] was proposed to simulate thebehavior of foraging and clustering in the fish group In [12]

HindawiComputational Intelligence and NeuroscienceVolume 2020 Article ID 7986982 15 pageshttpsdoiorg10115520207986982

bacterial foraging optimization (BFO) algorithm was pro-posed by mimicking the foraging behavior of Escherichiacoli in the human esophagus Artificial shuffled frog leapingalgorithm (SFLA) [13] was put forward through the simu-lation of the frog groups to share information foragingprocess and exchange mechanism In [14] Karaboga andBasturk proposed artificial bee colony (ABC) algorithmbased on the concept of postpeak colony breeding and theway of foraging But no algorithm is universal these swarmintelligence optimization algorithms have their own short-comings such as slow convergence easy to fall into localoptimum or low accuracy In [15] the authors proposed animproved ant colony optimization (ICMPACO) algorithmbased on the multipopulation strategy coevolution mech-anism pheromone updating strategy and pheromone dif-fusionmechanism in order to balance the convergence speedand solution diversity and improve optimization perfor-mance in solving large-scale optimization problem Toovercome the deficiencies of weak local search ability ingenetic algorithms (GA) [16] and slow global convergencespeed in ant colony optimization (ACO) algorithm insolving complex optimization problems in [17] the authorsintroduced the chaotic optimization method multi-population collaborative strategy and adaptive control pa-rameters into the GA and ACO algorithm to propose agenetic and ant colony adaptive collaborative optimization(MGACACO) algorithm for solving complex optimizationproblems On the one hand the ant colony optimization(ACO) algorithm has the characteristics of positive feedbackessential parallelism and global convergence but it has theshortcomings of premature convergence and slow conver-gence speed on the other hand the coevolutionary algo-rithm (CEA) emphasizes the existing interaction amongdifferent subpopulations but it is overly formal and does notform a very strict and unified definition erefore in [18]Huimin et al proposed a new adaptive coevolutionary antcolony optimization (SCEACO) algorithm based on thecomplementary advantages and hybrid mechanism

In 2007 Yang and his coauthors proposed the wolfswarm algorithm [19] which is a new swarm intelligencealgorithm e algorithm simulates the wolf predationprocess mainly through the walk raid and siege of threekinds of behavior and survival of the fittest populationupdate mechanism to achieve the purpose of solvingcomplex nonlinear optimization problems Since the wolfpack algorithm is proposed it is widely used in various fieldsand has been developed and improved continually such asfollows In [20] the authors proposed a novel and efficientoppositional wolf pack algorithm to estimate the parametersof Lorenz chaotic system In [21] the modified wolf packsearch algorithm is applied to compute the quasioptimaltrajectories for the rotor wing UAVs in the complex three-dimensional (3D) spaces In [22] wolf algorithm was used tomake out polynomial equation roots of the problem accu-rately and quickly In [23] a new wolf pack algorithm wasdesigned aiming to get better performance including newupdate rule of scout wolf new concept of siege radius andnew attack step kind In [24] Qiang and Zhou presented awolf colony search algorithm based on the leaderrsquos strategy

In [25] to explore the problem of parameter optimizationfor complex nonlinear function a chaos wolf optimizationalgorithm (CWOA) with self-adaptive variable step size wasproposed In [26] Mirjalili et al proposed ldquogrey wolf op-timizerrdquo based on the cooperative hunting behaviour ofwolves which can be regarded as a variant of paper [19] in[27ndash29] grey wolf optimizer is adopted to solve nonsmoothoptimal power flow problems optimal planning of renew-able distributed generation in distribution systems andoptimal reactive power dispatch considering SSSC (staticsynchronous series compensator)

12 Motivation and Incitement From the review of theabove literature the current optimization algorithm ofwolf pack follows the principle of ldquoa certain number ofscouting wolves lead wolves through greedy search with aspecially limited number of times (each wolf has only fouropportunities in some literatures)rdquo the principle of ldquofiercewolves approach the first wolf through a specially limitednumber of times of rushingrdquo in the call and rushingprocess and the principle of ldquofierce wolves pass through aspecially limited number of times of rushingrdquo in the siegeprocess greedy search for prey ldquoin this operation mech-anism the algorithm itself too imitates the actual huntingbehavior of wolves especially inrdquo grey wolf optimizationrdquowhich divides the wolves in the algorithm into a moredetailed level e advantage of doing so is that it caneffectively guarantee the final convergence of the algorithmbecause it completely mimics the biological foragingprocess however the goal of the intelligent optimizationalgorithm of wolves is to solve the optimization problemefficiently and imitating wolvesrsquo foraging behavior is onlya way erefore the intelligent optimization algorithm ofwolves should be more abstract on the basis of imitatingwolvesrsquo foraging behavior in order to improve its opti-mization ability and efficiency Due to the reasons aboveeach of variants about wolf pack algorithm mentionedabove has its own limitation or inadequacy including slowconvergence weak optimization accuracy and narrowscope of application

13 Contribution and Paper Organization To further im-prove the wolf pack optimization algorithm to make it havebetter performance this paper proposed an adaptiveshrinking grid search chaotic wolf optimization algorithmusing standard deviation updating amount (ASGS-CWOA)e paper consists of five parts including IntroductionPrinciple of LWPS and CWOA (both of them are classicvariants of wolf pack optimization algorithm) Improve-ment Steps of ASGS-CWOA Numerical Experiments andcorresponding analyses

2 Materials and Methods

21 Principle of LWPS and CWOA Many variants aboutoriginal wolf pack algorithm are mentioned above in thisarticle we focus on LWPS and CWOA

2 Computational Intelligence and Neuroscience

211 LWPS e leader strategy was employed into tradi-tional wolf pack algorithm detailed in [24] unlike the simplesimulation of wolvesrsquo hunting in [26] LWPS abstracts wolvesrsquohunting activities into five steps According to the thought ofLWPS the specific steps and related formulas are as follows

(1) Initialization e step is to disperse wolves to searchspace or solution space in some way Here the number ofwolf populations can be represented by N and dimensionsof the search space can be represented by D and then wecan get the position of the i-th wolf by the followingequation

xi xi1 xi2 xid xi D( 1113857 (i 1 2 N d 1 2 D)

(1)

where xid is the location of the i-th wolf in d-th dimensionNmeans the number of wolf population and D is the maxi-mum dimension number of the solution space Initial lo-cation of each wolf can be produced by

xid xmin + rand(0 1) times xmax minus xmin( 1113857 (2)

where rand (0 1) is a random number distributed uniformlyin the interval [0 1] and xmax and xmin are the upper andlower limits of the solution space respectively

(2) Competition for the Leader Wolf is step is to find theleader wolf with best fitness value Firstly Q wolves shouldbe chosen as the candidates which are the top Q of wolfpack according to their fitness Secondly each one of the Qwolves searches around itself in D directions so a newlocation can be got by equation (3) if its fitness is betterthan the current fitness the current wolf i should move tothe new location otherwise do not move then searchingwill be continued until the searching time is greater thanthe maximum number Tmax or the searched locationcannot be improved any more Finally by comparing thefitness ofQwolves the best wolf can be elected and it is theleader wolf

ximinusnew xid + rand(minus1 1) times step a (3)

where step_a is the search step size

(3) Summon and Raid Each of the other wolves will raidtoward the location of leader wolf as soon as it receives thesummon from leader wolf and it continues to search forpreys during the running road Finally a new location can begot by equation (4) if its fitness is better than the one ofcurrent location the wolf will move to the new one oth-erwise stay at the current location

xidminusnew xid + rand(minus1 1) times step b times xld minus xid( 1113857 (4)

where xld is the location of the leader wolf in d-th dimensionand step_b is the run step size

(4) Siege the Prey [30] After the above process wolves cometo the nearby of the leader wolf and will be ready to catch theprey until the prey is gote new location for anyone exceptleader wolf can be obtained by the following equation

xidminusnew xid rm ltR0

xld + rand(minus1 1) times step c rm gtR0 rm R01113896

(5)

where step_c is the siege step size rm is a number generatedrandomly by the function rand (minus1 1) distributed uniformlyin the interval [minus1 1] and R0 is a preset siege threshold

In the optimization problem with the current solutiongetting closer and closer to the theoretical optimal value thesiege step size of wolf pack also decreases with the increase initeration times so that wolves have greater probability to findbetter values e following equation is about the siege stepsize which is obtained from [31]

step c step cmin times xdminusmax minus xdminusmin( 1113857

times expln step cminstep cmax( 1113857lowast t

T1113888 1113889

(6)

where step_cmin is the lower limit of the siege step sizexdndashmax is the upper limit of search space in d-th dimensionxdndashmin is the lower limit of search space in d-th dimensionstep_cmax is the upper limit of the siege step size and tmeansthe current number of iterations while T represents theupper limit one

Being out of boundary is not allowed to each wolf soequation (7) is used to deal with the possible transboundary

xidminusnew xdminusmin xidminusnew ltxdminusmin

xdminusmax xidminusnew ltxdminusmax1113896 (7)

(5) Distribution of Food and Regeneration According to thewolf group renewal mechanism of ldquosurvival of the strongrdquogroup renewal is carried out including the worst m wolveswill die and be deleted and new m wolves will be generatedby equation (2)

212 CWOA CWOA develops from the LWPS by intro-ducing the strategy of chaos optimization and adaptiveparameters into the traditional wolf pack algorithm eformer utilizes logistic map to generate chaotic variables thatare projected to solution space in order to search while thelatter introduces adaptive step size to enhance the perfor-mance and they work welle equation of logistic map is asfollows which is from [32]

chaosk+1 μ times chaosk times 1 minus chasok( 1113857chaosk isin [0 1] (8)

where μ is the control variable and when μ is 4 the system isin chaos state

e strategy of adaptive variable step size search includesthe improvement of search step size and siege step-size Inthe early stage wolves should search for preys with a largestep size so as to cover the whole solution space as much aspossible while in the later stage with the wolves gatheringcontinuously wolves should take a small step size to searchfor preys so that they can search finely in a small target areaAs a result the possibility of finding the global optimalsolution will be increased And by equation (9) we can get

Computational Intelligence and Neuroscience 3

the step size of migration while by equation (10) the one ofsiege can be obtained

step anew α times step a0 α 1 minust minus 12

1113874 11138752

(9)

step cnew step c0 times rand(0 1) times 1 minust minus 12

1113874 11138752

1113890 1113891 (10)

where step_c0 is the starting step size for siege

22 Improvement

221 Strategy of Adaptive Shrinking Grid Search (ASGS)In the solution space with D dimensions a searching wolfneeds to migrate along different directions in order to findpreys during any iteration of themigration it is according tothe thought of the original LWPS and CWOA that there isonly a dynamic point around the current location to begenerated according to equation (3)

However a single dynamic point is isolated and not wellenough to search in the current domain space of some a wolfFigures 1(a) and 1(b) show the two-dimensional and thethree-dimensional spaces respectively In essence the al-gorithm needs to take the whole local domain space of thecurrent wolf to be considered in order to find the local bestlocation so ASGS was used to generate an adaptive gridcentered on the current wolf which is extending along 2timesDdirections and includes (2timesK+ 1)D nodes where K meansthe number of nodes taken along any direction Figures 2(a)and 2(b) show the migration about ASGS in two-dimen-sional and three-dimensional space respectively and forbrevity here K is set to 2 detailed in the following equation

k xidminusnew1113858 1113859 xid + step anew times k

(k minusK minusK + 1 0 K minus 1 K)

(11)

So a node in the grid can be defined as

ximinusnew k xi1minusnew1113858 1113859 k xi2minusnew1113858 1113859 k xi Dminusnew1113858 11138591113864 1113865

(k minusK minusK + 1 0 K minus 1 K)

(12)

During any migration the node with best fitness in thegrid should be selected as new location of the searching wolfand after any migration the leader wolf of the population isupdated according to the new fitness It needs to be par-ticularly pointed out that the searching grid will be smallerand smaller as the step_anew becomes smaller Obviouslycompared to a single isolated point in traditional methodsthe searching accuracy and the possibility of finding theoptimal value can be improved by ASGS including(2timesK+ 1)D nodes

As the same reason during the process of the siege thesame strategy is used to find the leader wolf in the localdomain space including (2timesK+ 1)D points But the differentis that the step size of siege is smaller than the one of mi-gration After any migration the leader wolf of the pop-ulation is updated according to new current fitness andthen the current best wolf will be the leader wolftemporarily

222 Strategy of Opposite-Middle Raid (OMR)According to the idea of the traditional wolf pack algo-rithms it is unfortunately that raid pace is too small orunreasonable and wolves cannot rapidly appear around theleader wolf when they receive the summon signal from theleader wolf as shown in Figure 3(a) So OMR is put forwardto solve the problem above and its main clue is that theopposite location of the current location relative to theleader wolf should be calculated by the following equation

xidminusopposition 2 times xld minus xid (13)

If the opposite location has better fitness than the currentone the current wolf should move to the opposite oneOtherwise the following is obtained

ximinusm d xi d + xl d

2

xi bestfitness xi 1 xi 2 xi (Dminus1) xi D1113960 1113961 xi 1 xi 2 xi (Dminus1) xm D1113960 1113961 ximinusm 1 ximinusm 2 ximinusm (Dminus1) ximinusm D1113960 11139611113872 1113873

⎧⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎩

(14)

where ximinusm_d is the middle location in d-th dimension be-tween the i-th wolf and the leader wolf xi_d is the location ofthe wolf i in d-th dimension and xl_d is the location of theleader wolf in d-th dimension ldquoBestfitnessrdquo returns a wolfwith the best fitness from the given ones

From equation (14) and Figure 3(b) it is seen that thereare 2D points among the current point and the middle pointand as the result of each raid the point with best fitness ischosen as the new location of the current i-th wolf ereby

not only the wolves can appear around the leader wolf assoon as possible but also they can try to find new preys as faras possible

223 Standard Deviation Updating Amount (SDUA)According to the basic idea of the leader wolf pack algo-rithm during the iterations some wolves with poorer fitnesswill be continuously eliminated while the same amount of

4 Computational Intelligence and Neuroscience

ndash10ndash8ndash6ndash4ndash2

02468

10

D ndash

2

6ndash6 2ndash2 10ndash10 4ndash4 8ndash8 0D ndash 1

(a)

D ndash 1D ndash 2

10

10

5

5

00

ndash5 ndash5ndash10

ndash10

ndash10

ndash5

0

5

10

D ndash

3

(b)

Figure 1 (a)e range of locations of searching or competing wolves in CWOAwhen dimension is 2 (b) the range of locations of searchingor competing wolves in CWOA when dimension is 3 where step_a 10 or step_c 10 (the red point means the current location of some awolf)

20

15

10

5

0

ndash5

ndash10

ndash15

ndash20

D ndash

2

ndash20 ndash15 ndash10 ndash5 0 5 10 15 20D ndash 1

(a)

D ndash

3

D ndash 1D ndash 2

20

20

20

10

10

10

0

00

ndash10

ndash10ndash10ndash20ndash20

ndash20

(b)

Figure 2 e searching situation of wolves in ASGS-CWOA when dimension is (a) 2 and (b) 3 where step_a 10 or step_c 10 and K 2(the red point means the current location of some a wolf and the black ones mean the searching locations of the wolf)

10

10

86420 0ndash2

ndash5

5

D ndash 1D ndash 2

ndash202468

10

D ndash

3

(a)

10

10

8

8

6

64 42

20

0 D ndash 2D ndash 1

0

2

4

6

8

10

D ndash

3

(b)

Figure 3 (a) e range of locations of wolves in raid when dimension is 3 according to the original thought of wolf pack algorithm (b) therange of locations of wolves in raid when dimension is 3 according to OMR where step_a 2 e red point means the current position ofsome wolves the pink one indicates the position of the lead wolf and the blue one means the middle point of current wolf and lead wolf

Computational Intelligence and Neuroscience 5

wolves will be added to the population to make sure that thebad gene can be eliminated and the population diversity ofthe wolf pack can be ensured so it is not easy to fall into thelocal optimal solution and the convergence rate can beimproved However the amount of wolves that should beeliminated and added to the wolf pack is a fixed numberwhich is 5 in LWPS and CWOA mentioned before and thatis stiff and unreasonable In fact the amount should be adynamic number to reflect the dynamic situation of the wolfpack during any iteration

Standard deviation (SD) is a statistical concept whichhas been widely used in many fields Such as in [33]standard deviation was used into industrial equipment tohelp process the signal of bubble detection in liquid sodiumin [34] standard deviation is used with delay multiply toform a new weighting factor which is introduced to enhancethe contrast of the reconstructed images in medical fields in[35] based on eight-year-old dental trauma research datastandard deviation was utilized to help analyze the potentialof laser Doppler flowmetry the beam forming performancehas a large impact on image quality in ultrasound imagingto improve image resolution and contrast in [36] a newadaptive weighting factor for ultrasound imaging calledsignal mean-to-standard-deviation factor (SMSF) was pro-posed based on which researchers put forward an adaptivebeam forming method for ultrasound imaging based on themean-to-standard-deviation factor in [37] standard devi-ation was adopted to help analyze when an individual shouldstart social security

In this paper we take a concept named ldquoStandard De-viation Updating Amountrdquo to eliminate wolves with poorfitness and dynamically reflect the situation of the wolf packwhich means the population amount of wolf pack andstandard deviation about their fitness determine how manywolves will be eliminated and regenerated Standard devi-ation can be obtained as follows

σ

1N

1113944

N

i1xi minus μ( 1113857

2

11139741113972

(15)

whereNmeans the number of wolf pack ximeans the fitnessof the i-th wolf and μ is the mean value of the fitness enSDUA is gained by the following formula

SDUA

SDUA + 1 fitness(xi)ltμ minus σ2

1113874 1113875

do nothing fitness(xi)geμ minus σ2

1113874 1113875

⎧⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎩

(16)

SDUA is zero when the iteration begins next differencebetween the mean value and the SD about the fitness of thewolf pack should be computed and if the fitness of currentwolf is less than the difference SDUA increases by 1 oth-erwise nothing must be done So the value of SDUA is gotand SDUA wolves should be eliminated and regenerated

e effect of using a fixed value is displayed inFigure 4(a) and Figure 4(b) shows the effect of using ASD Itis clear that the amount in the latter is fluctuant as the wolf

pack has a different fitness during any iteration and betterthan the former to reflect the dynamic situations of itera-tions Accordingly not only will the bad gene from the wolfpack be eliminated but the population diversity of the wolfpack can be also kept better so that it is difficulty to fall intolocal optimal solution and the convergence rate can beimproved as far as possible

23 Steps of ASGS-CWOA Based on the thoughts ofadaptive grid search and adaptive standard deviationupdating amount mentioned above ASGS-CWOA is pro-posed the implementation steps are following and its flowchart is shown in Figure 5

231 Initialization of Population e following parametersshould be initially assigned amount of wolf population is Nand dimension of searching space is D for brevity theranges of all dimensions are [rangemin rangemax] the upperlimit of iterations is T value of step size in migration isstep_a value of step size in summon and raid is step_b valueof step size in siegement is step_c and the population can begenerated initially by the following equation [25]

xid range min + chaos(0 1) times(range max minus range min)

(17)

where chaos (0 1) returns a chaotic variable distributed in[0 1] uniformly

232 Migration Here an adaptive grid centered on thecurrent wolf can be generated by equation (12) which in-cludes 2 nodes After migration the wolf with best fitnesscan be found as the leader wolf

233 Summon and Raid After migration the others beginto run toward the location of leader wolf and during theprocess of raiding any wolf keeps searching for preys fol-lowing equations (13) and (14)

234 Siege After summon and raid all other wolves cometo be around the leader wolf in order to siege the preyfollowing equations

k xidminusnew1113858 1113859 xid + step c times k

(k minusK minusK + 1 0 K minus 1 K)

ximinusnew k xi1minusnew1113858 1113859 k xi2minusnew1113858 1113859 k xi Dminusnew1113858 11138591113864 1113865

(18)

After any siegement the newest leader wolf can be gotwhich has best fitness temporarily

235 Regeneration After siege some wolves with poorerfitness will be eliminated while the same amount of wolveswill be regenerated according to equation (14)

6 Computational Intelligence and Neuroscience

236 Loop Here if t (the number of current iteration) isbigger than T (the upper limit of iterations) exit the loopOtherwise go to 42 for loop until t is bigger than T Andwhen the loop is over the leader wolf maybe is the globaloptima that the algorithm can find

24 Numerical Experiments In this paper four state-of-the-art optimization algorithms are taken to validate the per-formance of new algorithm proposed above detailed inTable 1

241 Experiments on Twelve Classical Benchmark FunctionsTable 2 shows benchmark functions for testing and Table 3shows the numerical experimental data of five algorithms on12 benchmark functions for testing mentioned above es-pecially the numerical experiments were done on a com-puter equipped with Ubuntu 16044 operating system Intel

(R) Core (TM) i7-5930K processor and 64G memory as wellas Matlab 2017a For genetic algorithm toolbox in Matlab2017a is utilized for GA experiments the PSO experimentswere implemented by a ldquoPSOtrdquo toolbox for Matlab LWPS isgot from the thought in [24] CWOA is carried out based onthe idea in [25] and the new algorithm ASGS-CWOA iscarried out in Matlab 2017a which is an integrated devel-opment environment with M programming language Toprove the good performance of the proposed algorithmoptimization calculations were run for 100 times on anybenchmark function for testing as well as any optimizationalgorithm mentioned above

Firstly focusing on Table 3 seen from the term ofbest value only new algorithm can find the theoreticalglobal optima of all benchmark functions for testing soASGS-CWOA has better performance in optimizationaccuracy

Furthermore for 100 times the worst and average valuesof ASGS-CWOA in the all benchmark functions for testingexcept the F2 named ldquoEasomrdquo reach the theoretical valuesrespectively and the new algorithm has better standard de-viation about best values detailed in Figure 6 from which itcan be seen that nearly all the standard deviations are bestexcept F2 and F6 especially the standard deviations are zeroon F1 F3 F4 F5 F7 F9 F10 F11 and F12 Figure 6(b) showsthat the value of standard deviation about ASGS-CWOA onF2 is not worst in five algorithms and it has better perfor-mance than GA Figure 6(f) indicates that the value ofstandard deviation about ASGS-CWOA on F6 reaches 10minus11weaker than the values on others Figure 6(h) demonstratesthat the value of standard deviation about ASGS-CWOA onF8 reaches 10minus30 which is not zero but best in five algorithmserefore ASGS-CWOA has better stability than others

In addition focusing on the number of mean iterationASGS-CWOAhas weaker iteration times on the test function 2ldquoEasomrdquo the test function 8 ldquoBridgerdquo and the test function 10ldquoBohachevsky1rdquo but it has better performances on the othertest functions especially the iteration times on test function 34 5 6 11 and 12 are 1 or about 1 So ASGS-CWOA has betteradvantage on performance of iteration number

5

Fixed amount 5

ndash5

15

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(a)

ASDUA

ndash5

5

15

25

35

45

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(b)

Figure 4 (a)e number that how many wolves will be starved to death and regenerated as the iteration goes on It is a fixed number 5 (b)e number that how many wolves will be starved to death and regenerated following the new method named SDUA as the iteration goeson

Step 1 initialize parameters of algorithm

Step 2 migration process with ASGS

Step 3 summon ndash raid process with OMR

Step 4 siege process with ASGS

Step 5 population regeneration with SDUA

End condition satisfied

Output or record head wolf position and fitness value

Yes

No

Step 6

Figure 5 Flow chart for the new proposed algorithm

Computational Intelligence and Neuroscience 7

Table 1 Arithmetic configuration

Order Name Configuration

1 Genetic algorithm [16] e crossover probability is set at 07 the mutation probability is set at 001 while the generation gap isset at 095

2 Particle swarmoptimization

Value of individual acceleration 2 value of weighted value of initial time 09 value of weighted value ofconvergence time 04 limit individual speed at 20 of the changing range

3 LWPS Migration step size stepa 15 raid step size stepb 09 siege threshold r0 02 upper limit of siege step sizestepcmax 106 lower limit of siege step size stepcmin 10minus2 updating number of wolves m 5

4 CWOAAmount of campaign wolves q 5 searching direction h 4 upper limit of searchHmax 15 migration stepsize stepa0 15 raid step size stepb 09 siege threshold r0 02 value of siege step size stepc0 16 updating

amount of wolves m 5

5 ASGS-CWOA Migration step-size stepa0 15 upper limit of siege step-size stepcmax 1e6 lower limit of siege step sizestep cmin 1eminus 40 upper limit number of iteration T 600 number of wolf population N 50

Table 2 Test functions [25]

Order Function Expression Dimension Range Optimum1 Matyas F1 026times (x2

1 +x22)minus 048times x1 times x2 2 [minus10 10] min f 0

2 Easom F2 minuscos (x1)times cos (x2)times exp[minus (x1minus π)2 minus (x2minus π)2] 2 [minus100 100] min f minus13 Sumsquares F3 1113936

Di1 ilowast x2

i 10 [minus15 15] min f 0

4 Sphere F4 1113936Di1 x2

i 30 [minus15 15] min f 05 Eggcrate F5 x2

1 + x22 + 25times (sin2 x1 + sin2 x2) 2 [minusπ π] min f 0

6 Six hump camelback F6 4times x1 minus 21x4

1 + (13)times x61 + x1 times x2 minus 4times x2

2 + 4times x42 2 [minus5 5] min

f minus103167 Bohachevsky3 F7 x2

1 + 2times x22 minus 03times cos (3πx1 + 4πx2) + 03 2 [minus100 100] min f 0

8 Bridge F8 (sinx21 + x2

2

1113969)

x21 + x2

2

1113969minus 07129 2 [minus15 15] max f 30054

9 Booth F9 (x1 + 2times x2 minus 7)2 + (2times x1 + x2 minus 5)2 2 [minus10 10] min f 010 Bohachevsky1 F10 x2

1 + 2x22 minus 03times cos (3πx1)minus 04times cos (4πx2) + 07 2 [minus100 100] min f 0

11 Ackley F11 minus20times exp (minus02times

(1D) 1113936Di1 x2

i

1113969

)minus exp((1D) 1113936

Di1 cos 2 πxi) + 20 + e

6 [minus15 15] min f 0

12 Quadric F12 1113936Di1 (1113936

ik1 xk)2 10 [minus15 15] min f 0

Table 3 Experimental results

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F1

1 364Eminus 12 283Eminus 08 352Eminus 10 283Eminus 09 488 040382 414Eminus 21 326Eminus 14 115Eminus 15 387Eminus 15 591 042823 479Eminus 26 495Eminus 21 258Eminus 22 579Eminus 22 585 240954 104Eminus 119 169Eminus 12 585Eminus 24 213Eminus 13 295 042665 0 0 0 0 2744 010806

F2

1 minus1 minus000008110 minus097 01714 153 003752 minus1 minus1 minus1 324Eminus 06 583 041963 minus1 minus1 minus1 0 507 191114 minus1 minus1 minus1 0 72 003215 minus1 0 minus002 00196 59262 27036

F3

1 850Eminus 07 161Eminus 05 277Eminus 06 335Eminus 06 593 143962 000017021 00114 00023 00022 595 047923 760Eminus 07 87485 14802 15621 600 483194 561Eminus 07 149Eminus 05 107Eminus 06 874Eminus 06 600 414535 0 0 0 0 1 102205

F4

1 0004 00331 00127 00055 592 319022 00351 01736 00902 00292 594 048683 35579 103152 80497 12048 600 147424 000001114 00098 00012 00048 600 084315 0 0 0 0 1 100834

8 Computational Intelligence and Neuroscience

Finally the mean time spent by ASGS-CWOA are smallerthan the othersrsquo on F1 F5 F6 and F11 as shown in Figures 7(a)7(e) 7(f) and 7(k) respectively On F7 F9 and F10 the fivealgorithms spent time roughly in the same order of magnitudeand ASGS-CWOA has better performance than GA PSO andLWPS on F7 and F9 as shown in Figures 7(g) and 7(i) Un-fortunately ASGS-CWOA spent most time on F2 F3 F4 F8and F12 yet it is a comfort that the times spent byASGS-CWOAare not too much to be unacceptable is phenomenon con-forms to a philosophical truth that nothing is perfect and flawlessin the world and ASGS-CWOA is without exception Ac-cordingly in general ASGS-CWOA has a better speed ofconvergence e detailed is shown in Figure 7

242 Supplementary Experiments In order to further verifythe performance of the new proposed algorithm supplementary

experiments are conducted on CEC-2014 (IEEE Congress onEvolutionary Computation 2014) testing functions [38] detailedin Table 4 It is different with the above 12 testing functions thatCEC2014 testing functions are conducted on a computerequipment with Win7-32 bit Matlab 2014a CPU (AMD A6-3400M) and RAM (40GB) due to the match between the givencec14_funcmexw32 andwindows 32 bit system not Linux 64 bit

From Table 5 and Figure 8(a) obviously it can be seenthat new proposed algorithm has better performance thanGA and LWPS in terms of ldquooptimal valuerdquo which meansASGS-CWOA has better performance in finding the globaloptima From Figures 8(b)ndash8(d) it can be seen that the newproposed algorithm has best performance in terms of ldquoworstvaluerdquo ldquoaverage valuerdquo and ldquostandard deviationrdquo in otherwords the new proposed algorithm has best stability androbustness From Figure 8(e) the proportion of ASGS-

Table 3 Continued

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F5

1 467Eminus 10 467Eminus 10 467Eminus 10 312Eminus 25 73 000852 920Eminus 22 666Eminus 15 134Eminus 16 678Eminus 16 594 042983 123Eminus 23 176Eminus 19 156Eminus 20 260Eminus 20 585 241124 375Eminus 136 131Eminus 19 428Eminus 22 224Eminus 11 195 021695 0 0 0 0 101 106Eminus 03

F6

1 minus10316 minus10316 minus10316 179Eminus 15 68 000772 minus10316 minus10316 minus10316 147Eminus 15 578 045973 minus10316 minus10316 minus10316 152Eminus 15 519 247274 minus10316 minus10316 minus10316 125Eminus 15 157 018235 minus10316 minus10316 minus10316 560Eminus 11 332 0021855

F7

1 407Eminus 08 164Eminus 06 317Eminus 07 501Eminus 07 437 03272 278Eminus 16 260Eminus 10 885Eminus 12 310Eminus 11 590 045513 0 167Eminus 16 777Eminus 18 250Eminus 17 556 177454 0 169Eminus 11 636Eminus 15 236Eminus 11 139 007935 0 0 0 0 2406 01064

F8

1 30054 27052 29787 00629 69 000792 30054 30054 30054 453Eminus 16 550 039463 30054 30038 30053 000016125 399 129644 30054 30054 30054 941Eminus 11 67 003075 30054 30054 30054 966Eminus 30 600 30412

F9

1 455Eminus 11 368Eminus 09 891Eminus 11 367Eminus 10 271 012762 247Eminus 20 191Eminus 11 197Eminus 13 191Eminus 12 591 045253 605Eminus 24 312Eminus 19 270Eminus 20 548Eminus 20 580 196064 0 315Eminus 11 146Eminus 22 511Eminus 12 132 007385 0 0 0 0 2686 011364

F10

1 436Eminus 07 04699 00047 0047 109 001972 0 365Eminus 12 112Eminus 13 403Eminus 13 588 045643 0 0 0 0 539 167364 0 0 0 0 78 00255 0 0 0 0 18488 086886

F11

1 15851 15851 15851 378Eminus 07 434 057882 15934 1594 15935 00000826 595 049983 155Eminus 06 21015 07618 05833 594 17894 921Eminus 07 000018069 520Eminus 05 41549ndash5 535 108225 0 0 0 0 1 019021

F12

1 20399 times Eminus 4 00277 00056 00059 597 15772 00151 02574 00863 00552 592 056083 0139 16442 09017 03516 600 211694 428Eminus 08 00214 000038043 00013 600 197285 0 0 0 0 1 74354

Computational Intelligence and Neuroscience 9

CWOA is better than PSO LWPS and CWOA and it meansthe new proposed ASGS-CWOA has advantages in timeperformance

In a word ASGS-CWOA has good optimization qualitystability advantage on performance of iteration numberand speed of convergence

3 Results and Discussion

eoretical research and experimental results reveal thatcompared with traditional genetic algorithm particle swarmoptimization leading wolf pack algorithm and chaotic wolf

optimization algorithm ASGS-CWOA has better globaloptimization accuracy fast convergence speed and highrobustness under the same conditions

In fact the strategy of ASGS greatly strengthens the localexploitation power of the original algorithm which makes iteasier for the algorithm to find the global optimum the strategyof ORM and SDUA effectively enhances the global explorationpower whichmakes the algorithm not easy to fall into the localoptimum and thus easier to find the global optimum

Focusing on Tables 3 and 5 and Figures 6 and 7 abovecompared with the four state-of-the-art algorithms ASGS-CWOA is effective and efficient in most of terms on

1 2 3 4 5

Stan

dard

dev

iatio

n

Algorithm order

Matyas

0E + 00

(a)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Easom

0E + 00

2E ndash 02

4E ndash 02

(b)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sumsquares

0E + 00

5E ndash 06

1E ndash 05

(c)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sphere

0E + 00

3E ndash 02

6E ndash 02

(d)

Stan

dard

dev

iatio

n

Algorithm order51 2 3 4

Execrate

0E + 00

(e)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

(f)

Algorithm order1 2 3 4 5

Bohachevsky3

Stan

dard

dev

iatio

n

0E + 00

(g)

Algorithm order1 2 3 4 5

Bridge

Stan

dard

dev

iatio

n

0E + 00

(h)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Booth

0E + 00

(i)

Algorithm order1 2 3 4 5

Bohachevsky1

Stan

dard

dev

iatio

n

0E + 00

4E ndash 13

8E ndash 13

(j)

Algorithm order1 2 3 4 5

Ackley

Stan

dard

dev

iatio

n

0E + 00

4E ndash 07

8E ndash 07

(k)

Algorithm order1 2 3 54

Quadric

Stan

dard

dev

iatio

n

0E + 00

4E ndash 03

8E ndash 03

(l)

Figure 6 Histograms of the SD about the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g) ndashF7 (h)ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12 (red means the value is out of upper limit of the chart)

10 Computational Intelligence and Neuroscience

Mea

n tim

e

Matyas

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(a)

Mea

n tim

e

Easom

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(b)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sumsquares

00E + 00

60E + 00

12E + 01

(c)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sphere

00E + 00

50E + 00

10E + 01

(d)

Mea

n tim

e

Algorithm order1 2 3 4 5

Execrate

0E + 00

1E + 00

2E + 00

3E + 00

(e)

Mea

n tim

e

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

1E + 00

2E + 00

3E + 00

(f )

Mea

n tim

e

Algorithm order1 2 3 4 5

Bohachevsky3

0E + 00

1E + 00

2E + 00

(g)

Mea

n tim

e

Algorithm order1 2 3 4 5

Bridge

0E + 00

1E + 00

2E + 00

3E + 00

(h)

Mea

n tim

e

Algorithm order1 2 3 4 5

Booth

0E + 00

1E + 00

2E + 00

(i)

Mea

n tim

e

1 2 3 4 5Algorithm order

Bohachevsky1

0E + 00

1E + 00

2E + 00

(j)

Mea

n tim

e

1 2 3 4 5Algorithm order

Ackley

0E + 00

1E + 00

2E + 00

3E + 00

(k)

Mea

n tim

e

1 2 3 4 5Algorithm order

Quadric

0E + 00

4E + 00

8E + 00

(l)

Figure 7 Histograms of the mean time spent on the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g)ndashF7 (h) ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12

Table 4 Part of the CECrsquo14 test functions [38]

No Functions Dimension Range Filowast Fi (xlowast)

1 Rotated High Conditioned Elliptic Function 2 [minus100 100] 1002 Rotated Bent Cigar Function 2 [minus100 100] 2003 Rotated Discuss Function 2 [minus100 100] 3004 Shifted and Rotated Rosenbrockrsquos Function 2 [minus100 100] 4005 Shifted and Rotated Ackleyrsquos Function 2 [minus100 100] 5006 Shifted and Rotated Weier Stress Function 2 [minus100 100] 6007 Shifted and Rotated Griewankrsquos Function 2 [minus100 100] 7008 Shifted Rastriginrsquos Function 2 [minus100 100] 8009 Shifted and Rotated Rastriginrsquos Function 2 [minus100 100] 900

Computational Intelligence and Neuroscience 11

Table 4 Continued

No Functions Dimension Range Filowast Fi (xlowast)

10 Shifted Schwefelrsquos Function 2 [minus100 100] 100011 Shifted and Rotated Schwefelrsquos Function 2 [minus100 100] 110012 Shifted and Rotated Katsuura Function 2 [minus100 100] 120013 Shifted and Rotated Happycat Function [6] 2 [minus100 100] 130014 Shifted and Rotated HGBat Function [6] 2 [minus100 100] 140015 Shifted and Rotated Expanded Griewankrsquos Plus Rosenbrockrsquos Function 2 [minus100 100] 150016 Shifted and Rotated Expanded Schafferrsquos F6 Function 2 [minus100 100] 1600

Table 5 Results of supplementary experiments

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

1 CECrsquo14-F1

GA 1049799 3072111 7855763 81199210 5760667 59246PSO 100 7167217 9669491 23642939 600 018536LWPS 104035 2219115 6507181 2761017 600 87827CWOA 1004238 8063862 30346 489991796 600 90239

ASGS-CWOA 100 100 100 0 1706 075354

2 CECrsquo14-F2

GA 2016655 5624136 15117 22028598 5359 55281PSO 2000028 5541629 18773888 25595547 600 01806LWPS 208964 2723770 7699574 47510154 600 92476CWOA 2000115 1815693 4895193 16911157 600 86269

ASGS-CWOA 200 200 200 0 1578 070151

3 CECrsquo14-F3

GA 3161717 170775349 4508479 23875975 5333 57871PSO 3000007 2391641 4469141 10156353 600 018159LWPS 3003407 5796796 88949 11049497 600 92331CWOA 3001111 2196991 7345405 23224581 600 85499

ASGS-CWOA 300 300 300 0 1762 072644

4 CECrsquo14-F4

GA 400 4000911 4000109 00005323 867333 0924PSO 400 400 400 323Eminus 29 600 017534LWPS 400 400 400 590Eminus 17 600 87697CWOA 400 400 400 0 3087 39497

ASGS-CWOA 400 400 400 0 1296 053133

5 CECrsquo14-F5

GA 500 520 5073432 505077 714333 076291PSO 500 5152678 5009217 67366 600 018704LWPS 500 5000006 5000001 149Eminus 08 600 88697CWOA 500 500 500 134Eminus 22 600 85232

ASGS-CWOA 5017851 520 51418 285606 600 298243

6 CECrsquo14-F6

GA 6000001 6009784 600302 010694 679 086812PSO 600 600 600 286Eminus 11 600 13431LWPS 6000001 6004594 6000245 00087421 600 195858CWOA 600 600 600 232Eminus 25 5995667 191092

ASGS-CWOA 600 600 600 0 4516 175578

7 CECrsquo14-F7

GA 700 7014272 7003767 019181 614333 067019PSO 700 7006977 7000213 00089771 600 018424LWPS 700 700323 7000795 00063773 600 85961CWOA 700 7000271 7000053 469Eminus 05 5145667 75231

ASGS-CWOA 700 700 700 0 2382 11578

8 CECrsquo14-F8

GA 800 8019899 8010945 048507 576333 062438PSO 800 8014806 8002342 013654 59426 017554LWPS 800 800995 8001327 011439 600 8486CWOA 800 8019899 8006965 053787 4973667 72666

ASGS-CWOA 800 800995 8002786 019957 46934 218131

9 CECrsquo14-F9

GA 900 9039798 9013929 11615 582667 063184PSO 900 900995 900186 010073 59131 017613LWPS 900 900995 9000995 0089095 600 85478CWOA 900 9019899 9006965 053787 5035333 69673

ASGS-CWOA 900 9039798 9005273 050398 49631 230867

12 Computational Intelligence and Neuroscience

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 2: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

bacterial foraging optimization (BFO) algorithm was pro-posed by mimicking the foraging behavior of Escherichiacoli in the human esophagus Artificial shuffled frog leapingalgorithm (SFLA) [13] was put forward through the simu-lation of the frog groups to share information foragingprocess and exchange mechanism In [14] Karaboga andBasturk proposed artificial bee colony (ABC) algorithmbased on the concept of postpeak colony breeding and theway of foraging But no algorithm is universal these swarmintelligence optimization algorithms have their own short-comings such as slow convergence easy to fall into localoptimum or low accuracy In [15] the authors proposed animproved ant colony optimization (ICMPACO) algorithmbased on the multipopulation strategy coevolution mech-anism pheromone updating strategy and pheromone dif-fusionmechanism in order to balance the convergence speedand solution diversity and improve optimization perfor-mance in solving large-scale optimization problem Toovercome the deficiencies of weak local search ability ingenetic algorithms (GA) [16] and slow global convergencespeed in ant colony optimization (ACO) algorithm insolving complex optimization problems in [17] the authorsintroduced the chaotic optimization method multi-population collaborative strategy and adaptive control pa-rameters into the GA and ACO algorithm to propose agenetic and ant colony adaptive collaborative optimization(MGACACO) algorithm for solving complex optimizationproblems On the one hand the ant colony optimization(ACO) algorithm has the characteristics of positive feedbackessential parallelism and global convergence but it has theshortcomings of premature convergence and slow conver-gence speed on the other hand the coevolutionary algo-rithm (CEA) emphasizes the existing interaction amongdifferent subpopulations but it is overly formal and does notform a very strict and unified definition erefore in [18]Huimin et al proposed a new adaptive coevolutionary antcolony optimization (SCEACO) algorithm based on thecomplementary advantages and hybrid mechanism

In 2007 Yang and his coauthors proposed the wolfswarm algorithm [19] which is a new swarm intelligencealgorithm e algorithm simulates the wolf predationprocess mainly through the walk raid and siege of threekinds of behavior and survival of the fittest populationupdate mechanism to achieve the purpose of solvingcomplex nonlinear optimization problems Since the wolfpack algorithm is proposed it is widely used in various fieldsand has been developed and improved continually such asfollows In [20] the authors proposed a novel and efficientoppositional wolf pack algorithm to estimate the parametersof Lorenz chaotic system In [21] the modified wolf packsearch algorithm is applied to compute the quasioptimaltrajectories for the rotor wing UAVs in the complex three-dimensional (3D) spaces In [22] wolf algorithm was used tomake out polynomial equation roots of the problem accu-rately and quickly In [23] a new wolf pack algorithm wasdesigned aiming to get better performance including newupdate rule of scout wolf new concept of siege radius andnew attack step kind In [24] Qiang and Zhou presented awolf colony search algorithm based on the leaderrsquos strategy

In [25] to explore the problem of parameter optimizationfor complex nonlinear function a chaos wolf optimizationalgorithm (CWOA) with self-adaptive variable step size wasproposed In [26] Mirjalili et al proposed ldquogrey wolf op-timizerrdquo based on the cooperative hunting behaviour ofwolves which can be regarded as a variant of paper [19] in[27ndash29] grey wolf optimizer is adopted to solve nonsmoothoptimal power flow problems optimal planning of renew-able distributed generation in distribution systems andoptimal reactive power dispatch considering SSSC (staticsynchronous series compensator)

12 Motivation and Incitement From the review of theabove literature the current optimization algorithm ofwolf pack follows the principle of ldquoa certain number ofscouting wolves lead wolves through greedy search with aspecially limited number of times (each wolf has only fouropportunities in some literatures)rdquo the principle of ldquofiercewolves approach the first wolf through a specially limitednumber of times of rushingrdquo in the call and rushingprocess and the principle of ldquofierce wolves pass through aspecially limited number of times of rushingrdquo in the siegeprocess greedy search for prey ldquoin this operation mech-anism the algorithm itself too imitates the actual huntingbehavior of wolves especially inrdquo grey wolf optimizationrdquowhich divides the wolves in the algorithm into a moredetailed level e advantage of doing so is that it caneffectively guarantee the final convergence of the algorithmbecause it completely mimics the biological foragingprocess however the goal of the intelligent optimizationalgorithm of wolves is to solve the optimization problemefficiently and imitating wolvesrsquo foraging behavior is onlya way erefore the intelligent optimization algorithm ofwolves should be more abstract on the basis of imitatingwolvesrsquo foraging behavior in order to improve its opti-mization ability and efficiency Due to the reasons aboveeach of variants about wolf pack algorithm mentionedabove has its own limitation or inadequacy including slowconvergence weak optimization accuracy and narrowscope of application

13 Contribution and Paper Organization To further im-prove the wolf pack optimization algorithm to make it havebetter performance this paper proposed an adaptiveshrinking grid search chaotic wolf optimization algorithmusing standard deviation updating amount (ASGS-CWOA)e paper consists of five parts including IntroductionPrinciple of LWPS and CWOA (both of them are classicvariants of wolf pack optimization algorithm) Improve-ment Steps of ASGS-CWOA Numerical Experiments andcorresponding analyses

2 Materials and Methods

21 Principle of LWPS and CWOA Many variants aboutoriginal wolf pack algorithm are mentioned above in thisarticle we focus on LWPS and CWOA

2 Computational Intelligence and Neuroscience

211 LWPS e leader strategy was employed into tradi-tional wolf pack algorithm detailed in [24] unlike the simplesimulation of wolvesrsquo hunting in [26] LWPS abstracts wolvesrsquohunting activities into five steps According to the thought ofLWPS the specific steps and related formulas are as follows

(1) Initialization e step is to disperse wolves to searchspace or solution space in some way Here the number ofwolf populations can be represented by N and dimensionsof the search space can be represented by D and then wecan get the position of the i-th wolf by the followingequation

xi xi1 xi2 xid xi D( 1113857 (i 1 2 N d 1 2 D)

(1)

where xid is the location of the i-th wolf in d-th dimensionNmeans the number of wolf population and D is the maxi-mum dimension number of the solution space Initial lo-cation of each wolf can be produced by

xid xmin + rand(0 1) times xmax minus xmin( 1113857 (2)

where rand (0 1) is a random number distributed uniformlyin the interval [0 1] and xmax and xmin are the upper andlower limits of the solution space respectively

(2) Competition for the Leader Wolf is step is to find theleader wolf with best fitness value Firstly Q wolves shouldbe chosen as the candidates which are the top Q of wolfpack according to their fitness Secondly each one of the Qwolves searches around itself in D directions so a newlocation can be got by equation (3) if its fitness is betterthan the current fitness the current wolf i should move tothe new location otherwise do not move then searchingwill be continued until the searching time is greater thanthe maximum number Tmax or the searched locationcannot be improved any more Finally by comparing thefitness ofQwolves the best wolf can be elected and it is theleader wolf

ximinusnew xid + rand(minus1 1) times step a (3)

where step_a is the search step size

(3) Summon and Raid Each of the other wolves will raidtoward the location of leader wolf as soon as it receives thesummon from leader wolf and it continues to search forpreys during the running road Finally a new location can begot by equation (4) if its fitness is better than the one ofcurrent location the wolf will move to the new one oth-erwise stay at the current location

xidminusnew xid + rand(minus1 1) times step b times xld minus xid( 1113857 (4)

where xld is the location of the leader wolf in d-th dimensionand step_b is the run step size

(4) Siege the Prey [30] After the above process wolves cometo the nearby of the leader wolf and will be ready to catch theprey until the prey is gote new location for anyone exceptleader wolf can be obtained by the following equation

xidminusnew xid rm ltR0

xld + rand(minus1 1) times step c rm gtR0 rm R01113896

(5)

where step_c is the siege step size rm is a number generatedrandomly by the function rand (minus1 1) distributed uniformlyin the interval [minus1 1] and R0 is a preset siege threshold

In the optimization problem with the current solutiongetting closer and closer to the theoretical optimal value thesiege step size of wolf pack also decreases with the increase initeration times so that wolves have greater probability to findbetter values e following equation is about the siege stepsize which is obtained from [31]

step c step cmin times xdminusmax minus xdminusmin( 1113857

times expln step cminstep cmax( 1113857lowast t

T1113888 1113889

(6)

where step_cmin is the lower limit of the siege step sizexdndashmax is the upper limit of search space in d-th dimensionxdndashmin is the lower limit of search space in d-th dimensionstep_cmax is the upper limit of the siege step size and tmeansthe current number of iterations while T represents theupper limit one

Being out of boundary is not allowed to each wolf soequation (7) is used to deal with the possible transboundary

xidminusnew xdminusmin xidminusnew ltxdminusmin

xdminusmax xidminusnew ltxdminusmax1113896 (7)

(5) Distribution of Food and Regeneration According to thewolf group renewal mechanism of ldquosurvival of the strongrdquogroup renewal is carried out including the worst m wolveswill die and be deleted and new m wolves will be generatedby equation (2)

212 CWOA CWOA develops from the LWPS by intro-ducing the strategy of chaos optimization and adaptiveparameters into the traditional wolf pack algorithm eformer utilizes logistic map to generate chaotic variables thatare projected to solution space in order to search while thelatter introduces adaptive step size to enhance the perfor-mance and they work welle equation of logistic map is asfollows which is from [32]

chaosk+1 μ times chaosk times 1 minus chasok( 1113857chaosk isin [0 1] (8)

where μ is the control variable and when μ is 4 the system isin chaos state

e strategy of adaptive variable step size search includesthe improvement of search step size and siege step-size Inthe early stage wolves should search for preys with a largestep size so as to cover the whole solution space as much aspossible while in the later stage with the wolves gatheringcontinuously wolves should take a small step size to searchfor preys so that they can search finely in a small target areaAs a result the possibility of finding the global optimalsolution will be increased And by equation (9) we can get

Computational Intelligence and Neuroscience 3

the step size of migration while by equation (10) the one ofsiege can be obtained

step anew α times step a0 α 1 minust minus 12

1113874 11138752

(9)

step cnew step c0 times rand(0 1) times 1 minust minus 12

1113874 11138752

1113890 1113891 (10)

where step_c0 is the starting step size for siege

22 Improvement

221 Strategy of Adaptive Shrinking Grid Search (ASGS)In the solution space with D dimensions a searching wolfneeds to migrate along different directions in order to findpreys during any iteration of themigration it is according tothe thought of the original LWPS and CWOA that there isonly a dynamic point around the current location to begenerated according to equation (3)

However a single dynamic point is isolated and not wellenough to search in the current domain space of some a wolfFigures 1(a) and 1(b) show the two-dimensional and thethree-dimensional spaces respectively In essence the al-gorithm needs to take the whole local domain space of thecurrent wolf to be considered in order to find the local bestlocation so ASGS was used to generate an adaptive gridcentered on the current wolf which is extending along 2timesDdirections and includes (2timesK+ 1)D nodes where K meansthe number of nodes taken along any direction Figures 2(a)and 2(b) show the migration about ASGS in two-dimen-sional and three-dimensional space respectively and forbrevity here K is set to 2 detailed in the following equation

k xidminusnew1113858 1113859 xid + step anew times k

(k minusK minusK + 1 0 K minus 1 K)

(11)

So a node in the grid can be defined as

ximinusnew k xi1minusnew1113858 1113859 k xi2minusnew1113858 1113859 k xi Dminusnew1113858 11138591113864 1113865

(k minusK minusK + 1 0 K minus 1 K)

(12)

During any migration the node with best fitness in thegrid should be selected as new location of the searching wolfand after any migration the leader wolf of the population isupdated according to the new fitness It needs to be par-ticularly pointed out that the searching grid will be smallerand smaller as the step_anew becomes smaller Obviouslycompared to a single isolated point in traditional methodsthe searching accuracy and the possibility of finding theoptimal value can be improved by ASGS including(2timesK+ 1)D nodes

As the same reason during the process of the siege thesame strategy is used to find the leader wolf in the localdomain space including (2timesK+ 1)D points But the differentis that the step size of siege is smaller than the one of mi-gration After any migration the leader wolf of the pop-ulation is updated according to new current fitness andthen the current best wolf will be the leader wolftemporarily

222 Strategy of Opposite-Middle Raid (OMR)According to the idea of the traditional wolf pack algo-rithms it is unfortunately that raid pace is too small orunreasonable and wolves cannot rapidly appear around theleader wolf when they receive the summon signal from theleader wolf as shown in Figure 3(a) So OMR is put forwardto solve the problem above and its main clue is that theopposite location of the current location relative to theleader wolf should be calculated by the following equation

xidminusopposition 2 times xld minus xid (13)

If the opposite location has better fitness than the currentone the current wolf should move to the opposite oneOtherwise the following is obtained

ximinusm d xi d + xl d

2

xi bestfitness xi 1 xi 2 xi (Dminus1) xi D1113960 1113961 xi 1 xi 2 xi (Dminus1) xm D1113960 1113961 ximinusm 1 ximinusm 2 ximinusm (Dminus1) ximinusm D1113960 11139611113872 1113873

⎧⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎩

(14)

where ximinusm_d is the middle location in d-th dimension be-tween the i-th wolf and the leader wolf xi_d is the location ofthe wolf i in d-th dimension and xl_d is the location of theleader wolf in d-th dimension ldquoBestfitnessrdquo returns a wolfwith the best fitness from the given ones

From equation (14) and Figure 3(b) it is seen that thereare 2D points among the current point and the middle pointand as the result of each raid the point with best fitness ischosen as the new location of the current i-th wolf ereby

not only the wolves can appear around the leader wolf assoon as possible but also they can try to find new preys as faras possible

223 Standard Deviation Updating Amount (SDUA)According to the basic idea of the leader wolf pack algo-rithm during the iterations some wolves with poorer fitnesswill be continuously eliminated while the same amount of

4 Computational Intelligence and Neuroscience

ndash10ndash8ndash6ndash4ndash2

02468

10

D ndash

2

6ndash6 2ndash2 10ndash10 4ndash4 8ndash8 0D ndash 1

(a)

D ndash 1D ndash 2

10

10

5

5

00

ndash5 ndash5ndash10

ndash10

ndash10

ndash5

0

5

10

D ndash

3

(b)

Figure 1 (a)e range of locations of searching or competing wolves in CWOAwhen dimension is 2 (b) the range of locations of searchingor competing wolves in CWOA when dimension is 3 where step_a 10 or step_c 10 (the red point means the current location of some awolf)

20

15

10

5

0

ndash5

ndash10

ndash15

ndash20

D ndash

2

ndash20 ndash15 ndash10 ndash5 0 5 10 15 20D ndash 1

(a)

D ndash

3

D ndash 1D ndash 2

20

20

20

10

10

10

0

00

ndash10

ndash10ndash10ndash20ndash20

ndash20

(b)

Figure 2 e searching situation of wolves in ASGS-CWOA when dimension is (a) 2 and (b) 3 where step_a 10 or step_c 10 and K 2(the red point means the current location of some a wolf and the black ones mean the searching locations of the wolf)

10

10

86420 0ndash2

ndash5

5

D ndash 1D ndash 2

ndash202468

10

D ndash

3

(a)

10

10

8

8

6

64 42

20

0 D ndash 2D ndash 1

0

2

4

6

8

10

D ndash

3

(b)

Figure 3 (a) e range of locations of wolves in raid when dimension is 3 according to the original thought of wolf pack algorithm (b) therange of locations of wolves in raid when dimension is 3 according to OMR where step_a 2 e red point means the current position ofsome wolves the pink one indicates the position of the lead wolf and the blue one means the middle point of current wolf and lead wolf

Computational Intelligence and Neuroscience 5

wolves will be added to the population to make sure that thebad gene can be eliminated and the population diversity ofthe wolf pack can be ensured so it is not easy to fall into thelocal optimal solution and the convergence rate can beimproved However the amount of wolves that should beeliminated and added to the wolf pack is a fixed numberwhich is 5 in LWPS and CWOA mentioned before and thatis stiff and unreasonable In fact the amount should be adynamic number to reflect the dynamic situation of the wolfpack during any iteration

Standard deviation (SD) is a statistical concept whichhas been widely used in many fields Such as in [33]standard deviation was used into industrial equipment tohelp process the signal of bubble detection in liquid sodiumin [34] standard deviation is used with delay multiply toform a new weighting factor which is introduced to enhancethe contrast of the reconstructed images in medical fields in[35] based on eight-year-old dental trauma research datastandard deviation was utilized to help analyze the potentialof laser Doppler flowmetry the beam forming performancehas a large impact on image quality in ultrasound imagingto improve image resolution and contrast in [36] a newadaptive weighting factor for ultrasound imaging calledsignal mean-to-standard-deviation factor (SMSF) was pro-posed based on which researchers put forward an adaptivebeam forming method for ultrasound imaging based on themean-to-standard-deviation factor in [37] standard devi-ation was adopted to help analyze when an individual shouldstart social security

In this paper we take a concept named ldquoStandard De-viation Updating Amountrdquo to eliminate wolves with poorfitness and dynamically reflect the situation of the wolf packwhich means the population amount of wolf pack andstandard deviation about their fitness determine how manywolves will be eliminated and regenerated Standard devi-ation can be obtained as follows

σ

1N

1113944

N

i1xi minus μ( 1113857

2

11139741113972

(15)

whereNmeans the number of wolf pack ximeans the fitnessof the i-th wolf and μ is the mean value of the fitness enSDUA is gained by the following formula

SDUA

SDUA + 1 fitness(xi)ltμ minus σ2

1113874 1113875

do nothing fitness(xi)geμ minus σ2

1113874 1113875

⎧⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎩

(16)

SDUA is zero when the iteration begins next differencebetween the mean value and the SD about the fitness of thewolf pack should be computed and if the fitness of currentwolf is less than the difference SDUA increases by 1 oth-erwise nothing must be done So the value of SDUA is gotand SDUA wolves should be eliminated and regenerated

e effect of using a fixed value is displayed inFigure 4(a) and Figure 4(b) shows the effect of using ASD Itis clear that the amount in the latter is fluctuant as the wolf

pack has a different fitness during any iteration and betterthan the former to reflect the dynamic situations of itera-tions Accordingly not only will the bad gene from the wolfpack be eliminated but the population diversity of the wolfpack can be also kept better so that it is difficulty to fall intolocal optimal solution and the convergence rate can beimproved as far as possible

23 Steps of ASGS-CWOA Based on the thoughts ofadaptive grid search and adaptive standard deviationupdating amount mentioned above ASGS-CWOA is pro-posed the implementation steps are following and its flowchart is shown in Figure 5

231 Initialization of Population e following parametersshould be initially assigned amount of wolf population is Nand dimension of searching space is D for brevity theranges of all dimensions are [rangemin rangemax] the upperlimit of iterations is T value of step size in migration isstep_a value of step size in summon and raid is step_b valueof step size in siegement is step_c and the population can begenerated initially by the following equation [25]

xid range min + chaos(0 1) times(range max minus range min)

(17)

where chaos (0 1) returns a chaotic variable distributed in[0 1] uniformly

232 Migration Here an adaptive grid centered on thecurrent wolf can be generated by equation (12) which in-cludes 2 nodes After migration the wolf with best fitnesscan be found as the leader wolf

233 Summon and Raid After migration the others beginto run toward the location of leader wolf and during theprocess of raiding any wolf keeps searching for preys fol-lowing equations (13) and (14)

234 Siege After summon and raid all other wolves cometo be around the leader wolf in order to siege the preyfollowing equations

k xidminusnew1113858 1113859 xid + step c times k

(k minusK minusK + 1 0 K minus 1 K)

ximinusnew k xi1minusnew1113858 1113859 k xi2minusnew1113858 1113859 k xi Dminusnew1113858 11138591113864 1113865

(18)

After any siegement the newest leader wolf can be gotwhich has best fitness temporarily

235 Regeneration After siege some wolves with poorerfitness will be eliminated while the same amount of wolveswill be regenerated according to equation (14)

6 Computational Intelligence and Neuroscience

236 Loop Here if t (the number of current iteration) isbigger than T (the upper limit of iterations) exit the loopOtherwise go to 42 for loop until t is bigger than T Andwhen the loop is over the leader wolf maybe is the globaloptima that the algorithm can find

24 Numerical Experiments In this paper four state-of-the-art optimization algorithms are taken to validate the per-formance of new algorithm proposed above detailed inTable 1

241 Experiments on Twelve Classical Benchmark FunctionsTable 2 shows benchmark functions for testing and Table 3shows the numerical experimental data of five algorithms on12 benchmark functions for testing mentioned above es-pecially the numerical experiments were done on a com-puter equipped with Ubuntu 16044 operating system Intel

(R) Core (TM) i7-5930K processor and 64G memory as wellas Matlab 2017a For genetic algorithm toolbox in Matlab2017a is utilized for GA experiments the PSO experimentswere implemented by a ldquoPSOtrdquo toolbox for Matlab LWPS isgot from the thought in [24] CWOA is carried out based onthe idea in [25] and the new algorithm ASGS-CWOA iscarried out in Matlab 2017a which is an integrated devel-opment environment with M programming language Toprove the good performance of the proposed algorithmoptimization calculations were run for 100 times on anybenchmark function for testing as well as any optimizationalgorithm mentioned above

Firstly focusing on Table 3 seen from the term ofbest value only new algorithm can find the theoreticalglobal optima of all benchmark functions for testing soASGS-CWOA has better performance in optimizationaccuracy

Furthermore for 100 times the worst and average valuesof ASGS-CWOA in the all benchmark functions for testingexcept the F2 named ldquoEasomrdquo reach the theoretical valuesrespectively and the new algorithm has better standard de-viation about best values detailed in Figure 6 from which itcan be seen that nearly all the standard deviations are bestexcept F2 and F6 especially the standard deviations are zeroon F1 F3 F4 F5 F7 F9 F10 F11 and F12 Figure 6(b) showsthat the value of standard deviation about ASGS-CWOA onF2 is not worst in five algorithms and it has better perfor-mance than GA Figure 6(f) indicates that the value ofstandard deviation about ASGS-CWOA on F6 reaches 10minus11weaker than the values on others Figure 6(h) demonstratesthat the value of standard deviation about ASGS-CWOA onF8 reaches 10minus30 which is not zero but best in five algorithmserefore ASGS-CWOA has better stability than others

In addition focusing on the number of mean iterationASGS-CWOAhas weaker iteration times on the test function 2ldquoEasomrdquo the test function 8 ldquoBridgerdquo and the test function 10ldquoBohachevsky1rdquo but it has better performances on the othertest functions especially the iteration times on test function 34 5 6 11 and 12 are 1 or about 1 So ASGS-CWOA has betteradvantage on performance of iteration number

5

Fixed amount 5

ndash5

15

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(a)

ASDUA

ndash5

5

15

25

35

45

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(b)

Figure 4 (a)e number that how many wolves will be starved to death and regenerated as the iteration goes on It is a fixed number 5 (b)e number that how many wolves will be starved to death and regenerated following the new method named SDUA as the iteration goeson

Step 1 initialize parameters of algorithm

Step 2 migration process with ASGS

Step 3 summon ndash raid process with OMR

Step 4 siege process with ASGS

Step 5 population regeneration with SDUA

End condition satisfied

Output or record head wolf position and fitness value

Yes

No

Step 6

Figure 5 Flow chart for the new proposed algorithm

Computational Intelligence and Neuroscience 7

Table 1 Arithmetic configuration

Order Name Configuration

1 Genetic algorithm [16] e crossover probability is set at 07 the mutation probability is set at 001 while the generation gap isset at 095

2 Particle swarmoptimization

Value of individual acceleration 2 value of weighted value of initial time 09 value of weighted value ofconvergence time 04 limit individual speed at 20 of the changing range

3 LWPS Migration step size stepa 15 raid step size stepb 09 siege threshold r0 02 upper limit of siege step sizestepcmax 106 lower limit of siege step size stepcmin 10minus2 updating number of wolves m 5

4 CWOAAmount of campaign wolves q 5 searching direction h 4 upper limit of searchHmax 15 migration stepsize stepa0 15 raid step size stepb 09 siege threshold r0 02 value of siege step size stepc0 16 updating

amount of wolves m 5

5 ASGS-CWOA Migration step-size stepa0 15 upper limit of siege step-size stepcmax 1e6 lower limit of siege step sizestep cmin 1eminus 40 upper limit number of iteration T 600 number of wolf population N 50

Table 2 Test functions [25]

Order Function Expression Dimension Range Optimum1 Matyas F1 026times (x2

1 +x22)minus 048times x1 times x2 2 [minus10 10] min f 0

2 Easom F2 minuscos (x1)times cos (x2)times exp[minus (x1minus π)2 minus (x2minus π)2] 2 [minus100 100] min f minus13 Sumsquares F3 1113936

Di1 ilowast x2

i 10 [minus15 15] min f 0

4 Sphere F4 1113936Di1 x2

i 30 [minus15 15] min f 05 Eggcrate F5 x2

1 + x22 + 25times (sin2 x1 + sin2 x2) 2 [minusπ π] min f 0

6 Six hump camelback F6 4times x1 minus 21x4

1 + (13)times x61 + x1 times x2 minus 4times x2

2 + 4times x42 2 [minus5 5] min

f minus103167 Bohachevsky3 F7 x2

1 + 2times x22 minus 03times cos (3πx1 + 4πx2) + 03 2 [minus100 100] min f 0

8 Bridge F8 (sinx21 + x2

2

1113969)

x21 + x2

2

1113969minus 07129 2 [minus15 15] max f 30054

9 Booth F9 (x1 + 2times x2 minus 7)2 + (2times x1 + x2 minus 5)2 2 [minus10 10] min f 010 Bohachevsky1 F10 x2

1 + 2x22 minus 03times cos (3πx1)minus 04times cos (4πx2) + 07 2 [minus100 100] min f 0

11 Ackley F11 minus20times exp (minus02times

(1D) 1113936Di1 x2

i

1113969

)minus exp((1D) 1113936

Di1 cos 2 πxi) + 20 + e

6 [minus15 15] min f 0

12 Quadric F12 1113936Di1 (1113936

ik1 xk)2 10 [minus15 15] min f 0

Table 3 Experimental results

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F1

1 364Eminus 12 283Eminus 08 352Eminus 10 283Eminus 09 488 040382 414Eminus 21 326Eminus 14 115Eminus 15 387Eminus 15 591 042823 479Eminus 26 495Eminus 21 258Eminus 22 579Eminus 22 585 240954 104Eminus 119 169Eminus 12 585Eminus 24 213Eminus 13 295 042665 0 0 0 0 2744 010806

F2

1 minus1 minus000008110 minus097 01714 153 003752 minus1 minus1 minus1 324Eminus 06 583 041963 minus1 minus1 minus1 0 507 191114 minus1 minus1 minus1 0 72 003215 minus1 0 minus002 00196 59262 27036

F3

1 850Eminus 07 161Eminus 05 277Eminus 06 335Eminus 06 593 143962 000017021 00114 00023 00022 595 047923 760Eminus 07 87485 14802 15621 600 483194 561Eminus 07 149Eminus 05 107Eminus 06 874Eminus 06 600 414535 0 0 0 0 1 102205

F4

1 0004 00331 00127 00055 592 319022 00351 01736 00902 00292 594 048683 35579 103152 80497 12048 600 147424 000001114 00098 00012 00048 600 084315 0 0 0 0 1 100834

8 Computational Intelligence and Neuroscience

Finally the mean time spent by ASGS-CWOA are smallerthan the othersrsquo on F1 F5 F6 and F11 as shown in Figures 7(a)7(e) 7(f) and 7(k) respectively On F7 F9 and F10 the fivealgorithms spent time roughly in the same order of magnitudeand ASGS-CWOA has better performance than GA PSO andLWPS on F7 and F9 as shown in Figures 7(g) and 7(i) Un-fortunately ASGS-CWOA spent most time on F2 F3 F4 F8and F12 yet it is a comfort that the times spent byASGS-CWOAare not too much to be unacceptable is phenomenon con-forms to a philosophical truth that nothing is perfect and flawlessin the world and ASGS-CWOA is without exception Ac-cordingly in general ASGS-CWOA has a better speed ofconvergence e detailed is shown in Figure 7

242 Supplementary Experiments In order to further verifythe performance of the new proposed algorithm supplementary

experiments are conducted on CEC-2014 (IEEE Congress onEvolutionary Computation 2014) testing functions [38] detailedin Table 4 It is different with the above 12 testing functions thatCEC2014 testing functions are conducted on a computerequipment with Win7-32 bit Matlab 2014a CPU (AMD A6-3400M) and RAM (40GB) due to the match between the givencec14_funcmexw32 andwindows 32 bit system not Linux 64 bit

From Table 5 and Figure 8(a) obviously it can be seenthat new proposed algorithm has better performance thanGA and LWPS in terms of ldquooptimal valuerdquo which meansASGS-CWOA has better performance in finding the globaloptima From Figures 8(b)ndash8(d) it can be seen that the newproposed algorithm has best performance in terms of ldquoworstvaluerdquo ldquoaverage valuerdquo and ldquostandard deviationrdquo in otherwords the new proposed algorithm has best stability androbustness From Figure 8(e) the proportion of ASGS-

Table 3 Continued

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F5

1 467Eminus 10 467Eminus 10 467Eminus 10 312Eminus 25 73 000852 920Eminus 22 666Eminus 15 134Eminus 16 678Eminus 16 594 042983 123Eminus 23 176Eminus 19 156Eminus 20 260Eminus 20 585 241124 375Eminus 136 131Eminus 19 428Eminus 22 224Eminus 11 195 021695 0 0 0 0 101 106Eminus 03

F6

1 minus10316 minus10316 minus10316 179Eminus 15 68 000772 minus10316 minus10316 minus10316 147Eminus 15 578 045973 minus10316 minus10316 minus10316 152Eminus 15 519 247274 minus10316 minus10316 minus10316 125Eminus 15 157 018235 minus10316 minus10316 minus10316 560Eminus 11 332 0021855

F7

1 407Eminus 08 164Eminus 06 317Eminus 07 501Eminus 07 437 03272 278Eminus 16 260Eminus 10 885Eminus 12 310Eminus 11 590 045513 0 167Eminus 16 777Eminus 18 250Eminus 17 556 177454 0 169Eminus 11 636Eminus 15 236Eminus 11 139 007935 0 0 0 0 2406 01064

F8

1 30054 27052 29787 00629 69 000792 30054 30054 30054 453Eminus 16 550 039463 30054 30038 30053 000016125 399 129644 30054 30054 30054 941Eminus 11 67 003075 30054 30054 30054 966Eminus 30 600 30412

F9

1 455Eminus 11 368Eminus 09 891Eminus 11 367Eminus 10 271 012762 247Eminus 20 191Eminus 11 197Eminus 13 191Eminus 12 591 045253 605Eminus 24 312Eminus 19 270Eminus 20 548Eminus 20 580 196064 0 315Eminus 11 146Eminus 22 511Eminus 12 132 007385 0 0 0 0 2686 011364

F10

1 436Eminus 07 04699 00047 0047 109 001972 0 365Eminus 12 112Eminus 13 403Eminus 13 588 045643 0 0 0 0 539 167364 0 0 0 0 78 00255 0 0 0 0 18488 086886

F11

1 15851 15851 15851 378Eminus 07 434 057882 15934 1594 15935 00000826 595 049983 155Eminus 06 21015 07618 05833 594 17894 921Eminus 07 000018069 520Eminus 05 41549ndash5 535 108225 0 0 0 0 1 019021

F12

1 20399 times Eminus 4 00277 00056 00059 597 15772 00151 02574 00863 00552 592 056083 0139 16442 09017 03516 600 211694 428Eminus 08 00214 000038043 00013 600 197285 0 0 0 0 1 74354

Computational Intelligence and Neuroscience 9

CWOA is better than PSO LWPS and CWOA and it meansthe new proposed ASGS-CWOA has advantages in timeperformance

In a word ASGS-CWOA has good optimization qualitystability advantage on performance of iteration numberand speed of convergence

3 Results and Discussion

eoretical research and experimental results reveal thatcompared with traditional genetic algorithm particle swarmoptimization leading wolf pack algorithm and chaotic wolf

optimization algorithm ASGS-CWOA has better globaloptimization accuracy fast convergence speed and highrobustness under the same conditions

In fact the strategy of ASGS greatly strengthens the localexploitation power of the original algorithm which makes iteasier for the algorithm to find the global optimum the strategyof ORM and SDUA effectively enhances the global explorationpower whichmakes the algorithm not easy to fall into the localoptimum and thus easier to find the global optimum

Focusing on Tables 3 and 5 and Figures 6 and 7 abovecompared with the four state-of-the-art algorithms ASGS-CWOA is effective and efficient in most of terms on

1 2 3 4 5

Stan

dard

dev

iatio

n

Algorithm order

Matyas

0E + 00

(a)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Easom

0E + 00

2E ndash 02

4E ndash 02

(b)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sumsquares

0E + 00

5E ndash 06

1E ndash 05

(c)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sphere

0E + 00

3E ndash 02

6E ndash 02

(d)

Stan

dard

dev

iatio

n

Algorithm order51 2 3 4

Execrate

0E + 00

(e)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

(f)

Algorithm order1 2 3 4 5

Bohachevsky3

Stan

dard

dev

iatio

n

0E + 00

(g)

Algorithm order1 2 3 4 5

Bridge

Stan

dard

dev

iatio

n

0E + 00

(h)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Booth

0E + 00

(i)

Algorithm order1 2 3 4 5

Bohachevsky1

Stan

dard

dev

iatio

n

0E + 00

4E ndash 13

8E ndash 13

(j)

Algorithm order1 2 3 4 5

Ackley

Stan

dard

dev

iatio

n

0E + 00

4E ndash 07

8E ndash 07

(k)

Algorithm order1 2 3 54

Quadric

Stan

dard

dev

iatio

n

0E + 00

4E ndash 03

8E ndash 03

(l)

Figure 6 Histograms of the SD about the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g) ndashF7 (h)ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12 (red means the value is out of upper limit of the chart)

10 Computational Intelligence and Neuroscience

Mea

n tim

e

Matyas

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(a)

Mea

n tim

e

Easom

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(b)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sumsquares

00E + 00

60E + 00

12E + 01

(c)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sphere

00E + 00

50E + 00

10E + 01

(d)

Mea

n tim

e

Algorithm order1 2 3 4 5

Execrate

0E + 00

1E + 00

2E + 00

3E + 00

(e)

Mea

n tim

e

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

1E + 00

2E + 00

3E + 00

(f )

Mea

n tim

e

Algorithm order1 2 3 4 5

Bohachevsky3

0E + 00

1E + 00

2E + 00

(g)

Mea

n tim

e

Algorithm order1 2 3 4 5

Bridge

0E + 00

1E + 00

2E + 00

3E + 00

(h)

Mea

n tim

e

Algorithm order1 2 3 4 5

Booth

0E + 00

1E + 00

2E + 00

(i)

Mea

n tim

e

1 2 3 4 5Algorithm order

Bohachevsky1

0E + 00

1E + 00

2E + 00

(j)

Mea

n tim

e

1 2 3 4 5Algorithm order

Ackley

0E + 00

1E + 00

2E + 00

3E + 00

(k)

Mea

n tim

e

1 2 3 4 5Algorithm order

Quadric

0E + 00

4E + 00

8E + 00

(l)

Figure 7 Histograms of the mean time spent on the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g)ndashF7 (h) ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12

Table 4 Part of the CECrsquo14 test functions [38]

No Functions Dimension Range Filowast Fi (xlowast)

1 Rotated High Conditioned Elliptic Function 2 [minus100 100] 1002 Rotated Bent Cigar Function 2 [minus100 100] 2003 Rotated Discuss Function 2 [minus100 100] 3004 Shifted and Rotated Rosenbrockrsquos Function 2 [minus100 100] 4005 Shifted and Rotated Ackleyrsquos Function 2 [minus100 100] 5006 Shifted and Rotated Weier Stress Function 2 [minus100 100] 6007 Shifted and Rotated Griewankrsquos Function 2 [minus100 100] 7008 Shifted Rastriginrsquos Function 2 [minus100 100] 8009 Shifted and Rotated Rastriginrsquos Function 2 [minus100 100] 900

Computational Intelligence and Neuroscience 11

Table 4 Continued

No Functions Dimension Range Filowast Fi (xlowast)

10 Shifted Schwefelrsquos Function 2 [minus100 100] 100011 Shifted and Rotated Schwefelrsquos Function 2 [minus100 100] 110012 Shifted and Rotated Katsuura Function 2 [minus100 100] 120013 Shifted and Rotated Happycat Function [6] 2 [minus100 100] 130014 Shifted and Rotated HGBat Function [6] 2 [minus100 100] 140015 Shifted and Rotated Expanded Griewankrsquos Plus Rosenbrockrsquos Function 2 [minus100 100] 150016 Shifted and Rotated Expanded Schafferrsquos F6 Function 2 [minus100 100] 1600

Table 5 Results of supplementary experiments

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

1 CECrsquo14-F1

GA 1049799 3072111 7855763 81199210 5760667 59246PSO 100 7167217 9669491 23642939 600 018536LWPS 104035 2219115 6507181 2761017 600 87827CWOA 1004238 8063862 30346 489991796 600 90239

ASGS-CWOA 100 100 100 0 1706 075354

2 CECrsquo14-F2

GA 2016655 5624136 15117 22028598 5359 55281PSO 2000028 5541629 18773888 25595547 600 01806LWPS 208964 2723770 7699574 47510154 600 92476CWOA 2000115 1815693 4895193 16911157 600 86269

ASGS-CWOA 200 200 200 0 1578 070151

3 CECrsquo14-F3

GA 3161717 170775349 4508479 23875975 5333 57871PSO 3000007 2391641 4469141 10156353 600 018159LWPS 3003407 5796796 88949 11049497 600 92331CWOA 3001111 2196991 7345405 23224581 600 85499

ASGS-CWOA 300 300 300 0 1762 072644

4 CECrsquo14-F4

GA 400 4000911 4000109 00005323 867333 0924PSO 400 400 400 323Eminus 29 600 017534LWPS 400 400 400 590Eminus 17 600 87697CWOA 400 400 400 0 3087 39497

ASGS-CWOA 400 400 400 0 1296 053133

5 CECrsquo14-F5

GA 500 520 5073432 505077 714333 076291PSO 500 5152678 5009217 67366 600 018704LWPS 500 5000006 5000001 149Eminus 08 600 88697CWOA 500 500 500 134Eminus 22 600 85232

ASGS-CWOA 5017851 520 51418 285606 600 298243

6 CECrsquo14-F6

GA 6000001 6009784 600302 010694 679 086812PSO 600 600 600 286Eminus 11 600 13431LWPS 6000001 6004594 6000245 00087421 600 195858CWOA 600 600 600 232Eminus 25 5995667 191092

ASGS-CWOA 600 600 600 0 4516 175578

7 CECrsquo14-F7

GA 700 7014272 7003767 019181 614333 067019PSO 700 7006977 7000213 00089771 600 018424LWPS 700 700323 7000795 00063773 600 85961CWOA 700 7000271 7000053 469Eminus 05 5145667 75231

ASGS-CWOA 700 700 700 0 2382 11578

8 CECrsquo14-F8

GA 800 8019899 8010945 048507 576333 062438PSO 800 8014806 8002342 013654 59426 017554LWPS 800 800995 8001327 011439 600 8486CWOA 800 8019899 8006965 053787 4973667 72666

ASGS-CWOA 800 800995 8002786 019957 46934 218131

9 CECrsquo14-F9

GA 900 9039798 9013929 11615 582667 063184PSO 900 900995 900186 010073 59131 017613LWPS 900 900995 9000995 0089095 600 85478CWOA 900 9019899 9006965 053787 5035333 69673

ASGS-CWOA 900 9039798 9005273 050398 49631 230867

12 Computational Intelligence and Neuroscience

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 3: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

211 LWPS e leader strategy was employed into tradi-tional wolf pack algorithm detailed in [24] unlike the simplesimulation of wolvesrsquo hunting in [26] LWPS abstracts wolvesrsquohunting activities into five steps According to the thought ofLWPS the specific steps and related formulas are as follows

(1) Initialization e step is to disperse wolves to searchspace or solution space in some way Here the number ofwolf populations can be represented by N and dimensionsof the search space can be represented by D and then wecan get the position of the i-th wolf by the followingequation

xi xi1 xi2 xid xi D( 1113857 (i 1 2 N d 1 2 D)

(1)

where xid is the location of the i-th wolf in d-th dimensionNmeans the number of wolf population and D is the maxi-mum dimension number of the solution space Initial lo-cation of each wolf can be produced by

xid xmin + rand(0 1) times xmax minus xmin( 1113857 (2)

where rand (0 1) is a random number distributed uniformlyin the interval [0 1] and xmax and xmin are the upper andlower limits of the solution space respectively

(2) Competition for the Leader Wolf is step is to find theleader wolf with best fitness value Firstly Q wolves shouldbe chosen as the candidates which are the top Q of wolfpack according to their fitness Secondly each one of the Qwolves searches around itself in D directions so a newlocation can be got by equation (3) if its fitness is betterthan the current fitness the current wolf i should move tothe new location otherwise do not move then searchingwill be continued until the searching time is greater thanthe maximum number Tmax or the searched locationcannot be improved any more Finally by comparing thefitness ofQwolves the best wolf can be elected and it is theleader wolf

ximinusnew xid + rand(minus1 1) times step a (3)

where step_a is the search step size

(3) Summon and Raid Each of the other wolves will raidtoward the location of leader wolf as soon as it receives thesummon from leader wolf and it continues to search forpreys during the running road Finally a new location can begot by equation (4) if its fitness is better than the one ofcurrent location the wolf will move to the new one oth-erwise stay at the current location

xidminusnew xid + rand(minus1 1) times step b times xld minus xid( 1113857 (4)

where xld is the location of the leader wolf in d-th dimensionand step_b is the run step size

(4) Siege the Prey [30] After the above process wolves cometo the nearby of the leader wolf and will be ready to catch theprey until the prey is gote new location for anyone exceptleader wolf can be obtained by the following equation

xidminusnew xid rm ltR0

xld + rand(minus1 1) times step c rm gtR0 rm R01113896

(5)

where step_c is the siege step size rm is a number generatedrandomly by the function rand (minus1 1) distributed uniformlyin the interval [minus1 1] and R0 is a preset siege threshold

In the optimization problem with the current solutiongetting closer and closer to the theoretical optimal value thesiege step size of wolf pack also decreases with the increase initeration times so that wolves have greater probability to findbetter values e following equation is about the siege stepsize which is obtained from [31]

step c step cmin times xdminusmax minus xdminusmin( 1113857

times expln step cminstep cmax( 1113857lowast t

T1113888 1113889

(6)

where step_cmin is the lower limit of the siege step sizexdndashmax is the upper limit of search space in d-th dimensionxdndashmin is the lower limit of search space in d-th dimensionstep_cmax is the upper limit of the siege step size and tmeansthe current number of iterations while T represents theupper limit one

Being out of boundary is not allowed to each wolf soequation (7) is used to deal with the possible transboundary

xidminusnew xdminusmin xidminusnew ltxdminusmin

xdminusmax xidminusnew ltxdminusmax1113896 (7)

(5) Distribution of Food and Regeneration According to thewolf group renewal mechanism of ldquosurvival of the strongrdquogroup renewal is carried out including the worst m wolveswill die and be deleted and new m wolves will be generatedby equation (2)

212 CWOA CWOA develops from the LWPS by intro-ducing the strategy of chaos optimization and adaptiveparameters into the traditional wolf pack algorithm eformer utilizes logistic map to generate chaotic variables thatare projected to solution space in order to search while thelatter introduces adaptive step size to enhance the perfor-mance and they work welle equation of logistic map is asfollows which is from [32]

chaosk+1 μ times chaosk times 1 minus chasok( 1113857chaosk isin [0 1] (8)

where μ is the control variable and when μ is 4 the system isin chaos state

e strategy of adaptive variable step size search includesthe improvement of search step size and siege step-size Inthe early stage wolves should search for preys with a largestep size so as to cover the whole solution space as much aspossible while in the later stage with the wolves gatheringcontinuously wolves should take a small step size to searchfor preys so that they can search finely in a small target areaAs a result the possibility of finding the global optimalsolution will be increased And by equation (9) we can get

Computational Intelligence and Neuroscience 3

the step size of migration while by equation (10) the one ofsiege can be obtained

step anew α times step a0 α 1 minust minus 12

1113874 11138752

(9)

step cnew step c0 times rand(0 1) times 1 minust minus 12

1113874 11138752

1113890 1113891 (10)

where step_c0 is the starting step size for siege

22 Improvement

221 Strategy of Adaptive Shrinking Grid Search (ASGS)In the solution space with D dimensions a searching wolfneeds to migrate along different directions in order to findpreys during any iteration of themigration it is according tothe thought of the original LWPS and CWOA that there isonly a dynamic point around the current location to begenerated according to equation (3)

However a single dynamic point is isolated and not wellenough to search in the current domain space of some a wolfFigures 1(a) and 1(b) show the two-dimensional and thethree-dimensional spaces respectively In essence the al-gorithm needs to take the whole local domain space of thecurrent wolf to be considered in order to find the local bestlocation so ASGS was used to generate an adaptive gridcentered on the current wolf which is extending along 2timesDdirections and includes (2timesK+ 1)D nodes where K meansthe number of nodes taken along any direction Figures 2(a)and 2(b) show the migration about ASGS in two-dimen-sional and three-dimensional space respectively and forbrevity here K is set to 2 detailed in the following equation

k xidminusnew1113858 1113859 xid + step anew times k

(k minusK minusK + 1 0 K minus 1 K)

(11)

So a node in the grid can be defined as

ximinusnew k xi1minusnew1113858 1113859 k xi2minusnew1113858 1113859 k xi Dminusnew1113858 11138591113864 1113865

(k minusK minusK + 1 0 K minus 1 K)

(12)

During any migration the node with best fitness in thegrid should be selected as new location of the searching wolfand after any migration the leader wolf of the population isupdated according to the new fitness It needs to be par-ticularly pointed out that the searching grid will be smallerand smaller as the step_anew becomes smaller Obviouslycompared to a single isolated point in traditional methodsthe searching accuracy and the possibility of finding theoptimal value can be improved by ASGS including(2timesK+ 1)D nodes

As the same reason during the process of the siege thesame strategy is used to find the leader wolf in the localdomain space including (2timesK+ 1)D points But the differentis that the step size of siege is smaller than the one of mi-gration After any migration the leader wolf of the pop-ulation is updated according to new current fitness andthen the current best wolf will be the leader wolftemporarily

222 Strategy of Opposite-Middle Raid (OMR)According to the idea of the traditional wolf pack algo-rithms it is unfortunately that raid pace is too small orunreasonable and wolves cannot rapidly appear around theleader wolf when they receive the summon signal from theleader wolf as shown in Figure 3(a) So OMR is put forwardto solve the problem above and its main clue is that theopposite location of the current location relative to theleader wolf should be calculated by the following equation

xidminusopposition 2 times xld minus xid (13)

If the opposite location has better fitness than the currentone the current wolf should move to the opposite oneOtherwise the following is obtained

ximinusm d xi d + xl d

2

xi bestfitness xi 1 xi 2 xi (Dminus1) xi D1113960 1113961 xi 1 xi 2 xi (Dminus1) xm D1113960 1113961 ximinusm 1 ximinusm 2 ximinusm (Dminus1) ximinusm D1113960 11139611113872 1113873

⎧⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎩

(14)

where ximinusm_d is the middle location in d-th dimension be-tween the i-th wolf and the leader wolf xi_d is the location ofthe wolf i in d-th dimension and xl_d is the location of theleader wolf in d-th dimension ldquoBestfitnessrdquo returns a wolfwith the best fitness from the given ones

From equation (14) and Figure 3(b) it is seen that thereare 2D points among the current point and the middle pointand as the result of each raid the point with best fitness ischosen as the new location of the current i-th wolf ereby

not only the wolves can appear around the leader wolf assoon as possible but also they can try to find new preys as faras possible

223 Standard Deviation Updating Amount (SDUA)According to the basic idea of the leader wolf pack algo-rithm during the iterations some wolves with poorer fitnesswill be continuously eliminated while the same amount of

4 Computational Intelligence and Neuroscience

ndash10ndash8ndash6ndash4ndash2

02468

10

D ndash

2

6ndash6 2ndash2 10ndash10 4ndash4 8ndash8 0D ndash 1

(a)

D ndash 1D ndash 2

10

10

5

5

00

ndash5 ndash5ndash10

ndash10

ndash10

ndash5

0

5

10

D ndash

3

(b)

Figure 1 (a)e range of locations of searching or competing wolves in CWOAwhen dimension is 2 (b) the range of locations of searchingor competing wolves in CWOA when dimension is 3 where step_a 10 or step_c 10 (the red point means the current location of some awolf)

20

15

10

5

0

ndash5

ndash10

ndash15

ndash20

D ndash

2

ndash20 ndash15 ndash10 ndash5 0 5 10 15 20D ndash 1

(a)

D ndash

3

D ndash 1D ndash 2

20

20

20

10

10

10

0

00

ndash10

ndash10ndash10ndash20ndash20

ndash20

(b)

Figure 2 e searching situation of wolves in ASGS-CWOA when dimension is (a) 2 and (b) 3 where step_a 10 or step_c 10 and K 2(the red point means the current location of some a wolf and the black ones mean the searching locations of the wolf)

10

10

86420 0ndash2

ndash5

5

D ndash 1D ndash 2

ndash202468

10

D ndash

3

(a)

10

10

8

8

6

64 42

20

0 D ndash 2D ndash 1

0

2

4

6

8

10

D ndash

3

(b)

Figure 3 (a) e range of locations of wolves in raid when dimension is 3 according to the original thought of wolf pack algorithm (b) therange of locations of wolves in raid when dimension is 3 according to OMR where step_a 2 e red point means the current position ofsome wolves the pink one indicates the position of the lead wolf and the blue one means the middle point of current wolf and lead wolf

Computational Intelligence and Neuroscience 5

wolves will be added to the population to make sure that thebad gene can be eliminated and the population diversity ofthe wolf pack can be ensured so it is not easy to fall into thelocal optimal solution and the convergence rate can beimproved However the amount of wolves that should beeliminated and added to the wolf pack is a fixed numberwhich is 5 in LWPS and CWOA mentioned before and thatis stiff and unreasonable In fact the amount should be adynamic number to reflect the dynamic situation of the wolfpack during any iteration

Standard deviation (SD) is a statistical concept whichhas been widely used in many fields Such as in [33]standard deviation was used into industrial equipment tohelp process the signal of bubble detection in liquid sodiumin [34] standard deviation is used with delay multiply toform a new weighting factor which is introduced to enhancethe contrast of the reconstructed images in medical fields in[35] based on eight-year-old dental trauma research datastandard deviation was utilized to help analyze the potentialof laser Doppler flowmetry the beam forming performancehas a large impact on image quality in ultrasound imagingto improve image resolution and contrast in [36] a newadaptive weighting factor for ultrasound imaging calledsignal mean-to-standard-deviation factor (SMSF) was pro-posed based on which researchers put forward an adaptivebeam forming method for ultrasound imaging based on themean-to-standard-deviation factor in [37] standard devi-ation was adopted to help analyze when an individual shouldstart social security

In this paper we take a concept named ldquoStandard De-viation Updating Amountrdquo to eliminate wolves with poorfitness and dynamically reflect the situation of the wolf packwhich means the population amount of wolf pack andstandard deviation about their fitness determine how manywolves will be eliminated and regenerated Standard devi-ation can be obtained as follows

σ

1N

1113944

N

i1xi minus μ( 1113857

2

11139741113972

(15)

whereNmeans the number of wolf pack ximeans the fitnessof the i-th wolf and μ is the mean value of the fitness enSDUA is gained by the following formula

SDUA

SDUA + 1 fitness(xi)ltμ minus σ2

1113874 1113875

do nothing fitness(xi)geμ minus σ2

1113874 1113875

⎧⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎩

(16)

SDUA is zero when the iteration begins next differencebetween the mean value and the SD about the fitness of thewolf pack should be computed and if the fitness of currentwolf is less than the difference SDUA increases by 1 oth-erwise nothing must be done So the value of SDUA is gotand SDUA wolves should be eliminated and regenerated

e effect of using a fixed value is displayed inFigure 4(a) and Figure 4(b) shows the effect of using ASD Itis clear that the amount in the latter is fluctuant as the wolf

pack has a different fitness during any iteration and betterthan the former to reflect the dynamic situations of itera-tions Accordingly not only will the bad gene from the wolfpack be eliminated but the population diversity of the wolfpack can be also kept better so that it is difficulty to fall intolocal optimal solution and the convergence rate can beimproved as far as possible

23 Steps of ASGS-CWOA Based on the thoughts ofadaptive grid search and adaptive standard deviationupdating amount mentioned above ASGS-CWOA is pro-posed the implementation steps are following and its flowchart is shown in Figure 5

231 Initialization of Population e following parametersshould be initially assigned amount of wolf population is Nand dimension of searching space is D for brevity theranges of all dimensions are [rangemin rangemax] the upperlimit of iterations is T value of step size in migration isstep_a value of step size in summon and raid is step_b valueof step size in siegement is step_c and the population can begenerated initially by the following equation [25]

xid range min + chaos(0 1) times(range max minus range min)

(17)

where chaos (0 1) returns a chaotic variable distributed in[0 1] uniformly

232 Migration Here an adaptive grid centered on thecurrent wolf can be generated by equation (12) which in-cludes 2 nodes After migration the wolf with best fitnesscan be found as the leader wolf

233 Summon and Raid After migration the others beginto run toward the location of leader wolf and during theprocess of raiding any wolf keeps searching for preys fol-lowing equations (13) and (14)

234 Siege After summon and raid all other wolves cometo be around the leader wolf in order to siege the preyfollowing equations

k xidminusnew1113858 1113859 xid + step c times k

(k minusK minusK + 1 0 K minus 1 K)

ximinusnew k xi1minusnew1113858 1113859 k xi2minusnew1113858 1113859 k xi Dminusnew1113858 11138591113864 1113865

(18)

After any siegement the newest leader wolf can be gotwhich has best fitness temporarily

235 Regeneration After siege some wolves with poorerfitness will be eliminated while the same amount of wolveswill be regenerated according to equation (14)

6 Computational Intelligence and Neuroscience

236 Loop Here if t (the number of current iteration) isbigger than T (the upper limit of iterations) exit the loopOtherwise go to 42 for loop until t is bigger than T Andwhen the loop is over the leader wolf maybe is the globaloptima that the algorithm can find

24 Numerical Experiments In this paper four state-of-the-art optimization algorithms are taken to validate the per-formance of new algorithm proposed above detailed inTable 1

241 Experiments on Twelve Classical Benchmark FunctionsTable 2 shows benchmark functions for testing and Table 3shows the numerical experimental data of five algorithms on12 benchmark functions for testing mentioned above es-pecially the numerical experiments were done on a com-puter equipped with Ubuntu 16044 operating system Intel

(R) Core (TM) i7-5930K processor and 64G memory as wellas Matlab 2017a For genetic algorithm toolbox in Matlab2017a is utilized for GA experiments the PSO experimentswere implemented by a ldquoPSOtrdquo toolbox for Matlab LWPS isgot from the thought in [24] CWOA is carried out based onthe idea in [25] and the new algorithm ASGS-CWOA iscarried out in Matlab 2017a which is an integrated devel-opment environment with M programming language Toprove the good performance of the proposed algorithmoptimization calculations were run for 100 times on anybenchmark function for testing as well as any optimizationalgorithm mentioned above

Firstly focusing on Table 3 seen from the term ofbest value only new algorithm can find the theoreticalglobal optima of all benchmark functions for testing soASGS-CWOA has better performance in optimizationaccuracy

Furthermore for 100 times the worst and average valuesof ASGS-CWOA in the all benchmark functions for testingexcept the F2 named ldquoEasomrdquo reach the theoretical valuesrespectively and the new algorithm has better standard de-viation about best values detailed in Figure 6 from which itcan be seen that nearly all the standard deviations are bestexcept F2 and F6 especially the standard deviations are zeroon F1 F3 F4 F5 F7 F9 F10 F11 and F12 Figure 6(b) showsthat the value of standard deviation about ASGS-CWOA onF2 is not worst in five algorithms and it has better perfor-mance than GA Figure 6(f) indicates that the value ofstandard deviation about ASGS-CWOA on F6 reaches 10minus11weaker than the values on others Figure 6(h) demonstratesthat the value of standard deviation about ASGS-CWOA onF8 reaches 10minus30 which is not zero but best in five algorithmserefore ASGS-CWOA has better stability than others

In addition focusing on the number of mean iterationASGS-CWOAhas weaker iteration times on the test function 2ldquoEasomrdquo the test function 8 ldquoBridgerdquo and the test function 10ldquoBohachevsky1rdquo but it has better performances on the othertest functions especially the iteration times on test function 34 5 6 11 and 12 are 1 or about 1 So ASGS-CWOA has betteradvantage on performance of iteration number

5

Fixed amount 5

ndash5

15

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(a)

ASDUA

ndash5

5

15

25

35

45

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(b)

Figure 4 (a)e number that how many wolves will be starved to death and regenerated as the iteration goes on It is a fixed number 5 (b)e number that how many wolves will be starved to death and regenerated following the new method named SDUA as the iteration goeson

Step 1 initialize parameters of algorithm

Step 2 migration process with ASGS

Step 3 summon ndash raid process with OMR

Step 4 siege process with ASGS

Step 5 population regeneration with SDUA

End condition satisfied

Output or record head wolf position and fitness value

Yes

No

Step 6

Figure 5 Flow chart for the new proposed algorithm

Computational Intelligence and Neuroscience 7

Table 1 Arithmetic configuration

Order Name Configuration

1 Genetic algorithm [16] e crossover probability is set at 07 the mutation probability is set at 001 while the generation gap isset at 095

2 Particle swarmoptimization

Value of individual acceleration 2 value of weighted value of initial time 09 value of weighted value ofconvergence time 04 limit individual speed at 20 of the changing range

3 LWPS Migration step size stepa 15 raid step size stepb 09 siege threshold r0 02 upper limit of siege step sizestepcmax 106 lower limit of siege step size stepcmin 10minus2 updating number of wolves m 5

4 CWOAAmount of campaign wolves q 5 searching direction h 4 upper limit of searchHmax 15 migration stepsize stepa0 15 raid step size stepb 09 siege threshold r0 02 value of siege step size stepc0 16 updating

amount of wolves m 5

5 ASGS-CWOA Migration step-size stepa0 15 upper limit of siege step-size stepcmax 1e6 lower limit of siege step sizestep cmin 1eminus 40 upper limit number of iteration T 600 number of wolf population N 50

Table 2 Test functions [25]

Order Function Expression Dimension Range Optimum1 Matyas F1 026times (x2

1 +x22)minus 048times x1 times x2 2 [minus10 10] min f 0

2 Easom F2 minuscos (x1)times cos (x2)times exp[minus (x1minus π)2 minus (x2minus π)2] 2 [minus100 100] min f minus13 Sumsquares F3 1113936

Di1 ilowast x2

i 10 [minus15 15] min f 0

4 Sphere F4 1113936Di1 x2

i 30 [minus15 15] min f 05 Eggcrate F5 x2

1 + x22 + 25times (sin2 x1 + sin2 x2) 2 [minusπ π] min f 0

6 Six hump camelback F6 4times x1 minus 21x4

1 + (13)times x61 + x1 times x2 minus 4times x2

2 + 4times x42 2 [minus5 5] min

f minus103167 Bohachevsky3 F7 x2

1 + 2times x22 minus 03times cos (3πx1 + 4πx2) + 03 2 [minus100 100] min f 0

8 Bridge F8 (sinx21 + x2

2

1113969)

x21 + x2

2

1113969minus 07129 2 [minus15 15] max f 30054

9 Booth F9 (x1 + 2times x2 minus 7)2 + (2times x1 + x2 minus 5)2 2 [minus10 10] min f 010 Bohachevsky1 F10 x2

1 + 2x22 minus 03times cos (3πx1)minus 04times cos (4πx2) + 07 2 [minus100 100] min f 0

11 Ackley F11 minus20times exp (minus02times

(1D) 1113936Di1 x2

i

1113969

)minus exp((1D) 1113936

Di1 cos 2 πxi) + 20 + e

6 [minus15 15] min f 0

12 Quadric F12 1113936Di1 (1113936

ik1 xk)2 10 [minus15 15] min f 0

Table 3 Experimental results

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F1

1 364Eminus 12 283Eminus 08 352Eminus 10 283Eminus 09 488 040382 414Eminus 21 326Eminus 14 115Eminus 15 387Eminus 15 591 042823 479Eminus 26 495Eminus 21 258Eminus 22 579Eminus 22 585 240954 104Eminus 119 169Eminus 12 585Eminus 24 213Eminus 13 295 042665 0 0 0 0 2744 010806

F2

1 minus1 minus000008110 minus097 01714 153 003752 minus1 minus1 minus1 324Eminus 06 583 041963 minus1 minus1 minus1 0 507 191114 minus1 minus1 minus1 0 72 003215 minus1 0 minus002 00196 59262 27036

F3

1 850Eminus 07 161Eminus 05 277Eminus 06 335Eminus 06 593 143962 000017021 00114 00023 00022 595 047923 760Eminus 07 87485 14802 15621 600 483194 561Eminus 07 149Eminus 05 107Eminus 06 874Eminus 06 600 414535 0 0 0 0 1 102205

F4

1 0004 00331 00127 00055 592 319022 00351 01736 00902 00292 594 048683 35579 103152 80497 12048 600 147424 000001114 00098 00012 00048 600 084315 0 0 0 0 1 100834

8 Computational Intelligence and Neuroscience

Finally the mean time spent by ASGS-CWOA are smallerthan the othersrsquo on F1 F5 F6 and F11 as shown in Figures 7(a)7(e) 7(f) and 7(k) respectively On F7 F9 and F10 the fivealgorithms spent time roughly in the same order of magnitudeand ASGS-CWOA has better performance than GA PSO andLWPS on F7 and F9 as shown in Figures 7(g) and 7(i) Un-fortunately ASGS-CWOA spent most time on F2 F3 F4 F8and F12 yet it is a comfort that the times spent byASGS-CWOAare not too much to be unacceptable is phenomenon con-forms to a philosophical truth that nothing is perfect and flawlessin the world and ASGS-CWOA is without exception Ac-cordingly in general ASGS-CWOA has a better speed ofconvergence e detailed is shown in Figure 7

242 Supplementary Experiments In order to further verifythe performance of the new proposed algorithm supplementary

experiments are conducted on CEC-2014 (IEEE Congress onEvolutionary Computation 2014) testing functions [38] detailedin Table 4 It is different with the above 12 testing functions thatCEC2014 testing functions are conducted on a computerequipment with Win7-32 bit Matlab 2014a CPU (AMD A6-3400M) and RAM (40GB) due to the match between the givencec14_funcmexw32 andwindows 32 bit system not Linux 64 bit

From Table 5 and Figure 8(a) obviously it can be seenthat new proposed algorithm has better performance thanGA and LWPS in terms of ldquooptimal valuerdquo which meansASGS-CWOA has better performance in finding the globaloptima From Figures 8(b)ndash8(d) it can be seen that the newproposed algorithm has best performance in terms of ldquoworstvaluerdquo ldquoaverage valuerdquo and ldquostandard deviationrdquo in otherwords the new proposed algorithm has best stability androbustness From Figure 8(e) the proportion of ASGS-

Table 3 Continued

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F5

1 467Eminus 10 467Eminus 10 467Eminus 10 312Eminus 25 73 000852 920Eminus 22 666Eminus 15 134Eminus 16 678Eminus 16 594 042983 123Eminus 23 176Eminus 19 156Eminus 20 260Eminus 20 585 241124 375Eminus 136 131Eminus 19 428Eminus 22 224Eminus 11 195 021695 0 0 0 0 101 106Eminus 03

F6

1 minus10316 minus10316 minus10316 179Eminus 15 68 000772 minus10316 minus10316 minus10316 147Eminus 15 578 045973 minus10316 minus10316 minus10316 152Eminus 15 519 247274 minus10316 minus10316 minus10316 125Eminus 15 157 018235 minus10316 minus10316 minus10316 560Eminus 11 332 0021855

F7

1 407Eminus 08 164Eminus 06 317Eminus 07 501Eminus 07 437 03272 278Eminus 16 260Eminus 10 885Eminus 12 310Eminus 11 590 045513 0 167Eminus 16 777Eminus 18 250Eminus 17 556 177454 0 169Eminus 11 636Eminus 15 236Eminus 11 139 007935 0 0 0 0 2406 01064

F8

1 30054 27052 29787 00629 69 000792 30054 30054 30054 453Eminus 16 550 039463 30054 30038 30053 000016125 399 129644 30054 30054 30054 941Eminus 11 67 003075 30054 30054 30054 966Eminus 30 600 30412

F9

1 455Eminus 11 368Eminus 09 891Eminus 11 367Eminus 10 271 012762 247Eminus 20 191Eminus 11 197Eminus 13 191Eminus 12 591 045253 605Eminus 24 312Eminus 19 270Eminus 20 548Eminus 20 580 196064 0 315Eminus 11 146Eminus 22 511Eminus 12 132 007385 0 0 0 0 2686 011364

F10

1 436Eminus 07 04699 00047 0047 109 001972 0 365Eminus 12 112Eminus 13 403Eminus 13 588 045643 0 0 0 0 539 167364 0 0 0 0 78 00255 0 0 0 0 18488 086886

F11

1 15851 15851 15851 378Eminus 07 434 057882 15934 1594 15935 00000826 595 049983 155Eminus 06 21015 07618 05833 594 17894 921Eminus 07 000018069 520Eminus 05 41549ndash5 535 108225 0 0 0 0 1 019021

F12

1 20399 times Eminus 4 00277 00056 00059 597 15772 00151 02574 00863 00552 592 056083 0139 16442 09017 03516 600 211694 428Eminus 08 00214 000038043 00013 600 197285 0 0 0 0 1 74354

Computational Intelligence and Neuroscience 9

CWOA is better than PSO LWPS and CWOA and it meansthe new proposed ASGS-CWOA has advantages in timeperformance

In a word ASGS-CWOA has good optimization qualitystability advantage on performance of iteration numberand speed of convergence

3 Results and Discussion

eoretical research and experimental results reveal thatcompared with traditional genetic algorithm particle swarmoptimization leading wolf pack algorithm and chaotic wolf

optimization algorithm ASGS-CWOA has better globaloptimization accuracy fast convergence speed and highrobustness under the same conditions

In fact the strategy of ASGS greatly strengthens the localexploitation power of the original algorithm which makes iteasier for the algorithm to find the global optimum the strategyof ORM and SDUA effectively enhances the global explorationpower whichmakes the algorithm not easy to fall into the localoptimum and thus easier to find the global optimum

Focusing on Tables 3 and 5 and Figures 6 and 7 abovecompared with the four state-of-the-art algorithms ASGS-CWOA is effective and efficient in most of terms on

1 2 3 4 5

Stan

dard

dev

iatio

n

Algorithm order

Matyas

0E + 00

(a)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Easom

0E + 00

2E ndash 02

4E ndash 02

(b)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sumsquares

0E + 00

5E ndash 06

1E ndash 05

(c)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sphere

0E + 00

3E ndash 02

6E ndash 02

(d)

Stan

dard

dev

iatio

n

Algorithm order51 2 3 4

Execrate

0E + 00

(e)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

(f)

Algorithm order1 2 3 4 5

Bohachevsky3

Stan

dard

dev

iatio

n

0E + 00

(g)

Algorithm order1 2 3 4 5

Bridge

Stan

dard

dev

iatio

n

0E + 00

(h)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Booth

0E + 00

(i)

Algorithm order1 2 3 4 5

Bohachevsky1

Stan

dard

dev

iatio

n

0E + 00

4E ndash 13

8E ndash 13

(j)

Algorithm order1 2 3 4 5

Ackley

Stan

dard

dev

iatio

n

0E + 00

4E ndash 07

8E ndash 07

(k)

Algorithm order1 2 3 54

Quadric

Stan

dard

dev

iatio

n

0E + 00

4E ndash 03

8E ndash 03

(l)

Figure 6 Histograms of the SD about the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g) ndashF7 (h)ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12 (red means the value is out of upper limit of the chart)

10 Computational Intelligence and Neuroscience

Mea

n tim

e

Matyas

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(a)

Mea

n tim

e

Easom

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(b)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sumsquares

00E + 00

60E + 00

12E + 01

(c)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sphere

00E + 00

50E + 00

10E + 01

(d)

Mea

n tim

e

Algorithm order1 2 3 4 5

Execrate

0E + 00

1E + 00

2E + 00

3E + 00

(e)

Mea

n tim

e

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

1E + 00

2E + 00

3E + 00

(f )

Mea

n tim

e

Algorithm order1 2 3 4 5

Bohachevsky3

0E + 00

1E + 00

2E + 00

(g)

Mea

n tim

e

Algorithm order1 2 3 4 5

Bridge

0E + 00

1E + 00

2E + 00

3E + 00

(h)

Mea

n tim

e

Algorithm order1 2 3 4 5

Booth

0E + 00

1E + 00

2E + 00

(i)

Mea

n tim

e

1 2 3 4 5Algorithm order

Bohachevsky1

0E + 00

1E + 00

2E + 00

(j)

Mea

n tim

e

1 2 3 4 5Algorithm order

Ackley

0E + 00

1E + 00

2E + 00

3E + 00

(k)

Mea

n tim

e

1 2 3 4 5Algorithm order

Quadric

0E + 00

4E + 00

8E + 00

(l)

Figure 7 Histograms of the mean time spent on the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g)ndashF7 (h) ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12

Table 4 Part of the CECrsquo14 test functions [38]

No Functions Dimension Range Filowast Fi (xlowast)

1 Rotated High Conditioned Elliptic Function 2 [minus100 100] 1002 Rotated Bent Cigar Function 2 [minus100 100] 2003 Rotated Discuss Function 2 [minus100 100] 3004 Shifted and Rotated Rosenbrockrsquos Function 2 [minus100 100] 4005 Shifted and Rotated Ackleyrsquos Function 2 [minus100 100] 5006 Shifted and Rotated Weier Stress Function 2 [minus100 100] 6007 Shifted and Rotated Griewankrsquos Function 2 [minus100 100] 7008 Shifted Rastriginrsquos Function 2 [minus100 100] 8009 Shifted and Rotated Rastriginrsquos Function 2 [minus100 100] 900

Computational Intelligence and Neuroscience 11

Table 4 Continued

No Functions Dimension Range Filowast Fi (xlowast)

10 Shifted Schwefelrsquos Function 2 [minus100 100] 100011 Shifted and Rotated Schwefelrsquos Function 2 [minus100 100] 110012 Shifted and Rotated Katsuura Function 2 [minus100 100] 120013 Shifted and Rotated Happycat Function [6] 2 [minus100 100] 130014 Shifted and Rotated HGBat Function [6] 2 [minus100 100] 140015 Shifted and Rotated Expanded Griewankrsquos Plus Rosenbrockrsquos Function 2 [minus100 100] 150016 Shifted and Rotated Expanded Schafferrsquos F6 Function 2 [minus100 100] 1600

Table 5 Results of supplementary experiments

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

1 CECrsquo14-F1

GA 1049799 3072111 7855763 81199210 5760667 59246PSO 100 7167217 9669491 23642939 600 018536LWPS 104035 2219115 6507181 2761017 600 87827CWOA 1004238 8063862 30346 489991796 600 90239

ASGS-CWOA 100 100 100 0 1706 075354

2 CECrsquo14-F2

GA 2016655 5624136 15117 22028598 5359 55281PSO 2000028 5541629 18773888 25595547 600 01806LWPS 208964 2723770 7699574 47510154 600 92476CWOA 2000115 1815693 4895193 16911157 600 86269

ASGS-CWOA 200 200 200 0 1578 070151

3 CECrsquo14-F3

GA 3161717 170775349 4508479 23875975 5333 57871PSO 3000007 2391641 4469141 10156353 600 018159LWPS 3003407 5796796 88949 11049497 600 92331CWOA 3001111 2196991 7345405 23224581 600 85499

ASGS-CWOA 300 300 300 0 1762 072644

4 CECrsquo14-F4

GA 400 4000911 4000109 00005323 867333 0924PSO 400 400 400 323Eminus 29 600 017534LWPS 400 400 400 590Eminus 17 600 87697CWOA 400 400 400 0 3087 39497

ASGS-CWOA 400 400 400 0 1296 053133

5 CECrsquo14-F5

GA 500 520 5073432 505077 714333 076291PSO 500 5152678 5009217 67366 600 018704LWPS 500 5000006 5000001 149Eminus 08 600 88697CWOA 500 500 500 134Eminus 22 600 85232

ASGS-CWOA 5017851 520 51418 285606 600 298243

6 CECrsquo14-F6

GA 6000001 6009784 600302 010694 679 086812PSO 600 600 600 286Eminus 11 600 13431LWPS 6000001 6004594 6000245 00087421 600 195858CWOA 600 600 600 232Eminus 25 5995667 191092

ASGS-CWOA 600 600 600 0 4516 175578

7 CECrsquo14-F7

GA 700 7014272 7003767 019181 614333 067019PSO 700 7006977 7000213 00089771 600 018424LWPS 700 700323 7000795 00063773 600 85961CWOA 700 7000271 7000053 469Eminus 05 5145667 75231

ASGS-CWOA 700 700 700 0 2382 11578

8 CECrsquo14-F8

GA 800 8019899 8010945 048507 576333 062438PSO 800 8014806 8002342 013654 59426 017554LWPS 800 800995 8001327 011439 600 8486CWOA 800 8019899 8006965 053787 4973667 72666

ASGS-CWOA 800 800995 8002786 019957 46934 218131

9 CECrsquo14-F9

GA 900 9039798 9013929 11615 582667 063184PSO 900 900995 900186 010073 59131 017613LWPS 900 900995 9000995 0089095 600 85478CWOA 900 9019899 9006965 053787 5035333 69673

ASGS-CWOA 900 9039798 9005273 050398 49631 230867

12 Computational Intelligence and Neuroscience

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 4: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

the step size of migration while by equation (10) the one ofsiege can be obtained

step anew α times step a0 α 1 minust minus 12

1113874 11138752

(9)

step cnew step c0 times rand(0 1) times 1 minust minus 12

1113874 11138752

1113890 1113891 (10)

where step_c0 is the starting step size for siege

22 Improvement

221 Strategy of Adaptive Shrinking Grid Search (ASGS)In the solution space with D dimensions a searching wolfneeds to migrate along different directions in order to findpreys during any iteration of themigration it is according tothe thought of the original LWPS and CWOA that there isonly a dynamic point around the current location to begenerated according to equation (3)

However a single dynamic point is isolated and not wellenough to search in the current domain space of some a wolfFigures 1(a) and 1(b) show the two-dimensional and thethree-dimensional spaces respectively In essence the al-gorithm needs to take the whole local domain space of thecurrent wolf to be considered in order to find the local bestlocation so ASGS was used to generate an adaptive gridcentered on the current wolf which is extending along 2timesDdirections and includes (2timesK+ 1)D nodes where K meansthe number of nodes taken along any direction Figures 2(a)and 2(b) show the migration about ASGS in two-dimen-sional and three-dimensional space respectively and forbrevity here K is set to 2 detailed in the following equation

k xidminusnew1113858 1113859 xid + step anew times k

(k minusK minusK + 1 0 K minus 1 K)

(11)

So a node in the grid can be defined as

ximinusnew k xi1minusnew1113858 1113859 k xi2minusnew1113858 1113859 k xi Dminusnew1113858 11138591113864 1113865

(k minusK minusK + 1 0 K minus 1 K)

(12)

During any migration the node with best fitness in thegrid should be selected as new location of the searching wolfand after any migration the leader wolf of the population isupdated according to the new fitness It needs to be par-ticularly pointed out that the searching grid will be smallerand smaller as the step_anew becomes smaller Obviouslycompared to a single isolated point in traditional methodsthe searching accuracy and the possibility of finding theoptimal value can be improved by ASGS including(2timesK+ 1)D nodes

As the same reason during the process of the siege thesame strategy is used to find the leader wolf in the localdomain space including (2timesK+ 1)D points But the differentis that the step size of siege is smaller than the one of mi-gration After any migration the leader wolf of the pop-ulation is updated according to new current fitness andthen the current best wolf will be the leader wolftemporarily

222 Strategy of Opposite-Middle Raid (OMR)According to the idea of the traditional wolf pack algo-rithms it is unfortunately that raid pace is too small orunreasonable and wolves cannot rapidly appear around theleader wolf when they receive the summon signal from theleader wolf as shown in Figure 3(a) So OMR is put forwardto solve the problem above and its main clue is that theopposite location of the current location relative to theleader wolf should be calculated by the following equation

xidminusopposition 2 times xld minus xid (13)

If the opposite location has better fitness than the currentone the current wolf should move to the opposite oneOtherwise the following is obtained

ximinusm d xi d + xl d

2

xi bestfitness xi 1 xi 2 xi (Dminus1) xi D1113960 1113961 xi 1 xi 2 xi (Dminus1) xm D1113960 1113961 ximinusm 1 ximinusm 2 ximinusm (Dminus1) ximinusm D1113960 11139611113872 1113873

⎧⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎩

(14)

where ximinusm_d is the middle location in d-th dimension be-tween the i-th wolf and the leader wolf xi_d is the location ofthe wolf i in d-th dimension and xl_d is the location of theleader wolf in d-th dimension ldquoBestfitnessrdquo returns a wolfwith the best fitness from the given ones

From equation (14) and Figure 3(b) it is seen that thereare 2D points among the current point and the middle pointand as the result of each raid the point with best fitness ischosen as the new location of the current i-th wolf ereby

not only the wolves can appear around the leader wolf assoon as possible but also they can try to find new preys as faras possible

223 Standard Deviation Updating Amount (SDUA)According to the basic idea of the leader wolf pack algo-rithm during the iterations some wolves with poorer fitnesswill be continuously eliminated while the same amount of

4 Computational Intelligence and Neuroscience

ndash10ndash8ndash6ndash4ndash2

02468

10

D ndash

2

6ndash6 2ndash2 10ndash10 4ndash4 8ndash8 0D ndash 1

(a)

D ndash 1D ndash 2

10

10

5

5

00

ndash5 ndash5ndash10

ndash10

ndash10

ndash5

0

5

10

D ndash

3

(b)

Figure 1 (a)e range of locations of searching or competing wolves in CWOAwhen dimension is 2 (b) the range of locations of searchingor competing wolves in CWOA when dimension is 3 where step_a 10 or step_c 10 (the red point means the current location of some awolf)

20

15

10

5

0

ndash5

ndash10

ndash15

ndash20

D ndash

2

ndash20 ndash15 ndash10 ndash5 0 5 10 15 20D ndash 1

(a)

D ndash

3

D ndash 1D ndash 2

20

20

20

10

10

10

0

00

ndash10

ndash10ndash10ndash20ndash20

ndash20

(b)

Figure 2 e searching situation of wolves in ASGS-CWOA when dimension is (a) 2 and (b) 3 where step_a 10 or step_c 10 and K 2(the red point means the current location of some a wolf and the black ones mean the searching locations of the wolf)

10

10

86420 0ndash2

ndash5

5

D ndash 1D ndash 2

ndash202468

10

D ndash

3

(a)

10

10

8

8

6

64 42

20

0 D ndash 2D ndash 1

0

2

4

6

8

10

D ndash

3

(b)

Figure 3 (a) e range of locations of wolves in raid when dimension is 3 according to the original thought of wolf pack algorithm (b) therange of locations of wolves in raid when dimension is 3 according to OMR where step_a 2 e red point means the current position ofsome wolves the pink one indicates the position of the lead wolf and the blue one means the middle point of current wolf and lead wolf

Computational Intelligence and Neuroscience 5

wolves will be added to the population to make sure that thebad gene can be eliminated and the population diversity ofthe wolf pack can be ensured so it is not easy to fall into thelocal optimal solution and the convergence rate can beimproved However the amount of wolves that should beeliminated and added to the wolf pack is a fixed numberwhich is 5 in LWPS and CWOA mentioned before and thatis stiff and unreasonable In fact the amount should be adynamic number to reflect the dynamic situation of the wolfpack during any iteration

Standard deviation (SD) is a statistical concept whichhas been widely used in many fields Such as in [33]standard deviation was used into industrial equipment tohelp process the signal of bubble detection in liquid sodiumin [34] standard deviation is used with delay multiply toform a new weighting factor which is introduced to enhancethe contrast of the reconstructed images in medical fields in[35] based on eight-year-old dental trauma research datastandard deviation was utilized to help analyze the potentialof laser Doppler flowmetry the beam forming performancehas a large impact on image quality in ultrasound imagingto improve image resolution and contrast in [36] a newadaptive weighting factor for ultrasound imaging calledsignal mean-to-standard-deviation factor (SMSF) was pro-posed based on which researchers put forward an adaptivebeam forming method for ultrasound imaging based on themean-to-standard-deviation factor in [37] standard devi-ation was adopted to help analyze when an individual shouldstart social security

In this paper we take a concept named ldquoStandard De-viation Updating Amountrdquo to eliminate wolves with poorfitness and dynamically reflect the situation of the wolf packwhich means the population amount of wolf pack andstandard deviation about their fitness determine how manywolves will be eliminated and regenerated Standard devi-ation can be obtained as follows

σ

1N

1113944

N

i1xi minus μ( 1113857

2

11139741113972

(15)

whereNmeans the number of wolf pack ximeans the fitnessof the i-th wolf and μ is the mean value of the fitness enSDUA is gained by the following formula

SDUA

SDUA + 1 fitness(xi)ltμ minus σ2

1113874 1113875

do nothing fitness(xi)geμ minus σ2

1113874 1113875

⎧⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎩

(16)

SDUA is zero when the iteration begins next differencebetween the mean value and the SD about the fitness of thewolf pack should be computed and if the fitness of currentwolf is less than the difference SDUA increases by 1 oth-erwise nothing must be done So the value of SDUA is gotand SDUA wolves should be eliminated and regenerated

e effect of using a fixed value is displayed inFigure 4(a) and Figure 4(b) shows the effect of using ASD Itis clear that the amount in the latter is fluctuant as the wolf

pack has a different fitness during any iteration and betterthan the former to reflect the dynamic situations of itera-tions Accordingly not only will the bad gene from the wolfpack be eliminated but the population diversity of the wolfpack can be also kept better so that it is difficulty to fall intolocal optimal solution and the convergence rate can beimproved as far as possible

23 Steps of ASGS-CWOA Based on the thoughts ofadaptive grid search and adaptive standard deviationupdating amount mentioned above ASGS-CWOA is pro-posed the implementation steps are following and its flowchart is shown in Figure 5

231 Initialization of Population e following parametersshould be initially assigned amount of wolf population is Nand dimension of searching space is D for brevity theranges of all dimensions are [rangemin rangemax] the upperlimit of iterations is T value of step size in migration isstep_a value of step size in summon and raid is step_b valueof step size in siegement is step_c and the population can begenerated initially by the following equation [25]

xid range min + chaos(0 1) times(range max minus range min)

(17)

where chaos (0 1) returns a chaotic variable distributed in[0 1] uniformly

232 Migration Here an adaptive grid centered on thecurrent wolf can be generated by equation (12) which in-cludes 2 nodes After migration the wolf with best fitnesscan be found as the leader wolf

233 Summon and Raid After migration the others beginto run toward the location of leader wolf and during theprocess of raiding any wolf keeps searching for preys fol-lowing equations (13) and (14)

234 Siege After summon and raid all other wolves cometo be around the leader wolf in order to siege the preyfollowing equations

k xidminusnew1113858 1113859 xid + step c times k

(k minusK minusK + 1 0 K minus 1 K)

ximinusnew k xi1minusnew1113858 1113859 k xi2minusnew1113858 1113859 k xi Dminusnew1113858 11138591113864 1113865

(18)

After any siegement the newest leader wolf can be gotwhich has best fitness temporarily

235 Regeneration After siege some wolves with poorerfitness will be eliminated while the same amount of wolveswill be regenerated according to equation (14)

6 Computational Intelligence and Neuroscience

236 Loop Here if t (the number of current iteration) isbigger than T (the upper limit of iterations) exit the loopOtherwise go to 42 for loop until t is bigger than T Andwhen the loop is over the leader wolf maybe is the globaloptima that the algorithm can find

24 Numerical Experiments In this paper four state-of-the-art optimization algorithms are taken to validate the per-formance of new algorithm proposed above detailed inTable 1

241 Experiments on Twelve Classical Benchmark FunctionsTable 2 shows benchmark functions for testing and Table 3shows the numerical experimental data of five algorithms on12 benchmark functions for testing mentioned above es-pecially the numerical experiments were done on a com-puter equipped with Ubuntu 16044 operating system Intel

(R) Core (TM) i7-5930K processor and 64G memory as wellas Matlab 2017a For genetic algorithm toolbox in Matlab2017a is utilized for GA experiments the PSO experimentswere implemented by a ldquoPSOtrdquo toolbox for Matlab LWPS isgot from the thought in [24] CWOA is carried out based onthe idea in [25] and the new algorithm ASGS-CWOA iscarried out in Matlab 2017a which is an integrated devel-opment environment with M programming language Toprove the good performance of the proposed algorithmoptimization calculations were run for 100 times on anybenchmark function for testing as well as any optimizationalgorithm mentioned above

Firstly focusing on Table 3 seen from the term ofbest value only new algorithm can find the theoreticalglobal optima of all benchmark functions for testing soASGS-CWOA has better performance in optimizationaccuracy

Furthermore for 100 times the worst and average valuesof ASGS-CWOA in the all benchmark functions for testingexcept the F2 named ldquoEasomrdquo reach the theoretical valuesrespectively and the new algorithm has better standard de-viation about best values detailed in Figure 6 from which itcan be seen that nearly all the standard deviations are bestexcept F2 and F6 especially the standard deviations are zeroon F1 F3 F4 F5 F7 F9 F10 F11 and F12 Figure 6(b) showsthat the value of standard deviation about ASGS-CWOA onF2 is not worst in five algorithms and it has better perfor-mance than GA Figure 6(f) indicates that the value ofstandard deviation about ASGS-CWOA on F6 reaches 10minus11weaker than the values on others Figure 6(h) demonstratesthat the value of standard deviation about ASGS-CWOA onF8 reaches 10minus30 which is not zero but best in five algorithmserefore ASGS-CWOA has better stability than others

In addition focusing on the number of mean iterationASGS-CWOAhas weaker iteration times on the test function 2ldquoEasomrdquo the test function 8 ldquoBridgerdquo and the test function 10ldquoBohachevsky1rdquo but it has better performances on the othertest functions especially the iteration times on test function 34 5 6 11 and 12 are 1 or about 1 So ASGS-CWOA has betteradvantage on performance of iteration number

5

Fixed amount 5

ndash5

15

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(a)

ASDUA

ndash5

5

15

25

35

45

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(b)

Figure 4 (a)e number that how many wolves will be starved to death and regenerated as the iteration goes on It is a fixed number 5 (b)e number that how many wolves will be starved to death and regenerated following the new method named SDUA as the iteration goeson

Step 1 initialize parameters of algorithm

Step 2 migration process with ASGS

Step 3 summon ndash raid process with OMR

Step 4 siege process with ASGS

Step 5 population regeneration with SDUA

End condition satisfied

Output or record head wolf position and fitness value

Yes

No

Step 6

Figure 5 Flow chart for the new proposed algorithm

Computational Intelligence and Neuroscience 7

Table 1 Arithmetic configuration

Order Name Configuration

1 Genetic algorithm [16] e crossover probability is set at 07 the mutation probability is set at 001 while the generation gap isset at 095

2 Particle swarmoptimization

Value of individual acceleration 2 value of weighted value of initial time 09 value of weighted value ofconvergence time 04 limit individual speed at 20 of the changing range

3 LWPS Migration step size stepa 15 raid step size stepb 09 siege threshold r0 02 upper limit of siege step sizestepcmax 106 lower limit of siege step size stepcmin 10minus2 updating number of wolves m 5

4 CWOAAmount of campaign wolves q 5 searching direction h 4 upper limit of searchHmax 15 migration stepsize stepa0 15 raid step size stepb 09 siege threshold r0 02 value of siege step size stepc0 16 updating

amount of wolves m 5

5 ASGS-CWOA Migration step-size stepa0 15 upper limit of siege step-size stepcmax 1e6 lower limit of siege step sizestep cmin 1eminus 40 upper limit number of iteration T 600 number of wolf population N 50

Table 2 Test functions [25]

Order Function Expression Dimension Range Optimum1 Matyas F1 026times (x2

1 +x22)minus 048times x1 times x2 2 [minus10 10] min f 0

2 Easom F2 minuscos (x1)times cos (x2)times exp[minus (x1minus π)2 minus (x2minus π)2] 2 [minus100 100] min f minus13 Sumsquares F3 1113936

Di1 ilowast x2

i 10 [minus15 15] min f 0

4 Sphere F4 1113936Di1 x2

i 30 [minus15 15] min f 05 Eggcrate F5 x2

1 + x22 + 25times (sin2 x1 + sin2 x2) 2 [minusπ π] min f 0

6 Six hump camelback F6 4times x1 minus 21x4

1 + (13)times x61 + x1 times x2 minus 4times x2

2 + 4times x42 2 [minus5 5] min

f minus103167 Bohachevsky3 F7 x2

1 + 2times x22 minus 03times cos (3πx1 + 4πx2) + 03 2 [minus100 100] min f 0

8 Bridge F8 (sinx21 + x2

2

1113969)

x21 + x2

2

1113969minus 07129 2 [minus15 15] max f 30054

9 Booth F9 (x1 + 2times x2 minus 7)2 + (2times x1 + x2 minus 5)2 2 [minus10 10] min f 010 Bohachevsky1 F10 x2

1 + 2x22 minus 03times cos (3πx1)minus 04times cos (4πx2) + 07 2 [minus100 100] min f 0

11 Ackley F11 minus20times exp (minus02times

(1D) 1113936Di1 x2

i

1113969

)minus exp((1D) 1113936

Di1 cos 2 πxi) + 20 + e

6 [minus15 15] min f 0

12 Quadric F12 1113936Di1 (1113936

ik1 xk)2 10 [minus15 15] min f 0

Table 3 Experimental results

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F1

1 364Eminus 12 283Eminus 08 352Eminus 10 283Eminus 09 488 040382 414Eminus 21 326Eminus 14 115Eminus 15 387Eminus 15 591 042823 479Eminus 26 495Eminus 21 258Eminus 22 579Eminus 22 585 240954 104Eminus 119 169Eminus 12 585Eminus 24 213Eminus 13 295 042665 0 0 0 0 2744 010806

F2

1 minus1 minus000008110 minus097 01714 153 003752 minus1 minus1 minus1 324Eminus 06 583 041963 minus1 minus1 minus1 0 507 191114 minus1 minus1 minus1 0 72 003215 minus1 0 minus002 00196 59262 27036

F3

1 850Eminus 07 161Eminus 05 277Eminus 06 335Eminus 06 593 143962 000017021 00114 00023 00022 595 047923 760Eminus 07 87485 14802 15621 600 483194 561Eminus 07 149Eminus 05 107Eminus 06 874Eminus 06 600 414535 0 0 0 0 1 102205

F4

1 0004 00331 00127 00055 592 319022 00351 01736 00902 00292 594 048683 35579 103152 80497 12048 600 147424 000001114 00098 00012 00048 600 084315 0 0 0 0 1 100834

8 Computational Intelligence and Neuroscience

Finally the mean time spent by ASGS-CWOA are smallerthan the othersrsquo on F1 F5 F6 and F11 as shown in Figures 7(a)7(e) 7(f) and 7(k) respectively On F7 F9 and F10 the fivealgorithms spent time roughly in the same order of magnitudeand ASGS-CWOA has better performance than GA PSO andLWPS on F7 and F9 as shown in Figures 7(g) and 7(i) Un-fortunately ASGS-CWOA spent most time on F2 F3 F4 F8and F12 yet it is a comfort that the times spent byASGS-CWOAare not too much to be unacceptable is phenomenon con-forms to a philosophical truth that nothing is perfect and flawlessin the world and ASGS-CWOA is without exception Ac-cordingly in general ASGS-CWOA has a better speed ofconvergence e detailed is shown in Figure 7

242 Supplementary Experiments In order to further verifythe performance of the new proposed algorithm supplementary

experiments are conducted on CEC-2014 (IEEE Congress onEvolutionary Computation 2014) testing functions [38] detailedin Table 4 It is different with the above 12 testing functions thatCEC2014 testing functions are conducted on a computerequipment with Win7-32 bit Matlab 2014a CPU (AMD A6-3400M) and RAM (40GB) due to the match between the givencec14_funcmexw32 andwindows 32 bit system not Linux 64 bit

From Table 5 and Figure 8(a) obviously it can be seenthat new proposed algorithm has better performance thanGA and LWPS in terms of ldquooptimal valuerdquo which meansASGS-CWOA has better performance in finding the globaloptima From Figures 8(b)ndash8(d) it can be seen that the newproposed algorithm has best performance in terms of ldquoworstvaluerdquo ldquoaverage valuerdquo and ldquostandard deviationrdquo in otherwords the new proposed algorithm has best stability androbustness From Figure 8(e) the proportion of ASGS-

Table 3 Continued

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F5

1 467Eminus 10 467Eminus 10 467Eminus 10 312Eminus 25 73 000852 920Eminus 22 666Eminus 15 134Eminus 16 678Eminus 16 594 042983 123Eminus 23 176Eminus 19 156Eminus 20 260Eminus 20 585 241124 375Eminus 136 131Eminus 19 428Eminus 22 224Eminus 11 195 021695 0 0 0 0 101 106Eminus 03

F6

1 minus10316 minus10316 minus10316 179Eminus 15 68 000772 minus10316 minus10316 minus10316 147Eminus 15 578 045973 minus10316 minus10316 minus10316 152Eminus 15 519 247274 minus10316 minus10316 minus10316 125Eminus 15 157 018235 minus10316 minus10316 minus10316 560Eminus 11 332 0021855

F7

1 407Eminus 08 164Eminus 06 317Eminus 07 501Eminus 07 437 03272 278Eminus 16 260Eminus 10 885Eminus 12 310Eminus 11 590 045513 0 167Eminus 16 777Eminus 18 250Eminus 17 556 177454 0 169Eminus 11 636Eminus 15 236Eminus 11 139 007935 0 0 0 0 2406 01064

F8

1 30054 27052 29787 00629 69 000792 30054 30054 30054 453Eminus 16 550 039463 30054 30038 30053 000016125 399 129644 30054 30054 30054 941Eminus 11 67 003075 30054 30054 30054 966Eminus 30 600 30412

F9

1 455Eminus 11 368Eminus 09 891Eminus 11 367Eminus 10 271 012762 247Eminus 20 191Eminus 11 197Eminus 13 191Eminus 12 591 045253 605Eminus 24 312Eminus 19 270Eminus 20 548Eminus 20 580 196064 0 315Eminus 11 146Eminus 22 511Eminus 12 132 007385 0 0 0 0 2686 011364

F10

1 436Eminus 07 04699 00047 0047 109 001972 0 365Eminus 12 112Eminus 13 403Eminus 13 588 045643 0 0 0 0 539 167364 0 0 0 0 78 00255 0 0 0 0 18488 086886

F11

1 15851 15851 15851 378Eminus 07 434 057882 15934 1594 15935 00000826 595 049983 155Eminus 06 21015 07618 05833 594 17894 921Eminus 07 000018069 520Eminus 05 41549ndash5 535 108225 0 0 0 0 1 019021

F12

1 20399 times Eminus 4 00277 00056 00059 597 15772 00151 02574 00863 00552 592 056083 0139 16442 09017 03516 600 211694 428Eminus 08 00214 000038043 00013 600 197285 0 0 0 0 1 74354

Computational Intelligence and Neuroscience 9

CWOA is better than PSO LWPS and CWOA and it meansthe new proposed ASGS-CWOA has advantages in timeperformance

In a word ASGS-CWOA has good optimization qualitystability advantage on performance of iteration numberand speed of convergence

3 Results and Discussion

eoretical research and experimental results reveal thatcompared with traditional genetic algorithm particle swarmoptimization leading wolf pack algorithm and chaotic wolf

optimization algorithm ASGS-CWOA has better globaloptimization accuracy fast convergence speed and highrobustness under the same conditions

In fact the strategy of ASGS greatly strengthens the localexploitation power of the original algorithm which makes iteasier for the algorithm to find the global optimum the strategyof ORM and SDUA effectively enhances the global explorationpower whichmakes the algorithm not easy to fall into the localoptimum and thus easier to find the global optimum

Focusing on Tables 3 and 5 and Figures 6 and 7 abovecompared with the four state-of-the-art algorithms ASGS-CWOA is effective and efficient in most of terms on

1 2 3 4 5

Stan

dard

dev

iatio

n

Algorithm order

Matyas

0E + 00

(a)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Easom

0E + 00

2E ndash 02

4E ndash 02

(b)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sumsquares

0E + 00

5E ndash 06

1E ndash 05

(c)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sphere

0E + 00

3E ndash 02

6E ndash 02

(d)

Stan

dard

dev

iatio

n

Algorithm order51 2 3 4

Execrate

0E + 00

(e)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

(f)

Algorithm order1 2 3 4 5

Bohachevsky3

Stan

dard

dev

iatio

n

0E + 00

(g)

Algorithm order1 2 3 4 5

Bridge

Stan

dard

dev

iatio

n

0E + 00

(h)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Booth

0E + 00

(i)

Algorithm order1 2 3 4 5

Bohachevsky1

Stan

dard

dev

iatio

n

0E + 00

4E ndash 13

8E ndash 13

(j)

Algorithm order1 2 3 4 5

Ackley

Stan

dard

dev

iatio

n

0E + 00

4E ndash 07

8E ndash 07

(k)

Algorithm order1 2 3 54

Quadric

Stan

dard

dev

iatio

n

0E + 00

4E ndash 03

8E ndash 03

(l)

Figure 6 Histograms of the SD about the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g) ndashF7 (h)ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12 (red means the value is out of upper limit of the chart)

10 Computational Intelligence and Neuroscience

Mea

n tim

e

Matyas

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(a)

Mea

n tim

e

Easom

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(b)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sumsquares

00E + 00

60E + 00

12E + 01

(c)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sphere

00E + 00

50E + 00

10E + 01

(d)

Mea

n tim

e

Algorithm order1 2 3 4 5

Execrate

0E + 00

1E + 00

2E + 00

3E + 00

(e)

Mea

n tim

e

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

1E + 00

2E + 00

3E + 00

(f )

Mea

n tim

e

Algorithm order1 2 3 4 5

Bohachevsky3

0E + 00

1E + 00

2E + 00

(g)

Mea

n tim

e

Algorithm order1 2 3 4 5

Bridge

0E + 00

1E + 00

2E + 00

3E + 00

(h)

Mea

n tim

e

Algorithm order1 2 3 4 5

Booth

0E + 00

1E + 00

2E + 00

(i)

Mea

n tim

e

1 2 3 4 5Algorithm order

Bohachevsky1

0E + 00

1E + 00

2E + 00

(j)

Mea

n tim

e

1 2 3 4 5Algorithm order

Ackley

0E + 00

1E + 00

2E + 00

3E + 00

(k)

Mea

n tim

e

1 2 3 4 5Algorithm order

Quadric

0E + 00

4E + 00

8E + 00

(l)

Figure 7 Histograms of the mean time spent on the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g)ndashF7 (h) ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12

Table 4 Part of the CECrsquo14 test functions [38]

No Functions Dimension Range Filowast Fi (xlowast)

1 Rotated High Conditioned Elliptic Function 2 [minus100 100] 1002 Rotated Bent Cigar Function 2 [minus100 100] 2003 Rotated Discuss Function 2 [minus100 100] 3004 Shifted and Rotated Rosenbrockrsquos Function 2 [minus100 100] 4005 Shifted and Rotated Ackleyrsquos Function 2 [minus100 100] 5006 Shifted and Rotated Weier Stress Function 2 [minus100 100] 6007 Shifted and Rotated Griewankrsquos Function 2 [minus100 100] 7008 Shifted Rastriginrsquos Function 2 [minus100 100] 8009 Shifted and Rotated Rastriginrsquos Function 2 [minus100 100] 900

Computational Intelligence and Neuroscience 11

Table 4 Continued

No Functions Dimension Range Filowast Fi (xlowast)

10 Shifted Schwefelrsquos Function 2 [minus100 100] 100011 Shifted and Rotated Schwefelrsquos Function 2 [minus100 100] 110012 Shifted and Rotated Katsuura Function 2 [minus100 100] 120013 Shifted and Rotated Happycat Function [6] 2 [minus100 100] 130014 Shifted and Rotated HGBat Function [6] 2 [minus100 100] 140015 Shifted and Rotated Expanded Griewankrsquos Plus Rosenbrockrsquos Function 2 [minus100 100] 150016 Shifted and Rotated Expanded Schafferrsquos F6 Function 2 [minus100 100] 1600

Table 5 Results of supplementary experiments

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

1 CECrsquo14-F1

GA 1049799 3072111 7855763 81199210 5760667 59246PSO 100 7167217 9669491 23642939 600 018536LWPS 104035 2219115 6507181 2761017 600 87827CWOA 1004238 8063862 30346 489991796 600 90239

ASGS-CWOA 100 100 100 0 1706 075354

2 CECrsquo14-F2

GA 2016655 5624136 15117 22028598 5359 55281PSO 2000028 5541629 18773888 25595547 600 01806LWPS 208964 2723770 7699574 47510154 600 92476CWOA 2000115 1815693 4895193 16911157 600 86269

ASGS-CWOA 200 200 200 0 1578 070151

3 CECrsquo14-F3

GA 3161717 170775349 4508479 23875975 5333 57871PSO 3000007 2391641 4469141 10156353 600 018159LWPS 3003407 5796796 88949 11049497 600 92331CWOA 3001111 2196991 7345405 23224581 600 85499

ASGS-CWOA 300 300 300 0 1762 072644

4 CECrsquo14-F4

GA 400 4000911 4000109 00005323 867333 0924PSO 400 400 400 323Eminus 29 600 017534LWPS 400 400 400 590Eminus 17 600 87697CWOA 400 400 400 0 3087 39497

ASGS-CWOA 400 400 400 0 1296 053133

5 CECrsquo14-F5

GA 500 520 5073432 505077 714333 076291PSO 500 5152678 5009217 67366 600 018704LWPS 500 5000006 5000001 149Eminus 08 600 88697CWOA 500 500 500 134Eminus 22 600 85232

ASGS-CWOA 5017851 520 51418 285606 600 298243

6 CECrsquo14-F6

GA 6000001 6009784 600302 010694 679 086812PSO 600 600 600 286Eminus 11 600 13431LWPS 6000001 6004594 6000245 00087421 600 195858CWOA 600 600 600 232Eminus 25 5995667 191092

ASGS-CWOA 600 600 600 0 4516 175578

7 CECrsquo14-F7

GA 700 7014272 7003767 019181 614333 067019PSO 700 7006977 7000213 00089771 600 018424LWPS 700 700323 7000795 00063773 600 85961CWOA 700 7000271 7000053 469Eminus 05 5145667 75231

ASGS-CWOA 700 700 700 0 2382 11578

8 CECrsquo14-F8

GA 800 8019899 8010945 048507 576333 062438PSO 800 8014806 8002342 013654 59426 017554LWPS 800 800995 8001327 011439 600 8486CWOA 800 8019899 8006965 053787 4973667 72666

ASGS-CWOA 800 800995 8002786 019957 46934 218131

9 CECrsquo14-F9

GA 900 9039798 9013929 11615 582667 063184PSO 900 900995 900186 010073 59131 017613LWPS 900 900995 9000995 0089095 600 85478CWOA 900 9019899 9006965 053787 5035333 69673

ASGS-CWOA 900 9039798 9005273 050398 49631 230867

12 Computational Intelligence and Neuroscience

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 5: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

ndash10ndash8ndash6ndash4ndash2

02468

10

D ndash

2

6ndash6 2ndash2 10ndash10 4ndash4 8ndash8 0D ndash 1

(a)

D ndash 1D ndash 2

10

10

5

5

00

ndash5 ndash5ndash10

ndash10

ndash10

ndash5

0

5

10

D ndash

3

(b)

Figure 1 (a)e range of locations of searching or competing wolves in CWOAwhen dimension is 2 (b) the range of locations of searchingor competing wolves in CWOA when dimension is 3 where step_a 10 or step_c 10 (the red point means the current location of some awolf)

20

15

10

5

0

ndash5

ndash10

ndash15

ndash20

D ndash

2

ndash20 ndash15 ndash10 ndash5 0 5 10 15 20D ndash 1

(a)

D ndash

3

D ndash 1D ndash 2

20

20

20

10

10

10

0

00

ndash10

ndash10ndash10ndash20ndash20

ndash20

(b)

Figure 2 e searching situation of wolves in ASGS-CWOA when dimension is (a) 2 and (b) 3 where step_a 10 or step_c 10 and K 2(the red point means the current location of some a wolf and the black ones mean the searching locations of the wolf)

10

10

86420 0ndash2

ndash5

5

D ndash 1D ndash 2

ndash202468

10

D ndash

3

(a)

10

10

8

8

6

64 42

20

0 D ndash 2D ndash 1

0

2

4

6

8

10

D ndash

3

(b)

Figure 3 (a) e range of locations of wolves in raid when dimension is 3 according to the original thought of wolf pack algorithm (b) therange of locations of wolves in raid when dimension is 3 according to OMR where step_a 2 e red point means the current position ofsome wolves the pink one indicates the position of the lead wolf and the blue one means the middle point of current wolf and lead wolf

Computational Intelligence and Neuroscience 5

wolves will be added to the population to make sure that thebad gene can be eliminated and the population diversity ofthe wolf pack can be ensured so it is not easy to fall into thelocal optimal solution and the convergence rate can beimproved However the amount of wolves that should beeliminated and added to the wolf pack is a fixed numberwhich is 5 in LWPS and CWOA mentioned before and thatis stiff and unreasonable In fact the amount should be adynamic number to reflect the dynamic situation of the wolfpack during any iteration

Standard deviation (SD) is a statistical concept whichhas been widely used in many fields Such as in [33]standard deviation was used into industrial equipment tohelp process the signal of bubble detection in liquid sodiumin [34] standard deviation is used with delay multiply toform a new weighting factor which is introduced to enhancethe contrast of the reconstructed images in medical fields in[35] based on eight-year-old dental trauma research datastandard deviation was utilized to help analyze the potentialof laser Doppler flowmetry the beam forming performancehas a large impact on image quality in ultrasound imagingto improve image resolution and contrast in [36] a newadaptive weighting factor for ultrasound imaging calledsignal mean-to-standard-deviation factor (SMSF) was pro-posed based on which researchers put forward an adaptivebeam forming method for ultrasound imaging based on themean-to-standard-deviation factor in [37] standard devi-ation was adopted to help analyze when an individual shouldstart social security

In this paper we take a concept named ldquoStandard De-viation Updating Amountrdquo to eliminate wolves with poorfitness and dynamically reflect the situation of the wolf packwhich means the population amount of wolf pack andstandard deviation about their fitness determine how manywolves will be eliminated and regenerated Standard devi-ation can be obtained as follows

σ

1N

1113944

N

i1xi minus μ( 1113857

2

11139741113972

(15)

whereNmeans the number of wolf pack ximeans the fitnessof the i-th wolf and μ is the mean value of the fitness enSDUA is gained by the following formula

SDUA

SDUA + 1 fitness(xi)ltμ minus σ2

1113874 1113875

do nothing fitness(xi)geμ minus σ2

1113874 1113875

⎧⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎩

(16)

SDUA is zero when the iteration begins next differencebetween the mean value and the SD about the fitness of thewolf pack should be computed and if the fitness of currentwolf is less than the difference SDUA increases by 1 oth-erwise nothing must be done So the value of SDUA is gotand SDUA wolves should be eliminated and regenerated

e effect of using a fixed value is displayed inFigure 4(a) and Figure 4(b) shows the effect of using ASD Itis clear that the amount in the latter is fluctuant as the wolf

pack has a different fitness during any iteration and betterthan the former to reflect the dynamic situations of itera-tions Accordingly not only will the bad gene from the wolfpack be eliminated but the population diversity of the wolfpack can be also kept better so that it is difficulty to fall intolocal optimal solution and the convergence rate can beimproved as far as possible

23 Steps of ASGS-CWOA Based on the thoughts ofadaptive grid search and adaptive standard deviationupdating amount mentioned above ASGS-CWOA is pro-posed the implementation steps are following and its flowchart is shown in Figure 5

231 Initialization of Population e following parametersshould be initially assigned amount of wolf population is Nand dimension of searching space is D for brevity theranges of all dimensions are [rangemin rangemax] the upperlimit of iterations is T value of step size in migration isstep_a value of step size in summon and raid is step_b valueof step size in siegement is step_c and the population can begenerated initially by the following equation [25]

xid range min + chaos(0 1) times(range max minus range min)

(17)

where chaos (0 1) returns a chaotic variable distributed in[0 1] uniformly

232 Migration Here an adaptive grid centered on thecurrent wolf can be generated by equation (12) which in-cludes 2 nodes After migration the wolf with best fitnesscan be found as the leader wolf

233 Summon and Raid After migration the others beginto run toward the location of leader wolf and during theprocess of raiding any wolf keeps searching for preys fol-lowing equations (13) and (14)

234 Siege After summon and raid all other wolves cometo be around the leader wolf in order to siege the preyfollowing equations

k xidminusnew1113858 1113859 xid + step c times k

(k minusK minusK + 1 0 K minus 1 K)

ximinusnew k xi1minusnew1113858 1113859 k xi2minusnew1113858 1113859 k xi Dminusnew1113858 11138591113864 1113865

(18)

After any siegement the newest leader wolf can be gotwhich has best fitness temporarily

235 Regeneration After siege some wolves with poorerfitness will be eliminated while the same amount of wolveswill be regenerated according to equation (14)

6 Computational Intelligence and Neuroscience

236 Loop Here if t (the number of current iteration) isbigger than T (the upper limit of iterations) exit the loopOtherwise go to 42 for loop until t is bigger than T Andwhen the loop is over the leader wolf maybe is the globaloptima that the algorithm can find

24 Numerical Experiments In this paper four state-of-the-art optimization algorithms are taken to validate the per-formance of new algorithm proposed above detailed inTable 1

241 Experiments on Twelve Classical Benchmark FunctionsTable 2 shows benchmark functions for testing and Table 3shows the numerical experimental data of five algorithms on12 benchmark functions for testing mentioned above es-pecially the numerical experiments were done on a com-puter equipped with Ubuntu 16044 operating system Intel

(R) Core (TM) i7-5930K processor and 64G memory as wellas Matlab 2017a For genetic algorithm toolbox in Matlab2017a is utilized for GA experiments the PSO experimentswere implemented by a ldquoPSOtrdquo toolbox for Matlab LWPS isgot from the thought in [24] CWOA is carried out based onthe idea in [25] and the new algorithm ASGS-CWOA iscarried out in Matlab 2017a which is an integrated devel-opment environment with M programming language Toprove the good performance of the proposed algorithmoptimization calculations were run for 100 times on anybenchmark function for testing as well as any optimizationalgorithm mentioned above

Firstly focusing on Table 3 seen from the term ofbest value only new algorithm can find the theoreticalglobal optima of all benchmark functions for testing soASGS-CWOA has better performance in optimizationaccuracy

Furthermore for 100 times the worst and average valuesof ASGS-CWOA in the all benchmark functions for testingexcept the F2 named ldquoEasomrdquo reach the theoretical valuesrespectively and the new algorithm has better standard de-viation about best values detailed in Figure 6 from which itcan be seen that nearly all the standard deviations are bestexcept F2 and F6 especially the standard deviations are zeroon F1 F3 F4 F5 F7 F9 F10 F11 and F12 Figure 6(b) showsthat the value of standard deviation about ASGS-CWOA onF2 is not worst in five algorithms and it has better perfor-mance than GA Figure 6(f) indicates that the value ofstandard deviation about ASGS-CWOA on F6 reaches 10minus11weaker than the values on others Figure 6(h) demonstratesthat the value of standard deviation about ASGS-CWOA onF8 reaches 10minus30 which is not zero but best in five algorithmserefore ASGS-CWOA has better stability than others

In addition focusing on the number of mean iterationASGS-CWOAhas weaker iteration times on the test function 2ldquoEasomrdquo the test function 8 ldquoBridgerdquo and the test function 10ldquoBohachevsky1rdquo but it has better performances on the othertest functions especially the iteration times on test function 34 5 6 11 and 12 are 1 or about 1 So ASGS-CWOA has betteradvantage on performance of iteration number

5

Fixed amount 5

ndash5

15

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(a)

ASDUA

ndash5

5

15

25

35

45

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(b)

Figure 4 (a)e number that how many wolves will be starved to death and regenerated as the iteration goes on It is a fixed number 5 (b)e number that how many wolves will be starved to death and regenerated following the new method named SDUA as the iteration goeson

Step 1 initialize parameters of algorithm

Step 2 migration process with ASGS

Step 3 summon ndash raid process with OMR

Step 4 siege process with ASGS

Step 5 population regeneration with SDUA

End condition satisfied

Output or record head wolf position and fitness value

Yes

No

Step 6

Figure 5 Flow chart for the new proposed algorithm

Computational Intelligence and Neuroscience 7

Table 1 Arithmetic configuration

Order Name Configuration

1 Genetic algorithm [16] e crossover probability is set at 07 the mutation probability is set at 001 while the generation gap isset at 095

2 Particle swarmoptimization

Value of individual acceleration 2 value of weighted value of initial time 09 value of weighted value ofconvergence time 04 limit individual speed at 20 of the changing range

3 LWPS Migration step size stepa 15 raid step size stepb 09 siege threshold r0 02 upper limit of siege step sizestepcmax 106 lower limit of siege step size stepcmin 10minus2 updating number of wolves m 5

4 CWOAAmount of campaign wolves q 5 searching direction h 4 upper limit of searchHmax 15 migration stepsize stepa0 15 raid step size stepb 09 siege threshold r0 02 value of siege step size stepc0 16 updating

amount of wolves m 5

5 ASGS-CWOA Migration step-size stepa0 15 upper limit of siege step-size stepcmax 1e6 lower limit of siege step sizestep cmin 1eminus 40 upper limit number of iteration T 600 number of wolf population N 50

Table 2 Test functions [25]

Order Function Expression Dimension Range Optimum1 Matyas F1 026times (x2

1 +x22)minus 048times x1 times x2 2 [minus10 10] min f 0

2 Easom F2 minuscos (x1)times cos (x2)times exp[minus (x1minus π)2 minus (x2minus π)2] 2 [minus100 100] min f minus13 Sumsquares F3 1113936

Di1 ilowast x2

i 10 [minus15 15] min f 0

4 Sphere F4 1113936Di1 x2

i 30 [minus15 15] min f 05 Eggcrate F5 x2

1 + x22 + 25times (sin2 x1 + sin2 x2) 2 [minusπ π] min f 0

6 Six hump camelback F6 4times x1 minus 21x4

1 + (13)times x61 + x1 times x2 minus 4times x2

2 + 4times x42 2 [minus5 5] min

f minus103167 Bohachevsky3 F7 x2

1 + 2times x22 minus 03times cos (3πx1 + 4πx2) + 03 2 [minus100 100] min f 0

8 Bridge F8 (sinx21 + x2

2

1113969)

x21 + x2

2

1113969minus 07129 2 [minus15 15] max f 30054

9 Booth F9 (x1 + 2times x2 minus 7)2 + (2times x1 + x2 minus 5)2 2 [minus10 10] min f 010 Bohachevsky1 F10 x2

1 + 2x22 minus 03times cos (3πx1)minus 04times cos (4πx2) + 07 2 [minus100 100] min f 0

11 Ackley F11 minus20times exp (minus02times

(1D) 1113936Di1 x2

i

1113969

)minus exp((1D) 1113936

Di1 cos 2 πxi) + 20 + e

6 [minus15 15] min f 0

12 Quadric F12 1113936Di1 (1113936

ik1 xk)2 10 [minus15 15] min f 0

Table 3 Experimental results

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F1

1 364Eminus 12 283Eminus 08 352Eminus 10 283Eminus 09 488 040382 414Eminus 21 326Eminus 14 115Eminus 15 387Eminus 15 591 042823 479Eminus 26 495Eminus 21 258Eminus 22 579Eminus 22 585 240954 104Eminus 119 169Eminus 12 585Eminus 24 213Eminus 13 295 042665 0 0 0 0 2744 010806

F2

1 minus1 minus000008110 minus097 01714 153 003752 minus1 minus1 minus1 324Eminus 06 583 041963 minus1 minus1 minus1 0 507 191114 minus1 minus1 minus1 0 72 003215 minus1 0 minus002 00196 59262 27036

F3

1 850Eminus 07 161Eminus 05 277Eminus 06 335Eminus 06 593 143962 000017021 00114 00023 00022 595 047923 760Eminus 07 87485 14802 15621 600 483194 561Eminus 07 149Eminus 05 107Eminus 06 874Eminus 06 600 414535 0 0 0 0 1 102205

F4

1 0004 00331 00127 00055 592 319022 00351 01736 00902 00292 594 048683 35579 103152 80497 12048 600 147424 000001114 00098 00012 00048 600 084315 0 0 0 0 1 100834

8 Computational Intelligence and Neuroscience

Finally the mean time spent by ASGS-CWOA are smallerthan the othersrsquo on F1 F5 F6 and F11 as shown in Figures 7(a)7(e) 7(f) and 7(k) respectively On F7 F9 and F10 the fivealgorithms spent time roughly in the same order of magnitudeand ASGS-CWOA has better performance than GA PSO andLWPS on F7 and F9 as shown in Figures 7(g) and 7(i) Un-fortunately ASGS-CWOA spent most time on F2 F3 F4 F8and F12 yet it is a comfort that the times spent byASGS-CWOAare not too much to be unacceptable is phenomenon con-forms to a philosophical truth that nothing is perfect and flawlessin the world and ASGS-CWOA is without exception Ac-cordingly in general ASGS-CWOA has a better speed ofconvergence e detailed is shown in Figure 7

242 Supplementary Experiments In order to further verifythe performance of the new proposed algorithm supplementary

experiments are conducted on CEC-2014 (IEEE Congress onEvolutionary Computation 2014) testing functions [38] detailedin Table 4 It is different with the above 12 testing functions thatCEC2014 testing functions are conducted on a computerequipment with Win7-32 bit Matlab 2014a CPU (AMD A6-3400M) and RAM (40GB) due to the match between the givencec14_funcmexw32 andwindows 32 bit system not Linux 64 bit

From Table 5 and Figure 8(a) obviously it can be seenthat new proposed algorithm has better performance thanGA and LWPS in terms of ldquooptimal valuerdquo which meansASGS-CWOA has better performance in finding the globaloptima From Figures 8(b)ndash8(d) it can be seen that the newproposed algorithm has best performance in terms of ldquoworstvaluerdquo ldquoaverage valuerdquo and ldquostandard deviationrdquo in otherwords the new proposed algorithm has best stability androbustness From Figure 8(e) the proportion of ASGS-

Table 3 Continued

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F5

1 467Eminus 10 467Eminus 10 467Eminus 10 312Eminus 25 73 000852 920Eminus 22 666Eminus 15 134Eminus 16 678Eminus 16 594 042983 123Eminus 23 176Eminus 19 156Eminus 20 260Eminus 20 585 241124 375Eminus 136 131Eminus 19 428Eminus 22 224Eminus 11 195 021695 0 0 0 0 101 106Eminus 03

F6

1 minus10316 minus10316 minus10316 179Eminus 15 68 000772 minus10316 minus10316 minus10316 147Eminus 15 578 045973 minus10316 minus10316 minus10316 152Eminus 15 519 247274 minus10316 minus10316 minus10316 125Eminus 15 157 018235 minus10316 minus10316 minus10316 560Eminus 11 332 0021855

F7

1 407Eminus 08 164Eminus 06 317Eminus 07 501Eminus 07 437 03272 278Eminus 16 260Eminus 10 885Eminus 12 310Eminus 11 590 045513 0 167Eminus 16 777Eminus 18 250Eminus 17 556 177454 0 169Eminus 11 636Eminus 15 236Eminus 11 139 007935 0 0 0 0 2406 01064

F8

1 30054 27052 29787 00629 69 000792 30054 30054 30054 453Eminus 16 550 039463 30054 30038 30053 000016125 399 129644 30054 30054 30054 941Eminus 11 67 003075 30054 30054 30054 966Eminus 30 600 30412

F9

1 455Eminus 11 368Eminus 09 891Eminus 11 367Eminus 10 271 012762 247Eminus 20 191Eminus 11 197Eminus 13 191Eminus 12 591 045253 605Eminus 24 312Eminus 19 270Eminus 20 548Eminus 20 580 196064 0 315Eminus 11 146Eminus 22 511Eminus 12 132 007385 0 0 0 0 2686 011364

F10

1 436Eminus 07 04699 00047 0047 109 001972 0 365Eminus 12 112Eminus 13 403Eminus 13 588 045643 0 0 0 0 539 167364 0 0 0 0 78 00255 0 0 0 0 18488 086886

F11

1 15851 15851 15851 378Eminus 07 434 057882 15934 1594 15935 00000826 595 049983 155Eminus 06 21015 07618 05833 594 17894 921Eminus 07 000018069 520Eminus 05 41549ndash5 535 108225 0 0 0 0 1 019021

F12

1 20399 times Eminus 4 00277 00056 00059 597 15772 00151 02574 00863 00552 592 056083 0139 16442 09017 03516 600 211694 428Eminus 08 00214 000038043 00013 600 197285 0 0 0 0 1 74354

Computational Intelligence and Neuroscience 9

CWOA is better than PSO LWPS and CWOA and it meansthe new proposed ASGS-CWOA has advantages in timeperformance

In a word ASGS-CWOA has good optimization qualitystability advantage on performance of iteration numberand speed of convergence

3 Results and Discussion

eoretical research and experimental results reveal thatcompared with traditional genetic algorithm particle swarmoptimization leading wolf pack algorithm and chaotic wolf

optimization algorithm ASGS-CWOA has better globaloptimization accuracy fast convergence speed and highrobustness under the same conditions

In fact the strategy of ASGS greatly strengthens the localexploitation power of the original algorithm which makes iteasier for the algorithm to find the global optimum the strategyof ORM and SDUA effectively enhances the global explorationpower whichmakes the algorithm not easy to fall into the localoptimum and thus easier to find the global optimum

Focusing on Tables 3 and 5 and Figures 6 and 7 abovecompared with the four state-of-the-art algorithms ASGS-CWOA is effective and efficient in most of terms on

1 2 3 4 5

Stan

dard

dev

iatio

n

Algorithm order

Matyas

0E + 00

(a)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Easom

0E + 00

2E ndash 02

4E ndash 02

(b)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sumsquares

0E + 00

5E ndash 06

1E ndash 05

(c)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sphere

0E + 00

3E ndash 02

6E ndash 02

(d)

Stan

dard

dev

iatio

n

Algorithm order51 2 3 4

Execrate

0E + 00

(e)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

(f)

Algorithm order1 2 3 4 5

Bohachevsky3

Stan

dard

dev

iatio

n

0E + 00

(g)

Algorithm order1 2 3 4 5

Bridge

Stan

dard

dev

iatio

n

0E + 00

(h)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Booth

0E + 00

(i)

Algorithm order1 2 3 4 5

Bohachevsky1

Stan

dard

dev

iatio

n

0E + 00

4E ndash 13

8E ndash 13

(j)

Algorithm order1 2 3 4 5

Ackley

Stan

dard

dev

iatio

n

0E + 00

4E ndash 07

8E ndash 07

(k)

Algorithm order1 2 3 54

Quadric

Stan

dard

dev

iatio

n

0E + 00

4E ndash 03

8E ndash 03

(l)

Figure 6 Histograms of the SD about the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g) ndashF7 (h)ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12 (red means the value is out of upper limit of the chart)

10 Computational Intelligence and Neuroscience

Mea

n tim

e

Matyas

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(a)

Mea

n tim

e

Easom

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(b)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sumsquares

00E + 00

60E + 00

12E + 01

(c)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sphere

00E + 00

50E + 00

10E + 01

(d)

Mea

n tim

e

Algorithm order1 2 3 4 5

Execrate

0E + 00

1E + 00

2E + 00

3E + 00

(e)

Mea

n tim

e

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

1E + 00

2E + 00

3E + 00

(f )

Mea

n tim

e

Algorithm order1 2 3 4 5

Bohachevsky3

0E + 00

1E + 00

2E + 00

(g)

Mea

n tim

e

Algorithm order1 2 3 4 5

Bridge

0E + 00

1E + 00

2E + 00

3E + 00

(h)

Mea

n tim

e

Algorithm order1 2 3 4 5

Booth

0E + 00

1E + 00

2E + 00

(i)

Mea

n tim

e

1 2 3 4 5Algorithm order

Bohachevsky1

0E + 00

1E + 00

2E + 00

(j)

Mea

n tim

e

1 2 3 4 5Algorithm order

Ackley

0E + 00

1E + 00

2E + 00

3E + 00

(k)

Mea

n tim

e

1 2 3 4 5Algorithm order

Quadric

0E + 00

4E + 00

8E + 00

(l)

Figure 7 Histograms of the mean time spent on the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g)ndashF7 (h) ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12

Table 4 Part of the CECrsquo14 test functions [38]

No Functions Dimension Range Filowast Fi (xlowast)

1 Rotated High Conditioned Elliptic Function 2 [minus100 100] 1002 Rotated Bent Cigar Function 2 [minus100 100] 2003 Rotated Discuss Function 2 [minus100 100] 3004 Shifted and Rotated Rosenbrockrsquos Function 2 [minus100 100] 4005 Shifted and Rotated Ackleyrsquos Function 2 [minus100 100] 5006 Shifted and Rotated Weier Stress Function 2 [minus100 100] 6007 Shifted and Rotated Griewankrsquos Function 2 [minus100 100] 7008 Shifted Rastriginrsquos Function 2 [minus100 100] 8009 Shifted and Rotated Rastriginrsquos Function 2 [minus100 100] 900

Computational Intelligence and Neuroscience 11

Table 4 Continued

No Functions Dimension Range Filowast Fi (xlowast)

10 Shifted Schwefelrsquos Function 2 [minus100 100] 100011 Shifted and Rotated Schwefelrsquos Function 2 [minus100 100] 110012 Shifted and Rotated Katsuura Function 2 [minus100 100] 120013 Shifted and Rotated Happycat Function [6] 2 [minus100 100] 130014 Shifted and Rotated HGBat Function [6] 2 [minus100 100] 140015 Shifted and Rotated Expanded Griewankrsquos Plus Rosenbrockrsquos Function 2 [minus100 100] 150016 Shifted and Rotated Expanded Schafferrsquos F6 Function 2 [minus100 100] 1600

Table 5 Results of supplementary experiments

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

1 CECrsquo14-F1

GA 1049799 3072111 7855763 81199210 5760667 59246PSO 100 7167217 9669491 23642939 600 018536LWPS 104035 2219115 6507181 2761017 600 87827CWOA 1004238 8063862 30346 489991796 600 90239

ASGS-CWOA 100 100 100 0 1706 075354

2 CECrsquo14-F2

GA 2016655 5624136 15117 22028598 5359 55281PSO 2000028 5541629 18773888 25595547 600 01806LWPS 208964 2723770 7699574 47510154 600 92476CWOA 2000115 1815693 4895193 16911157 600 86269

ASGS-CWOA 200 200 200 0 1578 070151

3 CECrsquo14-F3

GA 3161717 170775349 4508479 23875975 5333 57871PSO 3000007 2391641 4469141 10156353 600 018159LWPS 3003407 5796796 88949 11049497 600 92331CWOA 3001111 2196991 7345405 23224581 600 85499

ASGS-CWOA 300 300 300 0 1762 072644

4 CECrsquo14-F4

GA 400 4000911 4000109 00005323 867333 0924PSO 400 400 400 323Eminus 29 600 017534LWPS 400 400 400 590Eminus 17 600 87697CWOA 400 400 400 0 3087 39497

ASGS-CWOA 400 400 400 0 1296 053133

5 CECrsquo14-F5

GA 500 520 5073432 505077 714333 076291PSO 500 5152678 5009217 67366 600 018704LWPS 500 5000006 5000001 149Eminus 08 600 88697CWOA 500 500 500 134Eminus 22 600 85232

ASGS-CWOA 5017851 520 51418 285606 600 298243

6 CECrsquo14-F6

GA 6000001 6009784 600302 010694 679 086812PSO 600 600 600 286Eminus 11 600 13431LWPS 6000001 6004594 6000245 00087421 600 195858CWOA 600 600 600 232Eminus 25 5995667 191092

ASGS-CWOA 600 600 600 0 4516 175578

7 CECrsquo14-F7

GA 700 7014272 7003767 019181 614333 067019PSO 700 7006977 7000213 00089771 600 018424LWPS 700 700323 7000795 00063773 600 85961CWOA 700 7000271 7000053 469Eminus 05 5145667 75231

ASGS-CWOA 700 700 700 0 2382 11578

8 CECrsquo14-F8

GA 800 8019899 8010945 048507 576333 062438PSO 800 8014806 8002342 013654 59426 017554LWPS 800 800995 8001327 011439 600 8486CWOA 800 8019899 8006965 053787 4973667 72666

ASGS-CWOA 800 800995 8002786 019957 46934 218131

9 CECrsquo14-F9

GA 900 9039798 9013929 11615 582667 063184PSO 900 900995 900186 010073 59131 017613LWPS 900 900995 9000995 0089095 600 85478CWOA 900 9019899 9006965 053787 5035333 69673

ASGS-CWOA 900 9039798 9005273 050398 49631 230867

12 Computational Intelligence and Neuroscience

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 6: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

wolves will be added to the population to make sure that thebad gene can be eliminated and the population diversity ofthe wolf pack can be ensured so it is not easy to fall into thelocal optimal solution and the convergence rate can beimproved However the amount of wolves that should beeliminated and added to the wolf pack is a fixed numberwhich is 5 in LWPS and CWOA mentioned before and thatis stiff and unreasonable In fact the amount should be adynamic number to reflect the dynamic situation of the wolfpack during any iteration

Standard deviation (SD) is a statistical concept whichhas been widely used in many fields Such as in [33]standard deviation was used into industrial equipment tohelp process the signal of bubble detection in liquid sodiumin [34] standard deviation is used with delay multiply toform a new weighting factor which is introduced to enhancethe contrast of the reconstructed images in medical fields in[35] based on eight-year-old dental trauma research datastandard deviation was utilized to help analyze the potentialof laser Doppler flowmetry the beam forming performancehas a large impact on image quality in ultrasound imagingto improve image resolution and contrast in [36] a newadaptive weighting factor for ultrasound imaging calledsignal mean-to-standard-deviation factor (SMSF) was pro-posed based on which researchers put forward an adaptivebeam forming method for ultrasound imaging based on themean-to-standard-deviation factor in [37] standard devi-ation was adopted to help analyze when an individual shouldstart social security

In this paper we take a concept named ldquoStandard De-viation Updating Amountrdquo to eliminate wolves with poorfitness and dynamically reflect the situation of the wolf packwhich means the population amount of wolf pack andstandard deviation about their fitness determine how manywolves will be eliminated and regenerated Standard devi-ation can be obtained as follows

σ

1N

1113944

N

i1xi minus μ( 1113857

2

11139741113972

(15)

whereNmeans the number of wolf pack ximeans the fitnessof the i-th wolf and μ is the mean value of the fitness enSDUA is gained by the following formula

SDUA

SDUA + 1 fitness(xi)ltμ minus σ2

1113874 1113875

do nothing fitness(xi)geμ minus σ2

1113874 1113875

⎧⎪⎪⎪⎪⎪⎨

⎪⎪⎪⎪⎪⎩

(16)

SDUA is zero when the iteration begins next differencebetween the mean value and the SD about the fitness of thewolf pack should be computed and if the fitness of currentwolf is less than the difference SDUA increases by 1 oth-erwise nothing must be done So the value of SDUA is gotand SDUA wolves should be eliminated and regenerated

e effect of using a fixed value is displayed inFigure 4(a) and Figure 4(b) shows the effect of using ASD Itis clear that the amount in the latter is fluctuant as the wolf

pack has a different fitness during any iteration and betterthan the former to reflect the dynamic situations of itera-tions Accordingly not only will the bad gene from the wolfpack be eliminated but the population diversity of the wolfpack can be also kept better so that it is difficulty to fall intolocal optimal solution and the convergence rate can beimproved as far as possible

23 Steps of ASGS-CWOA Based on the thoughts ofadaptive grid search and adaptive standard deviationupdating amount mentioned above ASGS-CWOA is pro-posed the implementation steps are following and its flowchart is shown in Figure 5

231 Initialization of Population e following parametersshould be initially assigned amount of wolf population is Nand dimension of searching space is D for brevity theranges of all dimensions are [rangemin rangemax] the upperlimit of iterations is T value of step size in migration isstep_a value of step size in summon and raid is step_b valueof step size in siegement is step_c and the population can begenerated initially by the following equation [25]

xid range min + chaos(0 1) times(range max minus range min)

(17)

where chaos (0 1) returns a chaotic variable distributed in[0 1] uniformly

232 Migration Here an adaptive grid centered on thecurrent wolf can be generated by equation (12) which in-cludes 2 nodes After migration the wolf with best fitnesscan be found as the leader wolf

233 Summon and Raid After migration the others beginto run toward the location of leader wolf and during theprocess of raiding any wolf keeps searching for preys fol-lowing equations (13) and (14)

234 Siege After summon and raid all other wolves cometo be around the leader wolf in order to siege the preyfollowing equations

k xidminusnew1113858 1113859 xid + step c times k

(k minusK minusK + 1 0 K minus 1 K)

ximinusnew k xi1minusnew1113858 1113859 k xi2minusnew1113858 1113859 k xi Dminusnew1113858 11138591113864 1113865

(18)

After any siegement the newest leader wolf can be gotwhich has best fitness temporarily

235 Regeneration After siege some wolves with poorerfitness will be eliminated while the same amount of wolveswill be regenerated according to equation (14)

6 Computational Intelligence and Neuroscience

236 Loop Here if t (the number of current iteration) isbigger than T (the upper limit of iterations) exit the loopOtherwise go to 42 for loop until t is bigger than T Andwhen the loop is over the leader wolf maybe is the globaloptima that the algorithm can find

24 Numerical Experiments In this paper four state-of-the-art optimization algorithms are taken to validate the per-formance of new algorithm proposed above detailed inTable 1

241 Experiments on Twelve Classical Benchmark FunctionsTable 2 shows benchmark functions for testing and Table 3shows the numerical experimental data of five algorithms on12 benchmark functions for testing mentioned above es-pecially the numerical experiments were done on a com-puter equipped with Ubuntu 16044 operating system Intel

(R) Core (TM) i7-5930K processor and 64G memory as wellas Matlab 2017a For genetic algorithm toolbox in Matlab2017a is utilized for GA experiments the PSO experimentswere implemented by a ldquoPSOtrdquo toolbox for Matlab LWPS isgot from the thought in [24] CWOA is carried out based onthe idea in [25] and the new algorithm ASGS-CWOA iscarried out in Matlab 2017a which is an integrated devel-opment environment with M programming language Toprove the good performance of the proposed algorithmoptimization calculations were run for 100 times on anybenchmark function for testing as well as any optimizationalgorithm mentioned above

Firstly focusing on Table 3 seen from the term ofbest value only new algorithm can find the theoreticalglobal optima of all benchmark functions for testing soASGS-CWOA has better performance in optimizationaccuracy

Furthermore for 100 times the worst and average valuesof ASGS-CWOA in the all benchmark functions for testingexcept the F2 named ldquoEasomrdquo reach the theoretical valuesrespectively and the new algorithm has better standard de-viation about best values detailed in Figure 6 from which itcan be seen that nearly all the standard deviations are bestexcept F2 and F6 especially the standard deviations are zeroon F1 F3 F4 F5 F7 F9 F10 F11 and F12 Figure 6(b) showsthat the value of standard deviation about ASGS-CWOA onF2 is not worst in five algorithms and it has better perfor-mance than GA Figure 6(f) indicates that the value ofstandard deviation about ASGS-CWOA on F6 reaches 10minus11weaker than the values on others Figure 6(h) demonstratesthat the value of standard deviation about ASGS-CWOA onF8 reaches 10minus30 which is not zero but best in five algorithmserefore ASGS-CWOA has better stability than others

In addition focusing on the number of mean iterationASGS-CWOAhas weaker iteration times on the test function 2ldquoEasomrdquo the test function 8 ldquoBridgerdquo and the test function 10ldquoBohachevsky1rdquo but it has better performances on the othertest functions especially the iteration times on test function 34 5 6 11 and 12 are 1 or about 1 So ASGS-CWOA has betteradvantage on performance of iteration number

5

Fixed amount 5

ndash5

15

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(a)

ASDUA

ndash5

5

15

25

35

45

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(b)

Figure 4 (a)e number that how many wolves will be starved to death and regenerated as the iteration goes on It is a fixed number 5 (b)e number that how many wolves will be starved to death and regenerated following the new method named SDUA as the iteration goeson

Step 1 initialize parameters of algorithm

Step 2 migration process with ASGS

Step 3 summon ndash raid process with OMR

Step 4 siege process with ASGS

Step 5 population regeneration with SDUA

End condition satisfied

Output or record head wolf position and fitness value

Yes

No

Step 6

Figure 5 Flow chart for the new proposed algorithm

Computational Intelligence and Neuroscience 7

Table 1 Arithmetic configuration

Order Name Configuration

1 Genetic algorithm [16] e crossover probability is set at 07 the mutation probability is set at 001 while the generation gap isset at 095

2 Particle swarmoptimization

Value of individual acceleration 2 value of weighted value of initial time 09 value of weighted value ofconvergence time 04 limit individual speed at 20 of the changing range

3 LWPS Migration step size stepa 15 raid step size stepb 09 siege threshold r0 02 upper limit of siege step sizestepcmax 106 lower limit of siege step size stepcmin 10minus2 updating number of wolves m 5

4 CWOAAmount of campaign wolves q 5 searching direction h 4 upper limit of searchHmax 15 migration stepsize stepa0 15 raid step size stepb 09 siege threshold r0 02 value of siege step size stepc0 16 updating

amount of wolves m 5

5 ASGS-CWOA Migration step-size stepa0 15 upper limit of siege step-size stepcmax 1e6 lower limit of siege step sizestep cmin 1eminus 40 upper limit number of iteration T 600 number of wolf population N 50

Table 2 Test functions [25]

Order Function Expression Dimension Range Optimum1 Matyas F1 026times (x2

1 +x22)minus 048times x1 times x2 2 [minus10 10] min f 0

2 Easom F2 minuscos (x1)times cos (x2)times exp[minus (x1minus π)2 minus (x2minus π)2] 2 [minus100 100] min f minus13 Sumsquares F3 1113936

Di1 ilowast x2

i 10 [minus15 15] min f 0

4 Sphere F4 1113936Di1 x2

i 30 [minus15 15] min f 05 Eggcrate F5 x2

1 + x22 + 25times (sin2 x1 + sin2 x2) 2 [minusπ π] min f 0

6 Six hump camelback F6 4times x1 minus 21x4

1 + (13)times x61 + x1 times x2 minus 4times x2

2 + 4times x42 2 [minus5 5] min

f minus103167 Bohachevsky3 F7 x2

1 + 2times x22 minus 03times cos (3πx1 + 4πx2) + 03 2 [minus100 100] min f 0

8 Bridge F8 (sinx21 + x2

2

1113969)

x21 + x2

2

1113969minus 07129 2 [minus15 15] max f 30054

9 Booth F9 (x1 + 2times x2 minus 7)2 + (2times x1 + x2 minus 5)2 2 [minus10 10] min f 010 Bohachevsky1 F10 x2

1 + 2x22 minus 03times cos (3πx1)minus 04times cos (4πx2) + 07 2 [minus100 100] min f 0

11 Ackley F11 minus20times exp (minus02times

(1D) 1113936Di1 x2

i

1113969

)minus exp((1D) 1113936

Di1 cos 2 πxi) + 20 + e

6 [minus15 15] min f 0

12 Quadric F12 1113936Di1 (1113936

ik1 xk)2 10 [minus15 15] min f 0

Table 3 Experimental results

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F1

1 364Eminus 12 283Eminus 08 352Eminus 10 283Eminus 09 488 040382 414Eminus 21 326Eminus 14 115Eminus 15 387Eminus 15 591 042823 479Eminus 26 495Eminus 21 258Eminus 22 579Eminus 22 585 240954 104Eminus 119 169Eminus 12 585Eminus 24 213Eminus 13 295 042665 0 0 0 0 2744 010806

F2

1 minus1 minus000008110 minus097 01714 153 003752 minus1 minus1 minus1 324Eminus 06 583 041963 minus1 minus1 minus1 0 507 191114 minus1 minus1 minus1 0 72 003215 minus1 0 minus002 00196 59262 27036

F3

1 850Eminus 07 161Eminus 05 277Eminus 06 335Eminus 06 593 143962 000017021 00114 00023 00022 595 047923 760Eminus 07 87485 14802 15621 600 483194 561Eminus 07 149Eminus 05 107Eminus 06 874Eminus 06 600 414535 0 0 0 0 1 102205

F4

1 0004 00331 00127 00055 592 319022 00351 01736 00902 00292 594 048683 35579 103152 80497 12048 600 147424 000001114 00098 00012 00048 600 084315 0 0 0 0 1 100834

8 Computational Intelligence and Neuroscience

Finally the mean time spent by ASGS-CWOA are smallerthan the othersrsquo on F1 F5 F6 and F11 as shown in Figures 7(a)7(e) 7(f) and 7(k) respectively On F7 F9 and F10 the fivealgorithms spent time roughly in the same order of magnitudeand ASGS-CWOA has better performance than GA PSO andLWPS on F7 and F9 as shown in Figures 7(g) and 7(i) Un-fortunately ASGS-CWOA spent most time on F2 F3 F4 F8and F12 yet it is a comfort that the times spent byASGS-CWOAare not too much to be unacceptable is phenomenon con-forms to a philosophical truth that nothing is perfect and flawlessin the world and ASGS-CWOA is without exception Ac-cordingly in general ASGS-CWOA has a better speed ofconvergence e detailed is shown in Figure 7

242 Supplementary Experiments In order to further verifythe performance of the new proposed algorithm supplementary

experiments are conducted on CEC-2014 (IEEE Congress onEvolutionary Computation 2014) testing functions [38] detailedin Table 4 It is different with the above 12 testing functions thatCEC2014 testing functions are conducted on a computerequipment with Win7-32 bit Matlab 2014a CPU (AMD A6-3400M) and RAM (40GB) due to the match between the givencec14_funcmexw32 andwindows 32 bit system not Linux 64 bit

From Table 5 and Figure 8(a) obviously it can be seenthat new proposed algorithm has better performance thanGA and LWPS in terms of ldquooptimal valuerdquo which meansASGS-CWOA has better performance in finding the globaloptima From Figures 8(b)ndash8(d) it can be seen that the newproposed algorithm has best performance in terms of ldquoworstvaluerdquo ldquoaverage valuerdquo and ldquostandard deviationrdquo in otherwords the new proposed algorithm has best stability androbustness From Figure 8(e) the proportion of ASGS-

Table 3 Continued

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F5

1 467Eminus 10 467Eminus 10 467Eminus 10 312Eminus 25 73 000852 920Eminus 22 666Eminus 15 134Eminus 16 678Eminus 16 594 042983 123Eminus 23 176Eminus 19 156Eminus 20 260Eminus 20 585 241124 375Eminus 136 131Eminus 19 428Eminus 22 224Eminus 11 195 021695 0 0 0 0 101 106Eminus 03

F6

1 minus10316 minus10316 minus10316 179Eminus 15 68 000772 minus10316 minus10316 minus10316 147Eminus 15 578 045973 minus10316 minus10316 minus10316 152Eminus 15 519 247274 minus10316 minus10316 minus10316 125Eminus 15 157 018235 minus10316 minus10316 minus10316 560Eminus 11 332 0021855

F7

1 407Eminus 08 164Eminus 06 317Eminus 07 501Eminus 07 437 03272 278Eminus 16 260Eminus 10 885Eminus 12 310Eminus 11 590 045513 0 167Eminus 16 777Eminus 18 250Eminus 17 556 177454 0 169Eminus 11 636Eminus 15 236Eminus 11 139 007935 0 0 0 0 2406 01064

F8

1 30054 27052 29787 00629 69 000792 30054 30054 30054 453Eminus 16 550 039463 30054 30038 30053 000016125 399 129644 30054 30054 30054 941Eminus 11 67 003075 30054 30054 30054 966Eminus 30 600 30412

F9

1 455Eminus 11 368Eminus 09 891Eminus 11 367Eminus 10 271 012762 247Eminus 20 191Eminus 11 197Eminus 13 191Eminus 12 591 045253 605Eminus 24 312Eminus 19 270Eminus 20 548Eminus 20 580 196064 0 315Eminus 11 146Eminus 22 511Eminus 12 132 007385 0 0 0 0 2686 011364

F10

1 436Eminus 07 04699 00047 0047 109 001972 0 365Eminus 12 112Eminus 13 403Eminus 13 588 045643 0 0 0 0 539 167364 0 0 0 0 78 00255 0 0 0 0 18488 086886

F11

1 15851 15851 15851 378Eminus 07 434 057882 15934 1594 15935 00000826 595 049983 155Eminus 06 21015 07618 05833 594 17894 921Eminus 07 000018069 520Eminus 05 41549ndash5 535 108225 0 0 0 0 1 019021

F12

1 20399 times Eminus 4 00277 00056 00059 597 15772 00151 02574 00863 00552 592 056083 0139 16442 09017 03516 600 211694 428Eminus 08 00214 000038043 00013 600 197285 0 0 0 0 1 74354

Computational Intelligence and Neuroscience 9

CWOA is better than PSO LWPS and CWOA and it meansthe new proposed ASGS-CWOA has advantages in timeperformance

In a word ASGS-CWOA has good optimization qualitystability advantage on performance of iteration numberand speed of convergence

3 Results and Discussion

eoretical research and experimental results reveal thatcompared with traditional genetic algorithm particle swarmoptimization leading wolf pack algorithm and chaotic wolf

optimization algorithm ASGS-CWOA has better globaloptimization accuracy fast convergence speed and highrobustness under the same conditions

In fact the strategy of ASGS greatly strengthens the localexploitation power of the original algorithm which makes iteasier for the algorithm to find the global optimum the strategyof ORM and SDUA effectively enhances the global explorationpower whichmakes the algorithm not easy to fall into the localoptimum and thus easier to find the global optimum

Focusing on Tables 3 and 5 and Figures 6 and 7 abovecompared with the four state-of-the-art algorithms ASGS-CWOA is effective and efficient in most of terms on

1 2 3 4 5

Stan

dard

dev

iatio

n

Algorithm order

Matyas

0E + 00

(a)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Easom

0E + 00

2E ndash 02

4E ndash 02

(b)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sumsquares

0E + 00

5E ndash 06

1E ndash 05

(c)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sphere

0E + 00

3E ndash 02

6E ndash 02

(d)

Stan

dard

dev

iatio

n

Algorithm order51 2 3 4

Execrate

0E + 00

(e)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

(f)

Algorithm order1 2 3 4 5

Bohachevsky3

Stan

dard

dev

iatio

n

0E + 00

(g)

Algorithm order1 2 3 4 5

Bridge

Stan

dard

dev

iatio

n

0E + 00

(h)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Booth

0E + 00

(i)

Algorithm order1 2 3 4 5

Bohachevsky1

Stan

dard

dev

iatio

n

0E + 00

4E ndash 13

8E ndash 13

(j)

Algorithm order1 2 3 4 5

Ackley

Stan

dard

dev

iatio

n

0E + 00

4E ndash 07

8E ndash 07

(k)

Algorithm order1 2 3 54

Quadric

Stan

dard

dev

iatio

n

0E + 00

4E ndash 03

8E ndash 03

(l)

Figure 6 Histograms of the SD about the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g) ndashF7 (h)ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12 (red means the value is out of upper limit of the chart)

10 Computational Intelligence and Neuroscience

Mea

n tim

e

Matyas

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(a)

Mea

n tim

e

Easom

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(b)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sumsquares

00E + 00

60E + 00

12E + 01

(c)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sphere

00E + 00

50E + 00

10E + 01

(d)

Mea

n tim

e

Algorithm order1 2 3 4 5

Execrate

0E + 00

1E + 00

2E + 00

3E + 00

(e)

Mea

n tim

e

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

1E + 00

2E + 00

3E + 00

(f )

Mea

n tim

e

Algorithm order1 2 3 4 5

Bohachevsky3

0E + 00

1E + 00

2E + 00

(g)

Mea

n tim

e

Algorithm order1 2 3 4 5

Bridge

0E + 00

1E + 00

2E + 00

3E + 00

(h)

Mea

n tim

e

Algorithm order1 2 3 4 5

Booth

0E + 00

1E + 00

2E + 00

(i)

Mea

n tim

e

1 2 3 4 5Algorithm order

Bohachevsky1

0E + 00

1E + 00

2E + 00

(j)

Mea

n tim

e

1 2 3 4 5Algorithm order

Ackley

0E + 00

1E + 00

2E + 00

3E + 00

(k)

Mea

n tim

e

1 2 3 4 5Algorithm order

Quadric

0E + 00

4E + 00

8E + 00

(l)

Figure 7 Histograms of the mean time spent on the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g)ndashF7 (h) ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12

Table 4 Part of the CECrsquo14 test functions [38]

No Functions Dimension Range Filowast Fi (xlowast)

1 Rotated High Conditioned Elliptic Function 2 [minus100 100] 1002 Rotated Bent Cigar Function 2 [minus100 100] 2003 Rotated Discuss Function 2 [minus100 100] 3004 Shifted and Rotated Rosenbrockrsquos Function 2 [minus100 100] 4005 Shifted and Rotated Ackleyrsquos Function 2 [minus100 100] 5006 Shifted and Rotated Weier Stress Function 2 [minus100 100] 6007 Shifted and Rotated Griewankrsquos Function 2 [minus100 100] 7008 Shifted Rastriginrsquos Function 2 [minus100 100] 8009 Shifted and Rotated Rastriginrsquos Function 2 [minus100 100] 900

Computational Intelligence and Neuroscience 11

Table 4 Continued

No Functions Dimension Range Filowast Fi (xlowast)

10 Shifted Schwefelrsquos Function 2 [minus100 100] 100011 Shifted and Rotated Schwefelrsquos Function 2 [minus100 100] 110012 Shifted and Rotated Katsuura Function 2 [minus100 100] 120013 Shifted and Rotated Happycat Function [6] 2 [minus100 100] 130014 Shifted and Rotated HGBat Function [6] 2 [minus100 100] 140015 Shifted and Rotated Expanded Griewankrsquos Plus Rosenbrockrsquos Function 2 [minus100 100] 150016 Shifted and Rotated Expanded Schafferrsquos F6 Function 2 [minus100 100] 1600

Table 5 Results of supplementary experiments

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

1 CECrsquo14-F1

GA 1049799 3072111 7855763 81199210 5760667 59246PSO 100 7167217 9669491 23642939 600 018536LWPS 104035 2219115 6507181 2761017 600 87827CWOA 1004238 8063862 30346 489991796 600 90239

ASGS-CWOA 100 100 100 0 1706 075354

2 CECrsquo14-F2

GA 2016655 5624136 15117 22028598 5359 55281PSO 2000028 5541629 18773888 25595547 600 01806LWPS 208964 2723770 7699574 47510154 600 92476CWOA 2000115 1815693 4895193 16911157 600 86269

ASGS-CWOA 200 200 200 0 1578 070151

3 CECrsquo14-F3

GA 3161717 170775349 4508479 23875975 5333 57871PSO 3000007 2391641 4469141 10156353 600 018159LWPS 3003407 5796796 88949 11049497 600 92331CWOA 3001111 2196991 7345405 23224581 600 85499

ASGS-CWOA 300 300 300 0 1762 072644

4 CECrsquo14-F4

GA 400 4000911 4000109 00005323 867333 0924PSO 400 400 400 323Eminus 29 600 017534LWPS 400 400 400 590Eminus 17 600 87697CWOA 400 400 400 0 3087 39497

ASGS-CWOA 400 400 400 0 1296 053133

5 CECrsquo14-F5

GA 500 520 5073432 505077 714333 076291PSO 500 5152678 5009217 67366 600 018704LWPS 500 5000006 5000001 149Eminus 08 600 88697CWOA 500 500 500 134Eminus 22 600 85232

ASGS-CWOA 5017851 520 51418 285606 600 298243

6 CECrsquo14-F6

GA 6000001 6009784 600302 010694 679 086812PSO 600 600 600 286Eminus 11 600 13431LWPS 6000001 6004594 6000245 00087421 600 195858CWOA 600 600 600 232Eminus 25 5995667 191092

ASGS-CWOA 600 600 600 0 4516 175578

7 CECrsquo14-F7

GA 700 7014272 7003767 019181 614333 067019PSO 700 7006977 7000213 00089771 600 018424LWPS 700 700323 7000795 00063773 600 85961CWOA 700 7000271 7000053 469Eminus 05 5145667 75231

ASGS-CWOA 700 700 700 0 2382 11578

8 CECrsquo14-F8

GA 800 8019899 8010945 048507 576333 062438PSO 800 8014806 8002342 013654 59426 017554LWPS 800 800995 8001327 011439 600 8486CWOA 800 8019899 8006965 053787 4973667 72666

ASGS-CWOA 800 800995 8002786 019957 46934 218131

9 CECrsquo14-F9

GA 900 9039798 9013929 11615 582667 063184PSO 900 900995 900186 010073 59131 017613LWPS 900 900995 9000995 0089095 600 85478CWOA 900 9019899 9006965 053787 5035333 69673

ASGS-CWOA 900 9039798 9005273 050398 49631 230867

12 Computational Intelligence and Neuroscience

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 7: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

236 Loop Here if t (the number of current iteration) isbigger than T (the upper limit of iterations) exit the loopOtherwise go to 42 for loop until t is bigger than T Andwhen the loop is over the leader wolf maybe is the globaloptima that the algorithm can find

24 Numerical Experiments In this paper four state-of-the-art optimization algorithms are taken to validate the per-formance of new algorithm proposed above detailed inTable 1

241 Experiments on Twelve Classical Benchmark FunctionsTable 2 shows benchmark functions for testing and Table 3shows the numerical experimental data of five algorithms on12 benchmark functions for testing mentioned above es-pecially the numerical experiments were done on a com-puter equipped with Ubuntu 16044 operating system Intel

(R) Core (TM) i7-5930K processor and 64G memory as wellas Matlab 2017a For genetic algorithm toolbox in Matlab2017a is utilized for GA experiments the PSO experimentswere implemented by a ldquoPSOtrdquo toolbox for Matlab LWPS isgot from the thought in [24] CWOA is carried out based onthe idea in [25] and the new algorithm ASGS-CWOA iscarried out in Matlab 2017a which is an integrated devel-opment environment with M programming language Toprove the good performance of the proposed algorithmoptimization calculations were run for 100 times on anybenchmark function for testing as well as any optimizationalgorithm mentioned above

Firstly focusing on Table 3 seen from the term ofbest value only new algorithm can find the theoreticalglobal optima of all benchmark functions for testing soASGS-CWOA has better performance in optimizationaccuracy

Furthermore for 100 times the worst and average valuesof ASGS-CWOA in the all benchmark functions for testingexcept the F2 named ldquoEasomrdquo reach the theoretical valuesrespectively and the new algorithm has better standard de-viation about best values detailed in Figure 6 from which itcan be seen that nearly all the standard deviations are bestexcept F2 and F6 especially the standard deviations are zeroon F1 F3 F4 F5 F7 F9 F10 F11 and F12 Figure 6(b) showsthat the value of standard deviation about ASGS-CWOA onF2 is not worst in five algorithms and it has better perfor-mance than GA Figure 6(f) indicates that the value ofstandard deviation about ASGS-CWOA on F6 reaches 10minus11weaker than the values on others Figure 6(h) demonstratesthat the value of standard deviation about ASGS-CWOA onF8 reaches 10minus30 which is not zero but best in five algorithmserefore ASGS-CWOA has better stability than others

In addition focusing on the number of mean iterationASGS-CWOAhas weaker iteration times on the test function 2ldquoEasomrdquo the test function 8 ldquoBridgerdquo and the test function 10ldquoBohachevsky1rdquo but it has better performances on the othertest functions especially the iteration times on test function 34 5 6 11 and 12 are 1 or about 1 So ASGS-CWOA has betteradvantage on performance of iteration number

5

Fixed amount 5

ndash5

15

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(a)

ASDUA

ndash5

5

15

25

35

45

Am

ount

10 20 30 40 50 60 70 80 90 1000Number of iterations

(b)

Figure 4 (a)e number that how many wolves will be starved to death and regenerated as the iteration goes on It is a fixed number 5 (b)e number that how many wolves will be starved to death and regenerated following the new method named SDUA as the iteration goeson

Step 1 initialize parameters of algorithm

Step 2 migration process with ASGS

Step 3 summon ndash raid process with OMR

Step 4 siege process with ASGS

Step 5 population regeneration with SDUA

End condition satisfied

Output or record head wolf position and fitness value

Yes

No

Step 6

Figure 5 Flow chart for the new proposed algorithm

Computational Intelligence and Neuroscience 7

Table 1 Arithmetic configuration

Order Name Configuration

1 Genetic algorithm [16] e crossover probability is set at 07 the mutation probability is set at 001 while the generation gap isset at 095

2 Particle swarmoptimization

Value of individual acceleration 2 value of weighted value of initial time 09 value of weighted value ofconvergence time 04 limit individual speed at 20 of the changing range

3 LWPS Migration step size stepa 15 raid step size stepb 09 siege threshold r0 02 upper limit of siege step sizestepcmax 106 lower limit of siege step size stepcmin 10minus2 updating number of wolves m 5

4 CWOAAmount of campaign wolves q 5 searching direction h 4 upper limit of searchHmax 15 migration stepsize stepa0 15 raid step size stepb 09 siege threshold r0 02 value of siege step size stepc0 16 updating

amount of wolves m 5

5 ASGS-CWOA Migration step-size stepa0 15 upper limit of siege step-size stepcmax 1e6 lower limit of siege step sizestep cmin 1eminus 40 upper limit number of iteration T 600 number of wolf population N 50

Table 2 Test functions [25]

Order Function Expression Dimension Range Optimum1 Matyas F1 026times (x2

1 +x22)minus 048times x1 times x2 2 [minus10 10] min f 0

2 Easom F2 minuscos (x1)times cos (x2)times exp[minus (x1minus π)2 minus (x2minus π)2] 2 [minus100 100] min f minus13 Sumsquares F3 1113936

Di1 ilowast x2

i 10 [minus15 15] min f 0

4 Sphere F4 1113936Di1 x2

i 30 [minus15 15] min f 05 Eggcrate F5 x2

1 + x22 + 25times (sin2 x1 + sin2 x2) 2 [minusπ π] min f 0

6 Six hump camelback F6 4times x1 minus 21x4

1 + (13)times x61 + x1 times x2 minus 4times x2

2 + 4times x42 2 [minus5 5] min

f minus103167 Bohachevsky3 F7 x2

1 + 2times x22 minus 03times cos (3πx1 + 4πx2) + 03 2 [minus100 100] min f 0

8 Bridge F8 (sinx21 + x2

2

1113969)

x21 + x2

2

1113969minus 07129 2 [minus15 15] max f 30054

9 Booth F9 (x1 + 2times x2 minus 7)2 + (2times x1 + x2 minus 5)2 2 [minus10 10] min f 010 Bohachevsky1 F10 x2

1 + 2x22 minus 03times cos (3πx1)minus 04times cos (4πx2) + 07 2 [minus100 100] min f 0

11 Ackley F11 minus20times exp (minus02times

(1D) 1113936Di1 x2

i

1113969

)minus exp((1D) 1113936

Di1 cos 2 πxi) + 20 + e

6 [minus15 15] min f 0

12 Quadric F12 1113936Di1 (1113936

ik1 xk)2 10 [minus15 15] min f 0

Table 3 Experimental results

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F1

1 364Eminus 12 283Eminus 08 352Eminus 10 283Eminus 09 488 040382 414Eminus 21 326Eminus 14 115Eminus 15 387Eminus 15 591 042823 479Eminus 26 495Eminus 21 258Eminus 22 579Eminus 22 585 240954 104Eminus 119 169Eminus 12 585Eminus 24 213Eminus 13 295 042665 0 0 0 0 2744 010806

F2

1 minus1 minus000008110 minus097 01714 153 003752 minus1 minus1 minus1 324Eminus 06 583 041963 minus1 minus1 minus1 0 507 191114 minus1 minus1 minus1 0 72 003215 minus1 0 minus002 00196 59262 27036

F3

1 850Eminus 07 161Eminus 05 277Eminus 06 335Eminus 06 593 143962 000017021 00114 00023 00022 595 047923 760Eminus 07 87485 14802 15621 600 483194 561Eminus 07 149Eminus 05 107Eminus 06 874Eminus 06 600 414535 0 0 0 0 1 102205

F4

1 0004 00331 00127 00055 592 319022 00351 01736 00902 00292 594 048683 35579 103152 80497 12048 600 147424 000001114 00098 00012 00048 600 084315 0 0 0 0 1 100834

8 Computational Intelligence and Neuroscience

Finally the mean time spent by ASGS-CWOA are smallerthan the othersrsquo on F1 F5 F6 and F11 as shown in Figures 7(a)7(e) 7(f) and 7(k) respectively On F7 F9 and F10 the fivealgorithms spent time roughly in the same order of magnitudeand ASGS-CWOA has better performance than GA PSO andLWPS on F7 and F9 as shown in Figures 7(g) and 7(i) Un-fortunately ASGS-CWOA spent most time on F2 F3 F4 F8and F12 yet it is a comfort that the times spent byASGS-CWOAare not too much to be unacceptable is phenomenon con-forms to a philosophical truth that nothing is perfect and flawlessin the world and ASGS-CWOA is without exception Ac-cordingly in general ASGS-CWOA has a better speed ofconvergence e detailed is shown in Figure 7

242 Supplementary Experiments In order to further verifythe performance of the new proposed algorithm supplementary

experiments are conducted on CEC-2014 (IEEE Congress onEvolutionary Computation 2014) testing functions [38] detailedin Table 4 It is different with the above 12 testing functions thatCEC2014 testing functions are conducted on a computerequipment with Win7-32 bit Matlab 2014a CPU (AMD A6-3400M) and RAM (40GB) due to the match between the givencec14_funcmexw32 andwindows 32 bit system not Linux 64 bit

From Table 5 and Figure 8(a) obviously it can be seenthat new proposed algorithm has better performance thanGA and LWPS in terms of ldquooptimal valuerdquo which meansASGS-CWOA has better performance in finding the globaloptima From Figures 8(b)ndash8(d) it can be seen that the newproposed algorithm has best performance in terms of ldquoworstvaluerdquo ldquoaverage valuerdquo and ldquostandard deviationrdquo in otherwords the new proposed algorithm has best stability androbustness From Figure 8(e) the proportion of ASGS-

Table 3 Continued

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F5

1 467Eminus 10 467Eminus 10 467Eminus 10 312Eminus 25 73 000852 920Eminus 22 666Eminus 15 134Eminus 16 678Eminus 16 594 042983 123Eminus 23 176Eminus 19 156Eminus 20 260Eminus 20 585 241124 375Eminus 136 131Eminus 19 428Eminus 22 224Eminus 11 195 021695 0 0 0 0 101 106Eminus 03

F6

1 minus10316 minus10316 minus10316 179Eminus 15 68 000772 minus10316 minus10316 minus10316 147Eminus 15 578 045973 minus10316 minus10316 minus10316 152Eminus 15 519 247274 minus10316 minus10316 minus10316 125Eminus 15 157 018235 minus10316 minus10316 minus10316 560Eminus 11 332 0021855

F7

1 407Eminus 08 164Eminus 06 317Eminus 07 501Eminus 07 437 03272 278Eminus 16 260Eminus 10 885Eminus 12 310Eminus 11 590 045513 0 167Eminus 16 777Eminus 18 250Eminus 17 556 177454 0 169Eminus 11 636Eminus 15 236Eminus 11 139 007935 0 0 0 0 2406 01064

F8

1 30054 27052 29787 00629 69 000792 30054 30054 30054 453Eminus 16 550 039463 30054 30038 30053 000016125 399 129644 30054 30054 30054 941Eminus 11 67 003075 30054 30054 30054 966Eminus 30 600 30412

F9

1 455Eminus 11 368Eminus 09 891Eminus 11 367Eminus 10 271 012762 247Eminus 20 191Eminus 11 197Eminus 13 191Eminus 12 591 045253 605Eminus 24 312Eminus 19 270Eminus 20 548Eminus 20 580 196064 0 315Eminus 11 146Eminus 22 511Eminus 12 132 007385 0 0 0 0 2686 011364

F10

1 436Eminus 07 04699 00047 0047 109 001972 0 365Eminus 12 112Eminus 13 403Eminus 13 588 045643 0 0 0 0 539 167364 0 0 0 0 78 00255 0 0 0 0 18488 086886

F11

1 15851 15851 15851 378Eminus 07 434 057882 15934 1594 15935 00000826 595 049983 155Eminus 06 21015 07618 05833 594 17894 921Eminus 07 000018069 520Eminus 05 41549ndash5 535 108225 0 0 0 0 1 019021

F12

1 20399 times Eminus 4 00277 00056 00059 597 15772 00151 02574 00863 00552 592 056083 0139 16442 09017 03516 600 211694 428Eminus 08 00214 000038043 00013 600 197285 0 0 0 0 1 74354

Computational Intelligence and Neuroscience 9

CWOA is better than PSO LWPS and CWOA and it meansthe new proposed ASGS-CWOA has advantages in timeperformance

In a word ASGS-CWOA has good optimization qualitystability advantage on performance of iteration numberand speed of convergence

3 Results and Discussion

eoretical research and experimental results reveal thatcompared with traditional genetic algorithm particle swarmoptimization leading wolf pack algorithm and chaotic wolf

optimization algorithm ASGS-CWOA has better globaloptimization accuracy fast convergence speed and highrobustness under the same conditions

In fact the strategy of ASGS greatly strengthens the localexploitation power of the original algorithm which makes iteasier for the algorithm to find the global optimum the strategyof ORM and SDUA effectively enhances the global explorationpower whichmakes the algorithm not easy to fall into the localoptimum and thus easier to find the global optimum

Focusing on Tables 3 and 5 and Figures 6 and 7 abovecompared with the four state-of-the-art algorithms ASGS-CWOA is effective and efficient in most of terms on

1 2 3 4 5

Stan

dard

dev

iatio

n

Algorithm order

Matyas

0E + 00

(a)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Easom

0E + 00

2E ndash 02

4E ndash 02

(b)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sumsquares

0E + 00

5E ndash 06

1E ndash 05

(c)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sphere

0E + 00

3E ndash 02

6E ndash 02

(d)

Stan

dard

dev

iatio

n

Algorithm order51 2 3 4

Execrate

0E + 00

(e)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

(f)

Algorithm order1 2 3 4 5

Bohachevsky3

Stan

dard

dev

iatio

n

0E + 00

(g)

Algorithm order1 2 3 4 5

Bridge

Stan

dard

dev

iatio

n

0E + 00

(h)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Booth

0E + 00

(i)

Algorithm order1 2 3 4 5

Bohachevsky1

Stan

dard

dev

iatio

n

0E + 00

4E ndash 13

8E ndash 13

(j)

Algorithm order1 2 3 4 5

Ackley

Stan

dard

dev

iatio

n

0E + 00

4E ndash 07

8E ndash 07

(k)

Algorithm order1 2 3 54

Quadric

Stan

dard

dev

iatio

n

0E + 00

4E ndash 03

8E ndash 03

(l)

Figure 6 Histograms of the SD about the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g) ndashF7 (h)ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12 (red means the value is out of upper limit of the chart)

10 Computational Intelligence and Neuroscience

Mea

n tim

e

Matyas

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(a)

Mea

n tim

e

Easom

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(b)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sumsquares

00E + 00

60E + 00

12E + 01

(c)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sphere

00E + 00

50E + 00

10E + 01

(d)

Mea

n tim

e

Algorithm order1 2 3 4 5

Execrate

0E + 00

1E + 00

2E + 00

3E + 00

(e)

Mea

n tim

e

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

1E + 00

2E + 00

3E + 00

(f )

Mea

n tim

e

Algorithm order1 2 3 4 5

Bohachevsky3

0E + 00

1E + 00

2E + 00

(g)

Mea

n tim

e

Algorithm order1 2 3 4 5

Bridge

0E + 00

1E + 00

2E + 00

3E + 00

(h)

Mea

n tim

e

Algorithm order1 2 3 4 5

Booth

0E + 00

1E + 00

2E + 00

(i)

Mea

n tim

e

1 2 3 4 5Algorithm order

Bohachevsky1

0E + 00

1E + 00

2E + 00

(j)

Mea

n tim

e

1 2 3 4 5Algorithm order

Ackley

0E + 00

1E + 00

2E + 00

3E + 00

(k)

Mea

n tim

e

1 2 3 4 5Algorithm order

Quadric

0E + 00

4E + 00

8E + 00

(l)

Figure 7 Histograms of the mean time spent on the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g)ndashF7 (h) ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12

Table 4 Part of the CECrsquo14 test functions [38]

No Functions Dimension Range Filowast Fi (xlowast)

1 Rotated High Conditioned Elliptic Function 2 [minus100 100] 1002 Rotated Bent Cigar Function 2 [minus100 100] 2003 Rotated Discuss Function 2 [minus100 100] 3004 Shifted and Rotated Rosenbrockrsquos Function 2 [minus100 100] 4005 Shifted and Rotated Ackleyrsquos Function 2 [minus100 100] 5006 Shifted and Rotated Weier Stress Function 2 [minus100 100] 6007 Shifted and Rotated Griewankrsquos Function 2 [minus100 100] 7008 Shifted Rastriginrsquos Function 2 [minus100 100] 8009 Shifted and Rotated Rastriginrsquos Function 2 [minus100 100] 900

Computational Intelligence and Neuroscience 11

Table 4 Continued

No Functions Dimension Range Filowast Fi (xlowast)

10 Shifted Schwefelrsquos Function 2 [minus100 100] 100011 Shifted and Rotated Schwefelrsquos Function 2 [minus100 100] 110012 Shifted and Rotated Katsuura Function 2 [minus100 100] 120013 Shifted and Rotated Happycat Function [6] 2 [minus100 100] 130014 Shifted and Rotated HGBat Function [6] 2 [minus100 100] 140015 Shifted and Rotated Expanded Griewankrsquos Plus Rosenbrockrsquos Function 2 [minus100 100] 150016 Shifted and Rotated Expanded Schafferrsquos F6 Function 2 [minus100 100] 1600

Table 5 Results of supplementary experiments

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

1 CECrsquo14-F1

GA 1049799 3072111 7855763 81199210 5760667 59246PSO 100 7167217 9669491 23642939 600 018536LWPS 104035 2219115 6507181 2761017 600 87827CWOA 1004238 8063862 30346 489991796 600 90239

ASGS-CWOA 100 100 100 0 1706 075354

2 CECrsquo14-F2

GA 2016655 5624136 15117 22028598 5359 55281PSO 2000028 5541629 18773888 25595547 600 01806LWPS 208964 2723770 7699574 47510154 600 92476CWOA 2000115 1815693 4895193 16911157 600 86269

ASGS-CWOA 200 200 200 0 1578 070151

3 CECrsquo14-F3

GA 3161717 170775349 4508479 23875975 5333 57871PSO 3000007 2391641 4469141 10156353 600 018159LWPS 3003407 5796796 88949 11049497 600 92331CWOA 3001111 2196991 7345405 23224581 600 85499

ASGS-CWOA 300 300 300 0 1762 072644

4 CECrsquo14-F4

GA 400 4000911 4000109 00005323 867333 0924PSO 400 400 400 323Eminus 29 600 017534LWPS 400 400 400 590Eminus 17 600 87697CWOA 400 400 400 0 3087 39497

ASGS-CWOA 400 400 400 0 1296 053133

5 CECrsquo14-F5

GA 500 520 5073432 505077 714333 076291PSO 500 5152678 5009217 67366 600 018704LWPS 500 5000006 5000001 149Eminus 08 600 88697CWOA 500 500 500 134Eminus 22 600 85232

ASGS-CWOA 5017851 520 51418 285606 600 298243

6 CECrsquo14-F6

GA 6000001 6009784 600302 010694 679 086812PSO 600 600 600 286Eminus 11 600 13431LWPS 6000001 6004594 6000245 00087421 600 195858CWOA 600 600 600 232Eminus 25 5995667 191092

ASGS-CWOA 600 600 600 0 4516 175578

7 CECrsquo14-F7

GA 700 7014272 7003767 019181 614333 067019PSO 700 7006977 7000213 00089771 600 018424LWPS 700 700323 7000795 00063773 600 85961CWOA 700 7000271 7000053 469Eminus 05 5145667 75231

ASGS-CWOA 700 700 700 0 2382 11578

8 CECrsquo14-F8

GA 800 8019899 8010945 048507 576333 062438PSO 800 8014806 8002342 013654 59426 017554LWPS 800 800995 8001327 011439 600 8486CWOA 800 8019899 8006965 053787 4973667 72666

ASGS-CWOA 800 800995 8002786 019957 46934 218131

9 CECrsquo14-F9

GA 900 9039798 9013929 11615 582667 063184PSO 900 900995 900186 010073 59131 017613LWPS 900 900995 9000995 0089095 600 85478CWOA 900 9019899 9006965 053787 5035333 69673

ASGS-CWOA 900 9039798 9005273 050398 49631 230867

12 Computational Intelligence and Neuroscience

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 8: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

Table 1 Arithmetic configuration

Order Name Configuration

1 Genetic algorithm [16] e crossover probability is set at 07 the mutation probability is set at 001 while the generation gap isset at 095

2 Particle swarmoptimization

Value of individual acceleration 2 value of weighted value of initial time 09 value of weighted value ofconvergence time 04 limit individual speed at 20 of the changing range

3 LWPS Migration step size stepa 15 raid step size stepb 09 siege threshold r0 02 upper limit of siege step sizestepcmax 106 lower limit of siege step size stepcmin 10minus2 updating number of wolves m 5

4 CWOAAmount of campaign wolves q 5 searching direction h 4 upper limit of searchHmax 15 migration stepsize stepa0 15 raid step size stepb 09 siege threshold r0 02 value of siege step size stepc0 16 updating

amount of wolves m 5

5 ASGS-CWOA Migration step-size stepa0 15 upper limit of siege step-size stepcmax 1e6 lower limit of siege step sizestep cmin 1eminus 40 upper limit number of iteration T 600 number of wolf population N 50

Table 2 Test functions [25]

Order Function Expression Dimension Range Optimum1 Matyas F1 026times (x2

1 +x22)minus 048times x1 times x2 2 [minus10 10] min f 0

2 Easom F2 minuscos (x1)times cos (x2)times exp[minus (x1minus π)2 minus (x2minus π)2] 2 [minus100 100] min f minus13 Sumsquares F3 1113936

Di1 ilowast x2

i 10 [minus15 15] min f 0

4 Sphere F4 1113936Di1 x2

i 30 [minus15 15] min f 05 Eggcrate F5 x2

1 + x22 + 25times (sin2 x1 + sin2 x2) 2 [minusπ π] min f 0

6 Six hump camelback F6 4times x1 minus 21x4

1 + (13)times x61 + x1 times x2 minus 4times x2

2 + 4times x42 2 [minus5 5] min

f minus103167 Bohachevsky3 F7 x2

1 + 2times x22 minus 03times cos (3πx1 + 4πx2) + 03 2 [minus100 100] min f 0

8 Bridge F8 (sinx21 + x2

2

1113969)

x21 + x2

2

1113969minus 07129 2 [minus15 15] max f 30054

9 Booth F9 (x1 + 2times x2 minus 7)2 + (2times x1 + x2 minus 5)2 2 [minus10 10] min f 010 Bohachevsky1 F10 x2

1 + 2x22 minus 03times cos (3πx1)minus 04times cos (4πx2) + 07 2 [minus100 100] min f 0

11 Ackley F11 minus20times exp (minus02times

(1D) 1113936Di1 x2

i

1113969

)minus exp((1D) 1113936

Di1 cos 2 πxi) + 20 + e

6 [minus15 15] min f 0

12 Quadric F12 1113936Di1 (1113936

ik1 xk)2 10 [minus15 15] min f 0

Table 3 Experimental results

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F1

1 364Eminus 12 283Eminus 08 352Eminus 10 283Eminus 09 488 040382 414Eminus 21 326Eminus 14 115Eminus 15 387Eminus 15 591 042823 479Eminus 26 495Eminus 21 258Eminus 22 579Eminus 22 585 240954 104Eminus 119 169Eminus 12 585Eminus 24 213Eminus 13 295 042665 0 0 0 0 2744 010806

F2

1 minus1 minus000008110 minus097 01714 153 003752 minus1 minus1 minus1 324Eminus 06 583 041963 minus1 minus1 minus1 0 507 191114 minus1 minus1 minus1 0 72 003215 minus1 0 minus002 00196 59262 27036

F3

1 850Eminus 07 161Eminus 05 277Eminus 06 335Eminus 06 593 143962 000017021 00114 00023 00022 595 047923 760Eminus 07 87485 14802 15621 600 483194 561Eminus 07 149Eminus 05 107Eminus 06 874Eminus 06 600 414535 0 0 0 0 1 102205

F4

1 0004 00331 00127 00055 592 319022 00351 01736 00902 00292 594 048683 35579 103152 80497 12048 600 147424 000001114 00098 00012 00048 600 084315 0 0 0 0 1 100834

8 Computational Intelligence and Neuroscience

Finally the mean time spent by ASGS-CWOA are smallerthan the othersrsquo on F1 F5 F6 and F11 as shown in Figures 7(a)7(e) 7(f) and 7(k) respectively On F7 F9 and F10 the fivealgorithms spent time roughly in the same order of magnitudeand ASGS-CWOA has better performance than GA PSO andLWPS on F7 and F9 as shown in Figures 7(g) and 7(i) Un-fortunately ASGS-CWOA spent most time on F2 F3 F4 F8and F12 yet it is a comfort that the times spent byASGS-CWOAare not too much to be unacceptable is phenomenon con-forms to a philosophical truth that nothing is perfect and flawlessin the world and ASGS-CWOA is without exception Ac-cordingly in general ASGS-CWOA has a better speed ofconvergence e detailed is shown in Figure 7

242 Supplementary Experiments In order to further verifythe performance of the new proposed algorithm supplementary

experiments are conducted on CEC-2014 (IEEE Congress onEvolutionary Computation 2014) testing functions [38] detailedin Table 4 It is different with the above 12 testing functions thatCEC2014 testing functions are conducted on a computerequipment with Win7-32 bit Matlab 2014a CPU (AMD A6-3400M) and RAM (40GB) due to the match between the givencec14_funcmexw32 andwindows 32 bit system not Linux 64 bit

From Table 5 and Figure 8(a) obviously it can be seenthat new proposed algorithm has better performance thanGA and LWPS in terms of ldquooptimal valuerdquo which meansASGS-CWOA has better performance in finding the globaloptima From Figures 8(b)ndash8(d) it can be seen that the newproposed algorithm has best performance in terms of ldquoworstvaluerdquo ldquoaverage valuerdquo and ldquostandard deviationrdquo in otherwords the new proposed algorithm has best stability androbustness From Figure 8(e) the proportion of ASGS-

Table 3 Continued

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F5

1 467Eminus 10 467Eminus 10 467Eminus 10 312Eminus 25 73 000852 920Eminus 22 666Eminus 15 134Eminus 16 678Eminus 16 594 042983 123Eminus 23 176Eminus 19 156Eminus 20 260Eminus 20 585 241124 375Eminus 136 131Eminus 19 428Eminus 22 224Eminus 11 195 021695 0 0 0 0 101 106Eminus 03

F6

1 minus10316 minus10316 minus10316 179Eminus 15 68 000772 minus10316 minus10316 minus10316 147Eminus 15 578 045973 minus10316 minus10316 minus10316 152Eminus 15 519 247274 minus10316 minus10316 minus10316 125Eminus 15 157 018235 minus10316 minus10316 minus10316 560Eminus 11 332 0021855

F7

1 407Eminus 08 164Eminus 06 317Eminus 07 501Eminus 07 437 03272 278Eminus 16 260Eminus 10 885Eminus 12 310Eminus 11 590 045513 0 167Eminus 16 777Eminus 18 250Eminus 17 556 177454 0 169Eminus 11 636Eminus 15 236Eminus 11 139 007935 0 0 0 0 2406 01064

F8

1 30054 27052 29787 00629 69 000792 30054 30054 30054 453Eminus 16 550 039463 30054 30038 30053 000016125 399 129644 30054 30054 30054 941Eminus 11 67 003075 30054 30054 30054 966Eminus 30 600 30412

F9

1 455Eminus 11 368Eminus 09 891Eminus 11 367Eminus 10 271 012762 247Eminus 20 191Eminus 11 197Eminus 13 191Eminus 12 591 045253 605Eminus 24 312Eminus 19 270Eminus 20 548Eminus 20 580 196064 0 315Eminus 11 146Eminus 22 511Eminus 12 132 007385 0 0 0 0 2686 011364

F10

1 436Eminus 07 04699 00047 0047 109 001972 0 365Eminus 12 112Eminus 13 403Eminus 13 588 045643 0 0 0 0 539 167364 0 0 0 0 78 00255 0 0 0 0 18488 086886

F11

1 15851 15851 15851 378Eminus 07 434 057882 15934 1594 15935 00000826 595 049983 155Eminus 06 21015 07618 05833 594 17894 921Eminus 07 000018069 520Eminus 05 41549ndash5 535 108225 0 0 0 0 1 019021

F12

1 20399 times Eminus 4 00277 00056 00059 597 15772 00151 02574 00863 00552 592 056083 0139 16442 09017 03516 600 211694 428Eminus 08 00214 000038043 00013 600 197285 0 0 0 0 1 74354

Computational Intelligence and Neuroscience 9

CWOA is better than PSO LWPS and CWOA and it meansthe new proposed ASGS-CWOA has advantages in timeperformance

In a word ASGS-CWOA has good optimization qualitystability advantage on performance of iteration numberand speed of convergence

3 Results and Discussion

eoretical research and experimental results reveal thatcompared with traditional genetic algorithm particle swarmoptimization leading wolf pack algorithm and chaotic wolf

optimization algorithm ASGS-CWOA has better globaloptimization accuracy fast convergence speed and highrobustness under the same conditions

In fact the strategy of ASGS greatly strengthens the localexploitation power of the original algorithm which makes iteasier for the algorithm to find the global optimum the strategyof ORM and SDUA effectively enhances the global explorationpower whichmakes the algorithm not easy to fall into the localoptimum and thus easier to find the global optimum

Focusing on Tables 3 and 5 and Figures 6 and 7 abovecompared with the four state-of-the-art algorithms ASGS-CWOA is effective and efficient in most of terms on

1 2 3 4 5

Stan

dard

dev

iatio

n

Algorithm order

Matyas

0E + 00

(a)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Easom

0E + 00

2E ndash 02

4E ndash 02

(b)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sumsquares

0E + 00

5E ndash 06

1E ndash 05

(c)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sphere

0E + 00

3E ndash 02

6E ndash 02

(d)

Stan

dard

dev

iatio

n

Algorithm order51 2 3 4

Execrate

0E + 00

(e)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

(f)

Algorithm order1 2 3 4 5

Bohachevsky3

Stan

dard

dev

iatio

n

0E + 00

(g)

Algorithm order1 2 3 4 5

Bridge

Stan

dard

dev

iatio

n

0E + 00

(h)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Booth

0E + 00

(i)

Algorithm order1 2 3 4 5

Bohachevsky1

Stan

dard

dev

iatio

n

0E + 00

4E ndash 13

8E ndash 13

(j)

Algorithm order1 2 3 4 5

Ackley

Stan

dard

dev

iatio

n

0E + 00

4E ndash 07

8E ndash 07

(k)

Algorithm order1 2 3 54

Quadric

Stan

dard

dev

iatio

n

0E + 00

4E ndash 03

8E ndash 03

(l)

Figure 6 Histograms of the SD about the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g) ndashF7 (h)ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12 (red means the value is out of upper limit of the chart)

10 Computational Intelligence and Neuroscience

Mea

n tim

e

Matyas

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(a)

Mea

n tim

e

Easom

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(b)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sumsquares

00E + 00

60E + 00

12E + 01

(c)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sphere

00E + 00

50E + 00

10E + 01

(d)

Mea

n tim

e

Algorithm order1 2 3 4 5

Execrate

0E + 00

1E + 00

2E + 00

3E + 00

(e)

Mea

n tim

e

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

1E + 00

2E + 00

3E + 00

(f )

Mea

n tim

e

Algorithm order1 2 3 4 5

Bohachevsky3

0E + 00

1E + 00

2E + 00

(g)

Mea

n tim

e

Algorithm order1 2 3 4 5

Bridge

0E + 00

1E + 00

2E + 00

3E + 00

(h)

Mea

n tim

e

Algorithm order1 2 3 4 5

Booth

0E + 00

1E + 00

2E + 00

(i)

Mea

n tim

e

1 2 3 4 5Algorithm order

Bohachevsky1

0E + 00

1E + 00

2E + 00

(j)

Mea

n tim

e

1 2 3 4 5Algorithm order

Ackley

0E + 00

1E + 00

2E + 00

3E + 00

(k)

Mea

n tim

e

1 2 3 4 5Algorithm order

Quadric

0E + 00

4E + 00

8E + 00

(l)

Figure 7 Histograms of the mean time spent on the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g)ndashF7 (h) ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12

Table 4 Part of the CECrsquo14 test functions [38]

No Functions Dimension Range Filowast Fi (xlowast)

1 Rotated High Conditioned Elliptic Function 2 [minus100 100] 1002 Rotated Bent Cigar Function 2 [minus100 100] 2003 Rotated Discuss Function 2 [minus100 100] 3004 Shifted and Rotated Rosenbrockrsquos Function 2 [minus100 100] 4005 Shifted and Rotated Ackleyrsquos Function 2 [minus100 100] 5006 Shifted and Rotated Weier Stress Function 2 [minus100 100] 6007 Shifted and Rotated Griewankrsquos Function 2 [minus100 100] 7008 Shifted Rastriginrsquos Function 2 [minus100 100] 8009 Shifted and Rotated Rastriginrsquos Function 2 [minus100 100] 900

Computational Intelligence and Neuroscience 11

Table 4 Continued

No Functions Dimension Range Filowast Fi (xlowast)

10 Shifted Schwefelrsquos Function 2 [minus100 100] 100011 Shifted and Rotated Schwefelrsquos Function 2 [minus100 100] 110012 Shifted and Rotated Katsuura Function 2 [minus100 100] 120013 Shifted and Rotated Happycat Function [6] 2 [minus100 100] 130014 Shifted and Rotated HGBat Function [6] 2 [minus100 100] 140015 Shifted and Rotated Expanded Griewankrsquos Plus Rosenbrockrsquos Function 2 [minus100 100] 150016 Shifted and Rotated Expanded Schafferrsquos F6 Function 2 [minus100 100] 1600

Table 5 Results of supplementary experiments

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

1 CECrsquo14-F1

GA 1049799 3072111 7855763 81199210 5760667 59246PSO 100 7167217 9669491 23642939 600 018536LWPS 104035 2219115 6507181 2761017 600 87827CWOA 1004238 8063862 30346 489991796 600 90239

ASGS-CWOA 100 100 100 0 1706 075354

2 CECrsquo14-F2

GA 2016655 5624136 15117 22028598 5359 55281PSO 2000028 5541629 18773888 25595547 600 01806LWPS 208964 2723770 7699574 47510154 600 92476CWOA 2000115 1815693 4895193 16911157 600 86269

ASGS-CWOA 200 200 200 0 1578 070151

3 CECrsquo14-F3

GA 3161717 170775349 4508479 23875975 5333 57871PSO 3000007 2391641 4469141 10156353 600 018159LWPS 3003407 5796796 88949 11049497 600 92331CWOA 3001111 2196991 7345405 23224581 600 85499

ASGS-CWOA 300 300 300 0 1762 072644

4 CECrsquo14-F4

GA 400 4000911 4000109 00005323 867333 0924PSO 400 400 400 323Eminus 29 600 017534LWPS 400 400 400 590Eminus 17 600 87697CWOA 400 400 400 0 3087 39497

ASGS-CWOA 400 400 400 0 1296 053133

5 CECrsquo14-F5

GA 500 520 5073432 505077 714333 076291PSO 500 5152678 5009217 67366 600 018704LWPS 500 5000006 5000001 149Eminus 08 600 88697CWOA 500 500 500 134Eminus 22 600 85232

ASGS-CWOA 5017851 520 51418 285606 600 298243

6 CECrsquo14-F6

GA 6000001 6009784 600302 010694 679 086812PSO 600 600 600 286Eminus 11 600 13431LWPS 6000001 6004594 6000245 00087421 600 195858CWOA 600 600 600 232Eminus 25 5995667 191092

ASGS-CWOA 600 600 600 0 4516 175578

7 CECrsquo14-F7

GA 700 7014272 7003767 019181 614333 067019PSO 700 7006977 7000213 00089771 600 018424LWPS 700 700323 7000795 00063773 600 85961CWOA 700 7000271 7000053 469Eminus 05 5145667 75231

ASGS-CWOA 700 700 700 0 2382 11578

8 CECrsquo14-F8

GA 800 8019899 8010945 048507 576333 062438PSO 800 8014806 8002342 013654 59426 017554LWPS 800 800995 8001327 011439 600 8486CWOA 800 8019899 8006965 053787 4973667 72666

ASGS-CWOA 800 800995 8002786 019957 46934 218131

9 CECrsquo14-F9

GA 900 9039798 9013929 11615 582667 063184PSO 900 900995 900186 010073 59131 017613LWPS 900 900995 9000995 0089095 600 85478CWOA 900 9019899 9006965 053787 5035333 69673

ASGS-CWOA 900 9039798 9005273 050398 49631 230867

12 Computational Intelligence and Neuroscience

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 9: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

Finally the mean time spent by ASGS-CWOA are smallerthan the othersrsquo on F1 F5 F6 and F11 as shown in Figures 7(a)7(e) 7(f) and 7(k) respectively On F7 F9 and F10 the fivealgorithms spent time roughly in the same order of magnitudeand ASGS-CWOA has better performance than GA PSO andLWPS on F7 and F9 as shown in Figures 7(g) and 7(i) Un-fortunately ASGS-CWOA spent most time on F2 F3 F4 F8and F12 yet it is a comfort that the times spent byASGS-CWOAare not too much to be unacceptable is phenomenon con-forms to a philosophical truth that nothing is perfect and flawlessin the world and ASGS-CWOA is without exception Ac-cordingly in general ASGS-CWOA has a better speed ofconvergence e detailed is shown in Figure 7

242 Supplementary Experiments In order to further verifythe performance of the new proposed algorithm supplementary

experiments are conducted on CEC-2014 (IEEE Congress onEvolutionary Computation 2014) testing functions [38] detailedin Table 4 It is different with the above 12 testing functions thatCEC2014 testing functions are conducted on a computerequipment with Win7-32 bit Matlab 2014a CPU (AMD A6-3400M) and RAM (40GB) due to the match between the givencec14_funcmexw32 andwindows 32 bit system not Linux 64 bit

From Table 5 and Figure 8(a) obviously it can be seenthat new proposed algorithm has better performance thanGA and LWPS in terms of ldquooptimal valuerdquo which meansASGS-CWOA has better performance in finding the globaloptima From Figures 8(b)ndash8(d) it can be seen that the newproposed algorithm has best performance in terms of ldquoworstvaluerdquo ldquoaverage valuerdquo and ldquostandard deviationrdquo in otherwords the new proposed algorithm has best stability androbustness From Figure 8(e) the proportion of ASGS-

Table 3 Continued

Function Order Best value Worst value Average value Standard deviation Mean iteration Average time

F5

1 467Eminus 10 467Eminus 10 467Eminus 10 312Eminus 25 73 000852 920Eminus 22 666Eminus 15 134Eminus 16 678Eminus 16 594 042983 123Eminus 23 176Eminus 19 156Eminus 20 260Eminus 20 585 241124 375Eminus 136 131Eminus 19 428Eminus 22 224Eminus 11 195 021695 0 0 0 0 101 106Eminus 03

F6

1 minus10316 minus10316 minus10316 179Eminus 15 68 000772 minus10316 minus10316 minus10316 147Eminus 15 578 045973 minus10316 minus10316 minus10316 152Eminus 15 519 247274 minus10316 minus10316 minus10316 125Eminus 15 157 018235 minus10316 minus10316 minus10316 560Eminus 11 332 0021855

F7

1 407Eminus 08 164Eminus 06 317Eminus 07 501Eminus 07 437 03272 278Eminus 16 260Eminus 10 885Eminus 12 310Eminus 11 590 045513 0 167Eminus 16 777Eminus 18 250Eminus 17 556 177454 0 169Eminus 11 636Eminus 15 236Eminus 11 139 007935 0 0 0 0 2406 01064

F8

1 30054 27052 29787 00629 69 000792 30054 30054 30054 453Eminus 16 550 039463 30054 30038 30053 000016125 399 129644 30054 30054 30054 941Eminus 11 67 003075 30054 30054 30054 966Eminus 30 600 30412

F9

1 455Eminus 11 368Eminus 09 891Eminus 11 367Eminus 10 271 012762 247Eminus 20 191Eminus 11 197Eminus 13 191Eminus 12 591 045253 605Eminus 24 312Eminus 19 270Eminus 20 548Eminus 20 580 196064 0 315Eminus 11 146Eminus 22 511Eminus 12 132 007385 0 0 0 0 2686 011364

F10

1 436Eminus 07 04699 00047 0047 109 001972 0 365Eminus 12 112Eminus 13 403Eminus 13 588 045643 0 0 0 0 539 167364 0 0 0 0 78 00255 0 0 0 0 18488 086886

F11

1 15851 15851 15851 378Eminus 07 434 057882 15934 1594 15935 00000826 595 049983 155Eminus 06 21015 07618 05833 594 17894 921Eminus 07 000018069 520Eminus 05 41549ndash5 535 108225 0 0 0 0 1 019021

F12

1 20399 times Eminus 4 00277 00056 00059 597 15772 00151 02574 00863 00552 592 056083 0139 16442 09017 03516 600 211694 428Eminus 08 00214 000038043 00013 600 197285 0 0 0 0 1 74354

Computational Intelligence and Neuroscience 9

CWOA is better than PSO LWPS and CWOA and it meansthe new proposed ASGS-CWOA has advantages in timeperformance

In a word ASGS-CWOA has good optimization qualitystability advantage on performance of iteration numberand speed of convergence

3 Results and Discussion

eoretical research and experimental results reveal thatcompared with traditional genetic algorithm particle swarmoptimization leading wolf pack algorithm and chaotic wolf

optimization algorithm ASGS-CWOA has better globaloptimization accuracy fast convergence speed and highrobustness under the same conditions

In fact the strategy of ASGS greatly strengthens the localexploitation power of the original algorithm which makes iteasier for the algorithm to find the global optimum the strategyof ORM and SDUA effectively enhances the global explorationpower whichmakes the algorithm not easy to fall into the localoptimum and thus easier to find the global optimum

Focusing on Tables 3 and 5 and Figures 6 and 7 abovecompared with the four state-of-the-art algorithms ASGS-CWOA is effective and efficient in most of terms on

1 2 3 4 5

Stan

dard

dev

iatio

n

Algorithm order

Matyas

0E + 00

(a)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Easom

0E + 00

2E ndash 02

4E ndash 02

(b)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sumsquares

0E + 00

5E ndash 06

1E ndash 05

(c)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sphere

0E + 00

3E ndash 02

6E ndash 02

(d)

Stan

dard

dev

iatio

n

Algorithm order51 2 3 4

Execrate

0E + 00

(e)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

(f)

Algorithm order1 2 3 4 5

Bohachevsky3

Stan

dard

dev

iatio

n

0E + 00

(g)

Algorithm order1 2 3 4 5

Bridge

Stan

dard

dev

iatio

n

0E + 00

(h)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Booth

0E + 00

(i)

Algorithm order1 2 3 4 5

Bohachevsky1

Stan

dard

dev

iatio

n

0E + 00

4E ndash 13

8E ndash 13

(j)

Algorithm order1 2 3 4 5

Ackley

Stan

dard

dev

iatio

n

0E + 00

4E ndash 07

8E ndash 07

(k)

Algorithm order1 2 3 54

Quadric

Stan

dard

dev

iatio

n

0E + 00

4E ndash 03

8E ndash 03

(l)

Figure 6 Histograms of the SD about the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g) ndashF7 (h)ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12 (red means the value is out of upper limit of the chart)

10 Computational Intelligence and Neuroscience

Mea

n tim

e

Matyas

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(a)

Mea

n tim

e

Easom

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(b)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sumsquares

00E + 00

60E + 00

12E + 01

(c)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sphere

00E + 00

50E + 00

10E + 01

(d)

Mea

n tim

e

Algorithm order1 2 3 4 5

Execrate

0E + 00

1E + 00

2E + 00

3E + 00

(e)

Mea

n tim

e

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

1E + 00

2E + 00

3E + 00

(f )

Mea

n tim

e

Algorithm order1 2 3 4 5

Bohachevsky3

0E + 00

1E + 00

2E + 00

(g)

Mea

n tim

e

Algorithm order1 2 3 4 5

Bridge

0E + 00

1E + 00

2E + 00

3E + 00

(h)

Mea

n tim

e

Algorithm order1 2 3 4 5

Booth

0E + 00

1E + 00

2E + 00

(i)

Mea

n tim

e

1 2 3 4 5Algorithm order

Bohachevsky1

0E + 00

1E + 00

2E + 00

(j)

Mea

n tim

e

1 2 3 4 5Algorithm order

Ackley

0E + 00

1E + 00

2E + 00

3E + 00

(k)

Mea

n tim

e

1 2 3 4 5Algorithm order

Quadric

0E + 00

4E + 00

8E + 00

(l)

Figure 7 Histograms of the mean time spent on the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g)ndashF7 (h) ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12

Table 4 Part of the CECrsquo14 test functions [38]

No Functions Dimension Range Filowast Fi (xlowast)

1 Rotated High Conditioned Elliptic Function 2 [minus100 100] 1002 Rotated Bent Cigar Function 2 [minus100 100] 2003 Rotated Discuss Function 2 [minus100 100] 3004 Shifted and Rotated Rosenbrockrsquos Function 2 [minus100 100] 4005 Shifted and Rotated Ackleyrsquos Function 2 [minus100 100] 5006 Shifted and Rotated Weier Stress Function 2 [minus100 100] 6007 Shifted and Rotated Griewankrsquos Function 2 [minus100 100] 7008 Shifted Rastriginrsquos Function 2 [minus100 100] 8009 Shifted and Rotated Rastriginrsquos Function 2 [minus100 100] 900

Computational Intelligence and Neuroscience 11

Table 4 Continued

No Functions Dimension Range Filowast Fi (xlowast)

10 Shifted Schwefelrsquos Function 2 [minus100 100] 100011 Shifted and Rotated Schwefelrsquos Function 2 [minus100 100] 110012 Shifted and Rotated Katsuura Function 2 [minus100 100] 120013 Shifted and Rotated Happycat Function [6] 2 [minus100 100] 130014 Shifted and Rotated HGBat Function [6] 2 [minus100 100] 140015 Shifted and Rotated Expanded Griewankrsquos Plus Rosenbrockrsquos Function 2 [minus100 100] 150016 Shifted and Rotated Expanded Schafferrsquos F6 Function 2 [minus100 100] 1600

Table 5 Results of supplementary experiments

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

1 CECrsquo14-F1

GA 1049799 3072111 7855763 81199210 5760667 59246PSO 100 7167217 9669491 23642939 600 018536LWPS 104035 2219115 6507181 2761017 600 87827CWOA 1004238 8063862 30346 489991796 600 90239

ASGS-CWOA 100 100 100 0 1706 075354

2 CECrsquo14-F2

GA 2016655 5624136 15117 22028598 5359 55281PSO 2000028 5541629 18773888 25595547 600 01806LWPS 208964 2723770 7699574 47510154 600 92476CWOA 2000115 1815693 4895193 16911157 600 86269

ASGS-CWOA 200 200 200 0 1578 070151

3 CECrsquo14-F3

GA 3161717 170775349 4508479 23875975 5333 57871PSO 3000007 2391641 4469141 10156353 600 018159LWPS 3003407 5796796 88949 11049497 600 92331CWOA 3001111 2196991 7345405 23224581 600 85499

ASGS-CWOA 300 300 300 0 1762 072644

4 CECrsquo14-F4

GA 400 4000911 4000109 00005323 867333 0924PSO 400 400 400 323Eminus 29 600 017534LWPS 400 400 400 590Eminus 17 600 87697CWOA 400 400 400 0 3087 39497

ASGS-CWOA 400 400 400 0 1296 053133

5 CECrsquo14-F5

GA 500 520 5073432 505077 714333 076291PSO 500 5152678 5009217 67366 600 018704LWPS 500 5000006 5000001 149Eminus 08 600 88697CWOA 500 500 500 134Eminus 22 600 85232

ASGS-CWOA 5017851 520 51418 285606 600 298243

6 CECrsquo14-F6

GA 6000001 6009784 600302 010694 679 086812PSO 600 600 600 286Eminus 11 600 13431LWPS 6000001 6004594 6000245 00087421 600 195858CWOA 600 600 600 232Eminus 25 5995667 191092

ASGS-CWOA 600 600 600 0 4516 175578

7 CECrsquo14-F7

GA 700 7014272 7003767 019181 614333 067019PSO 700 7006977 7000213 00089771 600 018424LWPS 700 700323 7000795 00063773 600 85961CWOA 700 7000271 7000053 469Eminus 05 5145667 75231

ASGS-CWOA 700 700 700 0 2382 11578

8 CECrsquo14-F8

GA 800 8019899 8010945 048507 576333 062438PSO 800 8014806 8002342 013654 59426 017554LWPS 800 800995 8001327 011439 600 8486CWOA 800 8019899 8006965 053787 4973667 72666

ASGS-CWOA 800 800995 8002786 019957 46934 218131

9 CECrsquo14-F9

GA 900 9039798 9013929 11615 582667 063184PSO 900 900995 900186 010073 59131 017613LWPS 900 900995 9000995 0089095 600 85478CWOA 900 9019899 9006965 053787 5035333 69673

ASGS-CWOA 900 9039798 9005273 050398 49631 230867

12 Computational Intelligence and Neuroscience

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 10: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

CWOA is better than PSO LWPS and CWOA and it meansthe new proposed ASGS-CWOA has advantages in timeperformance

In a word ASGS-CWOA has good optimization qualitystability advantage on performance of iteration numberand speed of convergence

3 Results and Discussion

eoretical research and experimental results reveal thatcompared with traditional genetic algorithm particle swarmoptimization leading wolf pack algorithm and chaotic wolf

optimization algorithm ASGS-CWOA has better globaloptimization accuracy fast convergence speed and highrobustness under the same conditions

In fact the strategy of ASGS greatly strengthens the localexploitation power of the original algorithm which makes iteasier for the algorithm to find the global optimum the strategyof ORM and SDUA effectively enhances the global explorationpower whichmakes the algorithm not easy to fall into the localoptimum and thus easier to find the global optimum

Focusing on Tables 3 and 5 and Figures 6 and 7 abovecompared with the four state-of-the-art algorithms ASGS-CWOA is effective and efficient in most of terms on

1 2 3 4 5

Stan

dard

dev

iatio

n

Algorithm order

Matyas

0E + 00

(a)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Easom

0E + 00

2E ndash 02

4E ndash 02

(b)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sumsquares

0E + 00

5E ndash 06

1E ndash 05

(c)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Sphere

0E + 00

3E ndash 02

6E ndash 02

(d)

Stan

dard

dev

iatio

n

Algorithm order51 2 3 4

Execrate

0E + 00

(e)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

(f)

Algorithm order1 2 3 4 5

Bohachevsky3

Stan

dard

dev

iatio

n

0E + 00

(g)

Algorithm order1 2 3 4 5

Bridge

Stan

dard

dev

iatio

n

0E + 00

(h)

Stan

dard

dev

iatio

n

Algorithm order1 2 3 4 5

Booth

0E + 00

(i)

Algorithm order1 2 3 4 5

Bohachevsky1

Stan

dard

dev

iatio

n

0E + 00

4E ndash 13

8E ndash 13

(j)

Algorithm order1 2 3 4 5

Ackley

Stan

dard

dev

iatio

n

0E + 00

4E ndash 07

8E ndash 07

(k)

Algorithm order1 2 3 54

Quadric

Stan

dard

dev

iatio

n

0E + 00

4E ndash 03

8E ndash 03

(l)

Figure 6 Histograms of the SD about the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g) ndashF7 (h)ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12 (red means the value is out of upper limit of the chart)

10 Computational Intelligence and Neuroscience

Mea

n tim

e

Matyas

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(a)

Mea

n tim

e

Easom

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(b)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sumsquares

00E + 00

60E + 00

12E + 01

(c)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sphere

00E + 00

50E + 00

10E + 01

(d)

Mea

n tim

e

Algorithm order1 2 3 4 5

Execrate

0E + 00

1E + 00

2E + 00

3E + 00

(e)

Mea

n tim

e

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

1E + 00

2E + 00

3E + 00

(f )

Mea

n tim

e

Algorithm order1 2 3 4 5

Bohachevsky3

0E + 00

1E + 00

2E + 00

(g)

Mea

n tim

e

Algorithm order1 2 3 4 5

Bridge

0E + 00

1E + 00

2E + 00

3E + 00

(h)

Mea

n tim

e

Algorithm order1 2 3 4 5

Booth

0E + 00

1E + 00

2E + 00

(i)

Mea

n tim

e

1 2 3 4 5Algorithm order

Bohachevsky1

0E + 00

1E + 00

2E + 00

(j)

Mea

n tim

e

1 2 3 4 5Algorithm order

Ackley

0E + 00

1E + 00

2E + 00

3E + 00

(k)

Mea

n tim

e

1 2 3 4 5Algorithm order

Quadric

0E + 00

4E + 00

8E + 00

(l)

Figure 7 Histograms of the mean time spent on the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g)ndashF7 (h) ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12

Table 4 Part of the CECrsquo14 test functions [38]

No Functions Dimension Range Filowast Fi (xlowast)

1 Rotated High Conditioned Elliptic Function 2 [minus100 100] 1002 Rotated Bent Cigar Function 2 [minus100 100] 2003 Rotated Discuss Function 2 [minus100 100] 3004 Shifted and Rotated Rosenbrockrsquos Function 2 [minus100 100] 4005 Shifted and Rotated Ackleyrsquos Function 2 [minus100 100] 5006 Shifted and Rotated Weier Stress Function 2 [minus100 100] 6007 Shifted and Rotated Griewankrsquos Function 2 [minus100 100] 7008 Shifted Rastriginrsquos Function 2 [minus100 100] 8009 Shifted and Rotated Rastriginrsquos Function 2 [minus100 100] 900

Computational Intelligence and Neuroscience 11

Table 4 Continued

No Functions Dimension Range Filowast Fi (xlowast)

10 Shifted Schwefelrsquos Function 2 [minus100 100] 100011 Shifted and Rotated Schwefelrsquos Function 2 [minus100 100] 110012 Shifted and Rotated Katsuura Function 2 [minus100 100] 120013 Shifted and Rotated Happycat Function [6] 2 [minus100 100] 130014 Shifted and Rotated HGBat Function [6] 2 [minus100 100] 140015 Shifted and Rotated Expanded Griewankrsquos Plus Rosenbrockrsquos Function 2 [minus100 100] 150016 Shifted and Rotated Expanded Schafferrsquos F6 Function 2 [minus100 100] 1600

Table 5 Results of supplementary experiments

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

1 CECrsquo14-F1

GA 1049799 3072111 7855763 81199210 5760667 59246PSO 100 7167217 9669491 23642939 600 018536LWPS 104035 2219115 6507181 2761017 600 87827CWOA 1004238 8063862 30346 489991796 600 90239

ASGS-CWOA 100 100 100 0 1706 075354

2 CECrsquo14-F2

GA 2016655 5624136 15117 22028598 5359 55281PSO 2000028 5541629 18773888 25595547 600 01806LWPS 208964 2723770 7699574 47510154 600 92476CWOA 2000115 1815693 4895193 16911157 600 86269

ASGS-CWOA 200 200 200 0 1578 070151

3 CECrsquo14-F3

GA 3161717 170775349 4508479 23875975 5333 57871PSO 3000007 2391641 4469141 10156353 600 018159LWPS 3003407 5796796 88949 11049497 600 92331CWOA 3001111 2196991 7345405 23224581 600 85499

ASGS-CWOA 300 300 300 0 1762 072644

4 CECrsquo14-F4

GA 400 4000911 4000109 00005323 867333 0924PSO 400 400 400 323Eminus 29 600 017534LWPS 400 400 400 590Eminus 17 600 87697CWOA 400 400 400 0 3087 39497

ASGS-CWOA 400 400 400 0 1296 053133

5 CECrsquo14-F5

GA 500 520 5073432 505077 714333 076291PSO 500 5152678 5009217 67366 600 018704LWPS 500 5000006 5000001 149Eminus 08 600 88697CWOA 500 500 500 134Eminus 22 600 85232

ASGS-CWOA 5017851 520 51418 285606 600 298243

6 CECrsquo14-F6

GA 6000001 6009784 600302 010694 679 086812PSO 600 600 600 286Eminus 11 600 13431LWPS 6000001 6004594 6000245 00087421 600 195858CWOA 600 600 600 232Eminus 25 5995667 191092

ASGS-CWOA 600 600 600 0 4516 175578

7 CECrsquo14-F7

GA 700 7014272 7003767 019181 614333 067019PSO 700 7006977 7000213 00089771 600 018424LWPS 700 700323 7000795 00063773 600 85961CWOA 700 7000271 7000053 469Eminus 05 5145667 75231

ASGS-CWOA 700 700 700 0 2382 11578

8 CECrsquo14-F8

GA 800 8019899 8010945 048507 576333 062438PSO 800 8014806 8002342 013654 59426 017554LWPS 800 800995 8001327 011439 600 8486CWOA 800 8019899 8006965 053787 4973667 72666

ASGS-CWOA 800 800995 8002786 019957 46934 218131

9 CECrsquo14-F9

GA 900 9039798 9013929 11615 582667 063184PSO 900 900995 900186 010073 59131 017613LWPS 900 900995 9000995 0089095 600 85478CWOA 900 9019899 9006965 053787 5035333 69673

ASGS-CWOA 900 9039798 9005273 050398 49631 230867

12 Computational Intelligence and Neuroscience

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 11: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

Mea

n tim

e

Matyas

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(a)

Mea

n tim

e

Easom

Algorithm order1 2 3 4 5

0E + 00

1E + 00

2E + 00

3E + 00

(b)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sumsquares

00E + 00

60E + 00

12E + 01

(c)

Mea

n tim

e

Algorithm order1 2 3 4 5

Sphere

00E + 00

50E + 00

10E + 01

(d)

Mea

n tim

e

Algorithm order1 2 3 4 5

Execrate

0E + 00

1E + 00

2E + 00

3E + 00

(e)

Mea

n tim

e

Algorithm order1 2 3 4 5

Six hump camel back

0E + 00

1E + 00

2E + 00

3E + 00

(f )

Mea

n tim

e

Algorithm order1 2 3 4 5

Bohachevsky3

0E + 00

1E + 00

2E + 00

(g)

Mea

n tim

e

Algorithm order1 2 3 4 5

Bridge

0E + 00

1E + 00

2E + 00

3E + 00

(h)

Mea

n tim

e

Algorithm order1 2 3 4 5

Booth

0E + 00

1E + 00

2E + 00

(i)

Mea

n tim

e

1 2 3 4 5Algorithm order

Bohachevsky1

0E + 00

1E + 00

2E + 00

(j)

Mea

n tim

e

1 2 3 4 5Algorithm order

Ackley

0E + 00

1E + 00

2E + 00

3E + 00

(k)

Mea

n tim

e

1 2 3 4 5Algorithm order

Quadric

0E + 00

4E + 00

8E + 00

(l)

Figure 7 Histograms of the mean time spent on the benchmark functions for testing (a) ndashF1 (b) ndashF2 (c) ndashF3 (d) ndashF4 (e) ndashF5 (f ) ndashF6 (g)ndashF7 (h) ndashF8 (i) ndashF9 (j) ndashF10 (k) ndashF11 (l) ndashF12

Table 4 Part of the CECrsquo14 test functions [38]

No Functions Dimension Range Filowast Fi (xlowast)

1 Rotated High Conditioned Elliptic Function 2 [minus100 100] 1002 Rotated Bent Cigar Function 2 [minus100 100] 2003 Rotated Discuss Function 2 [minus100 100] 3004 Shifted and Rotated Rosenbrockrsquos Function 2 [minus100 100] 4005 Shifted and Rotated Ackleyrsquos Function 2 [minus100 100] 5006 Shifted and Rotated Weier Stress Function 2 [minus100 100] 6007 Shifted and Rotated Griewankrsquos Function 2 [minus100 100] 7008 Shifted Rastriginrsquos Function 2 [minus100 100] 8009 Shifted and Rotated Rastriginrsquos Function 2 [minus100 100] 900

Computational Intelligence and Neuroscience 11

Table 4 Continued

No Functions Dimension Range Filowast Fi (xlowast)

10 Shifted Schwefelrsquos Function 2 [minus100 100] 100011 Shifted and Rotated Schwefelrsquos Function 2 [minus100 100] 110012 Shifted and Rotated Katsuura Function 2 [minus100 100] 120013 Shifted and Rotated Happycat Function [6] 2 [minus100 100] 130014 Shifted and Rotated HGBat Function [6] 2 [minus100 100] 140015 Shifted and Rotated Expanded Griewankrsquos Plus Rosenbrockrsquos Function 2 [minus100 100] 150016 Shifted and Rotated Expanded Schafferrsquos F6 Function 2 [minus100 100] 1600

Table 5 Results of supplementary experiments

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

1 CECrsquo14-F1

GA 1049799 3072111 7855763 81199210 5760667 59246PSO 100 7167217 9669491 23642939 600 018536LWPS 104035 2219115 6507181 2761017 600 87827CWOA 1004238 8063862 30346 489991796 600 90239

ASGS-CWOA 100 100 100 0 1706 075354

2 CECrsquo14-F2

GA 2016655 5624136 15117 22028598 5359 55281PSO 2000028 5541629 18773888 25595547 600 01806LWPS 208964 2723770 7699574 47510154 600 92476CWOA 2000115 1815693 4895193 16911157 600 86269

ASGS-CWOA 200 200 200 0 1578 070151

3 CECrsquo14-F3

GA 3161717 170775349 4508479 23875975 5333 57871PSO 3000007 2391641 4469141 10156353 600 018159LWPS 3003407 5796796 88949 11049497 600 92331CWOA 3001111 2196991 7345405 23224581 600 85499

ASGS-CWOA 300 300 300 0 1762 072644

4 CECrsquo14-F4

GA 400 4000911 4000109 00005323 867333 0924PSO 400 400 400 323Eminus 29 600 017534LWPS 400 400 400 590Eminus 17 600 87697CWOA 400 400 400 0 3087 39497

ASGS-CWOA 400 400 400 0 1296 053133

5 CECrsquo14-F5

GA 500 520 5073432 505077 714333 076291PSO 500 5152678 5009217 67366 600 018704LWPS 500 5000006 5000001 149Eminus 08 600 88697CWOA 500 500 500 134Eminus 22 600 85232

ASGS-CWOA 5017851 520 51418 285606 600 298243

6 CECrsquo14-F6

GA 6000001 6009784 600302 010694 679 086812PSO 600 600 600 286Eminus 11 600 13431LWPS 6000001 6004594 6000245 00087421 600 195858CWOA 600 600 600 232Eminus 25 5995667 191092

ASGS-CWOA 600 600 600 0 4516 175578

7 CECrsquo14-F7

GA 700 7014272 7003767 019181 614333 067019PSO 700 7006977 7000213 00089771 600 018424LWPS 700 700323 7000795 00063773 600 85961CWOA 700 7000271 7000053 469Eminus 05 5145667 75231

ASGS-CWOA 700 700 700 0 2382 11578

8 CECrsquo14-F8

GA 800 8019899 8010945 048507 576333 062438PSO 800 8014806 8002342 013654 59426 017554LWPS 800 800995 8001327 011439 600 8486CWOA 800 8019899 8006965 053787 4973667 72666

ASGS-CWOA 800 800995 8002786 019957 46934 218131

9 CECrsquo14-F9

GA 900 9039798 9013929 11615 582667 063184PSO 900 900995 900186 010073 59131 017613LWPS 900 900995 9000995 0089095 600 85478CWOA 900 9019899 9006965 053787 5035333 69673

ASGS-CWOA 900 9039798 9005273 050398 49631 230867

12 Computational Intelligence and Neuroscience

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 12: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

Table 4 Continued

No Functions Dimension Range Filowast Fi (xlowast)

10 Shifted Schwefelrsquos Function 2 [minus100 100] 100011 Shifted and Rotated Schwefelrsquos Function 2 [minus100 100] 110012 Shifted and Rotated Katsuura Function 2 [minus100 100] 120013 Shifted and Rotated Happycat Function [6] 2 [minus100 100] 130014 Shifted and Rotated HGBat Function [6] 2 [minus100 100] 140015 Shifted and Rotated Expanded Griewankrsquos Plus Rosenbrockrsquos Function 2 [minus100 100] 150016 Shifted and Rotated Expanded Schafferrsquos F6 Function 2 [minus100 100] 1600

Table 5 Results of supplementary experiments

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

1 CECrsquo14-F1

GA 1049799 3072111 7855763 81199210 5760667 59246PSO 100 7167217 9669491 23642939 600 018536LWPS 104035 2219115 6507181 2761017 600 87827CWOA 1004238 8063862 30346 489991796 600 90239

ASGS-CWOA 100 100 100 0 1706 075354

2 CECrsquo14-F2

GA 2016655 5624136 15117 22028598 5359 55281PSO 2000028 5541629 18773888 25595547 600 01806LWPS 208964 2723770 7699574 47510154 600 92476CWOA 2000115 1815693 4895193 16911157 600 86269

ASGS-CWOA 200 200 200 0 1578 070151

3 CECrsquo14-F3

GA 3161717 170775349 4508479 23875975 5333 57871PSO 3000007 2391641 4469141 10156353 600 018159LWPS 3003407 5796796 88949 11049497 600 92331CWOA 3001111 2196991 7345405 23224581 600 85499

ASGS-CWOA 300 300 300 0 1762 072644

4 CECrsquo14-F4

GA 400 4000911 4000109 00005323 867333 0924PSO 400 400 400 323Eminus 29 600 017534LWPS 400 400 400 590Eminus 17 600 87697CWOA 400 400 400 0 3087 39497

ASGS-CWOA 400 400 400 0 1296 053133

5 CECrsquo14-F5

GA 500 520 5073432 505077 714333 076291PSO 500 5152678 5009217 67366 600 018704LWPS 500 5000006 5000001 149Eminus 08 600 88697CWOA 500 500 500 134Eminus 22 600 85232

ASGS-CWOA 5017851 520 51418 285606 600 298243

6 CECrsquo14-F6

GA 6000001 6009784 600302 010694 679 086812PSO 600 600 600 286Eminus 11 600 13431LWPS 6000001 6004594 6000245 00087421 600 195858CWOA 600 600 600 232Eminus 25 5995667 191092

ASGS-CWOA 600 600 600 0 4516 175578

7 CECrsquo14-F7

GA 700 7014272 7003767 019181 614333 067019PSO 700 7006977 7000213 00089771 600 018424LWPS 700 700323 7000795 00063773 600 85961CWOA 700 7000271 7000053 469Eminus 05 5145667 75231

ASGS-CWOA 700 700 700 0 2382 11578

8 CECrsquo14-F8

GA 800 8019899 8010945 048507 576333 062438PSO 800 8014806 8002342 013654 59426 017554LWPS 800 800995 8001327 011439 600 8486CWOA 800 8019899 8006965 053787 4973667 72666

ASGS-CWOA 800 800995 8002786 019957 46934 218131

9 CECrsquo14-F9

GA 900 9039798 9013929 11615 582667 063184PSO 900 900995 900186 010073 59131 017613LWPS 900 900995 9000995 0089095 600 85478CWOA 900 9019899 9006965 053787 5035333 69673

ASGS-CWOA 900 9039798 9005273 050398 49631 230867

12 Computational Intelligence and Neuroscience

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 13: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

Table 5 Continued

Order Function Algorithm Optimal value Worst value Average value Standard deviation Average iteration Average time

10 CECrsquo14-F10

GA 1000 1058504 1005603 1404036 635333 068942PSO 1000 1118750 1009425 8083984 59193 019874LWPS 1000 1017069 1000839 91243 600 87552CWOA 1000 1058192 1006533 1458132 5478333 7833

ASGS-CWOA 1063306 1363476 1174777 76510577 600 334225

11 CECrsquo14-F11

GA 1100 1333896 1156947 39367124 622333 067663PSO 1100 1218438 1107871 2283351 59369 019941LWPS 1100 1100624 1100208 0047643 600 90343CWOA 1100 1218438 1105273 4592835 5342 76136

ASGS-CWOA 1100312 1403453 1323981 50005589 600 29089

12 CECrsquo14-F12

GA 1200000 1205058 1200198 084017 741667 090516PSO 1200 1200286 1200008 000091935 59003 10801LWPS 1200001 1201136 1200242 0041088 600 172774CWOA 1200 1200 1200 149Eminus 20 600 176222

ASGS-CWOA 1200 1200016 1200001 925Eminus 06 53761 173256

13 CECrsquo14-F13

GA 1300005 1300188 130002 00012674 692667 074406PSO 1300003 1300303 1300123 00073239 59187 018376LWPS 1300007 1300267 1300089 00039127 600 87117CWOA 1300000 1300132 1300017 00010414 600 90362

ASGS-CWOA 1300013 1300242 1300120 0003075 600 286266

14 CECrsquo14-F14

GA 1400000 1400499 1400264 0035262 579 063369PSO 1400 1400067 1400023 000039174 5918 019591LWPS 1400000 1400175 1400037 00015846 600 90429CWOA 1400000 1400179 1400030 0001558 600 93025

ASGS-CWOA 1400111 1400491 1400457 00032626 600 304334

15 CECrsquo14-F15

GA 1500 1500175 1500073 00033323 547667 059602PSO 1500 1500098 1500012 00004289 59132 018265LWPS 1500 1500139 1500021 00010981 600 74239CWOA 1500 1500098 1500015 00003538 4776667 65924

ASGS-CWOA 1500 1500020 1500001 224Eminus 05 10129 48

16 CECrsquo14-F16

GA 1600 1600746 1600281 0038563 538333 058607PSO 1600 1600356 1600015 0001519 59395 018491LWPS 1600019 1600074 1600024 0000272 600 87907CWOA 1600 1600254 1600058 00031938 5935 90235

ASGS-CWOA 1600 1600019 1600007 869Eminus 05 54555 261972

GA PSO LWPS CWOA ASGS-CWOA

Algorithm name

Optimal value

0

01

02

03

Prop

ortio

n

(a)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Worst value

0

025

05

Prop

ortio

n

(b)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average value

0

025

05

Prop

ortio

n

(c)

Figure 8 Continued

Computational Intelligence and Neuroscience 13

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 14: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

benchmark functions for testing but it has weaker perfor-mance in some terms on some benchmark functions fortesting such as functions 2 3 4 8 and 12 shown re-spectively in Figures 7(b)ndash7(d) 7(h) and 7(l) ASGS-CWOA spent more time on iterations than the four otheralgorithms When D (dimension of the solution space) isvery large this means too much memory space of thecomputer is required to implement the algorithm completelyaccording to the formula and it is impossible to meet thespatial demand growing exponentially which is also a re-flection of its limitations Moreover in the supplementaryexperiments ASGS-CWOA spent more time than the otheralgorithms and its proportion is 0 in the optimal timestatistic index of 16 times detailed in Figure 7(f)

Our future work is to continue perfecting the perfor-mance of ASGS-CWOA in all aspects and to apply it tospecific projects and test its performance

4 Conclusions

To further improve the speed of convergence and optimi-zation accuracy under a same condition this paper proposesan adaptive shrinking grid search chaotic wolf optimizationalgorithm using standard deviation updating amountFirstly ASGS was designed for wolf pack algorithm to en-hance its searching capability through which any wolf canbe the leader wolf and this benefits to improve the proba-bility of finding the global optimization Moreover OMR isused in the wolf pack algorithm to enhance the convergencespeed In addition we take a concept named SDUA toeliminate some poorer wolves and regenerate the sameamount of wolves so as to update wolf population and keepits biodiversity

Data Availability

e tasks in this paper are listed in httpspanbaiducoms1Df_9D04DSKZI2xhUDg8wgQ (password 3fie)

Conflicts of Interest

e authors declare that there are no conflicts of interestregarding the publication of this paper

Acknowledgments

e authors are grateful for peer experts for full support ofthis paper and thank China University of Mining amp Tech-nology Beijing for providing supporting necessary scientificenvironmentis study was funded by the National NaturalScience Foundation of China (Grant no 61873299 and no51761135121) National Key Research and DevelopmentProgram of China (Grant no 2016YFB1001404) and ChinaPostdoctoral Science Foundation (Grant no 2017M620760)

References

[1] M G Hinchey R Sterritt and C Rouff ldquoSwarms and swarmintelligencerdquo Computer vol 40 no 4 pp 111ndash113 2007

[2] C Yang J Ji J Liu and B Yin ldquoBacterial foraging opti-mization using novel chemotaxis and conjugation strategiesrdquoInformation Sciences vol 363 pp 72ndash95 2016

[3] C Yang J Ji J Liu J Liu and B Yin ldquoStructural learning ofbayesian networks by bacterial foraging optimizationrdquo In-ternational Journal of Approximate Reasoning vol 69pp 147ndash167 2016

[4] C Yang J Ji and A Zhang ldquoBFO-FMD bacterial foragingoptimization for functional module detection in proteinndashprotein interaction networksrdquo Soft Computing vol 8 pp 1ndash22 2017

[5] C Yang J Ji and A Zhang ldquoBacterial biological mechanismsfor functional module detection in PPI networksrdquo in Pro-ceedings of the 2016 IEEE International Conference on Bio-informatics and Biomedicine (BIBM) pp 318ndash323 ShenzhenChina December 2017

[6] J Ji C Yang J Liu J Liu and B Yin ldquoA comparative studyon swarm intelligence for structure learning of Bayesiannetworksrdquo Soft Computing vol 21 no 22 pp 1ndash26 2016

[7] J Ji J Liu P Liang and A Zhang ldquoLearning effectiveconnectivity network structure from fmri data based on ar-tificial immune algorithmrdquo PLoS One vol 11 no 4 Article IDe0152600 2016

[8] J Liu J Ji A Zhang and P Liang ldquoAn ant colony opti-mization algorithm for learning brain effective connectivitynetwork from fMRI datardquo in Proceedings of the 2016 IEEEInternational Conference on Bioinformatics and Biomedicine(BIBM) pp 360ndash367 Shenzhen China December 2017

[9] J Kennedy ldquoParticle swarm optimizationrdquo in Proceedings ofthe of 1995 IEEE International Conference on Neural Net-works vol 4 pp 1942ndash1948 Perth Australia November 2011

ASGS-CWOA

GA PSO LWPS CWOA

Algorithm name

Standard deviation

0

025

05Pr

opor

tion

(d)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average iteration

0

035

07

Prop

ortio

n

(e)

ASGS-CWOA

Algorithm name

GA PSO LWPS CWOA

Average time

0

03

06

09

Prop

ortio

n

(f )

Figure 8 (a) e statistical proportion of each algorithm in the number of best ldquooptimal valuerdquo on 16 test functions (b) the one of bestldquoworst valuerdquo (c) the one of best ldquoaverage valuerdquo (d) the one of best ldquostandard deviationrdquo (e) the one of best ldquoaverage iterationrdquo (f ) the oneof best ldquoaverage timerdquo

14 Computational Intelligence and Neuroscience

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15

Page 15: AnAdaptiveShrinkingGridSearchChaoticWolfOptimization ...downloads.hindawi.com/journals/cin/2020/7986982.pdf · wolves’ foraging behavior in order to improve its opti- mizationabilityandefficiency.Duetothereasonsabove,

[10] M Dorigo V Maniezzo and A Colorni ldquoAnt system op-timization by a colony of cooperating agentsrdquo IEEE Trans-actions on Systems Man and Cybernetics Part B (Cybernetics)vol 26 no 1 pp 29ndash41 1996

[11] X L Li Z J Shao and J X Qian ldquoAn optimizing methodbased on autonomous animats fish-swarm algorithmrdquo Sys-tem EngineeringFeory and Practice vol 22 no 11 pp 32ndash382002

[12] K M Passino ldquoBiomimicry of bacterial foraging for dis-tributed optimization and controlrdquo Control Systems IEEEvol 22 no 3 pp 52ndash67 2002

[13] M M Eusuff and K E Lansey ldquoOptimization of waterdistribution network design using the shuffled frog leapingalgorithmrdquo Journal of Water Resources Planning amp Man-agement vol 129 no 3 pp 210ndash225 2015

[14] D Karaboga and B Basturk ldquoA powerful and efficient al-gorithm for numerical function optimization artificial beecolony (abc) algorithmrdquo Journal of Global Optimizationvol 39 no 3 pp 459ndash471 2007

[15] W Deng J Xu and H Zhao ldquoAn improved ant colonyoptimization algorithm based on hybrid strategies forscheduling problemrdquo IEEE Access vol 7 pp 20281ndash202922019

[16] M Srinivas and L M Patnaik ldquoGenetic algorithms a surveyrdquoComputer vol 27 no 6 pp 17ndash26 1994

[17] W Deng H Zhao L Zou G Li X Yang and D Wu ldquoAnovel collaborative optimization algorithm in solving com-plex optimization problemsrdquo Soft Computing vol 21 no 15pp 4387ndash4398 2016

[18] Z Huimin G Weitong D Wu and S Meng ldquoStudy on anadaptive co-evolutionary aco algorithm for complex opti-mization problemsrdquo Symmetry vol 10 no 4 p 104 2018

[19] C Yang X Tu and J Chen ldquoAlgorithm of marriage in honeybees optimization based on the wolf pack searchrdquo in Pro-ceedings of the 2007 International Conference on IntelligentPervasive Computing (IPC 2007) pp 462ndash467 Jeju CitySouth Korea October 2007

[20] H Li and H Wu ldquoAn oppositional wolf pack algorithm forparameter identification of the chaotic systemsrdquo Optikvol 127 no 20 pp 9853ndash9864 2016

[21] Y B Chen M YueSong Y JianQiao S XiaoLong andX Nuo ldquoree-dimensional unmanned aerial vehicle pathplanning using modified wolf pack search algorithmrdquo Neu-rocomputing vol 266 pp 445ndash457 2017

[22] N Yang and D L Guo ldquoSolving polynomial equation rootsbased on wolves algorithmrdquo Science amp Technology Visionno 15 pp 35-36 2016

[23] X B Hui ldquoAn improved wolf pack algorithmrdquo Control ampDecision vol 32 no 7 pp 1163ndash1172 2017

[24] Z Qiang and Y Q Zhou ldquoWolf colony search algorithmbased on leader strategyrdquo Application Research of Computersvol 30 no 9 pp 2629ndash2632 2013

[25] Y Zhu W Jiang X Kong L Quan and Y Zhang ldquoA chaoswolf optimization algorithm with self-adaptive variable step-sizerdquo Aip Advances vol 7 no 10 Article ID 105024 2017

[26] S Mirjalili S M Mirjalili and A Lewis ldquoGrey wolf opti-mizerrdquo Advances in Engineering Software vol 69 no 3pp 46ndash61 2014

[27] M Abdo S Kamel M Ebeed J Yu and F Jurado ldquoSolvingnon-smooth optimal power flow problems using a developedgrey wolf optimizerrdquo Energies vol 11 no 7 p 1692 2018

[28] M Mohsen Mohamed A-R Youssef M Ebeed andS Kamel ldquoOptimal planning of renewable distributed gen-eration in distribution systems using grey wolf optimizerrdquo in

Proceedings of the Nineteenth International Middle East PowerSystems Conference (MEPCON) Shibin Al Kawm EgyptDecember 2017

[29] A Ahmed S Kamel and E Mohamed ldquoOptimal reactivepower dispatch considering SSSC using grey wolf algorithmrdquoin Proceedings of the 2016 Eighteenth International MiddleEast Power Systems Conference (MEPCON) Cairo EgyptDecember 2016

[30] E Cuevas M Gonzalez D Zaldivar M Perezcisneros andG Garcıa ldquoAn algorithm for global optimization inspired bycollective animal behaviorrdquo Discrete Dynamics in Nature andSociety vol 2012 Article ID 638275 24 pages 2012

[31] R Oftadeh M J Mahjoob and M Shariatpanahi ldquoA novelmeta-heuristic optimization algorithm inspired by grouphunting of animals hunting searchrdquo Computers amp Mathe-matics with Applications vol 60 no 7 pp 2087ndash2098 2010

[32] G U Qinlong Y Minghai and Z Rui ldquoDesign of PIDcontroller based on mutative scale chaos optimization algo-rithmrdquo Basic Automation vol 10 no 2 pp 149ndash152 2003

[33] W Xu K-J Xu J-P Wu X-L Yu and X-X Yan ldquoPeak-to-peak standard deviation based bubble detection method insodium flow with electromagnetic vortex flowmeterrdquo Reviewof Scientific Instruments vol 90 no 6 Article ID 0651052019

[34] R Paridar M Mozaffarzadeh V Periyasamy et al ldquoVali-dation of delay-multiply-and-standard-deviation weightingfactor for improved photoacoustic imaging of sentinel lymphnoderdquo Journal of Biophotonics vol 12 no 6 Article IDe201800292 2019

[35] H J J Roeykens P De Coster W Jacquet andR J G De Moor ldquoHow standard deviation contributes to thevalidity of a LDF signal a cohort study of 8 years of dentaltraumardquo Lasers in Medical Science vol 34 no 9 pp 1905ndash1916 2019

[36] Y Wang C Zheng H Peng and Q Chen ldquoAn adaptivebeamforming method for ultrasound imaging based on themean-to-standard-deviation factorrdquo Ultrasonics vol 90pp 32ndash41 2018

[37] T G Eschenbach and N A Lewis ldquoRisk standard deviationand expected value when should an individual start socialsecurityrdquoFe Engineering Economist vol 64 no 1 pp 24ndash392019

[38] J J Liang B-Y Qu and P N Suganthan ldquoProblem defi-nitions and evaluation criteria for the CEC 2014 specialsession and competition on single objective real-parameternumerical optimizationrdquo Tech Rep 201311 ComputationalIntelligence Laboratory and Nanyang Technological Uni-versity Singapore 2014

Computational Intelligence and Neuroscience 15