
Applied Mathematics and Computation 219 (2013) 10059–10072

Contents lists available at SciVerse ScienceDirect

Applied Mathematics and Computation

journal homepage: www.elsevier.com/locate/amc

Global best harmony search algorithm with control parameters co-evolution based on PSO and its application to constrained optimal problems

0096-3003/$ - see front matter © 2013 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.amc.2013.03.111

⇑ Corresponding author. Address: East China University of Science and Technology, P.O. Box 293, MeiLong Road No. 130, Shanghai 200237, PR China. E-mail address: [email protected] (X. Yan).

Xunhua Wang, Xuefeng Yan ⇑
Key Laboratory of Advanced Control and Optimization for Chemical Processes of Ministry of Education, East China University of Science and Technology, Shanghai 200237, PR China

Article info

Keywords: Harmony search; Particle swarm optimization; Co-evolution; Self-adaptive control parameter

Abstract

A global best harmony search algorithm with control parameters co-evolution based on particle swarm optimization (PSO-CE-GHS) is proposed. In PSO-CE-GHS, two control parameters, the harmony memory considering rate and the pitch adjusting rate, are encoded as a symbiotic individual attached to each original individual (i.e. harmony vector). Harmony search operators evolve the original population, while PSO co-evolves the symbiotic population. Thus, as the original population evolves, the symbiotic population is dynamically and self-adaptively adjusted, and real-time optimum control parameters are obtained. The proposed PSO-CE-GHS algorithm has been applied to various benchmark functions and constrained optimization problems. The results show that it finds better solutions than HS and its variants.

© 2013 Elsevier Inc. All rights reserved.

1. Introduction

Harmony search (HS) is a meta-heuristic algorithm first proposed by Geem et al. [1]. It imitates the music improvisation process, in which musicians continually experiment and change their instruments' pitches in search of a better state of harmony. HS was first applied to engineering optimization problems by Geem et al. [1]. In the HS algorithm, the solution vector is analogous to the harmony in music, and the local and global search schemes are analogous to the musicians' improvisations. In musical composition, musicians seek a pleasing harmony from the notes stored in their memories, whereas in the optimization process a global optimal solution is sought through the set of values assigned to each design variable. The HS algorithm has been successfully applied to many practical optimization problems, such as parameter estimation of the nonlinear Muskingum model, vehicle routing, bandwidth-delay-constrained least-cost multicast routing, design optimization of water distribution networks, combined heat and power economic dispatch, and others [2–18].

The HS algorithm also has shortcomings. In particular, it is not good at local search, and its control parameters, which have a significant impact on search performance, are hard to determine and tune. Several variants of HS have been proposed to overcome these shortcomings. The IHS [19], proposed by Mahdavi, builds on the basic HS algorithm and uses dynamic values for its control parameters. The GHS [20], presented by Omran and Mahdavi, borrows a concept from swarm intelligence. The SGHS [21], which introduces a self-adaptive control parameter strategy, was



proposed by Pan et al. Experimental results show that these variants perform better than the basic HS. However, they tune the control parameters dynamically with increasing generations, and it remains difficult to obtain optimum control parameter values owing to the randomness of HS and the varying complexity of the problems being optimized; hence the best results are hard to reach for different problems. Therefore, a new variant called global best harmony search algorithm with control parameters co-evolution based on particle swarm optimization (PSO-CE-GHS) is proposed in this paper. In PSO-CE-GHS, the control parameters are encoded as a symbiotic individual attached to each original individual, and PSO is applied to co-evolve the symbiotic population. Thus, with the evolution of the original population, the symbiotic population is dynamically and self-adaptively adjusted and real-time optimum control parameters are obtained for different problems. Experimental results show that the proposed algorithm outperforms the mentioned HS, IHS, GHS and SGHS when they are applied to optimize 16 benchmark functions.

The remainder of the paper is organized as follows. Section 2 briefly reviews HS, IHS, GHS and SGHS. The proposed algorithm is presented in Section 3. Results and analysis of the experiments are presented and discussed in Section 4. In Section 5, the proposed algorithm is applied to two constrained optimization problems. Finally, conclusions are given in Section 6.

2. HS, IHS, GHS and SGHS algorithms

2.1. HS algorithm

In HS, each solution is analogous to a ‘‘harmony’’ and is represented by an n-dimensional real vector, which is measured in terms of the objective function. HS consists of three basic phases: initialization, improvisation of a harmony vector, and updating of the harmony memory (HM). It works as follows.

2.1.1. Initialize the problem and algorithm parameters

The optimization problem is defined as: minimize f(x), x = {x1, x2, ..., xN}, subject to xL ≤ xj ≤ xU (j = 1, 2, ..., N), where N is the number of decision variables, xj is the jth decision variable, and xL and xU are the lower and upper bounds of the decision variables. The HS algorithm parameters are also specified in this step: the harmony memory size (HMS), i.e. the number of solution vectors in the harmony memory; the harmony memory considering rate (HMCR); the pitch adjusting rate (PAR); and the maximum number of generations (NImax).

2.1.2. Initialize the harmony memory (HM)

The HM consists of HMS harmony vectors, each consisting of N decision variables. Each harmony vector xi = {xi1, xi2, ..., xiN} is randomly generated as xij = xL + (xU − xL) × r for i = 1, 2, ..., HMS and j = 1, 2, ..., N, where r is a random number between 0 and 1. The HM matrix is then generated as follows:

    HM = [ x1,1    x1,2    ...  x1,N
           x2,1    x2,2    ...  x2,N
           ...     ...     ...  ...
           xHMS,1  xHMS,2  ...  xHMS,N ]

2.1.3. Improvise a new harmony

A new harmony vector xnew = (xnew,1, xnew,2, ..., xnew,N) is improvised by applying three rules: memory consideration, pitch adjustment and random selection. Each component xnew,j of the new harmony vector is generated by Eq. (1), based on the HMCR defined in step 1:

    xnew,j = { xnew,j ∈ {x1,j, x2,j, ..., xHMS,j}   with probability HMCR
             { xnew,j ∈ Xj                          with probability 1 − HMCR        (1)

in which Xj is the set of possible values for decision variable j, HMCR is the probability of selecting a component from the HM members, and 1 − HMCR is therefore the probability of generating it randomly from the allowed region. If xnew,j is taken from the HM, it is further adjusted according to the pitch adjusting rate (PAR). PAR determines the probability that the component selected from the HM is mutated, and 1 − PAR is the probability of doing nothing. The pitch adjustment for the selected xnew,j is given by Eq. (2):

    xnew,j = { xnew,j ± r × BW   with probability PAR
             { xnew,j            with probability 1 − PAR                            (2)

where r is a uniform random number between 0 and 1 and BW is an arbitrary distance bandwidth.


2.1.4. Update the harmony memory

The new harmony vector replaces the worst harmony in the HM if its fitness (measured in terms of the objective function) is better than that of the worst harmony.

2.1.5. Check the stopping criterion

If the maximum number of generations (NImax) is reached, the computation terminates; otherwise, the improvisation step is repeated. The computational procedure of HS can be summarized as follows [1]:

Step 1: Set the parameters HMS, HMCR, PAR, BW and NImax.
Step 2: Initialize HM and calculate the objective function value of each harmony vector. Set generation counter NI = 0.
Step 3: Improvise a new harmony xnew as follows:
  for (j = 1 to N) do
    if (r1 < HMCR) then
      xnew,j = xa,j where a ∈ (1, 2, ..., HMS)
      if (r2 < PAR) then
        xnew,j = xnew,j ± r3 × BW where r1, r2, r3 ∈ (0, 1)
      endif
    else
      xnew,j = xL + (xU − xL) × r, where r ∈ (0, 1)
    endif
  endfor
Step 4: Update the HM as xworst = xnew if f(xnew) < f(xworst).
Step 5: NI = NI + 1; if NI = NImax, return the best harmony vector xbest in the HM; otherwise go back to Step 3.
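The improvisation step above can be sketched in Python (a minimal sketch; the function and variable names are ours, and a single scalar bound pair is assumed for all variables):

```python
import random

def improvise(hm, hmcr, par, bw, lower, upper):
    """One HS improvisation: memory consideration, pitch adjustment,
    random selection. `hm` is a list of harmony vectors (lists of floats)."""
    n = len(hm[0])
    new = []
    for j in range(n):
        if random.random() < hmcr:            # memory consideration
            x = random.choice(hm)[j]
            if random.random() < par:         # pitch adjustment: x +/- r*BW
                x += random.uniform(-1.0, 1.0) * bw
        else:                                 # random selection
            x = lower + (upper - lower) * random.random()
        new.append(min(max(x, lower), upper))
    return new
```

Clamping the adjusted value back into the bounds is our own addition; the pseudocode above leaves it implicit.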

2.2. Improved harmony search (IHS) algorithm

The IHS algorithm proposed by Mahdavi [19] applies the same memory consideration, pitch adjustment and random selection as the basic HS algorithm, but dynamically updates the values of PAR and BW as follows:

    PAR(NI) = PARmin + (PARmax − PARmin) / NImax × NI                                (3)

where PAR(NI) is the pitch adjusting rate in generation NI, PARmin is the minimum adjusting rate, and PARmax is the maximum adjusting rate.

    BW(NI) = BWmax × exp( ln(BWmin / BWmax) / NImax × NI )                           (4)

where BW(NI) is the distance bandwidth in generation NI, and BWmin and BWmax are the minimum and maximum bandwidths. The IHS algorithm uses dynamic values of PAR and BW to address the shortcoming of the basic HS algorithm, which uses fixed values for them. IHS performed well on several test problems, both in the number of fitness function evaluations required and in the quality of the solutions found, when compared with other (evolutionary and mathematical programming) techniques reported in the literature.

2.3. Global best harmony search (GHS) algorithm

The GHS algorithm proposed by Omran and Mahdavi [20] mirrors a concept of particle swarm optimization: it generates a new harmony vector xnew by making use of the best harmony vector xbest = (xbest,1, xbest,2, ..., xbest,N) in the HM when applying the pitch adjustment rule.

GHS has exactly the same steps as IHS, except that step 3 is modified as follows:

for (j = 1 to N) do
  if (r1 < HMCR) then
    xnew,j = xa,j where a ∈ (1, 2, ..., HMS)
    if (r2 < PAR) then
      xnew,j = xbest,k where k ∈ (1, 2, ..., N) and r1, r2 ∈ (0, 1)
    endif
  else
    xnew,j = xL + (xU − xL) × r, where r ∈ (0, 1)
  endif
endfor

GHS outperformed HS and IHS on most of the ten benchmark functions, and as the dimensionality increases, GHS takes the lead and outperforms the other methods.

2.4. Self-adaptive GHS (SGHS) algorithm

Inspired by the GHS algorithm, a self-adaptive GHS (SGHS) algorithm was presented by Pan et al. [21]. Unlike GHS, the SGHS algorithm employs a new improvisation scheme and an adaptive parameter tuning method.

To inherit good information from xbest, the authors present a modified pitch adjustment rule. In addition, in the memory consideration phase, an improved method is proposed to avoid getting trapped in a locally optimal solution. In a nutshell, the new scheme to improvise a new harmony xnew can be summarized as follows:

for (j = 1 to N) do
  if (r1 < HMCR) then
    xnew,j = xa,j ± r × BW where a ∈ (1, 2, ..., HMS) and r ∈ (0, 1)
    if (r2 < PAR) then
      xnew,j = xbest,j where r1, r2 ∈ (0, 1)
    endif
  else
    xnew,j = xL + (xU − xL) × r, where r ∈ (0, 1)
  endif
endfor

The SGHS algorithm uses self-adaptive HMCR and PAR and a dynamically changing BW. SGHS starts with an HMCR (PAR) value generated from a normal distribution with given mean HMCRm (PARm) and standard deviation. During the evolution, the value associated with each generated harmony that successfully replaces the worst member of the HM is recorded. After a specified number of generations LP, HMCRm (PARm) is recalculated by averaging all the HMCR (PAR) values recorded during this period. With the new mean and the given standard deviation, a new HMCR (PAR) value is produced and used in the subsequent generations, and the procedure is repeated. According to the authors, an appropriate HMCR (PAR) value can thus be gradually learned to suit the particular problem and the particular phase of the search process. To balance the exploration and exploitation of SGHS, the BW value decreases dynamically with increasing generations (NI) as follows:

    BW(NI) = { BWmax − (BWmax − BWmin) / NImax × 2NI   if NI < NImax/2
             { BWmin                                   if NI ≥ NImax/2               (5)

where BWmax and BWmin are the maximum and minimum distance bandwidths, respectively. The computational procedure of the SGHS algorithm can be summarized as follows:

Step 1: Set parameters HMS, LP and NImax.
Step 2: Initialize BWmax, BWmin, HMCRm and PARm.
Step 3: Initialize and evaluate the HM. Set generation counters lp = 1 and NI = 0.
Step 4: Generate HMCR and PAR according to HMCRm and PARm. Yield BW according to BWmax and BWmin.
Step 5: Improvise a new harmony xnew as follows:
  for (j = 1 to N) do
    if (r1 < HMCR) then
      xnew,j = xa,j ± r × BW where a ∈ (1, 2, ..., HMS) and r ∈ (0, 1)
      if (r2 < PAR) then
        xnew,j = xbest,j where r1, r2 ∈ (0, 1)
      endif
    else
      xnew,j = xL + (xU − xL) × r, where r ∈ (0, 1)
    endif
  endfor
Step 6: If f(xnew) < f(xworst), update the HM as xworst = xnew and record the values of HMCR and PAR.
Step 7: If lp = LP, recalculate HMCRm (PARm) according to the recorded HMCR (PAR) values and reset lp = 1; otherwise, lp = lp + 1.
Step 8: NI = NI + 1; if NI = NImax, return the best harmony vector xbest in the HM; otherwise, go back to Step 4.
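The parameter-learning part of SGHS (Steps 4 and 7) can be sketched as follows. This is an illustrative sketch: the standard deviations and the clamping into [0, 1] are our own assumptions, since the paper only states that the standard deviations are given.

```python
import random

def sample_params(hmcr_m, par_m, sd_hmcr=0.01, sd_par=0.05):
    # Step 4: draw HMCR and PAR around the learned means,
    # clamped into [0, 1] so they remain valid probabilities.
    hmcr = min(max(random.gauss(hmcr_m, sd_hmcr), 0.0), 1.0)
    par = min(max(random.gauss(par_m, sd_par), 0.0), 1.0)
    return hmcr, par

def relearn_means(recorded_hmcr, recorded_par, hmcr_m, par_m):
    # Step 7: every LP generations, replace each mean by the average
    # of the values recorded from successful replacements (if any).
    if recorded_hmcr:
        hmcr_m = sum(recorded_hmcr) / len(recorded_hmcr)
    if recorded_par:
        par_m = sum(recorded_par) / len(recorded_par)
    return hmcr_m, par_m
```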

3. Global best harmony search algorithm with control parameters co-evolution based on PSO

In this section, the global best harmony search algorithm with control parameters co-evolution based on PSO is proposed. The main difference between PSO-CE-GHS and SGHS is that another optimization algorithm, the well-known PSO, is employed to dynamically adjust the two control parameters HMCR and PAR along with the harmony search itself. The improvisation scheme of PSO-CE-GHS is the same as that of SGHS.

3.1. Particle swarm optimization algorithm

PSO was first proposed by James Kennedy and Eberhart in 1995, motivated by the social behavior of organisms such as bird flocking and fish schooling [22]. In PSO, each particle keeps track of two ‘‘best’’ values: pbest, the best solution it has achieved so far, and gbest, the best solution obtained so far by any particle in its neighborhood.

Let PSS be the number of particles in the swarm, each having a position xi = (xi1, xi2, ..., xiD) in the search space and a velocity vi = (vi1, vi2, ..., viD). Let pbesti be the best known position of particle i and let gbest be the best known position of the entire swarm. A basic PSO algorithm works as follows:

Step 1: Set parameters PSS, NImax, XL, XU, VMIN, VMAX, w, c1 and c2.
Step 2: Initialization. For each particle i = 1, ..., PSS, initialize the particle's position with a uniformly distributed random vector xi ~ U(XL, XU) and its velocity vi ~ U(VMIN, VMAX). Initialize the particle's best known position to its initial position, pbesti = xi, and set gbest to the best of the pbesti, i = 1, ..., PSS.
Step 3: Update each particle's velocity and position. For each particle i = 1, ..., PSS,

    vi = w × vi + c1 × r1 × (pbesti − xi) + c2 × r2 × (gbest − xi)                   (6)

where r1, r2 ∈ (0, 1), and

    xi = xi + vi                                                                     (7)

Step 4: Update the particle's best known position and the swarm's best known position. For each particle i = 1, ..., PSS, if f(xi) < f(pbesti), set pbesti = xi; if f(pbesti) < f(gbest), set gbest = pbesti.
Step 5: Check the number of generations. If the stopping criterion is satisfied, stop; otherwise, go back to Step 3.
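The five steps above can be sketched as a minimal PSO (an illustrative implementation; the coefficient values, bounds and fixed seed are ours, not the paper's):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, w=0.6, c1=1.2, c2=1.2,
                 lo=-10.0, hi=10.0):
    """Minimal PSO following Eqs. (6)-(7)."""
    random.seed(1)  # fixed seed so the sketch is reproducible
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])   # Eq. (6), cognitive term
                            + c2 * r2 * (gbest[d] - xs[i][d]))     # Eq. (6), social term
                xs[i][d] += vs[i][d]                               # Eq. (7)
            fi = f(xs[i])
            if fi < pbest_f[i]:                                    # Step 4
                pbest[i], pbest_f[i] = list(xs[i]), fi
                if fi < gbest_f:
                    gbest, gbest_f = list(xs[i]), fi
    return gbest, gbest_f
```

On a two-dimensional Sphere function this sketch converges close to the optimum within the default budget.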

3.2. Co-evolution of HMCR and PAR based PSO

HMCR is the probability of choosing a value from the HM, and PAR is the adjusting rate for the pitch chosen from xbest. The choice of HMCR and PAR has a significant influence on the convergence rate, local exploitation ability and global exploration ability of the algorithm, but it is hard to set appropriate values for them during the search process.

To implement self-adaptive adjustment of the control parameters, they can be encoded into the individuals that undergo evolution. Better values of these encoded parameters are expected to produce better individuals, which in turn are more likely to survive and produce offspring and hence to propagate the better parameter values. In PSO-CE-GHS, the two important control parameters, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), form the symbiotic individual attached to each original individual (i.e. harmony vector). Harmony search operators evolve the original population, while PSO co-evolves the symbiotic population, whose symbiotic individuals are also regarded as PSO particles. The procedure of PSO-CE-GHS is as follows.


(1) Initialization. Determine the number of individuals (harmony vectors) in the original population, HMS, and the number of individuals in the symbiotic population, PSS. Generate the original population HM and the symbiotic population PS0, whose individuals, i.e. the HMCR and PAR values, are randomly distributed in the ranges [0.96, 1.0] and [0.90, 1.0], respectively. Determine the maximum number of generations NImax, set the current generation NI = 0, set ETmax (the maximum number of times a group of HMCR and PAR is employed before updating) and the current ET = 1, set the PSO control parameters w, c1 and c2, and initialize the particle velocities vi. Set the current PSO generation G = 0 and the pitch adjustment counter D.

(2) Evolve the original population. For the HM with each symbiotic individual [HMCRi, PARi] from PS^G (i = 1, 2, ..., PSS), implement memory consideration, pitch adjustment and random selection of harmony search as follows:

Step 1: For i = frac(NI/PSS) × PSS, in which frac(·) denotes the decimal fraction of its argument (i.e. i = NI mod PSS), select a symbiotic individual [HMCRi, PARi] from PS^G_i randomly (if i = 1, PS^G_i = PS^G). Use the HM with the selected symbiotic individual [HMCRi, PARi] to generate a new harmony vector x^i_ET (i corresponds to the selected symbiotic individual) as follows.

For j ∈ (1, 2, ..., N), apply memory consideration and random selection to each variable x^i_ET,j:

    x^i_ET,j = { xa,j ± r × BW         if r < HMCRi
               { xL + (xU − xL) × r    otherwise                                     (8)

where a is a random integer between 1 and HMS and r is a random number between 0 and 1.

Pitch adjustment for x^i_ET,j: if x^i_ET,j was generated by memory consideration, it is further adjusted as

    x^i_ET,j = { b           if r < PARi, where b = xbest,j if NI ≤ D and b = xbest,k if NI > D
               { x^i_ET,j    otherwise                                               (9)

where k is a random integer between 1 and N. The pitch adjustment rule xnew,j = xbest,k was proposed in [20], but [21] notes that it may break the building structures in xbest and therefore proposes the rule xnew,j = xbest,j instead. In this paper, the pitch adjustment xnew,j = xbest,j is used for the first D generations; after PSO-CE-GHS has converged, the pitch adjustment xnew,j = xbest,k is used for the remaining generations to obtain better results by breaking the building structures in xbest, which may correspond to a local optimum.

If f(x^i_ET) < f(xworst), update the HM as xworst = x^i_ET.

Step 2: Delete the selected symbiotic individual [HMCRi, PARi] from PS^G_i to generate PS^G_{i+1}.

Step 3: If frac(NI/PSS) > 0, then NI = NI + 1 and go back to Step 1; otherwise, ET = ET + 1.

Step 4: If ET = ETmax + 1, then set ET = 1 and measure each symbiotic individual in terms of the mean fitness of its corresponding new harmony vectors:

    F^G_i = ( Σ_{ET=1}^{ETmax} f(x^i_ET) ) / ETmax                                   (10)

where G is the PSO generation of the symbiotic population; then go to (3). Otherwise, NI = NI + 1 and go back to Step 1.

(3) Employ PSO to co-evolve the symbiotic population and produce PS^{G+1}. Calculate each symbiotic individual's previous best position pbest^G_i and the best position of the symbiotic population gbest^G.

If G = 0, each pbest^0_i (i = 1, 2, ..., PSS) is taken from PS0, F^0_min = F^0_n = min{F^0_i, i = 1, 2, ..., PSS}, and gbest^0 = [HMCR^0_n, PAR^0_n]; otherwise, F^G_min = F^G_k = min{F^G_i, i = 1, 2, ..., PSS} and

    gbest^G = { [HMCR^G_k, PAR^G_k]   if F^G_min < F^{G−1}_min
              { gbest^{G−1}           otherwise                                      (11)

    pbest^G_i = { [HMCR^G_i, PAR^G_i]   if F^G_i < F^{G−1}_i
                { pbest^{G−1}_i         otherwise                                    (12)

HMCR and PAR are co-evolved according to the following formulas.

    v^{G+1}_{1i} = w × v^G_{1i} + c1 × rand × (pbest^G_{1i} − HMCR^G_i) + c2 × rand × (gbest1 − HMCR^G_i)   (13)

    HMCR^{G+1}_i = HMCR^G_i + v^{G+1}_{1i}                                           (14)

    v^{G+1}_{2i} = w × v^G_{2i} + c1 × rand × (pbest^G_{2i} − PAR^G_i) + c2 × rand × (gbest2 − PAR^G_i)     (15)

    PAR^{G+1}_i = PAR^G_i + v^{G+1}_{2i}                                             (16)

where rand ∈ (0, 1), and gbest1 and gbest2 denote the HMCR and PAR components of gbest, respectively.


(4) Repeat steps (2) and (3) until the number of generations reaches NI = NImax.

As the harmony vectors converge to better feasible solutions, HMCR and PAR, being both the control parameters of the HS algorithm and the particles of PSO, converge to more appropriate control parameter values for the harmony search.

3.3. Dynamically changing BW

BW is another important parameter in the harmony search process for continuous optimization problems. A large BW is suitable for global search, while a small BW is suitable for local search, so it is better for BW to be large at the beginning of the run and small at the end. In this paper, a dynamically changing BW similar to that of SGHS is employed:

    BW(NI) = { BWMAX1 − (BWMAX1 − BWMAX2) × NI × 10 / NImax                 if NI < NImax/10
             { BWMAX2 × (NI − NImax/2) / (NImax/10 − NImax/2) + BWMIN       if NImax/10 ≤ NI < NImax/2
             { BWMIN                                                        if NI ≥ NImax/2          (17)

where BWMAX1, BWMAX2 and BWMIN are the gradually decreasing distance bandwidths, respectively, and NI is the current generation.
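Eq. (17) transcribes directly into code (a sketch; the function name is ours):

```python
def bw_schedule(ni, ni_max, bw_max1, bw_max2, bw_min):
    # Piecewise-decreasing distance bandwidth of Eq. (17).
    if ni < ni_max / 10:
        return bw_max1 - (bw_max1 - bw_max2) * ni * 10 / ni_max
    if ni < ni_max / 2:
        return bw_max2 * (ni - ni_max / 2) / (ni_max / 10 - ni_max / 2) + bw_min
    return bw_min
```

The schedule starts at BWMAX1, reaches BWMAX2 at NI = NImax/10, and stays at BWMIN for the second half of the run.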

4. Experimental results and parameter analysis

4.1. Experimental results

This section compares the performance of PSO-CE-GHS with that of the HS, IHS, GHS and SGHS algorithms on a group of 14 benchmark functions, which provide a balance of unimodal and multimodal functions. For each function, the goal is to find the global minimum of f(x).

A. Sphere function, defined as

    min f(x) = Σ_{i=1}^{n} x(i)²,

where the global optimum is x = 0 and f(x) = 0, for −100 ≤ x(i) ≤ 100.

B. Schwefel's problem 2.22, defined as

    min f(x) = Σ_{i=1}^{n} |x(i)| + Π_{i=1}^{n} |x(i)|,

where the global optimum is x = 0 and f(x) = 0, for −10 ≤ x(i) ≤ 10.

C. Rosenbrock function, defined as

    min f(x) = Σ_{i=1}^{n−1} [100(x(i+1) − x(i)²)² + (x(i) − 1)²],

where the global optimum is x = (1, 1, ..., 1) and f(x) = 0, for −30 ≤ x(i) ≤ 30.

D. Step function, defined as

    min f(x) = Σ_{i=1}^{n} (⌊x(i) + 0.5⌋)²,

where the global optimum is x = 0 and f(x) = 0, for −100 ≤ x(i) ≤ 100.

E. Rotated hyper-ellipsoid function, defined as

    min f(x) = Σ_{i=1}^{n} ( Σ_{j=1}^{i} x(j) )²,

where the global optimum is x = 0 and f(x) = 0, for −100 ≤ x(i) ≤ 100.

F. Schwefel's problem 2.26, defined as

    min f(x) = 418.9829 n − Σ_{i=1}^{n} x(i) sin(√|x(i)|),

where the global optimum is x = (420.9687, 420.9687, ..., 420.9687) and f(x) = 0, for −500 ≤ x(i) ≤ 500.

G. Rastrigin function, defined as

    min f(x) = Σ_{i=1}^{n} (x(i)² − 10 cos(2πx(i)) + 10),

where the global optimum is x = 0 and f(x) = 0, for −5.12 ≤ x(i) ≤ 5.12.


H. Ackley's function, defined as

    min f(x) = −20 exp( −0.2 √( (1/n) Σ_{i=1}^{n} x(i)² ) ) − exp( (1/n) Σ_{i=1}^{n} cos(2πx(i)) ) + 20 + e,

where the global optimum is x = 0 and f(x) = 0, for −32 ≤ x(i) ≤ 32.

I. Griewank function, defined as

    min f(x) = (1/4000) Σ_{i=1}^{n} x(i)² − Π_{i=1}^{n} cos( x(i)/√i ) + 1,

where the global optimum is x = 0 and f(x) = 0, for −600 ≤ x(i) ≤ 600.

J. Six-hump camel-back function, defined as

    min f(x) = 4x(1)² − 2.1x(1)⁴ + (1/3)x(1)⁶ + x(1)x(2) − 4x(2)² + 4x(2)⁴,

where the global optimum is x = (0.08983, −0.7126) and f(x) = −1.0316285, for −5 ≤ x(i) ≤ 5.

K. Shifted Sphere function, defined as

    min f(x) = Σ_{i=1}^{n} z(i)² + f_bias1,

where z = x − o, o = {o(1), o(2), ..., o(n)} is the shifted global optimum, x = o and f(x) = f_bias1 = −450, for −100 ≤ x(i) ≤ 100.

L. Shifted Schwefel's problem 1.2, defined as

    min f(x) = Σ_{i=1}^{n} ( Σ_{j=1}^{i} z(j) )² + f_bias2,

where z = x − o, o = {o(1), o(2), ..., o(n)} is the shifted global optimum, x = o and f(x) = f_bias2 = −450, for −100 ≤ x(i) ≤ 100.

M. Shifted Rosenbrock function, defined as

    min f(x) = Σ_{i=1}^{n−1} [100(z(i+1) − z(i)²)² + (z(i) − 1)²] + f_bias6,

where z = x − o + 1, o = {o(1), o(2), ..., o(n)} is the shifted global optimum, x = o and f(x) = f_bias6 = −390, for −100 ≤ x(i) ≤ 100.

N. Shifted Rastrigin function, defined as

    min f(x) = Σ_{i=1}^{n} (z(i)² − 10 cos(2πz(i)) + 10) + f_bias9,

where z = x − o, o = {o(1), o(2), ..., o(n)} is the shifted global optimum, x = o and f(x) = f_bias9 = −330, for −5 ≤ x(i) ≤ 5.
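For reference, a few of the functions above translate directly into code (a sketch of functions A, G, H and I; each evaluates to 0 at its global optimum):

```python
import math

def sphere(x):          # function A
    return sum(v * v for v in x)

def rastrigin(x):       # function G
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def ackley(x):          # function H
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def griewank(x):        # function I
    s = sum(v * v for v in x) / 4000
    p = 1.0
    for i, v in enumerate(x, start=1):
        p *= math.cos(v / math.sqrt(i))
    return s - p + 1
```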

The Sphere, Schwefel's problem 2.22, Rosenbrock, rotated hyper-ellipsoid, shifted Sphere, shifted Schwefel's problem 1.2 and shifted Rosenbrock functions are unimodal, while the Step function is a discontinuous unimodal function. Schwefel's problem 2.26 and the Rastrigin, Ackley, Griewank and shifted Rastrigin functions are difficult multimodal functions whose number of local optima increases exponentially with the problem dimension. The camel-back function is a low-dimensional function with only a few local optima.

In the experiments, each problem is run for 30 independent replications, and the results reported in this section are averages and standard deviations. The settings of the compared HS algorithms and the averages and standard deviations over these 30 replications for dimensions 30 and 100 (except for the two-dimensional six-hump camel-back function) are reported in Tables 1–3, respectively. The results of the HS, IHS and GHS algorithms were obtained from references [20,21]. The statistically significant best solutions are shown in bold (using the z-test with a = 0.05 [20]).

The results in Tables 2 and 3 show that PSO-CE-GHS produces much better results than the variants of HS for most of the test functions.

4.2. Parameter analysis

4.2.1. Analysis of HMCR and PAR co-evolution

The convergence trajectories of HMCR and PAR for the Sphere and six-hump camel-back functions are shown in Figs. 1 and 2, respectively. Note that the Sphere function is unimodal and the six-hump camel-back function is multimodal. Large HMCR and PAR values are appropriate for unimodal functions, and small HMCR and PAR values are appropriate for multimodal functions, because large HMCR and PAR values increase the convergence rate of the

Table 1
Parameter settings for the compared HS algorithms.

Parameter | HS     | IHS                              | GHS                            | SGHS                              | PSO-CE-GHS
HMS       | 5      | 5                                | 5                              | 5                                 | 5
HMCR      | 0.9    | 0.9                              | 0.9                            | HMCRm = 0.98                      | [0.96, 1]
PAR       | 0.3    | PARmin = 0.01, PARmax = 0.99     | PARmin = 0.01, PARmax = 0.99   | PARm = 0.9                        | [0.80, 1]
BW        | 0.01   | BWmax = (UB−LB)/20, BWmin = 0.0001 | –                            | BWmax = (UB−LB)/10, BWmin = 0.0005 | BWmax1 = (UB−LB)/50, BWmax2 = (UB−LB)/100, BWmin = 0.0005
LP        | –      | –                                | –                              | 100                               | –
PSS       | –      | –                                | –                              | –                                 | 50
ET        | –      | –                                | –                              | –                                 | 8
w         | –      | –                                | –                              | –                                 | 0.3
c1 = c2   | –      | –                                | –                              | –                                 | 0.05
D         | –      | –                                | –                              | –                                 | 49,600
NI        | 50,000 | 50,000                           | 50,000                         | 50,000                            | 50,000

Table 2
Mean and standard deviation (±SD) of the benchmark function optimization results (n = 30).

Function | HS | IHS | GHS | SGHS | PSO-CE-GHS
A | 0.000187 (0.000032) | 0.000712 (0.000644) | 0.000010 (0.000022) | 0.000000 (0.000000) | 0.000000 (0.000000)
B | 0.171524 (0.072851) | 1.097325 (0.181253) | 0.072815 (0.114464) | 0.000102 (0.000017) | 0.000074 (0.000014)
C | 340.297100 (266.691353) | 624.323216 (559.847363) | 49.669203 (59.161192) | 150.929754 (131.054916) | 41.438127 (28.513215)
D | 4.233333 (3.029668) | 3.333333 (2.195956) | 0 (0) | 0.000000 (0.000000) | 0 (0)
E | 4297.816457 (1362.148438) | 4313.653320 (1062.106222) | 5146.176259 (6348.792556) | 11.796490 (7.454435) | 2.661134 (3.619655)
F | 30.262214 (11.960017) | 34.531375 (10.400177) | 0.041657 (0.050361) | 0.004015 (0.006237) | 0.000000 (0.000000)
G | 1.390625 (0.824244) | 3.499144 (1.182907) | 0.008629 (0.015277) | 0.017737 (0.067494) | 0.000006 (0.000002)
H | 1.130004 (0.407044) | 1.893394 (0.314610) | 0.020909 (0.021686) | 0.484445 (0.356729) | 0.000014 (0.000004)
I | 1.119266 (0.041207) | 1.120992 (0.040887) | 0.102407 (0.175640) | 0.050467 (0.035419) | 0.046993 (0.033043)
J | −1.031628 (0.000000) | −1.031628 (0.000000) | −1.031600 (0.000018) | −1.031628 (0.000000) | −1.031628 (0.000000)
K | −443.553193 (2.777075) | −438.815459 (3.703810) | 1353.211035 (361.763223) | −450.000000 (0.000000) | −450.000000 (0.000000)
L | 3888.178656 (1115.259221) | 3316.602220 (1519.408280) | 18440.504168 (4537.943604) | −431.095663 (17.251617) | −446.595442 (3.072013)
M | 3790.700528 (3271.573964) | 5752.122700 (3762.543380) | 35046942.785443 (22136432.286008) | 2511.678953 (3966.480932) | 1430.141475 (2165.850641)
N | −329.128972 (0.808682) | −328.056701 (0.667483) | −263.271951 (9.356208) | −329.860811 (0.349908) | −329.696528 (0.834101)


algorithm, while small HMCR and PAR values increase the diversity of the harmony memory. It is obvious in Fig. 1 that the HMCR and PAR values of the sphere function converge to large values, while in Fig. 2 the HMCR and PAR values of the Six-hump Camel-back function converge to small values. From the comparison of the convergence trajectories of HMCR and PAR for a unimodal function and a multimodal function, the conclusion is that PSO can guide HMCR and PAR to converge to suitable values for different problems.
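The PSO step that drives this co-evolution of a symbiotic (HMCR, PAR) particle can be sketched as below. The parameter ranges and the coefficients w and c1 = c2 follow Table 1; the function names and the surrounding bookkeeping (how fitness feeds the personal and global bests) are illustrative assumptions, not the authors' exact implementation.

```python
import random

# Ranges for the two co-evolved control parameters and the PSO
# coefficients, taken from Table 1.
HMCR_RANGE, PAR_RANGE = (0.96, 1.0), (0.80, 1.0)
W, C1, C2 = 0.3, 0.05, 0.05

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def pso_step(pos, vel, pbest, gbest):
    """One PSO velocity/position update of a 2-D (HMCR, PAR) particle,
    keeping each coordinate inside its Table 1 range."""
    new_pos, new_vel = [], []
    for i, (lo, hi) in enumerate((HMCR_RANGE, PAR_RANGE)):
        v = (W * vel[i]
             + C1 * random.random() * (pbest[i] - pos[i])
             + C2 * random.random() * (gbest[i] - pos[i]))
        new_vel.append(v)
        new_pos.append(clip(pos[i] + v, lo, hi))
    return new_pos, new_vel

# One update of a particle toward (hypothetical) personal/global bests.
pos, vel = pso_step([0.98, 0.90], [0.0, 0.0], [0.99, 0.95], [0.995, 0.92])
print(pos)  # updated (HMCR, PAR) pair, still inside its bounds
```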

4.2.2. Selection of ET

ET (i.e. the number of times a group of HMCR and PAR values is employed) has been introduced above in the paper. In this subsection, the effect of the ET value on the performance of the proposed algorithm is shown. The results generated by using different ET values for dimensions equal to 30 and 100 are summarized in Tables 4 and 5, respectively. The other parameter values are given in Table 1. From Tables 4 and 5, it can be found that more benchmark functions gain better function values when ET is 8. The parameters w, c1 and c2 are also tested in this paper; w is advised to be within [0.3, 0.6], and c1 and c2 are advised to be within [0.01, 0.1].

5. Constrained optimal problems

In this paper, two constrained optimal problems are taken to show the validity and effectiveness of the PSO-CE-GHS algorithm. The two constrained optimal problems are solved through a penalty function. The objective function $f(x)$ is subjected to

Fig. 1. Convergence trajectories of HMCR and PAR of the sphere function (max, min, mean and best curves shown).

Table 3
Mean and standard deviation (±SD) of the benchmark function optimization results (n = 100).

    HS                                     IHS                                    GHS                                      SGHS                           PSO-CE-GHS
A   8.683062 (0.775134)                    8.840449 (0.762496)                    2.230721 (0.565271)                      0.000002 (0.000003)            0.000000 (0.000000)
B   82.926284 (6.717904)                   82.548978 (6.341707)                   19.020813 (5.093733)                     0.017581 (0.021205)            0.005935 (0.004252)
C   16675172.184717 (3182464.488466)       17277654.059718 (2945544.275052)       2598652.617273 (915937.797217)           621.749360 (583.889593)        227.331911 (151.874492)
D   20280.200000 (2003.829956)             20827.733333 (2175.284501)             5219.933333 (1134.876027)                0.100000 (0.305129)            0 (0)
E   215052.904398 (28276.375538)           213812.584732 (28305.249583)           321780.353575 (39589.041160)             37282.096600 (5913.489066)     25620.277221 (7248.128848)
F   7960.925495 (572.390489)               8301.390783 (731.191869)               1270.944476 (395.457330)                 35.675398 (86.000104)          0.000312 (0.005492)
G   343.497796 (27.245380)                 343.232044 (25.149464)                 80.657677 (30.368471)                    12.353767 (2.635607)           0.000917 (0.000613)
H   13.857189 (0.284945)                   13.801383 (0.530388)                   8.767846 (0.880066)                      −0.000000 (0.000000)           0.004476 (0.013478)
I   195.592577 (24.808359)                 204.291518 (19.157177)                 54.252289 (18.600195)                    0.027932 (0.009209)            0.017681 (0.018286)
K   22241.554607 (2550.746480)             23026.241628 (2304.787587)             88835.245672 (9065.418923)               −449.999980 (0.000093)         −450.000000 (0.000000)
L   272495.060293 (38504.505752)           274439.336302 (37300.950900)           496668.916387 (51929.415486)             63251.604588 (12430.053431)    47269.463391 (12393.192142)
M   2242245818.867268 (380621042.775803)   2211121263.779596 (358676387.353021)   27910012932.716747 (3941689420.106002)   781.510290 (293.228166)        2287.454756 (2762.051617)
N   36.164513 (25.576559)                  36.685585 (25.311496)                  509.066964 (45.183819)                   −317.225748 (2.732871)         −312.635437 (8.630611)


inequality constraints $g_i(x) \le 0$ $(i = 1, 2, \ldots, m)$, and the function
$$F(x) = f(x) + r\sum_{i=1}^{m}\max\{0,\, g_i(x)\}$$
(with $r = 50 + 10(1 + NI/NI_{\max})$ [23], where $NI$ is the current generation and $NI_{\max}$ is the maximum generation in this paper) is defined so that the constrained optimal problems are turned into unconstrained optimal problems.
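A minimal sketch of this penalty transform follows; the function and parameter names are illustrative, and the toy objective and constraint are not from the paper.

```python
def penalized(f, constraints, x, ni, ni_max):
    """Penalty transform F(x) = f(x) + r * sum(max(0, g_i(x))), with the
    generation-dependent factor r = 50 + 10 * (1 + ni / ni_max) [23]."""
    r = 50 + 10 * (1 + ni / ni_max)
    violation = sum(max(0.0, g(x)) for g in constraints)
    return f(x) + r * violation

# Toy example: minimize x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
f = lambda x: x * x
g = lambda x: 1 - x
print(penalized(f, [g], 0.5, ni=0, ni_max=50000))  # infeasible: 0.25 + 60*0.5 = 30.25
print(penalized(f, [g], 1.2, ni=0, ni_max=50000))  # feasible: just f(1.2) = 1.44
```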

Fig. 2. Convergence trajectories of HMCR and PAR of the Six-hump Camel-back function (max, min, mean and best curves shown).


5.1. Constrained function I: minimization of the weight of a spring

This problem is described by Arora [24], Coello [25] and Belegundu [26]. It consists of minimizing the weight $f(x)$ of a tension/compression spring subject to constraints on shear stress, surge frequency and minimum deflection, as shown in Fig. 3. The design variables are the mean coil diameter $D\,(= x_2)$, the wire diameter $d\,(= x_1)$ and the number of active coils $N\,(= x_3)$. Formally, the problem can be expressed as:

Table 4
The effect of ET (n = 30).

    ET = 4                      ET = 6                       ET = 8                       ET = 10
A   0.000000 (0.000000)         0.000000 (0.000000)          0.000000 (0.000000)          0.000000 (0.000000)
B   0.000075 (0.000014)         0.000075 (0.000018)          0.000074 (0.000014)          0.000093 (0.000025)
C   50.64155 (10.27212)         38.43081 (27.27405)          41.438127 (28.513215)        40.94394 (26.62082)
D   0 (0)                       0 (0)                        0 (0)                        0 (0)
E   3.298281 (1.966636)         2.327646 (1.896010)          2.661134 (3.619655)          2.356256 (2.014568)
F   0.000000 (0.000000)         0.000000 (0.000000)          0.000000 (0.000000)          0.000000 (0.000000)
G   0.000008 (0.000003)         0.000008 (0.000003)          0.000006 (0.000002)          0.000009 (0.000004)
H   0.000014 (0.000002)         0.000014 (0.000004)          0.000014 (0.000004)          0.000015 (0.000003)
I   0.048103 (0.026635)         0.061991 (0.045538)          0.046993 (0.033043)          0.058280 (0.015250)
J   −1.031628 (0.000000)        −1.031628 (0.000000)         −1.031628 (0.000000)         −1.031628 (0.000000)
K   −450.000000 (0.000000)      −450.000000 (0.000000)       −450.000000 (0.000000)       −450.000000 (0.000000)
L   −437.404256 (19.148102)     −446.149812 (3.505237)       −446.595442 (3.072013)       −446.524255 (2.682012)
M   1652.896556 (2346.795038)   1437.488980 (2165.850641)    1430.141475 (2016.268157)    2079.412084 (3028.415514)
N   −329.965739 (0.183974)      −329.856852 (0.354822)       −329.696528 (0.834101)       −329.645796 (0.326541)


$$
\begin{aligned}
\text{Minimize}\quad & f(x) = (x_3 + 2)\,x_2 x_1^2\\
\text{subject to}\quad & g_1(x) = 1 - \frac{x_2^3 x_3}{71785\,x_1^4} \le 0,\\
& g_2(x) = \frac{4x_2^2 - x_1 x_2}{12566\,(x_2 x_1^3 - x_1^4)} + \frac{1}{5108\,x_1^2} - 1 \le 0,\\
& g_3(x) = 1 - \frac{140.45\,x_1}{x_2^2 x_3} \le 0,\\
& g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0.
\end{aligned}
$$

This problem has been solved by Belegundu [26] using eight different mathematical optimization techniques. Arora [24] also solved this problem using a numerical optimization technique called constraint correction at constant cost. Coello [25] solved this problem using a GA-based method, and Mahdavi [19] solved it using the IHS algorithm. Table 6 presents the best solution of this problem obtained using the PSO-CE-GHS algorithm and compares the PSO-CE-GHS results with the solutions reported by other researchers. In this paper the algorithm runs for 50,000 evaluations, the same as IHS. It is obvious from Table 6 that the result obtained using the PSO-CE-GHS algorithm is better than those reported previously in the literature.
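As a quick check, the best design reported for PSO-CE-GHS in Table 6 can be substituted back into the spring formulation above; the sketch below evaluates the objective and the four constraints at that point (helper name is illustrative).

```python
def spring(x):
    """Objective and constraints of the tension/compression spring problem
    (x1 = d, x2 = D, x3 = N)."""
    x1, x2, x3 = x
    f = (x3 + 2) * x2 * x1 ** 2
    g = [
        1 - x2 ** 3 * x3 / (71785 * x1 ** 4),
        (4 * x2 ** 2 - x1 * x2) / (12566 * (x2 * x1 ** 3 - x1 ** 4))
        + 1 / (5108 * x1 ** 2) - 1,
        1 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1,
    ]
    return f, g

# Best PSO-CE-GHS design from Table 6.
f, g = spring((0.051756, 0.358330, 11.195100))
print(round(f, 6))                   # 0.012665, matching Table 6
print(all(gi <= 1e-3 for gi in g))   # True: all constraints (numerically) satisfied
```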

5.2. Constrained function II: welded beam design

The welded beam structure is shown in Fig. 4. The objective is to find the minimum fabricating cost of the welded beam subject to constraints on shear stress $(\tau)$, bending stress $(\sigma)$, buckling load $(P_C)$, end deflection $(\delta)$ and side constraints. There are four design variables: $h\,(= x_1)$, $l\,(= x_2)$, $t\,(= x_3)$ and $b\,(= x_4)$. The mathematical formulation of the objective function $f(x)$, which is the total fabricating cost mainly comprising the set-up, welding labor and material costs, is as follows:

$$
\begin{aligned}
\text{Minimize}\quad & f(x) = 1.10471\,x_1^2 x_2 + 0.04811\,x_3 x_4 (14.0 + x_2)\\
\text{subject to}\quad & g_1(x) = \tau(x) - \tau_{\max} \le 0,\\
& g_2(x) = \sigma(x) - \sigma_{\max} \le 0,\\
& g_3(x) = x_1 - x_4 \le 0,\\
& g_4(x) = \delta(x) - \delta_{\max} \le 0,\\
& g_5(x) = P - P_C(x) \le 0,
\end{aligned}
$$

where

$$
\begin{aligned}
&\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2},\qquad
\tau' = \frac{P}{\sqrt{2}\,x_1 x_2},\qquad
\tau'' = \frac{MR}{J},\qquad
M = P\left(L + \frac{x_2}{2}\right),\\
&R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2},\qquad
J = 2\left\{\sqrt{2}\,x_1 x_2\left[\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\},\\
&\sigma(x) = \frac{6PL}{x_4 x_3^2},\qquad
\delta(x) = \frac{6PL^3}{E x_4 x_3^2},\qquad
P_C(x) = \frac{4.013E\sqrt{x_3^2 x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right),\\
&P = 6000\ \text{lb},\quad L = 14\ \text{in.},\quad \delta_{\max} = 0.25\ \text{in.},\quad E = 30\times 10^6\ \text{psi},\quad G = 12\times 10^6\ \text{psi},\\
&\tau_{\max} = 13{,}600\ \text{psi},\qquad \sigma_{\max} = 30{,}000\ \text{psi}.
\end{aligned}
$$

This problem has been solved by Deb [27] and Coello [28] using simple GA methods. It has also been solved by Ragsdell and Phillips [29] using geometric programming. Ragsdell and Phillips also compared their results with those produced by the methods contained in a software package called ‘Opti-Sep’ [30], which includes the following numerical optimization techniques: ADRANS (Gall’s adaptive random search with a penalty function), APPROX (Griffith and Stewart’s successive linear approximation), DAVID (Davidon–Fletcher–Powell with a penalty function), MEMGRD (Miele’s memory gradient with a

Fig. 3. Tension/compression spring.

Table 6
Optimal results for minimization of the weight of spring.

Design variables   Proposed method   Mahdavi [19]   Arora [24]   Belegundu [26]   Coello [25]
x1 (d)             0.051756          0.051154       0.053396     0.050000         0.051989
x2 (D)             0.358330          0.349871       0.399180     0.315900         0.363965
x3 (N)             11.195100         12.076432      9.185400     14.250000        10.890522
g1(x)              −0.000000         0.000000       0.000019     −0.000014        −0.000013
g2(x)              −0.000000         −0.000007      −0.000018    −0.003782        −0.000021
g3(x)              −4.056962         −4.027840      −4.123832    −3.938302        −4.061338
g4(x)              −0.726609         −0.736572      −0.698283    −0.756067        −0.722698
f(x)               0.012665          0.012670       0.012730     0.012833         0.012681

Table 5
The effect of ET (n = 100).

    ET = 4                        ET = 6                        ET = 8                        ET = 10
A   0.009268 (0.023172)           0.003143 (0.011178)           0.000000 (0.000000)           0.000000 (0.000000)
B   0.008720 (0.006632)           0.007180 (0.008221)           0.005935 (0.004252)           0.010434 (0.014565)
C   260.154545 (156.498987)       244.172505 (149.550848)       227.331911 (151.874492)       201.389260 (160.121231)
D   0 (0)                         0 (0)                         0 (0)                         0 (0)
E   29492.497522 (8234.266257)    26241.683627 (6558.377456)    25620.277221 (7248.128848)    26681.750284 (6843.478945)
F   0.007575 (0.013569)           0.010648 (0.032904)           0.000312 (0.005492)           0.000141 (0.002503)
G   0.000410 (0.000412)           0.000376 (0.000211)           0.000917 (0.000613)           0.000584 (0.000344)
H   0.004167 (0.005658)           0.008889 (0.013565)           0.004476 (0.013478)           0.008905 (0.013376)
I   0.052433 (0.047672)           0.072724 (0.045324)           0.017681 (0.018286)           0.086978 (0.060867)
K   −449.986757 (0.043144)        −449.997969 (0.006571)        −450.000000 (0.000000)        −449.992942 (0.014039)
L   46339.335212 (13385.139034)   48288.429403 (12045.189958)   47269.463391 (12393.192142)   47806.988632 (10779.781584)
M   1751.618248 (1398.133555)     2149.160028 (2350.967984)     2287.454756 (2762.051617)     2020.395711 (2123.249783)
N   −319.081104 (3.267281)        −316.425925 (5.706127)        −312.635437 (8.630611)        −310.945346 (16.718144)

Fig. 4. Welded beam structure.


penalty function), SEEK1 and SEEK2 (Hooke and Jeeves with two different penalty functions), SIMPLX (Simplex method with a penalty function) and RANDOM (Richardson’s random method). Additionally, Lee and Geem [6] solved the problem using the HS method and Mahdavi [19] solved it using the IHS algorithm. In this paper the algorithm runs for 300,000 evaluations, the same as IHS. The best solutions produced by the above methods are shown in Table 7. The PSO-CE-GHS algorithm obtains the best solution.

Table 7
Optimal results for welded beam design.

Methods             h (= x1)   l (= x2)   t (= x3)   b (= x4)   Cost
Ragsdell [29]       0.2455     6.1960     8.2730     0.2455     2.3859
Siddall [30]        0.2444     6.2189     8.2915     0.2444     2.3815
Deb [27]            0.2489     6.1730     8.1789     0.2533     2.4328
Coello [28]         0.2088     3.4205     8.9975     0.2100     1.7483
Lee and Geem [6]    0.2442     6.2231     8.2915     0.2443     2.3807
Mahdavi [19]        0.2057     3.4704     9.0366     0.2057     1.7248
Proposed method     0.2161     3.2606     9.0314     0.2059     1.7232
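For illustration, the fabricating-cost objective can be evaluated directly at the designs listed in Table 7; the sketch below covers the objective only (the constraint functions are omitted), and the helper name is illustrative.

```python
# Welded-beam fabricating cost f(x) with (h, l, t, b) = (x1, x2, x3, x4).
def cost(x1, x2, x3, x4):
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

print(cost(0.2057, 3.4704, 9.0366, 0.2057))  # close to the 1.7248 of Mahdavi [19]
print(cost(0.2161, 3.2606, 9.0314, 0.2059))  # cost at the proposed design
```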


6. Conclusions

A global best harmony search algorithm with control parameters co-evolution based on PSO is proposed in this paper. The proposed algorithm employs PSO to guide its two significant control parameters to converge to proper values for different problems and different search generations. The results obtained on benchmark functions and constrained optimal problems show that the proposed algorithm is more powerful than the existing variants of HS and is suitable for constrained optimal problems.

Acknowledgments

The authors gratefully acknowledge the support from the following foundations: 973 Project of China (2013CB733600), National Natural Science Foundation of China (21176073), Doctoral Fund of Ministry of Education of China (20090074110005), Program for New Century Excellent Talents in University (NCET-09-0346) and ‘‘Shu Guang’’ Project (09SG29).

References

[1] Z.W. Geem, J.H. Kim, G.V. Loganathan, A new heuristic optimization algorithm: harmony search, Simulations 76 (2001) 60–68.
[2] J.H. Kim, Z.W. Geem, E.S. Kim, Parameter estimation of the nonlinear Muskingum model using harmony search, J. Am. Water Resour. Assoc. 37 (2001) 1131–1138.
[3] Z.W. Geem, J. Kim, G. Loganathan, Harmony search optimization: application to pipe network design, Int. J. Model. Simul. 22 (2) (2002) 125–133.
[4] K.S. Lee, Z.W. Geem, A new structural optimization method based on the harmony search algorithm, Comput. Struct. 82 (2004) 781–798.
[5] Z.W. Geem, K.S. Lee, Y.J. Park, Application of harmony search to vehicle routing, Am. J. Appl. Sci. 2 (2005) 1552–1557.
[6] K.S. Lee, Z.W. Geem, A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice, Comput. Methods Appl. Mech. Eng. 194 (2005) 3902–3933.
[7] K.S. Lee, Z.W. Geem, S.H. Lee, K.-W. Bae, The harmony search heuristic algorithm for discrete structural optimization, Eng. Optim. 37 (2005) 663–684.
[8] Z.W. Geem, Optimal cost design of water distribution networks using harmony search, Eng. Optim. 38 (2006) 259–280.
[9] T.M. Ayvaz, Simultaneous determination of aquifer parameters and zone structures with fuzzy c-means clustering and meta-heuristic harmony search algorithm, Adv. Water Resour. 30 (2007) 2326–2338.
[10] A. Vasebi, M. Fesanghary, S.M.T. Bathaee, Combined heat and power economic dispatch by harmony search algorithm, Electr. Power Energy Syst. 29 (2007) 713–719.
[11] S.O. Degertekin, Optimum design of steel frames using harmony search algorithm, Struct. Multi. Optim. 36 (4) (2008) 393–401.
[12] Z.W. Geem, Novel derivative of harmony search algorithm for discrete design variables, Appl. Math. Comput. 199 (1) (2008) 223–230.
[13] R. Forsati, A.T. Haghighat, M. Mahdavi, Harmony search based algorithms for bandwidth-delay-constrained least-cost multicast routing, Comput. Commun. 31 (10) (2008) 2505–2519.
[14] H. Ceylan, H. Ceylan, S. Haldenbilen, et al., Transport energy modeling with meta-heuristic harmony search algorithm: an application to Turkey, Energy Policy 36 (7) (2008) 2527–2535.
[15] M. Fesanghary, M. Mahdavi, M. Minary-Jolandan, et al., Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems, Comput. Methods Appl. Mech. Eng. 197 (33–40) (2008) 3080–3091.
[16] S.O. Degertekin, Harmony search algorithm for optimum design of steel frame structures: a comparative study with other optimization methods, Struct. Eng. Mech. 29 (4) (2008) 391–410.
[17] Z.W. Geem, Harmony search optimization to the pump-included water distribution network design, Civ. Eng. Environ. Syst. 26 (3) (2009) 211–221.
[18] Z.W. Geem, Particle-swarm harmony search for water network design, Eng. Optim. 41 (4) (2009) 297–311.
[19] M. Mahdavi, M. Fesanghary, E. Damangir, An improved harmony search algorithm for solving optimization problems, Appl. Math. Comput. 188 (2007) 1567–1579.
[20] M.G.H. Omran, M. Mahdavi, Global-best harmony search, Appl. Math. Comput. 198 (2008) 643–656.
[21] Quan-Ke Pan, P.N. Suganthan, M. Fatih Tasgetiren, J.J. Liang, A self-adaptive global best harmony search algorithm for continuous optimization problems, Appl. Math. Comput. 216 (2010) 830–848.
[22] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 1995, pp. 39–43.
[23] Qinqin Fan, Xuefeng Yan, Differential evolution algorithm with co-evolution of control parameters and penalty factors for constrained optimization problems, Asia-Pac. J. Chem. Eng. (2010), http://dx.doi.org/10.1002/apj.524.
[24] J.S. Arora, Introduction to Optimum Design, McGraw-Hill, New York, 1989.
[25] C.A.C. Coello, Constraint-handling in genetic algorithms through the use of dominance-based tournament selection, Adv. Eng. Inf. 16 (2002) 193–203.
[26] A.D. Belegundu, A study of mathematical programming methods for structural optimization, PhD thesis, Department of Civil and Environmental Engineering, University of Iowa, Iowa, 1982.
[27] K. Deb, Optimal design of a welded beam via genetic algorithms, AIAA J. 29 (11) (1991).
[28] C.A.C. Coello, Use of a self-adaptive penalty approach for engineering optimization problems, Comput. Ind. 41 (2) (2000) 113–127.
[29] K.M. Ragsdell, D.T. Phillips, Optimal design of a class of welded structures using geometric programming, ASME J. Eng. Ind. Ser. B 98 (3) (1976) 1021–1025.
[30] J.N. Siddall, Analytical Decision-Making in Engineering Design, Prentice-Hall, Englewood Cliffs, NJ, 1972.