
Experimental Parameter Investigations on Particle Swarm Optimization Acceleration Coefficients

Jian Zhang

School of Information, Guangdong Ocean University, Zhanjiang, Guangdong, P. R. China, 524088, E-mail: [email protected]

Abstract

Particle swarm optimization (PSO) is one of the most successful optimization techniques in swarm intelligence and has developed rapidly in recent years. However, the performance of PSO depends significantly on the acceleration coefficients c1 and c2, which control its exploration and convergence abilities. Parameters c1 and c2 are the “self-cognitive” coefficient and the “social-influence” coefficient, respectively, and both are set to 2.0 in traditional studies. Even though some studies have argued that c1 and c2 need not be 2.0 for good performance, few experimental studies of these two parameters can be found. This paper gives a comprehensive investigation of the acceleration coefficients c1 and c2 on a set of 13 unimodal and multimodal benchmark functions, in order to study how these two parameters should be set for different functions to obtain better performance. The experimental results indicate that the sum of c1 and c2 should be kept within the interval [3.5, 4.5]. This conclusion can serve as a guideline for adapting c1 and c2 during the run of PSO.

Keywords: Particle Swarm Optimization (PSO), Parameter Investigation, Acceleration Coefficients

1. Introduction

Particle swarm optimization (PSO) is one of the most important swarm intelligence algorithms for global optimization [1][2]. The PSO algorithm is inspired by the swarm behavior of bird flocking and has developed rapidly in recent years, mainly due to its simple concept and easy implementation [3]. PSO is a population-based, iterative algorithm like the genetic algorithm (GA) [4]. However, instead of the selection, crossover, and mutation operations that are essential in GA, PSO uses only two update equations to adjust each particle’s velocity and position generation by generation [5].

Since its introduction in 1995, PSO has developed rapidly, with much work on performance improvement and real-world applications. On the performance-improvement side, considerable research has been conducted on the acceleration coefficients. Suganthan [6] and Ratnaweera et al. [7] suggested time-varying acceleration coefficients. Zhan et al. [8] proposed an adaptive PSO (APSO) that controls both the inertia weight and the acceleration coefficients. Apart from studies on parameter control, other research improves the algorithm performance by hybridizing PSO with other techniques such as the simplex method [9] and tabu search [10], or by using a comprehensive learning strategy [12], an orthogonal learning strategy [13], or a replacement strategy [14]. Moreover, due to its simple concept and low computational cost, PSO has been applied to various kinds of real-world applications, such as aircraft flight planning [15], wireless sensor network design [16], power electronic circuit optimization [17], job shop scheduling [18], traffic flow forecasting [19], router CPU time management [20], and PID control design [21].

Among the large amount of work on PSO mentioned above, researchers have found that the performance of the PSO algorithm depends significantly on the algorithm parameters, especially the acceleration coefficients [6]-[8]. In traditional PSO, the acceleration coefficients c1 and c2 are both set to 2.0 [2]. Even though some studies have argued that they need not be 2.0 for good performance [6]-[8], those settings were chosen based on the researchers’ experience. Few studies based on an experimental investigation of the two parameters can be found.

In order to make clear how the acceleration coefficients c1 and c2 affect the algorithm performance on different problems, and how to set these two parameters to obtain better algorithm performance, this paper makes, for the first time, a comprehensive investigation of these two coefficients.


The investigation is carried out on the acceleration coefficients c1 and c2 through a set of 13 unimodal and multimodal benchmark functions. Based on the comprehensive experimental results, this paper summarizes value-selection rules for these two parameters and promising combinations of c1 and c2 for better performance.

The rest of the paper is organized as follows. Section 2 introduces the PSO algorithm framework and makes a brief survey of the work on the acceleration coefficients. Section 3 presents the experimental study on the acceleration coefficients based on a set of 13 benchmark functions; the experimental results are discussed and the selection rules for setting the values of c1 and c2 are summarized. Finally, Section 4 concludes this paper and highlights future research work.

2. Particle Swarm Optimization

2.1. Algorithm Framework

In PSO, a swarm of particles is used to represent the potential solutions, and each particle i has two vectors, the velocity vector Vi = [vi1, vi2, …, viD] and the position vector Xi = [xi1, xi2, …, xiD]. Here D is the dimension of the search space. During initialization, the velocity and position of each particle are set randomly within the search space. During the evolutionary process, each particle i is evaluated according to its present position. If the present fitness is better than the fitness of pBesti, which stores the best solution that the ith particle has explored so far, then pBesti is replaced by the current solution (including the position and fitness). At the same time as determining pBesti, the algorithm selects the best pBesti of the swarm as the global best, denoted gBest. Then, the velocity and position of each particle are updated using the following two equations (1) and (2).

vid = ω vid + c1 r1d (pBestid – xid) + c2 r2d (gBestd – xid)    (1)
xid = xid + vid    (2)

where ω is the inertia weight, and c1 and c2 are the acceleration coefficients; r1d and r2d are two separately generated random numbers, uniformly distributed in the range [0, 1], for the dth dimension. After updating the velocity and position, the iteration goes on until the stop condition is met.

2.2. Acceleration Coefficients c1 and c2

In order to gain a clear view of how PSO works, equation (1) for the velocity update should be considered carefully. The first part of Eq. (1) represents the influence of the previous velocity, and the inertia weight ω has been demonstrated to be important for the performance. The second part is considered the “self-cognitive” component, which represents memory and personal thinking. This component encourages the particles to move towards their own best positions found so far. The third part of (1) is known as the “social-influence” component, which represents the cooperation of the swarm in finding the globally optimal solution. This component always pulls the particles towards the globally best position found so far.

Eq. (1) shows that the values of both c1 and c2 have a significant influence on the flying behavior. If c1 and c2 are too small, the particles need many generations to reach the good region of the search space. If c1 and c2 are too large, the particles change their flying velocity dramatically and cannot converge to a certain region.
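To make the roles of c1 and c2 in Eqs. (1) and (2) concrete, the following minimal Python sketch implements the velocity and position update for one particle. The function and variable names (update_particle, w for the inertia weight ω) are illustrative choices, not part of any reference implementation.

```python
import numpy as np

def update_particle(x, v, p_best, g_best, w=0.5, c1=2.0, c2=2.0):
    """One velocity/position update of a single particle, following Eqs. (1) and (2).

    x, v, p_best, and g_best are 1-D arrays of length D (the problem dimension);
    w is the inertia weight, and c1, c2 are the acceleration coefficients.
    """
    D = len(x)
    r1 = np.random.rand(D)  # uniform random numbers in [0, 1), one per dimension
    r2 = np.random.rand(D)
    # Eq. (1): inertia part + "self-cognitive" part + "social-influence" part
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    # Eq. (2): move the particle to its new position
    x = x + v
    return x, v
```

Larger c1 and c2 enlarge the two attraction terms and therefore produce larger velocity changes, which matches the behavior described above.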

Kennedy and Eberhart [3] suggested the fixed value 2.0 for both c1 and c2. As the acceleration coefficients are multiplied by a uniform random value in the range [0, 1], this gives a mean value of 1.0, so that the particle “over-flies” the target about half the time. However, the authors of [3] also pointed out that further research should be carried out to determine whether there are optimal values other than 2.0 that give better performance. From a psychological standpoint, the cognitive component coefficient (c1) and the social component coefficient (c2) should be time-varying in order to obtain different exploration and convergence abilities in different evolutionary states.


Therefore, it is significant and promising to study how to set these two parameters for better performance.

Some attempts to study c1 and c2 have been made in the literature. Suganthan [6] found that for different problems, there exist different combinations of c1 and c2 values, not necessarily 2.0, that yield good performance. Ratnaweera et al. [7] proposed a linearly decreasing strategy for the acceleration coefficient c1 and a linearly increasing strategy for the acceleration coefficient c2 during the evolutionary process. Zhan et al. [8] further controlled the values of c1 and c2 adaptively according to different evolutionary states.

However, a systematic study of the parameters c1 and c2 that investigates their influence on PSO performance and provides selection rules for their values is still greatly needed. This paper makes the first attempt at such an experimental study, based on 13 unimodal and multimodal benchmark functions.

3. Experimental Studies

3.1. Experimental Settings

The experiments are based on the thirteen benchmark functions listed in Table 1 [22]. The functions are divided into two groups: 6 unimodal functions (f1-f6) and 7 multimodal functions (f7-f13). All functions are tested with D = 30 dimensions, and their search ranges are given in Table 1.

Table 1. Thirteen benchmark functions

Function | Search Range | Name | Comments*
f1(x) = Σ_{i=1}^{D} x_i^2 | [-100, 100]^D | Sphere | U & S
f2(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i| | [-10, 10]^D | Schwefel 2.22 | U & S
f3(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} x_j)^2 | [-100, 100]^D | Quadric | U & L
f4(x) = max_i { |x_i|, 1 ≤ i ≤ D } | [-100, 100]^D | Schwefel 2.21 | U & S
f5(x) = Σ_{i=1}^{D} (⌊x_i + 0.5⌋)^2 | [-100, 100]^D | Step | U & S
f6(x) = Σ_{i=1}^{D} i x_i^4 + random[0, 1) | [-1.28, 1.28]^D | Noise | U & S
f7(x) = Σ_{i=1}^{D-1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | [-10, 10]^D | Rosenbrock | M & L
f8(x) = Σ_{i=1}^{D} −x_i sin(√|x_i|) | [-500, 500]^D | Schwefel | M & S
f9(x) = Σ_{i=1}^{D} [x_i^2 − 10cos(2πx_i) + 10] | [-5.12, 5.12]^D | Rastrigin | M & S
f10(x) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i^2)) − exp((1/D) Σ_{i=1}^{D} cos(2πx_i)) + 20 + e | [-32, 32]^D | Ackley | M & S
f11(x) = (1/4000) Σ_{i=1}^{D} x_i^2 − Π_{i=1}^{D} cos(x_i/√i) + 1 | [-600, 600]^D | Griewank | M & L
f12(x) = (π/D){10 sin^2(πy_1) + Σ_{i=1}^{D-1} (y_i − 1)^2 [1 + 10 sin^2(πy_{i+1})] + (y_D − 1)^2} + Σ_{i=1}^{D} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 | [-50, 50]^D | Generalized Penalized | M & S
f13(x) = 0.1{sin^2(3πx_1) + Σ_{i=1}^{D-1} (x_i − 1)^2 [1 + sin^2(3πx_{i+1})] + (x_D − 1)^2 [1 + sin^2(2πx_D)]} + Σ_{i=1}^{D} u(x_i, 5, 100, 4) | [-50, 50]^D | Generalized Penalized | M & S

In f12 and f13, u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a.


* U, M, S, and L stand for Unimodal, Multimodal, Variables Separable, and Variables Linkage, respectively
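For illustration, the following Python sketch implements two of the benchmark functions in Table 1: the unimodal Sphere function f1 and the multimodal Rastrigin function f9. The function names sphere and rastrigin are illustrative; the definitions follow the standard forms given in the table.

```python
import numpy as np

def sphere(x):
    """f1 Sphere: sum of squares, unimodal and separable."""
    x = np.asarray(x)
    return np.sum(x ** 2)

def rastrigin(x):
    """f9 Rastrigin: many local optima, multimodal and separable."""
    x = np.asarray(x)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

# Example: evaluate both functions at random points in their search ranges (D = 30)
D = 30
print(sphere(np.random.uniform(-100, 100, D)))
print(rastrigin(np.random.uniform(-5.12, 5.12, D)))
```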

[Figure 1: six contour plots of the mean final result for each (c1, c2) pair on the unimodal functions, with c1 on the x-axis and c2 on the y-axis, both from 0.5 to 3.5. Panels: (a) f1 Sphere, (b) f2 Schwefel 2.22, (c) f3 Quadric, (d) f4 Schwefel 2.21, (e) f5 Step, (f) f6 Noise.]

Figure 1. The Contour Plots for Different Couples of c1 and c2 on the Unimodal Functions; the Higher the Color in the Color Indicating Bar, the Better (Smaller) the Value it Represents.

The experiments on each benchmark function can be briefly described as follows. When each function is optimized by PSO, the parameters c1 and c2 each span from 0.5 to 3.5 in steps of 0.1, which yields exactly 31 × 31 couples of (c1, c2) values. The PSO variant with a specific couple of c1 and c2 values optimizes the function 100 independent times, and the mean final fitness value of these 100 runs is calculated. These mean values are stored in a matrix with 31 rows and 31 columns, which is then plotted as a contour figure with the x-coordinate standing for c1 from 0.5 to 3.5 and the y-coordinate standing for c2 from 0.5 to 3.5.


This contour figure allows the performance of PSO with different couples of c1 and c2 to be compared.

For the other experimental settings, the PSO algorithm has a population size of 20 particles, the inertia weight ω is 0.5, and the maximal generation number is 2000.
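As a rough outline of this experimental procedure, the sketch below sweeps c1 and c2 over the 31 × 31 grid and records the mean final fitness of repeated runs in a matrix that can then be drawn as a contour plot. Here run_pso is a hypothetical driver standing in for the PSO implementation described in Section 2 (it is not defined in the paper), and the run count is reduced to keep the example short.

```python
import numpy as np

def sweep_acceleration_coefficients(run_pso, func, runs=10):
    """Mean final fitness for each (c1, c2) pair on a 31 x 31 grid.

    run_pso(func, c1, c2) is assumed to perform one independent PSO optimization
    (20 particles, w = 0.5, 2000 generations) and return the final best fitness.
    The paper uses 100 independent runs per pair; fewer are used here for brevity.
    """
    values = np.arange(0.5, 3.5 + 1e-9, 0.1)      # c1 and c2 from 0.5 to 3.5, step 0.1
    result = np.zeros((len(values), len(values)))  # rows: c2 (y-axis), columns: c1 (x-axis)
    for i, c2 in enumerate(values):
        for j, c1 in enumerate(values):
            result[i, j] = np.mean([run_pso(func, c1, c2) for _ in range(runs)])
    return values, result

# The matrix can then be visualized, e.g. with matplotlib:
#   import matplotlib.pyplot as plt
#   v, m = sweep_acceleration_coefficients(run_pso, sphere)
#   plt.contourf(v, v, m); plt.xlabel("c1"); plt.ylabel("c2"); plt.show()
```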

[Figure 2: seven contour plots of the mean final result for each (c1, c2) pair on the multimodal functions, with c1 on the x-axis and c2 on the y-axis, both from 0.5 to 3.5. Panels: (a) f7 Rosenbrock, (b) f8 Schwefel, (c) f9 Rastrigin, (d) f10 Ackley, (e) f11 Griewank, (f) f12 Generalized Penalized, (g) f13 Generalized Penalized.]

Figure 2. The Contour Plots for Different Couples of c1 and c2 on the Multimodal Functions; the Higher the Color in the Color Indicating Bar, the Better (Smaller) the Value it Represents.

3.2. Discussion of the Experimental Results

The contour figures for the unimodal functions (f1-f6) are given in Fig. 1 and the contour figures for the multimodal functions (f7-f13) are given in Fig. 2.

Fig. 1 and Fig. 2 show that for most of the functions, keeping the sum of c1 and c2 in the interval [3.5, 4.5] results in good solutions. Therefore, the traditional value of 2.0 for both c1 and c2, which makes the sum 4.0, is also a good choice for PSO.


The figures also show that, for both unimodal and multimodal functions, the contours share a similar characteristic: they are divided by contour lines that are nearly parallel to the diagonal running from the upper-left corner to the lower-right corner, and the values become worse and worse as the contour lines move further away from this main diagonal. That is to say, the combination of c1 and c2 should be chosen under the rule that their sum is near 4.0, more specifically, within the interval [3.5, 4.5], if good performance is desired.

Moreover, the figures for the unimodal functions show that if the value of c1 is relatively small, the value of c2 can span a relatively wide range, while the opposite holds if the value of c1 is relatively large. For example, Fig. 1(b) shows that when c1 is set to 1.0, c2 can be set approximately from 2.25 to 3.25 (a width of about 1.00) to obtain a good solution, whereas if c1 is set to 3.0, c2 must be kept in a relatively narrower range (approximately from 0.75 to 1.5, a width of about 0.75) to obtain a good solution.

However, such a characteristic is not very evident for some multimodal functions. For example, Fig. 2(c) shows that the good values for c2 are in the interval [3.0, 3.5] when c1 is 0.5, whereas the good values for c2 are in the interval [0.5, 1.5] when c1 is 3.0. Therefore, the span of good c2 values is even larger when the value of c1 is larger. This means that setting c1 relatively large may be good for PSO performance on multimodal functions.

Based on Fig. 1 and Fig. 2 and the discussion above, we can conclude three rules for choosing good values of c1 and c2, as follows (a small sketch of applying rule 1 is given after the list).

1) The sum of c1 and c2 should be kept in the interval [3.5, 4.5];
2) The c1 value can be relatively small while the c2 value can be relatively large when optimizing unimodal problems;
3) The c1 value can be relatively large while the c2 value can be relatively small when optimizing multimodal problems.
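As one possible way to apply rule 1 inside an adaptive scheme (this helper is an illustrative sketch, not a procedure given in the paper), the pair (c1, c2) can be rescaled whenever their sum drifts outside [3.5, 4.5]:

```python
def clamp_coefficient_sum(c1, c2, low=3.5, high=4.5):
    """Rescale (c1, c2) proportionally so that their sum lies in [low, high] (rule 1)."""
    s = c1 + c2
    if s < low:
        scale = low / s
    elif s > high:
        scale = high / s
    else:
        return c1, c2
    return c1 * scale, c2 * scale

# Example: (2.8, 2.2) has sum 5.0 and is rescaled to sum 4.5, keeping the ratio.
print(clamp_coefficient_sum(2.8, 2.2))  # (2.52, 1.98)
```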

4. Conclusion and Future Work

A comprehensive experimental study of the acceleration coefficients c1 and c2 has been carried out in this paper, and rules for choosing c1 and c2 have been summarized from the experimental results. The experimental results show that the sum of the c1 and c2 values should be kept in the interval [3.5, 4.5] for a PSO algorithm to obtain good performance.

It is expected that these parameter setting rules for c1 and c2 can be used by researchers who want to enhance the PSO algorithm by controlling the acceleration coefficients. For example, even though some studies have been conducted on controlling the values of c1 and c2 in the literature [6][7][8], the study in [6] did not constrain c1 and c2, the study in [7] simply decreased c1 from 2.5 to 0.5 while increasing c2 from 0.5 to 2.5, and the study in [8] clamped the sum of c1 and c2 to [3.0, 4.0]. These studies may be further enhanced if the parameter setting rules obtained in this paper are applied to them. In future work, further experimental studies based on more complex benchmarks, such as those proposed in CEC2005, will be carried out for a more comprehensive investigation. Moreover, research on using the parameter setting rules for the acceleration coefficients c1 and c2 obtained in this paper will be carried out to enhance the performance of traditional and existing improved PSO variants.

5. Acknowledgements

This work is partly supported by the Industry-Academia-Research Cooperation Project of Guangdong Province in China under Grant No. 2011B090400008.

6. References

[1] Jiangang Yang and Jinqiu Yang, “Intelligence Optimization Algorithms: A Survey”, International Journal of Advancements in Computing Technology, vol. 3, no. 4, pp. 144-152, 2011.
[2] James Kennedy, Russell Eberhart, and Yuhui Shi, Swarm Intelligence, San Mateo, CA: Morgan Kaufmann, 2001.
[3] James Kennedy and Russell Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, 1995.
[4] X. C. Dong, Z. S. Li, D. T. Ouyang, Y. X. Ye, H. H. Yu, and Y. G. Zhang, “An Improved Genetic Algorithm for Optimal Tree Decomposition of Bayesian Networks”, IJACT: International Journal of Advancements in Computing Technology, vol. 3, no. 2, pp. 95-103, 2011.
[5] Yu-hui Shi and Russell Eberhart, “Comparison between genetic algorithms and particle swarm optimization,” in Proceedings of the 7th International Conference on Evolutionary Programming, pp. 611-616, 1998.
[6] P. N. Suganthan, “Particle swarm optimizer with neighborhood operator,” in Proceedings of the IEEE International Congress on Evolutionary Computation, pp. 1958-1962, 1999.
[7] A. Ratnaweera, S. Halgamuge, and H. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240-255, 2004.
[8] Zhi-hui Zhan, Jun Zhang, Yun Li, and Henry Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 39, no. 6, pp. 1362-1381, 2009.
[9] Ping Zhang, Ping Wei, Hong-yang Yu, and Chun Fei, “A Novel Search Algorithm Based on Particle Swarm Optimization and Simplex Method for Block Motion Estimation”, International Journal of Digital Content Technology and its Applications, vol. 5, no. 1, pp. 76-86, 2011.
[10] Yu-dong Zhang and Le-nan Wu, “A Hybrid TS-PSO Optimization Algorithm”, JCIT: Journal of Convergence Information Technology, vol. 6, no. 5, pp. 169-174, 2011.
[11] Cun-li Song, Xiao-bing Liu, Wei Wang, and Xin Bai, “A Hybrid Particle Swarm Optimization Algorithm for Job-Shop Scheduling Problem”, IJACT: International Journal of Advancements in Computing Technology, vol. 3, no. 4, pp. 79-88, 2011.
[12] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281-295, 2006.
[13] Zhi-hui Zhan, Jun Zhang, Yun Li, and Yu-hui Shi, “Orthogonal learning particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832-847, 2011.
[14] Cheng-hong Yang, Sheng-wei Tsai, Li-yeh Chuang, and Cheng-huei Yang, “A Modified Particle Swarm Optimization for Global Optimization”, IJACT: International Journal of Advancements in Computing Technology, vol. 3, no. 7, pp. 169-189, 2011.
[15] Zhi-hui Zhan, Xinling Feng, Yuejiao Gong, and Jun Zhang, “Solving the flight frequency programming problem with particle swarm optimization,” in Proceedings of the IEEE International Congress on Evolutionary Computation, pp. 1383-1390, 2009.
[16] Zhi-hui Zhan, Jun Zhang, and Zun Fan, “Solving the optimal coverage problem in wireless sensor networks using evolutionary computation algorithms,” in Proceedings of the 8th International Conference on Simulated Evolution and Learning, pp. 166-176, 2010.
[17] Zhi-hui Zhan and Jun Zhang, “Orthogonal learning particle swarm optimization for power electronic circuit optimization with free search range,” in Proceedings of the IEEE International Congress on Evolutionary Computation, pp. 2563-2570, 2011.
[18] Rui Zhang, “A Particle Swarm Optimization Algorithm based on Local Perturbations for the Job Shop Scheduling Problem”, IJACT: International Journal of Advancements in Computing Technology, vol. 3, no. 4, pp. 256-264, 2011.
[19] Yue-sheng Gu, Yan-pei Liu, and Jiu-cheng Xu, “Improved Hybrid Intelligent Method for Urban Road Traffic Flow Forecasting based on Chaos-PSO Optimization”, IJACT: International Journal of Advancements in Computing Technology, vol. 3, no. 7, pp. 282-290, 2011.
[20] Mohammad Anbar and Deo-prakash Vidyarthi, “Router CPU Time Management using Particle Swarm Optimization in Cellular IP Networks”, IJACT: International Journal of Advancements in Computing Technology, vol. 1, no. 2, pp. 48-55, 2009.
[21] Ramzy S. Ali Al-Waily, “Design of Robust Mixed H2/H∞ PID Controller Using Particle Swarm Optimization”, IJACT: International Journal of Advancements in Computing Technology, vol. 2, no. 5, pp. 53-60, 2010.
[22] Xin Yao, Yong Liu, and Guangmin Lin, “Evolutionary programming made faster,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82-102, 1999.
