
CHAPTER 3

EVOLUTIONARY AND PARTICLE SWARM OPTIMIZATION ALGORITHMS

3.1 INTRODUCTION

There is growing interest in the application of multi-objective optimization based design approaches to enhance the performance of electrical machines. This necessitates formulating the design problem as a multi-objective optimization problem. In many cases, the multiple objectives are aggregated into a single objective function, and the optimization then yields one optimal design. Another approach to tackling multi-objective design problems is to employ the concept of Pareto optimality. The outcome of such a multi-objective optimization is a set of Pareto optimal solutions that visualize the trade-off between the competing objectives. Research interest in the development and application of evolutionary algorithms for multi-objective optimization has increased over the past two decades (Deb 2001). In this study, both of the above-mentioned approaches to the multi-objective optimization problem are considered, and modern computational intelligence based optimization techniques are applied to determine design solutions for the SRM. This chapter provides a brief overview of optimization, followed by a discussion of GA, PSO, DE and NSGA-II applied to the design optimization of the SRM.


3.2 OPTIMIZATION

A single objective optimization problem is one in which the best (maximum or minimum) value of a well-defined objective is sought. A nonlinear optimization problem can be stated in mathematical terms as given in Equation (3.1):

Find $X = (x_1, x_2, \ldots, x_n)$ (3.1)

such that $F(X)$ is minimum or maximum, subject to the constraints and bounds given by Equation (3.2):

$g_j(X) \le 0, \; j = 1, 2, \ldots, m \quad \text{and} \quad x_j^L \le x_j \le x_j^U, \; j = 1, 2, \ldots, n$ (3.2)

where $F$ is the objective function to be minimized or maximized, the $x_j$ are the design variables, the $g_j$ are the constraint functions, and $x_j^L$ and $x_j^U$ are the lower and upper bounds on the variables.

While single objective optimization provides a powerful tool to explore the trade space of a given optimization problem, most problems in nature have several objectives to be satisfied. These problems are classified as multi-objective or multi-criteria problems. Such problems are common in electrical machine design, where one has to balance multiple requirements while trying to achieve several goals simultaneously. The multi-objective problem may be formulated as

$\min_{x \in F} \; \{f_1(x), f_2(x), \ldots, f_k(x)\}$ (3.3)

where $x \in R^n$, $f_i : R^n \rightarrow R$, and $F$ is the feasible set of Equation (3.3), which is described by inequalities as follows:

$F = \{x \in R^n : g_i(x) \le 0, \; i = 1, 2, \ldots, p\}$ (3.4)


where $g_i(x)$ is called the constraint function. We denote by $f(x) \in R^k$ the vector made up of all the objective functions, that is,

$f(x) = (f_1(x), f_2(x), \ldots, f_k(x))$ (3.5)

In a multi-objective problem there may exist no single optimum solution. Rather, the focus in such cases is on finding good compromises among the conflicting objectives. Hence, in a certain class of multi-objective problems there always exist a number of solutions which can be termed optimal. A set of such optimal solutions is commonly known as the Pareto-optimal solutions or the Pareto front. These solutions are optimal in the sense that there is no other solution in the search space superior to them when all the objectives are taken into consideration. In other words, the Pareto optimal solutions are non-dominated solutions (Deb 2001). Preference information from the decision maker is needed to perform further selection.

An ideal solution of Equation (3.3) would be a point $x^* \in F$ such that

$f_i(x^*) \le f_i(x), \quad \forall x \in F, \; i = 1, 2, \ldots, k$ (3.6)

The point $x^*$ seldom exists; therefore Equation (3.3) turns into finding some or all of the Pareto optimal solutions. A point $x^* \in F$ is a Pareto optimal solution of (3.3) if there does not exist any feasible point $x \in F$ such that

$f_i(x) \le f_i(x^*), \quad i = 1, 2, \ldots, k$ (3.7)

and $f_j(x) < f_j(x^*)$ for at least one index $j \in \{1, 2, \ldots, k\}$.
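To make the dominance relation underlying this definition concrete, the short Python sketch below tests whether one objective vector Pareto-dominates another in a minimization setting. The function name and the example values are illustrative only and are not part of the original formulation.

```python
from typing import Sequence

def dominates(f_a: Sequence[float], f_b: Sequence[float]) -> bool:
    """Return True if f_a Pareto-dominates f_b (minimization):
    no worse in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(f_a, f_b))
    strictly_better = any(a < b for a, b in zip(f_a, f_b))
    return no_worse and strictly_better

# (1.0, 2.0) dominates (1.5, 2.0); (1.0, 3.0) and (2.0, 1.0) are mutually non-dominated.
print(dominates((1.0, 2.0), (1.5, 2.0)))   # True
print(dominates((1.0, 3.0), (2.0, 1.0)))   # False
```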


The Pareto front and the non-dominated solutions for two conflicting objectives f1 and f2 are illustrated in Figure 3.1.

Figure 3.1 Illustration of Pareto front

3.2.1 Handling of Multi-Objective Problems

Several methods exist to handle multi-objective problems (Deb 2001, Liuzzi et al 2003). The most widespread method is the weighted sum, in which each objective is assigned a weight and the weighted objectives are added together into a single objective. The weight of each objective is chosen in proportion to its relative importance in the problem. The weighted sum objective function for M objectives $f_1(x), f_2(x), \ldots, f_M(x)$ is formulated as

$F(x) = \sum_{i=1}^{M} w_i f_i(x)$ (3.8)

where $w_i \ge 0$ is the weighting coefficient representing the relative importance of the i-th objective function. By choosing different weight coefficients for different objectives, the preference of the decision maker is taken into account. The weight coefficients are selected such that


$\sum_{i=1}^{M} w_i = 1$ (3.9)

This method has the advantage of generating a single compromise solution.
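As a concrete illustration of Equations (3.8) and (3.9), the minimal sketch below aggregates two objective values into one scalar; the objective functions, the weights and the candidate design are assumed purely for the example.

```python
def weighted_sum(objectives, weights):
    """Aggregate objective values into one scalar as in Equation (3.8).
    The weights are assumed non-negative and to sum to one (Equation (3.9))."""
    return sum(w * f for w, f in zip(weights, objectives))

# Two hypothetical objectives of a candidate design x.
f1 = lambda x: x[0] ** 2             # e.g. a loss-related term
f2 = lambda x: (x[0] - 2.0) ** 2     # e.g. a volume-related term
x = [0.8]
print(weighted_sum([f1(x), f2(x)], weights=[0.6, 0.4]))
```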

Evolutionary algorithm based multi-objective optimization deals simultaneously with a set of possible solutions, and the entire set of Pareto optimal solutions is determined in a single run of the algorithm.

In the weighted sum approach, the different objectives must be combined into a single objective, which requires prior knowledge of the weights, and each run finds one solution. Multi-objective optimization, in contrast, treats each objective independently and does not require any prior weights; it generates a set of Pareto optimal solutions in a single run, from which the designer can identify the trade-off between the competing objectives.

In this work, NSGA-II has been applied to determine the Pareto optimal solutions. Further, the design problem is formulated as a single objective optimization problem using the weighted sum approach and solved using the PSO and DE techniques.

3.3 OPTIMIZATION METHODS

Classical optimization methods can be classified into two distinct groups: direct and gradient based methods. In direct search methods, only the objective function and the constraint values are used to guide the search, whereas gradient based methods use the first and second order derivatives of the objective function to guide the search process. The common difficulties faced with most classical and gradient based methods are as follows:

1. Convergence to the optimal solution depends on the chosen initial solution.

2. Most algorithms tend to get stuck at a suboptimal solution.

3. An algorithm efficient in solving one optimization problem may not be efficient in solving a different optimization problem.

4. Algorithms are not efficient in handling problems having a discrete search space.

5. Algorithms cannot be efficiently used on a parallel machine.

To overcome the above-mentioned difficulties, Evolutionary Algorithms (EAs) and swarm based techniques are applied to solve many real world optimization problems. The main difference between a classical search and EA based optimization techniques is that the latter maintain a population of potential solutions to a problem rather than just one solution. In the following sections, descriptions of GA, PSO and DE are given.

3.4 GENETIC ALGORITHM

GA based optimization (Goldberg 1989) is an adaptive heuristic search technique that involves the generation, systematic evaluation and enhancement of potential design solutions until a stopping criterion is met. GA, derived from Darwin's theory of natural selection, mimics the reproduction behavior observed in biological populations and employs the principle of "survival of the fittest" in its search process. The idea is that an individual (design solution) is more likely to survive if it is adapted to its environment (design objectives and constraints). Therefore, over a number of generations, desirable traits evolve and remain in the genome composition of the population, prevailing over traits with weaker characteristics. Figure 3.2 illustrates the flow chart of GA. Three fundamental operators are involved in the search process of a genetic algorithm: selection, crossover and mutation. Selection chooses chromosomes from the current generation's population for inclusion in the next generation's population according to their fitness, so that the fittest individuals reproduce offspring for the next generation; common approaches to this "survival of the fittest" operation are fitness proportional selection, ranking selection and tournament selection. The crossover operator combines two chromosomes to produce a new chromosome (offspring) with the intention of improving the fitness of the individuals in the next generation. The mutation operator maintains genetic diversity from one generation of the population to the next and introduces stochastic variability into the GA in order to obtain quicker convergence.

Figure 3.2 Flow chart of GA
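To make the three operators concrete, the following Python sketch outlines one possible realization of the GA loop in Figure 3.2, using tournament selection, single-point crossover and bit-flip mutation on a binary chromosome. The parameter values and the example fitness function are assumptions chosen for illustration, not the settings used in this work.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=50,
                      p_crossover=0.9, p_mutation=0.02):
    """Minimal binary-coded GA: selection, crossover, mutation (maximization)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament(scores, k=2):
        # Return a copy of the fitter of k randomly chosen individuals.
        best = random.randrange(pop_size)
        for _ in range(k - 1):
            rival = random.randrange(pop_size)
            if scores[rival] > scores[best]:
                best = rival
        return pop[best][:]

    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = tournament(scores), tournament(scores)
            if random.random() < p_crossover:            # single-point crossover
                cut = random.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):                  # bit-flip mutation
                    if random.random() < p_mutation:
                        child[i] = 1 - child[i]
                offspring.append(child)
        pop = offspring[:pop_size]
    scores = [fitness(ind) for ind in pop]
    return max(zip(scores, pop))

# Example: maximize the number of ones in the chromosome.
print(genetic_algorithm(fitness=sum))
```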


3.5 PARTICLE SWARM OPTIMIZATION

Swarm Intelligence (SI) is mainly inspired by the social behaviour patterns of organisms that live and interact within large groups of unsophisticated autonomous individuals. In particular, it incorporates the swarming behaviours observed in flocks of birds, schools of fish, swarms of bees, colonies of ants, and even human social behaviour, from which intelligence emerges (Liu et al 2007). PSO, proposed by Kennedy and Eberhart (1995), uses a number of particles that constitute a swarm moving around in an N-dimensional search space looking for the best solution.

Each particle in PSO keeps track of the coordinates in the problem space that are associated with the best solution (best fitness) it has achieved so far; this value is called "pbest". Another "best" value tracked by the global version of the particle swarm optimizer is the overall best value, and its location, obtained so far by any particle in the swarm; this location is called "gbest". Each particle tries to modify its position based on its current position, its current velocity, the distance between the current position and pbest, and the distance between the current position and gbest. The PSO concept consists of altering the velocity of each particle towards its pbest and gbest locations at each iteration. Acceleration is weighted by a random term, with separate random numbers generated for the acceleration toward the pbest and gbest locations.

PSO has many key features that have attracted researchers to employ it in different applications (Coelho et al 2009, Chaturvedi et al 2009, Yuan et al 2008, Valle 2008, Kannan et al 2007, Abido 2002) in which conventional optimization algorithms might fail:

• It only requires a fitness function to measure the "quality" of a solution, instead of complex mathematical operations like gradient, Hessian, or matrix inversion. This reduces the computational complexity and relieves some of the restrictions usually imposed on the objective function, such as differentiability, continuity, or convexity.

• It is less sensitive to the quality of the initial solution.

• It can easily be combined with other optimization tools to form hybrid methods.

• It has the ability to escape local minima, since it follows probabilistic transition rules.

• PSO is a history-based algorithm: in each step, particles make use of their own behaviour from previous iterations.

• Compared to other evolutionary optimization algorithms, such as GA, PSO is easy to implement and has only a few parameters to be adjusted. Therefore, the computation time is shorter and it requires less memory.

The PSO algorithm is explained as follows. Let X and V denote a particle's position and its corresponding velocity in the search space, respectively. At iteration k, each particle i has its position defined by $X_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,N}]$ and its velocity defined by $V_i = [v_{i,1}, v_{i,2}, \ldots, v_{i,N}]$ in the N-dimensional search space. The velocity and position of each particle in the next iteration are calculated as

$v_{i,n}^{k+1} = w\,v_{i,n}^{k} + c_1\,\mathrm{rand}_1\,(pbest_{i,n} - x_{i,n}^{k}) + c_2\,\mathrm{rand}_2\,(gbest_{n} - x_{i,n}^{k})$ (3.10)

where i = 1, 2, ..., m and n = 1, 2, ..., N, and

$x_{i,n}^{k+1} = x_{i,n}^{k} + v_{i,n}^{k+1}, \quad \text{if } x_{\min,i,n} \le x_{i,n}^{k+1} \le x_{\max,i,n}$ (3.11)

$x_{i,n}^{k+1} = x_{\min,i,n}, \quad \text{if } x_{i,n}^{k+1} < x_{\min,i,n}$ (3.12)

$x_{i,n}^{k+1} = x_{\max,i,n}, \quad \text{if } x_{i,n}^{k+1} > x_{\max,i,n}$ (3.13)

where m is the number of particles in the swarm, N is the number of dimensions of a particle, k is the iteration (generation) index, $x_{i,n}^{k}$ is the current position of particle i at iteration k, $v_{i,n}^{k}$ is the velocity of particle i at iteration k, w is the inertia weighting factor, the $c_j$ are the acceleration factors and the $\mathrm{rand}_j$ are random numbers between 0 and 1.

In the above procedure, the convergence speed of each particle is influenced by the acceleration factors $c_1$ and $c_2$. The optimization process modifies the positions only slowly if the value of $c_j$ is chosen to be very low; on the other hand, the optimization process can become unstable if the value of $c_j$ is chosen to be very high (Eberhart et al 2000). The first term of Equation (3.10) carries the particle's previous velocity and reflects the memory behaviour of the particle; the second term is the "cognition" part, which represents the private thinking of the particle itself; and the third term is the "social" part, which represents the behaviour of the particle stemming from the experience of the other particles in the population. The convergence of the PSO algorithm is guaranteed by proper tuning of the cognitive and social parameters of the algorithm (Clerc 2002).

The following weighting function is usually used in Equation (3.10):

$w = w_{\max} - \dfrac{w_{\max} - w_{\min}}{iter_{\max}} \times iter$ (3.14)


where $w_{\max}$ and $w_{\min}$ are the initial and final weights, respectively, $iter$ is the current iteration number and $iter_{\max}$ is the maximum number of iterations. The model using Equation (3.14) is called the 'inertia weight approach'. The inertia weight is employed to control the impact of the previous history of velocities on the current velocity (Shi et al 1998). Thus, the parameter w regulates the trade-off between the global and local exploration abilities of the swarm: a large inertia weight facilitates exploration, while a small one tends to facilitate exploitation.
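The update rules of Equations (3.10)-(3.14) can be outlined in Python as follows. This is an illustrative sketch: the objective function, the variable bounds and the parameter values are assumed for the example and are not the settings used for the SRM design problem.

```python
import random

def pso(objective, bounds, m=20, iters=100, c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Minimal PSO (minimization) with the linearly decreasing inertia weight of
    Equation (3.14) and the clamped position update of Equations (3.11)-(3.13)."""
    n = len(bounds)
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(m)]
    v = [[0.0] * n for _ in range(m)]
    pbest = [xi[:] for xi in x]
    pbest_val = [objective(xi) for xi in x]
    g = min(range(m), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for it in range(iters):
        w = w_max - (w_max - w_min) * it / iters                 # Equation (3.14)
        for i in range(m):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]                           # Equation (3.10)
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                lo, hi = bounds[d]
                # Equations (3.11)-(3.13): move, then clamp to the variable bounds.
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
            val = objective(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function in two dimensions.
print(pso(lambda p: sum(t * t for t in p), [(-5, 5), (-5, 5)]))
```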

In this work, two modifications are introduced into the classical PSO algorithm and their performance with respect to the design optimization of the SRM is analysed. The modifications are explained in the following sections.

3.5.1 Enhanced PSO with Craziness Factor (EPSO1)

This method incorporates a craziness operator together with chaotic weight updating to maintain the diversity of the particles. In bird flocking or fish schooling, a bird or a fish frequently changes direction. This is described using a "craziness" factor (Kennedy et al 1995). To maintain the diversity of the particles in an optimization algorithm, it is necessary to introduce a craziness operation into the PSO algorithm (Ho et al 2005). Consequently, before its position is updated using Equation (3.11), the velocity of the particle is modified as

$v_{i,n}^{k+1} = v_{i,n}^{k+1} + p(r_4)\,\mathrm{sign}(r_4)\,v^{cr}$ (3.15)

where $r_4$ is a random number uniformly distributed between 0 and 1 and $v^{cr}$ is a random parameter uniformly chosen between $v_{\min}$ and $v_{\max}$. $p(r_4)$ and $\mathrm{sign}(r_4)$ are defined respectively as

$p(r_4) = 1 \text{ if } r_4 \le p_{craziness}; \quad 0 \text{ otherwise}$ (3.16)

$\mathrm{sign}(r_4) = 1 \text{ if } r_4 \ge 0.5; \quad -1 \text{ otherwise}$ (3.17)

where $p_{craziness}$ is a predefined probability of craziness.

The inertia weight is an important control parameter that affects the PSO's convergence. The use of chaotic sequences (Caponetto et al 2003, Bo Liu et al 2005) in PSO can help the search escape from local minima more easily than traditional PSO methods. In this work, to enrich the searching behaviour and to avoid being trapped in a local optimum, chaotic dynamics is incorporated into PSO. In this context, a Chaotic PSO (CPSO) approach based on the logistic equation is applied for determining the weight factor. The logistic equation is defined as follows:

$y(k) = \mu\,y(k-1)\,(1 - y(k-1))$ (3.18)

where k is the sample index and $\mu$ is the control parameter, $0 \le \mu \le 4$. The behaviour of the system represented by Equation (3.18) changes greatly with the variation of $\mu$. The value of $\mu$ determines whether y stabilizes at a constant value, oscillates between a limited sequence of values, or behaves chaotically in an unpredictable pattern. The behaviour of the system is also sensitive to the initial value of y (Song et al 2007). Equation (3.18) is deterministic, and it displays chaotic dynamics when $\mu = 4$ and $y(1) \notin \{0, 0.25, 0.5, 0.75, 1\}$.

The parameter w of Equation (3.14) is modified using Equation (3.19):

$w_{new} = w \times y(k)$ (3.19)


While the conventional weight decreases monotonically from $w_{\max}$ to $w_{\min}$, the proposed new weight simultaneously decreases and oscillates over the iterations, as shown in Figure 3.3.


Figure 3.3 Weight updation using chaotic sequences
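A compact sketch of the two EPSO1 ingredients is given below: the chaotic weight of Equations (3.18)-(3.19) and the craziness perturbation of Equations (3.15)-(3.17). The function names, the parameter values and the multiplicative combination of w with y(k) reflect the reconstruction above and are intended only as an illustration.

```python
import random

def chaotic_weight(w, y_prev, mu=4.0):
    """Chaotic inertia weight: logistic map (Equation (3.18)) scaling the
    conventional weight w (Equation (3.19), multiplicative form assumed)."""
    y = mu * y_prev * (1.0 - y_prev)
    return w * y, y

def craziness(v, v_min, v_max, p_craziness=0.3):
    """Craziness perturbation of one velocity component, Equations (3.15)-(3.17),
    applied before the position update."""
    r4 = random.random()
    p = 1.0 if r4 <= p_craziness else 0.0     # Equation (3.16)
    sign = 1.0 if r4 >= 0.5 else -1.0         # Equation (3.17)
    v_cr = random.uniform(v_min, v_max)       # random craziness velocity
    return v + p * sign * v_cr                # Equation (3.15)

# Example usage inside one PSO iteration (values assumed for illustration).
w_new, y = chaotic_weight(w=0.9, y_prev=0.48)
print(w_new, craziness(0.12, v_min=-0.5, v_max=0.5))
```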

3.5.2 Enhanced PSO with Concept of Repellor (EPSO2)

In this method, the improvements include two aspects. Firstly, the concept of the repellor, which acts as a complement to the concept of the attractor, is introduced into PSO; that is, the particle is made to remember its worst position as well. With the inclusion of this worst experience component, the particle can bypass its previous worst position and try to occupy a better position. Secondly, a chaotic sequence is applied to update the inertia weight instead of a linearly decreasing function. According to these improvements, the modified velocity update formula is given as


$v_{i,n}^{k+1} = w\,v_{i,n}^{k} + c_1\,\mathrm{rand}_1\,(pbest_{i,n} - x_{i,n}^{k}) + c_2\,\mathrm{rand}_2\,(gbest_{n} - x_{i,n}^{k}) + c_3\,p(r_3)\,r_3\,(x_{i,n}^{k} - pworst_{i,n})$ (3.20)

where $c_3$ is a weight factor and $r_3$ is a random number uniformly distributed between 0 and 1. $p(r_3)$ is defined as

$p(r_3) = 1 \text{ if } r_3 \le p_{repel}; \quad 0 \text{ otherwise}$ (3.21)

where $p_{repel}$ is a predefined probability of inclusion of the worst experience component.

The third component introduced in the velocity update of Equation (3.20) is called the bad experience component. It makes the particle remember its previously visited worst position and helps in exploring the search space very effectively to identify the promising solution region (Leontitsis et al 2002). The positions are updated using Equations (3.11)-(3.13).

In addition to the incorporation of the concept of the repellor, the parameter w is updated using the chaotic sequence as per Equation (3.19).
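The modified velocity update of Equations (3.20)-(3.21) can be sketched for a single dimension as follows. The parameter values are assumed, and the sign of the repellor term (pushing the particle away from pworst) follows the repellor interpretation of the reconstruction above.

```python
import random

def epso2_velocity(v, x, pbest, gbest, pworst, w, c1=2.0, c2=2.0, c3=1.0, p_repel=0.3):
    """One-dimensional velocity update per Equation (3.20): attraction toward
    pbest and gbest plus a probabilistic bad experience (repellor) term."""
    r1, r2, r3 = random.random(), random.random(), random.random()
    p = 1.0 if r3 <= p_repel else 0.0             # Equation (3.21)
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x)
            + c3 * p * r3 * (x - pworst))         # repellor component

# Example with assumed scalar values for a single dimension.
print(epso2_velocity(v=0.1, x=0.5, pbest=0.6, gbest=0.7, pworst=0.2, w=0.7))
```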

3.6 DIFFERENTIAL EVOLUTION

DE, proposed by Storn and Price (1995 and 1997), is one of the most prominent new generation evolutionary algorithms. The advantages of DE over other evolutionary algorithms include its simple and compact structure, few control parameters and good convergence characteristics. DE exhibits consistent and reliable performance when applied to problems in various fields such as parameter identification (Subudhi et al 2009), economic dispatch of electric power systems (Coelho et al 2006, Panda et al 2009), image processing applications (Coelho et al 2009) and other significant engineering applications (Das et al 2008, Das et al 2011). DE combines simple arithmetical operators with the classical operators of recombination, mutation and selection to evolve a final solution from a randomly generated starting population.

In each generation, the individuals of the current population are designated as target vectors. For each target vector, the mutation operation produces a mutant vector by adding the weighted difference between two randomly chosen vectors to a third vector. By mixing the parameters of the mutant vector with those of the target vector, a new vector, called the trial vector, is generated by the crossover operation. If the trial vector obtains a better fitness value than the target vector, the trial vector replaces the target vector in the next generation.

Price and Storn (1995 and 1997) proposed ten different variants of DE based on the individual being perturbed, the number of individuals used in the mutation process and the type of crossover used. Each strategy generates trial vectors by adding the weighted difference between randomly selected members of the population to the vector being perturbed. The general naming convention is DE/x/y/z, where DE stands for differential evolution, x is a string denoting the vector to be perturbed, y is the number of difference vectors considered for the perturbation of x, and z stands for the type of crossover being used, exponential or binomial. The optimization procedure of DE is given by the following steps.

Step 1: Parameter setup

Initialize the DE parameters such as the population size, the boundary constraints of the optimization variables, the mutation factor (F), the crossover rate (CR), and the maximum number of iterations (generations), iter_max.


Step 2: Initialization of the population

DE starts with a population of NP D-dimensional search variable vectors. The i-th vector of the population at the current generation t is given by

$X_i(t) = [x_{i,1}(t), x_{i,2}(t), x_{i,3}(t), \ldots, x_{i,D}(t)]$ (3.22)

There is a feasible numerical range for each search variable, within which the value of the parameter should lie for better search results. Initially, the problem parameters or independent variables are initialized within their feasible numerical range. If the j-th parameter of the given problem has lower and upper bounds $x_j^L$ and $x_j^U$ respectively, then the j-th component of the i-th population member is initialized as

$x_{i,j}(0) = x_j^L + \mathrm{rand}(0,1)\,(x_j^U - x_j^L)$ (3.23)

Step 3: Evaluation of the population

Evaluate the fitness value of each individual of the population.

Step 4: Mutation operation (or differential operation)

In each iteration, to change a population member $X_i(t)$, a donor vector $V_i(t)$ is created. To create $V_i(t)$ for the i-th member, three other parameter vectors (the $r_1$, $r_2$ and $r_3$ vectors) are selected at random from the current population. A scalar number F scales the difference of two of the three vectors, and the scaled difference is added to the third one to obtain the donor vector $V_i(t)$. The mutation process for the j-th component of each vector is expressed as

$v_{i,j}(t+1) = x_{r_1,j}(t) + F\,(x_{r_2,j}(t) - x_{r_3,j}(t))$ (3.24)

The method of creating the donor vector distinguishes the various DE schemes; Price and Storn have suggested ten different mutation strategies. The above mutation strategy is referred to as DE/rand/1. This scheme uses a randomly selected vector $X_{r_1}$, and only one weighted difference vector $F\,(X_{r_2} - X_{r_3})$ is used to perturb it. In this work the mutation strategy DE/best/1 is used. In this scheme the vector to be perturbed is the best vector of the current population, and the perturbation is caused by a single difference vector:

$v_{i,j}(t+1) = x_{best,j}(t) + F\,(x_{r_1,j}(t) - x_{r_2,j}(t))$ (3.25)

F is a real parameter, called the mutation factor, which controls the amplification of the difference between two individuals so as to avoid search stagnation.

Step 5: Crossover operation

To increase the potential diversity of the population, a crossover operator is used. DE uses two kinds of crossover schemes, namely 'exponential' and 'binomial'. In this work binomial crossover is used. In this scheme, crossover is performed on each of the D variables whenever a randomly picked number between 0 and 1 is within the CR value. The scheme may be outlined as

$u_{i,j}(t) = v_{i,j}(t) \text{ if } \mathrm{rand}(0,1) \le CR; \quad x_{i,j}(t) \text{ if } \mathrm{rand}(0,1) > CR$ (3.26)

In this way the trial vector is developed from the elements of the target vector and the elements of the donor vector.


Step 6: Selection operation

The selection operator is used to determine which of the target vector and the trial vector survives into the next generation. The selection process may be outlined as

$X_i(t+1) = U_i(t) \text{ if } f(U_i(t)) \le f(X_i(t)); \quad X_i(t) \text{ if } f(X_i(t)) < f(U_i(t))$ (3.27)

where f is the function to be minimized. If the new trial vector yields a better value of the fitness function, it replaces the target vector in the next generation; otherwise the target vector is retained in the population. Once a new population is installed, the process of mutation, recombination and selection is repeated until the number of generations reaches iter_max.
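A minimal sketch of Steps 1-6 for the DE/best/1 scheme with binomial crossover is given below. The objective function, bounds and control parameter values are assumed for the example; the forced copy of at least one donor component and the clamping of the trial vector to the bounds are common implementation details that go beyond the equations above.

```python
import random

def de_best_1_bin(objective, bounds, NP=30, F=0.8, CR=0.9, iter_max=100):
    """Minimal DE/best/1/bin sketch following Steps 1-6 and Equations (3.23)-(3.27)."""
    D = len(bounds)
    pop = [[lo + random.random() * (hi - lo) for lo, hi in bounds]   # Equation (3.23)
           for _ in range(NP)]
    fit = [objective(x) for x in pop]

    for _ in range(iter_max):
        best = pop[min(range(NP), key=lambda i: fit[i])]
        for i in range(NP):
            r1, r2 = random.sample([j for j in range(NP) if j != i], 2)
            trial = pop[i][:]
            j_rand = random.randrange(D)    # ensure at least one donor component
            for j in range(D):
                if random.random() <= CR or j == j_rand:             # Equation (3.26)
                    v = best[j] + F * (pop[r1][j] - pop[r2][j])      # Equation (3.25)
                    lo, hi = bounds[j]
                    trial[j] = min(max(v, lo), hi)
            f_trial = objective(trial)
            if f_trial <= fit[i]:                                    # Equation (3.27)
                pop[i], fit[i] = trial, f_trial
    i_best = min(range(NP), key=lambda i: fit[i])
    return pop[i_best], fit[i_best]

# Example: minimize the sphere function in two dimensions.
print(de_best_1_bin(lambda x: sum(t * t for t in x), [(-5, 5), (-5, 5)]))
```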

3.6.1 Differential Evolution with Chaotic Sequences

The three vital control parameters of DE are the population size, the mutation factor and the crossover rate. The speed and robustness of the search are affected by the variation of these parameters. The difficulty in the use of DE arises from the fact that the choice of these parameters is mainly based on empirical evidence and practical experience (Qin et al 2005). DE's parameters are usually kept constant throughout the entire search process; however, it is difficult to set these control parameters properly. The application of chaotic sequences in the design of the mutation factor is a powerful strategy to diversify the DE population and improve DE's performance in preventing premature convergence to local minima (Coelho et al 2009). Chaotic sequences can be a good alternative for providing search diversity in stochastic optimization procedures: owing to the ergodicity property, chaos can be used to enrich the searching behaviour and to avoid being trapped in a local optimum. In this work, chaotic dynamics based on the logistic equation is incorporated into DE for determining the mutation factor.

The two Chaotic DE (CDE) approaches combining DE with chaotic sequences are described as follows.

CDE1 approach: the mutation factor F of Equation (3.25) is replaced by the chaotic value y(k), giving

$v_i(t+1) = x_{best}(t) + y(k)\,(x_{r_1}(t) - x_{r_2}(t))$ (3.28)

CDE2 approach: the parameter F of Equation (3.25) is modified using the following equation:

$v_i(t+1) = x_{best}(t) + y(k)\,\exp\!\left(-\dfrac{iter}{iter\_max}\right)(x_{r_1}(t) - x_{r_2}(t))$ (3.29)

The value of y(k) is determined using Equation (3.18).
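The following sketch shows how the constant mutation factor F could be replaced by a chaotic value as in the CDE1 and CDE2 approaches. The decaying exponential in CDE2 follows the reconstruction of Equation (3.29); the function names and values are illustrative.

```python
import math

def logistic(y_prev, mu=4.0):
    """Logistic map of Equation (3.18)."""
    return mu * y_prev * (1.0 - y_prev)

def chaotic_mutation_factor(y_prev, iteration, iter_max, approach="CDE1"):
    """Return (F, y): the factor applied to the difference vector and the new
    chaotic state. CDE1 uses y directly (Equation (3.28)); CDE2 damps it over
    the iterations (Equation (3.29))."""
    y = logistic(y_prev)
    if approach == "CDE1":
        return y, y
    return y * math.exp(-iteration / iter_max), y

# Example: factor used in place of F at iteration 10 of 100.
F, y = chaotic_mutation_factor(0.48, iteration=10, iter_max=100, approach="CDE2")
print(F)
```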

3.7 NON-DOMINATED SORTING GENETIC ALGORITHM II

When dealing with multi-objective problems, EAs are superior to classical strategies as they can find multiple Pareto-optimal solutions simultaneously. Numerous versions of MOEAs are discussed in (Deb 2001). In these approaches, a simple evolutionary algorithm is extended to maintain a diverse set of solutions, with the emphasis on moving toward the true Pareto-optimal region. The non-dominated sorting GA (NSGA), proposed by Srinivas and Deb (1994), is one of the first such algorithms. NSGA-II, proposed by Deb et al (2002), is an improved version of NSGA: it has better computational complexity, ensures elitism, and gives a more even distribution of individuals over the Pareto front. In (Deb et al 2002) a comparison of NSGA-II with two other powerful algorithms, the Pareto-archived evolution strategy and the strength Pareto evolutionary algorithm, is presented, which shows that NSGA-II outperforms its competitors when used for solving truly diverse problems. NSGA-II has drawn the attention of researchers worldwide, resulting in the application of the algorithm to a wide variety of engineering problems (Kannan et al 2009, Zhihuan 2010).

Figure 3.4 Flow chart of NSGA-II


The NSGA-II algorithm uses non-dominated sorting for fitness assignment. One individual is said to dominate another if (1) its solution is no worse than the other in all objectives and (2) its solution is strictly better than the other in at least one objective. The offspring population is first created using the parent population. The two populations are then combined, and non-dominated sorting is used to classify the entire combined population; this allows a global non-domination check among the offspring and parent solutions. All individuals not dominated by any other individual are assigned front number 1, individuals dominated only by individuals in front number 1 are assigned front number 2, and so on. Simulated binary crossover and polynomial mutation generate new offspring, and crowded tournament selection is then used to select the population for the next generation. The new population is filled with solutions from the different non-dominated fronts, one front at a time: the filling starts with the best non-dominated front and continues with the solutions of the second non-dominated front, followed by the third, and so on, until the N slots available in the new population are filled. The flow chart of the NSGA-II approach is given in Figure 3.4.
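The front assignment described above can be sketched as follows. This is a simple quadratic-time illustration of the dominance-based sorting, not the fast book-keeping procedure of Deb et al (2002); the function name and example data are assumed.

```python
def non_dominated_sort(objs):
    """Assign a Pareto front number (1, 2, ...) to each objective vector
    in `objs` (minimization), as in NSGA-II's fitness assignment."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    front = [0] * len(objs)
    remaining = set(range(len(objs)))
    level = 1
    while remaining:
        # Solutions not dominated by any other remaining solution form this front.
        current = {i for i in remaining
                   if not any(dominates(objs[j], objs[i])
                              for j in remaining if j != i)}
        for i in current:
            front[i] = level
        remaining -= current
        level += 1
    return front

# Example: four two-objective solutions; the first two form front 1.
print(non_dominated_sort([(1, 4), (2, 2), (3, 5), (2, 6)]))
```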

3.7.1 Simulated Binary Crossover

The SBX operator works with two parent solutions and creates two offspring. The difference between the offspring and the parents depends on the crossover distribution index $\eta_c$. The procedure for determining the offspring solutions from the parent solutions is given below. A spread factor $\beta_i$ is defined as the ratio of the absolute difference in the offspring values to that of the parents:

$\beta_i = \left|\dfrac{x_i^{(2,t+1)} - x_i^{(1,t+1)}}{x_i^{(2,t)} - x_i^{(1,t)}}\right|$ (3.30)


First, a random number $u_i$ between 0 and 1 is created. Thereafter, from a specified probability distribution function, the ordinate $\beta_{q_i}$ is found so that the area under the probability curve from 0 to $\beta_{q_i}$ is equal to the chosen random number $u_i$. The probability distribution is

$p(\beta_i) = 0.5(\eta_c + 1)\,\beta_i^{\eta_c} \quad \text{if } \beta_i \le 1; \qquad p(\beta_i) = 0.5(\eta_c + 1)\,\dfrac{1}{\beta_i^{\eta_c + 2}} \quad \text{if } \beta_i > 1$ (3.31)

Using Equation (3.31), $\beta_{q_i}$ is calculated by equating the area under the probability curve to $u_i$ as follows:

$\beta_{q_i} = (2u_i)^{\frac{1}{\eta_c + 1}} \quad \text{if } u_i \le 0.5; \qquad \beta_{q_i} = \left(\dfrac{1}{2(1 - u_i)}\right)^{\frac{1}{\eta_c + 1}} \quad \text{if } u_i > 0.5$ (3.32)

In Equations (3.31) and (3.32) the distribution index $\eta_c$ is any positive real number. A large value of $\eta_c$ gives a higher probability of creating 'near-parent' solutions, and a small value of $\eta_c$ allows distant solutions to be selected as offspring. After obtaining $\beta_{q_i}$ from the probability distribution function, the offspring are calculated as follows:

$x_i^{(1,t+1)} = 0.5\left[(1 + \beta_{q_i})\,x_i^{(1,t)} + (1 - \beta_{q_i})\,x_i^{(2,t)}\right]$ (3.33)

$x_i^{(2,t+1)} = 0.5\left[(1 - \beta_{q_i})\,x_i^{(1,t)} + (1 + \beta_{q_i})\,x_i^{(2,t)}\right]$ (3.34)


The following step-by-step procedure is followed to create the two offspring (a short sketch of these steps is given after the list):

Step 1: Choose a random number $u_i \in [0, 1]$.

Step 2: Calculate $\beta_{q_i}$ using Equation (3.32).

Step 3: Create the offspring using Equations (3.33) and (3.34).
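The sketch below applies these three steps to one pair of parent variable values; the distribution index and the parent values are assumed for illustration, and variable bounds are not handled.

```python
import random

def sbx_pair(x1, x2, eta_c=15.0):
    """Simulated binary crossover on one variable pair, Steps 1-3 and
    Equations (3.32)-(3.34)."""
    u = random.random()                                     # Step 1
    if u <= 0.5:                                            # Step 2, Equation (3.32)
        beta_q = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta_q = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    c1 = 0.5 * ((1.0 + beta_q) * x1 + (1.0 - beta_q) * x2)  # Equation (3.33)
    c2 = 0.5 * ((1.0 - beta_q) * x1 + (1.0 + beta_q) * x2)  # Equation (3.34)
    return c1, c2                                           # Step 3

# Example: two parent values of one design variable.
print(sbx_pair(1.0, 3.0))
```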

3.7.2 Polynomial Mutation

In polynomial mutation, the probability distribution function is a polynomial function instead of a normal distribution. The shape of the probability distribution is directly controlled by an external parameter $\eta_m$, and the distribution remains unchanged throughout the iterations. The mutated offspring are given by

$y_i^{(1,t+1)} = x_i^{(1,t+1)} + (x_i^U - x_i^L)\,\delta_i$ (3.35)

where the parameter $\delta_i$ is calculated from the polynomial probability distribution

$p(\delta) = 0.5(\eta_m + 1)(1 - |\delta|)^{\eta_m}$ (3.36)

$\delta_i = (2r_i)^{\frac{1}{\eta_m + 1}} - 1 \quad \text{if } r_i < 0.5; \qquad \delta_i = 1 - \left[2(1 - r_i)\right]^{\frac{1}{\eta_m + 1}} \quad \text{if } r_i \ge 0.5$ (3.37)

where $r_i$ is a random number between 0 and 1.
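A one-variable sketch of Equations (3.35)-(3.37) follows; the distribution index, the bounds and the clamping of the result to those bounds are assumptions made for the example.

```python
import random

def polynomial_mutation(x, x_low, x_up, eta_m=20.0):
    """Polynomial mutation of one variable, Equations (3.35)-(3.37)."""
    r = random.random()
    if r < 0.5:                                                # Equation (3.37)
        delta = (2.0 * r) ** (1.0 / (eta_m + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - r)) ** (1.0 / (eta_m + 1.0))
    y = x + (x_up - x_low) * delta                             # Equation (3.35)
    return min(max(y, x_low), x_up)                            # clamp (assumed)

# Example: mutate one design variable bounded to [0, 10].
print(polynomial_mutation(4.2, 0.0, 10.0))
```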


3.7.3 Crowded Tournament Selection

The crowded comparison operator ($\prec_c$) guides the selection process towards the Pareto front. Assume that every solution i has two attributes: a non-domination rank $r_i$ and a crowding distance $d_i$. A solution i wins a tournament against another solution j

if $\{(r_i < r_j) \text{ or } (r_i = r_j \text{ and } d_i > d_j)\}$ (3.38)

The crowding-distance computation requires sorting the population according to each objective function value in ascending order of magnitude. Once this is done, for each objective the two boundary solutions with the largest and smallest objective values are assigned infinite distance values. All other solutions lying between these two are assigned a distance value calculated from the absolute normalized difference between each pair of adjacent solutions.
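The sketch below illustrates both parts of this mechanism: a crowding distance computed per front as described above and the comparison of Equation (3.38). The function names and example values are illustrative.

```python
def crowding_distance(objs):
    """Crowding distance of each solution in one front (list of objective tuples):
    boundary solutions get infinite distance, interior ones the normalized
    difference of their adjacent neighbours, summed over the objectives."""
    n, m = len(objs), len(objs[0])
    d = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objs[i][k])
        d[order[0]] = d[order[-1]] = float("inf")
        span = (objs[order[-1]][k] - objs[order[0]][k]) or 1.0
        for pos in range(1, n - 1):
            d[order[pos]] += (objs[order[pos + 1]][k] - objs[order[pos - 1]][k]) / span
    return d

def crowded_tournament(i, j, rank, dist):
    """Equation (3.38): prefer the lower rank; break ties by larger distance."""
    if rank[i] != rank[j]:
        return i if rank[i] < rank[j] else j
    return i if dist[i] > dist[j] else j

# Example: three solutions in the same front.
objs = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(crowded_tournament(0, 1, rank=[1, 1, 1], dist=crowding_distance(objs)))
```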

3.8 CONCLUSION

This chapter gives an overview of single and multi-objective optimization algorithms based on swarm intelligence and evolutionary computation techniques. Further, the modifications introduced in the PSO and DE techniques to enhance their performance are discussed.