An Overview of Particle Swarm Optimization
Jagdish Chand Bansal
Mathematics Group
Birla Institute of Technology and Science, Pilani
Email: [email protected], bits-pilani.ac.in
2
Overview
Introduction
An Example
Some Developments
Research Issues
3
Optimization Methods
Deterministic and Probabilistic
4
Deterministic Method
Merits
• Give exact solutions
• Do not use any stochastic technique
• Rely on a thorough search of the feasible domain

Demerits
• Not robust: can be applied only to a restricted class of problems
• Often too time-consuming for, or sometimes unable to solve, real-world problems
5
Probabilistic Method

Merits
• Applicable to a wider set of problems, i.e. the function need not be convex, continuous or explicitly defined
• Use a stochastic or probabilistic (random) approach

Demerits
• Converge to the global optimum only probabilistically
• Sometimes get stuck at local optima
6
Some Existing Probabilistic Methods
Simulated Annealing (SA)
Random Search Technique (RST)
Genetic Algorithm (GA)
Memetic Algorithm (MA)
Ant Colony Optimization (ACO)
Differential Evolution (DE)
Particle Swarm Optimization (PSO)
7
Why PSO for Optimization ?
Continuous optimization problems:
• Non-differentiable
• Non-convex
• Highly nonlinear
• Many local optima

Discrete optimization problems:
• NP-complete problems: no efficient algorithm has been found so far for any problem in this class

Search speed matters in both cases.
8
Particle Swarm Optimization Inspiration: Artificial Life

The term artificial life (A-life) is used to describe research into human-made systems that possess some of the essential properties of life. A-life includes two-fold research:
• how computational techniques can help in studying biological phenomena
• how biological techniques can help with computational problems
9
Inspiration cont..
Based on bird flocking, fish schooling, and the swarming theory of A-life.

About fish schooling: “In theory at least, individual members of the school can profit from the discoveries and previous experience of all other members of the school during the search for food.” (sociobiologist E. O. Wilson)
This is the basic concept behind PSO.
10
Inventors
Developed in 1995 by
Prof. James Kennedy (Right)
Prof. Russel Eberhart (Left)
11
PSO uses a population of individuals, to search feasible region of the function space. In this context, the population is called swarm and the individuals are called particles.
Though the PSO algorithm has been shown to perform well, researchers have not yet been able to fully explain how it works.
12
Each particle tries to modify its current position and velocity according to the distance between its current position and pbest, and the distance between its current position and gbest.
Update Equations
13
Velocity Update Equation (Rate of Change in Particle’s Position):

v = v + c1*r1*(pbest − current) + c2*r2*(gbest − current)

where:
• v on the right-hand side is the current velocity; v on the left-hand side is the updated velocity
• r1, r2 are rand(0, 1), to stop the swarm converging too quickly
• c1, c2 are acceleration factors, which can be used to change the weighting between personal and population experience
• c1*r1*(pbest − current) is the cognitive component, which draws individuals back to their previous best situations
• c2*r2*(gbest − current) is the social component, where individuals compare themselves to others in their group

Position Update Equation:

current = current + v
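A minimal sketch of the two update equations in Python, for a single scalar dimension (the function and variable names are illustrative, not from any PSO library):

```python
import random

def pso_update(current, v, pbest, gbest, c1=2.0, c2=2.0):
    """One velocity + position update for a single dimension."""
    r1, r2 = random.random(), random.random()   # rand(0, 1)
    # cognitive component + social component added to current velocity
    v = v + c1 * r1 * (pbest - current) + c2 * r2 * (gbest - current)
    current = current + v                       # position update
    return current, v
```

Note that when the particle already sits on both pbest and gbest, both attraction terms vanish and the particle simply coasts on its current velocity.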
14
PSO Parameters
1. The number of particles :
Typically 20–40 particles. For most problems, a swarm of 10 particles is already large enough to get good results.
2. Dimension of particles: determined by the problem to be optimized.
3. Range of particles: also determined by the problem to be optimized; different ranges can be specified for different dimensions of the particles.
4. Vmax: limiting the velocity helps keep the swarm under control. One option is to set Vmax to the range of the particle, e.g. if X belongs to [−10, 10], then Vmax = 20. Another approach is Vmax = ⌊(UpBound − LoBound)/5⌋.
5. Learning/acceleration factors: c1 and c2 are usually set equal to 2. Other settings have also been used in different papers, but usually c1 equals c2 and both range over [0, 4].
6. The stopping criteria: the maximum number of iterations the PSO executes, and/or a minimum error requirement.
16
Basic Flow of PSO
1. Initialize the swarm from the solution space
2. Evaluate fitness of individual particles
3. Modify gbest, pbest and velocity
4. Move each particle to a new position
5. Go to step 2, and repeat until convergence or a stopping condition is satisfied
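The five steps can be sketched end-to-end in Python. The sphere objective, swarm size and iteration budget below are illustrative choices, and velocities are clamped using the Vmax = (UpBound − LoBound)/5 rule from the parameter slide:

```python
import random

def sphere(p):
    """Illustrative test objective: sum of squares, minimum 0 at the origin."""
    return sum(t * t for t in p)

def pso(f, dim, lo, hi, n=20, iters=100, c1=2.0, c2=2.0):
    # Step 1: initialize the swarm from the solution space
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]                 # Step 2: evaluate fitness
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    vmax = (hi - lo) / 5                       # Vmax rule from the parameter slide
    for _ in range(iters):                     # Step 5: repeat until stopping condition
        for i in range(n):
            for d in range(dim):               # Step 3: modify velocity
                r1, r2 = random.random(), random.random()
                v[i][d] += c1 * r1 * (pbest[i][d] - x[i][d]) \
                         + c2 * r2 * (gbest[d] - x[i][d])
                v[i][d] = max(-vmax, min(vmax, v[i][d]))
                x[i][d] += v[i][d]             # Step 4: move to a new position
            val = f(x[i])
            if val < pval[i]:                  # Step 3: modify pbest and gbest
                pbest[i], pval[i] = x[i][:], val
                if val < gval:
                    gbest, gval = x[i][:], val
    return gbest, gval
```

For example, `pso(sphere, dim=2, lo=-10, hi=10)` returns a point near the origin along with its objective value.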
17
An Example
A step-by-step walk-through of the PSO procedure
18
gbest PSO: the global version is faster but might converge to a local optimum for some problems.

lbest PSO: the local version is a little slower but is not as easily trapped in a local optimum.

One can use the global version to get a quick result and the local version to refine the search.
Two Versions of PSO
19
BINARY PSO
This version has attracted much less attention than continuous PSO.

A particle’s position is not a real value, but either 0 or 1.

Velocity represents the probability of a bit taking the value 0 or 1, not the rate of change of the particle’s position as in PSO for continuous optimization.
20
BINARY PSO
The particle’s position in a dimension is randomly generated using the sigmoid function:

sigm(x) = 1 / (1 + exp(−x))

[Figure: plot of sigm(x) for x in [−6, 6], rising from 0 to 1]
21
Velocity and Position Update
v_id = v_id + c1*r1*(p_id − x_id) + c2*r2*(p_gd − x_id)

x_id = 1 if rand() < sigm(v_id), and 0 otherwise
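A sketch of the binary update for a single bit; the names `p_id` (personal best bit) and `p_gd` (global best bit) follow the slide notation, and the helper functions are illustrative:

```python
import math
import random

def sigm(x):
    """Sigmoid: squashes a velocity into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_pso_update(x_id, v_id, p_id, p_gd, c1=2.0, c2=2.0):
    """One bit of binary PSO: velocity drives a probability, position is 0/1."""
    r1, r2 = random.random(), random.random()
    v_id = v_id + c1 * r1 * (p_id - x_id) + c2 * r2 * (p_gd - x_id)
    x_id = 1 if random.random() < sigm(v_id) else 0   # position is 0 or 1
    return x_id, v_id
```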
22
No Free Lunch Theorem
• In a controversial paper in 1997 (available at AUC library), Wolpert and Macready proved that “averaged over all possible problems or cost functions, the performance of all search algorithms is exactly the same”
• No algorithm is better on average than blind guessing
23
Important Developments
Almost all modifications vary the velocity update equation in some way.
24
PSO-W: With Inertia Weight
PSO-C: With Constriction Factor
FIPSO: Fully Informed PSO
HPSOM: Hybrid PSO with Mutation
MeanPSO: Mean PSO
qPSO: Quadratic approximation PSO
A Brief Review
25
Inertia Weight
Shi and Eberhart introduced the inertia weight w into the algorithm (PSO-W). The iterative expression then becomes:

v = w*v + c1*r1*(pbest − current) + c2*r2*(gbest − current)
current = current + v

w represents the inertia weight, which enhances the exploration ability of the particles.
26
Why Inertia Weight
When using PSO, it is possible for the magnitude of the velocities to become very large.
Performance can suffer if Vmax is inappropriately set.
To control the growth of velocities, a dynamically adjusted or constant inertia weight was introduced.
Larger w - greater global search ability
Smaller w - greater local search ability.
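One common way to combine these two effects is a linearly decreasing inertia weight: large early (greater global search ability) and small late (greater local search ability). The 0.9 to 0.4 range below is a frequently used setting in the literature, not one stated on these slides:

```python
def inertia_weight(it, max_it, w_max=0.9, w_min=0.4):
    """Linearly decrease w from w_max (explore) to w_min (exploit)."""
    return w_max - (w_max - w_min) * it / max_it
```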
27
Constriction Factor
Clerc and Kennedy proposed a constriction factor that is effective in making the algorithm converge (PSO-C):

v = χ*(v + c1*r1*(pbest − current) + c2*r2*(gbest − current))
current = current + v

where φ = c1 + c2 > 4 and

χ = 2 / |2 − φ − √(φ² − 4φ)|
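The constriction coefficient can be computed directly from φ; with the common illustrative setting c1 = c2 = 2.05 (so φ = 4.1), this gives χ ≈ 0.7298:

```python
import math

def constriction(c1, c2):
    """Clerc-Kennedy constriction coefficient; requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```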
28
Fully Informed PSO
A particle is attracted by every other particle in its neighborhood.
v = χ*(v + Σ_{n∈N} c*r_n*(p_n − current))

where N is the particle’s neighborhood and p_n is the best position found by neighbor n.
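A sketch of the fully informed update for one particle in a single dimension, assuming the neighbours’ best positions are available in a list. Splitting the total coefficient φ across the neighbourhood is one common formulation; the constants below are illustrative:

```python
import random

def fips_velocity(v, x, neighbor_bests, chi=0.7298, phi=4.1):
    """v <- chi * (v + sum_n r_n * (phi/|N|) * (p_n - x)), r_n ~ U(0, 1)."""
    k = len(neighbor_bests)
    pull = sum(random.random() * (phi / k) * (p - x) for p in neighbor_bests)
    return chi * (v + pull)
```

If every neighbour’s best equals the current position, the pull vanishes and only the constricted inertia term chi * v remains.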
29
Stagnation

v = w*v + c1*r1*(pbest − current) + c2*r2*(gbest − current)
v = χ*(v + c1*r1*(pbest − current) + c2*r2*(gbest − current))

The PSO algorithm performs well in the early stage, but easily becomes premature in the area of a local optimum.

In both update equations above, the velocity is moderated only by the inertia weight or the constriction factor.

If the current position of a particle is identical to the global best position and its current velocity is small, the velocity in the next iteration will be smaller still. The particle is then trapped in this area, which leads to premature convergence.

This phenomenon is known as stagnation.
30
Hybrid Particle Swarm Optimizer with Mutation (HPSOM)

HPSOM has the potential to escape from a local optimum and search in a new position. The mutation scheme randomly chooses a particle and then moves it to a different position in the search area. The operation is as follows:
mut(x_id) = x_id + Δx, if rand() < 0.5
mut(x_id) = x_id − Δx, if rand() > 0.5

where Δx is randomly obtained from [0, 0.1 × (range_max(d) − range_min(d))].
This mutation operation is governed by a constant called the probability of mutation.
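The mutation step can be sketched per coordinate. The range bounds and the 10% step scale come from the slide; deciding when to apply it (with the stated mutation probability) is left to the caller, and the function name is illustrative:

```python
import random

def hpsom_mutate(x_id, range_min, range_max):
    """Shift a coordinate by a random step of up to 10% of the search range."""
    dx = random.uniform(0, 0.1 * (range_max - range_min))
    if random.random() < 0.5:
        return x_id + dx
    return x_id - dx
```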
31
MeanPSO
[Figure: a number line (0, current, pbest, gbest) contrasting the attractors of standard PSO (pbest and gbest) with those of MeanPSO ((gbest + pbest)/2 and (gbest − pbest)/2)]
32
33
qPSO: Quadratic Approximation (QA)

R1: the particle with the best fitness value
R2 and R3: randomly chosen distinct particles

R* = 0.5 × [(R2² − R3²)·f(R1) + (R3² − R1²)·f(R2) + (R1² − R2²)·f(R3)] / [(R2 − R3)·f(R1) + (R3 − R1)·f(R2) + (R1 − R2)·f(R3)]

where f(Ri) is the objective function value at Ri, for i = 1, 2 and 3.

The calculations are to be done component-wise to obtain R*.
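Applied to one coordinate, the quadratic-approximation point R* (the minimum of the parabola through three points) can be computed directly; degenerate denominators are not handled in this sketch, and the function name is illustrative:

```python
def qa_point(r1, r2, r3, f1, f2, f3):
    """Minimum of the quadratic through (r1,f1), (r2,f2), (r3,f3)."""
    num = (r2**2 - r3**2) * f1 + (r3**2 - r1**2) * f2 + (r1**2 - r2**2) * f3
    den = (r2 - r3) * f1 + (r3 - r1) * f2 + (r1 - r2) * f3
    return 0.5 * num / den
```

As a check on f(x) = (x − 1)² with points 0, 2, 3 (values 1, 1, 4), `qa_point(0, 2, 3, 1, 1, 4)` recovers the true minimum at 1.0.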
34
The Process of Hybridization
Figure 4.1: Transition from the ith iteration to the (i+1)th iteration. [Figure: the swarm s1 … sm is split by particle index into two subswarms; in qPSO one subswarm (s1 … sp) is updated by PSO and the other (sp+1 … sm) by QA, producing s'1 … s'm for the next iteration]
The percentage of swarm which is to be updated by QA is called Coefficient of Hybridization (CH)
35
Flowchart of the qPSO Process:

1. Start: generate a random swarm; set ITER = 0.
2. Evaluate the objective function value of all particles and determine GBEST.
3. If the stopping criterion is satisfied, report the best particle and end.
4. Otherwise, set ITER = ITER + 1 and split the swarm S into subswarms S1 and S2.
5. For S1: determine pbest and gbest (= GBEST), update velocities, and update positions using PSO.
6. For S2: if R1, R2 and R3 can be determined such that at least two of them are distinct, set R1 = GBEST, choose R2 and R3, and update positions using QA; otherwise, update S2 with the PSO step as well.
7. Evaluate the objective function value of all particles, determine GBEST, and return to step 3.
36
Research Issues
Hybridization
Parallel Implementation
New variants: modification of the velocity update equation
Introduce some new operators into PSO
Discrete Particle Swarm Optimization
Interaction with biological intelligence
Convergence Analysis
37
Some Unsolved Issues
• Convergence analysis
• Dealing with discrete variables
• Combination of various PSO techniques to deal with complex problems
• Interaction with biological intelligence
• Cryptanalysis