MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002


Page 1: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

MAE 552 – Heuristic Optimization

Lecture 6

February 6, 2002

Page 3: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

Simulated Annealing

A pseudo-code of this algorithm might look like this (a runnable Python sketch follows the pseudo-code).

T=current temperature

Do i=1,k

Generate a random displacement for a particle.

Calculate the change in energy, ΔE = E’ - E

If (ΔE ≤ 0) then

it’s a downhill move to lower energy so accept and update configuration

else

it’s an uphill move so generate a random number P’ in [0,1]

compare with Pr(ΔE) = exp(-ΔE / kB T)

if (P’ < Pr(ΔE)) then

accept move and update configuration

else

reject move – keep original configuration

endif

endif

enddo
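For readers who prefer something executable, here is a minimal Python sketch of the same Metropolis sweep. The energy function, the displacement size, and the flat list representation of the configuration are illustrative assumptions, not part of the lecture; kB is absorbed into T.

import math
import random

def metropolis_sweep(x, energy, T, k, step=0.1):
    """Perform k Metropolis trials at temperature T.

    x      -- current configuration (list of floats)
    energy -- callable returning the energy of a configuration
    T      -- temperature (in units where kB = 1)
    k      -- number of trial moves
    step   -- maximum random displacement per coordinate (illustrative choice)
    """
    x = list(x)
    E = energy(x)
    for _ in range(k):
        # Generate a random displacement for one coordinate ("particle").
        i = random.randrange(len(x))
        trial = list(x)
        trial[i] += random.uniform(-step, step)

        # Calculate the change in energy, delta_E = E' - E.
        E_new = energy(trial)
        delta_E = E_new - E

        if delta_E <= 0:
            # Downhill move to lower energy: accept and update configuration.
            x, E = trial, E_new
        else:
            # Uphill move: accept with probability exp(-delta_E / T).
            if random.random() < math.exp(-delta_E / T):
                x, E = trial, E_new
            # Otherwise reject the move and keep the original configuration.
    return x, E

For example, metropolis_sweep([1.0, -2.0], lambda c: sum(v*v for v in c), T=1.0, k=100) performs 100 trial moves on a simple quadratic energy.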

Page 4: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

• SA is an application of the Metropolis algorithm to function optimization.

• It assumes a similarity between the physical annealing of a solid and the global optimization of a function by the following:

1. The value of an objective function can be viewed as the energy of a solid.

2. The values of the Design Variables can be viewed as the configuration of the particles of a solid.

• So, optimizing a function is analogous to finding the ground state of a solid.

Page 5: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

• A parameter T, called the control parameter, is used in place of the temperature when the Metropolis algorithm is applied to function optimization.

• In physical annealing, T has a true physical meaning: the temperature of the material undergoing the annealing process.

• In function optimization, the parameter T is simply an artificial control parameter that governs both the jumps that move out of local minima and the search for the global optimum.

• SA can be considered as a sequence of Metropolis algorithms evaluated for a decreasing sequence of the control parameter, T.

Page 6: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

1. For a high value of T, the objective is ‘melted’ and thus most uphill moves are accepted, which allows a large-scale random search to be performed.

2. As the value of T decreases, fewer uphill moves are accepted. At this stage, searches are confined to a smaller region of the design space and the hill-jumping behavior is somewhat limited. However, some local optima can still be avoided.

3. As the control parameter, T, approaches zero, almost no uphill moves are accepted and the solution is almost ‘frozen’ into its final form. At this stage, SA acts like a traditional downhill-only technique.
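To make these stages concrete (the numbers are illustrative, not from the lecture): for an uphill move with Δf = 1, the acceptance probability exp(-Δf/T) is exp(-1/100) ≈ 0.99 at T = 100, exp(-1/1) ≈ 0.37 at T = 1, and exp(-1/0.01) ≈ 4×10⁻⁴⁴ at T = 0.01, so the same uphill move goes from being accepted almost every time to essentially never being accepted as the control parameter is lowered.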

Page 7: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

•At each value of the control parameter, SA accepts or rejects a new configuration by using the Metropolis algorithm.

•The difference between the values of the evaluation function at two configurations is:

Δf = f(X’) - f(X)

X is the latest accepted solution.

X’ is the trial configuration

Page 8: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

• If Δf ≤ 0: accept the new configuration and use it as the starting point for the next move.

• If Δf > 0:

Generate a random number P’ from U[0,1].

Calculate the probability of acceptance of the move

Pr(Δf) = exp(-Δf / Tk), where Tk is the kth value of the control parameter after the starting value.

• If P’ < Pr(Δf), the new configuration is accepted; otherwise it is rejected.

Page 9: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

•To achieve ‘thermal equilibrium’ at each value of the control parameter, the SA process must go through sufficiently many iterations for the objective function to reach a steady state.

•Then as the control parameter approaches zero, the algorithm converges asymptotically to the global optimum.

Page 10: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

At each value Tk of the control parameter, a Markov chain of m transitions (trial moves) mi,k is generated:

T0: m1,0, m2,0, m3,0, m4,0, ..., mm,0
T1: m1,1, m2,1, m3,1, m4,1, ..., mm,1
T2: m1,2, m2,2, m3,2, m4,2, ..., mm,2
T3: m1,3, m2,3, m3,3, m4,3, ..., mm,3
T4: m1,4, m2,4, m3,4, m4,4, ..., mm,4
T5: m1,5, m2,5, m3,5, m4,5, ..., mm,5
...
Tn: m1,n, m2,n, m3,n, m4,n, ..., mm,n

n = number of levels in the cooling schedule

m = number of transitions in each Markov chain

Page 11: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

The following must be specified in implementing SA:

1. An unambiguous description for the evaluation function (analogous to energy) and possible constraints.

2. A clear representation of the design vector (analogous to the configuration of a solid) over which an optimum is sought.

3. A ‘cooling schedule’ – this includes the starting value of the control parameter, T0, rules to determine when the current value of the control parameter should be reduced and by how much (the ‘decrement rule’), and a stopping criterion to determine when the optimization process should be terminated.

Page 12: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

4. A ‘move set generator’ which generates candidate points (a simple example is sketched after this list).

5. An ‘acceptance criterion’ which decides whether or not a new move is accepted.

• Steps 4 and 5 together are called a ‘transition mechanism’ which results in the transformation of a current state into a subsequent one.
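As an illustration of item 4, a move set generator for a continuous design vector might simply perturb one randomly chosen variable. This is a minimal sketch; the step sizes and the box-constraint clipping are arbitrary choices, not prescribed by the lecture.

import random

def move_set_generator(x, step_sizes, lower, upper):
    """Generate a candidate point X' by perturbing one randomly chosen
    design variable of X (a simple, illustrative move set)."""
    x_new = list(x)
    i = random.randrange(len(x))
    x_new[i] += random.uniform(-step_sizes[i], step_sizes[i])
    # Keep the candidate inside simple box constraints (one possible choice).
    x_new[i] = min(max(x_new[i], lower[i]), upper[i])
    return x_new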


Page 14: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

•The SA algorithm is outlined as follows:

Step 1

Input the starting value of the control parameter (temperature) Tk and set k=0.

Step 2

Choose a starting point (initial configuration) X0 and calculate the value of the objective function (energy) at X0, f(X0). Then set X=X0 and f(X)=f(X0).

Page 15: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

Step 3

Use the transition mechanism to generate a random point X’ and compute f(X’). Evaluate Δf = f(X’) – f(X).

If Δf ≤ 0:
    accept X’, set X = X’, set f(X) = f(X’);
else:
    generate a random number P’ from [0,1],
    compare P’ with Pr(Δf) = exp(-Δf / Tk),
    if P’ < Pr(Δf):
        accept X’, set X = X’, set f(X) = f(X’);
    else:
        reject X’ and keep the original point;
    endif;
endif;

Page 16: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

Step 4

Use the cooling schedule to decide if the steady state (thermal equilibrium) of the objective function has been reached at the current value of the control parameter.

If it is true:

reduce the control parameter by the decrement rule,

set k = k+1

else:

go to Step 3

endif.

Page 17: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

Step 5

Use the stopping criterion to decide if the simulated annealing algorithm has to be terminated.

If it is true:

stop;

else:

go to Step 3;

endif.
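Putting Steps 1–5 together, a bare-bones driver might look like the sketch below. The geometric decrement rule (T ← αT), the fixed chain length used as a stand-in for ‘thermal equilibrium’, and the fixed number of cooling levels used as the stopping criterion are common but arbitrary choices made only for illustration; the lecture leaves all of these to the cooling schedule.

import math
import random

def simulated_annealing(f, x0, T0=100.0, alpha=0.9, chain_len=100,
                        n_levels=50, step=0.1):
    """Bare-bones SA driver (illustrative parameter values).

    f         -- objective (evaluation) function
    x0        -- starting configuration (list of floats)
    T0        -- starting value of the control parameter (Step 1)
    alpha     -- geometric decrement factor, one possible decrement rule
    chain_len -- transitions per Markov chain (proxy for thermal equilibrium)
    n_levels  -- number of cooling levels (fixed-length stopping criterion)
    """
    x, fx = list(x0), f(x0)                      # Step 2
    x_best, f_best = list(x), fx                 # track best point seen (a common practical addition)
    T = T0
    for _ in range(n_levels):                    # outer loop: Steps 4 and 5
        for _ in range(chain_len):               # inner loop: Step 3
            i = random.randrange(len(x))
            x_new = list(x)
            x_new[i] += random.uniform(-step, step)
            f_new = f(x_new)
            df = f_new - fx
            if df <= 0 or random.random() < math.exp(-df / T):
                x, fx = x_new, f_new             # accept the trial point
                if fx < f_best:
                    x_best, f_best = list(x), fx
            # else: reject and keep the current point
        T *= alpha                               # decrement rule, k = k + 1
    return x_best, f_best

For example, simulated_annealing(lambda x: sum(v*v for v in x), [5.0, -3.0]) should return a point near the origin.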

Page 18: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

Common Stopping Criteria

1. If Xbest does not change over successive Markov chains, then stop (a sketch of this check follows the list).

2. Fixed-length cooling schedule – the algorithm automatically stops when T reaches a certain level.

3. A maximum number of function evaluations is reached.
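A minimal sketch of the first criterion, assuming the best objective value is recorded at the end of each Markov chain; the chain count and tolerance are arbitrary illustrative values.

def best_unchanged(best_history, n_chains=3, tol=1e-8):
    """Return True if the best objective value has not changed by more
    than tol over the last n_chains Markov chains (illustrative check)."""
    if len(best_history) <= n_chains:
        return False
    recent = best_history[-(n_chains + 1):]
    return max(recent) - min(recent) <= tol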

Page 19: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm

Two Loops in the SA Algorithm

• There is an inner loop that generates a sequence of trial points until “thermal equilibrium” is reached at that value of the control parameter.

•There is also an outer loop that constantly decreases the control parameter and checks to see if the optimization process should be terminated.

Page 20: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

[Figure: a landscape with three wells labelled A, B, and C, and a ball starting in well A.]

• Start with a ball at point A. Shake it up and it might jump out of A and into B.

•Give it another shake (adding energy) and it might go to C.

• This is the general idea behind SA.

Page 21: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

The SA Algorithm – Convergence Issues

• Since the convergence of a stochastic algorithm is asymptotic, SA obtains a global optimum with probability 1 only asymptotically.

• The larger the number of samples, the higher the probability of the algorithm finding the global optimum.

• In general an infinite number of moves is required to obtain the exact global optimum.

• In practical implementations, this is not realizable and the asymptotic convergence must be approximated. This is done using a proper cooling mechanism which will be discussed next.

Page 22: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

Cooling Schedules

• A cooling schedule is used to achieve convergence to a global optimum in function optimization.

• The cooling schedule describes how the control parameter T changes during the optimization process.

• First let us look at the concept of the acceptance ratio, X(Tk):

X(Tk) = (# of accepted moves) / (# of attempted moves)

• If T is large, almost all moves are accepted: X(Tk) -> 1.

• As T decreases, fewer moves are accepted: X(Tk) -> 0.

• For maximum efficiency, it is important to set a proper value of T0.
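A sketch of how X(Tk) might be estimated in practice, assuming the same kind of random-displacement moves used earlier; the number of trial moves and the step size are arbitrary illustrative choices.

import math
import random

def acceptance_ratio(f, x, T, n_trials=100, step=0.1):
    """Estimate X(T) = (# accepted moves) / (# attempted moves) at control
    parameter T, using random displacements from the fixed point x."""
    fx = f(x)
    accepted = 0
    for _ in range(n_trials):
        x_new = [xi + random.uniform(-step, step) for xi in x]
        df = f(x_new) - fx
        # Metropolis acceptance test at control parameter T.
        if df <= 0 or random.random() < math.exp(-df / T):
            accepted += 1
    return accepted / n_trials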

Page 23: MAE 552 – Heuristic Optimization Lecture 6 February 6, 2002

Simulated Annealing – Cooling Schedule

Steps in a cooling schedule:

1. Choose the starting value of the control parameter, T0.
   • It should be large enough to ‘melt’ the objective function, i.e. to leap over all peaks.
   • This is accomplished by ensuring that the initial X(T0) is close to 1.0 (most random moves are accepted).

2. Start the SA algorithm at some T0 and execute it for some number of transitions, then check X(T0).
   • If X(T0) is not close to 1.0, multiply T0 by a factor greater than 1.0 and execute again.
   • Repeat until X(T0) is close to 1.0 (a sketch of this start-up loop follows).
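A minimal sketch of this start-up loop, reusing the acceptance_ratio estimate from the previous slide; the target ratio of 0.95 and the multiplication factor of 1.5 are arbitrary illustrative choices.

# Assumes acceptance_ratio(f, x, T) from the previous sketch is in scope.
def choose_T0(f, x, T0=1.0, target=0.95, factor=1.5):
    """Raise the starting control parameter until the acceptance ratio
    X(T0) is close to 1.0 (i.e. most random moves are accepted)."""
    while acceptance_ratio(f, x, T0) < target:
        T0 *= factor      # multiply by a factor greater than 1.0 and retry
    return T0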