
Page 1: Lecture29

Algorithms

Local Search Methods

By: Sandeep Kumar Poonia, Asst. Professor, Jagannath University, Jaipur

6/15/2013

Page 2: Lecture29

Genetic algorithms

• Genetic algorithms are based on concepts from population genetics and evolutionary theory. The algorithms are constructed to optimize the fitness of a population of elements through crossover (recombination) and mutation (perturbation) operations on their genes.
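To make the idea concrete, here is a minimal Python sketch of a genetic algorithm (not from the lecture); the one-dimensional fitness function and all parameter values are illustrative assumptions.

```python
# A minimal genetic-algorithm sketch: selection, crossover, mutation.
# The fitness function and parameters are illustrative assumptions.
import random

def fitness(x):
    # Hypothetical objective: maximize -(x - 3)^2, optimum at x = 3.
    return -(x - 3.0) ** 2

def crossover(a, b):
    # Recombination: blend two parent genes.
    w = random.random()
    return w * a + (1 - w) * b

def mutate(x, rate=0.1, scale=0.5):
    # Perturbation: add small random noise with probability `rate`.
    return x + random.gauss(0, scale) if random.random() < rate else x

def genetic_algorithm(pop_size=30, generations=100):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover + mutation to refill the population.
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(genetic_algorithm())   # typically close to 3.0
```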


Page 3: Lecture29

The Ant algorithm

The Ant algorithm is based on the observation of real ants' behavior. Ants can coordinate their activities via stigmergy, a way of indirect communication through the modification of the environment. The main idea of the ant algorithm is to use the self-organizing principles of artificial agents which collaborate to solve problems.
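As an illustration of stigmergy, the following Python sketch lets artificial ants build tours biased by a shared pheromone table and then update that table, loosely following the usual ant-colony recipe; all distances and parameter values are illustrative assumptions, not values from the lecture.

```python
# Ants choose the next city with probability proportional to pheromone
# (and inverse distance), then deposit pheromone on the edges of good tours.
import random

cities = ["A", "B", "C", "D"]
dist = {(a, b): 1 + abs(i - j)
        for i, a in enumerate(cities) for j, b in enumerate(cities) if a != b}
pheromone = {edge: 1.0 for edge in dist}          # the shared "environment"

def build_tour(start="A", alpha=1.0, beta=2.0):
    tour, current = [start], start
    while len(tour) < len(cities):
        candidates = [c for c in cities if c not in tour]
        weights = [pheromone[(current, c)] ** alpha * (1.0 / dist[(current, c)]) ** beta
                   for c in candidates]
        current = random.choices(candidates, weights=weights)[0]
        tour.append(current)
    return tour

def tour_length(tour):
    return sum(dist[e] for e in zip(tour, tour[1:] + tour[:1]))

def update_pheromone(tours, rho=0.5):
    # Evaporation, then deposit an amount inversely proportional to tour length.
    for e in pheromone:
        pheromone[e] *= (1 - rho)
    for t in tours:
        for e in zip(t, t[1:] + t[:1]):
            pheromone[e] += 1.0 / tour_length(t)

for _ in range(50):                               # a few colony iterations
    update_pheromone([build_tour() for _ in range(10)])
print(min((build_tour() for _ in range(10)), key=tour_length))
```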

Page 4: Lecture29

Tabu search

• Tabu search is a deterministic iterative-improvement local search method with the possibility of accepting a worse-cost local solution in order to escape from a local optimum. The set of legal local solutions is restricted by a tabu list, which is designed to prevent the search from going back to recently visited solutions. The solutions on the tabu list are not accepted in the next iteration.
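A tabu list can be kept as a short fixed-length memory of recent moves; a minimal Python sketch follows (the move representation, a pair of swapped positions, is an assumption used only for illustration).

```python
# A fixed-length tabu list: oldest entries fall off automatically.
from collections import deque

class TabuList:
    def __init__(self, size=7):
        self.moves = deque(maxlen=size)

    def add(self, move):
        self.moves.append(move)

    def is_tabu(self, move):
        return move in self.moves

tabu = TabuList(size=3)
tabu.add((1, 2))                                   # forbid reversing a recent swap
print(tabu.is_tabu((1, 2)), tabu.is_tabu((0, 3)))  # True False
```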


Page 5: Lecture29

Particle Swarm Optimization (PSO) is a population-based stochastic optimization method proposed by James Kennedy and R. C. Eberhart in 1995. It is motivated by the social behavior of organisms such as bird flocking and fish schooling. In the PSO algorithm, the potential solutions, called particles, are flown through the problem hyperspace. The change of position of a particle is called its velocity. The particles change their positions over time. During flight, a particle's velocity is stochastically accelerated toward its previous best position and toward a neighborhood best solution. PSO has been successfully applied to various optimization problems, artificial neural network training, fuzzy system control, and others.
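A minimal Python sketch of the velocity and position updates described above, assuming a toy one-dimensional objective and standard (illustrative) coefficient values:

```python
# PSO sketch: each particle is pulled toward its personal best and the global best.
import random

def f(x):                       # hypothetical objective to minimize
    return (x - 3.0) ** 2

def pso(n_particles=20, iterations=100, w=0.7, c1=1.5, c2=1.5):
    x = [random.uniform(-10, 10) for _ in range(n_particles)]   # positions
    v = [0.0] * n_particles                                     # velocities
    pbest = x[:]                                                # personal bests
    gbest = min(x, key=f)                                       # global/neighborhood best
    for _ in range(iterations):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # velocity is stochastically accelerated toward pbest and gbest
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] += v[i]
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
        gbest = min(pbest, key=f)
    return gbest

print(pso())   # typically close to 3.0
```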


Page 6: Lecture29

The Bees Algorithm is a new population-based search algorithm, first developed in 2005 by Pham DT et al. [1] and Karaboga D. [2] independently. The algorithm mimics the food-foraging behaviour of swarms of honey bees. In its basic version, the algorithm performs a kind of neighbourhood search combined with random search and can be used for optimization problems.
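A minimal Python sketch of this basic version, combining neighbourhood search around the best sites with random scouting; the toy objective and parameter values are illustrative assumptions.

```python
# Bees-algorithm sketch: neighbourhood search around the best sites + random scouts.
import random

def f(x):                                   # hypothetical objective to minimize
    return (x - 3.0) ** 2

def bees(n_scouts=20, n_best=5, n_recruits=10, patch=0.5, iterations=100):
    sites = [random.uniform(-10, 10) for _ in range(n_scouts)]
    for _ in range(iterations):
        sites.sort(key=f)
        new_sites = []
        for s in sites[:n_best]:            # neighbourhood search around the best sites
            recruits = [s + random.uniform(-patch, patch) for _ in range(n_recruits)]
            new_sites.append(min(recruits + [s], key=f))
        # the remaining bees scout randomly
        new_sites += [random.uniform(-10, 10) for _ in range(n_scouts - n_best)]
        sites = new_sites
    return min(sites, key=f)

print(bees())   # typically close to 3.0
```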


Page 7: Lecture29

Contents

1. Introduction

2. Genetic Algorithms

3. Non-linear Programming Problems

4. Foundation of Genetic Algorithms

5. Metaheuristic Algorithms


Page 8: Lecture29

6. Simulated Annealing

7. Tabu Search

8. Ant Algorithms

9. Particle Swarm Optimization

10. Bee Algorithms


Page 9: Lecture29

11. The Other Bio-Inspired Metaheuristic Algorithms

12. Traveling Salesman Problems

13. Knapsack Problems

14. Set Covering Problems

15. Minimum Spanning Tree Problems

16. Flow Shop Scheduling Problems

17. Job Shop and Open Shop Scheduling Problems


Page 10: Lecture29

18. Bin Packing Problems

19. Data Mining

20. Quadratic Assignment Problems

21. Reliability Optimization Problems

22. Layout and Location Problems


Page 11: Lecture29

1. Computational Complexity

We do not expect NP-hard (and NP-complete) problems to be solved in a polynomial number of steps.

Ambati et al. (1991): an evolutionary algorithm that achieves heuristic solutions 25% worse than the expected optimal solution on random Traveling Salesman Problem instances in O(N log N) time.

Fogel (1993): an evolutionary algorithm that achieves heuristic solutions 10% worse than the expected optimal solution on random Traveling Salesman Problem instances in O(N²) time.


Page 12: Lecture29

2. Search Spaces (Solution Space)

Algorithm – Problem Solving

(Slide figure: the search moves from an initial state through the current state toward the goal state.)

How can we do an efficient search in the state space?


Page 13: Lecture29

1. To obtain a good initial state.

2. Goal identification (identify the optimal condition).

3. Control the search process.

For some problems, we are not able to identify the goal state.


Page 14: Lecture29

Evolution Process

(Slide figure: an initial population {S01, S02, …, S0k} evolves into a 1st generation {S11, S12, …, S1k}, a 2nd generation {S21, S22, …, S2k}, and so on up to an n-th generation {Sn1, Sn2, …, Snk}.)

Genetic Algorithm, Ant Algorithm, Particle Swarm Optimization, Bee Algorithm, Firefly Algorithm, . . .


Page 15: Lecture29

Goal identification:

1. The goal state can be identified (e.g. linear programs, convex programs).

2. The goal state cannot be identified (e.g. the traveling salesman problem, the quadratic assignment problem, the knapsack problem, scheduling problems, . . . ).

Example 1: Simplex algorithm for a linear program

Goal identification: primal-feasible and dual-feasible condition.

Control of the search process: moving to an improved adjacent basic solution.

Example 2: Steepest descent algorithm for a convex program

Goal identification: Karush-Kuhn-Tucker condition.

Control of the search process: moving in the steepest descent direction (minimization problem).
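A minimal Python sketch of steepest descent on an assumed convex quadratic, stopping when the gradient is nearly zero (a simple stand-in for checking the optimality condition of an unconstrained problem):

```python
# Steepest descent: repeatedly move against the gradient until it vanishes.
def grad(x, y):
    # gradient of the assumed objective f(x, y) = (x - 1)^2 + 2*(y + 2)^2
    return 2 * (x - 1), 4 * (y + 2)

def steepest_descent(x=0.0, y=0.0, step=0.1, tol=1e-8, max_iter=10_000):
    for _ in range(max_iter):
        gx, gy = grad(x, y)
        if gx * gx + gy * gy < tol:          # approximate optimality reached
            break
        x, y = x - step * gx, y - step * gy  # move in the steepest descent direction
    return x, y

print(steepest_descent())   # converges to approximately (1, -2)
```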


Page 16: Lecture29

Example 3: Genetic algorithm for the Traveling Salesman Problem

Goal identification: the optimal condition cannot be identified efficiently; heuristic rules are used to identify approximate solutions.

Control of the search process: crossover, mutation and selection operations.

Example 4: Ant algorithm for the Traveling Salesman Problem

Goal identification: the optimal condition cannot be identified efficiently; heuristic rules are used to identify approximate solutions.

Control of the search process: pheromone and state-transition probability.


Page 17: Lecture29

A state space of a Traveling Salesman Problem with n = 4.

Partial solutions: (A X X X) → (A B X X), (A C X X), (A D X X)

Complete solutions: (A B C D), (A B D C), (A C B D), (A C D B), (A D B C), (A D C B)
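The complete solutions above can be enumerated directly; a small Python sketch:

```python
# Enumerate all tours on four cities that start at A, matching the list above.
from itertools import permutations

cities = ["A", "B", "C", "D"]
for p in permutations(cities[1:]):
    print(("A",) + p)   # (A B C D), (A B D C), (A C B D), (A C D B), (A D B C), (A D C B)
```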


Page 18: Lecture29

3. Search Methods

Search Methods:

• Exact Search (systematic search in the space) – complete search

• Local Search (partial search in the space) – incomplete search


Page 19: Lecture29

Exact search methods:

Advantage: they produce an optimal solution and are able to detect that a given problem has no feasible solution.

Disadvantage: they are time-consuming; for example, they are infeasible for real-time problems.

Examples: traversal search, backtracking algorithms, branch and bound algorithms, dynamic programming methods, etc.

Local search methods:

Advantage: they are time-efficient and easy to program.

Disadvantage: they may not produce an optimal solution and are not able to detect that a given problem has no feasible solution.

Traditional local search does not provide a mechanism for the search to escape from a local optimum. The goal of local search is to find a solution that is as close as possible to the optimum.

Page 20: Lecture29

Local search

• Local search is a metaheuristic method for solving computationally hard optimization problems.

• Local search can be used on problems that can be formulated as finding a solution maximizing a criterion among a number of candidate solutions.

• Local search algorithms move from solution to solution in the space of candidate solutions (the search space) by applying local changes, until a solution deemed optimal is found or a time bound has elapsed.


Page 21: Lecture29

Local search

• A local search algorithm starts from a candidate solution and then iteratively moves to a neighbor solution. This is only possible if a neighborhood relation is defined on the search space.

• Every candidate solution has more than one neighbor solution; the choice of which one to move to is made using only information about the solutions in the neighborhood of the current one, hence the name local search.

Page 22: Lecture29

Metaheuristic

• A metaheuristic is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Metaheuristics make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions.



Page 27: Lecture29

Background and Motivation

Local Search Methods

Metaheuristics

Integer Programming

Exact Methods


Page 28: Lecture29

Background and Motivation

Local Search Methods / Metaheuristics:

• Local Improvement

• Tabu Search

• Iterated Local Search

• Simulated Annealing

• Genetic Algorithms

• Evolutionary Algorithms

• Ant Colony Optimization

• Scatter Search

• Memetic Algorithms

• Etc.


Page 29: Lecture29

Background and Motivation

Integer Programming / Exact Methods:

• Branch-and-bound

• Branch-and-cut

• Column generation

• Branch-and-price

• Dynamic programming

• Lagrangian relaxation

• Linear relaxation

• Surrogate relaxation

• Lower bounds

• Etc.


Page 30: Lecture29

Background and Motivation

Local Search Methods / Metaheuristics:

• Good solutions for complex and large-scale problems

• Short running times

• Easily adapted

Integer Programming / Exact Methods:

• Proved optimal solutions

• Important information on the characteristics and properties of the problem.


Page 31: Lecture29

Background and Motivation

Local Search Methods

Metaheuristics

Integer Programming

Exact Methods

Optimized Search

Heuristics


Page 32: Lecture29

Local search methods can be classified along several dimensions:

• Constructive search methods vs. perturbative search methods

• Deterministic local search methods vs. stochastic local search methods

• Non-cooperative search methods (e.g. neighborhood search, simulated annealing) vs. cooperative (population-based) search methods, which make use of the information of a set of solutions (e.g. genetic algorithms, ant algorithms, . . .)

Page 33: Lecture29

Constructive procedure: initial state → partial solution → complete solution.

Iterative improvement procedure: complete solution → improved solution → accepted solution.

Refinement: the best solution comes from a process of repeatedly refining and inventing alternative solutions.


Page 34: Lecture29

Constructive Search Methods:

Generate a complete solution by iteratively extending partial solutions.

(Slide figure: a weighted graph on the cities A, B, C, D, E, F, G, H, I, J, K with edge weights.)

Example: the tour A J D E F K H I G C B A; (A J D E F K H I G C B) is a complete solution.
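A minimal Python sketch of a constructive method, a nearest-neighbour heuristic that extends a partial tour one city at a time; the small distance matrix is an illustrative assumption, not the graph from the slide.

```python
# Nearest-neighbour construction: grow a partial tour by always adding the
# closest unvisited city, then close the tour.
distances = {
    "A": {"B": 5, "C": 3, "D": 7},
    "B": {"A": 5, "C": 4, "D": 6},
    "C": {"A": 3, "B": 4, "D": 8},
    "D": {"A": 7, "B": 6, "C": 8},
}

def nearest_neighbour_tour(start="A"):
    tour = [start]                                    # partial solution
    while len(tour) < len(distances):
        current = tour[-1]
        next_city = min((c for c in distances[current] if c not in tour),
                        key=lambda c: distances[current][c])
        tour.append(next_city)                        # extend the partial solution
    return tour + [start]                             # complete solution (closed tour)

print(nearest_neighbour_tour())   # ['A', 'C', 'B', 'D', 'A'] for this data
```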


Page 35: Lecture29

Perturbation Search Methods:

Given a complete solution, we can easily change it into a new complete solution by modifying one or more solution components.

For example, in the TSP a complete solution (A B C D) is changed into a new solution (A D C B) by interchanging the positions of B and D (e.g. neighborhood search methods, mutation operations).
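A minimal Python sketch of such a perturbation move, reproducing the (A B C D) → (A D C B) example:

```python
# Swap two cities of a complete tour to obtain a neighbouring solution.
import random

def swap_neighbor(tour, i=None, j=None):
    # If no positions are given, pick two positions (other than the start) at random.
    if i is None or j is None:
        i, j = random.sample(range(1, len(tour)), 2)
    new_tour = list(tour)
    new_tour[i], new_tour[j] = new_tour[j], new_tour[i]
    return new_tour

print(swap_neighbor(["A", "B", "C", "D"], 1, 3))   # ['A', 'D', 'C', 'B']
```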



Page 36: Lecture29

For a set of complete solutions, we can easily change them into new complete solutions by modifying one or more solution components among the solutions (e.g. crossover and mutation operations in genetic algorithms).


Page 37: Lecture29

Deterministic Algorithms: in each search step, the algorithm progresses toward the complete solution by making a deterministic decision, e.g. the simplex method, quasi-Newton algorithms, tabu search and many other conventional algorithms.

A deterministic algorithm will produce the same solution for a given problem instance. Even for the same instance, a stochastic algorithm usually produces distinct solutions at each run.

(Slide figure: Ackley's function, a standard multimodal test function.)
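For reference, Ackley's function in its common two-dimensional form; the parameter values a = 20, b = 0.2, c = 2π are the usual defaults, and the global minimum is at the origin.

```python
# Ackley's function in two dimensions; the global minimum is f(0, 0) = 0.
import math

def ackley(x, y, a=20.0, b=0.2, c=2.0 * math.pi):
    term1 = -a * math.exp(-b * math.sqrt(0.5 * (x * x + y * y)))
    term2 = -math.exp(0.5 * (math.cos(c * x) + math.cos(c * y)))
    return term1 + term2 + a + math.e

print(ackley(0.0, 0.0))   # ~0 (global minimum, up to floating-point error)
```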


Page 38: Lecture29

4. Stochastic Algorithms:

They make a random decision at each search step, e.g. Monte Carlo algorithms, simulated annealing, genetic algorithms, ant algorithms, etc.

There are two cases.

(1) The available information – the objective function to be optimized – may be considered possibly erroneous or corrupted by random noise.

(2) In the case of perfect information, we may introduce a random element to guide us when searching for the optimum solution.


Page 39: Lecture29

Why stochastic algorithms?

1. They are efficient for practical use.

2. They are simple to implement. For many applications, a stochastic algorithm is the simplest algorithm available, the fastest, or both.

3. They are very general and can be implemented for a wide class of optimization problems. For example, no differentiable function of real-valued parameters is required, and the problem need not be expressible in any particular constraint language.

4. They can run in parallel. The quality of solutions may improve over time.


Page 40: Lecture29

Transition probabilities for a deterministic algorithm:

Minimize f(x) subject to x ∈ S, with f(xk) = 2 at the current state.

(Slide figure: a configuration graph in which every state has exactly one outgoing transition, each with probability 1.)

A deterministic algorithm will produce the same solution for a given problem instance.


Page 41: Lecture29

Transition probabilities for a stochastic algorithm:

Remark: transition probabilities may depend on the number of iterations.

(Slide figure: a configuration graph on states S1, …, S9 in which each state has several outgoing transitions with fractional probabilities such as 1/8, 1/4, 1/3, 1/2 and 3/4.)

Even for the same instance, a stochastic algorithm usually produces distinct solutions at each run.

Page 42: Lecture29

(Slide figure: the same configuration graph on states S1, …, S9, now with temperature-dependent transition probabilities of the form (e^(-1/T))/3, (e^(-2/T))/3 and (e^(-3/T))/3, together with the complementary probabilities of remaining in a state, e.g. 1 - (e^(-2/T))/3 - (e^(-1/T))/3.)

T = the number of iterations.


Page 43: Lecture29

There are two ways to avoid getting trapped in a local optimum.

1. Accommodate non-greedy search moves: the search is allowed to move to a neighboring state with a worse function value (tabu search, simulated annealing, . . .).

2. Enlarge the neighborhood of each state, i.e. increase the number of edges in the configuration graph. However, the denser the configuration graph, the less efficient each search step will be.


Page 44: Lecture29

General local search algorithm

• G(S) is the value of the objective under schedule S

1. Let k = 1. Start with a schedule S1 and let the best schedule be S0 = S1.

2. Choose a schedule Sc from the neighborhood N(Sk) of Sk.

3. If Sc is accepted, let Sk+1 = Sc; otherwise let Sk+1 = Sk. If G(Sk+1) < G(S0), let S0 = Sk+1.

4. Let k = k + 1. Terminate the search if the stopping criteria are satisfied; otherwise return to step 2.
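A minimal Python sketch of this general scheme; the cost function, swap neighbourhood and greedy acceptance rule below are illustrative assumptions about how the pieces could be represented.

```python
# General local search: pick a candidate from N(Sk), accept or reject it,
# and remember the best schedule S0 found so far.
import random

def swap_neighbor(s):
    i, j = random.sample(range(len(s)), 2)     # local change: swap two positions
    s = list(s)
    s[i], s[j] = s[j], s[i]
    return s

def local_search(S1, neighborhood, accept, G, max_iter=1000):
    Sk, S0 = S1, S1                            # step 1
    for _ in range(max_iter):                  # step 4: stopping criterion
        Sc = neighborhood(Sk)                  # step 2: candidate from N(Sk)
        Sk = Sc if accept(Sc, Sk) else Sk      # step 3: accept or reject
        if G(Sk) < G(S0):
            S0 = Sk
    return S0

G = lambda s: sum(i * x for i, x in enumerate(s))   # hypothetical objective
accept = lambda Sc, Sk: G(Sc) < G(Sk)               # greedy: improving moves only
print(local_search([1, 2, 3, 4], swap_neighbor, accept, G))   # typically [4, 3, 2, 1]
```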


Page 45: Lecture29

Local search example

Schedule representation

• Let the vector S=(j1,…,jn) represent the schedule

– jk=j if j is the kth job in the sequence

• Use EDD to construct the initial schedule

S1= …

• The total weighted tardiness

– Let G(S) be Σj wjTj under schedule S

=> G(S1) = …
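A small Python sketch of these two ingredients, building S1 by the EDD rule and evaluating G(S) = Σj wjTj; the job data are hypothetical, not the instance elided on the slide.

```python
# EDD initial schedule and total weighted tardiness evaluation.
jobs = {                      # j: (processing time p_j, due date d_j, weight w_j)
    1: (10, 12, 2),
    2: (4, 8, 1),
    3: (6, 20, 3),
    4: (8, 15, 2),
}

def edd_schedule(jobs):
    return sorted(jobs, key=lambda j: jobs[j][1])     # earliest due date first

def total_weighted_tardiness(schedule, jobs):
    t, total = 0, 0
    for j in schedule:
        p, d, w = jobs[j]
        t += p                                        # completion time C_j
        total += w * max(0, t - d)                    # tardiness T_j = max(0, C_j - d_j)
    return total

S1 = edd_schedule(jobs)
print(S1, total_weighted_tardiness(S1, jobs))
```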


Page 46: Lecture29

Local search example

Neighborhood structure

• Manipulating S1

1. Pairwise adjacent interchange

2. Try to move a job to a different location in the sequence

• Rules 1 and 2 above define two types of neighborhoods N1 and N2

• N1(S1)=…

• N2(S1)=…
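A small Python sketch that generates both neighbourhoods for a given sequence (the concrete contents of N1(S1) and N2(S1) are left elided as on the slide):

```python
# N1: pairwise adjacent interchanges; N2: move one job to a different position.
def n1(schedule):
    out = []
    for i in range(len(schedule) - 1):
        s = list(schedule)
        s[i], s[i + 1] = s[i + 1], s[i]
        out.append(s)
    return out

def n2(schedule):
    # remove a job and reinsert it elsewhere (may contain a few duplicates)
    out = []
    for i in range(len(schedule)):
        for k in range(len(schedule)):
            if k != i:
                s = list(schedule)
                job = s.pop(i)
                s.insert(k, job)
                out.append(s)
    return out

print(n1([1, 2, 3, 4]))       # 3 adjacent-interchange neighbours
print(len(n2([1, 2, 3, 4])))  # number of reinsertion neighbours
```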


Page 47: Lecture29

Local search example

Choosing Sc

• Assume we use N1

• Methods for choosing Sc from N1(Sk)

1. Randomly

2. Move the job forward that has the highest

contribution to the objective

• Follow rule 2

Interchange jobs … and …

Sc = ( , , , )

G(Sc) =


Page 48: Lecture29

Local search example

Acceptance criteria

• Is G(Sc) < G(Sk)?

• Should we consider accepting Sc if G(Sc) ≥ G(Sk)?

• In this example we only accept if we get an

improvement in the objective


Page 49: Lecture29

Local search example

Stopping criteria

• Max number of iterations

• No or little improvement

– We would terminate the search since we did not

improve the current schedule

• Local optimal solution

– No solution S in N(Sk) satisfies G(S)<G(Sk)


Page 50: Lecture29

Local search

Design criteria

i. The representation of the schedule

ii. The design of the neighborhood

iii. The search process within the

neighborhood

iv. The acceptance-rejection criteria

v. Stopping criteria



Page 59: Lecture29

Simulated Annealing (SA)

• Annealing: Heating of a material (metal) to

a high temperature and then cooling it at a

certain rate to achieve a desired crystalline

structure

• SA avoids getting stuck at a local minimum by accepting a worse schedule Sc with probability

P(Sk, Sc) = exp( -(G(Sc) - G(Sk)) / bk )

Page 60: Lecture29

SA: Temperature parameter

• bk ≥ 0 is the temperature (also called the cooling parameter).

• Initially the temperature is high, making moves to a worse schedule more likely; a ~50% chance of accepting a slightly worse schedule seems to work well.

• As the temperature decreases, the probability of accepting a worse schedule decreases.

– Often, bk = T·a^k for some 0.9 < a < 1 and T > 0.


Page 61: Lecture29

SA algorithm

1. Set k = 1 and select b1. Select S1 and set S0 = S1.

2. Select Sc (randomly) from N(Sk).

i. If G(S0) < G(Sc) < G(Sk), set Sk+1 = Sc and go to 3.

ii. If G(Sc) < G(S0), set S0 = Sk+1 = Sc and go to 3.

iii. If G(Sc) > G(Sk), generate a random number Uk from a Uniform(0,1) distribution (e.g., rand() in Excel). If Uk ≤ P(Sk, Sc), set Sk+1 = Sc; otherwise set Sk+1 = Sk.

3. Select bk+1 ≤ bk. Set k = k + 1. Stop if the stopping criteria are satisfied; otherwise go to 2.
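A minimal Python sketch of this procedure, using the acceptance probability and geometric cooling from the previous slides; the toy cost function, swap neighbourhood and parameter values are illustrative assumptions.

```python
# Simulated annealing: accept worse schedules with probability
# exp(-(G(Sc) - G(Sk)) / b_k), and cool the temperature geometrically.
import math
import random

def swap_neighbor(s):
    i, j = random.sample(range(len(s)), 2)
    s = list(s)
    s[i], s[j] = s[j], s[i]
    return s

def simulated_annealing(S1, G, T=10.0, a=0.95, max_iter=2000):
    Sk, S0, bk = S1, S1, T
    for _ in range(max_iter):
        Sc = swap_neighbor(Sk)                            # random candidate from N(Sk)
        if G(Sc) < G(Sk):
            Sk = Sc                                       # improving move: always accept
            if G(Sc) < G(S0):
                S0 = Sc
        elif random.random() <= math.exp(-(G(Sc) - G(Sk)) / bk):
            Sk = Sc                                       # worse move: accept with SA probability
        bk *= a                                           # cooling: b_{k+1} = a * b_k
    return S0

G = lambda s: sum(i * x for i, x in enumerate(s))         # hypothetical objective
print(simulated_annealing([1, 2, 3, 4], G))               # typically [4, 3, 2, 1]
```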


Page 62: Lecture29

Tabu (taboo?) search

• Tabu search tries to model human memory

processes

• A “tabu-list” is maintained throughout the

search

– Moves according to the items on the list are

forbidden


Page 63: Lecture29

Tabu search algorithm

1. Set k = 1. Select S1 and set S0 = S1.

2. Select Sc from N(Sk).

i. If the move Sk → Sc is on the tabu list, set Sk+1 = Sk and go to 3.

ii. If Sk → Sc is not on the tabu list, set Sk+1 = Sc. Add the reverse move to the top of the tabu list and delete the entry at the bottom. If G(Sc) < G(S0), set S0 = Sc.

3. Set k = k + 1. Stop if the stopping criteria are satisfied; otherwise go to 2.
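A minimal Python sketch of this procedure, using adjacent pairwise interchanges as moves and a fixed-length tabu list; the toy cost function and list length are illustrative assumptions.

```python
# Tabu search: accept any non-tabu move, put the reverse move on the tabu list,
# and keep track of the best schedule seen.
import random
from collections import deque

def tabu_search(S1, G, tabu_size=3, max_iter=200):
    Sk, S0 = S1, S1
    tabu = deque(maxlen=tabu_size)                       # tabu list of recent moves
    for _ in range(max_iter):
        i = random.randrange(len(Sk) - 1)                # candidate: swap positions i, i+1
        move = (Sk[i], Sk[i + 1])                        # identify the move by the jobs swapped
        if move in tabu:
            continue                                     # move is tabu: keep Sk (step 2.i)
        Sc = list(Sk)
        Sc[i], Sc[i + 1] = Sc[i + 1], Sc[i]
        Sk = Sc                                          # accept the non-tabu move (step 2.ii)
        tabu.append((move[1], move[0]))                  # forbid the reverse move
        if G(Sc) < G(S0):
            S0 = Sc
    return S0

G = lambda s: sum(i * x for i, x in enumerate(s))        # hypothetical objective
print(tabu_search([1, 2, 3, 4], G))                      # typically [4, 3, 2, 1]
```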

