Informed Search Methods

Page 1: Informed Search Methods

Informed Search Methods

Read Chapter 4

Use the text for more examples: work them out yourself.

Page 2: Informed Search Methods

Best First

• The store is replaced by a sorted data structure

• Knowledge is added by the “sort” function

• No guarantees yet – depends on the quality of the evaluation function

• ~ Uniform Cost with a user-supplied evaluation function (sketched below)
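As a rough illustration (my sketch, not the course's code; all names are illustrative), the store becomes a priority queue, and every bit of added knowledge lives in the caller-supplied evaluation function f:

import heapq
import itertools

def best_first(start, goal_test, successors, f):
    # Generic best-first search: the frontier is a heap ordered by f.
    # successors(state) yields (next_state, step_cost) pairs;
    # f(state, g) is the user-supplied evaluation function.
    tie = itertools.count()               # breaks priority ties without comparing states
    frontier = [(f(start, 0), next(tie), 0, start)]
    explored = set()
    while frontier:
        _, _, g, state = heapq.heappop(frontier)
        if state in explored:
            continue
        if goal_test(state):
            return state, g               # goal state and its path cost
        explored.add(state)
        for nxt, cost in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (f(nxt, g + cost), next(tie), g + cost, nxt))
    return None                           # search space exhausted, no goal found

With f(state, g) = g this is exactly Uniform Cost; any other f encodes the added knowledge.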

Page 3: Informed Search Methods

Concerns

• What knowledge is available?

• How can it be added to the search?

• What guarantees are there?

• Time

• Space

Page 4: Informed Search Methods

Greedy Search

• Add a heuristic h(n): always expand the node with the smallest h

• h(n) = estimated cost of the cheapest solution from state n to the goal

• Require h(goal) = 0.

• Complete? No – it can be misled.

Page 5: Informed Search Methods

Examples:

• Route finding (goal: from A to B):
  – straight-line distance from the current city to B

• 8-tile puzzle:
  – number of misplaced tiles
  – number and distance of misplaced tiles
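For concreteness, here are sketches of both 8-puzzle heuristics, assuming a state is a 9-tuple with 0 marking the blank (the representation is my assumption, not the slides'):

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout, 0 = blank

def misplaced_tiles(state):
    # h1: count of tiles (blank excluded) not in their goal position.
    return sum(1 for tile, goal in zip(state, GOAL) if tile != 0 and tile != goal)

def manhattan(state):
    # h2: total grid ("Manhattan") distance of each tile from its goal square.
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = GOAL.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

Both are admissible: each counts or underestimates work that every solution must do, so neither overestimates the true cost.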

Page 6: Informed Search Methods

A*

• Combines greedy and Uniform Cost

• f(n) = g(n) + h(n), where
  – g(n) = path cost to node n
  – h(n) = estimated cost from n to the goal

• If h(n) <= true cost to the goal, then h is admissible.

• Best-first search using f with an admissible h is A*.

• Theorem: A* is optimal and complete.

Page 7: Informed Search Methods

A* optimality Proof

• Note: along any path from the root, f never decreases.

• This is the definition of monotonicity (consistency).

• Let f* be the cost of the optimal solution.
  – A* expands all nodes with f(n) < f*
  – A* may expand some nodes with f(n) = f*

• Let G be an optimal goal state and G2 a suboptimal one.

Page 8: Informed Search Methods

A* Proof

• Let n be a frontier node on a path to G.

• h admissible => f* >= f(n).

• If G2 were chosen before n, then f(n) >= f(G2) = g(G2) (since h(G2) = 0), so g(G2) <= f*.

• Then G2 is not suboptimal – contradicting the assumption, so A* never selects a suboptimal goal first.

• A* is complete: it searches contours of increasing f.

• A* is exponential in time and space, in general.

Page 9: Informed Search Methods

A* Properties

• Dechter and Pearl: A* is optimal among all algorithms using h (any such algorithm must expand at least as many nodes).

• If 0 <= h1 <= h2 and h2 is admissible, then h1 is also admissible, and A* with h1 will expand at least as many nodes as A* with h2. So a bigger h is better.

• Subexponential if the error in the h estimate is within (approximately) the log of the true cost.

Page 10: Informed Search Methods

A* special cases

• Suppose h(n) = 0 => Uniform Cost

• Suppose every step cost is 1 and h(n) = 0 => Breadth First

• With a non-admissible heuristic:
  – g(n) = 0, h(n) = 1/depth(n) => Depth First

• One code, many algorithms (sketched below)
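To make this concrete, the special cases can be expressed as evaluation functions plugged into a generic best_first(start, goal_test, successors, f) like the earlier sketch (illustrative only):

def uniform_cost_f(state, g):       # h(n) = 0: order by path cost alone
    return g

def greedy_f(h):                    # ignore g entirely, trust h
    return lambda state, g: h(state)

def astar_f(h):                     # f(n) = g(n) + h(n)
    return lambda state, g: g + h(state)

def depth_first_f(state, g):        # non-admissible: with unit step costs g is the
    return 1.0 / (g + 1)            # depth, so deeper nodes look cheaper

With unit step costs, uniform_cost_f also gives breadth-first behavior.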

Page 11: Informed Search Methods

Heuristic Generation

• Relaxation: make the problem simpler

• Route planning:
  – don’t worry about paths: go straight

• 8-tile puzzle:
  – don’t worry about physical constraints: pick up a tile and move it to its correct position
  – better: allow sliding tiles over existing tiles

• The heuristic should be easy to compute

Page 12: Informed Search Methods

Iterative Deepening A*

• Like iterative deepening, but:

• replaces the depth limit with an f-cost limit

• increases the f-cost limit each round by the smallest operator cost

• Complete and optimal
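A sketch of IDA* (not the course's code): this version raises the limit to the smallest f value that exceeded the previous limit, a common variant of the raise-by-smallest-operator-cost rule above.

import math

def ida_star(start, goal_test, successors, h):
    def dfs(state, g, bound, path):
        f = g + h(state)
        if f > bound:
            return None, f                   # cut off; report how far f overshot
        if goal_test(state):
            return path, f
        smallest = math.inf                  # smallest f seen beyond the bound
        for nxt, cost in successors(state):
            if nxt not in path:              # avoid cycles along the current path
                found, t = dfs(nxt, g + cost, bound, path + [nxt])
                if found is not None:
                    return found, t
                smallest = min(smallest, t)
        return None, smallest

    bound = h(start)
    while True:                              # one depth-first pass per f-cost limit
        found, t = dfs(start, 0, bound, [start])
        if found is not None:
            return found                     # path to an optimal goal
        if t == math.inf:
            return None                      # nothing left to explore
        bound = t                            # raise the f-cost limit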

Page 13: Informed Search Methods

SMA*

• A memory-bounded version of A*, due to the textbook authors

• Beware authors.

Page 14: Informed Search Methods

Hill-climbing

• Goal: optimize an objective function.

• Does not require differentiable functions.

• Can be applied to “goal predicate” style problems:
  – BSAT, with the number of satisfied clauses as the objective function

• Intuition: Always move to a better state

Page 15: Informed Search Methods

Some Hill-Climbing Algorithms

• Start = a random state or a special state.

• Until no improvement:
  – Steepest Ascent: find the best successor, OR
  – Greedy: select the first improving successor
  – Go to that successor

• Repeat the above some number of times (restarts), as sketched below.

• Can be done with partial solutions or full solutions.
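A sketch of the steepest-ascent variant with restarts, assuming the caller supplies initial(), neighbors(), and score() (all names illustrative):

def hill_climb(initial, neighbors, score, restarts=10):
    # Steepest-ascent hill climbing with random restarts; maximizes score.
    best = None
    for _ in range(restarts):
        state = initial()                    # fresh (random) starting state
        while True:
            succ = max(neighbors(state), key=score, default=None)
            if succ is None or score(succ) <= score(state):
                break                        # local maximum or plateau: stop
            state = succ                     # move to the best successor
        if best is None or score(state) > score(best):
            best = state                     # keep the best local optimum seen
    return best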

Page 16: Informed Search Methods

Hill-Climbing Algorithm

• In best-first search, replace the storage by a single node

• Works if there is a single hill

• Use restarts if there are multiple hills

• Problems:
  – finds a local maximum, not the global one
  – plateaux: large flat regions (happens in BSAT)
  – ridges: fast up the ridge, slow along the ridge

• Not complete, not optimal

• No memory problems

Page 17: Informed Search Methods

Beam

• A mix of hill-climbing and best-first search

• Storage is a cache of the best K states

• Solves the storage problem, but…

• Not optimal, not complete

Page 18: Informed Search Methods

Local (Iterative) Improving

• Initial state = full candidate solution

• Greedy hill-climbing:
  – if the move is up, take it
  – if flat, probabilistically decide whether to accept the move
  – if down, don’t take it

• We are gradually expanding the possible moves (a sketch follows).
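A sketch of that rule; the step budget and the flat-move probability p_flat are my assumptions:

import random

def local_improve(state, random_neighbor, score, steps=10000, p_flat=0.5):
    # Greedy local improvement over full candidate solutions: up-moves are
    # always taken, flat moves with probability p_flat, down-moves never.
    for _ in range(steps):
        nxt = random_neighbor(state)
        delta = score(nxt) - score(state)
        if delta > 0 or (delta == 0 and random.random() < p_flat):
            state = nxt
    return state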

Page 19: Informed Search Methods

Local Improving: Performance

• Solves the 1,000,000-queens problem quickly

• Useful for scheduling

• Useful for BSAT:
  – (sometimes) solves large problems

• More time, better answer

• No memory problems

• No guarantees of anything

Page 20: Informed Search Methods

Simulated Annealing

• Like hill-climbing, but probabilistically allows down moves, controlled by the current temperature and by how bad the move is.

• Let t[1], t[2], … be a temperature schedule:
  – usually t[1] is high and t[k] = 0.9 * t[k-1]

• Let E be a quality measure of a state.

• Goal: maximize E.

Page 21: Informed Search Methods

Simulated Annealing Algorithm

• Current = a random state; k = 1.

• If t[k] = 0, stop.

• Next = a random successor state.

• If Next is better than Current, move there.

• If Next is worse:
  – let Delta = E(Next) - E(Current) (so Delta < 0)
  – move to Next with probability e^(Delta/t[k])

• k = k + 1; repeat.
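The same algorithm as a sketch; the starting temperature, decay factor, and stopping threshold are illustrative choices, not values from the slides:

import math
import random

def simulated_annealing(state, random_neighbor, E, t=10.0, decay=0.9, t_min=1e-3):
    # Maximize E with the schedule t[k] = decay * t[k-1]; better moves are
    # always taken, worse moves with probability e^(Delta/t), where Delta < 0.
    while t > t_min:                  # treat t_min as the t[k] = 0 stopping point
        nxt = random_neighbor(state)
        delta = E(nxt) - E(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
        t *= decay
    return state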

Page 22: Informed Search Methods

Simulated Annealing Discussion

• No guarantees.

• When t is large, e^(Delta/t) is close to e^0 = 1, so you go almost anywhere.

• When t is small, e^(Delta/t) is close to e^(-inf) = 0 (recall Delta < 0), so you avoid most bad moves.

• After t becomes 0, one often finishes with simple hill-climbing.

• Execution time depends on the schedule; memory use is trivial.

Page 23: Informed Search Methods

Genetic Algorithm

• Weakly analogous to “evolution”

• No theoretical guarantees

• Applies to nearly any problem.

• Population = set of individuals

• Fitness function on individuals

• Mutation operator: a new individual from an old one.

• Cross-over: new individuals from two parents.

Page 24: Informed Search Methods

GA Algorithm (a version)

• Population = a random set of n individuals

• Probabilistically choose n pairs of individuals to mate

• Probabilistically choose n descendants for next generation (may include parents or not)

• Selection probabilities depend on the fitness function, as in simulated annealing.

• How well does it work? Good question. (A sketch of the loop follows.)
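A sketch of this loop, assuming mate(a, b) returns a single child and fitness values are non-negative with at least one positive (all names illustrative):

import random

def genetic_algorithm(population, fitness, mate, generations=100):
    # Fitness-proportional parent selection; n children per generation,
    # parents not retained in this version.
    n = len(population)
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        population = [mate(*random.choices(population, weights=weights, k=2))
                      for _ in range(n)]
    return max(population, key=fitness)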

Page 25: Informed Search Methods

Scores to Probabilities

• Suppose the scores of the n individuals are a[1], a[2], …, a[n].

• The probability of choosing the jth individual is:

  prob[j] = a[j] / (a[1] + a[2] + … + a[n])
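A direct sketch of that formula; in practice random.choices(population, weights=scores) does the same job:

import random

def select(population, scores):
    # Roulette-wheel selection: the jth individual is chosen with
    # probability a[j] / (a[1] + a[2] + ... + a[n]).
    total = sum(scores)
    r = random.uniform(0, total)
    running = 0.0
    for individual, score in zip(population, scores):
        running += score
        if running >= r:
            return individual
    return population[-1]             # guard against floating-point round-off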

Page 26: Informed Search Methods

GA Example

• Problem: Boolean satisfiability (BSAT).

• Individual = an assignment of truth values to the variables.

• Mutation = flip one variable.

• Cross-over = for two parents, randomly choose positions from one parent; a child takes those bindings and uses the other parent for the rest.

• Fitness = number of clauses satisfied.
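Sketches of these operators, assuming an individual is a list of booleans and a clause is a list of (variable index, wanted value) pairs (both representations are my assumptions):

import random

def mutate(assignment):
    # Flip one randomly chosen variable.
    child = assignment[:]
    i = random.randrange(len(child))
    child[i] = not child[i]
    return child

def crossover(parent_a, parent_b):
    # Randomly choose positions from one parent; the child takes those
    # bindings and uses the other parent for the rest.
    return [a if random.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

def fitness(assignment, clauses):
    # Number of satisfied clauses: a clause holds if some variable
    # in it matches its wanted value.
    return sum(any(assignment[v] == want for v, want in clause) for clause in clauses)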

Page 27: Informed Search Methods

GA Example

• N-queens problem

• Individual: an array giving the column assigned to the ith queen.

• Mating: Cross-over

• Fitness (minimize): number of constraint violations
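A sketch of that fitness function, assuming individual[i] holds the column of the ith queen (one queen per row, so only column and diagonal clashes can occur):

def conflicts(individual):
    # Number of attacking pairs; minimize this (0 = solved).
    n = len(individual)
    return sum(1
               for i in range(n)
               for j in range(i + 1, n)
               if individual[i] == individual[j]
               or abs(individual[i] - individual[j]) == j - i)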

Page 28: Informed Search Methods

GA Discussion

• Reported to work well on some problems.

• Typically not compared with other approaches, e.g., hill-climbing with restarts.

• Opinion: Works if the “mating” operator captures good substructures.

• Any ideas for GA on TSP?