L5 Problem Solving n Search p4


241-320 Design Architecture and Engineering for Intelligent System

    Suntorn Witosurapot

Contact Address: Phone: 074 287369 or

    Email: [email protected]

    November 2009


    Lecture 5:

    Problem Solving and Search

    part 4

    Informed Search


    Overview

    Heuristics

    Informed Search Methods

    Greedy Best-First Search

A* Search

Iterative Deepening A* Search

    Local Search

    Conclusion


    Local search algorithms

In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution

State space = set of "complete" configurations

Find a configuration satisfying the constraints, e.g., n-queens

In such cases, we can use local search algorithms: keep a single "current" state and try to improve it

(Ex: as in our part-2 slides, the evaluation function should get lower and lower as we proceed, step by step)


    Example: n-queens

Put n queens on an n × n board with no two queens on the same row, column, or diagonal

    Note: The eight queens puzzle has 92 distinct solutions

    [Ref] http://en.wikipedia.org/wiki/Eight_queens_puzzle
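
For the n-queens discussion on the following slides it helps to make the evaluation function concrete. A minimal Python sketch (an illustration of these notes, not part of the original slides), using the common representation where board[c] is the row of the queen in column c, so h(board) is the number of attacking pairs and equals 0 exactly at a solution:

def num_attacking_pairs(board):
    """h(board): number of pairs of queens that attack each other."""
    n = len(board)
    pairs = 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = board[c1] == board[c2]
            same_diagonal = abs(board[c1] - board[c2]) == abs(c1 - c2)
            if same_row or same_diagonal:
                pairs += 1
    return pairs

# Example: a known 8-queens solution evaluates to 0.
print(num_attacking_pairs([0, 4, 7, 5, 2, 6, 1, 3]))   # -> 0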


    Hill-climbing with Steepest Descent

    It is a kind of local search algorithm.

Analogy: Imagine you have climbed a hill, but it has got dark before you are back down. Your goal is the bottom of the hill. Your problem is finding it. You have no GPS, so greedy search is not an option.

You would make sure that every step you took led you downhill and, ignoring cliffs, as steeply as possible


    Hill-climbing with Steepest Descent

That is the fastest way to the bottom, unless:

You are at the bottom of a dip (point M), and every step you might make takes you uphill

You are on a plateau (region B), and every step makes no difference

[Figure: objective function over the state space, showing the current state, the descent direction, a point A, the dip M, and the plateau B]


    Steepest Descent Search

Steepest descent searches are designed to take (you guessed it) the steepest path down the evaluation function

The evaluation function equals zero at the goal and is positive elsewhere

You actually don't need a goal; you could just be trying to minimise the cost of something

    Or reverse it all and try to maximise something


    Steepest Descent

1) S ← initial state

2) Repeat:

   a) S' ← argmin over S' ∈ SUCCESSORS(S) of h(S')

   b) if GOAL?(S') return S'

   c) if h(S') < h(S) then S ← S'; else return failure
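
A minimal Python sketch of this loop (an illustration under assumptions: successors(state) yields the neighbouring states and h is the evaluation function from the previous slide, zero only at a goal):

def steepest_descent(initial_state, successors, h):
    """Hill-climbing by steepest descent; returns a goal state or None on failure."""
    state = initial_state
    while True:
        best = min(successors(state), key=h)   # a) S' <- the successor with minimal h
        if h(best) == 0:                       # b) GOAL?(S'), since h is 0 only at goals
            return best
        if h(best) < h(state):                 # c) strictly better: keep descending
            state = best
        else:
            return None                        # stuck at a local minimum or plateau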


Application: 8-Queens

Repeat n times:

1) Pick an initial state S at random, with one queen in each column

2) Repeat k times:

   a) If GOAL?(S) then return S

   b) Pick an attacked queen Q at random

   c) Move Q within its column to minimize the number of attacking queens → new S [min-conflicts heuristic]

3) Return failure

[Figure: an 8-queens board annotated with conflict counts for the candidate squares in the chosen queen's column; the queen is moved to a square with the minimum count, 0]

Why does it work?

1) There are many goal states, and they are well-distributed over the state space

2) If no solution has been found after a few steps, it's better to start all over again; building a search tree would be much less efficient because of the high branching factor

3) Running time is almost independent of the number of queens
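
A Python sketch of this restart scheme (the names and the values of n_restarts and k_steps are illustrative choices, not from the slides); it reuses the attack-counting idea from the earlier n-queens slide:

import random

def conflicts(board, col, row):
    """Number of queens that would attack a queen placed at (row, col)."""
    return sum(1 for c in range(len(board))
               if c != col and (board[c] == row or abs(board[c] - row) == abs(c - col)))

def min_conflicts_queens(n=8, n_restarts=50, k_steps=100):
    for _ in range(n_restarts):                              # "Repeat n times"
        board = [random.randrange(n) for _ in range(n)]      # 1) random state, one queen per column
        for _ in range(k_steps):                             # 2) "Repeat k times"
            attacked = [c for c in range(n) if conflicts(board, c, board[c]) > 0]
            if not attacked:                                 # a) GOAL?(S)
                return board
            col = random.choice(attacked)                    # b) pick an attacked queen at random
            # c) move it within its column to a least-conflicted row (min-conflicts heuristic)
            board[col] = min(range(n), key=lambda r: conflicts(board, col, r))
        # no solution after k steps: fall through and restart from scratch
    return None                                              # 3) failure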


Steepest Descent Search (cont.)

Problem: depending on the initial state, the search can get stuck in local minima

Suppose we are at point A and would like to be at point B, the goal. Everything goes fine until we get to M, a local minimum. Then we are stuck.

[Figure: objective function over the state space, with the current state at A, the descent direction, the local minimum M, and the global minimum B]


    Simulated Annealing Search

Note: if you're curious, annealing refers to the process used to harden metals by heating them to a high temperature (hence, melting) and then gradually cooling them

Idea: escape local minima by allowing some "bad" moves, but gradually decrease their frequency

[Figure: the same landscape as before (A, M, B), with the current state and examples of a "good" (downhill) move and a "bad" (uphill) move]


    Simulated Annealing Search: No pain, no gain

Allow non-improving moves, so that it is possible to go down in order to rise again and reach the global optimum.

[Figure: an objective function z(x) with a sequence of sampled states x0, x1, ..., x13 that descends through a local optimum before climbing to the global optimum]


    Simulated Annealing

Improving moves are always accepted

Non-improving moves may be accepted probabilistically, in a manner depending on the temperature parameter T. Loosely:

the worse the move, the less likely it is to be accepted

the cooler the temperature, the less likely a worsening move is to be accepted

The temperature T starts high and is gradually cooled as the search progresses.

Initially (when things are hot) virtually anything is accepted; at the end (when things are nearly frozen) only improving moves are allowed (and the search effectively reduces to hill-climbing)
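
One standard way to realise this behaviour (and the rule these slides arrive at a few pages later) is to always accept moves that do not worsen h, and to accept a move that worsens h by Δh with probability exp(-Δh/T). A minimal sketch:

import math
import random

def accept_move(delta_h, temperature):
    """Accept a candidate move that changes h by delta_h (positive = worse)."""
    if delta_h <= 0:
        return True                                    # improving moves: always accepted
    return random.random() < math.exp(-delta_h / temperature)   # worse moves: sometimes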


    Simulated Annealing Search

1) S ← initial state

2) Repeat:

   a) S' ← argmin over S' ∈ SUCCESSORS(S) of h(S')

   b) if GOAL?(S') return S'

   c) if h(S') < h(S) then S ← S'

   d) else, with probability p: S ← S'   (this part is different from hill-climbing)

Q: How can we calculate p?


    Setting p

    What if p is too low?

We don't accept many uphill moves, and we might not get out of many local minima

    What if p is too high?

    We may be making too many suboptimal moves

    Should p be constant?

We might be making too many random moves when we are near the global minimum


    Setting p (cont.)

    Decrease p as iterations progress

Accept more uphill moves early; accept fewer as the search goes on

Intuition: as the search progresses, we are moving towards more promising areas, and quite likely toward a global minimum

Decrease p as h(S') - h(S) increases

Accept fewer uphill moves if the slope is high

See the next slide for the intuition


Decreasing p as h(S') - h(S) increases

h(S') - h(S) is large: we are likely moving towards a sharp (interesting) minimum, so don't move uphill too much

h(S') - h(S) is small: we are likely moving towards a smooth (uninteresting) minimum, so we do want to escape this local minimum

[Figure: two landscapes comparing a sharp, deep minimum (large uphill steps h(S') - h(S)) with a shallow, smooth one (small uphill steps)]


Complete Simulated Annealing Search Algorithm

1) S ← initial state

2) Iterate: repeat k times:

   a) If GOAL?(S) then return S

   b) S' ← successor of S picked at random

   c) if h(S') ≤ h(S) then S ← S'   (definitely accept the change)

   d) else:

      - Δh = h(S') - h(S)

      - with probability ~ exp(-Δh/T), where T is called the temperature, accept the change: S ← S'

3) T ← αT

Simulated annealing lowers T over the iterations: it starts with a large T and slowly decreases it. When enough iterations have passed without improvement, terminate.
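
Putting the pieces together, a Python sketch of the complete loop; successors and h are assumed to be supplied by the problem, and the default schedule values (start at T = 100, multiply by 0.95 per level, 400 trial moves per level, stop below 0.00001) are taken from the "Very Basic" example a few slides further on:

import math
import random

def simulated_annealing(initial_state, successors, h,
                        t_start=100.0, alpha=0.95, moves_per_level=400, t_min=1e-5):
    """Simulated annealing for minimising h; h is 0 only at goal states."""
    state = initial_state
    temperature = t_start
    while temperature > t_min:
        for _ in range(moves_per_level):                         # repeat k times at this T
            if h(state) == 0:                                    # a) GOAL?(S)
                return state
            candidate = random.choice(list(successors(state)))   # b) random successor
            delta_h = h(candidate) - h(state)
            if delta_h <= 0:                                     # c) not worse: definitely accept
                state = candidate
            elif random.random() < math.exp(-delta_h / temperature):
                state = candidate                                # d) worse: accept w.p. exp(-Δh/T)
        temperature *= alpha                                     # 3) T <- α·T (geometric cooling)
    return state                                                 # best-effort state when fully cooled

For the 8-queens application above, successors(S) could, for instance, yield every board obtained by moving one queen within its column, with h being the number of attacking pairs.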


Simulated Annealing Search Algorithm (cont.)

Probability of accepting a worsening move (as a function of Δh) in different temperature ranges:

High temperature: accept (almost) all moves (random walk)

Low temperature: stochastic hill-climbing
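
As a quick numerical check of these two regimes (Δh = 1 is an arbitrary illustrative value):

import math

delta_h = 1.0
print(math.exp(-delta_h / 100.0))   # high temperature (T = 100): ~0.99, nearly every move accepted
print(math.exp(-delta_h / 0.1))     # low temperature (T = 0.1): ~4.5e-5, worsening moves essentially rejected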


    Convergence

If the schedule lowers T slowly enough, the algorithm will find a global optimum with probability approaching 1

In practice, reaching the global optimum could take an enormous number of iterations


    Very Basic Simulated Annealing Example

Iteration 1:  T = 100          Do 400 trial moves
Iteration 2:  T ← 0.95 × T     Do 400 trial moves
Iteration 3:  T ← 0.95 × T     Do 400 trial moves
Iteration 4:  T ← 0.95 × T     Do 400 trial moves
...
Iteration m:  T ← 0.95 × T     Do 400 trial moves
...
Iteration n:  T = 0.00001      Do 400 trial moves
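
This schedule is plain geometric cooling; a small sketch of the temperature sequence it produces:

def temperature_schedule(t_start=100.0, alpha=0.95, t_min=1e-5):
    """Yield T = 100, 95, 90.25, ... until T falls below 0.00001."""
    t = t_start
    while t > t_min:
        yield t            # the example above does 400 trial moves at each of these values
        t *= alpha

With these values the schedule visits roughly 315 temperature levels, i.e. on the order of 126,000 trial moves in total.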


    Conclusion: Simulated annealing

Design of the neighborhood is critical

Lots of parameters to tweak, e.g. the cooling factor α, the number of moves k per temperature level, and the initial temperature

Simulated annealing is usually better than hill-climbing, if you can find the right parameters


    Parallel Local Search Techniques

They perform several local searches concurrently, but not independently:

Beam search (a sketch follows below)

Genetic algorithms (will be studied later)
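
These slides do not develop beam search further; purely as an illustration of several searches that are "concurrent but not independent", here is a sketch of local beam search (the beam width k and the helper names are assumptions of these notes):

def local_beam_search(k, random_state, successors, h, max_iterations=1000):
    """Keep the k best states found so far, pooling the successors of all of them."""
    beam = [random_state() for _ in range(k)]                # k independent starting points
    for _ in range(max_iterations):
        if any(h(s) == 0 for s in beam):                     # some search reached a goal
            return min(beam, key=h)
        pool = [s2 for s in beam for s2 in successors(s)]    # successors of *all* beam states
        if not pool:
            break
        beam = sorted(pool, key=h)[:k]                       # the best k survive, across searches
    return min(beam, key=h)                                  # best state found so far

Unlike k independent restarts, the k searches share information: a promising state can take over most of the beam.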


    Conclusion

Walking downhill is not as easy as you'd think

Informed search algorithms try to move quickly towards a goal, based on an estimate of the distance from their current point to the goal

Greedy search algorithms only follow the paths in the search space that appear to bring them closest to the goal

Local search algorithms keep no memory of tree structures, but work by intelligently covering selected parts of the search space


    Search problems

    Blind search

Heuristic search: best-first and A*

Construction of heuristics

Variants of A*

Local search


See: Visualization of N-Queens Solutions

For a live demo of N-Queens being solved with different algorithms, visit:

    http://yuval.bar-or.org/index.php?item=9
