Heuristic Search

In addition to depth-first search, breadth-first search, bounded depth-first search, and iterative deepening, we can also use informed or heuristic search. We need an estimate, h(n), of the distance from state n to a goal state. We then use priority-first search with the priority f(n) = g(n) + h(n), where g(n) is the known distance from the start state to state n.

Example – the Eight Puzzle

Start state:      Goal state:
  2 8 3             1 2 3
  1 6 4             8 _ 4
  7 _ 5             7 6 5
(the space is shown as _)

The Eight Puzzle

The rule is that you can only move one tile at a time, and you must move it into the space. Alternatively, you can think of a move as moving the space up, down, left, or right one square (if possible).

The search space is the graph of possible states of the puzzle; a state can be specified as a permutation of the numbers 1, 2, 3, 4, 5, 6, 7, 8, 9 (where 9 is the space).
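
As an illustration (this is a sketch, not code from these slides), a state can be stored as an array of nine numbers in row-major order with 9 standing for the space, and the successors of a state generated by sliding the space in each legal direction. The State type and successors function are assumed names for this sketch only.

#include <array>
#include <utility>
#include <vector>

using State = std::array<int, 9>;   // board in row-major order; 9 is the space

// Generate all states reachable in one move by sliding the space
// up, down, left, or right (when the move stays on the board).
std::vector<State> successors(const State &s) {
    std::vector<State> result;
    int blank = 0;
    while (s[blank] != 9) ++blank;                      // find the space
    int row = blank / 3, col = blank % 3;
    const int dr[4] = {-1, 1, 0, 0};
    const int dc[4] = {0, 0, -1, 1};
    for (int k = 0; k < 4; ++k) {
        int r = row + dr[k], c = col + dc[k];
        if (r < 0 || r > 2 || c < 0 || c > 2) continue; // move would leave the board
        State t = s;
        std::swap(t[blank], t[r * 3 + c]);              // slide the adjacent tile into the space
        result.push_back(t);
    }
    return result;
}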

Heuristics for the Eight Puzzle

h1(n) = the total number of tiles out of place.
h2(n) = the sum of the distances that each tile must be moved to its proper place (Manhattan distance).
h3(n) = p1(p2^-1(p1(n))), where p1 is the starting permutation and p2 is the goal permutation.
h4(n) = 0. (Blind search)
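
To make h1 and h2 concrete, here is a sketch of both heuristics; the function names h1 and h2 follow the slide, while the State representation (repeated here so the sketch stands alone, with 9 as the space) and goal parameter are assumptions of this sketch, not code from the slides.

#include <array>
#include <cstdlib>

using State = std::array<int, 9>;                  // board in row-major order; 9 is the space

// h1: number of tiles (the space excluded) that are not on their goal square.
int h1(const State &n, const State &goal) {
    int misplaced = 0;
    for (int i = 0; i < 9; ++i)
        if (n[i] != 9 && n[i] != goal[i]) ++misplaced;
    return misplaced;
}

// h2: sum over all tiles of the Manhattan distance from the tile's
// current square to its goal square.
int h2(const State &n, const State &goal) {
    int goalPos[10];                               // goalPos[t] = index of tile t in the goal state
    for (int i = 0; i < 9; ++i) goalPos[goal[i]] = i;
    int total = 0;
    for (int i = 0; i < 9; ++i) {
        if (n[i] == 9) continue;                   // the space does not count
        int g = goalPos[n[i]];
        total += std::abs(i / 3 - g / 3) + std::abs(i % 3 - g % 3);
    }
    return total;
}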

Results

h1(n): 110 nodes visited, 183 nodes created.
h2(n): 55 nodes visited, 88 nodes created.
h3(n): 1583 nodes visited, 2574 nodes created.
h4(n): 1811 nodes visited, 2968 nodes created.

Code

// Priority-first (A*) search over the node graph.  Each node stores its
// heuristic estimate in node->distance, and the priority on the queue is
// f(n) = g(n) + h(n).  mark: 0 = unseen, 1 = on the queue, 2 = expanded.
pq.add(start->number, start->distance);            // f(start) = 0 + h(start)
while (!pq.empty()) {
    int nodeNum, priority;
    pq.remove(nodeNum, priority);                  // pop the node with the smallest f
    numVisited++;
    if (nodeNum == goal->number) return priority;  // at the goal, f = g = path length
    Node *node = nodes[nodeNum];
    // node->print(priority);
    int g = priority - node->distance + 1;         // g of each neighbor: g(node) + 1 (unit edge cost)
    node->priority = priority;
    node->mark = 2;                                // close the node

Code (cont'd)

    // Relax each neighbor with f = g(neighbor) + h(neighbor).
    for (int i = 0; i < node->numNeighbors(); i++) {
        Node *neighbor = node->neighbor(i);
        int f = g + neighbor->distance;
        if (neighbor->mark == 0 ||
            (neighbor->mark == 2 && neighbor->priority > f)) {
            // Unseen, or closed but now reachable by a cheaper path: (re)open it.
            neighbor->mark = 1;
            pq.add(neighbor->number, f);
            neighbor->previous = node;
        } else if (neighbor->mark == 1 && pq.getPriority(neighbor->number) > f) {
            // Already on the queue with a worse priority: improve it.
            pq.update(neighbor->number, f);
            neighbor->previous = node;
        }
    }
}
return INT_MAX;    // queue exhausted: the goal is unreachable

Definitions

g*(n) = the length of the shortest path from the start state to state n.
h*(n) = the length of the shortest path from state n to a goal state.
f*(n) = g*(n) + h*(n) = the length of the shortest path from the start state to a goal state that passes through state n.

A search algorithm is admissible if it is guaranteed to find a minimal solution whenever one exists.

A*-Algorithms

Using priority-first search with the priority f(n) = g(n) + h(n) is called Algorithm A.
If, in addition, we have that h(n) <= h*(n), the procedure is called an A*-algorithm.
A*-algorithms are all admissible.
BFS is Algorithm A with h(n) = 0, and hence admissible.

Monotonicity

A heuristic function h is monotone if:
  For all states ni and nj, where nj is a descendant of ni, h(ni) - h(nj) <= cost(ni, nj).
  The heuristic evaluation of any goal state is zero, i.e., h(Goal) = 0.

This means that the difference between the heuristic measure for a state and its successor is bounded by the actual cost of going from the state to its successor.
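
For the eight puzzle every move costs 1, so the condition says h may not drop by more than 1 across a single move and must be 0 at the goal. A small sketch of checking this at one state, reusing the illustrative State, successors, and h2 from the earlier sketches (none of which come from these slides):

// Returns true if h2 satisfies the monotonicity condition at state n.
bool monotoneAt(const State &n, const State &goal) {
    if (h2(goal, goal) != 0) return false;         // h(Goal) must be 0
    for (const State &m : successors(n))
        if (h2(n, goal) - h2(m, goal) > 1)         // each move has cost 1
            return false;
    return true;
}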

Why do we care?

Monotonicity ensures that the first time a state is reached, we have found the shortest path to that state. Subsequent visits to the state arrive by paths that are no shorter, and so do not have to be reconsidered. The upshot is that although we have to check whether a state has been visited, once it has been, we never have to consider it again.

Informedness

For two A* heuristics, h1 and h2, if h1(n) <= h2(n) for all states n in the search space, then heuristic h2 is said to be more informed than h1.

Two Player Games

We can use state space search for two (or more) player games as well. We assume that we have two players, MAX and MIN, who alternate turns. We assume one player must win (no ties). The state space search graph is then stratified into layers, alternate layers belonging to each player. At each level, the nodes are labeled by the player whose turn it is.

Minimax Procedure

Leaves are labeled according to who wins: a win for MAX is labeled 1 and a win for MIN is labeled 0.
An internal MAX node is labeled by the maximum value of its children.
An internal MIN node is labeled by the minimum value of its children.
We work from the bottom up.
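
A sketch of this bottom-up labeling follows; the GameNode type is an assumed stand-in, not anything from these slides, and the leaves are assumed to be labeled already (1 for a MAX win, 0 for a MIN win).

#include <algorithm>
#include <vector>

struct GameNode {
    bool maxToMove;                     // true on MAX's layers, false on MIN's
    int value;                          // 1 = win for MAX, 0 = win for MIN (set at leaves)
    std::vector<GameNode> children;     // empty at a leaf
};

// Label every internal node with the max (MAX nodes) or min (MIN nodes)
// of its children's values, working from the leaves up.
int minimax(GameNode &n) {
    if (n.children.empty()) return n.value;            // leaf: already labeled
    int best = n.maxToMove ? 0 : 1;                     // worst outcome for the player to move
    for (GameNode &child : n.children)
        best = n.maxToMove ? std::max(best, minimax(child))
                           : std::min(best, minimax(child));
    n.value = best;                                     // record the label on the node
    return best;
}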

Fixed Ply Depth Minimax

This assumes that we can create the complete game graph. If we can't do that, we can generate the graph to a fixed depth and then cut off the search there. We must estimate the goodness of the leaves using a heuristic measure rather than knowing whether they are wins or losses.

Alpha-Beta Pruning

With alpha-beta pruning, we do not have to search the entire space. We search the state space in depth-first fashion to a limited depth. Leaves are assigned values given a heuristic function. The values are propagated back up the tree. However, once MAX has a node with a given value (called alpha), we know that MAX will never choose a move which has a smaller value.

Alpha-Beta Pruning (cont'd)

If another child of the given node will yield a smaller (or equal) value, we can cut off the search and not expand the entire tree under that child. Since the child is a MIN node, we know that MIN will always choose the move with the smallest value. So, if a child of the MIN node has a value less than alpha, the MIN node is guaranteed to have a value less than alpha, and MAX will not choose that node.
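
A sketch of depth-limited search with this pruning, reusing the illustrative GameNode from the minimax sketch; evaluate() stands for the heuristic applied at the depth cutoff and is an assumption of this sketch, not code from the slides.

#include <algorithm>
#include <climits>

int evaluate(const GameNode &n);        // heuristic goodness of the position for MAX (assumed)

// Depth-limited minimax with alpha-beta pruning: alpha is the value MAX is
// already guaranteed, beta the value MIN is already guaranteed.
int alphaBeta(GameNode &n, int depth, int alpha, int beta) {
    if (depth == 0 || n.children.empty())
        return evaluate(n);                              // cutoff: estimate the leaf heuristically
    if (n.maxToMove) {
        int best = INT_MIN;
        for (GameNode &child : n.children) {
            best = std::max(best, alphaBeta(child, depth - 1, alpha, beta));
            alpha = std::max(alpha, best);
            if (beta <= alpha) break;                    // MIN will avoid this node: prune the rest
        }
        return best;
    } else {
        int best = INT_MAX;
        for (GameNode &child : n.children) {
            best = std::min(best, alphaBeta(child, depth - 1, alpha, beta));
            beta = std::min(beta, best);
            if (beta <= alpha) break;                    // MAX will avoid this node: prune the rest
        }
        return best;
    }
}

// A typical top-level call: alphaBeta(root, maxDepth, INT_MIN, INT_MAX).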