By: Abhinav Kishore
AI Module: 1
THE MONKEY & BANANAS PROBLEM
A monkey is in a cage, and bananas are suspended from the ceiling. The monkey wants to eat a banana but cannot reach them. In the room are a chair and a stick; if the monkey stands on the chair and waves the stick, he can knock a banana down and eat it. What actions should the monkey take?
Initial state: monkey on the ground with an empty hand; bananas suspended.
Goal state: monkey eating.
Actions: climb chair / get off chair, grab X, wave X, eat X.
SEARCH
Given a problem expressed as a state space (whether explicitly or implicitly) with operators/actions, an initial state and a goal state, how do we find the sequence of operators needed to solve the problem? This requires search.
Formally, we define a search space as [N, A, S, GD]:
N = the set of nodes or states of a graph
A = the set of arcs (edges) between nodes that correspond to the steps in the problem (the legal actions or operators)
S = a nonempty subset of N that represents start states
GD = a nonempty subset of N that represents goal states
Our problem becomes one of traversing the graph from a node in S to a node in GD. We can use any of the numerous graph traversal techniques for this, but in general they divide into two categories:
brute force – unguided search
heuristic – guided search
CONSEQUENCES OF SEARCH
As shown a few slides back, the 8-puzzle has over 40,000 different states. What about the 15-puzzle?
A brute force search means trying all possible states blindly until you find the solution. For a state space for a problem requiring n moves, where each move consists of m choices, there are m^n possible states.
Two forms of brute force search are: depth-first search and breadth-first search.
A guided search examines a state and uses some heuristic (usually a function) to determine how good that state is (how close you might be to a solution), to help determine which state to move to. Examples: hill climbing, best-first search, the A/A* algorithm, minimax.
While a good heuristic can reduce the complexity from m^n to something tractable, there is no guarantee, so any form of search is exponential (O(2^n)) in the worst case.
FORWARD VS BACKWARD SEARCH
The common form of reasoning starts with data and leads to conclusions. For instance, diagnosis is data-driven: given the patient's symptoms, we work toward disease hypotheses. We often think of this form of reasoning as “forward chaining” through rules.
Backward search reasons from goals to actions. Planning and design are often goal-driven: “backward chaining”.
DEPTH-FIRST SEARCH
Starting at node A, our search gives us: A, B, E, K, S, L, T, F, M, C, G, N, H, O, P, U, D, I, Q, J, R
DEPTH-FIRST SEARCH EXAMPLE
TRAVELING SALESMAN PROBLEM
BREADTH-FIRST SEARCH
Starting at node A, our search would generate the nodes in alphabetical order from A to U
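The two traversal orders quoted above can be reproduced in code. The adjacency list below is a tree consistent with those orders (the original figure is not in the text, so this tree is a reconstruction):

```python
from collections import deque

# A tree consistent with the traversal orders quoted in the slides
# (reconstructed; the original figure is not reproduced in the text).
GRAPH = {
    'A': ['B', 'C', 'D'], 'B': ['E', 'F'], 'C': ['G', 'H'], 'D': ['I', 'J'],
    'E': ['K', 'L'], 'F': ['M'], 'G': ['N'], 'H': ['O', 'P'],
    'I': ['Q'], 'J': ['R'], 'K': ['S'], 'L': ['T'], 'P': ['U'],
}

def dfs(node, order=None):
    """Depth-first: follow each branch to the bottom before backing up."""
    order = [] if order is None else order
    order.append(node)
    for child in GRAPH.get(node, []):
        dfs(child, order)
    return order

def bfs(start):
    """Breadth-first: expand all nodes of one level before the next."""
    order, queue = [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(GRAPH.get(node, []))
    return order
```

On this tree, `dfs('A')` yields the depth-first order listed earlier, and `bfs('A')` yields the nodes in alphabetical order from A to U.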
BREADTH-FIRST SEARCH EXAMPLE
BACKTRACKING SEARCH ALGORITHM
The monkey and the banana
The purpose of this example is to show the use of variables.
Description
A monkey enters a room via the door. In the room, near the window, is a box. In the middle of the room hangs a banana from the ceiling. The monkey wants to grasp the banana, and can do so after climbing on the box in the middle of the room.
States
For each state, we need to record:
- the position of the monkey (door, window, middle, ...)
- the position of the box
- whether the monkey is on the box
- whether the monkey has the banana
The initial state is (door, window, no, no).
The set of goal states is (*, *, *, yes).
Moves
walk(P): from (M, B, no, H) to (P, B, no, H).
push(P): from (M, M, no, H) to (P, P, no, H).
climb: from (M, M, no, H) to (M, M, yes, H).
grasp: from (middle, B, yes, no) to (middle, B, yes, yes).
State space
Without variables, the state space and search space can be very large (how many positions are there?). With variables, we can represent the reachable part much more compactly.
Monkey and Banana Example
There is a monkey at the door of a room. In the middle of the room a banana hangs from the ceiling. The monkey wants it, but cannot jump high enough from the floor. At the window of the room there is a box that the monkey can use.
The monkey can perform the following actions:
- Walk on the floor
- Climb the box
- Push the box around (if it is beside the box)
- Grasp the banana if it is standing on the box directly under the banana
We define the state as a 4-tuple: (monkey at, on floor/box, box at, has banana)
move( state( middle, onbox, middle, hasnot ), grasp, state( middle, onbox, middle, has ) ).
move( state( P, onfloor, P, H ), climb, state( P, onbox, P, H ) ).
move( state( P1, onfloor, P1, H ), push( P1, P2 ), state( P2, onfloor, P2, H ) ).
move( state( P1, onfloor, B, H ), walk( P1, P2 ), state( P2, onfloor, B, H ) ).

canget( state( _, _, _, has ) ).
canget( State1 ) :- move( State1, Move, State2 ), canget( State2 ).

?- canget( state( atdoor, onfloor, atwindow, hasnot ) ).
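The same four move rules and the canget relation can be rendered in Python. Breadth-first search is a choice made here (Prolog's own resolution is depth-first), so the shortest plan comes back:

```python
from collections import deque

POSITIONS = ['atdoor', 'atwindow', 'middle']

def moves(state):
    """The four Prolog move rules; state = (monkey_at, on, box_at, has)."""
    monkey, on, box, has = state
    if state == ('middle', 'onbox', 'middle', 'hasnot'):
        yield 'grasp', ('middle', 'onbox', 'middle', 'has')
    if on == 'onfloor' and monkey == box:
        yield 'climb', (monkey, 'onbox', box, has)
    if on == 'onfloor':
        for p in POSITIONS:
            if p != monkey:
                yield f'walk({monkey},{p})', (p, 'onfloor', box, has)
                if monkey == box:
                    yield f'push({monkey},{p})', (p, 'onfloor', p, has)

def canget(start):
    """Breadth-first search for a plan reaching a 'has' state."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if state[3] == 'has':
            return plan
        for action, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None
```

From the initial state (atdoor, onfloor, atwindow, hasnot), the plan found is: walk to the window, push the box to the middle, climb, grasp.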
INTRODUCTORY PROBLEM: TIC-TAC-TOE
(Figure: a tic-tac-toe board with two X's and one O.)
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Program 1:
Data Structures:
Board: a 9-element vector representing the board, with elements 1-9 for each square. An element contains 0 if the square is blank, 1 if it is filled by X, or 2 if it is filled by O.
Movetable: a large vector of 19,683 (3^9) elements; each element is a 9-element vector.
Algorithm:
1. View the board vector as a ternary number. Convert it to a decimal number.
2. Use the computed number as an index into the Move-Table and access the vector stored there.
3. Set the new board to that vector.
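Step 1, treating the board as a ternary number, can be sketched as follows (reading the first square as the most significant digit is an assumption; the slides do not fix the digit order):

```python
def board_index(board):
    """Convert a 9-element board (0 = blank, 1 = X, 2 = O) into a
    decimal Move-Table index by reading it as a ternary number,
    first square as the most significant digit (an assumption)."""
    index = 0
    for cell in board:
        index = index * 3 + cell
    return index

# The index ranges over all 3^9 = 19,683 possible Move-Table entries.
```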
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Comments:
This program is very efficient in time. However:
1. It takes a lot of space to store the Move-Table.
2. It takes a lot of work to specify all the entries in the Move-Table.
3. It is difficult to extend.
INTRODUCTORY PROBLEM: TIC-TAC-TOE
(Figure: the board squares numbered 1 to 9.)
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Program 2:
Data Structure: a nine-element vector representing the board. But instead of using 0, 1 and 2 in each element, we store 2 for blank, 3 for X and 5 for O.
Functions:
Make2: returns 5 if the center square is blank; otherwise returns any other blank square.
Posswin(p): returns 0 if player p cannot win on his next move; otherwise it returns the number of the square that constitutes a winning move. It works by multiplying the values in each line: if the product is 18 (3x3x2), then X can win; if the product is 50 (5x5x2), then O can win.
Go(n): makes a move in square n.
Strategy:
Turn = 1: Go(1)
Turn = 2: If Board[5] is blank, Go(5), else Go(1)
Turn = 3: If Board[9] is blank, Go(9), else Go(3)
Turn = 4: If Posswin(X) ≠ 0, then Go(Posswin(X))
...
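The product test behind Posswin can be sketched as follows. Only the 2/3/5 encoding and the 18/50 products come from the slides; the explicit line list and blank-square scan are reconstructions of the idea:

```python
from math import prod

# The eight winning lines of the board (squares numbered 1..9, row-major).
LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),
         (1, 4, 7), (2, 5, 8), (3, 6, 9),
         (1, 5, 9), (3, 5, 7)]

def posswin(board, player):
    """Return the square where `player` ('X' or 'O') wins on the next
    move, or 0 if there is none. board[1..9] holds 2 (blank), 3 (X)
    or 5 (O); board[0] is unused padding."""
    target = 18 if player == 'X' else 50   # 3*3*2 or 5*5*2
    for line in LINES:
        if prod(board[s] for s in line) == target:
            for s in line:
                if board[s] == 2:          # the one blank square wins
                    return s
    return 0
```

For instance, with X on squares 1 and 2 and everything else blank, `posswin(board, 'X')` returns 3.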
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Comments:
1. Not efficient in time, as it has to check several conditions before making each move.
2. Easier to understand the program's strategy.
3. Hard to generalize.
INTRODUCTORY PROBLEM: TIC-TAC-TOE
(Figure: the magic square used to number the board:
8 3 4
1 5 9
6 7 2
Every row, column and diagonal sums to 15, so the square completing a line is found by subtraction, e.g. 15 - (8 + 5) = 2.)
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Comments:
1. Checking for a possible win is quicker.
2. Humans find the row-scan approach easier, while computers find the number-counting approach more efficient.
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Program 3:
1. If it is a win, give it the highest rating.
2. Otherwise, consider all the moves the opponent could make next. Assume the opponent will make the move that is worst for us. Assign the rating of that move to the current node.
3. The best node is then the one with the highest rating.
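The rating rule above can be sketched on an explicit game tree. Representing the tree as nested lists of terminal scores is a simplification made here; the slides describe only the rule:

```python
def rate(node, our_move=True):
    """Program 3's rating rule: a number is a finished position and is
    its own rating; otherwise rate every reply and assume the opponent
    will make the move that is worst for us."""
    if isinstance(node, (int, float)):      # terminal position
        return node
    ratings = [rate(child, not our_move) for child in node]
    return max(ratings) if our_move else min(ratings)
```

The best move is then the child with the highest rating; for the tree `[[3, 5], [2, 9]]` the root rates 3, because the opponent would answer the second branch with its worst-for-us reply, 2.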
INTRODUCTORY PROBLEM: TIC-TAC-TOE
Comments:
1. Requires much more time to consider all possible moves.
2. Could be extended to handle more complicated games.
STATE SPACE SEARCH: PLAYING CHESS
Each position can be described by an 8-by-8 array.
The initial position is the game opening position.
A goal position is any position in which the opponent does not have a legal move and his or her king is under attack.
Legal moves can be described by a set of rules:
- Left sides are matched against the current state.
- Right sides describe the new resulting state.
STATE SPACE SEARCH: PLAYING CHESS
The state space is the set of legal positions. Starting at the initial state, we use the set of rules to move from one state to another, attempting to end up in a goal state.
STATE SPACE SEARCH: WATER JUG PROBLEM
“You are given two jugs, a 4-litre one and a 3-litre one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 litres of water into the 4-litre jug?”
STATE SPACE SEARCH: WATER JUG PROBLEM
State: (x, y)
x = 0, 1, 2, 3 or 4: litres of water in the 4-litre jug
y = 0, 1, 2 or 3: litres of water in the 3-litre jug
Start state: (0, 0). Goal state: (2, n) for any n.
STATE SPACE SEARCH: WATER JUG PROBLEM
Production rules (d stands for some amount of water):
1. (x, y) → (4, y) if x < 4 (fill the 4-litre jug)
2. (x, y) → (x, 3) if y < 3 (fill the 3-litre jug)
3. (x, y) → (x - d, y) if x > 0 (pour some water out of the 4-litre jug)
4. (x, y) → (x, y - d) if y > 0 (pour some water out of the 3-litre jug)
5. (x, y) → (0, y) if x > 0 (empty the 4-litre jug)
6. (x, y) → (x, 0) if y > 0 (empty the 3-litre jug)
7. (x, y) → (4, y - (4 - x)) if x + y ≥ 4, y > 0 (pour from the 3-litre jug until the 4-litre jug is full)
8. (x, y) → (x - (3 - y), 3) if x + y ≥ 3, x > 0 (pour from the 4-litre jug until the 3-litre jug is full)
9. (x, y) → (x + y, 0) if x + y ≤ 4, y > 0 (pour all the water from the 3-litre jug into the 4-litre jug)
10. (x, y) → (0, x + y) if x + y ≤ 3, x > 0 (pour all the water from the 4-litre jug into the 3-litre jug)
11. (0, 2) → (2, 0) (pour the 2 litres from the 3-litre jug into the 4-litre jug)
12. (2, y) → (0, y) (empty the 2 litres in the 4-litre jug onto the ground)
STATE SPACE SEARCH: WATER JUG PROBLEM
1. current state = (0, 0)
2. Loop until reaching the goal state (2, 0):
- Apply a rule whose left side matches the current state
- Set the new current state to be the resulting state
One solution path: (0, 0) → (0, 3) → (3, 0) → (3, 3) → (4, 2) → (0, 2) → (2, 0)
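The production system above can be run as a breadth-first search. Rules 3 and 4 (pouring out an arbitrary amount d) are omitted here, a simplification made because they never help reach the goal:

```python
from collections import deque

def successors(state):
    """Apply the water-jug production rules (numbered as in the slides)."""
    x, y = state
    nxt = set()
    if x < 4: nxt.add((4, y))                            # 1. fill the 4-litre jug
    if y < 3: nxt.add((x, 3))                            # 2. fill the 3-litre jug
    if x > 0: nxt.add((0, y))                            # 5. empty the 4-litre jug
    if y > 0: nxt.add((x, 0))                            # 6. empty the 3-litre jug
    if x + y >= 4 and y > 0: nxt.add((4, y - (4 - x)))   # 7. pour 3 -> 4 until full
    if x + y >= 3 and x > 0: nxt.add((x - (3 - y), 3))   # 8. pour 4 -> 3 until full
    if x + y <= 4 and y > 0: nxt.add((x + y, 0))         # 9. pour all of 3 into 4
    if x + y <= 3 and x > 0: nxt.add((0, x + y))         # 10. pour all of 4 into 3
    return nxt

def solve(start=(0, 0), goal_x=2):
    """Breadth-first search for a shortest path to any state (2, n)."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == goal_x:
            path = []
            while state is not None:          # rebuild the path backwards
                path.append(state)
                state = parent[state]
            return path[::-1]
        for s in successors(state):
            if s not in parent:
                parent[s] = state
                frontier.append(s)
    return None
```

The search confirms that six rule applications, as in the solution path quoted above, is the minimum.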
STATE SPACE SEARCH: WATER JUG PROBLEM
The role of the condition in the left side of a rule: it restricts the application of the rule, making the search more efficient.
1. (x, y) → (4, y) if x < 4
2. (x, y) → (x, 3) if y < 3
STATE SPACE SEARCH: WATER JUG PROBLEM
Special-purpose rules capture special-case knowledge that can be used at some stage in solving a problem:
11. (0, 2) → (2, 0)
12. (2, y) → (0, y)
SEARCH STRATEGIES
Requirements of a good search strategy:
1. It causes motion. Otherwise, it will never lead to a solution.
2. It is systematic. Otherwise, it may use more steps than necessary.
3. It is efficient. It should find a good, but not necessarily the best, answer.
SEARCH STRATEGIES
1. Uninformed search (blind search): having no information about the number of steps from the current state to the goal.
2. Informed search (heuristic search): more efficient than uninformed search.
SEARCH STRATEGIES
(Search tree for the water jug problem:
(0, 0) expands to (4, 0) and (0, 3);
(4, 0) expands to (1, 3), (0, 0) and (4, 3);
(0, 3) expands to (3, 0), (0, 0) and (4, 3).)
SEARCH STRATEGIES: BLIND SEARCH
Breadth-first search: expand all the nodes of one level first.
Depth-first search: expand one of the nodes at the deepest level.
SEARCH STRATEGIES: BLIND SEARCH

Criterion    Breadth-First   Depth-First
Time         O(b^d)          O(b^m)
Space        O(b^d)          O(bm)
Optimal?     Yes             No
Complete?    Yes             No

b: branching factor; d: solution depth; m: maximum depth
SEARCH STRATEGIES: HEURISTIC SEARCH
Heuristic: involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods. (Merriam-Webster's dictionary)
A heuristic technique improves the efficiency of a search process, possibly by sacrificing claims of completeness or optimality.
SEARCH STRATEGIES: HEURISTIC SEARCH
Heuristics help combat combinatorial explosion, and optimal solutions are rarely needed.
SEARCH STRATEGIES: HEURISTIC SEARCH
The Travelling Salesman Problem
“A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities.”
(Figure: five cities A, B, C, D and E joined by roads with lengths such as 1, 5, 10 and 15.)
SEARCH STRATEGIES: HEURISTIC SEARCH
Nearest neighbour heuristic:
1. Select a starting city.
2. Select the city closest to the current city.
3. Repeat step 2 until all cities have been visited.
Complexity: O(n²), versus O(n!) for exhaustive search.
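A sketch of the nearest neighbour heuristic; the distance table below is an invented example, not the figure from the slides:

```python
def nearest_neighbour_tour(dist, start):
    """Follow the heuristic: from each city move to the nearest
    unvisited city, then close the loop back to the start."""
    tour, unvisited = [start], set(dist) - {start}
    while unvisited:
        here = tour[-1]
        nearest = min(unvisited, key=lambda city: dist[here][city])
        tour.append(nearest)
        unvisited.remove(nearest)
    tour.append(start)                       # return to the starting city
    cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return tour, cost

# Invented symmetric distance table, for illustration only.
DIST = {
    'A': {'B': 1, 'C': 4, 'D': 3},
    'B': {'A': 1, 'C': 2, 'D': 5},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'A': 3, 'B': 5, 'C': 1},
}
```

Each step scans the unvisited cities once, giving the O(n²) bound quoted above, against O(n!) for trying every tour.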
HILL CLIMBING
Searching for a goal state = Climbing to the top of a hill
HILL CLIMBING
Generate-and-test plus a direction in which to move.
A heuristic function is used to estimate how close a given state is to a goal state.
SIMPLE HILL CLIMBING
Algorithm:
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to be applied:
- Select and apply a new operator.
- Evaluate the new state:
  - if it is a goal state, quit;
  - if it is better than the current state, make it the new current state.
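The loop above, sketched on a toy problem (maximizing -(x - 3)² over the integers, an example invented here; the goal-state check is simplified away, so the loop stops only at a local maximum):

```python
def simple_hill_climb(state, neighbours, value):
    """Simple hill climbing: apply operators in order and move to the
    FIRST successor that is better than the current state; stop when
    no successor improves on it."""
    while True:
        for nxt in neighbours(state):
            if value(nxt) > value(state):
                state = nxt      # better than the current state: move there
                break
        else:
            return state         # no operator improved: a local maximum
```

Starting from 0 with neighbours x - 1 and x + 1, the climb moves 0 → 1 → 2 → 3 and stops at the peak, x = 3.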
SIMPLE HILL CLIMBING
The evaluation function is a way to inject task-specific knowledge into the control process.
STEEPEST-ASCENT HILL CLIMBING (GRADIENT SEARCH)
Considers all the moves from the current state and selects the best one as the next state.
STEEPEST-ASCENT HILL CLIMBING (GRADIENT SEARCH)
Algorithm:
1. Evaluate the initial state.
2. Loop until a solution is found or a complete iteration produces no change to the current state:
- Let SUCC be a state such that any possible successor of the current state will be better than SUCC (the worst state).
- For each operator that applies to the current state, evaluate the new state:
  - if it is a goal state, quit;
  - if it is better than SUCC, set SUCC to this state.
- If SUCC is better than the current state, set the current state to SUCC.
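A minimal sketch of steepest ascent on a toy problem (maximizing -(x - 3)² over the integers, an example invented here; initializing SUCC to the current state itself stands in for "the worst state"):

```python
def steepest_ascent(state, neighbours, value):
    """Steepest-ascent hill climbing: evaluate ALL successors of the
    current state, then move to the best one, but only if it improves
    on the current state."""
    while True:
        succ = max(neighbours(state), key=value, default=state)
        if value(succ) <= value(state):
            return state         # no successor beats the current state
        state = succ
```

Starting from 0 with neighbours x - 1 and x + 1, every iteration compares both successors and moves toward the peak at x = 3.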
HILL CLIMBING: DISADVANTAGES
Local maximum: a state that is better than all of its neighbours, but not better than some other states farther away.
HILL CLIMBING: DISADVANTAGES
Plateau: a flat area of the search space in which all neighbouring states have the same value.
HILL CLIMBING: DISADVANTAGES
Ways Out
- Backtrack to some earlier node and try going in a different direction.
- Make a big jump to try to get into a new section of the search space.
- Move in several directions at once.
HILL CLIMBING: DISADVANTAGES
Hill climbing is a local method: it decides what to do next by looking only at the “immediate” consequences of its choices.
Global information might be encoded in heuristic functions.
BEST-FIRST SEARCH
From depth-first search: not all competing branches have to be expanded.
From breadth-first search: the search does not get trapped on dead-end paths.
Combining the two: follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one.
BEST-FIRST SEARCH
(Figure: successive snapshots of a best-first search tree rooted at A; at each step the open node with the lowest heuristic value is expanded next.)
BEST-FIRST SEARCH
OPEN: nodes that have been generated but have not yet been examined. This is organized as a priority queue.
CLOSED: nodes that have already been examined. Whenever a new node is generated, check whether it has been generated before.
BEST-FIRST SEARCH
Algorithm:
1. OPEN = {initial state}.
2. Loop until a goal is found or there are no nodes left in OPEN:
- Pick the best node in OPEN.
- Generate its successors.
- For each successor:
  - if it has not been generated before, evaluate it, add it to OPEN, and record its parent;
  - if it has been generated before, change the parent if the new path is better, and update its successors.
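A sketch of the loop with OPEN as a priority queue ordered by a heuristic h. The parent-switching refinement for re-generated nodes is simplified away here; a CLOSED-set check stands in for it:

```python
import heapq

def best_first(start, successors, h, is_goal):
    """Best-first search: always expand the OPEN node with the best
    (lowest) heuristic value; CLOSED holds nodes already examined."""
    open_heap = [(h(start), start, [start])]   # (priority, node, path)
    closed = set()
    while open_heap:
        _, node, path = heapq.heappop(open_heap)
        if is_goal(node):
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in successors(node):
            if succ not in closed:
                heapq.heappush(open_heap, (h(succ), succ, path + [succ]))
    return None
```

On an invented toy problem (reach 7 from 0 with moves +1 and +3, using distance-to-7 as h), the search heads straight for the goal via 0 → 3 → 6 → 7.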
BEST-FIRST SEARCH
Greedy search: h(n) = estimated cost of the cheapest path from node n to a goal state. Neither optimal nor complete.
Uniform-cost search: g(n) = cost of the cheapest path from the initial state to node n. Optimal and complete, but very inefficient.
PROBLEM REDUCTION
AND-OR Graphs
Example: the goal “Acquire TV set” decomposes into either the goal “Steal TV set”, or the AND pair of goals “Earn some money” and “Buy TV set”.
Algorithm AO* (Martelli & Montanari 1973; Nilsson 1980)
PROBLEM REDUCTION: AO*
(Figure: successive snapshots of an AO* graph; nodes carry cost estimates that are revised as the graph below them is expanded.)
PROBLEM REDUCTION: AO*
(Figure: an AO* graph in which expanding node H, with cost 9, forces revised cost estimates to be propagated back up through its ancestors.)
Necessary backward propagation