Artificial Intelligence and Expert System
PREETI SONI, Lecturer (CSE), RSRRCET, Bhilai, C.G.


Page 1: cnt Unit-01.pptx


Artificial Intelligence and Expert System

PREETI SONI, Lecturer (CSE)

RSRRCET, Bhilai, C.G.


Page 2: cnt Unit-01.pptx

UNIT-II


Puja Srivastava, Asst. Prof. (CSE), RSRRCET, Bhilai, C.G.

Uninformed search methods lack problem-specific knowledge. Such methods are prohibitively inefficient in many cases. Using problem-specific knowledge can dramatically improve the search speed. In this unit we will study some informed search algorithms that use problem-specific heuristics. At the heart of such algorithms lies the concept of a heuristic function.

Page 3: cnt Unit-01.pptx

In heuristic search or informed search, heuristics are used to identify the most promising search path.

A heuristic function is a function that ranks alternatives in various search algorithms at each branching step, based on the available information (heuristically), in order to decide which branch to follow during a search. A heuristic function estimates the value of a state; it is an approximation used to reduce the search effort.

Heuristic Knowledge: knowledge of approaches that are likely to work, or of properties that are likely to be true (but not guaranteed).

The heuristic function is also known as the objective function.

Heuristic Search


Page 4: cnt Unit-01.pptx

A heuristic function at a node n is an estimate of the optimum cost from the current node to a goal. It is denoted by h(n).

HEURISTIC FUNCTIONS:

h(n) = estimated cost of the cheapest path from node n to a goal node

f: States --> Numbers

f(T) : expresses the quality of the state T

Heuristic functions allow problem-specific knowledge to be expressed, and can be incorporated in a generic way into search algorithms.

Heuristic function…


Page 5: cnt Unit-01.pptx

Example (1)


Example 1: We want a path from Kolkata to Guwahati

A heuristic for this problem may be the straight-line distance between the current city and Guwahati.

h(Kolkata) = euclideanDistance(Kolkata, Guwahati)
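A minimal sketch of this heuristic in Python. The (x, y) coordinates below are hypothetical, purely for illustration; a real implementation would use actual map coordinates.

```python
import math

# Hypothetical planar coordinates for each city (illustrative values only).
coords = {"Kolkata": (0.0, 0.0), "Guwahati": (500.0, 260.0)}

def h(city, goal="Guwahati"):
    """Straight-line (Euclidean) distance heuristic from city to the goal."""
    (x1, y1), (x2, y2) = coords[city], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)

print(round(h("Kolkata"), 1))
```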

Page 6: cnt Unit-01.pptx


Example (2): 8-puzzle. The misplaced-tiles heuristic is the number of tiles out of place.

Page 7: cnt Unit-01.pptx

f1(T) = the number of correctly placed tiles on the board:


Most often, ‘distance to goal’ heuristics are more useful!

f2 = 4 for the example board shown on the slide (board figure omitted).

f2(T) = number of incorrectly placed tiles on the board: gives a (rough!) estimate of how far we are from the goal.


Page 8: cnt Unit-01.pptx

f3(T) = the sum of (the horizontal + vertical distance that each tile is away from its final destination):
◦ gives a better estimate of the distance from the goal node

Examples (3): Manhattan distance

f3 = 1 + 4 + 2 + 3 = 10 for the example board shown on the slide (board figure omitted).
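The two 8-puzzle heuristics can be sketched as follows. Boards are flattened to 9-tuples with 0 as the blank; the goal layout below is an assumed one for illustration, since the slide's board figure did not survive extraction.

```python
# Boards as tuples of 9 entries, 0 = blank, read row by row.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout, for illustration

def misplaced(board, goal=GOAL):
    """f2: number of non-blank tiles out of place."""
    return sum(1 for t, g in zip(board, goal) if t != 0 and t != g)

def manhattan(board, goal=GOAL):
    """f3: sum of horizontal + vertical distances of each tile from its goal cell."""
    total = 0
    for i, t in enumerate(board):
        if t == 0:
            continue
        j = goal.index(t)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

state = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # tiles 7 and 8 shifted one cell
print(misplaced(state), manhattan(state))
```

As the slides note, f3 usually gives the tighter estimate: it counts how far each tile must travel, not merely whether it is out of place.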


Page 9: cnt Unit-01.pptx

F(T) = (Value count of black pieces) - (Value count of white pieces)

Examples (4): Chess

f = sum of v(piece) over one side's pieces minus sum of v(piece) over the other side's pieces (chess piece symbols from the slide omitted).


Page 10: cnt Unit-01.pptx

Heuristic - a “rule of thumb” used to help guide search

◦ often, something learned experientially and recalled when needed

Given a search space, a current state, and a goal state:

generate all successor states and evaluate each with our heuristic function

select the move that yields the best heuristic value

Heuristic Search

Page 11: cnt Unit-01.pptx

Methods that use a heuristic function to provide specific knowledge about the problem:
◦ Hill Climbing
◦ Best First Search
◦ A*
◦ AO*
◦ Beam Search
◦ Greedy Search

Heuristic Search Algorithm

Page 12: cnt Unit-01.pptx

In the simple hill climbing method, the first successor state that is better than the current state is selected.

In gradient search, we consider all the moves from the current state and select the best one as the next state.

There is a trade-off between the time required to select a move and the number of moves required to reach a solution; this must be considered while deciding which method will work better for a particular problem.

Usually, the time required to select a move is longer for gradient search, and the number of moves required to reach a solution is usually longer for basic hill climbing.

Steepest Ascent Hill Climbing / Gradient Search

Page 13: cnt Unit-01.pptx

Search Tree for hill climbing procedure

(Tree figure omitted: a root with successors A, B, C valued 8, 3, 7; further successors D, E, F valued 2.7, 2, 2.9; the goal node is marked.)

Page 14: cnt Unit-01.pptx

1. Put the initial node on a list START.
2. If (START is empty) or (START == GOAL), terminate the search.
3. Remove the first node from START. Call this node ‘a‘.
4. If (a == GOAL), terminate the search with success.
5. Else, if node ‘a‘ has successors, generate all of them. Find out how far they are from the goal node. Sort them by remaining distance from the goal and add them to the beginning of START.
6. Go to step 2.

Steepest Ascent Hill Climbing Algorithm
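The core idea can be sketched compactly: move to the best neighbour while it improves on the current state, stopping at a (possibly local) maximum. This is a minimal illustration, not a transcription of the list-based steps above; the toy successor and value functions are invented for the example.

```python
def steepest_ascent(start, successors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbour while it improves on the current state."""
    current = start
    while True:
        neighbours = successors(current)
        if not neighbours:
            return current
        best = max(neighbours, key=value)
        if value(best) <= value(current):
            return current          # local maximum, plateau, or ridge
        current = best

# Toy example: climb integers toward 10 (value = negative distance from 10).
result = steepest_ascent(
    0,
    lambda s: [s - 1, s + 1],
    lambda s: -abs(s - 10),
)
print(result)
```

On this toy landscape there are no local maxima, so the climb reaches the global optimum; on a real search space the loop can stop early, which is exactly the failure mode the next slides discuss.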

Page 15: cnt Unit-01.pptx

Both simple hill climbing and gradient search may fail to find a solution: the algorithm may terminate not by finding the goal state but by reaching a state from which no better state can be generated.

This will happen when the program has reached either: 1. Local maximum 2. Plateau 3. Ridge

Problem with Hill Climbing Technique

Page 16: cnt Unit-01.pptx

1. LOCAL MAXIMUM
A local maximum is a state that is better than all its neighbours but not better than some other state farther away. At a local maximum, all moves appear to make things worse.
Remedy: backtrack to some earlier node and try to go in a different direction.

Problem with Hill Climbing Technique (Cont..)

Page 17: cnt Unit-01.pptx

Problem with Hill Climbing Technique...

2. PLATEAU

• A flat area of the search space in which all neighbouring states have the same value.

• On a plateau it is not possible to determine the best direction in which to move by making a local comparison.

• Remedy

• Make a big jump in some direction and try to get to a new section of the search space.

Page 18: cnt Unit-01.pptx

3. RIDGE
A ridge is a special kind of local maximum. The orientation of the high region, compared to the set of available moves, makes it impossible to climb up.
Remedy: apply two or more rules before doing the test.

Problem with Hill Climbing Technique...

Page 19: cnt Unit-01.pptx

Advantages of DFS:
1. Requires less memory.
2. It allows finding a solution without examining all the nodes.
Advantages of BFS:
1. It doesn‘t get trapped at a dead end.
2. If there is a solution, it will definitely find it, and if there is more than one solution, it will find the minimal one.

Best First Search

Page 20: cnt Unit-01.pptx

Best first search combines the advantages of BFS and DFS: it follows a single path at a time, but changes path whenever some competing path looks more promising than the current one.

In best first search one move is selected, but the other nodes are kept in consideration so that they can be examined or expanded later on if the selected path becomes less promising.

f(n) = g(n) + h(n)
g(n) = measure of the cost of getting from the initial state to the current node
h(n) = estimate of the cost of getting from the current node to the goal node

Best First Search (cont..)

Page 21: cnt Unit-01.pptx

1. Put the initial node on a list START.
2. If (START is empty) or (START == GOAL), terminate the search.
3. Remove the first node from START. Call this node ‘a‘.
4. If (a == GOAL), terminate the search with success.
5. Else, if node ‘a‘ has successors, generate all of them. Find out how far they are from the goal node. Sort all the children generated so far by the remaining distance from the goal.
6. Name this list START1.
7. Replace START with START1.
8. Go to step 2.

Algorithm of Best First Search
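The steps above can be sketched with a priority queue ordered by h(n). The graph and heuristic values below loosely follow the sample tree on the next slide (node names and h-values taken from its trace); a heap replaces the explicit sort of the START list.

```python
import heapq

def best_first_search(start, goal, successors, h):
    """Best-first search: always expand the open node with the smallest
    heuristic estimate h(n); unexpanded children stay on the open list."""
    open_list = [(h(start), start)]
    visited = set()
    while open_list:
        _, node = heapq.heappop(open_list)
        if node == goal:
            return node
        if node in visited:
            continue
        visited.add(node)
        for child in successors(node):
            if child not in visited:
                heapq.heappush(open_list, (h(child), child))
    return None

# Graph and heuristic values following the sample tree's trace.
graph = {"S": ["A", "B", "C"], "A": ["D", "E"], "B": ["F", "G"],
         "C": ["H"], "H": ["I", "J"], "I": ["K", "L", "M"]}
hvals = {"S": 13, "A": 3, "B": 6, "C": 5, "D": 9, "E": 8, "F": 12,
         "G": 14, "H": 7, "I": 5, "J": 6, "K": 1, "L": 0, "M": 2}
found = best_first_search("S", "L", lambda n: graph.get(n, []),
                          lambda n: hvals[n])
print(found)
```

The expansion order this produces (S, A, C, B, H, I, then the goal L) matches the trace table shown with the sample tree.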

Page 22: cnt Unit-01.pptx

A Sample tree for best first search

(Tree figure omitted: Start Node S with successors A, B, C; further nodes D through M carrying heuristic values 3, 6, 5, 9, 8, 12, 14, 7, 5, 6, 1, 0, 2; node L, with value 0, is the Goal Node.)

Page 23: cnt Unit-01.pptx

Best first search

S.No. | Node expanded | Children | Available nodes | Node chosen
1. | S | (A:3), (B:6), (C:5) | (A:3), (B:6), (C:5) | (A:3)
2. | A | (D:9), (E:8) | (B:6), (C:5), (D:9), (E:8) | (C:5)
3. | C | (H:7) | (B:6), (D:9), (E:8), (H:7) | (B:6)
4. | B | (F:12), (G:14) | (D:9), (E:8), (H:7), (F:12), (G:14) | (H:7)
5. | H | (I:5), (J:6) | (D:9), (E:8), (F:12), (G:14), (I:5), (J:6) | (I:5)
6. | I | (K:1), (L:0), (M:2) | (D:9), (E:8), (F:12), (G:14), (J:6), (K:1), (L:0), (M:2) | (L:0)

Search stops as goal is reached

Page 24: cnt Unit-01.pptx

A* utilizes both evaluation function values and cost function values.

Fitness Number = evaluation function value + cost function value involved from the start node to the target node.

A* Algorithm

Page 25: cnt Unit-01.pptx

Example

(Tree figure omitted: the same sample tree as before, now annotated with fitness values f = g + h at each node.)

Page 26: cnt Unit-01.pptx

1. Put the initial node on a list START.
2. If (START is empty) or (START == GOAL), terminate the search.
3. Remove the first node from START. Call this node ‘a‘.
4. If (a == GOAL), terminate the search with success.
5. Else, if node ‘a‘ has successors, generate all of them. Estimate the fitness number of each successor by totaling its evaluation function value and cost function value. Sort the list by fitness number.
6. Name this list START1.
7. Replace START with START1.
8. Go to step 2.

Algorithm of A*
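A priority-queue sketch of A*, ordering the open list by the fitness number f(n) = g(n) + h(n). This is a standard formulation rather than a literal transcription of the list-based steps above, and the small weighted graph and heuristic values are invented for illustration.

```python
import heapq

def a_star(start, goal, successors, h):
    """A*: expand the open node with the smallest fitness number
    f(n) = g(n) + h(n), where g is the path cost so far."""
    open_list = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for child, step in successors(node):
            g2 = g + step
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(open_list,
                               (g2 + h(child), g2, child, path + [child]))
    return None, float("inf")

# Hypothetical weighted graph with an admissible heuristic.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
hvals = {"S": 4, "A": 5, "B": 1, "G": 0}
path, cost = a_star("S", "G", lambda n: graph.get(n, []), lambda n: hvals[n])
print(path, cost)
```

Note how g steers the search here: the greedy route S, A looks cheap at first (edge cost 1), but the fitness number exposes S, B, G as the cheaper complete path.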

Page 27: cnt Unit-01.pptx

An arc connecting different branches marks an AND relationship; a tree containing such arcs is called an AND tree.

Between a complex problem and its sub-problems, two kinds of relationships can exist:
◦ AND relationship
◦ OR relationship

In an AND relationship, the solution to the problem is obtained by solving all the sub-problems.

In an OR relationship, the solution to the problem is obtained by solving any one of the sub-problems.

AND or OR

Page 28: cnt Unit-01.pptx

Example

(AND-OR graph figure omitted: a root node with sub-problems A, B, C, D and arc costs 5, 4, 7, 6, 9.)

Page 29: cnt Unit-01.pptx

Step 1: Create an initial graph GRAPH with a single node NODE. Compute the evaluation function value of NODE.

Step 2: Repeat until NODE is solved or its cost reaches a value so high that it cannot be expanded.
◦ Step 2.1: Select a node NODE1 from NODE. Keep track of the path.
◦ Step 2.2: Expand NODE1 by generating its children. For children which are not ancestors of NODE1, evaluate the evaluation function value. If a child node is a terminal node, label it END_NODE.
◦ Step 2.3: Generate a set of nodes DIFF_NODES containing only NODE1.
◦ Step 2.4: Repeat until DIFF_NODES is empty.

Step 2.4.1: Choose a node CHOOSE_NODE from DIFF_NODES such that none of the descendants of CHOOSE_NODE is in DIFF_NODES.

Step 2.4.2: Estimate the cost of each connector emerging from CHOOSE_NODE. This cost is the total of the evaluation function value and the cost of the arc.

Step 2.4.3: Find the minimal value and mark the connector through which the minimum is achieved, overwriting the previous mark if it is different.

Step 2.4.4: If all the output nodes of the marked connector are labeled END_NODE, label CHOOSE_NODE as OVER.

Step 2.4.5: If CHOOSE_NODE has been marked OVER or its cost has changed, add to the set DIFF_NODES all ancestors of CHOOSE_NODE.

AO* Algorithm

Page 30: cnt Unit-01.pptx

AO*

(AO* worked-example figures omitted: node A is expanded into successors B, C, D and later E through H, with cost estimates revised at each step.)

Page 31: cnt Unit-01.pptx

Many AI problems can be viewed as problems of constraint satisfaction.

As compared with a straightforward search procedure, viewing a problem as one of constraint satisfaction can substantially reduce the amount of search.

Operates in a space of constraint sets. Initial state contains the original constraints given in the problem. A goal state is any state that has been constrained “enough”.

Constraint Satisfaction

Page 32: cnt Unit-01.pptx

Two-step process:

1. Constraints are discovered and propagated as far as possible.

2. If there is still not a solution, then search begins, adding new constraints.

Two kinds of rules:

1. Rules that define valid constraint propagation.

2. Rules that suggest guesses when necessary.

Constraint Satisfaction

Page 33: cnt Unit-01.pptx

Cryptarithmetic puzzle:

Constraint Satisfaction

SEND MORE MONEY

Page 34: cnt Unit-01.pptx

Rules for propagating constraints generates the following constraints:

M = 1, since two single-digit numbers plus a carry can not total more than 19.

S = 8 or 9, since S+M+C3 > 9 (to generate the carry) and M = 1, S+1+C3>9, so S+C3 >8 and C3 is at most 1.

O = 0 or 1, since S + M(1) + C3(<=1) must be at least 10 to generate a carry, and it can be at most 11. But M is already 1, so O must be 0.

N = E or E+1, depending on the value of C2. But N cannot have the same value as E. So N = E+1 and C2 is 1.

In order for C2 to be 1, the sum of N + R + C1 must be greater than 9, so N + R must be greater than 8.

N + R cannot be greater than 18, even with a carry in so E cannot be 9.

Solution

Page 35: cnt Unit-01.pptx

Suppose E is assigned the value 2. The constraint propagator now observes that: N = 3, since N = E + 1. R = 8 or 9, since R + N(3) + C1(1 or 0) = 2 or 12; but since N is already 3, the sum of these nonnegative numbers cannot be less than 3, so R + 3 + (0 or 1) = 12 and R = 8 or 9.

2 + D = Y or 2 + D = 10 + Y, from the sum in the rightmost column.

Solution...
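The constraint reasoning above can be cross-checked with a brute-force search over digit assignments. As a small optimization, M is fixed to 1 here, using the first constraint derived above; the function name and structure are illustrative.

```python
from itertools import permutations

def solve_send_more_money():
    """Brute-force SEND + MORE = MONEY with all letters distinct;
    M is fixed to 1 (the carry into the leading column, derived above)."""
    others = "SENDORY"
    digits = [d for d in range(10) if d != 1]   # 1 is taken by M
    for perm in permutations(digits, len(others)):
        a = dict(zip(others, perm))
        a["M"] = 1
        if a["S"] == 0:                          # no leading zero
            continue
        send = int("".join(str(a[c]) for c in "SEND"))
        more = int("".join(str(a[c]) for c in "MORE"))
        money = int("".join(str(a[c]) for c in "MONEY"))
        if send + more == money:
            return send, more, money
    return None

print(solve_send_more_money())   # the unique solution: 9567 + 1085 = 10652
```

Note that the propagated constraints (M = 1, O = 0, N = E + 1, and so on) prune this search enormously; the brute force only confirms the answer the constraint reasoning converges on.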

Page 36: cnt Unit-01.pptx

(Search tree for SEND + MORE = MONEY, flattened from the slide figure.)

Initial state: no two letters have the same value; the sum of the digits must be as shown.

Start: M = 1, S = 8 or 9, O = 0, N = E + 1, C2 = 1, N + R > 8, E ≠ 9.

Guess E = 2: N = 3, R = 8 or 9, 2 + D = Y or 2 + D = 10 + Y.

Branch C1 = 0: 2 + D = Y, N + R = 10 + E, R = 9, S = 8; e.g. M=1, R=9, S=8, E=2, N=3, O=0, D=4, Y=6.
Branch C1 = 1: 2 + D = 10 + Y, D = 8 + Y, D = 8 or 9.
  D = 8 gives Y = 0: conflict.
  D = 9 gives Y = 1: conflict.

Page 37: cnt Unit-01.pptx

Why has game playing been a focus of AI?
Games have well-defined rules, which can be implemented in programs.
The rules of a game are limited, hence extensive amounts of domain-specific knowledge are seldom needed.
Games provide a structured task wherein success or failure can be measured with least effort.
The interfaces required are usually simple.
For the human expert, it is easy to explain the rationale for a move, unlike in other domains.

Games in Artificial Intelligence

Page 38: cnt Unit-01.pptx

Usual conditions:

Each player has a global view of the board

Zero-sum game: any gain for one player is a loss for the other

Two-player games

Page 39: cnt Unit-01.pptx

Components of a game:
an initial state
for each state, a list of legal moves and consequent states
a test to determine if a state is a terminal state (the end of the game)
a utility function:

◦ computes a single numeric value for a terminal state
◦ win, lose, or draw, and sometimes by how much.

Two-player games

Page 40: cnt Unit-01.pptx

It is a depth-first, depth-limited search procedure.
The 1st player, MAX, tries to maximize the utility function.
The 2nd player, MIN, tries to minimize the utility function.
It assumes the opponent always makes the best possible move
◦ not always a safe assumption against a human player.
Under such conditions it gives the best possible outcome: it maximizes the worst-case outcome.

Minimax Strategy

Page 41: cnt Unit-01.pptx

Let A be the initial state of the game. The plausible move generator produces three children for that state, and the static evaluation function assigns the values given along with each of the states.

It is assumed that the static evaluation function returns a value between -20 and +20, wherein a value of +20 indicates a win for the maximizer and a value of -20 a win for the minimizer.

A value of 0 (zero) indicates a tie or draw. The maximizer always tries to move to a position where the static evaluation function value is the maximum positive value.

Example

Page 42: cnt Unit-01.pptx

Example...

(Figure omitted: root A with three children B, C, D carrying static evaluation values 2, -6, 7.)

Page 43: cnt Unit-01.pptx

The current node is the root, a MAX node; to make a decision at the root, look ahead to the consequent states resulting from legal moves.

Game trees

Page 44: cnt Unit-01.pptx

edges are legal moves nodes are game states leaves are terminal states

Game trees

Page 45: cnt Unit-01.pptx

when a leaf node is evaluated, a large value is good for player “MAX”; a small value is good for player “MIN”

which player is making the move alternates between adjacent levels (level 0 MAX, level 1 MIN, level 2 MAX, etc.)

Game trees

Page 46: cnt Unit-01.pptx

minimaxValue(n) =
◦ utility(n), if n is a terminal state
◦ max of minimaxValue(s) over all successors s, if n is a MAX node
◦ min of minimaxValue(s) over all successors s, if n is a MIN node

Minimax Algorithm
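The recursive definition above translates almost line for line into code. The tiny two-ply game tree below is invented for illustration; leaves carry static evaluation values.

```python
def minimax(node, is_max, children, utility):
    """Recursive minimax value, following the definition above."""
    kids = children(node)
    if not kids:                      # terminal state
        return utility(node)
    values = [minimax(k, not is_max, children, utility) for k in kids]
    return max(values) if is_max else min(values)

# A tiny game tree as a dict; leaves D-G carry static values.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
leaf = {"D": 3, "E": 5, "F": 2, "G": 9}
best = minimax("A", True, lambda n: tree.get(n, []), lambda n: leaf[n])
print(best)   # MIN makes B worth 3 and C worth 2; MAX at A picks 3
```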

Page 47: cnt Unit-01.pptx

the purpose of exploring the game tree with minimax is to avoid potential loss

due to time constraints, it may not be possible to explore a path to a leaf

going one level deeper may have revealed a “trap”—a sudden negative turn of events!

Horizon effect

Page 48: cnt Unit-01.pptx

Game tree with depth d and branching factor b: full minimax requires Θ(b^d) evaluations.

Limits of minimax

Page 49: cnt Unit-01.pptx

minimax on the game of chess:
average number of alternative legal moves: 35
average number of moves for one player over the course of a game: 50
number of nodes: 35^100 ≈ 10^154
number of distinct nodes: about 10^40

Limits of minimax

Page 50: cnt Unit-01.pptx

Minimax looks at every line of play, no matter how unlikely. Can we retain optimality without this drawback? Yes: smarter searching…

Limits of minimax

Page 51: cnt Unit-01.pptx

improves minimax algorithm by pruning needless evaluations

computes same result without searching the entire tree

don’t explore a move which is inferior to a known alternative

if cannot search to terminal state, use a heuristic to approximate the eventual terminal state

Alpha-Beta Pruning

Page 52: cnt Unit-01.pptx

Alpha: the minimal score that player MAX is guaranteed to attain (the best known so far; improvement is still possible, so this is the minimum attainable).

Beta: the best score that player MIN can attain so far (the lowest score known so far; a lower score may yet be found, so this is the maximum attainable).

Alpha-Beta Pruning
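A sketch of minimax with alpha-beta pruning: a branch is abandoned as soon as alpha >= beta, since its value can no longer influence the choice at the root. The small tree is invented; in it, once the left subtree guarantees MAX a value of 6, the first leaf of the right subtree (2) triggers a cutoff and its sibling is never examined.

```python
def alphabeta(node, is_max, children, utility,
              alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning: prune once alpha >= beta."""
    kids = children(node)
    if not kids:                      # terminal state
        return utility(node)
    if is_max:
        value = float("-inf")
        for k in kids:
            value = max(value, alphabeta(k, False, children, utility,
                                         alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                 # beta cutoff
        return value
    value = float("inf")
    for k in kids:
        value = min(value, alphabeta(k, True, children, utility,
                                     alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                     # alpha cutoff
    return value

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
leaf = {"D": 6, "E": 8, "F": 2, "G": 9}
print(alphabeta("A", True, lambda n: tree.get(n, []), lambda n: leaf[n]))
```

The pruned result is always identical to full minimax; only the amount of work differs.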

Page 53: cnt Unit-01.pptx

Pruning example 1: A Beta cutoff

(Figure omitted: at a MIN node where Beta = 6 has been established, a successor valued ≥ 8 cannot lower the result below 6, so the remaining successors are pruned. Beta: best score that MIN can attain so far.)

Page 54: cnt Unit-01.pptx

Pruning example 2: An Alpha cutoff

(Figure omitted: with Alpha = 6 established at a MAX node, a MIN successor whose value is shown to be ≤ 5 cannot be chosen, so its remaining children are pruned. Alpha: best score that MAX can attain so far.)

Page 55: cnt Unit-01.pptx

Typically, good move ordering can reduce the effective branching factor to the square root of the full branching factor:

from b^d to b^(d/2) nodes examined.

Deep Blue (the chess-playing program that beat Garry Kasparov): alpha-beta pruning reduced the effective branching factor from 35 to about 6!

Effectiveness of pruning

Page 56: cnt Unit-01.pptx

If possible, consider the best successors first: cutoffs will occur earlier, making it possible to search a smaller portion of the tree. This requires evaluation of interior nodes (sort on the evaluation function); perhaps a neural net could learn a better evaluation function?

Refinement

Page 57: cnt Unit-01.pptx

tic-tac-toe connect four checkers chess amazon

nim othello go hex

Standard Applications

Each player knows the total game state.

Page 58: cnt Unit-01.pptx

backgammon (dice)

Application with element of randomness

Page 59: cnt Unit-01.pptx

scrabble bridge poker battleship kriegspiel

Applications with incomplete knowledge

No player knows the total game state.

Page 60: cnt Unit-01.pptx

Waiting for Quiescence

Secondary Search

Using Book Moves

Alternatives to Minimax

Additional Refinements

Page 61: cnt Unit-01.pptx