Logical equivalence
- Two sentences are logically equivalent iff they are true in the same models: α ≡ β iff α ⊨ β and β ⊨ α
Validity and satisfiability

- A sentence is valid (a tautology) if it is true in all models
  - e.g., True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B
- Validity is connected to inference via the Deduction Theorem: KB ⊨ α if and only if (KB ⇒ α) is valid
- A sentence is satisfiable if it is true in some model, e.g., A ∨ B
- A sentence is unsatisfiable if it is false in all models, e.g., A ∧ ¬A
- Satisfiability is connected to inference via the following: KB ⊨ α if and only if (KB ∧ ¬α) is unsatisfiable (known as proof by contradiction)
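Both connections can be checked by brute-force model enumeration. A minimal sketch, not from the slides; the function `entails` and the lambda encoding of sentences are illustrative choices:

```python
from itertools import product

def entails(kb, alpha, symbols):
    """Check KB |= alpha by enumerating all models (truth-table method).

    kb and alpha are Python functions from a model (dict symbol -> bool)
    to bool; the symbol names here are illustrative.
    """
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False          # found a model of KB where alpha is false
    return True

# KB = {A => B, A};  query: B   (A => B written as (not A) or B)
kb = lambda m: ((not m['A']) or m['B']) and m['A']
print(entails(kb, lambda m: m['B'], ['A', 'B']))       # True
# Equivalently, KB |= alpha iff KB ∧ ¬alpha is unsatisfiable.
print(entails(kb, lambda m: not m['B'], ['A', 'B']))   # False
```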
Inference rules
- Modus Ponens: from A ⇒ B and A, infer B
  - Example: “raining implies soggy courts”, “raining”. Infer: “soggy courts”
Example
- KB: {A ⇒ B, B ⇒ C, A}
- Is C entailed? Yes:

  Given      | Rule         | Inferred
  -----------|--------------|---------
  A ⇒ B, A   | Modus Ponens | B
  B ⇒ C, B   | Modus Ponens | C
Inference rules (cont.)
- Modus Tollens: from A ⇒ B and ¬B, infer ¬A
  - Example: “raining implies soggy courts”, “courts not soggy”. Infer: “not raining”
- And-elimination: from A ∧ B, infer A
Reminder: The Wumpus World
- Performance measure
  - gold: +1000, death: -1000
  - -1 per step, -10 for using the arrow
- Environment
  - Squares adjacent to wumpus: smelly
  - Squares adjacent to pit: breezy
  - Glitter iff gold is in the same square
  - Shooting kills wumpus if you are facing it
  - Shooting uses up the only arrow
  - Grabbing picks up gold if in same square
- Sensors: Stench, Breeze, Glitter, Bump
- Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot
Inference in the wumpus world
Given:
1. ¬B1,1
2. B1,1 ⇔ (P1,2 ∨ P2,1)

Let’s make some inferences:
1. (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)  (by definition of the biconditional)
2. (P1,2 ∨ P2,1) ⇒ B1,1  (And-elimination)
3. ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)  (equivalence with contrapositive)
4. ¬(P1,2 ∨ P2,1)  (Modus Ponens)
5. ¬P1,2 ∧ ¬P2,1  (De Morgan’s rule)
6. ¬P1,2  (And-elimination)
7. ¬P2,1  (And-elimination)
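The same conclusions can be checked semantically by enumerating models of the two given sentences. A small sketch, assuming an illustrative encoding of the KB as a Python predicate:

```python
from itertools import product

def models(formula, symbols):
    """All models (dicts symbol -> bool) that satisfy the formula."""
    return [dict(zip(symbols, vals))
            for vals in product([True, False], repeat=len(symbols))
            if formula(dict(zip(symbols, vals)))]

syms = ['B11', 'P12', 'P21']
# KB: ¬B1,1  and  B1,1 <=> (P1,2 ∨ P2,1)
kb = lambda m: (not m['B11']) and (m['B11'] == (m['P12'] or m['P21']))
# Every model of the KB makes both ¬P1,2 and ¬P2,1 true,
# matching steps 6 and 7 of the derivation.
assert all(not m['P12'] and not m['P21'] for m in models(kb, syms))
print(models(kb, syms))  # the single model: B11, P12, P21 all false
```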
Inference using inference rules
- Inference using inference rules is sound.
- Is it complete? That depends on whether we have a rich enough set of rules.
- Inference is a search problem.
- Resolution: a sound inference rule that, when coupled with a complete search method, yields a complete inference algorithm.
More inference
- Recall that when we were at (2,1) we could not decide on a safe move, so we backtracked and explored (1,2), which yielded ¬B1,2. This yields ¬P2,2 ∧ ¬P1,3.
- Now we can consider the implications of B2,1.
[Figure: snapshots of the agent’s progress through the 4x4 wumpus world, marking possible pits (P?), a confirmed pit (P!), and the confirmed wumpus (W!). Legend: A = Agent, B = Breeze, G = Glitter/Gold, P = Pit, S = Stench, W = Wumpus, OK = safe square, V = visited.]
Resolution Rule
1. ¬P2,2, ¬P1,1
2. B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
3. B2,1 ⇒ (P1,1 ∨ P2,2 ∨ P3,1)  (biconditional elimination)
4. P1,1 ∨ P2,2 ∨ P3,1  (Modus Ponens)
5. P1,1 ∨ P3,1  (resolution rule)
6. P3,1  (resolution rule)

The resolution rule: if there is a pit in (1,1) or (3,1), and it’s not in (1,1), then it’s in (3,1). That is, from P1,1 ∨ P3,1 and ¬P1,1, infer P3,1.
Resolution Rule
Unit Resolution inference rule: from

  l1 ∨ … ∨ lk,   m

infer

  l1 ∨ … ∨ li-1 ∨ li+1 ∨ … ∨ lk

where li and m are complementary literals.

Full Resolution: from

  l1 ∨ … ∨ lk,   m1 ∨ … ∨ mn

infer

  l1 ∨ … ∨ li-1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj-1 ∨ mj+1 ∨ … ∨ mn

where li and mj are complementary literals.
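The resolution step can be sketched in a few lines. This assumes an illustrative encoding, not one from the slides: a clause is a frozenset of string literals, and '-P' is the negation of 'P':

```python
def resolve(ci, cj):
    """All resolvents of two clauses (frozensets of literals)."""
    resolvents = []
    for lit in ci:
        # complementary literal of lit
        comp = lit[1:] if lit.startswith('-') else '-' + lit
        if comp in cj:
            # drop the complementary pair, union the rest
            resolvents.append((ci - {lit}) | (cj - {comp}))
    return resolvents

# Unit resolution from the pit example: P1,1 ∨ P3,1 with ¬P1,1 yields P3,1.
print(resolve(frozenset({'P11', 'P31'}), frozenset({'-P11'})))
# -> [frozenset({'P31'})]
```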
Resolution rule is sound
For simplicity let’s consider clauses of length two:
l1 ∨ l2, ¬l2 ∨ l3
l1 ∨ l3
To demonstrate the soundness of resolution, consider the values l2 can take:
- If l2 is True, then since we know that ¬l2 ∨ l3 holds, it must be the case that l3 is True.
- If l2 is False, then since we know that l1 ∨ l2 holds, it must be the case that l1 is True.
- Either way, l1 ∨ l3 holds.
Resolution Rule
- Properties of the resolution rule:
  - Sound
  - Complete
- Resolution can be applied only to disjunctions of literals. How can it be complete?
- It turns out any knowledge base can be expressed as a conjunction of disjunctions (conjunctive normal form, CNF).
- Example: (A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D)
Conversion to CNF
B1,1 ⇔ (P1,2 ∨ P2,1)

1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α):
   (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β:
   (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3. Move ¬ inwards using de Morgan’s rules and double-negation:
   (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. Apply the distributive law (∨ over ∧) and flatten:
   (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
Converting to CNF
- Every sentence can be converted to CNF:
  1. Replace α ⇔ β with (α ⇒ β) ∧ (β ⇒ α)
  2. Replace α ⇒ β with (¬α ∨ β)
  3. Move ¬ “inward”:
     1. Replace ¬(¬α) with α
     2. Replace ¬(α ∧ β) with (¬α ∨ ¬β)
     3. Replace ¬(α ∨ β) with (¬α ∧ ¬β)
  4. Replace (α ∨ (β ∧ γ)) with (α ∨ β) ∧ (α ∨ γ)
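The four steps can be sketched as a small recursive converter. This is an illustrative implementation, not the slides’ own: formulas are nested tuples such as ('iff', a, b), ('imp', a, b), ('and', a, b), ('or', a, b), ('not', a), or a symbol string, and the helper names (`eliminate`, `push_not`, `distribute`) are assumptions:

```python
def to_cnf(f):
    return distribute(push_not(eliminate(f)))

def eliminate(f):                      # steps 1 and 2: remove <=> and =>
    if isinstance(f, str):
        return f
    op, *args = f
    args = [eliminate(a) for a in args]
    if op == 'iff':
        a, b = args
        return ('and', ('or', ('not', a), b), ('or', ('not', b), a))
    if op == 'imp':
        a, b = args
        return ('or', ('not', a), b)
    return (op, *args)

def push_not(f):                       # step 3: move ¬ inward
    if isinstance(f, str):
        return f
    op, *args = f
    if op == 'not':
        g = args[0]
        if isinstance(g, str):
            return f                   # ¬ already on a literal
        gop, *gargs = g
        if gop == 'not':               # double negation
            return push_not(gargs[0])
        if gop == 'and':               # de Morgan
            return ('or', push_not(('not', gargs[0])), push_not(('not', gargs[1])))
        if gop == 'or':                # de Morgan
            return ('and', push_not(('not', gargs[0])), push_not(('not', gargs[1])))
    return (op, *[push_not(a) for a in args])

def distribute(f):                     # step 4: distribute ∨ over ∧
    if isinstance(f, str) or f[0] == 'not':
        return f
    op, a, b = f
    a, b = distribute(a), distribute(b)
    if op == 'or':
        if not isinstance(a, str) and a[0] == 'and':
            return distribute(('and', ('or', a[1], b), ('or', a[2], b)))
        if not isinstance(b, str) and b[0] == 'and':
            return distribute(('and', ('or', a, b[1]), ('or', a, b[2])))
    return (op, a, b)

print(to_cnf(('iff', 'B11', ('or', 'P12', 'P21'))))
```

Running it on B1,1 ⇔ (P1,2 ∨ P2,1) reproduces the (unflattened) CNF derived on the previous slide.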
Converting to CNF (II)
- While converting expressions, note that
  - ((α ∨ β) ∨ γ) is equivalent to (α ∨ β ∨ γ)
  - ((α ∧ β) ∧ γ) is equivalent to (α ∧ β ∧ γ)
- Why does this algorithm work?
  - Because ⇒ and ⇔ are eliminated
  - Because ¬ is always directly attached to literals
  - Because what is left must be ∧’s and ∨’s, and ∨ can be distributed over ∧ to make CNF clauses
Using resolution
- Even if our KB entails a sentence α, resolution is not guaranteed to produce α.
- To get around this we use proof by contradiction, i.e., show that KB ∧ ¬α is unsatisfiable.
- Resolution is complete with respect to proof by contradiction.
Example of proof by resolution
KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1, in CNF:
  (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
α = ¬P1,2

Resolution applied to KB ∧ ¬α yields the empty clause. The empty clause is False (a disjunction is True only if at least one of its disjuncts is True).
[Figure: the resolution tree for KB ∧ ¬α, deriving the empty clause from the clauses ¬B1,1 ∨ P1,2 ∨ P2,1, ¬P1,2 ∨ B1,1, ¬P2,1 ∨ B1,1, and P1,2.]
Proof by resolution
- How do we automate the inference process?
  - Step 1: assume the negation of the consequent and add it to the knowledge base
  - Step 2: convert the KB to CNF, i.e., a collection of disjunctive clauses
  - Step 3: repeatedly apply resolution until:
    - it produces an empty clause (contradiction), in which case the consequent is proven, or
    - no more terms can be resolved, in which case the consequent cannot be proven
Resolution Algorithm
function PL-RESOLUTION(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic

  clauses ← the set of clauses in the CNF representation of KB ∧ ¬α
  new ← {}
  loop do
    for each pair of clauses Ci, Cj in clauses do
      resolvents ← PL-RESOLVE(Ci, Cj)
      if resolvents contains the empty clause then return true
      new ← new ∪ resolvents
    if new ⊆ clauses then return false
    clauses ← clauses ∪ new
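The pseudocode above can be sketched in Python. The clause encoding (frozensets of string literals, with '-P' for ¬P) is an illustrative choice, not the slides’ own:

```python
def pl_resolution(clauses, alpha_neg):
    """PL-RESOLUTION sketch. clauses and alpha_neg are sets of clauses,
    each clause a frozenset of literals ('-P' negates 'P')."""
    clauses = set(clauses) | set(alpha_neg)   # CNF of KB ∧ ¬α
    new = set()
    while True:
        for ci in list(clauses):
            for cj in list(clauses):
                if ci == cj:
                    continue
                for lit in ci:
                    comp = lit[1:] if lit.startswith('-') else '-' + lit
                    if comp in cj:
                        resolvent = (ci - {lit}) | (cj - {comp})
                        if not resolvent:
                            return True       # empty clause: KB |= α
                        new.add(resolvent)
        if new <= clauses:
            return False                      # nothing new: no proof
        clauses |= new

# KB: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1) ∧ ¬B1,1
kb = {frozenset({'-B11', 'P12', 'P21'}), frozenset({'-P12', 'B11'}),
      frozenset({'-P21', 'B11'}), frozenset({'-B11'})}
# α = ¬P1,2, so we add its negation P1,2 and look for the empty clause.
print(pl_resolution(kb, {frozenset({'P12'})}))   # True
```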
Another example
- If it rains, I get wet.
- If I’m wet, I get mad.
- Given that I’m not mad, prove that it’s not raining.
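This proof by contradiction can be sketched step by step, using a hypothetical encoding with R, W, M for raining, wet, mad:

```python
# KB in CNF: (¬R ∨ W), (¬W ∨ M), (¬M); negated goal: R.
def resolve_on(ci, cj, p):
    """Resolve two clauses on symbol p (assumes p in ci, ¬p in cj)."""
    return (ci - {p}) | (cj - {'-' + p})

c = resolve_on(frozenset({'R'}), frozenset({'-R', 'W'}), 'R')   # {W}
c = resolve_on(c, frozenset({'-W', 'M'}), 'W')                  # {M}
c = resolve_on(c, frozenset({'-M'}), 'M')                       # empty clause
print(c == frozenset())   # True: contradiction, so ¬Raining follows
```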
Inference for Horn clauses

Horn Form: KB = conjunction of Horn clauses

Horn clause =
- propositional symbol; or
- (conjunction of symbols) ⇒ symbol

Example of a Horn clause: (C ∧ D) ⇒ B
Example of a Horn KB: C ∧ (B ⇒ A) ∧ ((C ∧ D) ⇒ B)

Horn form is a special case of CNF where a clause can have at most one positive literal.
Inference for Horn clauses (cont.)

Modus Ponens is a natural way to make inferences in Horn KBs: from α1, …, αn and α1 ∧ … ∧ αn ⇒ β, infer β.

- Successive application of Modus Ponens leads to algorithms that are sound and complete, and run in linear time
Forward chaining
- Idea: fire any rule whose premises are satisfied in the KB
  - add its conclusion to the KB, until the query is found
[Figure: AND-OR graph for the forward-chaining example over the symbols A, B, L, M, P, Q.]
Forward chaining algorithm

function PL-FC-ENTAILS?(KB, q) returns true or false
  inputs: KB, the knowledge base, a set of propositional definite clauses
          q, the query, a propositional symbol

  count ← a table, where count[c] is the number of symbols in c’s premise
  inferred ← a table, where inferred[s] is initially false for all symbols
  agenda ← a queue of symbols, initially symbols known to be true in KB

  while agenda is not empty do
    p ← POP(agenda)
    if p = q then return true
    if inferred[p] = false then
      inferred[p] ← true
      for each clause c in KB where p is in c.PREMISE do
        decrement count[c]
        if count[c] = 0 then add c.CONCLUSION to agenda
  return false
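The algorithm can be sketched in Python. The `(premises, conclusion)` encoding is an illustrative assumption, and the example rules are a standard forward-chaining exercise over the symbols A, B, L, M, P, Q:

```python
from collections import deque

def pl_fc_entails(clauses, facts, q):
    """Forward chaining for definite clauses.

    clauses: list of (premises, conclusion) pairs, premises a list of
    symbols; facts: symbols known to be true in the KB.
    """
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (prem, concl) in enumerate(clauses):
                if p in prem:
                    count[i] -= 1
                    if count[i] == 0:       # all premises satisfied: fire rule
                        agenda.append(concl)
    return False

# P ⇒ Q, (L ∧ M) ⇒ P, (B ∧ L) ⇒ M, (A ∧ P) ⇒ L, (A ∧ B) ⇒ L; facts A, B
rules = [(['P'], 'Q'), (['L', 'M'], 'P'), (['B', 'L'], 'M'),
         (['A', 'P'], 'L'), (['A', 'B'], 'L')]
print(pl_fc_entails(rules, ['A', 'B'], 'Q'))   # True
```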
Forward chaining is sound and complete for Horn KB
Backward chaining
- Idea: work backwards from the query q:
  - check if q is known already, or
  - prove by backward chaining all premises of some rule concluding q
Backward chaining (cont.)

- Avoid loops: check if the new subgoal is already on the goal stack
- Avoid repeated work: check if the new subgoal
  - has already been proved true, or
  - has already failed
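A recursive sketch of backward chaining with the loop check above (the repeated-work cache is omitted for brevity; the `(premises, conclusion)` encoding mirrors the forward-chaining sketch and is an assumption):

```python
def bc_entails(clauses, facts, q, stack=frozenset()):
    """Backward chaining sketch for definite clauses."""
    if q in facts:
        return True                   # q is known already
    if q in stack:
        return False                  # loop: q is already a subgoal
    for prem, concl in clauses:
        # try to prove all premises of some rule concluding q
        if concl == q and all(bc_entails(clauses, facts, p, stack | {q})
                              for p in prem):
            return True
    return False

rules = [(['P'], 'Q'), (['L', 'M'], 'P'), (['B', 'L'], 'M'),
         (['A', 'P'], 'L'), (['A', 'B'], 'L')]
print(bc_entails(rules, {'A', 'B'}, 'Q'))   # True
```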
Forward vs. backward chaining
- FC is data-driven
  - may do lots of work that is irrelevant to the goal
- BC is goal-driven, appropriate for problem-solving
  - e.g., What courses do I need to take to graduate?
- Complexity of BC can be much less than linear in the size of the KB
Efficient propositional inference by model checking

Two families of efficient algorithms for satisfiability:
- Backtracking search algorithms:
  - DPLL algorithm (Davis, Putnam, Logemann, Loveland)
- Local search algorithms:
  - WalkSAT algorithm:
    - Start with a random assignment
    - At each iteration pick an unsatisfied clause and pick a symbol in the clause to flip; alternate between:
      - picking the symbol that minimizes the number of unsatisfied clauses
      - picking a random symbol
Is WalkSAT sound? Complete?
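WalkSAT as described can be sketched as follows (illustrative clause encoding; the sketch also answers the question above: WalkSAT is sound when it returns a model, but incomplete, since running out of flips proves nothing about unsatisfiability):

```python
import random

def walksat(clauses, p=0.5, max_flips=10_000):
    """WalkSAT sketch: clauses are frozensets of literals ('-X' negates X).

    Returns a satisfying model, or None after max_flips."""
    symbols = {lit.lstrip('-') for c in clauses for lit in c}
    model = {s: random.choice([True, False]) for s in symbols}

    def satisfied(c):
        return any(model[l.lstrip('-')] != l.startswith('-') for l in c)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return model
        clause = random.choice(unsat)
        if random.random() < p:       # random-walk step
            sym = random.choice(list(clause)).lstrip('-')
        else:                         # greedy step: minimize unsatisfied clauses
            def broken(s):
                model[s] = not model[s]
                n = sum(not satisfied(c) for c in clauses)
                model[s] = not model[s]
                return n
            sym = min((l.lstrip('-') for l in clause), key=broken)
        model[sym] = not model[sym]
    return None

# (A ∨ B) ∧ (¬A ∨ C) ∧ (¬C ∨ ¬B)
m = walksat({frozenset({'A', 'B'}), frozenset({'-A', 'C'}),
             frozenset({'-C', '-B'})})
```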
In the wumpus world
A wumpus-world agent using propositional logic:

  ¬P1,1
  ¬W1,1
  Bx,y ⇔ (Px,y+1 ∨ Px,y-1 ∨ Px+1,y ∨ Px-1,y)
  Sx,y ⇔ (Wx,y+1 ∨ Wx,y-1 ∨ Wx+1,y ∨ Wx-1,y)
  W1,1 ∨ W1,2 ∨ … ∨ W4,4
  ¬W1,1 ∨ ¬W1,2
  ¬W1,1 ∨ ¬W1,3
  …

⇒ 64 distinct proposition symbols, 155 sentences
Summary
- Logical agents apply inference to a knowledge base to derive new information and make decisions
- Basic concepts of logic:
  - syntax: formal structure of sentences
  - semantics: truth of sentences wrt models
  - entailment: truth of one sentence given a knowledge base
  - inference: deriving sentences from other sentences
  - soundness: derivations produce only entailed sentences
  - completeness: derivations can produce all entailed sentences