Logic for Robotics: Defeasible Reasoning and Non-monotonicity


Logic for Robotics: Defeasible Reasoning and Non-monotonicity

The Plan
● I. Explain and argue for the role of non-monotonic logic in robotics
● II. Briefly introduce some non-monotonic logics
● III. Fun, speculative stuff

Introduction
● A. Robots!

● Robots are (physical or virtual, generally electro-mechanical) agents designed to perform tasks on their own or with (ideally minimal) guidance – or, from another angle, to amplify people's power to perform tasks by carrying out parts or subtasks for us, autonomously or semi-autonomously.

● No knock-down arguments for precise definitions are forthcoming here, but autonomous or semi-autonomous agency seems to be a crucial feature of “robothood”.

Introduction
● B. Logic!

● The ability to reason (represent the world and infer things about it – e.g., changes in the local environment that result from actions and events) seems to be required for autonomous or semi-autonomous agency.
– If we are leaving a task to a robot, we want to be able to trust it to do its job with some degree of accuracy.
– Suppose we want an autonomous robot for painting houses. We need the robot to be able to (among other things) recognize when a house is the right color, or it might end up painting the house forever.
– More complicated tasks → more reasoning
– More autonomy → more reasoning

● Reasoning (representation and inference) is pretty much what logic is about.

I.1 Logic
● Consequence
● Language is the most notable format for reasoning.
● The pieces of language we use for reasoning: arguments.

  Premises {true sentences or known sentences}
  --------------------------------------------
  Conclusions {sentences that “follow”}

– Conclusions are consequences of premises (they are derivable/provable from the premises, they are semantically or logically entailed by the premises, any interpretation that satisfies the premises satisfies the conclusions...).
– The line between premises and conclusions signifies the consequence relation.
– Also symbols like “:.”, “├”, “╞” and words like “therefore”, “it follows that...”, “so...”, etc.
● The consequence relation: the relation between premises and conclusions that makes it right to infer the conclusions from the premises.
– Truth preservation, warrant transmission, etc.

– Propositional Logic (PL)
● Connectives
– First Order Logic (FOL)
● Quantification and Connectives

● Validity
– The property we want our arguments to have; the feature we want a consequence relation to have.
● An argument is valid iff, in every possible situation where the premises are true, the conclusion is true.
● An argument is invalid if there is even one situation where the premises are true and the conclusions are false.
● Interpretations = possible situations
– Gather up all the sentences in the world, make them either true or false (but not neither and not both) – that's an interpretation. Then take the premises of your argument and the conclusion, and see if, in every interpretation, the conditions for validity hold.

● Inference Rules
– Schematic rules that are shaped like valid arguments
– Modus Ponens, Modus Tollens...

● Formalization
● Using mathematical tools, especially algebra and set theory, to analyze arguments
● Symbols and structures made of symbols (expressions, formulas) represent parts of (sentences which are parts of) arguments
● Abstraction from content – we can figure out what is shared by all good arguments, and we can classify argument forms (e.g., inference rules).
– Suppose: If Ralph is in the sun for a little while, then he will get sunburned.
– Suppose: Ralph has been in the sun for a little while.
– Conclude: He will get sunburned.
– Modus Ponens
● If P, then Q.
● P
● :. Q
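Validity as defined above can be checked mechanically for propositional arguments by enumerating every interpretation. A minimal sketch in Python (the `valid` helper and the lambda encoding of sentences are my own illustration, not part of the slides):

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """An argument is valid iff every interpretation that makes all
    premises true also makes the conclusion true."""
    for values in product([True, False], repeat=len(atoms)):
        interp = dict(zip(atoms, values))
        if all(p(interp) for p in premises) and not conclusion(interp):
            return False  # found a counterexample interpretation: invalid
    return True

# Modus Ponens: If P then Q; P; therefore Q.
if_p_then_q = lambda i: (not i["P"]) or i["Q"]  # material conditional
p = lambda i: i["P"]
q = lambda i: i["Q"]

print(valid([if_p_then_q, p], q, ["P", "Q"]))  # True: Modus Ponens is valid
print(valid([if_p_then_q, q], p, ["P", "Q"]))  # False: affirming the consequent
```

The same exhaustive check works for any argument expressible with a handful of atoms, which is exactly the "abstraction from content" idea: only the form matters.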

Logic is Active
● Main idea: we can use the tools we have already developed/discovered to study new parts of reasoning.

● Propositional Logic (PL)
● Sentences (propositions) are represented by symbols {P, Q, R, …}
● Connectives are isolated for study as things especially relevant to good form
– {not (~), and (^), or (v), if...then (→), if and only if (iff, ↔)}
● PL is great
● Complete, consistent, expressive, decidable

● There are some arguments that PL just cannot analyze, and those arguments are very common in natural language.
● All intelligent agents deserve rights, all robots are intelligent agents, Foodmotron is a robot :. Foodmotron is an intelligent agent and Foodmotron deserves rights

● P and Q and R :. S
● First Order Logic (FOL) can handle the arguments PL can't
● Quantification and Predication
– All(x), There-is(x), P(x), Q(x), and all the connectives

● Analyzing the Foodmotron argument:
● Intelligent Agent = I, Deserves rights = D*, Robot = R
● All(x)(Ix → Dx), All(x)(Rx → Ix), R(f) :. I(f), D(f), I(f) and D(f)
– 1. (x)(Ix → Dx)    Premise
– 2. (x)(Rx → Ix)    Premise
– 3. Rf              Premise
● 4. If → Df         Universal Instantiation 1
● 5. Rf → If         Universal Instantiation 2
● 6. If              Modus Ponens 3, 5
● 7. Df              Modus Ponens 4, 6
● C. If ^ Df         Conjunction Introduction 6, 7

*It would be more precise to analyze the predicate “deserves rights” as a binary relation between x and some y such that y is a right, i.e., “(There-is(y))(y is a right and D(x,y))” – but this is just an example, and it's convenient.
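A speculative sketch of how the Foodmotron derivation could be mechanized: naive forward chaining over ground atoms, where each pass applies Universal Instantiation plus Modus Ponens until nothing new follows (the tuple encoding and fixpoint loop are my own illustration, not the slides' formalism):

```python
# Rules encode All(x)(Rx -> Ix) and All(x)(Ix -> Dx).
rules = [("R", "I"),  # all robots are intelligent agents
         ("I", "D")]  # all intelligent agents deserve rights

facts = {("R", "f")}  # Rf: Foodmotron is a robot

changed = True
while changed:  # chain forward to a fixpoint
    changed = False
    for ante, cons in rules:
        for pred, ind in list(facts):
            if pred == ante and (cons, ind) not in facts:
                facts.add((cons, ind))  # instantiate the rule and detach
                changed = True

print(("I", "f") in facts, ("D", "f") in facts)  # True True
```

The derived facts correspond to lines 6 and 7 of the derivation (If and Df); conjunction introduction is then trivial.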

Bridge
● C. Robot Logic!
● Logic can be used for specifying, at a high or abstract level, the reasoning that we want robots to be able to do (and consequently, it characterizes the capabilities we need to design and program robots with).
● What kind of logic do we need to describe (and prescribe) the reasoning that robots do?
● First-Order Logic (FOL) seems the obvious candidate:
– It's deductively powerful.
– It's expressive.
– It's complete.
– It (arguably) characterizes the best reasoning that people do (mathematical theorem proving).

The Set-up
● C. Robot Logic
● Unfortunately, FOL and its consequence relation have some features that make it completely unsuitable for the task of characterizing a lot of interesting, good reasoning that humans and robots need to do.
● Keep in mind:
– Logic is Active
– Consequence (the relation between premises and conclusions) and Validity
● We can have different kinds of consequence relations that still have validity.
● Valid arguments in FOL have problematic features, and valid arguments in other logics may not have those features.

Monotonicity
● Structural Rules
● The consequence relation in a logic can be abstractly characterized with structural rules.
– Supraclassicality: if Γ ╞ φ then Γ ├ φ
– Reflexivity: if φ ∈ Γ then Γ ├ φ
– Cut: if Γ ├ φ and Γ, φ ├ ψ then Γ ├ ψ
– Monotony/Monotonicity: if Γ ├ φ and Γ ⊆ Δ then Δ ├ φ
● If a formula (φ) is a consequence of a set of assumed or premise formulas (Γ), and Γ is a subset of a new, larger set of formulas (Δ), then φ is a consequence of Δ.
● No matter what you add to your original set of premises, you can always validly infer your original conclusions.
● Monotonicity is a property of the consequence relation in FOL (and all classical logics).

● Monotonicity:
● If “T1, therefore, P” is valid, no addition to T1 can make the inference to P invalid.
● Conclusions are never invalidated as premises increase.
● Conclusions are cumulative – no take-backs.
● Extending sets of premises/assumptions can never reduce the number of derivable/valid/entailed/provable conclusions (consequences).
● If a formula P is a consequence of a set of formulas T1, then for any extension of T1 with another set T2, P is a consequence of the union of T1 and T2.

● Think about it like this:
● Suppose that a formula P validly follows from a set of formulas T1, i.e., the argument “T1, therefore P” is valid.
● To extend T1, we add a non-empty set of new formulas (at least one new formula) to T1. Let's call this non-empty set T2. The resulting set (at least one formula larger than T1) is the union of T1 and T2.
● T2 is either consistent with T1 (does not contradict a formula in T1 or a consequence of T1) or inconsistent with T1.
● If T2 is consistent with T1, then we can ignore T2 and still derive P as a consequence of T1.
– Since T2 doesn't contradict anything in T1 or P, in any situation where T1 was true before adding T2, T1 is still true, and in any situation where P was true before adding T2, P is still true. So, if “T1, therefore P” was valid before, it's valid now!
● Generally, inconsistency is to be avoided. In FOL, if T2 is inconsistent with T1, then we can prove anything from T1 together with T2.
– In classical logics, including FOL, any and every arbitrary sentence is a consequence of inconsistency – (arguably) that's what makes inconsistency bad!
● In either of the two possible cases, we can never retract P.
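Both cases can be verified semantically with a toy truth-table entailment test (the `entails` helper and the choice of T1 = {A, A → B} are my own illustration):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Classical entailment: every interpretation satisfying the
    premises also satisfies the conclusion."""
    for values in product([True, False], repeat=len(atoms)):
        i = dict(zip(atoms, values))
        if all(prem(i) for prem in premises) and not conclusion(i):
            return False
    return True

a = lambda i: i["A"]
a_implies_b = lambda i: (not i["A"]) or i["B"]
b = lambda i: i["B"]
c = lambda i: i["C"]
not_a = lambda i: not i["A"]

atoms = ["A", "B", "C"]
T1 = [a, a_implies_b]                   # T1 entails B by Modus Ponens
print(entails(T1, b, atoms))            # True
print(entails(T1 + [c], b, atoms))      # True: a consistent extension
print(entails(T1 + [not_a], b, atoms))  # True: even an inconsistent
                                        # extension never retracts B
```

The last line is the point: classically, extending the premises (consistently or not) never removes B from the consequences.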

● A set of formulas is inconsistent if it contains a contradiction or entails a contradiction. We'll look at both cases.

● A set {...P...} that entails ~P:
● 1. ~P       (entailed by the set)
● 2. P        (in the set)
● 3. P v Q    Disjunction Introduction 2
● 4. ~P       Reiteration 1
● 5. Q        Disjunctive Syllogism 3, 4
● For any arbitrary Q at all!

● {P, ~P}
● 1. P
● 2. P v Q    Disjunction Introduction 1
● 3. ~P
● 4. Q        Disjunctive Syllogism 2, 3
● For any arbitrary Q at all!
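The explosion can also be seen semantically: an inconsistent set has no satisfying interpretations, so the entailment check passes vacuously for any conclusion whatsoever. A minimal sketch (the `entails` helper is my own illustration):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Classical entailment. With inconsistent premises, NO interpretation
    satisfies them all, so the check succeeds vacuously."""
    for values in product([True, False], repeat=len(atoms)):
        i = dict(zip(atoms, values))
        if all(prem(i) for prem in premises) and not conclusion(i):
            return False
    return True

p = lambda i: i["P"]
not_p = lambda i: not i["P"]
q = lambda i: i["Q"]
not_q = lambda i: not i["Q"]

print(entails([p, not_p], q, ["P", "Q"]))      # True: Q follows
print(entails([p, not_p], not_q, ["P", "Q"]))  # True: so does ~Q
```

That both Q and ~Q follow from {P, ~P} is exactly what makes inconsistency fatal for a monotonic reasoner.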

● Why monotonicity is problematic (for robots and for us):
● Argument 1: Defeasible Reasoning → Non-monotonic
● Argument 2: Intelligent Agency → Defeasible Reasoning

● Defeasible reasoning = reasoning with a consequence relation that can be defeated – when a defeasible consequence relation holds between premises and conclusions, the conclusions can be invalidated when we learn something new.
● Some conclusions make sense until we acquire new information.
● It's clear that defeasible reasoning is non-monotonic.
– Monotonic = no matter what new premises you add, your original conclusion is still valid
– Defeasible = adding new premises can make your original conclusion invalid

● Most human reasoning is defeasible reasoning
● Defeasible Arguments

  T1 = {Most As are B, x is A}
    or {If x is A, there is n probability that x is B, x is A}
    or {Typically, normally, generically we can safely assume As are B, x is A}
  ---------------------------------------------------------------------------
  x is B*

– It can be the case that x is not-B even though every premise in T1 is true. Improbable things happen; x may not be like most other As, x may be atypical, abnormal, not generic, or just a surprise.
– These arguments are good, but in what way?
– *Distinct from inferring that “there is n probability that x is B,” which should follow non-monotonically.
– Default Assumptions make these arguments work.

● Defaults
● Generics – “Birds (most birds, birds in the context of this sentence) fly.”
● Prototypes – “Prototypical birds fly.”
– Psychological?
● Typicality, Normality – “Birds fly if they are not atypical or abnormal.”
● Probability – “N% of birds fly.”
● No-risk or Safe Guess – “It's safe to assume birds fly.”
● Consistency – “In the absence of contradictory evidence, birds fly.”
● Autoepistemology – “If birds didn't fly, I'd know it, and I don't know that they don't fly.”

● Abduction
– T1 = {P is true, Q would explain P}
● C. Q is true
● Q could be false and all premises in T1 can still be true!

● Belief Revision
– {P, P → Q}
● We learn ~Q
● So ~{P, P → Q}: at least one member must go
● ~P or ~(P → Q)
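The belief-revision step above can be sketched as a crude contraction: on learning ~Q, retract the least entrenched member of {P, P → Q} until consistency is restored. The entrenchment ordering and helper names below are my own assumptions for illustration, not the AGM postulates themselves:

```python
from itertools import product

def consistent(beliefs, atoms=("P", "Q")):
    """True iff some interpretation satisfies every belief."""
    return any(all(b(dict(zip(atoms, v))) for b in beliefs)
               for v in product([True, False], repeat=len(atoms)))

p = lambda i: i["P"]
p_implies_q = lambda i: (not i["P"]) or i["Q"]
not_q = lambda i: not i["Q"]

beliefs = [p, p_implies_q]  # ordered least entrenched first
new_info = not_q            # we learn ~Q

while not consistent(beliefs + [new_info]):
    beliefs.pop(0)          # retract the least entrenched belief (here, P)
beliefs.append(new_info)

print(consistent(beliefs))  # True: kept P -> Q and ~Q, retracted P
```

Which member gets retracted is exactly the question a theory of belief revision (e.g., AGM's entrenchment ordering) is supposed to answer; the list order here just stands in for that theory.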

● Why is this defeasible reasoning stuff relevant to robotics?
● We want to design intelligent agents for performing tasks.
● More complicated tasks → more reasoning
– More exactly, the less able a robot is to represent and infer about the world it operates in, the less useful it is.
– A robot unable to perform complex reasoning would only be able to complete tasks in a very simple world.
– The actual world is very much not simple.
● More autonomy → more reasoning
– We want to minimize our input to the robot and let it make its own decisions. That's not happening without some smarts.
● We can control the reasoning a robot does by using a logic or a consequence relation to control the conclusions the robot draws from the things we let it know.

● Designing an intelligent agent:
● Intelligence, not omniscience

  P. Airplanes fly; p is an airplane
  C. p flies

– Unless its wings are full of holes, its engine is wrecked, it's out of fuel, ..., it's a life-size model, it's made of cake, ..., it's on the moon, ...
– P is generic; there are massive numbers of exceptions to the rule.
– It's hard to represent this argument in FOL: either we treat “Airplanes fly” as a (false) universal claim, or the robot would have to list all of the possible exceptions as part of its premises and confirm them before it could infer C.
– All(x)(Ax → Fx) – but that's just false, so C can't be concluded!
– All(x)((Ax ^ ~Hx ^ ~Ex ^ ~Gx ^ … ^ ~Mx) → Fx) – almost everything about the airplane has to be confirmed before C can be concluded.

● The more complex the world is, the more exceptions there are to generic rules. It's just infeasible to explicitly program in all exceptions to generic statements using FOL and then expect the robot to confirm (prove) that every exception fails to hold for every inference the robot might make about particular objects in a complex world. Without defaults, there are inferences the robot just can't make without vast amounts of information.

● Avoiding Contradiction:
● If a robot can't guarantee consistency by explicitly representing all exceptions, then the only way it can avoid getting stuck with contradictory beliefs (or “beliefs”) is to reason defeasibly – monotonic reasoning offers no way back.
– Suppose the robot infers P from T1. Suppose it finds out that P is false. Then what? It can't take back P. It can never correct its error.

● The Frame Problem
● If the robot is to perform a complex task, it will have to keep track of the effects of its actions, and it will have to represent the invariance of the properties of unaffected things.
– Most properties don't change without being acted on: if the robot performs an action that changes an object's color, the action will usually not change the object's position. It will also not change the location of other objects. It will also not ... Of course, there are exceptions.
– The list of invariances is as large as the world is complicated; we don't want to have to explicitly represent every unchanging fact.
– The fact that things that aren't explicitly changed stay the same is a default assumption – every action leaves facts unchanged unless it is possible to infer that the action changes them.
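The persistence default can be sketched directly: represent the world as a dictionary of fluents and let every fluent keep its value unless an action's explicit effects say otherwise. The action and fluent names below are hypothetical examples:

```python
def apply_action(state, effects):
    """Fluents not mentioned in `effects` persist by default --
    no explicit frame axioms required."""
    new_state = dict(state)    # default: everything stays the same
    new_state.update(effects)  # ...except the action's explicit effects
    return new_state

state = {"color(door)": "white", "pos(robot)": "hall", "pos(can)": "shelf"}
state = apply_action(state, {"color(door)": "red"})  # paint(door, red)

print(state["color(door)"])  # red
print(state["pos(can)"])     # shelf -- unchanged, by default
```

The copy-then-update step plays the role of the default "every action leaves facts unchanged unless we can infer otherwise"; the hard part the frame problem names is deciding, in a rich logic, which facts an action is allowed to change.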

II. Non-monotonic Logic
● Non-monotonic logic is the project of using logical tools we already have at our disposal to formalize defeasible reasoning.
● We want argument forms that allow us to figure out when we have valid arguments while also allowing us to take back conclusions. We want our inferences to be good, even though we reserve the right to take them back.

Non-monotonic Logics, Formalisms
● Default Rules/Logic
● Closed World Assumption
● Autoepistemic Logic
● Preferred Models, Circumscription
● Inheritance Networks

Default Rules
● If {A1,...,An} and it is consistent to assume {B1,...,Bn}, then conclude C.
● Normal Default Rule = Consistency Test
– If {A1,...,An} and it is consistent to assume C (~C is not derivable), then conclude C.

  A(x) : M(B1(x),...,Bn(x))
  -------------------------
           C(x)

● A(x) = the prerequisite
● C(x) = the consequent
● M(B1(x),...,Bn(x)) = the justification
● M = “It is consistent to assume that...”
● B1(x),...,Bn(x) = some set of things it is consistent to assume
● If it's consistent to assume C, then go ahead and conclude C.

Default Logic
– P. (x is an A : M(x is a B))
● C. x is a B
– P. (x is an airplane : M(x flies))
● C. x flies
– We can infer C defeasibly from P because the default rule says that, given our premises and the satisfaction of the consistency condition, we can conclude C. If we add new premises that make it inconsistent to conclude C, the consistency condition does not hold.
● “x is an A” = premise; M(x is a B) = condition of the rule that can fail to hold if we get new information
– We can retain validity of the original argument (any situation where x is an A and it's consistent to assume x is a B is a situation where x is a B), and we can take back our conclusions.
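A minimal sketch of a normal default rule with its consistency test, over a knowledge base of ground literals. The string encoding and function name are my own illustration, not Reiter's formalism:

```python
def apply_default(kb, prerequisite, consequent):
    """Fire 'A(x) : M(C(x)) / C(x)': add the consequent iff the
    prerequisite is known and the consequent's negation is not
    (the consistency test)."""
    if prerequisite in kb and ("not " + consequent) not in kb:
        kb.add(consequent)
    return kb

kb = {"airplane(a1)"}
apply_default(kb, "airplane(a1)", "flies(a1)")
print("flies(a1)" in kb)   # True: the default fires

kb2 = {"airplane(a2)", "not flies(a2)"}  # we learned a2 is out of fuel
apply_default(kb2, "airplane(a2)", "flies(a2)")
print("flies(a2)" in kb2)  # False: the consistency test blocks the default
```

The second call shows the non-monotonic behavior: one extra premise about a2 silently withdraws a conclusion the rule would otherwise license, with no need to enumerate every possible exception up front.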

Systems For Defaults
● Closed World Assumption

   : M(~P)
  ---------
     ~P

– If it's not inconsistent to infer ~P, go ahead and infer it.
● Autoepistemic Logic
● Modal Logic
– The extension of a set of premises S = {P | P follows in system K45 from the union of S and the positive and negative introspective closures of the subset S0 of S}
– Q follows from P defeasibly if Q follows from the union of P plus all the things I know and all the things I don't know – I could always learn something new that invalidates Q.

  P : If ~Q were the case, I'd know it, and I don't know it
  ---------------------------------------------------------
     Q
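The Closed World Assumption can be sketched as a query policy over a database of positive facts: whatever is not provable is taken to be false, and the inference is non-monotonic because new facts retract old negative conclusions. The connectivity relation below is a hypothetical example:

```python
known = {("connects", "A", "B"), ("connects", "B", "C")}

def holds(fact):
    """CWA query: a fact not in the database is assumed false."""
    return fact in known

absent_before = holds(("connects", "A", "C"))  # False: infer ~connects(A, C)
known.add(("connects", "A", "C"))              # new information arrives...
present_after = holds(("connects", "A", "C"))  # ...and the negative
                                               # conclusion is retracted
print(absent_before, present_after)            # False True
```

This is the policy behind negation-as-failure in deductive databases: listing only what is true, and letting absence do the work of explicit negative facts.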

More Systems for Defaults
● Preferred Models – choose the smallest set of interpretations that makes the premises true, then see if those interpretations make the conclusion true
● This results in assuming that whatever we are not assuming (or proving) is false.
● Technically very difficult:
– Second-Order Logic (at least for Circumscription)

● Inheritance Networks
● “Is-A”
● A → B = As are Bs, generally
● [Image taken from Stanford Encyclopedia of Philosophy]
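An inheritance network with exceptions can be sketched as a walk up the Is-A chain in which the most specific class wins, so a "penguin" link defeats the generic "birds fly" link. The node names and dictionary encoding are my own illustration:

```python
is_a = {"tweety": "penguin", "chirpy": "bird", "penguin": "bird"}
defaults = {"bird": {"flies": True}, "penguin": {"flies": False}}

def lookup(individual, prop):
    """Walk up the Is-A chain; the first (most specific) class with an
    opinion about `prop` wins."""
    cls = is_a.get(individual)
    while cls is not None:
        if prop in defaults.get(cls, {}):
            return defaults[cls][prop]
        cls = is_a.get(cls)
    return None  # no class in the chain settles the question

print(lookup("tweety", "flies"))  # False: the penguin link defeats bird
print(lookup("chirpy", "flies"))  # True: the generic default applies
```

Real inheritance networks must also handle diamond-shaped topologies with conflicting paths; this linear-chain sketch only shows the core specificity-wins idea.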

III. The Interesting Stuff
● Defeasible reasoning is a key component of human epistemic/cognitive abilities. Suppose some non-monotonic logic or system really is the theory of human reasoning. If we imbue robots with programs that instantiate these logics, we've opened up the possibility that robots can really possess our abilities.
● Intelligent Agency → Rights? Moral Status?
● Robot rights? A function of the moral importance of the tasks they perform?
● They should not be interfered with in their performance of morally important tasks.
● Suppose a robot is so complex that it performs a wide and varied assortment of morally important tasks – does this “enmesh” it in a network of moral rights and obligations?

III. More interesting stuff
● There is more than one approach to defeasible reasoning:
● Logical
● Epistemological
– Pollock's System of Defeasible Reasoning, OSCAR Project
– The Web of Belief and the AGM Theory of Belief Revision

III. Even More...
● Logic as robotic enhancement of thought
● Artificial Intelligences, Theorem Proving programs, Automated Proof procedures... robots?
● Extended Cognition Thesis
● Regimentation
● Virtual Machines
● Logical Formalization, Deductive Procedures
– Turning complicated reasoning problems into smaller, more symbolic problems that can be solved with simple, mechanical routines.
● Symbolize, check for valid argument form; if not found, compile loopholes/counterexamples.
● Little programs or AIs (robots?) in your brain.

Things to Read● Reiter, Ray, 1980, “A logic for default reasoning”, Artificial Intelligence, 13: 81-137.

● McDermott, Drew and Doyle, Jon, 1980, “Non-Monotonic Logic I”, Artificial Intelligence, 13: 41-72.

● McCarthy, J., 1980, “Circumscription — A Form of Non-Monotonic Reasoning”, Artificial Intelligence, 13: 27–39.

● Moore, Robert C., 1993, “Autoepistemic logic revisited”, Artificial Intelligence, 59(1-2): 27-30.

● Stanford Encyclopedia of Philosophy Articles

– “Non-monotonic Logic”
– “Defeasible Reasoning”
– “Logic and Artificial Intelligence”
– “Classical Logic”
– “The Frame Problem”

● Quine, Willard van Orman, and Ullian, J. S., 1982, The Web of Belief, New York: Random House.

● Alchourrón, C. and Makinson, D., 1982, “On the logic of theory change: contraction functions and their associated revision functions”, Theoria, 48: 14-37.

● Pollock, John L. 1987, “Defeasible Reasoning”, Cognitive Science, 11: 481-518.

● –––, 1995, Cognitive Carpentry, Cambridge, Mass.: MIT Press.