Machine Intelligence Cairo University Faculty of Engineering Computer Engineering Department Course Instructor: Prof. Dr. Nevin Darwish


  • Slide 1
  • Machine Intelligence Cairo University Faculty of Engineering Computer Engineering Department Course Instructor: Prof. Dr. Nevin Darwish
  • Slide 2
  • Team members: Sylvia Boshra, Lydia Wahid, Madonna Samuel, Wessam Wagdy
  • Slide 3
  • Artificial Intelligence: A Modern Approach, 3rd Edition, Chapter 11
  • Slide 4
  • Planning and Acting in the Real World
  • Slide 5
  • Agenda 1. Recall classical planning 2. Types of the environment 3. Methods to deal with different types of the environment: I. Sensorless (Conformant) Planning II. Contingent Planning III. Online Replanning 4. Multiagent Planning: I. Planning with multiple simultaneous actions II. Planning with multiple agents 5. Summary
  • Slide 6
  • 1. Recall Classical Planning. Example: The spare tire problem Init(Tire(Flat) ∧ Tire(Spare) ∧ At(Flat, Axle) ∧ At(Spare, Trunk)) Goal(At(Spare, Axle)) Action(Remove(obj, loc), PRECOND: At(obj, loc) EFFECT: ¬At(obj, loc) ∧ At(obj, Ground)) Action(PutOn(t, Axle), PRECOND: Tire(t) ∧ At(t, Ground) ∧ ¬At(Flat, Axle) EFFECT: ¬At(t, Ground) ∧ At(t, Axle))
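    For reference, the solution plan from AIMA: [Remove(Flat, Axle), Remove(Spare, Trunk), PutOn(Spare, Axle)].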
  • Slide 7
  • 2. Types of the Environment: Fully Observable, Partially Observable, Non-Observable
  • Slide 8
  • Example: Painting a chair and a table Init(Object(Table) ∧ Object(Chair) ∧ Can(C1) ∧ Can(C2)) Goal(Color(Chair, c) ∧ Color(Table, c)) Action(RemoveLid(can), PRECOND: Can(can) EFFECT: Open(can)) Action(Paint(x, can), PRECOND: Object(x) ∧ Can(can) ∧ Color(can, c) ∧ Open(can) EFFECT: Color(x, c)) Percept(Color(x, c), PRECOND: Object(x) ∧ InView(x)) Percept(Color(can, c), PRECOND: Can(can) ∧ InView(can) ∧ Open(can))
  • Slide 9
  • Example: Painting a chair and a table Init(Object(Table) ∧ Object(Chair) ∧ Can(C1) ∧ Can(C2) ∧ InView(Table)) Goal(Color(Chair, c) ∧ Color(Table, c)) Action(RemoveLid(can), PRECOND: Can(can) EFFECT: Open(can)) Action(Paint(x, can), PRECOND: Object(x) ∧ Can(can) ∧ Color(can, c) ∧ Open(can) EFFECT: Color(x, c)) Percept(Color(x, c), PRECOND: Object(x) ∧ InView(x)) Percept(Color(can, c), PRECOND: Can(can) ∧ InView(can) ∧ Open(can))
  • Slide 11
  • Example: Painting a chair and a table Init(Object(Table) ∧ Object(Chair) ∧ Can(C1) ∧ Can(C2) ∧ InView(Table)) Goal(Color(Chair, c) ∧ Color(Table, c)) Action(RemoveLid(can), PRECOND: Can(can) EFFECT: Open(can)) Action(Paint(x, can), PRECOND: Object(x) ∧ Can(can) ∧ Color(can, c) ∧ Open(can) EFFECT: Color(x, c)) Action(LookAt(x), PRECOND: InView(y) ∧ (x ≠ y) EFFECT: InView(x) ∧ ¬InView(y))
  • Slide 12
  • 3. Methods to deal with different types of the environment: I. Sensorless (Conformant) Planning II. Contingent Planning III. Online Replanning
  • Slide 13
  • I. Sensorless Planning Belief state for the coloring problem: the unchanging facts Object(Table) ∧ Object(Chair) ∧ Can(C1) ∧ Can(C2), plus the initial belief state b0 = Color(x, C(x)): under the open-world assumption, every object x has some unknown color C(x).
  • Slide 14
  • Using belief state to reach the goal Update belief state: b' = RESULT(b, a) = (b - DEL(a)) ∪ ADD(a)
  • Slide 16
  • Using belief state to reach the goal Update belief state: b' = RESULT(b, a) = (b - DEL(a)) ∪ ADD(a) After applying action RemoveLid(Can1): b1 = Color(x, C(x)) ∧ Open(Can1)
  • Slide 18
  • Using belief state to reach the goal Update belief state: b' = RESULT(b, a) = (b - DEL(a)) ∪ ADD(a) After applying action RemoveLid(Can1): b1 = Color(x, C(x)) ∧ Open(Can1) After applying action Paint(Chair, Can1): b2 = Color(x, C(x)) ∧ Open(Can1) ∧ Color(Chair, C(Can1)) After applying Paint(Table, Can1): b3 = Color(x, C(x)) ∧ Open(Can1) ∧ Color(Chair, C(Can1)) ∧ Color(Table, C(Can1))
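    As a concrete illustration, a minimal Python sketch of this update rule, assuming belief states are sets of literal strings and actions are dicts with "add" and "del" sets (an illustrative encoding, not AIMA's own data structures):

        def update(belief, action):
            # b' = RESULT(b, a) = (b - DEL(a)) | ADD(a)
            return (belief - action["del"]) | action["add"]

        b = {"Color(x,C(x))"}  # initial belief: each object x has unknown color C(x)
        steps = [
            {"name": "RemoveLid(Can1)",   "add": {"Open(Can1)"},           "del": set()},
            {"name": "Paint(Chair,Can1)", "add": {"Color(Chair,C(Can1))"}, "del": set()},
            {"name": "Paint(Table,Can1)", "add": {"Color(Table,C(Can1))"}, "del": set()},
        ]
        for a in steps:
            b = update(b, a)
            print(a["name"], "->", sorted(b))
        # Color(Chair,C(Can1)) and Color(Table,C(Can1)) now hold with the same
        # color c = C(Can1), so the goal Color(Chair,c) ∧ Color(Table,c) is satisfied.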
  • Slide 19
  • Goal(Color(Chair, c) ∧ Color(Table, c)): with c = C(Can1), the belief state satisfies the goal.
  • Slide 20
  • Problem: Vacuum world. Belief state = AtL ∨ AtR. If we apply the action Suck there is a problem: two different effects! If AtL, the effect is CleanL; if AtR, the effect is CleanR.
  • Slide 21
  • Solution 1. Conditional effects: Action(Suck, EFFECT: when AtL: CleanL ∧ when AtR: CleanR), giving b' = (AtL ∧ CleanL) ∨ (AtR ∧ CleanR) 2. Split the action: Action(SuckL, PRECOND: AtL; EFFECT: CleanL) Action(SuckR, PRECOND: AtR; EFFECT: CleanR)
  • Slide 22
  • 3. Conservative approach: look for action sequences that keep the belief state as simple as possible, retaining a 1-CNF belief state (a conjunction of literals); some sequences would take the belief state outside 1-CNF.
  • Slide 23
  • Agenda 1. Recall classical planning 2. Types of the environment 3. Methods to deal with different types of the environment: I. Sensorless (Conformant) Planning II. Contingent Planning III. Online Replanning 4. Multiagent Planning: I. Planning with multiple simultaneous actions II. Planning with multiple agents 5. Summary
  • Slide 24
  • II. Contingent Planning: What is contingent planning? Contingent planning is the generation of plans with conditional branching based on percepts. It is appropriate for environments with partial observability and/or nondeterminism.
  • Slide 25
  • After an action and subsequent percept, the new belief state is calculated in two stages: 1. Calculating the belief state after the action. 2. Updating the belief state after perception of the environment. If a percept P has more than one percept axiom, we have to add the disjunction (OR) of their preconditions, which can take the belief state out of 1-CNF. We can generate contingent plans with an extension of the AND-OR forward search over belief states.
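    A minimal Python sketch of the two-stage update, reusing the set-of-literals encoding from the sensorless sketch above and assuming the percepts argument already contains the literals established by the fired percept axioms (both assumptions are illustrative):

        def predict(belief, action):
            # Stage 1: belief state after the action, before sensing.
            return (belief - action["del"]) | action["add"]

        def perceive(belief, percepts):
            # Stage 2: conjoin the perceived literals onto the predicted belief.
            return belief | percepts

        b = {"Object(Table)", "InView(Table)"}
        a = {"add": {"Open(Can1)"}, "del": set()}      # e.g. RemoveLid(Can1)
        b = perceive(predict(b, a), {"Color(Table,White)"})
        print(sorted(b))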
  • Slide 26
  • III. Online Replanning Example: the spot-welding agent in a car plant. The robot welds the same precise position on every car that passes down the line. If a door falls off a car as the robot is trying to apply a spot weld, the robot replaces the welding actuator with a gripper, picks up the door, checks it for scratches, reattaches it to the car, emails the floor supervisor, switches back to its welding actuator, and continues its work. The robot's behavior seems purposive: the robot knows what it's trying to do.
  • Slide 27
  • Conditions for Replanning: execution monitoring determines the need for a new plan. The need also arises when a contingent planning agent gets tired of planning for every contingency, such as whether the sky might fall on its head.
  • Slide 28
  • Needs for Replanning: 1. The agent's model of the world is incorrect. 2. The agent's model of an action has a missing precondition. Example: opening a can of paint may require using a screwdriver to remove the lid. 3. The agent's model has a missing effect. Example: painting a chair may get paint on the floor. 4. The agent's model is missing a state variable. Example: the amount of paint in the can affects the agent's actions. 5. The agent's model lacks provision for exogenous events: events that are out of the agent's hands, like someone knocking over the can of paint.
  • Slide 29
  • Without monitoring and replanning, the agent's behavior is fragile if it relies on the absolute correctness of its model. Levels of monitoring the environment: Action monitoring: before executing an action, the agent verifies that all its preconditions still hold. Plan monitoring: before executing an action, the agent verifies that the remaining plan will still succeed. Goal monitoring: before executing an action, the agent checks whether there is a better set of goals it could be trying to achieve.
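    A minimal Python sketch of action monitoring, assuming actions carry an explicit precondition set and that state(), execute(), and replan() are hypothetical hooks into the agent's sensing, actuation, and plan-repair machinery:

        def run_with_action_monitoring(plan, state, execute, replan):
            while plan:
                action = plan[0]
                if not action["precond"] <= state():
                    # A precondition no longer holds: repair the plan first.
                    plan = replan(state())
                    continue
                execute(action)
                plan = plan[1:]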
  • Slide 30
  • Action monitoring vs. plan monitoring: action monitoring is a simple method of execution monitoring, but it can sometimes lead to less than intelligent behavior. Example: the agent constructs a plan to solve the painting problem by painting both the chair and the table red. Suppose that there is only enough red paint for the chair. With action monitoring, the agent would go ahead and paint the chair red, then notice that it is out of paint and cannot paint the table, at which point it would replan a repair, painting both the chair and the table green. A plan-monitoring agent can detect failure whenever the current state is one from which the remaining plan no longer works; thus it would not waste time painting the chair red.
  • Slide 31
  • Does it work? We cannot guarantee that the agent always reaches the goal, because it could arrive at a dead-end state from which there is no repair. For example, the vacuum-cleaner agent may have a faulty model of itself and not know that its batteries may run out. For the agent to always reach the goal, we must assume that: 1. There are no dead ends: the goal is reachable from every state in the environment. 2. The environment is really nondeterministic: each retry of an action has some chance of succeeding.
  • Slide 32
  • When actions are not really nondeterministic: trouble occurs when an action's success depends on some precondition the agent doesn't know about. Example: the painting agent may not know that the paint can is empty, so no amount of retrying to paint would reach the goal. Two approaches to solve this problem: 1. Choosing randomly among the set of possible repair plans, rather than trying the same repair each time. 2. Learning: the agent should be able to modify its model of the world to accord with its percepts.
  • Slide 33
  • Agenda 1. Recall classical planning 2. Types of the environment 3. Methods to deal with different types of the environment: I. Sensorless (Conformant) Planning II. Contingent Planning III. Online Replanning 4. Multiagent Planning: I. Planning with multiple simultaneous actions II. Planning with multiple agents 5. Summary
  • Slide 34
  • 4. MULTIAGENT PLANNING When there are multiple agents in the environment, each agent faces a multiagent planning problem. Between the purely single-agent and the truly multiagent cases lies a wide spectrum of problems. Examples: a human who can type and speak at the same time; a fleet of delivery robots in a factory
  • Slide 35
  • Multiple bodies act as a single body as long as the relevant sensor information collected by each body can be pooled to form a common estimate of the world state that then informs the execution of the overall plan. When communication constraints make this impossible, we have a decentralized planning problem. Example: multiple reconnaissance robots covering a wide area
  • Slide 36
  • When a single entity is doing the planning, there is really only one goal, which all the bodies necessarily share. When the bodies are distinct agents that do their own planning, they may still share identical goals. Example: two human tennis players who form a doubles team share the goal of winning the match. The multibody and multiagent cases are nevertheless quite different: in a multibody robotic doubles team, a single plan dictates the movements of both bodies.
  • Slide 37
  • The clearest case of a multiagent problem, of course, is when the agents have different goals. Example: in tennis, the goals of two opposing teams are in direct conflict. Some systems are a mixture of centralized and multiagent planning. Example: a delivery company
  • Slide 38
  • The issues involved in multiagent planning can be divided roughly into two sets: 1. Issues of representing and planning for multiple simultaneous actions; these occur in all settings from multieffector to multiagent planning. 2. Issues of cooperation, coordination, and competition arising in true multiagent settings.
  • Slide 39
  • I. Planning with multiple simultaneous actions For the time being, we treat the multieffector, multibody, and multiagent settings in the same way. A correct plan is one that, if executed by the actors, achieves the goal. We assume perfect synchronization: each action takes the same amount of time, and actions at each point in the joint plan are simultaneous.
  • Slide 41
  • If the actors have no interaction with one another, we can simply solve n separate problems (example: n actors each playing a game of solitaire). The standard approach to loosely coupled problems is to pretend the problems are completely decoupled and then fix up the interactions. For the transition model, this means writing action schemas as if the actors acted independently.
  • Slide 42
  • Problems arise, however, when a plan has both agents hitting the ball at the same time. Technically, the difficulty is that preconditions constrain the state in which an action can be executed successfully, but do not constrain other actions that might mess it up.
  • Slide 43
  • A concurrent action list states which actions must or must not be executed concurrently. Example: the Hit action has its stated effect only if no other Hit action by another agent occurs at the same time; see the schema sketch below.
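    In AIMA's notation, such a constraint can be attached to the action schema itself; roughly:

        Action(Hit(actor, Ball),
          CONCURRENT: ∀b b ≠ actor ⇒ ¬Hit(b, Ball)
          PRECOND: Approaching(Ball, loc) ∧ At(actor, loc)
          EFFECT: Returned(Ball))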
  • Slide 44
  • For some actions, the desired effect is achieved only when another action occurs concurrently. Example: two agents are needed to carry a cooler full of beverages to the tennis court, as in the sketch below.
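    Again in AIMA's notation, a sketch of a joint action that requires a concurrent partner (the preconditions and effects here are plausible assumptions rather than verbatim text):

        Action(Carry(actor, cooler, here, there),
          CONCURRENT: ∃b b ≠ actor ∧ Carry(b, cooler, here, there)
          PRECOND: At(actor, here) ∧ At(cooler, here) ∧ Cooler(cooler)
          EFFECT: At(actor, there) ∧ At(cooler, there) ∧ ¬At(actor, here) ∧ ¬At(cooler, here))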
  • Slide 45
  • Agenda 1. Recall classical planning 2. Types of the environment 3. Methods to deal with different types of the environment: I. Sensorless (Conformant) Planning II. Contingent Planning III. Online Replanning 4. Multiagent Planning: I. Planning with multiple simultaneous actions II. Planning with multiple agents 5. Summary
  • Slide 46
  • II. Planning with multiple agents: Co-operation and Co-ordination Actors(A, B) Init(At(A, LeftBaseline) ∧ At(B, RightNet) ∧ Approaching(Ball, RightBaseline) ∧ Partner(A, B) ∧ Partner(B, A)) Goal(Returned(Ball) ∧ (At(a, RightNet) ∨ At(a, LeftNet))) Action(Hit(actor, Ball), PRECOND: Approaching(Ball, loc) ∧ At(actor, loc) EFFECT: Returned(Ball)) Action(Go(actor, to), PRECOND: At(actor, loc) ∧ to ≠ loc EFFECT: At(actor, to) ∧ ¬At(actor, loc)) Plan 1 A: [Go(A, RightBaseline), Hit(A, Ball)] B: [NoOp(B), NoOp(B)] Plan 2 A: [Go(A, LeftNet), NoOp(A)] B: [Go(B, RightBaseline), Hit(B, Ball)]
  • Slide 47
  • II. Planning with multiple agents: Co-operation and Co-ordination Initially: Init(At(A, LeftBaseline) ∧ At(B, RightNet) ∧ Approaching(Ball, RightBaseline) ∧ Partner(A, B) ∧ Partner(B, A)) Goal(Returned(Ball) ∧ (At(a, RightNet) ∨ At(a, LeftNet)))
  • Slide 48
  • II. Planning with multiple agents: Co-operation and Co-ordination Plan 1 A: [Go(A, RightBaseline), Hit(A, Ball)] B: [NoOp(B), NoOp(B)]
  • Slide 50
  • II. Planning with multiple agents: Co-operation and Co-ordination Plan 1 A: [Go(A, RightBaseline), Hit(A, Ball)] B: [NoOp(B), NoOp(B)] The goal is reached.
  • Slide 51
  • II. Planning with multiple agents: Co-operation and Co-ordination Plan 2 A: [Go(A, LeftNet), NoOp(A)] B: [Go(B, RightBaseline), Hit(B, Ball)]
  • Slide 54
  • II. Planning with multiple agents: Co-operation and Co-ordination Plan 2 A: [Go(A, LeftNet), NoOp(A)] B: [Go(B, RightBaseline), Hit(B, Ball)] The goal is reached.
  • Slide 55
  • II. Planning with multiple agents: Co-operation and Co-ordination Again, initially: Init(At(A, LeftBaseline) ∧ At(B, RightNet) ∧ Approaching(Ball, RightBaseline) ∧ Partner(A, B) ∧ Partner(B, A)) Goal(Returned(Ball) ∧ (At(a, RightNet) ∨ At(a, LeftNet)))
  • Slide 56
  • II. Planning with multiple agents: Co-operation and Co-ordination Plan 1 as chosen by A: A: [Go(A, RightBaseline), Hit(A, Ball)] B: [NoOp(B), NoOp(B)]
  • Slide 57
  • II. Planning with multiple agents: Co-operation and Co-ordination Plan 2 as chosen by B: A: [Go(A, LeftNet), NoOp(A)] B: [Go(B, RightBaseline), Hit(B, Ball)]
  • Slide 58
  • II. Planning with multiple agents: Co-operation and Co-ordination Plan 2 as chosen by A: A: [Go(A, LeftNet), NoOp(A)] B: [Go(B, RightBaseline), Hit(B, Ball)]
  • Slide 60
  • II. Planning with multiple agents: Co-operation and Co-ordination Plan 1 as chosen by B: A: [Go(A, RightBaseline), Hit(A, Ball)] B: [NoOp(B), NoOp(B)] With mismatched choices the joint plan fails: one mismatch sends both players to hit the ball, the other leaves nobody returning it.
  • Slide 61
  • II. Planning with multiple agents: Co-operation and Co-ordination How can the agents coordinate to make sure they agree on the plan? 1. Adopt a convention 2. Use communication: 1. Verbal exchange 2. Plan recognition
  • Slide 62
  • II. Planning with multiple agents: Co-operation and Co-ordination 1. Adopt a convention: a convention is any constraint on the selection of joint plans. Convention: stick to your side of the court (Plan 1 is out; both agents will select Plan 2). Any alternative convention works equally well as long as all agents in the environment agree on it. When conventions are widespread, they are called social laws.
  • Slide 63
  • II. Planning with multiple agents: Co-operation and Co-ordination 2. Use communication: to achieve common knowledge of a feasible joint plan (Chapter 22 presents more mechanisms for communication). Communication can work as well with competitive agents as with cooperative ones.
  • Slide 64
  • II. Planning with multiple agents: Co-operation and Co-ordination 2. Use communication, 1. by verbal exchange: "Yours!"
  • Slide 65
  • II. Planning with multiple agents: Co-operation and Co-ordination 2. Use communication, 1. by verbal exchange: "Mine!"
  • Slide 66
  • II. Planning with multiple agents: Co-operation and Co-ordination Then it's Plan 1: A: [Go(A, RightBaseline), Hit(A, Ball)] B: [NoOp(B), NoOp(B)]
  • Slide 67
  • II. Planning with multiple agents: Co-operation and Co-ordination 2. Use communication, 2. plan recognition: by executing the first part of the plan.
  • Slide 68
  • II. Planning with multiple agents: Co-operation and Co-ordination 2. Use communication, 2. plan recognition: seeing the first part of the plan executed, the partner infers it's Plan 2.
  • Slide 70
  • II. Planning with multiple agents: Co-operation and Co-ordination Other examples: 1. Seed-eating harvester ants: the queen's job is to reproduce, not to do centralized planning. Some learning mechanism enables the colony to take successful actions over its decades-long life, even though individual ants live only about a year. 2. Flocking behavior of birds: each bird observes the positions of its nearest neighbors and chooses the heading and acceleration that maximize the weighted sum of: 1. Cohesion 2. Separation 3. Alignment (a sketch of this rule follows).
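    A minimal Python sketch of that weighted-sum rule for one bird, assuming each bird is a (position, velocity) pair of NumPy vectors; the weights and the neighbor list are illustrative assumptions:

        import numpy as np

        def steer(bird, neighbors, w_cohesion=1.0, w_separation=1.5, w_alignment=0.5):
            pos, vel = bird
            center = np.mean([p for p, _ in neighbors], axis=0)
            cohesion = center - pos                          # head toward the neighbors' center
            separation = sum(pos - p for p, _ in neighbors)  # push away from each nearby bird
            alignment = np.mean([v for _, v in neighbors], axis=0) - vel  # match headings
            return (w_cohesion * cohesion + w_separation * separation
                    + w_alignment * alignment)               # acceleration for this time step

        bird = (np.array([0.0, 0.0]), np.array([1.0, 0.0]))
        nbrs = [(np.array([1.0, 1.0]), np.array([0.5, 0.5])),
                (np.array([-1.0, 2.0]), np.array([0.0, 1.0]))]
        print(steer(bird, nbrs))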
  • Slide 71
  • 5. Summary: Resources can be expressed as numeric measures. Time is handled by specialized scheduling methods or integrated with planning. HTN planning uses HLAs; the effects of HLAs can be defined with angelic semantics. Contingent plans allow the agent to sense the world; sensorless and contingent plans can be constructed by search in the space of belief states. An online planning agent replans when nondeterministic actions or an incorrect model of the environment cause its plan to fail. Multiple agents may cooperate or compete, and they must agree on which joint plan is to be executed. This chapter extends classical planning to cover nondeterministic environments.