Software Agent Architectures
Outline
• Overview of agent architectures
• Deliberative agents
  – Deductive reasoning
  – Practical reasoning
• Reactive agents
• Hybrid agents
• Summary
Agent Architectures
• An agent is a computer system capable of flexible autonomous action
  – Autonomy, reactiveness, pro-activeness, and social ability
• Kaelbling considers an agent architecture to be
  – A specific collection of software (or hardware) modules, typically designated by boxes with arrows indicating the data and control flow among the modules. A more abstract view of an architecture is as a general methodology for designing particular modular decompositions for particular tasks.
• Maes defines an agent architecture as:
  – A particular methodology for building [agents]. It specifies how… the agent can be decomposed into the construction of a set of component modules and how these modules should be made to interact. The total set of modules and their interactions has to provide an answer to the question of how the sensor data and the current internal state of the agent determine the actions… and future internal state of the agent. An architecture encompasses techniques and algorithms that support this methodology.
Brief History of Agent Architectures
• Originally (1956–1985), pretty much all agents designed within AI were symbolic reasoning agents
• The purest expression of this approach proposes that agents use explicit logical reasoning in order to decide what to do
• Problems with symbolic reasoning led to a reaction against this — the so-called reactive agents movement, 1985–present
• From 1990 to the present, a number of alternatives have been proposed: hybrid architectures, which attempt to combine the best of reasoning and reactive architectures
Types of Agent Architecture
• Deliberative approach
  – Deductive reasoning agents
  – Practical reasoning agents
• Reactive approach
• Hybrid approach
Deliberative Agents (1)
• We define a deliberative agent or agent architecture to be one that:
  – contains an explicitly represented, symbolic model of the world
  – makes decisions (for example, about what actions to perform) via symbolic reasoning
  – assumes that intelligent behavior can be generated by such representation and manipulation of symbols
• This paradigm is known as symbolic AI
  – An explicit symbolic model of the world in which decisions are made via logical reasoning, based on pattern matching and symbolic manipulation
  – The sense-plan-act problem-solving paradigm of classical AI planning systems
  – Examples of deliberative architectures:
    • BDI
    • GRATE*, HOMER
    • Shoham: Agent-Oriented Programming
Deliberative Agents (2)
[Diagram: deliberative agent. Sensors feed a World Model, a Planner produces a plan, and a Plan Executor drives the Effectors.]
Problems of Deliberative Agents
• Performance problems
  – Transduction problem
    • It is time-consuming to translate all of the needed information into the symbolic representation, especially if the environment is changing rapidly
  – Representation problem
    • How to represent the world model symbolically, and how to get agents to reason with the information in time for the results to be useful
• Late results may be useless
• Does not scale to real-world scenarios
Deductive Reasoning (1)
• How can an agent decide what to do using theorem proving?
• Basic idea is to use logic to encode a theory stating the best action to perform in any given situation
• Let:
  – ρ be this theory (typically a set of rules)
  – Δ be a logical database that describes the current state of the world
  – Ac be the set of actions the agent can perform
  – Δ ⊢ρ φ mean that φ can be proved from Δ using ρ
• Agent internal state:
  – ρ: a set of rules
  – Δ: the current state of the world
Deliberative Agents
[Diagram: the agent–environment loop: the see function maps the Environment to percepts, next updates the internal state Δ from percepts, and action selects the next action from Ac.]
Deductive Reasoning (2)
/* try to find an action explicitly prescribed */
1. for each a ∈ Ac do
2.   if Δ ⊢ρ Do(a) then
3.     return a
4.   end-if
5. end-for
/* try to find an action not excluded */
6. for each a ∈ Ac do
7.   if Δ ⊬ρ ¬Do(a) then
8.     return a
9.   end-if
10. end-for
11. return null /* no action found */
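The two-pass loop above can be sketched in Python. The `prove` callback stands in for the theorem prover (Δ ⊢ρ φ); the toy version below just checks membership in the set of formulae the rules derive from the database, which is an illustrative assumption, not a real prover.

```python
def select_action(db, rules, actions, prove):
    """Deductive action selection: first look for an action the
    theory explicitly prescribes, then for one it does not forbid."""
    # Pass 1: an action explicitly prescribed by the theory.
    for a in actions:
        if prove(db, rules, ("Do", a)):
            return a
    # Pass 2: an action whose negation cannot be proved.
    for a in actions:
        if not prove(db, rules, ("not-Do", a)):
            return a
    return None  # no action found

# Toy "prover": treats the rule set as a function from the database
# to a set of provable ground formulae (an illustrative assumption).
def toy_prove(db, rules, formula):
    return formula in rules(db)

rules = lambda db: {("Do", "suck")} if "dirt" in db else {("not-Do", "suck")}
print(select_action({"dirt"}, rules, ["suck", "forward"], toy_prove))  # suck
print(select_action(set(), rules, ["suck", "forward"], toy_prove))     # forward
```

The second call shows the fall-through case: nothing is prescribed, `suck` is explicitly forbidden, so the first non-excluded action wins.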
Example of Deductive Reasoning (1)
• The Vacuum World: A robot to clean up a house
• Environment
  – A 3 × 3 grid
  – The vacuum world changes with the random appearance and disappearance of dirt
  – The agent always starts at (0, 0) facing north
  – The agent always has a definite orientation d ∈ {north, south, east, west}
• Agent perception: the agent perceives only the dirt directly beneath it
• Possible actions: Ac = {turn, forward, suck}
• Goal: traverse the room continuously, searching for and clearing dirt
Example of Deductive Reasoning (2)
• Representation of the world
  – Use 3 domain predicates to solve the problem:
    In(x, y): the agent is at (x, y)
    Dirt(x, y): there is dirt at (x, y)
    Facing(d): the agent is facing direction d
• Update the world model in two stages
  – In and Facing predicates: update: D × Ac → D
  – Dirt predicate: next: D × Per → D
• Rules for determining what to do:
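A minimal sketch of the two-stage update in Python, with the state D encoded as a tuple (x, y, facing, dirt). The function names `update` and `next_state` follow the slide's two functions; the clockwise `turn` and the grid clamping are assumptions made for illustration.

```python
DIRS = ["north", "east", "south", "west"]
MOVE = {"north": (0, 1), "east": (1, 0), "south": (0, -1), "west": (-1, 0)}

def update(state, action):
    """update: D x Ac -> D, revising the In and Facing predicates."""
    x, y, d, dirt = state
    if action == "turn":                    # assumed: turn 90 degrees clockwise
        d = DIRS[(DIRS.index(d) + 1) % 4]
    elif action == "forward":
        dx, dy = MOVE[d]
        x = min(max(x + dx, 0), 2)          # stay on the 3x3 grid
        y = min(max(y + dy, 0), 2)
    elif action == "suck":
        dirt = dirt - {(x, y)}
    return (x, y, d, dirt)

def next_state(state, percept):
    """next: D x Per -> D, revising the Dirt predicate from perception."""
    x, y, d, dirt = state
    if percept == "dirt":                   # dirt perceived beneath the agent
        dirt = dirt | {(x, y)}
    return (x, y, d, dirt)

s = (0, 0, "north", frozenset())
s = next_state(s, "dirt")   # perceive dirt beneath the agent
s = update(s, "suck")       # remove it
```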
Discussion of Deductive Reasoning
• Advantages
  – Simple
  – Elegant logical semantics
• Problems:
  – How to convert video camera input into a fact such as Dirt(0, 1)?
  – Time complexity of reasoning (searching the space)
    • Decision making assumes a static environment: calculative rationality
    • Decision making using logic is undecidable!
Practical Reasoning (1)
• Practical reasoning is reasoning directed towards actions
  – "Practical reasoning is a matter of weighing conflicting considerations for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes." (Bratman)
• Human practical reasoning consists of two activities:
  – Deliberation: deciding what state of affairs we want to achieve
  – Means-ends reasoning: deciding how to achieve these states of affairs
• The outputs of deliberation are intentions
Practical Reasoning (2)
• Intentions
  – The states of affairs that an agent has chosen and committed to
  – Intentions play a crucial role in the practical reasoning process
    • Intentions drive means-ends reasoning
    • Intentions persist
    • Intentions constrain future deliberation
    • Intentions are closely related to beliefs about the future
• Means-ends reasoning
  – The basic idea is to give an agent:
    • a representation of the goal/intention to achieve
    • a representation of the actions it can perform
    • a representation of the environment
  – and have it generate a plan to achieve the goal
  – Known as planning in the AI community
[Diagram: a planner takes as input a goal/intention/task, the state of the environment, and the possible actions, and produces a plan to achieve the goal.]
Example of Practical Reasoning (1)
• The Blocks World
  – Contains a robot arm, 3 blocks (A, B, and C) of equal size, and a table-top
  – Uses the closed world assumption: anything not stated is assumed to be false
• Representation of the environment
  – On(x, y): object x is on top of object y
  – OnTable(x): object x is on the table
  – Clear(x): nothing is on top of object x
  – Holding(x): the arm is holding x
  – ArmEmpty: the arm is empty
• A goal is represented as a set of formulae
  – An example goal: OnTable(A) ∧ OnTable(B) ∧ OnTable(C)
[Figure: initial configuration with A on B and C on the table, described by Clear(A) ∧ On(A, B) ∧ OnTable(B) ∧ OnTable(C); the goal places A, B, and C all on the table.]
Example of Practical Reasoning (2)
• Actions = {stack, unstack, pickup, putdown}
  – Actions are represented using STRIPS operators
    • Pre-condition/delete/add list notation
  – Each action has:
    • a name, which may have arguments
    • a pre-condition list: facts which must be true for the action to be executed
    • a delete list: facts that are no longer true after the action is performed
    • an add list: facts made true by executing the action
• Example: "stack"
  The stack action occurs when the robot arm places the object x it is holding on top of object y.
    Stack(x, y)
      pre: Clear(y) ∧ Holding(x)
      del: Clear(y) ∧ Holding(x)
      add: ArmEmpty ∧ On(x, y)
[Figure: block A stacked on block B, the result of Stack(A, B).]
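The Stack(x, y) operator above can be encoded directly as pre/delete/add sets of ground facts. The dictionary layout and helper names (`applicable`, `apply_op`) are illustrative choices, not a standard API:

```python
# A STRIPS operator as precondition/delete/add sets of ground facts,
# mirroring the slide's Stack(x, y) operator.
def make_stack(x, y):
    return {
        "name": f"Stack({x},{y})",
        "pre": {("Clear", y), ("Holding", x)},
        "del": {("Clear", y), ("Holding", x)},
        "add": {("ArmEmpty",), ("On", x, y)},
    }

def applicable(state, op):
    return op["pre"] <= state          # every precondition fact holds

def apply_op(state, op):
    assert applicable(state, op)
    return (state - op["del"]) | op["add"]

state = {("Clear", "B"), ("Holding", "A"),
         ("OnTable", "B"), ("OnTable", "C"), ("Clear", "C")}
state = apply_op(state, make_stack("A", "B"))
# Afterwards On(A, B) holds, the arm is empty, and Clear(B)/Holding(A) are gone.
```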
Practical Reasoning: Plan
• A plan
  – A sequence (list) of actions, Π = (a1, …, an), determines n+1 environment states Δ0, Δ1, …, Δn
• A plan is correct if
  – Δ0 is the initial state
  – the precondition of every action is satisfied in the preceding environment state
  – Δn is the goal state
• Plan generation becomes a search problem
  – Forward search
  – Backward search
  – Heuristic search
[Figure: plan generation as search from initial state I to goal state G through sequences of actions (a1, a17, a142, …).]
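Forward search over STRIPS operators can be sketched as a breadth-first search through states. The two-operator domain below is a deliberately tiny stand-in for the full Blocks World:

```python
from collections import deque

# Forward state-space search over STRIPS-style operators.
# Each op is (name, pre, dels, adds), all facts as tuples.
def forward_search(init, goal, ops):
    """BFS from init; returns the first action sequence whose final
    state satisfies every goal fact, or None if none exists."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                       # goal facts all hold
            return plan
        for name, pre, dels, adds in ops:
            if pre <= state:                    # operator applicable
                nxt = frozenset((state - dels) | adds)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

# Tiny two-step domain (illustrative, not the full Blocks World):
ops = [
    ("unstack(A,B)",
     frozenset({("On", "A", "B"), ("Clear", "A"), ("ArmEmpty",)}),
     frozenset({("On", "A", "B"), ("ArmEmpty",)}),
     frozenset({("Holding", "A"), ("Clear", "B")})),
    ("putdown(A)",
     frozenset({("Holding", "A")}),
     frozenset({("Holding", "A")}),
     frozenset({("OnTable", "A"), ("Clear", "A"), ("ArmEmpty",)})),
]
init = {("On", "A", "B"), ("Clear", "A"), ("ArmEmpty",), ("OnTable", "B")}
goal = frozenset({("OnTable", "A"), ("OnTable", "B")})
print(forward_search(init, goal, ops))  # ['unstack(A,B)', 'putdown(A)']
```

BFS returns a shortest plan; heuristic search would replace the queue with a priority queue ordered by an estimate of distance to the goal.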
Discussion of Practical Reasoning
• Problem: the deliberation and means-ends reasoning processes are not instantaneous; they have a time cost
  – Suppose the agent starts deliberating at t0, begins means-ends reasoning at t1, and begins executing the plan at t2
    • Time to deliberate: t_deliberate = t1 − t0
    • Time for means-ends reasoning: t_me = t2 − t1
• So, this agent will have overall optimal behavior in the following circumstances:
  – When deliberation and means-ends reasoning take a vanishingly small amount of time
  – When the world is guaranteed to remain static while the agent is deliberating and performing means-ends reasoning, so that the assumptions on which the choice of intention and the plan to achieve it are based remain valid until both processes are complete
  – When an intention that is optimal at time t0 (the time at which the world is observed) is guaranteed to remain optimal until time t2 (the time at which the agent has found a course of action to achieve the intention)
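The timing concern above can be made concrete with a bare control loop: the beliefs snapshotted at t0 drive both deliberation and planning, so any change in the world before t2 silently invalidates the plan. All four callbacks are hypothetical stand-ins, not a real BDI implementation:

```python
def agent_loop(observe, deliberate, plan, execute, steps=1):
    """One deliberate-plan-execute cycle per step."""
    for _ in range(steps):
        beliefs = observe()                 # t0: snapshot the world
        intention = deliberate(beliefs)     # t0..t1: choose what to achieve
        pi = plan(beliefs, intention)       # t1..t2: means-ends reasoning
        # By t2 the world may have changed; the plan is only as good
        # as the assumption that it has not.
        for a in pi:
            execute(a)

log = []
agent_loop(
    observe=lambda: {"dirty"},
    deliberate=lambda b: "clean" if "dirty" in b else "idle",
    plan=lambda b, i: ["suck"] if i == "clean" else [],
    execute=log.append,
)
print(log)  # ['suck']
```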
Behavior Languages
• Brooks has put forward three theses
  – Intelligent behavior can be generated without explicit representations of the kind that symbolic AI proposes
  – Intelligent behavior can be generated without explicit abstract reasoning of the kind that symbolic AI proposes
  – Intelligence is an emergent property of certain complex systems
• He identifies two key ideas that have informed his research
  – Situatedness and embodiment: 'real' intelligence is situated in the world, not in disembodied systems such as theorem provers or expert systems
  – Intelligence and emergence: 'intelligent' behavior arises as a result of an agent's interaction with its environment; also, intelligence is 'in the eye of the beholder', not an innate, isolated property
Subsumption Architecture
• A hierarchy of task-accomplishing behaviors
  – Each behavior is a rather simple rule-like structure
  – Each behavior 'competes' with the others to exercise control over the agent
  – Lower layers represent more primitive kinds of behavior (such as avoiding obstacles) and have precedence over layers further up the hierarchy
• The resulting systems are, in terms of the amount of computation they do, extremely simple
  – Some of the robots do tasks that would be impressive if they were accomplished by symbolic AI systems
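Subsumption-style action selection can be sketched as an ordered list of (condition, action) behaviours in which lower layers fire first. The vacuum-flavoured behaviours below are illustrative, not Brooks's actual robots:

```python
def subsume(behaviours, percept):
    """Return the action of the first (highest-precedence) behaviour
    whose condition fires on the current percept."""
    for condition, action in behaviours:   # ordered: lower layers first
        if condition(percept):
            return action
    return None

behaviours = [
    (lambda p: p.get("obstacle"), "avoid"),   # layer 0: most primitive, wins first
    (lambda p: p.get("dirt"), "suck"),        # layer 1
    (lambda p: True, "wander"),               # layer 2: default behaviour
]
print(subsume(behaviours, {"obstacle": True, "dirt": True}))  # avoid
print(subsume(behaviours, {"dirt": True}))                    # suck
print(subsume(behaviours, {}))                                # wander
```

Note there is no internal state and no reasoning: each step is a single pass down the precedence list.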
Reactive Agents (1)
• Reactive agents
  – have at most a very simple internal representation of the world
  – provide a tight coupling of perception and action
• Behavior-based paradigm
• Intelligence is a product of interaction between an agent and its envi-ronment.
• Do we really need abstract reasoning?
Reactive Agents (2)
[Diagram: reactive agent. Sensors and Effectors are linked by stimulus–response behaviours, each mapping a state (State1 … Staten) directly to an action (Action1 … Actionn).]
Reactive Agents (3)
• Each behavior continually maps perceptual input to action output
• Reactive behavior:
  – action: S → A
    where S denotes the states of the environment, and A the primitive actions the agent is capable of performing
• Example:
  – action(s) = heater off, if the temperature is OK
                heater on, otherwise
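The heater rule above, written as a pure mapping action: S → A. The 20-degree setpoint defining "temperature OK" is an assumed value:

```python
def action(temperature, setpoint=20.0):
    """Reactive rule action: S -> A; no internal state, no reasoning.
    The setpoint is an assumed threshold for 'temperature OK'."""
    return "heater off" if temperature >= setpoint else "heater on"

print(action(22.5))  # heater off
print(action(17.0))  # heater on
```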
Discussion of Reactive Agents
• Advantages: simplicity, economy, computational tractability, robustness against failure, elegance
• Problems
  – Agents without environment models must have sufficient information available from the local environment
  – If decisions are based on the local environment, how can they take non-local information into account? (i.e., reactive agents have a "short-term" view)
  – It is difficult to make reactive agents that learn
  – Since behavior emerges from component interactions plus the environment, it is hard to see how to engineer specific agents (no principled methodology exists)
  – Typically "handcrafted"
    • Development takes a lot of time
    • Impossible to build large systems?
    • Can be used only for its original purpose
Comparison between Two Approaches
Hybrid Agents (1)
• Combination of deliberative and reactive behavior
  – An agent consists of several subsystems
    • Subsystems that develop plans and make decisions using symbolic reasoning (the deliberative component)
    • Reactive subsystems that are able to react quickly to events without complex reasoning (the reactive component)
• Examples:
  – InteRRaP
  – Touring Machines
  – Procedural Reasoning System (PRS)
  – 3T
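One common way to wire the two components, sketched below: reactive rules get first refusal, and the planner is consulted only when none of them fire. The rule and the one-line planner are hypothetical stand-ins:

```python
def hybrid_step(percept, reactive_rules, planner):
    """One control step of a hybrid agent: reactive fast path first,
    deliberative slow path as the fallback."""
    for condition, action in reactive_rules:
        if condition(percept):
            return action              # fast path: no symbolic reasoning
    plan = planner(percept)            # slow path: symbolic planning
    return plan[0] if plan else None

rules = [(lambda p: p.get("collision_imminent"), "brake")]
planner = lambda p: ["turn", "forward"]   # stand-in deliberative component
print(hybrid_step({"collision_imminent": True}, rules, planner))  # brake
print(hybrid_step({}, rules, planner))                            # turn
```

This is one layering choice among several; architectures like InteRRaP instead arrange the components as layers with explicit control flow between them.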
Hybrid Agents (2)
[Diagram: hybrid agent. Sensors and Effectors are shared by a reactive component (stimulus–response states State1 … Staten mapped to Action1 … Actionn) and a deliberative component (World Model, Planner, Plan Executor); the deliberative component observes and modifies the reactive component.]
Hybrid Agents: Layered Architectures
[Diagram: two layered architectures, each a stack Layer1 … Layern. In one, sensor input is fed to every layer and each layer can produce action output; in the other, sensor input enters at one end of the stack and the action output leaves at the other.]
Hybrid Agents: InteRRaP
[Diagram: InteRRaP. Perceptual input and action output pass through a world interface; stacked above it are the behaviour layer (using the world model), the plan layer (using planning knowledge), and the cooperation layer (using social knowledge).]
Summary
• Features of each agent architecture
  – Deliberative agents
  – Reactive agents
  – Hybrid agents
• How to organize the information processing of agents?
  – Hierarchical modeling
  – Functionalized modularization
  – Learning of internal modules
• Next lectures
  – Implementation of agent architectures
  – Multi-agent systems