Luís Moniz Pereira
Centro de Inteligência Artificial - CENTRIA
Universidade Nova de Lisboa
Pierangelo Dell’Acqua
Dept. of Science and Technology
Linköping University
Our agents
We propose a LP approach to agents that can:
- Reason and react to other agents
- Prefer among possible choices
- Intend to reason and to act
- Update their own knowledge, reactions, and goals
- Interact by updating the theory of another agent
- Decide whether to accept an update depending on the requesting agent
Framework
This framework builds on the works:
- Updating Agents - P. Dell'Acqua & L. M. Pereira, MAS'99
- Updates plus Preferences - J. J. Alferes & L. M. Pereira, JELIA'00
Enabling agents to update their KB
Updating agent: a rational, reactive agent that can dynamically change its own knowledge and goals
An updating agent:
- makes observations
- reciprocally updates other agents with goals and rules
- thinks (rational)
- selects and executes an action (reactive)
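The observe-update-think-act cycle above can be sketched in code. This is an illustrative simplification only; the `Agent` class and its methods are assumptions for the sketch, not the authors' formalism:

```python
# Minimal sketch of an updating agent's cycle: observe, incorporate
# pending updates, think (rational), then act (reactive).
# All names here are illustrative assumptions.

class Agent:
    def __init__(self, name, knowledge):
        self.name = name
        self.knowledge = set(knowledge)   # current beliefs/rules
        self.inbox = []                   # updates received from other agents

    def observe(self, facts):
        # queue incoming observations/updates
        self.inbox.extend(facts)

    def update(self):
        # incorporate pending updates into the knowledge base
        self.knowledge |= set(self.inbox)
        self.inbox.clear()

    def think(self):
        # rational step (trivialized here): inspect current beliefs
        return sorted(self.knowledge)

    def act(self):
        # reactive step: select and execute one action
        beliefs = self.think()
        return f"{self.name} acts on {beliefs[0]}" if beliefs else None

maria = Agent("maria", {"work"})
maria.observe(["money"])
maria.update()
print(maria.act())  # -> maria acts on money
```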
Agent’s language
Atomic formulae:
- A (objective atoms)
- not A (default atoms)
- i:C (projects)
- i÷C (updates)
Formulae (each Li is an atom, an update or a negated update; each Zj is a project):
- generalized rules: A ← L1 ∧ … ∧ Ln and not A ← L1 ∧ … ∧ Ln
- active rule: L1 ∧ … ∧ Ln ⇒ Z
- integrity constraint: false ← L1 ∧ … ∧ Ln ∧ Z1 ∧ … ∧ Zm
Projects and updates
A project j:C denotes the intention of some agent i of proposing to update the theory of agent j with C, e.g. wilma:C.
An update i÷C denotes an update proposed by agent i of the current theory of some agent j with C, e.g. fred÷C.
Example: active rules
Consider the following active rules in the theory of Maria:
money ⇒ maria : not work
beach ⇒ maria : goToBeach
travelling ⇒ pedro : bookTravel
Agent’s language
A project i:C can take one of the forms:
Note that a program can be updated with another program, i.e., any rule can be updated.
i : ( A ← L1 ∧ … ∧ Ln )
i : ( L1 ∧ … ∧ Ln ⇒ Z )
i : ( ?- L1 ∧ … ∧ Ln )
i : ( not A ← L1 ∧ … ∧ Ln )
i : ( false ← L1 ∧ … ∧ Ln ∧ Z1 ∧ … ∧ Zm )
Agents’ knowledge states
Knowledge states represent dynamically evolving states of agents’ knowledge. They undergo change due to updates.
Given the current knowledge state Ps, its successor knowledge state Ps+1 is produced as a result of the occurrence of a set of parallel updates.
Update actions do not modify the current or any of the previous knowledge states. They only affect the successor state: the precondition of the action is evaluated in the current state and the postcondition updates the successor state.
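A minimal sketch of this state discipline, assuming knowledge states are plain sets of facts (the representation is an assumption for illustration): preconditions are evaluated in the current state, postconditions affect only the successor, and past states are never modified.

```python
# Sketch of knowledge-state succession: an update is evaluated against
# the current state P_s but only affects the successor P_{s+1}.

def successor(states, updates):
    """Append P_{s+1}: P_s revised by a set of parallel updates.
    Each update is (precondition, fact, positive?)."""
    current = states[-1]
    new = set(current)
    for precondition, fact, positive in updates:
        if precondition <= current:      # evaluated in the CURRENT state
            if positive:
                new.add(fact)            # postcondition hits the successor
            else:
                new.discard(fact)
    states.append(frozenset(new))
    return states

states = [frozenset({"work"})]
successor(states, [(set(), "money", True)])       # state 1: money arrives
successor(states, [({"money"}, "work", False)])   # state 2: work retracted
assert states[0] == frozenset({"work"})           # past states untouched
```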
Enabling agents to prefer
Let the underlying theory of Maria be:
city ← not mountain ∧ not beach ∧ not travelling
work
vacation ← not work
mountain ← not city ∧ not beach ∧ not travelling ∧ money
beach ← not city ∧ not mountain ∧ not travelling ∧ money
travelling ← not city ∧ not mountain ∧ not beach ∧ money
Since the theory has a unique two-valued model:
M={city, work}
Maria decides to live in the city.
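This can be checked mechanically. The sketch below is a toy brute-force stable-model enumerator, not the paper's machinery: each rule is encoded as (head, positive body, negative body), and a candidate M is stable iff it is the least model of its Gelfond-Lifschitz reduct.

```python
from itertools import chain, combinations

# Maria's program: (head, positive body, negative body) per rule.
RULES = [
    ("city",       [],        ["mountain", "beach", "travelling"]),
    ("work",       [],        []),
    ("vacation",   [],        ["work"]),
    ("mountain",   ["money"], ["city", "beach", "travelling"]),
    ("beach",      ["money"], ["city", "mountain", "travelling"]),
    ("travelling", ["money"], ["city", "mountain", "beach"]),
]

def least_model(reduct):
    """Least model of a negation-free program (naive fixpoint)."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in reduct:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(rules):
    atoms = sorted({a for h, p, n in rules for a in [h, *p, *n]})
    models = []
    for candidate in chain.from_iterable(
            combinations(atoms, k) for k in range(len(atoms) + 1)):
        m = set(candidate)
        # Gelfond-Lifschitz reduct: drop rules whose negative body meets M.
        reduct = [(h, p) for h, p, n in rules if not (set(n) & m)]
        if least_model(reduct) == m:
            models.append(m)
    return models

print(sorted(map(sorted, stable_models(RULES))))  # [['city', 'work']]
# Adding the fact "money" yields the four models M1..M4 from the text:
four = stable_models(RULES + [("money", [], [])])
print(len(four))  # 4
```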
Enabling agents to prefer
If we add the fact ”money” to the theory of Maria, then the theory has 4 models:
M1={city, money, work} M2= {mountain, money, work}
M3= {beach, money, work} M4= {travelling, money, work}
Therefore, Maria is unable to decide where to live.
To select among alternative choices, Maria needs the ability of preferring.
Updates plus preferences
A logic programming framework that combines two distinct forms of reasoning: preferring and updating.
Updates create new models, while preferences allow us to select among pre-existing models.
The priority relation can itself be updated.
A language capable of considering sequences of logic programs that result from the consecutive updates of an initial program, where it is possible to define a priority relation among the rules of all successive programs.
Preferring agents
Agents can express preferences about their own rules.
Preferring agent: an agent that is able to prefer beliefs and reactions when several alternatives are possible.
Preferences are expressed via priority rules.
Preferences can be updated, possibly on advice from others.
Priority rules
Let < be a binary predicate symbol whose set of constants includes all the generalized rules.
r1 < r2 means that rule r1 is preferred to rule r2.
A priority rule is a generalized rule defining < .
A prioritized LP is a set of generalized rules (possibly, priority rules) and integrity constraints.
Example: a prioritized LP
(1) city ← not mountain ∧ not beach ∧ not travelling
(2) work
(3) vacation ← not work
(4) mountain ← not city ∧ not beach ∧ not travelling ∧ money
(5) beach ← not city ∧ not mountain ∧ not travelling ∧ money
(6) travelling ← not city ∧ not mountain ∧ not beach ∧ money
1<4 ← work   4<6 ← vacation
1<5 ← work   5<6 ← vacation
1<6 ← work   6<1 ← vacation
If we add "money" to the theory, then there is a unique model:
M={city, money, work}
If work is false, then vacation holds and there are two models:
M1={mountain, money, vacation}   M2={beach, money, vacation}
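One simple way to see how the priority rules prune models (a sketch only, simplifying the preferred-stable-model semantics of the underlying papers): tag each alternative model with the choice rule (1, 4, 5 or 6) that generated it, and discard models whose rule is dominated, i.e. appears on the right of a priority rule whose body holds in the model.

```python
# Sketch: filter alternative models by rule priorities. A priority
# (r1, r2, cond) reads "r1 < r2 if cond": when cond holds in a model,
# models generated by the dominated rule r2 are discarded.

PRIORITIES = [  # (preferred, dominated, condition)
    (1, 4, "work"), (1, 5, "work"), (1, 6, "work"),
    (4, 6, "vacation"), (5, 6, "vacation"), (6, 1, "vacation"),
]

def preferred(tagged_models, priorities=PRIORITIES):
    result = []
    for rule, model in tagged_models:
        dominated = any(r2 == rule and cond in model
                        for _, r2, cond in priorities)
        if not dominated:
            result.append(model)
    return result

# With work true: the four models collapse to the city model.
work_models = [(1, {"city", "money", "work"}),
               (4, {"mountain", "money", "work"}),
               (5, {"beach", "money", "work"}),
               (6, {"travelling", "money", "work"})]
# With vacation instead of work: mountain and beach survive.
vac_models = [(1, {"city", "money", "vacation"}),
              (4, {"mountain", "money", "vacation"}),
              (5, {"beach", "money", "vacation"}),
              (6, {"travelling", "money", "vacation"})]
```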
Agent theory
The initial theory of an agent is a pair (P,R):
- P is a prioritized LP.
- R is a set of active rules.
An updating program is a finite set of updates.
Let S be a set of natural numbers. We call the elements s ∈ S states.
An agent α at state s, written αs, is a pair (T,U):
- T is the initial theory of α.
- U={U1,…, Us} is a sequence of updating programs.
Multi-agent system
A multi-agent system M={α1s,…, αns} at state s is a set of agents α1,…, αn at state s.
M characterizes a fixed society of evolving agents.
The declarative semantics of M characterizes the relationship among the agents in M and how the system evolves.
The declarative semantics is stable models based.
Example: happy story
Let the initial theory (P,R) of Maria be:
(1) city ← not mountain ∧ not beach ∧ not travelling
(2) work
(3) vacation ← not work
(4) mountain ← not city ∧ not beach ∧ not travelling ∧ money
(5) beach ← not city ∧ not mountain ∧ not travelling ∧ money
(6) travelling ← not city ∧ not mountain ∧ not beach ∧ money
1<4 ← work   4<6 ← vacation
1<5 ← work   5<6 ← vacation
1<6 ← work   6<1 ← vacation
money ⇒ maria : not work
beach ⇒ maria : goToBeach
travelling ⇒ pedro : bookTravel
State: 0   U={ }
Example: happy story
At state 0 Maria receives the update l÷money.
State: 1   U={U1}   U1={ l÷money }
Example: happy story
Then, Maria receives the update maria÷(not work).
State: 2   U={U1, U2}   U2={ maria÷(not work) }
Example: happy story
Then, Maria receives the update f÷(5<4 ← vacation).
State: 3   U={U1, U2, U3}   U3={ f÷(5<4 ← vacation) }
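The whole story can be replayed with the same dominance idea. The sketch below is illustrative only (the `undominated` helper is hypothetical): at each state it lists the applicable choice rules and the priority pairs whose condition then holds, and keeps the undominated choices.

```python
# Replay of Maria's evolving choice through the story's four states.
# Each priority pair (better, worse) stands for a priority rule whose
# condition holds at that state; 'l' and 'f' are the external agents
# sending the updates, as in the slides.

def undominated(choices, priorities):
    # keep choices that no applicable priority dominates
    return [c for c in choices
            if not any(worse == c for _, worse in priorities)]

story = []
# State 0: no money, work holds -> only rule 1 (city) is applicable.
story.append(undominated(["city"], []))
# State 1: l÷money. Rules 1,4,5,6 applicable; work holds: 1 beats 4,5,6.
story.append(undominated(["city", "mountain", "beach", "travelling"],
                         [("city", "mountain"), ("city", "beach"),
                          ("city", "travelling")]))
# State 2: maria÷(not work). vacation holds: 4<6, 5<6, 6<1.
story.append(undominated(["city", "mountain", "beach", "travelling"],
                         [("mountain", "travelling"), ("beach", "travelling"),
                          ("travelling", "city")]))
# State 3: f÷(5<4 ← vacation) adds beach-over-mountain, so beach wins
# and the active rule beach => maria:goToBeach can fire.
story.append(undominated(["city", "mountain", "beach", "travelling"],
                         [("mountain", "travelling"), ("beach", "travelling"),
                          ("travelling", "city"), ("beach", "mountain")]))
```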
Future work
The approach can be extended in several ways:
- Non-synchronous, dynamic multi-agent systems.
- Other rational abilities can be incorporated, e.g., learning.
- Development of a proof procedure for updating and preferring reasoning.