A History of Autonomous Agents: From Thinking Machines to Machines for Thinking

S. Costantini & F. Gobbo
University of L’Aquila

CiE2013, Univ. Milano-Bicocca, Milan, Italy, July 1-5, 2013


Introduction

What is an Autonomous Agent?


What is an Autonomous Agent? The old answer...

source: Turing100 blog at Blogspot


...of Good Old-Fashioned Artificial Intelligence

Autonomous Agents were designed to interact mainly with humans:

- their behaviour pretends to be human-like – fooling observers into believing they are human;

- their ability to manipulate symbols is more important than their physical implementation;

- they often speak or write in a natural language;

- they can play games – above all, chess.

The defining metaphor of Autonomous Agents is the thinking machine.



What is an Autonomous Agent? The new answer...

source: Fast, Cheap & Out of Control paper by R. Brooks and A. M. Flynn (1989)


...of nouvelle Artificial Intelligence

Autonomous Agents were designed to interact with the environment:

- their behaviour is action-driven, inspired by Nature (animals like ants or bees);

- their physical implementation is at least as important as their ability to manipulate symbols;

- they do things in the physical world;

- they go where humans do not (yet) go – e.g., planetary rovers.

The ‘thinking machine’ metaphor enters a crisis, while the agents’ environment gains importance.



The word ‘agent’ is inherently ambiguous

Firstly, agent researchers do not own this term in the same way as fuzzy logicians/AI researchers own the term fuzzy logic – it is one that is used widely in everyday parlance as in travel agents, estate agents, etc. Secondly, even within the software fraternity, the word agent is really an umbrella term for a heterogeneous body of research and development [Nwana 1996, my emphasis].

[An agent is] one who, or which, exerts power or produces an effect [Wooldridge et al. 1995, my emphasis].

People in the field need a new defining metaphor.



A minimal but operative definition of agenthood

[Wooldridge et al. 1995] characterises an agent as a computer system with the following fundamental properties:

- autonomy, i.e., being in control of its own actions;

- reactivity, i.e., it reacts to events from the environment.

And possibly:

- proactivity, the complement of reactivity, i.e., the ability to act on its own initiative;

- sociality, the ability to interact with other agents.

Sociality also presumes a multi-agent system!
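These four properties can be sketched in a few lines of code. The following is a minimal illustration, not an implementation from the literature; all class and method names are invented for this sketch:

```python
# Toy sketch of Wooldridge-style agenthood. All names here are
# illustrative assumptions, not an API from the talk or the papers cited.

class Agent:
    def __init__(self, name, goals=None):
        self.name = name
        self.goals = list(goals or [])   # pending goals drive proactivity
        self.inbox = []                  # messages from other agents (sociality)
        self.log = []

    def perceive(self, event):
        # reactivity: respond to an event coming from the environment
        self.log.append(f"{self.name} reacts to {event}")

    def step(self):
        # autonomy: the agent itself decides its next action
        if self.inbox:
            msg = self.inbox.pop(0)
            self.log.append(f"{self.name} handles message {msg!r}")
        elif self.goals:
            # proactivity: take the initiative toward a goal
            goal = self.goals.pop(0)
            self.log.append(f"{self.name} pursues goal {goal!r}")

    def tell(self, other, msg):
        # sociality: interact with another agent by sending a message
        other.inbox.append((self.name, msg))


a = Agent("a", goals=["explore"])
b = Agent("b")
a.tell(b, "hello")        # sociality needs at least two agents
a.perceive("obstacle")    # reactivity
a.step()                  # proactivity: a pursues its goal
b.step()                  # b autonomously handles a's message
```

Even this toy version shows why sociality presumes a multi-agent system: `tell` is meaningless with a single agent.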

Page 17: A history of Autonomous Agents: from Thinking Machines to Machines for Thinking

A minimal but operative definition of agenthood

[Woolridge et al. 1995] restricts agenthood as a computer system,with the following fundamental properties:

� autonomy, i.e., being in control over its own actions;

� reactivity, i.e. it reacts to events from the environment;

And possibly:

� proactivity, the complement of reactivity, i.e, the ability to acts onits own initiative;

� sociality, the ability to interact with other agents.

Sociality presumes also a multi-agent system!

8 of 20

Page 18: A history of Autonomous Agents: from Thinking Machines to Machines for Thinking

A minimal but operative definition of agenthood

[Woolridge et al. 1995] restricts agenthood as a computer system,with the following fundamental properties:

� autonomy, i.e., being in control over its own actions;

� reactivity, i.e. it reacts to events from the environment;

And possibly:

� proactivity, the complement of reactivity, i.e, the ability to acts onits own initiative;

� sociality, the ability to interact with other agents.

Sociality presumes also a multi-agent system!

8 of 20

Page 19: A history of Autonomous Agents: from Thinking Machines to Machines for Thinking

A minimal but operative definition of agenthood

[Woolridge et al. 1995] restricts agenthood as a computer system,with the following fundamental properties:

� autonomy, i.e., being in control over its own actions;

� reactivity, i.e. it reacts to events from the environment;

And possibly:

� proactivity, the complement of reactivity, i.e, the ability to acts onits own initiative;

� sociality, the ability to interact with other agents.

Sociality presumes also a multi-agent system!

8 of 20

Page 20: A history of Autonomous Agents: from Thinking Machines to Machines for Thinking

A minimal but operative definition of agenthood

[Woolridge et al. 1995] restricts agenthood as a computer system,with the following fundamental properties:

� autonomy, i.e., being in control over its own actions;

� reactivity, i.e. it reacts to events from the environment;

And possibly:

� proactivity, the complement of reactivity, i.e, the ability to acts onits own initiative;

� sociality, the ability to interact with other agents.

Sociality presumes also a multi-agent system!

8 of 20


From Single Agents to Multi-Agent Systems


Autonomous Agents as a programming paradigm

[Shoham 1990] is Agenthood Degree Zero. In that paper, a new programming paradigm was defined, called agent-orientation:

- agents are pieces of software – possibly but not necessarily embodied in robots;

- their behaviour is regulated by:
  - constraints like ‘honesty, consistency’;
  - parameters like ‘beliefs, commitments, capabilities, choices’;

- they show a degree of autonomy in the environment:
  - they react in a timely fashion to changes that occur around them;
  - they exhibit goal-directed behaviour by taking the initiative;
  - they interact with other entities through a common language;
  - they choose a plan in order to reach goals, preferably by learning from past experience.
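The beliefs/commitments/capabilities triad can be sketched as a small rule loop. This is a toy illustration loosely inspired by agent-oriented programming; it does not reproduce Shoham's AGENT0 syntax, and all names are invented for the sketch:

```python
# Toy agent-oriented loop: commitments are discharged when their
# triggering belief holds and the agent has the required capability.
# Names and rule format are illustrative assumptions, not AGENT0.

class AOPAgent:
    def __init__(self):
        self.beliefs = set()
        self.capabilities = {}    # action name -> callable
        self.commitments = []     # (condition belief, action name)
        self.done = []

    def commit(self, condition, action):
        # adopt a commitment: when `condition` is believed, do `action`
        self.commitments.append((condition, action))

    def observe(self, fact):
        # update beliefs from the environment
        self.beliefs.add(fact)

    def step(self):
        # honour every commitment whose condition is now believed,
        # provided the agent is actually capable of the action
        still_open = []
        for condition, action in self.commitments:
            if condition in self.beliefs and action in self.capabilities:
                self.capabilities[action]()
                self.done.append(action)
            else:
                still_open.append((condition, action))
        self.commitments = still_open


agent = AOPAgent()
agent.capabilities["greet"] = lambda: None
agent.commit("user_present", "greet")
agent.step()                   # condition not yet believed: nothing happens
agent.observe("user_present")
agent.step()                   # the commitment is now discharged
```

The point of the sketch is the regulation: the agent only acts on commitments it is capable of honouring, echoing the constraints-plus-parameters view above.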


Springtime again for Artificial Intelligence?

The success of the agent-oriented paradigm was great and rapid, with different architectures and models:

� Belief, Desire, Intention (BDI) [Rao & Georgeff 1991];

� Agent Logic Programming (ALP) [Kowalski & Sadri 1999];

� Declarative Logic programming Agent-oriented Language (DALI)[Costantini 1999];

� Knowledge, Goals and Plans (KGP) [Kakas et al. 2004].

ALP, DALI and KGP use Computational Logic, showing that agenthood can also be implemented successfully outside the object-oriented programming paradigm.


How the concept of intention is re-engineered

An example from the foundation of the BDI architecture:

My desire to play basketball this afternoon is merely a potential influencer of my conduct this afternoon. It must vie with my other relevant desires [...] before it is settled what I will do. In contrast, once I intend to play basketball this afternoon, the matter is settled: I normally need not continue to weigh the pros and cons. When the afternoon arrives, I will normally just proceed to execute my intention. [Bratman 1990, my emphasis]

Formally, an intention is a desire which can be satisfied in practice.
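This desire-to-intention filtering can be shown as a one-function sketch. It is a deliberate simplification of BDI deliberation, with invented names; here "satisfiable in practice" is approximated as "some known plan achieves it":

```python
# Toy BDI-style deliberation: a desire is promoted to an intention only
# when it is practically satisfiable (here: a plan for it is known).
# The names and the plan table are illustrative assumptions.

def deliberate(desires, plans):
    """Filter desires into intentions: keep only those with a workable plan."""
    intentions = []
    for desire in desires:
        if desire in plans:          # satisfiable in practice
            intentions.append((desire, plans[desire]))
    return intentions


desires = ["play_basketball", "fly_unaided"]
plans = {"play_basketball": ["go_to_court", "join_game"]}

# Only the satisfiable desire is settled into an intention; once adopted,
# the agent need not keep weighing it, as in Bratman's example.
intentions = deliberate(desires, plans)
```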


Autonomous agents as machines for thinking

Desires and intentions – basic modalities of human thinking – are clearly distinguished in BDI and put into relation in a formal way.

All agent-oriented architectures are formalisations of the human way of thinking. None is exhaustive of human thinking as a whole, but they help us to understand ourselves through formalisation, implementation and testing, especially in virtual societies formed by many agents.


Multi-Agent Systems as simulations of societies

The human species is social, and therefore agent-based simulations of societies through Multi-Agent Systems (MAS) become even more interesting:

- there is no global system control – agents must communicate and coordinate their activities;

- MAS can highlight both egoistic and collective interests;

- MAS are serious games (e.g., for educational purposes, or economic simulations);

- MAS emerged as a distinctive research area from separate communities, and so they profit from different perspectives.


Further directions of work

Page 28: A history of Autonomous Agents: from Thinking Machines to Machines for Thinking

The emergence of hybrid environments...

Many authors (among them, Castells and Floridi) have noted a tendency towards hybrid environments, shared by human agents and autonomous agents, where they meet, fight, communicate and interact on the same level. Two cases are possible:

- Multi-User Dungeons (MUDs) or environments such as Second Life, where human agents become virtual through avatars and play together;

- robots acting in the real world, where human agents are present there and then.

MAS are reasonable – although rather simplified – models of Nature and human societies, putting information first.


...put autonomous agents as machines for thinking

The Paradox: Distributed Artificial Intelligence as a way to study the Natural way of thinking!


Essential references (1/2)

Bratman, M. E.: What is intention? In: Cohen, P. R., Morgan, J. L., Pollack, M. E. (eds.), Intentions in Communication, 15–32. The MIT Press, Cambridge, MA (1990)

Costantini, S.: Towards Active Logic Programming. In: Brogi, A., Hill, P. M. (eds.), Proc. of 2nd International Workshop on Component-based Software Development in Computational Logic (COCL’99), PLI’99, indexed by CiteSeerX (1999)

Kakas, A. C., Mancarella, P., Sadri, F., Stathis, K., Toni, F.: The KGP model of agency. In: Proc. ECAI-2004 (2004)

Kowalski, R., Sadri, F.: From Logic Programming towards Multi-agent Systems. Annals of Mathematics and Artificial Intelligence, 25, 391–419 (1999)


Essential references (2/2)

Nwana, H. S.: Software Agents: An Overview. Knowledge Engineering Review, 11(3), 1–40 (1996)

Rao, A. S., Georgeff, M.: Modeling rational agents within a BDI-architecture. In: Allen, J., Fikes, R., Sandewall, E. (eds.), Proc. of the Second International Conference on Principles of Knowledge Representation and Reasoning (KR’91), 473–484 (1991)

Shoham, Y.: Agent Oriented Programming. Technical Report STAN-CS-90-1335, Computer Science Department, Stanford University (1990)

Wooldridge, M. J., Jennings, N. R.: Agent Theories, Architectures, and Languages: a Survey. In: Wooldridge, M. J., Jennings, N. R. (eds.), Intelligent Agents. Lecture Notes in Computer Science, Volume 890, Springer-Verlag, Berlin, 1–39 (1995)


Thanks for your attention!

Questions?

For proposals, ideas & comments:

[email protected]

Download & share these slides here:

http://federicogobbo.name/en/2013.html

Licensed under Creative Commons (CC BY), © Federico Gobbo 2013
