


AUTONOMOUS AGENTS AS EMBODIED AI

STAN FRANKLIN

Institute for Intelligent Systems and Department of Mathematical Sciences, University of Memphis, Memphis, Tennessee, USA

This paper is primarily concerned with answering two questions: What are necessary elements of embodied architectures? How are we to proceed in a science of embodied systems? Autonomous agents, more specifically cognitive agents, are offered as the appropriate objects of study for embodied AI. The necessary elements of the architectures of these agents are then those of embodied AI as well. A concrete proposal is presented as to how to proceed with such a study. This proposal includes a synergistic parallel employment of an engineering approach and a scientific approach. It also supports the exploration of design space and of niche space. A general architecture for a cognitive agent is outlined and discussed.

This essay is motivated by foundational questions concerning the nature of human thinking and intelligence:

(Q1) Is it necessary for an intelligent system to possess a body?
(Q2) What are necessary elements of embodied architectures?
(Q3) What drives these systems?
(Q4) How are we to proceed in a science of embodied systems?
(Q5) How is meaning related to real objects?
(Q6) What sort of ontology is necessary for describing and constructing knowledge about systems?
(Q7) Which ontologies are created within the systems?

Address correspondence to Stan Franklin, Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA. E-mail: stan.franklin@memphis.edu; http://www.msci.memphis.edu/~franklin

Cybernetics and Systems: An International Journal, 28:499-520, 1997. Copyright 1997 Taylor & Francis.


The intent here is to speak to each of these questions, with relatively lengthy discussions of (Q2) and (Q4) and brief responses to the others. A concrete proposal will be made on how to proceed with embodied AI research. Much of what follows will also apply to artificial life research.

Here are my short answers to the foregoing questions, offered as appetizers for the main courses to follow.

(A1) Software systems with no body in the usual physical sense can be intelligent. But they must be "embodied" in the situated sense of being autonomous agents structurally coupled with their environment.

(A2) An embodied architecture must have at least the primary elements of an autonomous agent: sensors, actions, drives, and an action selection mechanism. Physical embodiment will, of course, constrain the sensors and actions but not the drives and action selection. Intelligent systems typically must have much more.

(A3) These systems are driven by built-in or evolved-in drives and the goals generated from them. This is true of all autonomous agents, including embodied systems and artificial life agents.

(A4) Rather than pursuing a science of embodied systems, we suggest pursuing a science of mind using embodied systems as tools. This would be done by developing theories of how mechanisms of mind can work, making predictions from the theories, designing autonomous agent architectures (often embodied systems) constrained by these theories, implementing these agents in hardware or software, experimenting with the agents to check our predictions, modifying our theories and architectures, and looping ad infinitum.

(A5) Real objects exist, as objects, only in the "minds" of autonomous agents, including embodied systems. Their meanings are grounded in the agent's perceptions, both external and internal.

(A6) An ontology for knowledge about autonomous agents, including embodied systems and artificial life agents, will include sensors, actions, subgoals, beliefs, desires, intentions, emotions, attitudes, moods, memories, concepts, workspaces, plans, schedules, and various mechanisms for generating some of the above. This list does not even begin to be exhaustive.

(A7) Each autonomous agent uses its own ontology, which is typically partly built in or evolved in and partly constructed by the agent. This is true of embodied systems and of artificial life agents.

My concrete proposal on how to proceed includes an expanded form of the cycle outlined in (A4), augmented by Sloman's (1995) notion of exploration of design space and niche space. Now for the main courses.

THE ACTION SELECTION PARADIGM OF MIND

Classical AI, along with cognitive science and much of embodied AI, has developed within the cognitivist paradigm of mind (Varela et al., 1991). This paradigm takes as its metaphor mind as a computer program running on some underlying hardware or wetware. It thus sees mind as information processing by symbolic computation, that is, rule-based symbol manipulation. Horgan and Tiensen (1996) give a careful account of the fundamental assumptions of this paradigm. Serious attacks on the cognitivist paradigm of mind have been mounted from outside by neuroscientists, philosophers, and roboticists (Brooks, 1990; Edelman, 1987; Freeman & Skarda, 1990; Horgan & Tiensen, 1989; Reeke & Edelman, 1988; Searle, 1980; Skarda & Freeman, 1987).

Other competing paradigms of mind include the connectionist paradigm (Horgan & Tiensen, 1996; Smolensky, 1988; Varela et al., 1991) and the enactive paradigm (Maturana, 1975; Maturana & Varela, 1980; Varela et al., 1991). The structural coupling invoked in (A1) derives from the enactive paradigm. The connectionist paradigm offers a brain metaphor of mind rather than a computer metaphor.

The action selection paradigm of mind (Franklin, 1995), on which this essay is based, sprang from observation and analysis of various embedded AI systems, including embodied systems. Its major tenets follow:

(AS1) The overriding task of mind is to produce the next action.
(AS2) Actions are selected in the service of drives built in by evolution or design.
(AS3) Mind operates on sensations to create information for its own use.
(AS4) Mind recreates prior information (memories) to help produce actions.
(AS5) Minds tend to be embodied as collections of relatively independent modules, with little communication between them.
(AS6) Minds tend to be enabled by a multitude of disparate mechanisms.
(AS7) Mind is most usefully thought of as arising from the control structures of autonomous agents. Thus, there are many types of minds with vastly different abilities.

These tenets will guide much of the following discussion, as, for example, (A5) will derive from (AS3). As applied to human minds, (AS4) and (AS5) can be more definitely asserted.

An action produces a change of state in an environment (Luck & D'Inverno, 1995). But not every such change is produced by an action, for example, the motion of a planet. We say that the action of a hammer on a nail changes the environment, but the hammer is an instrument, not an actor. The action is produced by the carpenter. Similarly, it is the driver who acts, not the automobile. In a more complex situation it is the user that acts, not the program that produces payroll checks. In an AI setting the user acts with the expert system as instrument. On the other hand, a thermostat acts to maintain a temperature range.

Because actions, in the sense meant here, are produced only by autonomous agents (see later), (AS1) leads us to think of minds as emerging from the architectures and mechanisms of autonomous agents, which include embodied systems and artificial life agents. Thus it seems plausible to seek answers to foundational questions concerning the nature of human thinking and intelligence by studying the architectures, mechanisms, and behavior of autonomous agents, even artificial agents such as autonomous robots and software agents.

AUTONOMOUS AGENTS

We've spoken several times of autonomous agents. What are they? An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to affect what it senses in the future (Franklin & Graesser, 1997).
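To make this definition concrete, here is a minimal sketch, in Python, of the sense-act loop it implies, using the thermostat as the agent. The class and method names, and the numbers, are illustrative assumptions of mine, not part of the definition.

```python
# A minimal sketch of the autonomous-agent definition: a system embedded in an
# environment that senses it, acts on it over time in pursuit of its own
# agenda, and thereby affects what it senses next. Names are illustrative.

class ThermostatAgent:
    """A toy agent in the spirit of the paper's thermostat example."""

    def __init__(self, low=19.0, high=21.0):
        self.low, self.high = low, high   # single drive: keep temperature in range

    def sense(self, environment):
        return environment["temperature"]  # sensors return part of the environmental state

    def select_action(self, temperature):
        if temperature < self.low:
            return "heat_on"
        if temperature > self.high:
            return "heat_off"
        return "do_nothing"

    def act(self, action, environment):
        # Acting changes the environment, and so changes what is sensed later.
        if action == "heat_on":
            environment["temperature"] += 0.5
        elif action == "heat_off":
            environment["temperature"] -= 0.5


environment = {"temperature": 17.0}
agent = ThermostatAgent()
for _ in range(10):                       # the agent operates "over time"
    reading = agent.sense(environment)
    agent.act(agent.select_action(reading), environment)
print(environment["temperature"])
```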


What sorts of entities satisfy this definition? Figure 1 illustrates the beginnings of a natural kinds taxonomy for autonomous agents (Franklin & Graesser, 1997). With these examples in mind, let us unpack the definition of an autonomous agent.

An environment for a human will include some range of what we call the real world. For most of us, it will not include subatomic particles or stars within a distant galaxy. The environment for a thermostat, a particularly simple robotic agent, can be described by a single state variable, the temperature. Environments for embodied systems (autonomous robots) might include typical, real-world office and laboratory spaces (Brooks, 1990a; Harvey et al., 1993; Nolfi & Parisi, 1995). Artificial life agents "live" in an artificial environment often depicted on a monitor (e.g., Ackley & Littman, 1992). Such environments often include obstacles, food, other agents, predators, and so on. Sumpy, a task-specific software agent, "lives" in a UNIX file system (Song et al., 1996). Julia, an entertainment agent, "lives" in a MUD on the Internet (Mauldin, 1994). Viruses inhabit DOS, Windows, MacOS, and even Microsoft Word. An autonomous agent must be such with respect to some environment. Such environments can be described in various ways, perhaps even as dynamical systems (Franklin & Graesser, 1997). Keep in mind that autonomous agents are, themselves, part of their environments.

Figure 1. A natural kinds taxonomy for autonomous agents.

Human and animal sensors need no recounting here. Robotic sensors include video cameras, range finders, bumpers, or antennas with tactile and sometimes chemical receptors (Beer, 1990; Brooks, 1990). Artificial life agents use artificial sensors, some modeled after real sensors, others not. Sumpy senses by issuing UNIX commands such as pwd or ls. Virtual Mattie, a software clerical agent (Franklin et al., 1996), senses only incoming e-mail messages. Julia senses messages posted on the MUD by other users, both human and entertainment agents. Sensors return portions of the environmental state to the agent. Senses can be active or passive. Although all the senses mentioned were external, internal sensing, proprioception, is also part of many agents' designs. This seems particularly necessary for autonomous robots. Some might consider the recreation of images from memory (see AS4) to be internal sensing.

Again, there is no need to discuss actions of humans, animals, or even robots. Sumpy's actions consist of wandering from directory to directory, compressing some files, backing up others, and putting himself to sleep when usage of the system is heavy. Virtual Mattie, among other things, corresponds with seminar organizers in English via e-mail, sends out seminar announcements, and keeps a mailing list updated. Julia wanders about the MUD conversing with occupants. Again, actions can be external or internal, such as producing plans, schedules, or announcements. Every autonomous agent comes with a built-in set of primitive actions. Other actions, usually sequences of primitive actions, can also be built in or can be learned.

The definition of an autonomous agent requires that it pursue its own agenda. Where does this agenda come from? Every autonomous agent must be provided with built-in (or evolved-in) sources of motivation for its actions. I refer to these sources as drives (see AS2). Brooks' Herbert has a drive to pick up soda cans. Sumpy has a drive to compress files when needed. Virtual Mattie has a drive to get seminar announcements out on time. Drives may be explicit or implicit. A thermostat's single drive is to keep the temperature within a range. This drive is hardwired into the mechanism, as is Herbert's soda can drive. Sumpy's four drives are as hardwired as that of the thermostat, except that it is done in software. My statement of such a drive describes a straightforward causal mechanism within the agent. Virtual Mattie's six or so drives are explicitly represented as drives within her architecture. They still operate causally, but not in such a straightforward manner. An accounting of human drives would seem a useful endeavor.

Drives give rise to goals that act to satisfy the drives. A goal describes a desired specific state of the environment (Luck & D'Inverno, 1995). I picture the motivations of a complex autonomous agent as constituting a forest in the computational sense. Each tree in this forest is rooted in a drive that branches to high-level goals. Goals can branch to lower level subgoals, and so on. The leaf nodes in this forest constitute the agent's agenda.
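This forest can be pictured directly as a data structure. The following sketch is one illustrative way to represent it (the class and field names are my own assumptions, not part of any particular agent's architecture): each tree is rooted in a drive, goals branch into subgoals, and the leaves across all trees constitute the agenda.

```python
# Sketch of a motivational forest: each tree is rooted in a drive that branches
# into high-level goals, goals branch into subgoals, and the leaf nodes across
# all trees make up the agent's agenda. Representation is illustrative only.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Goal:
    description: str
    subgoals: List["Goal"] = field(default_factory=list)

    def leaves(self):
        """Leaf goals beneath this node (the node itself if it has no subgoals)."""
        if not self.subgoals:
            return [self]
        return [leaf for g in self.subgoals for leaf in g.leaves()]


@dataclass
class Drive:
    name: str                       # e.g., "send seminar announcements on time"
    goals: List[Goal] = field(default_factory=list)


def agenda(drives: List[Drive]) -> List[Goal]:
    """The agent's agenda: the leaf nodes of every tree in the forest."""
    return [leaf for d in drives for g in d.goals for leaf in g.leaves()]


# A toy forest loosely modeled on Virtual Mattie's announcement drive.
announce = Drive("send seminar announcements on time", [
    Goal("send next week's announcement", [
        Goal("gather speaker and title from organizers"),
        Goal("compose the announcement"),
        Goal("mail it to the list"),
    ]),
])
print([g.description for g in agenda([announce])])
```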

Now that we can recognize an autonomous agent's agenda, the question of pursuing that agenda remains. We've arrived at action selection (see AS1). Each agent must come equipped with some mechanism for choosing among its possible actions in pursuit of some goal on its agenda. These mechanisms vary greatly. Sumpy is named after its subsumption architecture (Brooks, 1990a), developed for use with autonomous robots such as Herbert. One of its layers uses fuzzy logic (Yager & Filev, 1994). Some internet information-seeking agents use classical AI, say planning (Etzioni & Weld, 1994). Virtual Mattie selects her actions via a considerably augmented form of Maes' behavior net (1990). This topic will be discussed in more detail later.
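Purely as an illustration of what any such mechanism must accomplish, and not as a rendering of subsumption, fuzzy logic, planning, or a behavior net, here is a toy selector that scores candidate actions by how much they are expected to advance the currently urgent drives. All names and numbers are assumed.

```python
# Toy action selection: choose the candidate action whose expected effects best
# serve the currently urgent drives. This is only an illustration; it is not
# Maes' behavior net, subsumption, or any other published mechanism.

def select_action(drive_urgency, expected_effect):
    """drive_urgency: drive name -> urgency in [0, 1].
    expected_effect: action name -> {drive name: expected contribution}."""
    def score(action):
        effects = expected_effect[action]
        return sum(drive_urgency.get(d, 0.0) * effects[d] for d in effects)
    return max(expected_effect, key=score)


# Example, loosely in the spirit of Sumpy's drives (numbers are made up).
urgency = {"compress_files": 0.2, "back_up_files": 0.7, "avoid_heavy_load": 0.9}
effects = {
    "compress_old_logs": {"compress_files": 1.0},
    "back_up_home_dirs": {"back_up_files": 1.0},
    "sleep_for_a_while": {"avoid_heavy_load": 1.0},
}
print(select_action(urgency, effects))   # -> "sleep_for_a_while"
```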

Finally, an autonomous agent must act so as to affect its possible future sensing. This requires not only that the agent be in and a part of an environment but also that it be structurally coupled to that environment (Maturana, 1975; Maturana & Varela, 1980; Varela et al., 1991). See also (A1). Structural coupling, as applied here, means that the agent's architecture and mechanisms must mesh with its environment so that it senses portions relevant to its needs and can act so as to meet those needs. Herbert is able to move about a room, avoid obstacles, and find soda cans. His moving the soda can affects what he sees next.

Having unpacked the definition of autonomous agent, we will argue that it is the appropriate view to take of the objects of study of embodied AI.

CAAT

In the early days of AI there was much talk of creating human-level intelligence. As the years passed and the difficulties became apparent, such talk all but disappeared as most AI researchers wisely concentrated on producing some small facet of human intelligence. Here I am proposing a return to earlier goals.


Human cognition typically includes short- and long-term memory, categorizing and conceptualizing, reasoning, planning, problem solving, learning, and creativity. An autonomous agent capable of many or even most of these activities will be referred to as a cognitive agent. [Sloman calls such agents "complete" in one place and refers to "a human-like intelligent agent" (1995) or to "autonomous agents with human-like capabilities" in another (Sloman & Poli, 1995). Riegler's dissertation is also concerned with "the emergence of higher cognitive structures" (1994).] Currently, only humans and perhaps some higher animals seem to be cognitive agents. These are all embodied systems. No current autonomous robots fall into this category. I expect they will in the future.

Mechanisms designed for cognitive functions as mentioned above include Kanerva's sparse distributed memory (1988), Drescher's schema mechanism (1988, 1991), Maes' behavior networks (1990), Jackson's pandemonium theory, Hofstadter and Mitchell's copycat architecture (1994; Mitchell, 1993), and many others. Many of these do a fair job of implementing some one cognitive function.

The strategy suggested here proposes to fuse sets of these mechanisms to form control structures for cognitive mobile robots, cognitive artificial life creatures, and cognitive software agents. Virtual Mattie is an early example of this strategy. Her architecture extends both Maes' behavior networks and the copycat architecture and fuses them for action selection and perception.

Given a control architecture for an autonomous agent, we may theorize that human and animal cognition works as does this architecture. Because the specification of every control architecture in this way underlies some theory, the strategy set forth in the last paragraph also entails the creation of theories of cognition. For example, the functioning of sparse distributed memory gives rise to a theory of how human memory operates. It is hoped that such theories, arising from AI, can help to explain and predict human and animal cognitive activities.

Thus the acronym CAAT, cognitive agent architecture and theory, arises. The CAAT strategy of designing cognitive agent architectures and creating theories from them leads to a loop of tactical activities as follows:

(SL1) Design a cognitive agent architecture.
(SL2) Implement this cognitive architecture on a computer.
(SL3) Experiment with this implemented model to learn about the functioning of the design.
(SL4) Use this knowledge to formulate the cognitive theory corresponding to the cognitive architecture.
(SL5) From this theory derive testable predictions.
(SL6) Design and carry out experiments to test the theory using human or animal subjects.
(SL7) Use the knowledge gained from the experiments to modify the architecture so as to improve the theory and its predictions.

We have just seen the science loop of the CAAT strategy, whose aim is understanding and predicting human and animal cognition with the help of cognitive agent architectures. The engineering loop of CAAT aims at producing intelligent autonomous agents (mobile robots, artificial life creations, software agents) approaching the cognitive abilities of humans and animals. The means for achieving this goal can be expressed in a branch parallel to the sequence of activities described above. The first three items are identical.

(EL1) Design a cognitive agent architecture.
(EL2) Implement this cognitive architecture on a computer.
(EL3) Experiment with this implemented model to learn about the functioning of the design.
(EL4) Use this knowledge to design a version of the architecture capable of real-world (including artificial life and software environments) problem solving.
(EL5) Implement this version in hardware or software.
(EL6) Experiment with the resulting agent confronting real-world problems.
(EL7) Use the knowledge gained from the experiments to modify the architecture so as to improve the performance of the resulting agent.

The science loop and the engineering loop will surely seem familiar to both scientists and engineers. So, why are they included? Because when applied to autonomous agents, including autonomous robots, synergy results from the subject matter. We have seen that each autonomous agent architecture gives rise to (at least) one theory, the theory that says humans and animals do it as this architecture does. Thus the engineering loop can leap out and influence theory. But theory also constrains architecture. A theory can give rise to (usually many) architectures that implement the theory. Thus gains in the science loop can leap out and influence the engineering loop. Synergy can occur.

This section expands on short answer (A4). It also constitutes the first part of my proposal on how to proceed with research in embodied AI. The second part will appear below. Now, how do we design cognitive agents?

DESIGN PRINCIPLES

Not much is known about how to design cognitive agents, although there have been some attempts to build one (Johnson & Scanlon, 1987). Theory guiding such design is in an early stage of development. Brustiloni (1991; Franklin, 1995) has offered a theory of action selection mechanisms, which gives rise to a hierarchy of behavior types. Albus (1991, 1996) offers a theory of intelligent systems with much overlap with cognitive agents. Sloman (1995) and his cohorts are working diligently on a theory of cognitive agent architectures, with a high-level version in place. We will encounter a bit of this theory below. Also, Baars' (1988) global workspace model of consciousness, although intended as a model of human cognitive architecture, can be viewed as constraining cognitive agent design. Here we see the synergy between the science loop and the engineering loop in action.

This section proposes design principles, largely unrelated to each other, that have been derived from an analysis of many autonomous agents, including autonomous robots and naturally embodied systems. They serve to constrain cognitive agent architectures and so will contribute to an eventual theory of cognitive agent design. In particular, they apply to designing autonomous robots to operate in the real world.

Drives: Every agent must have built-in drives to provide the fundamental motivation for its actions. This is simply a restatement of (A3) and of (AS2). It is included here because autonomous agent designers tend to hardwire drives into the agent without mentioning them explicitly in the documentation and, apparently, without thinking of them explicitly. An explicit accounting of an agent's drives should help one to understand its architecture and its niche within its environment.

Attention: An agent in a complex environment who has several senses may well need some attention mechanism to help it focus on relevant input. Attending to all input may well be computationally too expensive.
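Such a relevance filter could be as simple as the following sketch, which passes on only those percepts that bear on a sufficiently urgent drive. The threshold, the field names, and the drive-urgency values are assumptions for illustration.

```python
# Sketch of an attention mechanism as a relevance filter: pass on only those
# percepts relevant to a sufficiently urgent drive, instead of processing all
# input. Thresholds and the notion of "relevant drives" are illustrative.

def attend(percepts, drive_urgency, threshold=0.5):
    """percepts: list of dicts with a 'relevant_drives' field.
    drive_urgency: drive name -> urgency in [0, 1]."""
    selected = []
    for p in percepts:
        urgency = max((drive_urgency.get(d, 0.0) for d in p["relevant_drives"]),
                      default=0.0)
        if urgency >= threshold:
            selected.append(p)
    return selected


percepts = [
    {"what": "looming object ahead", "relevant_drives": ["avoid_collision"]},
    {"what": "poster on the wall", "relevant_drives": []},
    {"what": "battery at 8%", "relevant_drives": ["recharge"]},
]
urgency = {"avoid_collision": 0.9, "recharge": 0.6, "reach_goal": 0.4}
print([p["what"] for p in attend(percepts, urgency)])
```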

Internal models: Model the environment only when such models are needed. When possible, depend on frequent sampling of the environment instead. Modeling the environment is both difficult and computationally expensive. Frequent sampling is typically cheaper and more effective, when it will work at all. This principle has been enunciated several times before (Brooks, 1990a; Brustiloni, 1991) and is particularly true of mobile robots.

Coordination: In multiagent systems, coordination can often be achieved without the high cost of communication. We often think of coordination of actions as requiring communication between the actors. Many examples, including robotic examples, show this thought to be a myth. Again, frequent sampling of the environment may serve as well or even better (Franklin, unpublished).

Knowledge: Build as much needed knowledge as possible into the lower levels of an autonomous agent's architecture. Every agent requires knowledge of itself and of its environment in order to act so as to satisfy its needs. Some of this knowledge can be learned. Trying to learn all of it can be expected to be computationally intensive even in simple cases (Drescher, 1988). The better tack is to hardwire as much needed knowledge as possible into the agent's architecture. Brooks' Herbert was designed according to this principle, which has also been enunciated by others (Brustiloni, 1991).

Curiosity: If an autonomous agent is to learn in an unsupervised way, some sort of more or less random behavior must be built in. Curiosity serves this function in humans and apparently in mammals. Autonomous agents typically learn mostly by internal reinforcement. (The notion of the environment providing reinforcement is misguided.) Actions in a particular context whose results move the agent closer to satisfying some drive tend to be reinforced, that is, made more likely to be chosen again in that context. Actions whose results move the agent further from satisfying a drive tend to be made less likely to be chosen again. In humans and many animals, the mechanisms of this reinforcement include pleasure and pain. Every form of reinforcement learning must rely on some mechanism. Random activity is useful when no solution to the current contextual problem is known and to allow the possibility of improving a known solution. [This principle does not apply to observational-only forms of learning, such as that employed in memory-based reasoning (Maes, 1994).]
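The interplay of internal reinforcement and built-in randomness can be sketched as follows. This is a generic reinforcement update with occasional random action choice, offered as an illustration of the principle rather than as any particular agent's learning mechanism; the learning rate, exploration rate, and drive-progress signal are assumed.

```python
# Sketch of internal reinforcement with built-in randomness ("curiosity"):
# actions whose results move the agent closer to satisfying a drive become more
# likely in that context; occasional random choices explore unknown options.
# Parameters and the drive-progress signal are illustrative assumptions.

import random
from collections import defaultdict

strength = defaultdict(float)      # (context, action) -> learned strength


def choose(context, actions, explore=0.1):
    if random.random() < explore:                    # curiosity: random behavior
        return random.choice(actions)
    return max(actions, key=lambda a: strength[(context, a)])


def reinforce(context, action, drive_progress, rate=0.3):
    # drive_progress > 0: the result moved the agent closer to satisfying a
    # drive; < 0: further away. The reinforcement is internal, not a reward
    # handed out by the environment itself.
    strength[(context, action)] += rate * drive_progress


# Toy usage: in the context "low battery", "seek charger" tends to help.
for _ in range(50):
    action = choose("low battery", ["seek charger", "wander"])
    progress = 1.0 if action == "seek charger" else -0.2
    reinforce("low battery", action, progress)
print(max(strength, key=strength.get))   # usually ("low battery", "seek charger")
```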

Routines: Most cognitive agents will need some means of transforming frequently used sequences of actions into something reactive so that they run faster. Cognitive scientists talk of becoming habituated. Computer scientists like the compiling metaphor. One example is chunking in SOAR (Laird et al., 1987). Another is Jackson's concept demons (Franklin, 1995; Jackson, 1987). Agre's dissertation is concerned with human routines (in press). I do not know of such routines appearing, as yet, in autonomous robots.
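One simple way to picture this "compiling" is sketched below: when the same action sequence has been produced often enough in a given context, it is stored as a single reactive routine that later runs without deliberation. The counting scheme and threshold are my own assumptions; this is neither SOAR's chunking nor Jackson's concept demons.

```python
# Sketch of turning frequently used action sequences into reactive routines:
# once a sequence has recurred often enough in a context, it is "compiled" into
# a single macro that later runs without deliberation. The threshold and the
# bookkeeping are illustrative only.

from collections import Counter

sequence_counts = Counter()   # (context, action sequence) -> times observed
routines = {}                 # context -> compiled action sequence


def record(context, actions, threshold=3):
    key = (context, tuple(actions))
    sequence_counts[key] += 1
    if sequence_counts[key] >= threshold:
        routines[context] = tuple(actions)      # habituated: now reactive


def act(context, deliberate):
    """Use a compiled routine if one exists; otherwise deliberate (slowly)."""
    if context in routines:
        return list(routines[context])
    actions = deliberate(context)
    record(context, actions)
    return actions


# Toy usage: after a few deliberations, "doorway ahead" becomes a routine.
plan = lambda ctx: ["slow down", "center on opening", "drive through"]
for _ in range(4):
    act("doorway ahead", plan)
print(routines["doorway ahead"])
```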

Brustiloni (1991) gives other such design principles for agents that employ planning, as many cognitive agents must.

HIGH-LEVEL ARCHITECTURES FOR COGNITIVE AGENTS

Several high-level architectures for cognitive agents have been proposed (Albus, 1991; Baars, 1988; Ferguson, 1995; Hayes-Roth, 1995; Jackson, 1987; Johnson and Scanlon, 1987; Sloman, 1995). Some of these include descriptions of mechanisms for implementing the architectures; others do not. With the exception of Sloman's, all of these are architectures for specific agents or for a specific class of agents. Surprisingly, the intersection of all these architectures is rather small. It is like the story of the blind men and the elephant. What you sense depends on your particular viewpoint. Also note that none of these architectures are for autonomous robots. Cognitive robots have yet to be built, but they must be in order to participate in the development of a science of mind.

Here we are concerned with answering question (Q2) about the necessary elements of embodied architectures. I would also like to push further in search of a general architecture for cognitive agents. This architecture should be constrained by the tenets of the action selection paradigm of mind and, as much as possible, by the design principles of the previous section. Ideas for this architecture may be drawn from those referenced in the previous paragraph and from VMattie's architecture. It is hoped that this architecture will give rise to a theory that serves to kick off the CAAT strategy outlined earlier. We will produce a plan for this architecture by a sequence of refinements beginning with a very simple model. All apply directly to designing cognitive robots.

Computer scientists often partition a computing system into input, processing, and output for beginning students (Figure 2). The corresponding diagram for an autonomous agent might look as shown in Figure 3.

Figure 2. A computing system.

Figure 3. An autonomous agent at an abstract level.

For embodied agents, both sensors and actions are constrained by physical reality. The short answer (A2) guides a refinement of Figure 3. Whereas sensors and actions are explicitly present, drives and action selection are not. Note that this refinement imposes no further constraint on embodied agents.

Although drives are explicitly represented in Figure 4, they may well appear only implicitly in the causal mechanisms of a particular agent. In embodied agents, they are most often implemented directly in hardware. The diagram in Figure 4 is explicitly guided by (AS1) (action selection) and (AS2) (drives).

Figure 4. An autonomous agent with drives broken out.

(AS3) talks about the creation of information (Oyama, 1985), which is accomplished partly by perception (Neisser, 1993). Perception provides the agent with affordances (Gibson, 1979). Glenberg (to appear) suggests that sets of these affordances allow the formation of concepts and the laying down of memory traces. Real-world perception in embodied agents will often include such notions as distance, size, and opacity. Note that we have split perception off from action selection (Figure 5). In further refinements of the architecture, action selection must be interpreted less and less broadly.

Figure 5. An autonomous agent with perception split out.

(AS4) leads to another refinement with the addition of memory. We will include both long-term memory and short-term memory (workspace). Although only one memory and one workspace are shown in Figure 6, multiple specialized memories and workspaces may be expected in the architectures of complex autonomous agents. VMattie's architecture contains two of each, one set serving perception. Embodied agents might be expected to have memories of physical layouts, paths from one location to another, particular dangers to avoid, sources of power, and so on.

Figure 6. An autonomous agent with memory added.

At this point the action selection paradigm of mind gives us only general guidance: employ multiple, independent modules (AS5) and allow for disparate mechanisms (AS6). Thus we turn to design principles. The attention principle points to an attention mechanism or relevance filter (Figure 7). Note that attention will depend on context and, ultimately, on the strength and urgency of drives. A mobile robot, intent on reaching a goal, must nonetheless attend to a looming object. The goal might even be abandoned in the presence of an urgent need to recharge.


Figure 7. An autonomous agent with attention added.

Although the internal models principle warns against overdoing it, some internal modeling of the environment will be needed to allow for expectations (important to perception, for instance), planning, problem solving, and so on (Figure 8). This principle also points to the distinction between reactive and deliberative action selection (see, for example, Sloman, 1995). Deliberative actions are selected with the help of internal models, planners, schedulers, and so forth (Figure 9). These models use internal representations in the strict sense of the word; that is, they are consulted for their content. Internal states that play a purely causal role without such consultations are not representations in this sense (Franklin, 1995, Chapter 14). Reactive actions are exemplified by reflexes and routines (Agre, in press). They are arrived at without such consultation. Brustiloni's (1991) instinctive and habitual behaviors would be reactive, whereas his problem solving and higher behaviors would be deliberative. Deliberative mechanisms such as planners and problem solvers may well require their own memories and workspaces, not shown in the figure. Most autonomous robots, to date, seem to have been reactive.

Figure 8. A cognitive agent.


Figure 9. A cognitive agent with explicit deliberative mechanisms.
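The refinements of Figures 3 through 9 can be summarized in a single structural skeleton. The sketch below is only an outline under my own naming assumptions: drives, perception, an attention filter, memory and a workspace, and a reactive path alongside a deliberative one. None of the module internals are specified; trivial stand-ins are wired in only to show the data flow.

```python
# Structural sketch of the general cognitive agent architecture developed in
# Figures 3-9: sensors feed perception, an attention filter keeps the relevant
# percepts, memory and a workspace support both a reactive and a deliberative
# path to action selection in the service of drives. All names are illustrative.

class Workspace:
    """Trivial short-term store, present only to show the data flow."""
    def __init__(self):
        self.contents = []

    def update(self, percepts):
        self.contents = list(percepts)


class CognitiveAgentSkeleton:
    def __init__(self, drives, perception, attention, memory, workspace,
                 reactive_rules, deliberator):
        self.drives = drives                  # built-in or evolved-in motivators
        self.perception = perception          # creates information from sensations
        self.attention = attention            # relevance filter, driven by drive urgency
        self.memory = memory                  # long-term memory (possibly several)
        self.workspace = workspace            # short-term working memory
        self.reactive_rules = reactive_rules  # reflexes and compiled routines
        self.deliberator = deliberator        # planners, schedulers, internal models

    def step(self, sensations):
        percepts = self.perception(sensations, self.memory)
        relevant = self.attention(percepts, self.drives)
        self.workspace.update(relevant)
        action = self.reactive_rules(relevant)          # cheap, compiled knowledge first
        if action is None:                              # otherwise deliberate
            action = self.deliberator(self.workspace, self.memory, self.drives)
        return action


# Minimal wiring with trivial stand-ins, just to show how the pieces connect.
agent = CognitiveAgentSkeleton(
    drives={"recharge": 0.8},
    perception=lambda sensations, memory: sensations,
    attention=lambda percepts, drives: percepts,
    memory={},
    workspace=Workspace(),
    reactive_rules=lambda percepts: "stop" if "looming object" in percepts else None,
    deliberator=lambda workspace, memory, drives: "plan route to charger",
)
print(agent.step(["battery low"]))   # -> "plan route to charger"
```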

The coordination principle warns us against unnecessary communication. Still, for a cognitive agent in a society of other such agents, communication may well be needed. It is sufficiently important that some people include it in their definition of an agent (Wooldridge & Jennings, 1995). VMattie communicates with humans by e-mail. Her understanding of incoming messages is part of perception. Her composition of outgoing messages is brought about by deliberative behaviors. Independent modules for understanding messages and for composing messages must be part of a general cognitive agent architecture. Mataric's (1994) robots also communicate.

The knowledge principle brings up two issues: building knowledge into the reactive behaviors and learning. By definition, knowledge is built into reactive behaviors causally through their mechanisms, rather than declaratively by means of representations. This does not show up in the diagrams.

The other issue brought up by the knowledge principle is learning, which is critical to many autonomous agents coping with complex, dynamic environments and must be included in a general cognitive agent architecture. Learning, itself, is quite complex. Thomas (1993) lists eight different types of learning as follows: (1) habituation-sensitization; (2) signal learning; (3) stimulus-response learning; (4) chaining; (5) concurrent discriminations; (6) class concepts, absolute and relative; (7) relational concepts I: conjunctive, disjunctive, conditional concepts; (8) relational concepts II: biconditional concepts. He uses this classification as a scale to measure the abilities of animals to learn. Perhaps the same or a similar scale could be used for autonomous agents.

Thomas' classification categorizes learning according to the sophistication of the behavior to be learned. One might also classify learning according to the method used. Maes (1994) lists four such methods: memory-based reasoning, reinforcement learning, supervised learning, and learning by advice from other agents. Drescher's (1988) concept learning and Kohonen's (1984) self-organization are other methods. The curiosity principle is directly concerned with reinforcement learning, while the routines principle speaks of compiling or chunking sequences of actions, again a form of learning.

The limitations of working with an almost planar graph become apparent in Figure 10. Learning mechanisms should also connect to drives and to memory, essentially to everything. And there are other connections that need to be included or to run in both directions.

Well, we have run out of our sources of guidance, both action selection paradigm tenets and design principles. Are we then finished? By no means. Our general architecture for a cognitive agent is still in its infancy. Much is missing.

Figure 10. A cognitive agent with learning added.


Our agent's motivations are restricted to drives and the goal trees that are grown from them. We have not even mentioned the goal generators that grew them. We also have not discussed other motivational elements, such as moods, attitudes, and emotions, that can influence action selection. For such a discussion, see the work of Sloman (1979, 1994; Sloman & Croucher, 1981). Perception can have a quite complex architecture of its own (Kosslyn & Koenig, 1992; Marr, 1982; Sloman, 1989), including workspaces and memories. Each sensory modality will require its own unique mechanisms, as will the need to fuse information from several modalities. Each of the deliberative mechanisms will have its own architecture, often including workspaces and memories. Each will have its own connections to other modules. For instance, some set of them may connect in parallel to perception and action selection (Ferguson, 1995). Similarly, each of the various learning mechanisms will have its own architecture, connecting in unique ways to other modules. The relationship between sensing and acting, where acting facilitates sensing, is not yet specified in the architecture. And the internal architecture of the action selection module itself has not been discussed. Finally, there is a whole other layer of the architecture missing, what Minsky (1985) calls the B-brain and Sloman (1995) calls the meta-management layer. This layer watches what is going on in other parts of our cognitive agent's mind, keeps it from oscillating, improves its strategies, and so forth. And after all this, are we through? No. There is Baars' (1988) notion of a global workspace that broadcasts information widely in the system and allows for the possibility of consciousness. There seems to be no end.

As you can see, the architecture of cognitive agents is the subject, not for an article, but for a monograph or a 10-volume set. Question (Q2) about the necessary elements of embodied architectures is not an easy one if, as I do, you take cognitive agents to be the proper objects of study for embodied AI.

A PARADIGM FOR EMBODIED AI

So, how should embodied AI research proceed? Here is a "concrete proposal":

Study cognitive agents. The contention here is that a holistic view is necessary. Intelligence cannot be understood piecemeal. That is not to say that projects picking off a piece of intelligence and studying its mechanisms are not valuable. They often are. The claim is that they are not sufficient, even in the aggregate.

Follow the CAAT strategy. Running the engineering loop and the science loop in parallel will enable the synergy between them. This will mean making common cause with cognitive scientists and cognitive neuroscientists.

Explore design space and niche space (Sloman, 1995). Strive to understand not only individual agent architectures but also the space of all such architectures. This means exploring, classifying, and theorizing at a higher level of abstraction. Each agent occupies a particular niche in its environment. Explore, in the same way, the space of such niches and the architectures that are suitable to them.

This paradigm should apply equally to physically embodied agents, to software agents "living" in real computational environments, and to artificial life agents "living" in artificial environments.

REFERENCES

Ackley, D., and M. Littman. 1992. Interactions between learning and evolution. In Artificial Life II, ed. C. Langton et al., 407-509. Redwood City, CA: Addison-Wesley.

Agre, P. E. (in press). The dynamic structure of everyday life. Cambridge: Cambridge University Press.

Albus, J. S. 1991. Outline for a theory of intelligence. IEEE Trans. Syst. Man Cybern. 21(3).

Albus, J. S. 1996. The engineering of mind. Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior: From Animals to Animats 4, Cape Cod, MA, September.

Baars, B. J. 1988. A cognitive theory of consciousness. Cambridge: Cambridge University Press.

Beer, R. D. 1990. Intelligence as adaptive behavior. Boston: Houghton Mifflin.

Brooks, R. A. 1990. Elephants don't play chess. In Designing autonomous agents, ed. P. Maes. Cambridge, MA: MIT Press.

Brooks, R. A. 1990a. A robust layered control system for a mobile robot. In Artificial intelligence at MIT, Vol. 2, ed. P. Winston. Cambridge, MA: MIT Press.

Brustiloni, J. C. 1991. Autonomous agents: Characterization and requirements. Carnegie Mellon Technical Report CMU-CS-91-204, Carnegie Mellon University, Pittsburgh.

Drescher, G. L. 1988. Learning from experience without prior knowledge in a complicated world. Proceedings of the AAAI Symposium on Parallel Models. AAAI Press.

Drescher, G. L. 1991. Made-up minds. Cambridge, MA: MIT Press.

Edelman, G. M. 1987. Neural Darwinism: The theory of neuronal group selection. New York: Basic Books.

Etzioni, O., and D. Weld. 1994. A softbot-based interface to the Internet. Commun. ACM 37(7):72-79.

Ferguson, I. A. 1995. On the role of BDI modeling for integrated control and coordinated behavior in autonomous agents. Appl. Artif. Intell. 9:000-000.

Foner, L. N., and P. Maes. 1994. Paying attention to what's important: Using focus of attention to improve unsupervised learning. Proceedings of the Third International Conference on the Simulation of Adaptive Behavior, Brighton, England.

Franklin, S. 1995. Artificial minds. Cambridge, MA: MIT Press.

Franklin, S. (unpublished). Coordination without communication. http://www.msci.memphis.edu/~franklin/coord.html

Franklin, S., and A. Graesser. 1997. Is it an agent, or just a program?: A taxonomy for autonomous agents. In Intelligent Agents III, eds. J. P. Müller, M. J. Wooldridge, and N. R. Jennings, 21-35. Berlin: Springer-Verlag.

Franklin, S., A. Graesser, B. Olde, H. Song, and A. Negatu. 1996. Virtual Mattie, an intelligent clerical agent. AAAI Symposium on Embodied Cognition and Action, Cambridge, MA.

Freeman, W. J., and C. Skarda. 1990. Representations: Who needs them? In Brain organization and memory: Cells, systems, and circuits, ed. J. L. McGaugh et al. New York: Oxford University Press.

Gibson, J. J. 1979. The ecological approach to visual perception. Boston: Houghton Mifflin.

Glenberg, A. M. (to appear). What memory is for. Behav. Brain Sci.

Harvey, I., P. Husbands, and D. Cliff. 1993. Issues in evolutionary robotics. In From animals to animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior, ed. J.-A. Meyer, H. L. Roitblat, and S. W. Wilson, 364-373. Cambridge, MA: MIT Press.

Hayes-Roth, B. 1995. An architecture for adaptive intelligent systems. Artif. Intell. 72:329-365.

Hofstadter, D. R., and M. Mitchell. 1994. The copycat project: A model of mental fluidity and analogy-making. In Advances in connectionist and neural computation theory, Vol. 2: Analogical connections, ed. K. J. Holyoak and J. A. Barnden, 31-112. Norwood, NJ: Ablex.

Horgan, T., and J. Tiensen. 1989. Representation without rules. Philos. Top. 17(Spring):147-174.

Horgan, T., and J. Tiensen. 1996. Connectionism and the philosophy of psychology. Cambridge, MA: MIT Press.

Jackson, J. V. 1987. Idea for a mind. SIGART Newslett., July, 101:23-26.

Johnson, M., and R. Scanlon. 1987. Experience with a feeling-thinking machine. Proceedings of the IEEE First International Conference on Neural Networks, San Diego, 71-77.

Kanerva, P. 1988. Sparse distributed memory. Cambridge, MA: MIT Press.

Kohonen, T. 1984. Self-organization and associative memory. Berlin: Springer-Verlag.

Kosslyn, S. M., and O. Koenig. 1992. Wet mind. New York: Free Press.

Laird, J. E., A. Newell, and P. S. Rosenbloom. 1987. SOAR: An architecture for general intelligence. Artif. Intell. 33:1-64.

Luck, M., and M. D'Inverno. 1995. A formal framework for agency and autonomy. Proceedings of the International Conference on Multiagent Systems, 254-260.

Maes, P. 1990. How to do the right thing. Connect. Sci. 1:3.

Maes, P. 1994. Agents that reduce work and information overload. Commun. ACM 37(7):31-40, 146.

Marr, D. 1982. Vision. San Francisco: W. H. Freeman.

Mataric, M. 1994. Learning to behave socially. In From animals to animats 3: Proceedings of the Third International Conference on Simulation of Adaptive Behavior, ed. D. Cliff, P. Husbands, J. A. Meyer, and S. W. Wilson, 432-441. Cambridge, MA: MIT Press.

Maturana, H. R. 1975. The organization of the living: A theory of the living organization. Int. J. Man-Machine Studies 7:313-332.

Maturana, H. R., and F. Varela. 1980. Autopoiesis and cognition: The realization of the living. Dordrecht: Reidel.

Mauldin, M. L. 1994. Chatterbots, tinymuds, and the Turing test: Entering the Loebner Prize competition. Proceedings of the 12th National Conference on Artificial Intelligence, 16-21. Seattle, WA.

Minsky, M. 1985. Society of mind. New York: Simon & Schuster.

Mitchell, M. 1993. Analogy-making as perception. Cambridge, MA: MIT Press.

Neisser, U. 1993. Without perception, there is no knowledge: Implications for artificial intelligence. In Natural and artificial minds, ed. R. G. Burton. Albany, NY: State University of New York Press.

Nolfi, S., and D. Parisi. 1995. Evolving non-trivial behaviors on real robots: An autonomous robot that picks up objects. Proceedings of the Fourth Congress of the Italian Association for Artificial Intelligence. Berlin: Springer.

Oyama, S. 1985. The ontogeny of information. Cambridge: Cambridge University Press.

Reeke, G. N., Jr., and G. M. Edelman. 1988. Real brains and artificial intelligence. Daedalus, Winter:143-173. Reprinted in The artificial intelligence debate, ed. S. R. Graubard. Cambridge, MA: MIT Press.

Riegler, A. 1994. Constructivist artificial life. In Proceedings of the 18th German Conference on Artificial Intelligence (KI-94) Workshop "Genetic Algorithms within the Framework of Evolutionary Computation." Max-Planck-Institute Report MPI-I-94-241.

Searle, J. 1980. Minds, brains, and programs. Behav. Brain Sci. 3:417-458.

Skarda, C., and W. J. Freeman. 1987. How brains make chaos in order to make sense of the world. Behav. Brain Sci. 10(2):161-195.

Sloman, A. 1979. Motivational and emotional controls of cognition. Reprinted in Models of thought, 29-38. New Haven, CT: Yale University Press.

Sloman, A. 1989. On designing a visual system: Towards a Gibsonian computational model of vision. J. Exp. Theor. AI 1(7):289-337.

Sloman, A. 1994. Computational modeling of motive-management processes. In Proceedings of the Conference of the International Society for Research in Emotions, Cambridge, ed. N. Frijda, 344-348. ISRE Publications.

Sloman, A. 1995. Exploring design space and niche space. Proceedings of the 5th Scandinavian Conference on AI, Trondheim, May 1995. Amsterdam: IOS Press.

Sloman, A., and M. Croucher. 1981. Why robots will have emotions. Proceedings of the Seventh International Joint Conference on AI, Vancouver.

Sloman, A., and R. Poli. 1995. SIM_AGENT: A toolkit for exploring agent designs. In Intelligent agents, Vol. II (ATAL-95), ed. M. Wooldridge et al., 392-407. Berlin: Springer-Verlag.

Smolensky, P. 1988. On the proper treatment of connectionism. Behav. Brain Sci. 11(1):1-74.

Song, H., S. Franklin, and A. Negatu. 1996. SUMPY: A fuzzy software agent. In Intelligent systems: Proceedings of the ISCA 5th International Conference, Reno, June 1996, ed. F. C. Harris, Jr., 124-129. International Society for Computers and Their Applications (ISCA).

Thomas, R. K. 1993. Squirrel monkeys, concepts and logic. In Natural and artificial minds, ed. R. G. Burton. Albany, NY: State University of New York Press.

Varela, F. J., E. Thompson, and E. Rosch. 1991. The embodied mind. Cambridge, MA: MIT Press.

Wooldridge, M., and N. R. Jennings. 1995. Agent theories, architectures, and languages: A survey. In Intelligent agents, ed. M. Wooldridge and N. Jennings, 1-22. Berlin: Springer-Verlag.

Yager, R. R., and D. P. Filev. 1994. Essentials of fuzzy modeling and control. New York: Wiley.
