For A “Cognitive Program” Explicit Mental Representations For Homo Oeconomicus

(The Case Of Trust)

Cristiano Castelfranchi Dep. of Communication Sciences

Project PAR 2002 - University of Siena &

Institute of Cognitive Sciences and Technologies – CNR - Roma [email protected]

1. Premise: For a "cognitive program"

1.1 Aims and claims
In this paper I will argue in favor of a richer, more complete, and also more explicit representation of the mind of a social actor or agent, in order to account for the economic, the strategic, and the organizational behavior of social actors. The classical rationality model is not enough: not only because - as is nowadays quite obvious - it is an Olympian and ideal model that should take into account the cognitive limits, biases, and mistakes of the human mind; not only because one should also consider the role of emotions in altering or in fostering rationality; but first of all because rationality is a specific (effective) way of mental working (justified belief acquisition, sound reasoning, grounded preferences and decision making), and mental working is based on some basic ingredients and principles that cannot be neglected. No simplified and formal theory of mind is tenable without some explicit account of the beliefs of the agent and of her motives and goals (including the beliefs and goals about the mind of the other). I will claim that subjective probability cannot replace an analytic account of the agent's beliefs and expectations. For example, a cognitive approach to trust - which is notoriously such a fundamental attitude in economics and in society - cannot be reduced to or replaced by the subjective expected probability of the favorable event. Analogously, utility value cannot reduce or substitute for the specific and multiple desires and goals that motivate and reward the agent. I claim that the "epistemic program" of Brandenburger (this volume), aimed at making explicit the player's beliefs about the game as part of the game, and Bacharach's mental "frames" (this volume), are very relevant but not sufficient. We need a broader "cognitive program" aimed at making explicit the players' goals and motives (and the perceived partner's motives) as part of the game they are playing. In fact, there is a "cognitive movement" within GT; but so far it privileges only the epistemic (beliefs and knowledge) aspects, while neglecting the other basic component of mind: goal-directed action, motives and objectives, which are the only ground of "utility". Utility is in fact just an abstraction, a quantitative measure of this qualitative aspect of mind. This "quality" - the explicit account of the multiple and specific goals of the agent (from which alone competition or cooperation among agents follows) - must be reintroduced into the very model of the economic or social actor's mind. The inputs of any decision process are multiple conflicting goals (and beliefs about conflict, priority, value, means, conditions, plans, risks, etc.). This need does not coincide with the very trendy issue of dealing with emotion in rational models. Moreover, the challenge is not that of renouncing principled, abstract and formal models in favor of some descriptive account of consumers', investors', and managers' behavior as derived from empirical data. In general, I am not in favor of empiricist and descriptive psychological and sociological accounts of human behavior without any theoretical generalization power. I am in favor of abstract, ideal-typical, formal models of mind, which are useful for several high-level theories in the social sciences; but those normative models can no longer be as simplistic and anti-cognitive as those usually assumed in game theory and in economics.
It is a wrong view, currently spreading in economics, that the only alternative is between an abstract, theoretical and formal model of mind (identified with the decision-theoretic model of rationality) and an experimental economics based on empirical findings and specific observational "laws" of human behavior. Other principled, predictive, theoretical and formal approaches to the human social mind - like those developed within Cognitive Science and AI - are possible and useful for theoretical explanation and modeling in the social sciences. Logic and computational modeling of mental content and processes provide us with the apparatus for more complex abstract top-down models of mind, while agent-based computer simulation of social phenomena provides an experimental basis for their validation.

I will also argue against a simplistic view that tries to add emotion to the reductive model of ‘rational mind’ to make it more human-like and realistic. I claim that a more articulated model of cognitive process is also needed for understanding the various ways in which emotion affects the decision process and behavior. The simpler the model of the decision process, the less articulated the ways in which emotion can affect it. In sum, a new complex but abstract cognitive “architecture” is needed both for a new micro-foundation of human social behavior and for dealing with emotion.

1.2. Blueprint
After a very brief introduction (which I believe to be useful for a reciprocal situating of Cognitive Science and Economics) about the current crisis of Cognitive Science and of Artificial Intelligence, and their new frontiers and trends, with special reference to what I call the new "social paradigm" in AI, I will discuss three main issues about the relationship between "rational" and "cognitive":

• The idea that there exists a variety of motives in the human actor that must be taken into account for explaining her behavior. If we do so, several claims about the "irrationality" of human subjects in economic or strategic decisions appear to be unjustified, while the rational decision-making model appears to be arbitrarily prescriptive of specific "rational motives".

• Although a variety of motives exists and explains so many presumed “deviations” from rationality, there also exists a variety of cognitive mechanisms (more or less “deliberative” or “reactive”) governing behavior. Thus, it is also true that humans do not always follow rational decision making mechanisms, and that other - adaptive - mechanisms (based on rules, routines, associations) must be modeled as well.

• The solution for a more adequate model of human decision does not lie just in putting some emotional distortion alongside the RD mechanism or bypassing it, or in some wordy magnification of how "rational" (adaptive) emotional impulses are. Such a juxtaposition or opposition of rationality and emotion is just a verbal solution. What is needed is an articulated model of the intertwining between the explicit deliberation process and the emotional process, and such an intertwining must be found in the broader model of the cognitive process on which both deliberation and emotion build. Also the typical economic approach to emotion - letting emotions contribute to the utility function while leaving the architecture untouched - is argued to be too conservative and insufficient.

• Finally, I will argue in favor of abstract, formal, ideal anti-empiricist models of mind, but against simplicity: for a "cognitive program" enriching the economic and strategic reasoning model.

In such a perspective, and also as an example of affective disposition:

• I will present our cognitive model of trust.

Unavoidably, the resulting paper will not be so compact, because I need to jump from one topic to another without a complete analysis or a gradual transition. However, the spirit and the message of the paper are very unitary; the mentioned issues are just converging supports, fulcra of one and the same view. The paper is organized in four main sections: The New Cognitive Science; Cognitive vs. Rational; Emotion is not the solution; For a cognitive view of trust.

2. The New Cognitive Science
Especially AI (but also CS in general) is emerging from a crisis: a crisis of grants, of prestige, and of identity. This crisis was not only due - in my view - to exaggerated expectations and the overselling of specific technologies (like expert systems) identified tout court with AI. It was due to the restriction of the cultural interests and influence of the discipline, and of its ambitions; to the dominance either of the logicist approach (identifying logics and theory, logics and foundations) (see the debate about 'pure reason' [McD87] and 'rigor mortis') or of a merely technological/applicative view of AI. New domains were growing as external and antagonistic to AI: neural nets, reactive systems, evolutionary computing, CSCW (computer supported cooperative work), cognitive modeling, etc. Hard attacks were made on the "classical" AI and CS approach: situatedness [Suc87], anti-symbolism, reactivity [Bro89] [Agr89], dynamic systems, bounded and limited resources, uncertainty, and so on (on the challenges to AI and CS see also [Tha96]). Will the "representational paradigm" - which characterized Artificial Intelligence (AI) and Cognitive Science (CS) from their very birth - be eliminated in the 21st century? Will this paradigm be replaced by a new one based on dynamic systems, connectionism, situatedness, embodiedness, etc.? Will this be the end of AI's ambitious project? I do not think so. Challenges and attacks on AI and CS have been hard and radical in the last 15 years; however, I believe that the next century will start with a renewed rush of AI, and we will not witness a paradigmatic revolution, with connectionism replacing cognitivism and symbolic models; emergentist, dynamic and evolutionary models eliminating reasoning on explicit representations and planning; neuroscience (plus phenomenology) eliminating cognitive processing; situatedness, reactivity, and cultural constructivism eliminating general concepts, context-independent abstractions, and ideal-typical models. I claim that the major scientific challenge of the first part of the century will be precisely the construction of a new "synthetic" paradigm: a paradigm that puts together, in a principled and non-eclectic way, cognition and emergence, information processing and self-organization, reactivity and intentionality, situatedness and planning, etc. [Cas98a] [Tag96]. In fact, by relaxing previous frameworks; by some contagion and hybridization; by incorporating some of those criticisms; by re-absorbing as its own descendants neural nets, reactive systems, evolutionary computing, etc.; by developing important internal domains like machine learning and distributed AI; by important developments in logics and in languages; and finally with the new successful "Agents" framework, AI is now in a revival phase. It is trying to recover all the original challenges of the discipline, its strong scientific identity, its cultural role and influence. Also CS is reacting to the reductionist and sub-symbolic attack. We may in fact say that there are already a neo-cognitivism and a new AI. In this new AI of the '90s, systems and models are conceived for reasoning and acting in open, unpredictable worlds, with limited and uncertain knowledge, in real time, with bounded (both cognitive and material) resources, interfering - either co-operatively or competitively - with other systems. The new watchword is interaction [Bob91]: interaction with an evolving environment; among several, distributed and heterogeneous artificial systems in a network; with human users; among humans through computers. What is growing is "social AI". The new AI and CS are - to me - only the beginning of a highly transformative and adaptive reaction to all those radical and fruitful challenges. They are paving the way for the needed future synthesis.

2.1 The synthesis
Synthetic theories should explain the dynamic and emergent aspects of cognition and symbolic computation; how cognitive processing and individual intelligence emerge from sub-symbolic or sub-cognitive distributed computation, and causally feed back into it; how collective phenomena emerge from individual action and intelligence and causally shape the individual mind in return. We need a principled theory able to reconcile cognition with emergence and with reactivity.

Reconciling "Reactivity" and "Cognition"

We shouldn't consider reactivity 1 as an alternative to reasoning or to mental states [Cas95] [Tag96]. A reactive agent is not necessarily an agent without mental states and reasoning. Reactivity is not equal to reflexes. Cognitive and planning agents also are, and must be, reactive (as in several BDI models). They are reactive not only in the sense that they can have some hybrid and compound architecture that includes both deliberated actions and reflexes or other forms of low-level reactions (for example, [Kur97]), but because there is some form of high-level cognitive reactivity: the agent reacts by changing its mind - its plans, goals, intentions. Also, Suchman's provocative claims against planning are clearly too extreme and false. In general, we have to bring all the anti-cognitive claims, applied to sub-symbolic or insect-like systems, to the level of cognitive systems.

Reconciling "Emergence" and "Cognition"
Emergence and cognition are not incompatible: they are not two alternative approaches to intelligence and cooperation, two competing paradigms. They must be reconciled:

- first, by considering cognition itself as a level of emergence: both as an emergence from the sub-symbolic to the symbolic (symbol grounding, emergent symbolic computation), and as a transition from objective to subjective representation (awareness) and from implicit to explicit knowledge;
- second, by recognizing the necessity of going beyond cognition, modeling emergent, unaware, functional social phenomena (e.g. unaware cooperation, non-orchestrated problem solving, and swarm intelligence) also among cognitive and planning agents. In fact, for a theory of cooperation and society among intelligent agents, mind is not enough [Con96]. We have to explain how collective phenomena emerge from individual action and intelligence, and how a collaborative plan can be only partially represented in the minds of the participants, with some part represented in no mind at all [Hay67].

Emergent intelligence and cooperation do not pertain only to reactive agents. Mind cannot understand, predict, and dominate all the global and compound effects of actions at the collective level. Some of these effects are self-reinforcing and self-organizing.

1 When behavior is governed not by represented, anticipated goals or by plans and intentions, but by simple peripheral reflexes or by simple "condition → action" rules, and the agent is able to respond in real time to environmental stimuli and unpredictable variations, for example by stopping and changing its running activity.

There are forms of cooperation that are not based on knowledge, mutual beliefs, reasoning, and constructed social structures and agreements. But what kind/notion of emergence do we need to model these forms of social behavior? The notion of emergence as merely relative to an observer (who sees something interesting, or some beautiful effect, looking at the screen of a computer running a simulation), or a merely accidental cooperation, is not enough for social theory or for artificial social systems. We need an emerging structure playing some causal role in the system's evolution/dynamics, not merely an epiphenomenon. Possibly we need even more than this: really self-organizing emergent structures. Emergent organizations and phenomena should reproduce, maintain, and stabilize themselves through some feedback: either through evolutionary/selective mechanisms or through some form of learning. Otherwise we do not have a real emergence of some causal property (a new complexity level of organization of the domain), but simply some subjective and unreliable global interpretation. This is true also among cognitive/deliberative agents: the emergent phenomena should feed back on them and reproduce themselves without being understood and deliberated [Cas01]. This is the most challenging problem in the reconciliation between cognition and emergence: unaware social functions impinging on intentional actions. AI can significantly contribute to solving the main theoretical problem of all the social sciences [Hay67]: the problem of the micro-macro link, the problem of theoretically reconciling individual decisions and utility with the global, collective phenomena and interests. AI can contribute uniquely to this crucial problem because it is able to formally model and to simulate, at the same time, the individual minds and behaviors, the emerging collective action, structure or effect, and their feedback that shapes minds and reproduces them.
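To make this micro-macro loop concrete, here is a toy agent-based simulation; all parameters and names are invented for illustration, not taken from the paper. A convention emerges from individual choices and then reproduces itself by shaping those same choices, without any agent deliberating or even representing it.

```python
import random

# Toy sketch (invented parameters): a convention emerges from individual
# choices and then reproduces itself by shaping those choices.

random.seed(1)
N, ROUNDS = 50, 30
choices = [random.choice("AB") for _ in range(N)]  # initial individual behavior

for _ in range(ROUNDS):
    # Macro level: the emergent pattern (the current majority behavior).
    majority = "A" if choices.count("A") >= N / 2 else "B"
    minority = "B" if majority == "A" else "A"
    # Micro level: each agent mostly conforms to what it observes
    # (one cannot profitably deviate alone), with a little noise.
    choices = [majority if random.random() < 0.95 else minority
               for _ in range(N)]

print(f"share following the convention: {choices.count(majority) / N:.2f}")
# One behavior takes over and stabilizes: an emergent structure playing
# a causal role in its own reproduction, not a mere epiphenomenon on
# the observer's screen.
```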

3. Cognitive vs. Rational

In this section I will discuss four main issues:

• What is “cognitive”

• A plurality of motives and a variety of mechanisms

• Is Emotion the answer?

• New formal models: against simplicity

3.1 What is Cognition and what are Cognitive Agents
"Cognition" is not equal to "knowledge" or "consciousness". Frequently enough, students from other disciplines - especially those acquainted with philosophy and not so acquainted with cognitive science - make mistakes of interpretation about what the terms "cognition" and "cognitive", or the term "mind" (which are basically used as synonyms), actually mean in the cognitive sciences. First of all, "cognitive" science is not the science only of the cognitive processes - language, perception, memory, attention, reasoning, etc. - i.e. of the processes of acquisition, elaboration, retrieval and use of knowledge. "Cognitive" and "cognitivism" refer more to a way of modeling and conceiving mind than to a specific part or aspect of mind. They consist in studying mind (all the psychic processes, from perception to motivation, from emotions to action) in terms of information processing. Second, neither cognition nor mind should be identified with consciousness. The great progress of cognitive studies has been due precisely to putting aside the problem of consciousness, to considering that it is not the central problem in modeling mental processes, and that 90% of the psychological machinery is unconscious - not in the Freudian sense: it is simply a very complex and rich information processing which is tacit, inaccessible to consciousness. The phenomenon of consciousness (if possible) should be modeled within this same framework, and not the other way around. 2 Third, mind is not necessarily rational, and "cognition" is not a synonym of rationality.

2 It is true that the cognitive paradigm is now quite late in accounting for consciousness, and that this is no longer a tenable position. This delay or inability is also due to the fact that the traditional cognitive approach has studied mind as a 'disembodied' entity, while probably several aspects of consciousness - in particular feeling, subjective experience, and the like - cannot be understood without a body and its "signals".

Rationality is a special way of working of the cognitive apparatus, in which beliefs are well grounded on sufficient and convincing evidence, inferences are not biased by distortions, wishful thinking, illusions or delusions, and decisions are based on those grounded beliefs and on a correct consideration of expected risks and advantages with their values. This is a very normative and ideal model; neither believing nor choosing necessarily and always conforms to such an ideal model (by the way, only 10% of human eyes conform to the 'normal' eye as presented in a handbook of ophthalmology). Cognitive agents are "belief-based goal-governed systems": 3

i. Cognitive agents have representations of the world, of the actions' effects, of themselves, and of other agents. Beliefs (the agent's explicit knowledge), theories (coherent and explanatory sets of beliefs), expectations, desires, plans, and intentions are relevant examples of these representations, which can be internally generated, manipulated, and subjected to inferences and reasoning.

ii. The agents act on the basis of their representations.

iii. Those representations play a crucial causal role: the action is caused and guided by them. The behavior of cognitive agents is a teleonomic phenomenon, directed toward a given result that is pre-represented, anticipated in the agent's mind. A cognitive agent is an agent who bases its goals, choices, intentions, and actions on what it believes; it exhibits "representation-driven behavior".

iv. The success (or failure) of their actions depends on the adequacy of their limited knowledge and on their (rational) decisions; but it also depends on the objective conditions, relations, and resources, and on unpredicted events. 4
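To fix ideas, here is a minimal, purely illustrative sketch of a "belief-based goal-governed" agent in the sense of points i-iv. Everything in it (class names, goals, values) is invented; it is a toy, not a proposed architecture.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str            # a specific, qualitative goal ("eat", "publish a book")
    value: float         # its importance for this agent
    active: bool = True  # goals are activated (or not) by the current context

@dataclass
class CognitiveAgent:
    beliefs: dict = field(default_factory=dict)  # explicit representations (i)
    goals: list = field(default_factory=list)    # motives, not an abstract "utility" (i)

    def act(self):
        """Action is caused and guided by the representations (ii, iii)."""
        candidates = [g for g in self.goals if g.active]
        if not candidates:
            return None
        # A choice procedure over goal values: a mechanism, not a motive.
        chosen = max(candidates, key=lambda g: g.value)
        # Success would still depend on the world, not only on the decision (iv).
        return f"pursue: {chosen.name}"

agent = CognitiveAgent(
    beliefs={"partner_is_trustworthy": True},
    goals=[Goal("publish a book", 0.8), Goal("eat", 0.5)],
)
print(agent.act())  # -> pursue: publish a book
```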

Given this view of cognition and of a cognitive agent, let me now turn to my argument that we need such a more analytical model of mind, in terms of explicit beliefs and also goals. The lack of this model - the use of an unarticulated model of mind that hides goals under "utility" and beliefs under a probability measure or some assumption of perfect knowledge - is evident in several situations. I will first consider some hasty and unjustified claims about the irrationality of real subjects when compared with the predictions of rational decision theory.

3.2 A plurality of motives and a variety of mechanisms: for a new micro-foundation
Five claims:

1. The RDT model should be an empty shell; it does not imply any specific motive.
2. Humans have other motives beyond "economic incentives".
3. However, the introduction of additional or specific motives does not change the RDT "mechanism".
4. Utility is not a motive (but it is misused as such).
5. Other architectures and mechanisms governing action are possible and real.

3 Notice that I use "goal" as the general family term for all motivational representations: from desires to intentions, from objectives to motives, from needs to ambitions, etc. By "sub-cognitive" agents I mean agents whose behavior is not regulated by an internal explicit representation of its purposes and by explicit beliefs. Sub-cognitive agents are, for example, simple neural-net agents or merely reactive agents.
4 These properties of the micro-level entities and of their actions have important consequences at the macro-level and for the emergence process. A cognitive frame and a cognitive view of agents and action do not entail any cognitive 'reductionism' or 'subjectivism' at the social layer, i.e., the reduction of social structures, social roles and organization, and social cooperation to the beliefs, the intentions, the shared and mutual knowledge and commitments of the agents. In such a view, any social phenomenon is represented in the agents' minds and consists of such representations. In our view, on the contrary (Conte and Castelfranchi 94; Castelfranchi, FUNCTIONS), a large part of the macro-social phenomena works thanks to the agents' mental representations but without being mentally represented. How is this possible? Collective action exploits "cognitive mediators" in the agents' minds. Social collective phenomena (like norms, values, functions, etc.) have "mental counterparts" which are not necessarily synonymous with a "cognitive representation" of them and awareness of them.

3.2.1 An empty shell
As I said elsewhere (Binmore et al.), correctly interpreted, classical rationality (rational decision theory) should say nothing about goals, motives, and preferences of the agents. It should be just an empty shell, a merely formal or methodological device to decide the best or a satisfying move, given a set of motives/preferences and their importance or order. Thus, being "rational" says nothing about being altruistic or not, about being interested in capital (resources, money) or in art or in affects or in approval and reputation! The instrumentalist, merely formal approach to rationality should not be mixed up with the substantialist view of rationality: instrumentalist rationality ignores the specific motives or preferences of the agents. On the one side, "utility" should not be conceived per se as a motive, a goal of the generic agent. Utility is just an abstraction relative to the "mechanism" for choosing among the real motives or goals of the agent (Conte and Castelfranchi, 1995; and section 3.2.5). On the other side, economic theory as such is not entitled to specify human motives. Although everybody (especially economists and game theorists) will say that this is obvious and well known, we have to be careful, since eventually they are likely to mix up the two things, and, by adopting a rational framework, we will covertly import a narrow theory of the agent's motivation, i.e. Economic Rationality, which is (normative) rationality + economic motives (profit) and selfishness. 5 Economists and game theorists are primarily responsible for such a systematic misunderstanding. 6

Even adopting a rational decision framework, we can postulate in our agents any kind of motive/goal we want or need: benevolence, group concern, altruism, and so on. This does not make them less rational, since rationality is defined subjectively! It might make them less efficient, less adaptive, less competitive, less "economically" rational, but not less subjectively rational. This distinction - always claimed to be obvious and always ignored - is to me orthogonal to the other distinction between Olympian, perfect, or normative rationality and Simon's limited and bounded rationality: it is not the same distinction. However, even when so "cleaned", decision-theoretic rationality is not necessary (it is not the only possible device) for rational or adaptive agents (for several reasons, not only because it needs to be bounded). 7

5 As Herbert Simon (1991) claimed: "The foundation stone of contemporary neo-classic economics is the hypothesis that economic actors are rational (i.e. perfectly adaptive), and that their rationality takes the form of maximising expected subjective utility. … 'Utility' means the actor's own ordering of preferences among outcomes, assumed to be consistent, but otherwise wholly arbitrary. Now how can we use this formula to predict behavior? … As yet we can conclude nothing about his behavior unless we know what he is trying to accomplish - what his utility function is. If we make the additional assumption that his utility is measured by his net profit, then we can instantly predict the quantity he will produce (the quantity that maximises the difference between total revenue and total costs). That sounds rather powerful and convenient. A description of the environment (the demand and cost schedules), an innocent assumption about motives (utility = profit) is all we need to know to predict behavior. No tiresome inquiries into the businessman's mental states or processes…" Notice, first, how Simon stresses that the assumption about motives and profit is an "additional" assumption; second, Simon's irony about the "innocence" of such an assumption.
6 Keynes, for example, blamed the economists for being "Benthamite" and for ignoring the real motives of the economic actors.
7 One should, for example, add to this:

• the context-dependent activation of goals and of knowledge: the agent should consider/use only the goals, the information, and the inferences pertinent to the current context and activated in it. This is not just an unfortunate limitation: it is usually adaptive and efficient.

Thanks to this situated activation, not all possible profitable investments (activities/goals) are considered: the choice is only among those of the agent's goals that are active in the specific situation the agent is involved in. I believe that this situated rationality is quite different from Simon's limited rationality, which refers to cognitive limitations and sub-ideal knowledge for rational choice. Moreover, rationality subordinated to and oriented by the achievement of specific motives (goal-directed or motivated rationality) is not the same as, and should not make the same predictions and produce the same behavior as, merely formal or instrumentalist rationality, which is oriented by the meta-goal (not a real motive) of maximizing utility. In the instrumentalist and economic perspective, one goal or another is all the same to me: I just choose those goals, I just allocate my effort and resources to those activities, that promise me the highest profit. In the other perspective, goals are not at all fungible with one another: the agent is interested in specific results (world states), it desires something, it is motivated to achieve a given goal. While in the first perspective the agent will examine all the possible goals it knows and will follow a resource-driven reasoning ("how can I best allocate my resources?"), in the second perspective it starts from goals (motives), it examines only the currently active goals ("how can I achieve as many of my goals as possible?"), and it searches not for all possible goals but for all possible means and resources for achieving them. One should provide models for this difference: several possible "rational" architectures, strategies, and agents, much richer than the classical Homo oeconomicus.
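A minimal, purely illustrative sketch of the contrast drawn in this footnote; all names, values, and payoffs are invented, and the two functions are toys, not models from the paper:

```python
# Contrast between the two choice styles described above (invented data).

goals = {"eat": 0.5, "publish": 0.8, "be_esteemed": 0.6}   # motives and their worth
active = {"eat", "publish"}          # only these are activated by the context
payoff = {"eat": 2.0, "publish": 1.0, "be_esteemed": 3.0}  # expected returns

def resource_driven_choice():
    """Instrumentalist style: scan ALL known goals, pick the most profitable."""
    return max(goals, key=lambda g: payoff[g])

def goal_driven_choice():
    """Motivated style: start from the currently ACTIVE goals and their value."""
    return max(active, key=lambda g: goals[g])

print(resource_driven_choice())  # -> be_esteemed (highest payoff overall)
print(goal_driven_choice())      # -> publish (most valued among active goals)
```

The two procedures can diverge, which is the point: the predictions of motivated rationality need not coincide with those of merely formal rationality.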

3.2.2 The presumed irrationality of human behavior
The fact that subjects' behaviors do not reflect the predictions of Rational Decision Theory does not prove that they are not following such a rule/mechanism (i.e., that the subjects are irrational). This is an unjustified abduction, a diagnostic mistake; other explanations are possible, and other evidence would be necessary for such a conclusion. In fact - as Simon has stressed - RDT cannot predict anything at all without assuming in the subjects some specific motivation (goals, preferences, subjective rewards). Thus, why not assume that we simply ignore the actual motives and rewards? As I said, utility is just an abstraction relative to the "mechanism" for choosing among the real motives or goals of the agent (section xx). Officially, Economics in fact considers RDT an empty shell, and motives are taken as "exogenous", external to the theory and model, which is just about a way of choosing among them - merely about a mechanism, a procedure - as it should be. Nevertheless, in practice, in a lot of cases, packaged with the model they sell a set of rational motives (sic!), i.e. pecuniary incentives. Otherwise, how could it be possible that they consider "irrational" the choice of a subject preferring nothing (or better: no money!) to 1 dollar in the ultimatum game, or the cooperative move of an agent in a non-iterated PD? And how could they consider irrational the so-called sunk cost bias, which is a bias only from a strictly economic point of view, but not at all when taking into account the actual range of motives of the manager? And how could they proclaim that voters are irrational - since their costs in voting are surely greater than their marginal contribution to the result of their party - prescribing in this way what are, or should be, the only concerns or goals of the voters! So, to tell the truth, frequently RDT is both a formal model for choice plus a set of presumed 'economic' or 'rational' motives.

The experience of being 'irrational' in the ultimatum game
I have personally been involved as a subject in a preliminary experiment with the ultimatum game, and I refused my 5 dollars, preventing the other from taking his 15 dollars! And so on. My opinion is that:

If the subjects believed that there were a merely random distribution of rewards, and that somebody would receive 1 dollar while somebody else would receive 9 or 99 dollars, etc., they would accept 1 dollar. But in a regular ultimatum game, the probability of a refusal increases with perceived unfairness: the larger the difference between what you take for yourself and what you offer to me, the greater the probability of refusing (the total amount being constant). The probability that I refuse 5 dollars is greater if you take 95 dollars than if you take 45, than if you take 15 dollars.

Other goals are actually involved: punishing the other, not being offended, fairness, justice; these can predict this (my) behavior. 8 Since there are other goals with their own value - beyond the 5 dollars - it may be that my decision was perfectly rational. Subjective motives cannot - in principle - be "irrational" (they might perhaps be unfortunate, noxious, unfit, not successful or adaptive for survival or reproduction), but not subjectively "irrational". Since those motivations are more important to me than money, I would be perfectly (subjectively) rational in refusing the dollars (the amount of refused dollars would be a sort of measure of the importance, the worth, of those goals for me). The objection to this argument is that "punishing" serves to teach and correct wrong behavior, and this is stupid in a one-shot game. My claim is that this long-term goal is not represented in my mind as an instrumental goal aimed at inducing the other to perform a different behavior (next time). That would really be an irrational plan in a one-shot game. My claim is that this is a top-goal (in our terminology, a terminal goal, not instrumental in my mind to any other goal, a real motive of mine). I would justify such a "stupid" top-goal or end in terms of Trivers' reciprocal biological altruism. One shouldn't consider only the opposition between one-shot vs. repeated games; the problem here is different. People claim that we are irrational because we play as a repeated game what is in fact a one-shot game; but the point is that we play a multi-agent game, and that this game has been played throughout our phylogeny!
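As a purely numerical illustration of this claim - with invented fairness weights, not data - once the punishment/fairness goal is given its own worth, refusing a small offer maximizes the subject's subjective payoff:

```python
# Invented illustration: the responder in an ultimatum game is not
# irrational once her subjective payoffs include a fairness/punishment
# goal. The weight 0.4 and the amounts are made up.

def subjective_payoff(offer, total=100.0, fairness_weight=0.4):
    """The game the subject is actually playing: refusing an unfair split
    achieves a punishment/fairness goal whose value grows with unfairness."""
    unfairness = (total - offer) - offer            # proposer's take minus mine
    fairness_value = fairness_weight * max(unfairness, 0.0)
    return {"accept": offer, "reject": fairness_value}

for offer in (5.0, 45.0):
    payoffs = subjective_payoff(offer)
    choice = max(payoffs, key=payoffs.get)
    print(f"offer={offer}: subjective payoffs={payoffs}, choice={choice}")
# offer=5.0  -> reject (fairness value 0.4*90=36 > 5): "irrational" only
#               relative to the official, money-only matrix
# offer=45.0 -> accept (fairness value 0.4*10=4 < 45)
```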

8 Of course, it is also reasonable to predict that the greater the absolute amount/value (5 dollars or 20 dollars), the less probable the refusal will be. I would like to know the interaction between these two factors.

Our goal can neither be judged subjectively "irrational" because it is an end and not a means 9, nor should it be judged objectively "irrational" - or better, non-adaptive or unsuccessful. In fact, I claim that A spends resources to punish B in order to (biological function, not mental plan!) make B behave correctly with C. 10

By claiming that in those experiments, and in real social situations, subjects are 'irrational', we are in fact simply prescribing the "rational motives" (sic!) they should have; which is unacceptable. Rationality is not a moral theory about human ends; it is a theory of effective means-end reasoning. In sum, a possible and less expensive explanation of those experiments is that the subjects - in their minds - have a different rewards matrix, because they are considering other rewards (they have other goals and values). Every time it is assumed in RDT or in GT that the player plays "rationally" (is rational), tacitly another assumption is made: the assumption that she cares only about money, about those rewards that are externally observable, established, and postulated by the observer/experimenter, and that there are no hidden, internal, personal or social rewards changing the matrix and thus the game. This arbitrary and hidden assumption is misleading. When observing people diverging from "rational" predictions, instead of primarily claiming that - plausibly - they take into account in their decisions non-official rewards and motives, and that there is a hidden matrix of another game that they are subjectively playing, the scholar maintains that they play the official game with the official rewards, and concludes that they play it wrongly, in a strange, irrational way. On the contrary, explicitly modeling the broad range of the players' motives (and rewards) as an essential part of the game should be the first move. Frequently there is independent empirical evidence about this; for example, subjects simply explain why, for which motives, taking into account which extra-model goals, they decided as they did.

3.2.3 "New micro-foundation": additional motives are not enough
As we have just said, the rational decision model, if correctly used, does not imply any specific motive; thus it is in principle compatible with any kind of motive: selfish or altruistic, external or internal rewards, economic, moral, aesthetic, social or whatever, personal and idiosyncratic or culturally shared. For several years, criticisms of economic models and reductionism have in fact focused on a limited view of human motives and incentives. This was, for example, Pizzorno's classical criticism in the '80s of the application of the economic view of man to the social and political sciences. But we have this kind of criticism also currently. In an important recent paper, Fehr and Falk (2001) reproach economists for the fact that they "tend to constrain their attention to a very narrow and empirically questionable view of human motivation." They claim that "powerful non-pecuniary motives like desire to reciprocate, desire to gain social approval, or intrinsic enjoyment in interesting tasks, also shape human behavior. By neglecting these motives economists may fail to understand the levels and the changes in behavior … [They] may even fail to understand the effect of economic incentives on behavior if they neglect these motives" (p. 1). In this perspective, Fehr and Falk explicitly recognize that together with RDT economists sell a theory of human motives, but they accept this as theoretically correct although empirically questionable and limiting. On the contrary, as I said, my claim is that:

- first, there is no reason in principle, in RDT, in Game Theory, or in general economic theory (see for example Lionel Robbins' definition), for restricting the economic model to economic incentives: this is a misuse of the theory itself, like the wrong identification of a "self-motivated" or "self-interested" agent with a "selfish" agent (see section xx);

- second, RDT and the economic, utilitarian view are compatible with any kind of incentives and motives.
This kind of criticism is thus not well addressed and is insufficient: a better theory of human individual and social behavior does not depend only on a better spectrum of human incentives. Analogously, Pizzorno's recent interesting attempt to find a different micro-foundation (a model of the agent/actor's mind) for the social sciences, different from RDT, looks unclear (Pizzorno, xxx). For a different micro-foundation, for changing the model of the actor's mind, it is not enough (it is not a real change of the RDT model) to postulate additional "values", as he suggests. This presupposes and accepts the unjustified theory or assumption that "rational motives" are an intrinsic part of RDT. In fact, Pizzorno seems to identify the search for a "new micro-foundation" of the social sciences with relevant individual pro-social motives like membership, identity, recognition, altruism, social responsibility, etc. But unfortunately this is not a new micro-foundation, simply because no motive can subvert the very model of the utilitarian economic man. A new micro-foundation necessarily requires (also) a different "mechanism" governing decision and action (Hardin).

9 On the contrary, an instrumental goal, a means, can be irrational when based on some irrational belief (ill-grounded, not justified, without evidence, implausible, or contradicting stronger beliefs), or when due to some wrong planning or reasoning process.
10 In fact, I interpret Trivers' reciprocal altruism not necessarily in the sense that A will be reciprocated by the same individual it benefited: there are indirect reciprocations and punishments.

For example, as we just saw, mechanisms that ground ritual or routine behavior or conformity, not involving true deliberation. I believe that both changes are necessary for a new micro-foundation of the social sciences, i.e. for a new abstract, normative model of a social mind:

- a broader and explicit account of motives (including pro-social ones);
- the inclusion of different mechanisms governing behavior, beyond explicit decision making, including the multi-faceted role of emotional processes in this.

3.2.4. Why rational (utilitarian) self-motivated agents are not necessarily "selfish"
The same misuse of the rational/economic model, with its implicit bundle of "economic incentives", is the cause of the confusion between self-interested or self-motivated agents and "selfish" ones. Self-interested or self-motivated agents are simply agents which have their own interests: they are not servomechanisms, not automatisms or tools simply working "for others"; they are "self-motivated", i.e., endowed with and guided by their own internal goals and motives (Castelfranchi, 1995). They can choose among those goals and active motives by some rational principle of optimizing expected utility on the basis of their (supported) beliefs. The fact that they are necessarily driven by their internal ends, taking into account the value of their own motives, does not make them "selfish" agents. This is a matter of motives, not of mechanisms. They may have any kind of motives: pecuniary or in general economic, or moral, aesthetic, pro-social and altruistic, in favor of the group and self-sacrificial, etc. They are selfish or not just on the basis of the specific motives they have and prefer, not on the basis of their being obviously driven by internal and also endogenous motives. From the architectural point of view, altruism is possible, and perhaps it is also real. As Seneca explained to us some years ago, it is also possible that a virtuous man acts in a beneficial or altruistic way while expecting (and even enjoying) internal or external approval, or for preventing feelings of guilt and regret; what matters is that such an expectation is not the aim, the motive of his action, that his action is not performed "for", "in view of", such a reward. We just need a cognitive-motivational architecture capable of such a crucial discrimination between expectations, rewards, and motives driving the action and motivating the choice. Even a rational architecture is compatible with such a sophistication (provided that it does not covertly entail selfish motives).

3.2.5. Cognition and RDT
In sum, the confrontation between the decision-theoretic model and human psychology does not, in my view, lead to a single answer: accept or reject RDT. There are more articulated responses.

• The first answer is that there are different goals, not necessarily different mechanisms.
Rationality should be motivationally empty, while usually it is not; frequently, by simply ascribing to the 'players' various and more realistic motives (internal and social rewards) we could account for their behavior, without postulating different decision mechanisms or irrationality. Players are not irrational: they are simply playing another game - in their mind.

• Second, there are also different decision mechanisms.
Psychological research shows that humans use "non-compensatory" mechanisms in several decisions, use different strategies, and have various decision attitudes and personalities, etc. (Bonini, ...). Moreover, emotion affects and changes the decision process (see later).

• Third, not only does the deliberation process not follow a single model, but often deliberation is bypassed altogether.
Not only 'decision' produces action: there are also other mechanisms that bypass a real deliberation process:
- reactivity and rule-based behavior (Anderson, in psychology; BDI models, in AI);
- emotional impulses leading to action (e.g. Loewenstein), see later;
- habits and script-based behavior; routines, practices and lock-in; social conformity and contagion.

As long as these kinds of mechanisms work, they are reinforced; only a serious failure could invalidate them. This holds at the individual level; at the collective level, as has been explained for conventions, one cannot deviate alone: thus individual behavior maintains the behaviors of the others and is reproduced by them.

3.3. Founding "utility"
Another distortion is in my view frequent, and this too is usually implicit. Utility and its maximization are used as a/the motive governing rational behavior: I choose this and this in order to, with the end, the aim, of maximizing subjective utility. This commonsensical view, too, creates a lot of problems and misunderstandings in the relationship between RDT and the theory of human motives, and in designing an appropriate architecture of the human mind. Let's explicitly deal with this hard issue.

Utility maximization is not a true motive; it is just an algorithm, a procedure for dealing with multiple goals. At best it is a meta-goal: a goal about dealing with goals.

3.3.1 Pseudo-goals
Behavioral mechanisms exist (for example, reactive ones such as reflexes and releasers) that do not imply a true internal (internally represented) goal, even though the behavior is directed towards the production of given results. It is "as if" the system were regulated by an explicitly represented goal, even though in fact there is no internal anticipatory representation of the result, nor decision, nor reasoning, nor planning for achieving it. Let us call these "as if" goals "pseudo-goals". A pseudo-goal is in fact an adaptive function external to the mind of the system, towards which behavior is directed without the goal being represented directly in the mind. The behavior is teleonomic and not random, although it is not deliberate and intentional. Pseudo-goals must not be confused with unconscious goals. In our opinion, in fact, a rigorous use of the term "unconscious" means that what is unconscious nevertheless resides in the mind, and is represented directly, in some format or other, in some memory. Pseudo-goals instead belong to the category of what is not in the mind. Of course, however, it is difficult, and sometimes empirically impossible, to determine whether a goal is unconscious or merely a pseudo-goal. In the following, several examples of pseudo-goals that are crucial for a theory of the mind are analyzed.

3.3.2 Reflexes as pseudo-goals
Let us examine, for instance, the flight reflex, and in particular the flight reflex of birds, in which the perception of the silhouette of a hawk moving above them triggers an immediate flight reaction: a key stimulus acts as the releaser of a relatively rigid behavior. Another example is a robot designed to intervene in the case of environmental disasters, which, as soon as its chemical sensors detect the presence of certain acids, breaks off its activity and withdraws precipitously. In the case of the birds it may reasonably be assumed that the purpose (the advantage) that selected the flight reaction is to avoid a predator: this is the goal, the aim of the behavior. But we have no reason to believe that the bird represents this to itself, that it wants to avoid being preyed upon and therefore decides to escape. As for the robot, we may believe that the goal of this inbuilt reaction is to avoid its costly apparatus being attacked and damaged by the acid. The goal was explicit in the mind of the designer, but not in that of the robot. And yet its behavior is directed towards this goal. These goals (evolutionary pressure; the designer's idea) are not represented internally in an explicit way; they do not regulate actions as set points of a feedback mechanism. A simple "production rule" replaces the internal goal mechanism of the TOTE unit, which underlies purposive behavior: a comparison between a condition and the world elicits the action. The equivalent of the goal is the functional effect of the action, that is, the effect for which it has been selected or designed.

3.3.3 Meta-goals as pseudo-goals
Pseudo-goals do not refer solely to the "low" levels of behavior (reflexes, instincts, and so on). They can also be related to the level of regulation that is structurally higher than the explicit goals and plans, i.e. the level of the "meta-goals", or what we term the constructive or regulatory principles of the system. Of this type is the "goal" of maximizing utility, or the goal of avoiding pain, or that of maintaining coherent knowledge. It is by no means necessary (as well as being very difficult) to formulate these purposive effects as explicitly represented goals on the basis of which the mind itself can reason and make plans. In our opinion it is sufficient to assert that these functional principles are only pseudo-goals or procedures:

- pseudo-goal of cognitive coherence: the mind acts "as if" it had the goal of maintaining its beliefs coherent, simply because if it recognizes a contradiction (condition), specific procedures (reaction) are implemented to eliminate it; it has the "goal" of being coherent (within certain limits) only as a function, i.e. only in the sense that it "is made in such a way that" it assures such an effect;

- pseudo-goal of avoiding suffering: likewise, the mind "has the goal" of avoiding suffering, in the sense that it embodies certain mechanisms liable to lead to reactions that avoid suffering. Such a mechanism could be the negative reinforcement of learning theories, or the "defense mechanism" postulated by psychoanalysis: if a belief is "painful", that is, it causes pain (condition), procedures (reactions) are implemented that are able to eliminate it or render it harmless. These mechanisms are purposive, although they are not directed by an internal goal. For the sake of descriptive convenience, we can say that the system "has the goal" of non-believing p (or of believing Not-p), while in actual fact a single pseudo-goal would be sufficient. This of course does not rule out the possibility that human beings are also capable of having the explicit internal goal of avoiding suffering and that accordingly they can reason, make plans and take decisions;

- maximizing or optimizing utility as pseudo-goal: we believe that the goal of assuring the best allocation of resources, of attaining the greatest possible number of goals and the highest value at the least cost, is likewise not a goal explicitly represented in the system and governing normal everyday choices. Individuals are not "economic agents"; they act with concrete and specific goals in mind (to be loved, to eat, to publish a book, to get married), i.e. a variety of heterarchic motives. They do not pursue a single totalizing goal, such as Profit (or pleasure). Of course it is true that (to the extent allowed by their limited knowledge) agents normally choose the most appropriate goal (action) from among those that are active and that cannot be pursued simultaneously. However, our argument is that this result is not necessarily guaranteed by an explicit goal of maximizing profit: it is enough to have a mechanism or procedure for choosing from among the active goals on the basis of their value coefficients. Cognitive agents do not have the explicit goal of choosing the most appropriate goal; rather, they are "constructed" (their selective apparatus is constructed) in such a way as to guarantee this result. They behave "as if" they had this goal.
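A toy sketch of this "as if" character, rendering the bird and robot examples of section 3.3.2 as a production system; all names and rules are invented. The behavior is purposive, yet no goal "avoid the predator" (and no goal "maximize anything") is represented anywhere in the system:

```python
# Illustrative pseudo-goal: purposive behavior with no represented goal.
# The rules are condition -> action pairs; the "goal" (avoid predation,
# protect the costly apparatus) exists only for the evolutionary or
# design process, not in the system's representations.

rules = [
    (lambda world: world.get("hawk_silhouette"), "flee"),      # the bird's reflex
    (lambda world: world.get("acid_detected"), "withdraw"),    # the robot's reaction
]

def react(world):
    """Fire the first rule whose condition matches the world; there is no
    deliberation, no anticipated result, no TOTE-style test of a set point."""
    for condition, action in rules:
        if condition(world):
            return action
    return "continue_activity"

print(react({"hawk_silhouette": True}))  # -> flee
print(react({}))                         # -> continue_activity
```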

4. Emotion is not the solution
Currently a simplistic view is spreading: that adding emotion to the reductive model of the 'rational mind' is the right and sufficient way to make it more human-like and realistic. I claim, on the contrary, that a more articulated basic model of the cognitive process is needed also for understanding the various ways in which emotion affects the decision process and behavior. The simpler the model of the decision process, the less articulated the ways in which emotion can affect it.

4.1 How economists account for emotions while leaving the rational architecture untouched
The classical way in which Economics accounts for emotions is this: an emotion (more precisely, feeling or experiencing an emotion) is something good or bad; it provides some positive or negative reward, i.e. it has some 'utility'. In this way emotion (the feeling of it) enters the subject's decision-making in a natural way, without changing the utility framework, just by stretching it a bit. The subject anticipates possible emotions: "if I do A (and the other responds with C) I will feel E'; if I do B (and the other responds with D) I will not feel E' (or I will feel E'')". This changes the value of move A or B, etc. This view characterizes, for example, the literature on the role of anticipated regret in decision-making; but the same solution can be applied to shame, pride, envy, anxiety, fear, or whatever (for a criticism of this approach see Loewenstein, in press). This approach - the most conservative of the traditional utility framework - greatly reduces the possible role and impact of emotions on decision-making. Not very different is the solution proposed within so-called "psychological game theory" (for example Geanakoplos, 1989; Rabin, 1998; Ruffle, 1999), where players' payoffs are "endogenized": on the basis of beliefs and expectations, emotions enter the individual's utility function by adding a "psychological" component to the "physical" (I would say the "public") component of the agent's overall payoff. Let me remark that this way of dealing with emotion is one and the same way of incorporating everything within the economic framework that I have already explained, supported, but also criticized.

The omnivorous economic framework
In this way one could theoretically account for the role of any given psychological, cultural, or moral factor F not included within the traditional model of economic rationality, by simply making F contribute to the agent's utility function, i.e. by inserting it within, and subordinating it to, the economic utilitarian frame. There are two ways:
- One, very conservative, makes any possible motive or incentive strictly economic.

For example, in practical economics, in order for the ‘rational’ decision maker to take pollution and the environment into account, economists propose to monetize them (via taxation and monetary sanctions, i.e. additional economic costs); otherwise they cannot be ‘calculated’ at all within the frame of economic rationality. The generalization of this is: any end can be taken into account, provided that it becomes an economic end, i.e. a given amount of income or cost.

- The other is to take any additional motive or incentive into account as contributing to the utility calculation, while leaving the basic mechanism untouched.

For example, one can deal with the agent’s pro-social goals, social sensibility and responsibility (i.e. her/his concern for the goals and advantages of the group or collective), altruism or moral ends, by simply adding such a goal S to the other goals and incentives of the individual. The consideration of S will modify the previous decision set. In fact S will have some value and, if achieved, will provide some reward; in such a way it smoothly enters the decision process. For example, the “good of the other” will be a goal S with some value V, and this value will be taken into account in evaluating the utility function.
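A minimal sketch of this ‘incorporation’ move may help; all names and numbers below are illustrative assumptions introduced here, not part of any cited model:

```python
# Illustrative sketch of the "omnivorous" utility move: any extra motive S
# (altruism, the good of the other, etc.) is handled by adding its weighted
# value to the agent's payoff. All names and values are hypothetical.

def utility(option, weights):
    """Overall payoff = 'public'/material component + incorporated motives."""
    u = option["material_payoff"]
    for motive, value in option["extra_motives"].items():  # e.g. S = good of the other
        u += weights.get(motive, 0.0) * value
    return u

options = [
    {"name": "selfish_act",    "material_payoff": 10.0, "extra_motives": {"good_of_other": -2.0}},
    {"name": "altruistic_act", "material_payoff":  4.0, "extra_motives": {"good_of_other":  8.0}},
]
weights = {"good_of_other": 1.0}  # the motive S enters as just another weighted term

best = max(options, key=lambda o: utility(o, weights))
print(best["name"])  # the mechanism is unchanged: still plain maximization
```

Note that the mechanism is left untouched: whatever motive S is added, the agent still simply maximizes one aggregate quantity.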

The serious drawbacks of such simple and obvious solutions are the following.
A) Either they mix up two importantly different things:
- the procedure and criteria for the decision (in case of scarce resources and competing goals), with
- the motivations and motives.
Motives, i.e. specific qualitative goals, are subordinated or simply erased within the economic framework, replaced by the abstract and quantitative notion of utility, which automatically mixes them up with the decision criteria. This makes it impossible to deal with several critical differences between concepts that are crucial for the understanding of human social behavior, like - as I have already shown - the difference between a selfish and an altruistic act, or the difference between a gift and a merchandise: both would have just one and the same aim, maximizing utility. On the contrary, the motives and goals of the agent - those for which I act, the objectives that I want to achieve (power, food, esteem, sex, …) - are one thing; the technique by which I choose among them is a different thing. My motive is the specific and qualitative goal; my aim is to realize it, not to optimize or maximize my utility.

B) Or they miss the opportunity of situating the role of rational reasoning and decision within a broader model of cognitive processing and emotional influence, doing instead the other way around: adding to, incorporating into, and modifying the RD model as the general and overall model, stretched like a Procrustean bed.

4.2. Decision and Emotion: a multiple solution in a more complex cognitive architecture

The real problem of accounting for emotion in (economic) decision and in behavior is not that of forcing everything, emotion included, into the current RD model, subordinating or “incorporating” (to use Rabin’s term - Rabin 1993) emotion within rationality and the utility function. The real challenge is that of modifying the basic model of mind and adopting a richer cognitive architecture. Again my view is rather obvious:

Sometimes there is in fact rational decision (although obviously within a limited and bounded rationality); sometimes there is a true decision based on explicit reasoning and an evaluation of pros and cons, etc., although defective and biased by all the well-known human biases in reasoning, estimation, risk perception, probability, and framing, and with emotions altering the cognitive processing; moreover, frequently it is a decision that uses different approaches and strategies (for example non-compensatory mechanisms) (cit). At other times there is no true decision at all; the action is not the result of a true deliberative process based on some prediction, evaluation, balance, etc. Either there is no real choice, or the choice is at a merely procedural level. Either the behavior is the result of following routines, procedures, prescriptions, habits, scripts - by conformity, imitation, recipes for already-solved problems: the subject does not face a ‘problem’, does not take a decision - or the behavior is the result of a simple reactive device: some ‘impulse’ elicited by an intense emotion, or some production rule (Condition → Action) that has been contextually activated and executed.
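A toy sketch of these three routes - reactive device, routine following, true deliberation - may make the distinction concrete; all names and structures are hypothetical, not the author’s model:

```python
# A toy sketch (an illustration only) of the three behavior-governing
# routes described above. All names are hypothetical.

def act(situation, reactive_rules, routines, goals):
    # 1) Reactive layer: a production rule (Condition -> Action) fires
    #    directly, with no prediction, evaluation, or balance.
    for condition, action in reactive_rules:
        if condition(situation):
            return action

    # 2) Procedural layer: an already-solved problem is handled by a
    #    routine/script; the subject does not face a 'problem' at all.
    if situation["type"] in routines:
        return routines[situation["type"]]

    # 3) Deliberative layer: a true (bounded) decision weighing pros
    #    and cons of the currently active goals.
    candidates = [g for g in goals if g["active"]]
    best = max(candidates, key=lambda g: g["value"] - g["cost"])
    return best["plan"]

# Example: a loud alarm triggers the reactive layer before any deliberation.
rules = [(lambda s: s.get("alarm", False), "escape")]
routines = {"commute": "take_usual_bus"}
goals = [{"active": True, "value": 5.0, "cost": 2.0, "plan": "work_on_paper", "type": "work"}]
print(act({"type": "work", "alarm": True}, rules, routines, goals))  # -> "escape"
```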

The plurality of these behavior-governing mechanisms should not lead to an eclectic jumble. What is needed is a new integrated model of mind, i.e. an “architecture” able to explain how all those layers and mechanisms compete and coordinate with each other. This is an important research trend in Cognitive Science. Moving in this direction, for example, are the new agent-based Artificial Intelligence, with its studies on agent architectures (like the Touring machine, xxxx; like the important area of BDI - Beliefs, Desires, Intentions - models: xxxxxxxxx), and Cognitive Science in the strict sense (see for example Strube, ). Economics should consider those studies and explore whether a layered architecture, or an architecture combining deliberative, emotional, and reactive mechanisms, would be reasonable for economic theory. Let us for example consider a typical cognitive science architecture as used by an economist: McFadden’s schema of the decision process (McFadden, forthcoming):

[Figure: “The Decision Process” (McFadden’s schema). Elements: Information, Perceptions/Beliefs, Attitudes, Affect, Motives, Preferences, the decision Process, Choice, and ACTION.]

As we can see, in this model “affect” affects the decision process both directly (by altering its correct procedure) and indirectly, by modifying perceptions and beliefs or by modifying motives (goals, with their values and urgency). The model is nice although rather vague: the conceptual and processing relationships between “attitudes”, “motives” and “preferences” are not so clear in the paper. The model is also incomplete, since it seems that action is always the result of a true decision based on beliefs and preferences. The possibility of undecided actions, of impulsive or merely reactive or automatic behaviors, is put aside. On the contrary, Loewenstein (in press; 1997), in a good and well-argued paper about decision and emotions, criticizes the merely anticipatory role assigned to emotions within the decision process and proposes a model where emotion directly leads to behavior. These theories are not in competition; they are complementary. They capture some real mechanisms of emotional interference; we just need a model for coherently assembling all those emotional effects.

4.3 How emotion changes the decision process

Emotion (E) enters the decision process in different ways. Let me synthesize the most relevant ones:

A) Altering decision resources, processes, and heuristics

a’) E shortcuts decision time: with urgency and short time the result of the decision is different;

a”) E modifies knowledge accessibility: some items become more accessible, others less so; this changes the information taken into account during the decision and thus its result;

a”’) E introduces several biases:
- frame effects, e.g. by focusing on negative or on positive consequences;
- modifying the thresholds of acceptable risk, or of needed trust;
- perceiving unachieved expected results as losses (prospect theory);
- altering the subject’s perception of probability and controllability;
- etc.

B) Altering goals, changing the decision set

As we said, one can describe the effect of emotion on decision in terms of the goals taken into account in the decision balance. This means taking into account additional incentives, since achieving a goal is a plus while frustrating it is a penalty. Emotion can modify the goals considered in three ways:

b’) Es can be goals per se; or better, feeling or not feeling an emotion can be a goal and have some utility;

b’’) E can simply activate new important or urgent goals and put them into the decision set; goals like ‘escaping’, ‘biting the other’, ‘kissing Mary’, etc. This will determine new priorities and preferences;

b”’) E alters the values of current goals (Lisetti, Gmytrasiewicz): some goals lose importance, others acquire new weight; priorities change.

C) Bypassing the decision process

Finally, E can directly elicit a behavior without any real decision and balance between different goals with their ‘reasons’: the simple activation of a fixed behavioral sequence (as in the horror response), or of a high-priority goal not subject to any deliberation but just “to be executed” (like: fire alarm → panic → escaping!).
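As a toy illustration of these three routes (A, B, C), consider the following sketch; the Emotion fields and all values are hypothetical, introduced here only to make the distinctions concrete:

```python
# Hypothetical sketch of the three emotional impacts just listed: (A)
# altering decision resources/accessibility, (B) altering the goal set
# and goal values, (C) bypassing deliberation. Field names are invented.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Emotion:
    impulse: Optional[str] = None       # (C) fixed behavioral sequence, if any
    activated_goals: list = field(default_factory=list)  # (B) new urgent goals
    value_shift: dict = field(default_factory=dict)      # (B) re-weighting of goals
    access_cutoff: float = 0.0          # (A) knowledge-accessibility threshold

def decide(goals, beliefs, emotion=None):
    """goals: list of (name, value); beliefs: list of (item, accessibility)."""
    if emotion is not None:
        # (C) Bypass: an intense emotion can directly elicit behavior.
        if emotion.impulse is not None:
            return emotion.impulse      # fire alarm -> panic -> escaping!
        # (B) New goals enter the decision set; current values are altered.
        goals = goals + emotion.activated_goals
        goals = [(n, v * emotion.value_shift.get(n, 1.0)) for n, v in goals]
        # (A) Less accessible items drop out of the information considered.
        beliefs = [(i, a) for i, a in beliefs if a >= emotion.access_cutoff]
    # A 'true' decision over whatever inputs survived (beliefs would be
    # used here to evaluate each goal; omitted for brevity).
    return max(goals, key=lambda g: g[1])[0]

fear = Emotion(activated_goals=[("escaping", 9.0)], value_shift={"eating": 0.2})
print(decide([("eating", 5.0)], [("noise_is_harmless", 0.4)], fear))  # -> "escaping"
```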

One might represent those different emotional impacts as follows:

[Figure: the different emotional impacts: E acting on BELIEFS, on ACTIVE GOALS, on the DECISION process, and directly on ACTION.]

This seems to be the minimal articulation of the model needed to account for the main impacts of emotions.

5 For a Cognitive View of Trust: Against Reduction or Elimination

Is trust - which notoriously is such a fundamental attitude in economics - just reducible to “subjective probability”, possibly derived simply from previous personal experience or some statistics?11

This is in fact the dominant tradition in Economics, Game Theory, part of Sociology [Gam-88; Col-94], and now in Artificial Intelligence and Electronic Commerce [Bra-99]. In this section I argue for the importance of a cognitive view of trust (an explicit, analytic and grounded view), in contrast with the merely quantitative and opaque view of trust supported by Economics and Game Theory. I argue in favor of a view of trust as a complex structure of beliefs and goals (in particular causal attributions, evaluations, expectations and risk acceptance), implying even that the trustier must have a “theory of the mind” of the trustee (possibly including personality, shared values, morality, goodwill, etc.) [Cas-98, Cas-99]. Such a structure of beliefs determines a “degree of trust” and an estimation of risk, and then a decision whether or not to rely on the other, which is also based on a personal threshold of risk acceptance/avoidance. In this section I use this mental model of trust for two claims. On the one side, I claim that there are several sources for the beliefs on which trust is based, and that the basis and the dynamics of trust cannot be reduced to reinforcement learning or probability updating on the basis of personal experience and personal interactions (although this is an important source) [Jon-99, Bis-99]. Trust beliefs come also from other sources: from observation, reasoning, social stereotypes, communication, the spreading of reputation, signs [Bac-99], etc. I discuss the relationship between trust in information sources and social trust in task delegation. On the other side, I argue against some anti-cognitive approaches to trust in economics and GT. While agreeing with Williamson [Wil-85] that one can/should eliminate the redundant, vague, and humanistic notion of ‘trust’ if it simply covers the use of subjective probability in decisions, I strongly argue against both this reduction and the consequent elimination.

5.1 Some anti-cognitive approaches to trust in Economics and GT

I will discuss here only two relevant positions about trust: one from the transaction cost school, the other more related to game theory. Doubtless the most important tradition of studies on trust is the “strategic” tradition, which builds upon rational decision and game theories to provide a theory of trust in conflict resolution, diplomacy, etc., and also in commerce, agency, and in general in economics. I will also discuss a more extreme (but coherent) position that denies the utility of the notion of trust in favor of subjective probability or risk.

5.1.1 Trust beyond the Prisoner’s Dilemma syndrome

Deutsch’s definition [Due] of trust in terms of expectations well represents the strategic tradition:

11 This section is basically derived from a paper with Rino Falcone (Castelfranchi & Falcone, 2000).

An individual may be said to have trust in the occurrence of an event if he expects its occurrence and his expectations lead to behavior which he perceives to have greater negative consequences if the expectation is not confirmed than positive motivational experiences if it is confirmed.

Although we agree about the importance of expectation in trust, let us notice that this definition completely ignores the evaluation component, which makes such an expectation reason-based. However, the most peculiar aspect of this definition is the arbitrary restriction of trust to situations where the risks are greater than the utility. This does not correspond to common sense and natural language, and it is not justified as a technical terminological decision by any heuristic advantage. In fact, several important examples and natural situations of trust would be excluded without any reason or advantage. What is really important in Deutsch’s analysis is the notion of vulnerability, the fact that the trustier is (and feels) exposed to danger; the idea that in trust there are necessarily risks and uncertainty. More than this: it is true that the very act of trusting and relying exposes one to risks. However, this is not due to a PD-like situation where defecting - when the other is cooperating - pays more than cooperating. It is much more general: by deciding to trust the other agent, I expose myself to risks because I decide to bet on the other (while if I do not bet on him, if I do not delegate, I do not risk).

In my view, Deutsch does not want to be generic12; he precisely intends to restrict trust to a special class of strategic situations. In fact, there are situations where in case of failure there are additional losses: not only are the invested resources (including time and thinking) lost and the motivating goals unachieved, but some other goal is damaged. Consider for example a failure that implies shame and bad reputation. PD is an example of such situations, since the damage of relying and failing (x's cooperation and y's defection) is greater than the damage of not delegating at all (x's non-cooperation). Obviously, the greater the damage in case of failure, the greater the risk. But trust applies to any risky situation, i.e. to any uncertain decision and plan, not only to the very unbalanced situation with additional risks. For sure, the greater the perceived risks, the greater the trust needed in order to trust; but even decisions with small risks require some trust. Trust is involved in usual, everyday decisions. Deutsch wants trust to be special and outside rational decisions: following rational decision criteria one shouldn't rely on, shouldn't bet on that course of events; if one does, it is just because of trust (see later). There is a correct and important intuition in Deutsch that should be accounted for (the idea that the greater the perceived risk, the greater the trust needed in order to trust); but why make this relationship discontinuous (there is trust only if the risk is greater than the utility)? Trust (or better, the degree of trust) is a continuous dimension; an agent can trust an event even if the resulting utility is greater than the connected risk (undoubtedly, the needed trust is not as big as in the opposite case). Only the decision to trust (or better, to delegate a task, see [Cas-98b]) is discrete: either trust is sufficient or it is not; either I (decide to) trust or not. Moreover, trust can be irrational, but it is not necessarily so. Notice that if trusting were always non-rational, when and how would trust be "insufficient" to rely on? About this question, in our model (see later) there are both a ratio between utility and risk (which makes the decision rational or not) and an idiosyncratic factor of risk avoidance or acceptance that makes the degree of trust individually sufficient or not to trust [Cas-99]. An analogous view of trust is found in the conclusion of Gambetta's book [Gam-90] and in [Bac, Bac-99]: “In general, we say that a person ’trusts someone to do α’ if she acts on the expectation that he will do α when two conditions obtain: both know that if he fails to do α she would have done better to act otherwise, and her acting in the way she does gives him a selfish reason not to do α.” Also in this definition we can recognize the ‘Prisoner’s Dilemma syndrome’ that gives an artificially limited and quite pessimistic view of social interaction. In fact, X, by trusting the other, makes herself ‘vulnerable’; in other terms, she gives the other the opportunity to damage her. As we just said, this is true, but she does not necessarily give him a motive, a reason for damaging her (on the contrary, in some cases trusting someone represents an opportunity for the trustee to show his competencies, abilities, willingness, etc.) (Castelfranchi & Falcone, dynamics). It is not the case that there is trust only if trusting y makes it convenient for him to disappoint the trustier’s expectation.
Perhaps the trustier’s trusting him gives him (the trustee) a reason and a motive for not disappointing the trustier’s expectation; perhaps the trustier’s delegation makes the expected behavior convenient for the trustee himself; it could create an opportunity for cooperation on a common goal. Trust continues to be trust independently of whether it is convenient for the trustee to disappoint the trustier. Of course, there could always be risks and uncertainty, but not necessarily a conflict in the trustee between selfish interest and broader or collective interests. If this were true, there would be no trust in

12 I do not believe that Deutsch had in mind the general fact that the failure of whatever action or plan (including a delegation) results not only in the unfulfilled goal, in the unrealized expected utility, but also in some loss (the invested resources and the missed opportunities). If we consider the negative utility in case of failure as equivalent to the expected benefit in case of achievement (psychologically this is not true: negative outcomes are perceived as more important than the corresponding positive ones - prospect theory), then, given the losses (costs), it is always true (in any decision and in any action) that the negative outcome of failure is greater than the positive outcome of success.

strict cooperation based on a common goal, mutual dependence, a common interest in cooperating, and a joint plan to achieve the common goal [Con-95]. On the contrary, there is trust in any joint plan, since the success of the trustier depends on the action of the trustee, and vice versa, and the agents are relying on each other. The strategic view of trust is not general; it is an arbitrary and unproductive restriction. It is interested only in those situations where additional (moral or contractual) motivations, additional external incentives, are needed; while in several cases intentions, intention declarations, esteem, and goodwill are enough. The strategic view - so strictly based on a specific type of subjective utility - also exposes itself to a serious attack aimed at the elimination of the notion of trust. We will now see this, and how a cognitive analysis of trust resists it.

5.1.2. Against eliminativism: in defense of (a cognitive theory of) trust

The traditional arrogance of economics, and its attempt to colonize social theory with its robust apparatus (political

theory, theory of law, theory of organizations, theory of family, etc.13

) coherently arrives - on the field of 'trust' - at a ‘collision’ [Wil-85] with the sociological view. The claim is that the notion of 'trust', when applied to the economic and organizational domain or, in general, to

strategic interactions, is just a common-sense, empty term without any scientific added value14

; and that the traditional notions provided by transaction cost economics are more ‘parsimonious’ and completely sufficient for accounting for and explaining all those situations where lay people (and sociologists) use the term 'trust' (except for very special and

few personal and affective relationships15

). The term trust is just for suggestion, for making the theory more ‘user-friendly’ and less cynical. It is just ‘rhetoric’ when applied to commerce16, but it explains nothing about its nature, which is and must be merely ‘calculative’ and ‘cynical’17.

On the one side, we should say that Williamson is pretty right: if trust is simply subjective probability, or if what is useful and interesting in trust is simply the (implicit) subjective probability (as in the definitions used in Gambetta’s book [Gam-88] - see note 18 - and in the game-theoretic and rational-decision use of trust), then the notion of trust is

13 In his section on ‘Economics and the Contiguous Disciplines’ (p. 251) [Wil-85] Williamson himself gives examples of this in law, political science, and sociology.
14 ‘There is no obvious value added by describing a decision to accept a risk (...) as one of trust’ [Wil-85, p.265]. ‘Reference to trust adds nothing’ [Wil-85, p.265].
15 ‘(...) trust, if it obtains at all, is reserved for very special relations between family, friends, and lovers’ [Wil-85, p.273].
16 ‘I argue that it is redundant at best and can be misleading to use the term “trust” to describe commercial exchange (...) Calculative trust is a contradiction in terms’ [Wil-85, p.256]. ‘(...) the rhetoric of exchange often employs the language of promises, trust, favors, and cooperativeness. That is understandable, in that the artful use of language can produce deals that would be scuttled by abrasive calculativeness. If however the basic deal is shaped by objective factors, then calculativeness (credibility, hazard, safeguards, net benefits) is where the crucial action resides.’ [Wil-85, p.260]. ‘If calculative relations are best described in calculative terms, then the diffuse terms, of which trust is one, that have mixed meanings should be avoided when possible.’ [Wil-85, p.261]. And this does not apply only to the economic examples but also to the apparent exception of ‘the assault girl (...) [which] I contend is not properly described as a condition of trust either’ [Wil-85, p.261]. This example, which is ‘mainly explained by bounded rationality - the risk was taken because the girl did not get the calculus right or because she was not clever enough to devise a contrived but polite refusal on the spot - is not illuminated by appealing to trust’ [Wil-85, p.267].
17 ‘Not only is “calculated trust” a contradiction in terms, but user-friendly terms, of which “trust” is one, have an additional cost. The world of commerce is reorganized in favor of the cynics, as against the innocents, when social scientists employ user-friendly language that is not descriptively accurate - since only the innocents are taken in’ [Wil-85, p.274]. In other words, “trust” terminology edulcorates and masks the cynical reality of commerce. Notice how Williamson is here quite prescriptive, and neither normative nor descriptive, about the real nature of commerce and of the mental attitudes of real actors in it.

redundant, useless and even misleading. On the other side, the fact is that trust is not simply this, and - more importantly - what in the notion of trust is useful for the theory of social interactions is not only subjective probability. Not only is Williamson assuming a prescriptive rather than a scientific, descriptive or explanatory attitude; he is simply wrong in his eliminativist claims. And he is wrong even about the economic domain, which in fact is, and must obviously be, socially embedded. Socially embedded does not only mean - as Williamson claims - institutions, norms, culture, etc.; it also means that economic actors are fully social actors and that they act as such also in economic transactions, i.e. with all their motives, ideas, relationships, etc., including the trust they have or do not have in their partners and in the institutions.

The fact that he is unable to see what 'trust' adds to the economic analysis of risk18, and that he considers those terms equivalent, simply shows how unable he is to take into account the interest and the contribution of cognitive theory. Risk is just about the possible outcome of a choice, about an event and a result; trust is about somebody: it mainly consists of beliefs, evaluations, and expectations about the other actor, his capabilities, self-confidence, willingness, persistence, morality (and in general motivations), goals and beliefs, etc. Trust in somebody basically is (or better, at least includes and is based on) a rich and complex theory of him and of his mind. Risk is just risking “that”; trust is trusting Y, or trust in Y, not only trusting “that”. Conversely, distrust or mistrust is not simply a pessimistic estimate of probability: it is diffidence, suspicion, and negative evaluations relative to somebody. Williamson’s claim about parsimony, sufficiency, and the absence of ‘added value’ is quite strange from a methodological point of view. In fact, a given description of X is parsimonious and adequate, sufficient or insufficient, only relative to the given purposes the description is for. He should at most claim that, for the purposes of economic analysis, the transaction cost framework is necessary and sufficient and that 'trust' does not add anything relevant for the economic perspective (it is just cosmetic bla-bla). But this is not his claim. His claim pretends to be general, to provide the correct and sufficient interpretation of the situations. In fact he borrows the examples he analyses from sociology, and he does not concede that analyzing those situations in terms of trust would add something relevant at least for social or cognitive theory! (This is why we used the term 'arrogance' about economics.) On the contrary, I claim that analyzing trust, and analyzing those situations in terms of trust, is absolutely necessary for modeling and explaining them from a psychological, anthropological or sociological scientific perspective. The richness of the mental ingredients of trust cannot and should not be compressed simply into the subjective probability estimated by the actor for his decision. But why do we need an explicit account of the mental ingredients of trust?

5.1.3 Why probability is not enough

Trust cannot be reduced to a simple and mysterious index of probability, because agents’ decisions and behaviors depend on specific, qualitative evaluations and mental components. More precisely, we need an explicit account of the mental ingredients of trust (beliefs, evaluations, expectations, goals, motivations, a model of the other) for at least the following reasons:

18 Section 2 starts with ‘My purpose in this and the next sections is to examine the (...) “elusive notion of trust”. That will be facilitated by examining a series of examples in which the terms trust and risk are used interchangeably - which has come to be standard practice in the social science literature - (...)’. The title of section 2.1 is in fact ‘Trust as Risk’. Williamson is right in the last claim. This emptying of the notion of trust is not only his own aim; it is quite traditional in sociological and game-theoretic approaches. For example, in the conclusions of his famous book [Gam-88] Gambetta says: ‘.. When we say we trust someone or that someone is trustworthy, we implicitly mean that the probability that he will perform an action that is beneficial or at least not detrimental to us is high enough for us to consider engaging in some form of cooperation with him’ [Gam-88, p.217]. What is dramatically unclear in this view is what “trust” explicitly means! In fact, the expression cited by Williamson (the ‘elusive notion of trust’) is from Gambetta. Williamson’s objective is the elimination of the notion of trust from economic and social theory (it can perhaps survive in the social psychology of interpersonal relationships): ‘The recent tendency for sociologists [the attack is mainly on Coleman and Gambetta] and economists alike to use the term “trust” and “risk” interchangeably is, on the arguments advanced here, ill-advised’ [Wil-85, p.274].

• First, because otherwise we will be able neither to explain nor to predict the agent’s risk perception and decision. The subjective probability estimate is a consequence of the actor’s beliefs and theories about the world and the other agents. For example, internal versus external attribution of risk/success (§5.2.2), or a differential evaluation of the trustee’s competence vs. willingness, make very different predictions both about the trustier’s decisions and about possible interventions and cautions.

• Second, consider the very fundamental issue of how to generalize trust from one task, service or good to another, or from one agent to another. If trust is just a probability, and we have no model of the cognitive bases of such a subjective estimation/expectation (except perhaps simply a frequency in previous experience19 and learning); if the expectation is not based on some mental representation of the reasons, signs, and models on whose basis x expects that y will do well in task1, how can x generalize from task to task, from agent to agent? On the contrary, this is a usual procedure in human social reasoning, in organizations, and in business. Consider for example how a famous industrial or commercial brand that wants to change its product can predict whether its clients will trust the new product under the old brand. This is a major issue in the theory of consumer trust. Only a model of the mental bases of consumers’ trust in a given brand, of the meaning of such a brand and of the signs that consumers take into account, only a theory of the possible analogical displacement of this mental representation, can provide such a prediction.

• Third, without an explicit theory of the cognitive bases of trust, any theory of persuasion/dissuasion, influence, signs and images for trust, deception, reputation, etc. is not 'parsimonious': it is simply empty. Let us take an example of Williamson’s (a girl under risk of assault) and suppose that she is Mr. W’s daughter D, that Mr. W is an anxious father, and that he also has a son at the same school as the guy G accompanying the girl. Will he ask his son “What is the probability that G assaults your sister D?” I do not think so. He will ask his son what he knows about G, whether he has evaluations/information about G’s education, his character, his morality, etc. And this is not for rhetoric or for using a friendlier notion; it is because he is searching for some specific and meaningful information able to ground his prediction/expectation about the risk. Now, what is the relation between this information about G and the estimated risk or probability? Is Williamson’s theory able to explain and predict this relation? In his framework, subjective probability and risk are unprincipled and ungrounded notions. What the notion of trust (its cognitive analysis) adds to this framework is precisely an explicit theory of the ground and (more or less rational) support of the actor’s expectation, i.e. the theory of a specific set of beliefs and evaluations about G (the trustee) and about the environmental circumstances, and possibly even of the emotional appraisal of both, such that an actor makes a given estimation of the probability of success or failure, and decides whether or not to rely and depend on G. Analogously, what can one do, in Williamson’s framework, to act upon the probability (either objective or subjective)? Is there any rational and principled way? Mr. W can just touch wood or perform exorcisms to try to modify this magic number of the predicted probability. Why and how should, for example, information about ‘honesty’ change my perceived risk and my expected probability of an action of G? Why and how should, for example, training, friendship, promises, a contract, norms20, or control, and so on, affect (increase) the probability of a given successful action and my estimate of it?

In the economic framework, first, we can account for only a part of these factors; second, this account is quite incomplete and unsatisfactory. We can account only for those factors that affect the rewards of the actor and thus the probability that he will prefer one action to another. Honor, norms, friendship, promises, etc. must be translated into positive or negative ‘incentives’ on choice (for example, to cooperate vs. to defect). This account is very reductive. In fact, the theory does not let us understand how and why a belief (information) about the existence of a given norm or control, or of a given threat, can generate a goal of G's and eventually change his preferences. Notice, on the contrary, that our predictions and our actions of influencing are precisely based on a ‘theory’ of this, on a ‘theory’ of G’s mind and of the mental processes beyond and underlying ‘calculation’. Calculation is not only institutionally but also cognitively embedded and justified! Other important aspects remain completely outside the theory: for example, the ability and self-confidence of Y (the trustee), the actions for improving them (for example, training) and for modifying the probability of success, or the actions for acquiring information about this and increasing the subjective estimated probability. Trust is also this: beliefs about

19 Notice that usually this experience would be absolutely insufficient for any reasonable probability estimation.
20 How and why ‘regulation can serve to infuse trading confidence (i.e. trust!!) into otherwise problematic trading relations’, as Williamson reminds us by citing Goldberg and Zucker (p. 268).

Y’s competence and level of ability, and his self-confidence. And this is a very important basis for predicting and estimating the probability of success or the risk of failure. For the traditional economic perspective all this seems both superfluous and naive (non-scientific, rhetorical): common-sense notions. This perspective does not want to admit the insufficiency of the economic theoretical apparatus and the opportunity of its cognitive completion.21

5.2. A Cognitive Analysis of Trust

Let us briefly introduce our cognitive analysis of trust (for a more complete presentation see [Cas-98a, Cas-98b, Cas-98c, Cas-99]). In our model we specify which beliefs and which goals characterize x’s trust in another agent y.

5.2.1. Beliefs on which Trust consists

Only a cognitive agent can “trust” another agent; we mean: only an agent endowed with goals and beliefs. First, one trusts another only relative to a goal, i.e. for something s/he wants to achieve, that s/he desires. If x does not have goals, she cannot really decide, nor care about something (welfare): she cannot subjectively “trust” somebody. Second, trust itself consists of beliefs. Trust basically is a mental state, a complex mental attitude of an agent x towards another agent y about the behavior/action α relevant for the result (goal) g.

• x is the relying agent, who feels trust (the trustier); she is a cognitive agent endowed with internal explicit goals and beliefs;

• y is the agent or entity which is trusted (trustee); y is not necessarily a cognitive agent (in this paper, however, we will consider only cognitive agents). So

• x trusts y “about” g/α (where g is a specific world state, and α is an action that produces that world state g) and “for” g/α; x also trusts “that” g will be true.

Since y’s action is useful to x, and x is relying on it, this means that x is “delegating” some action/goal in her own plan to y. This is the strict relation between trust and reliance or delegation: trust is the mental counterpart of delegation. We summarize the main beliefs in our model (their relationships are better explained in [Cas-99]):

1. "Competence" Belief: a positive evaluation of y is necessary, x should believe that y is useful for this goal of hers, that y can produce/provide the expected result, that y can play such a role in her plan/action, that y has some function. 2. “Disposition” Belief: Moreover, x should believe that y is not only able to perform that action/task, but y will actually do what x needs. With cognitive agents this will be a belief with respect to their willingness: this make them predictable. 3. Dependence Belief: x believes -to trust y and delegate to y - that either x needs it, x depends on it (strong dependence), or at least that it is better for her to rely than not to rely on y (weak dependence). 4. Fulfillment Belief: x believes that g will be achieved (thanks to y in this case). This is the "trust that" g. 5. Willingness Belief: I believe that y has decided and intends to do α. In fact for this kind of agent to do something, it must intend to do it. So trust requires modeling the mind of the other. 6. Persistence Belief: I should also believe that y is stable enough in his intentions, that y has no serious conflicts about α (otherwise y might change his mind), or that y is not unpredictable by character, etc. 7. Self-confidence Belief: x should also believe that y knows that y can do α. Thus y is self-confident. It is difficult to trust someone who does not trust himself!

We can say that trust is a set of mental attitudes characterizing the mind of a “delegating” agent, who prefers that another agent do the action; in “social” trust y is a cognitive agent, so x believes that y intends to do the action and that y will persist in this because of his motives, character, and context.

5.2.2. Internal versus external attribution of Trust

We should also distinguish between trust ‘in’ someone or something that has to act and produce a given performance thanks to its internal characteristics, and the global trust in the global event or process and its result, which is also affected by external factors like opportunities and interferences.

21 Both the reductionist and the eliminativist positions are inadequate even within the economic domain itself, not only because of the growing interest in economics in more realistic and psychologically based models of the economic actor [Ter-98], but because the mental representations of the economic agents and their images are - for example - precisely the topic of marketing and advertising (which we suppose have something to do with commerce). There is in fact in organizational studies and in marketing some richer view of trust (CIT….).

Trust in y (for example, ‘social trust’ in the strict sense) seems to consist of the first two prototypical beliefs/evaluations we identified as the basis for reliance: ability/competence and disposition. The evaluation of opportunities is not really an evaluation of y (at most, the belief about y’s ability to recognize, exploit and create opportunities is part of our trust ‘in’ y). We should also add an evaluation of the probability and consistency of obstacles, adversities, and interferences. We will call this part of global trust (the trust ‘in’ y relative to its internal powers - both motivational powers and competence powers) internal trust. This distinction between internal and external attribution is important for several reasons:
• To better capture the meaning of trust in several common-sense and social science uses.
• To understand the precise role of that nucleus of trust that we could describe in terms of “harmlessness”, a sense of safety, a perception of goodwill.
• To better understand why trust cannot simply be reduced to and replaced by a probability or risk measure.
Trust can be said to consist of, or rather to (either implicitly or explicitly) imply, the subjective probability of the successful performance of a given behavior α, and it is on the basis of this subjective perception/evaluation of risk and opportunity that the agent decides whether or not to rely and bet on y. However, the probability index is based on, and derived from, those beliefs and evaluations. In other terms, the global, final probability of the realization of the goal g, i.e. of the successful performance of α, should be decomposed into the probability of y performing the action well (which derives from the probability of willingness, persistence, engagement, and competence: internal attribution) and the probability of having the appropriate conditions (opportunities and resources: external attribution) for the performance and for its success, and of not having interferences and adversities (external attribution). Why is this decomposition important? Not only for cognitively grounding such a probability (which after all is ‘subjective’, i.e. mentally elaborated) - and this cognitive embedding is fundamental for relying, influencing, persuading, etc. - but also because:

a) the agent’s trusting/delegating decision might be different given the same global probability or risk, depending on its composition;
b) the composition of trust (internal versus external) produces completely different intervention strategies: manipulating the external variables (circumstances, infrastructures) is completely different from manipulating the internal parameters.
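In notation, the decomposition just described might be written as follows (the symbols are introduced here only for illustration; this is not a formula taken from [Cas-99]):

```latex
% Decomposition of the global probability of attaining g through y's
% action alpha (illustrative notation):
P(g) \;=\;
\underbrace{P(\text{$y$ performs $\alpha$ well})}_{\text{internal: willingness, persistence, competence}}
\;\times\;
\underbrace{P(\text{favorable conditions, no interferences})}_{\text{external: opportunities, resources, adversities}}
```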

If there are adverse environmental or situational conditions, your intervention will consist in establishing protection conditions and guarantees, in preventing interferences and obstacles, in establishing rules and infrastructures; while if you want to increase your trust in your contractor, you should work on his motivation, his beliefs and disposition towards you, or on his competence, self-confidence, etc. Environmental and situational trust (which are claimed to be so crucial in commerce) are aspects of external trust. It is important to stress that:

when the environment and the specific circumstances are safe and reliable, less trust in y (the contractor) is necessary for delegation (for ex. for transactions).

Vice versa, when I strongly trust y, i.e. his abilities, willingness and faithfulness, I can accept a less safe and reliable environment (with less external monitoring and authority). We thus account for the ‘complementarity’ between the internal and the external components of trust in y for g, in given circumstances and a given environment. However, we should not identify ‘trust’ only with ‘internal or interpersonal or social trust’ and claim that, when trust is not there, there is something that can replace it (e.g. surveillance, contracts, etc.). It is just a matter of different kinds, or better, different facets of trust.

5.2.3 Degrees of Trust

The idea that trust is scalable is common (in common sense, in the social sciences, in AI). However, since no real definition and cognitive characterization of trust is given, the quantification of trust is usually quite ad hoc and arbitrary, and the introduction of this notion or predicate is semantically empty. On the contrary, we claim that there is a strong coherence between the cognitive definition of trust, its mental ingredients, and, on the one side, its value, and on the other side, its social functions and its affective aspects. More precisely, the latter are based on the former. In our model we ground the degree of trust of x in y in the cognitive components of x's mental state of trust. More precisely, the degree of trust is a function of:
a) the subjective certainty of the pertinent beliefs;
b) the degree of the relevant dimension that is the object of the belief: ability, strength of willingness, persistence, friendship, etc.
The more y is skilled, willing, non-hostile, … and the more I am sure about this, the more I trust y. We use the degree of trust to formalize a rational basis for the decision to rely and bet on y. We also claim that the "quantitative" aspect of another basic ingredient is relevant: the value or importance or utility of the goal g. In sum:
• the quantitative dimensions of trust are based on the quantitative dimensions of its cognitive constituents.
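A minimal sketch of such a function follows; the multiplicative aggregation is an illustrative assumption introduced here, since the paper does not commit to a specific formula at this point:

```python
# Sketch of the degree of trust (DoT) as a function of (a) the subjective
# certainty of each pertinent belief and (b) the degree of the believed
# dimension. The multiplicative form is illustrative, not the
# formalization given in [Cas-99].

def degree_of_trust(beliefs):
    """beliefs: iterable of (certainty, degree) pairs, each in [0, 1],
    e.g. (how sure I am that y is skilled, how skilled I believe y is)."""
    dot = 1.0
    for certainty, degree in beliefs:
        dot *= certainty * degree
    return dot

# "The more y is skilled, willing, ... and the more I'm sure about this,
# the more I trust y":
print(degree_of_trust([(0.9, 0.8), (0.8, 0.9)]))  # ability, willingness
print(degree_of_trust([(0.9, 0.8), (0.4, 0.9)]))  # less certainty -> less trust
```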

For us trust is not an arbitrary index with a merely operational importance, without real content: it is based on the subjective certainty of the pertinent gradable beliefs.

5.2.4. Positive trust is not enough: a variable threshold for risk acceptance/avoidance

As we saw, the decision to trust is based on some positive trust, i.e. on some evaluation and expectation (beliefs) about the capability and willingness of the trustee and the probability of success. First, those beliefs can be well justified, warranted and based on reasons: this represents the “rational” (reason-based) part of the trust in y. But those beliefs can also be unwarranted, not based on evidence, quite irrational: faithful. We call this part of the trust in y “faith”. Notice that irrationality in the trust decision can derive from these unjustified beliefs, i.e. from the proportion of mere faith. Second, these (grounded or ungrounded) positive expectations are not enough to explain the decision/act of trusting. In fact, another aspect is necessarily involved in this decision. The decision to trust/delegate necessarily implies the acceptance of some perceived risk: a trusting agent is a risk-acceptant agent. Trust is never certainty: there always remains some uncertainty (ignorance) and some probability of failure, and the agent must accept this and run a risk. Thus, a fundamental component of our decision to trust y is our acceptance of, and felt exposure to, risk. Risk is represented in the quantification of the degree of trust and in the criteria for the decision. However, we believe that this is not enough: a specific risk policy seems necessary for trusting and betting, and we should capture this aspect explicitly. In our model [Cas-99] we introduce not only a “rational” degree of trust but also a parameter for evaluating the risk factor. In fact, in several situations and contexts it is important to consider the absolute value of some parameter independently of the values of the others. This suggests the introduction of some saturation-based mechanism influencing the decision: some threshold. For example, it is possible that the value of the damage per se (in case of failure) is too high for a given decision branch to be chosen, independently of the probability of failure (even if very low) and of the possible payoff (even if very high). In other words, that danger might seem to the agent an intolerable risk (for example, in our model we introduce an ‘acceptable damage’ threshold).

5.2.5 Rational trust

In our view trust can be rational and can support rational decisions. Trust as an attitude is epistemically rational when it is reason-based: when it is based on well-motivated evidence and good inferences, when its constitutive beliefs are well grounded (their credibility correctly based on external and internal credible sources), when the evaluation is realistic and the esteem justified. The decision/action of trusting is rational when it is based on an epistemically rational attitude and on a degree of trust sufficient relative to the perceived risk. If my expectation is well grounded and the degree of trust exceeds the perceived risk, my decision to trust is subjectively rational. Trusting is instead irrational either when the accepted risk is too high (relative to the degree of trust) or when the trust is not based on good evidence, not well supported: either the faith22 component (unwarranted expectations) or the risk acceptance (blind trust) is too high23.
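A sketch of such a threshold-equipped decision might look as follows; the specific rule, the risk-acceptance factor, and the numbers are illustrative assumptions, not the equations of [Cas-99]:

```python
# Sketch of the decision to trust/delegate with both a utility/risk
# comparison and a saturation ('acceptable damage') threshold, as
# described above. Rule and numbers are illustrative.

def decide_to_delegate(dot, utility_success, damage_failure,
                       acceptable_damage, risk_acceptance=1.0):
    # Saturation threshold: the damage per se can veto the branch,
    # independently of probability and payoff.
    if damage_failure > acceptable_damage:
        return False
    # Degree-of-trust-weighted comparison of expected gain and loss,
    # modulated by an idiosyncratic risk acceptance/avoidance factor.
    expected_gain = dot * utility_success
    expected_loss = (1.0 - dot) * damage_failure
    return expected_gain * risk_acceptance > expected_loss

print(decide_to_delegate(0.8, 100.0, 30.0, acceptable_damage=50.0))  # True
print(decide_to_delegate(0.8, 100.0, 60.0, acceptable_damage=50.0))  # False: intolerable risk
```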

5.2.6 Trusting Beliefs Sources: from Social to Epistemic and again to Social Trust

Our claim that the degree of trust in the trustee is based on the trustier’s beliefs about the trustee, and in particular on the strength or weight of those beliefs, has interesting consequences for the relationship between two different types of trust. In fact, certainty or uncertainty in believing is again a form of confidence, trust or reliance. We rely on a given belief (depending upon it for taking our decisions, for risking our resources, for executing our actions) on the basis of our confidence in its credibility (epistemic trust). What is paradoxical in this relationship is the following: trust in beliefs is in turn derived from another trust, namely trust in their sources [Dem-98]. Part of these sources are our own: our senses, our reasoning, and our knowledge. On the one hand, we consider what we have directly perceived as the best evidence we have (in other terms, we usually trust our perception - what we have seen, heard, touched, etc. - more than other sources). On the other hand, we accept or reject a belief, assigning it a given credibility, depending on its 'plausibility', i.e. on its coherence ("can I infer it?") or at least compatibility ("I cannot infer the opposite") with our consolidated knowledge (beliefs).

22 Non-rational blind trust is close to faith. Faith is more than trust without evidence: it is trust without the need for, and the search for, evidence.
23 Rational trust can be based not only on reasons and reasoning, on explicit evaluations and beliefs, but also on simple learning and experience. For example, the prediction of the event or result may not be based on any understanding of the process or any model of it, but just on repeated experiences and associations.

However, part of the sources are social, i.e. communication from other agents. In this case, trusting a source is a case of social trust, and in fact it is again analyzed in terms of competence (the source has the capabilities and opportunities to know the truth about that topic) and willingness (the source is willing to tell me what I need, and will be sincere). Thus, in this chain of trust, social trust is based on epistemic trust, which is based on social trust, etc. This can give rise to a vicious circle24 when I do not have an independent social source with respect to the possible trustee, i.e. when the social source of my beliefs about the trustee is the trustee himself. This is quite frequent in politics, for example, but also in commerce. Quite frequently the source of information about the trustee, his competence and reliability, is the trustee himself or

his advertising and self-presentation25

. Trusting him as a source and trusting him as a delegated agent become one and the same thing. What would be needed is an independent social source providing evaluations of the trustee’s competence or reliability, since frequently we cannot observe the trustee and obtain personal, direct evidence of his behavior, nor do we have previous knowledge about him on which to base predictions. In this context of the analysis of epistemic trust (how to trust the beliefs producing trust), it is interesting to cope with a series of questions: how is trust learned? How is it modified? What are its dynamics? Several works (Jonker; Schillo) propose an experience-based approach to learning and/or updating trust in other agents. Starting from the real fact that in an open world it is impossible to know everybody, this approach concludes that the only possibility is to attribute to an unknown agent a neutral degree of trust, averaged over the previous experiences with other agents. Our cognitive model of trust is based on the idea that it is also possible to attribute to an unknown agent some kind of personality or stereotype, or to assign him to a class, for example on the basis of signs [Bac]. In this way it is also possible to attribute abilities, goals, attitudes, etc. In any case it is impossible - in our view - to analyze the experience-based dimension as if it were an independent and isolable dimension of trust. Each experience is not only with another agent, but also in a specific situation and about a specific task. So it is possible that a negative (or positive) trust experience should be ascribed to different causes (for example, causes external or internal to the interacting agent, or to his competence versus his willingness).

5.2.7 When trust is too little or too much: over-confidence and over-diffidence

Trust is not always good, even in cooperation and organization. It can be dangerous both for the individual and for the organization. In fact, the consequences of over-confidence (the excess of trust) at the individual level are: reduced control actions; additional risks; careless and inaccurate action; distraction; delay in repair; possible partial or total failure, or additional costs for recovery. The same is true in collective activity. But what does 'over-confidence', i.e. an excess of trust, mean? In our model it means that the trustier accepts too much risk or too much ignorance, or is not accurate in her evaluations. Notice that there cannot be too much positive trust, i.e. esteem, of the trustee; it can only be ungrounded: the actual risk is greater than the subjective one. A positive evaluation of the trustee (trust in him) can be ‘too much’ only in the sense that it is more than reasonably needed for delegating to him. In this case, the trustier is too prudent and has searched for too much evidence and information. Since knowledge too has costs and utility, in this case the cost of the additional knowledge about the trustee may exceed its utility: the trustier already has enough evidence to delegate. Only in this case is well-grounded trust in the trustee 'too much'. But notice that we cannot call it 'over-confidence'. In sum, there are three cases of 'too much trust':

• More positive trust in the trustee than necessary for delegating. It is not true that 'the trustier trusts the trustee too much'; rather, she needs too much security and information.
• The trustier has more trust in the trustee than he deserves; part of her evaluations and expectations are unwarranted; she does not see the actual risk. This is a case of over-confidence: dangerous and irrational trust.
• The trustier’s evaluation of the trustee is correct, but she is too risk-prone; she accepts too much ignorance and uncertainty, or she bets too much on a low probability. This is another case of over-confidence, and of dangerous and irrational trust.

What are the consequences of over-confidence in delegation?
- Delegating to an unreliable or incompetent trustee;

24 Another important problem is when and how to stop this chain, which seems to be an infinite regression. We believe that the roots of our trust are to be found in our internal sources and in our faith: accepting a given belief without searching for additional evidence and sources. Explicit and reason-based trust is grounded in implicit and by-default trust: we trust by default (our senses, for example) until we have reasons to doubt. Moreover, there is a ratio between the marginal utility and the cost of an additional piece of evidence.
25 This is even more true in electronic commerce, where I do not usually meet and exchange information with other clients of the trustee.

- Lack of control over the trustee (he does not provide his service, or provides a bad one, etc.);
- Too open a delegation [Cas-98c]: in other words, a delegation that permits (or obliges) the trustee to make choices, plans, etc., when he is unable to carry out that kind of action.

What are, on the contrary, the consequences of insufficient confidence, of an excess of diffidence, in delegation?
- We do not delegate to and rely on good potential partners; we miss good opportunities; there is a reduction of exchanges and cooperation;
- We search and wait for too many evidences and proofs;
- We carry out too many controls, losing time and resources and creating interferences and conflicts;
- We over-specify the task/role, without exploiting the trustee's competence, intelligence, or local information; we create too many rules and norms that interfere with flexible and opportunistic solutions.

So some diffidence, some lack of trust, prudence, and the awareness of being ignorant are obviously useful; but so is trusting. What is the right ratio between trust and diffidence? What is the right degree of trust?
• The right level of positive trust in the trustee (esteem) is reached when the marginal utility of additional evidence about him (its contribution to a rational decision) appears inferior to the cost of acquiring it (including time).
• The right degree of trust for delegating (betting) is reached when the risk we accept in case of failure is inferior to the expected subjective utility in case of success (the equation - as we saw in [Cas-99] - is more complex, since we also have to take into account alternative possible delegations or actions).
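Restated as simple predicates (an illustrative paraphrase; the full equation in [Cas-99] also covers alternative delegations and actions):

```python
# The two criteria above as predicates; names are introduced here only
# for illustration.

def stop_gathering_evidence(marginal_utility_of_evidence, cost_of_evidence):
    # Right level of esteem: stop when additional evidence about the
    # trustee contributes less to the decision than it costs (incl. time).
    return marginal_utility_of_evidence < cost_of_evidence

def delegation_warranted(accepted_risk, expected_utility_of_success):
    # Right degree of trust for betting: the risk accepted in case of
    # failure must stay below the expected utility in case of success.
    return accepted_risk < expected_utility_of_success
```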

5.2.8 Trust as feeling I have just analyzed the cognitive explicit facet of trust as beliefs and goals about something, and a consequent decision of relying upon it. I have completely put aside the affective side: the trust that we ‘inspire’, the merely intuitive, emotional facet. It is true that trust can be also this or just this: no judgment, no reasons, but simply attraction and sympathy. This is an automatic, associative, unconscious form of appraisal: we do not know why we prefer y and we are attracted by y. There are beautiful experiments on this form of affective appraisal (Bargh, ). One should also account for the personality aspects of trust as disposition or as default attitude. I do not have room for this analysis, I just want to link this to the multiple relationship between emotion and cognitive processes (section 4). Some emotions are based on and elicited by true evaluations (beliefs), and also trust as affective disposition can be based on trust as esteem and good expectations. And the affective aspect of trust can play a role by modifying the process of beliefs, sources and decision-making. But, on the other side trust can be a non-belief-based emotional reaction, an affective attitude simply activated by unconscious sign perception or associations, by “somatic markers” (Damasio, ) (Castelf Miceli, xx). 6. Concluding remarks In this paper I have argued in favor of a richer, more complete, and also more explicit representation of the mind of a social actor or agent, in order to account for the economic, the strategic, and the organizational behavior of social actors. I supported a “cognitive program”- much beyond the “epistemic program” - aimed at making explicit the players’ goals and motives (and perceived partner’s motives) as part of the game they are playing. The true ground of “utility” is goal-directed action, motives and objectives, . More psychological and qualitative aspects - the explicit account of the multiple and specific goals of the agent (from which only agents competition or cooperation follow)- must be reintroduced into the very model of economic or social actor’s mind. The inputs of any decision process are multiple conflicting goals (and beliefs about conflict, priority, value, means, conditions, plans, risks, etc.). However, taking into account other more realistic human motives –although fundamental – is not enough. A better theory of human individual and social behavior does not depend only on a better spectrum of human incentives. With Pizzorno we are in search of a different micro-foundation (agent/actor’s mind model) for the social sciences, different from RDT, but for such a new micro-foundation, for changing the model of the actor’s mind, postulating additional “values” is not enough: no motive can subvert the very model of utilitarian economic man. A new micro-foundation necessarily requires (also) a different “mechanism” governing decision and action Focusing on motivation theory and on various mechanisms governing behavior does not coincide with the very trendy issue of dealing with emotion in rational models. I have in fact argued against a simplistic view that tries to add emotion to the reductive model of ‘rational mind’ to make it more human-like and realistic. I claimed that a more articulated model of cognitive process is also needed for understanding the various ways in which emotion affects the decision process and behavior. The simpler the model of the decision process, the less articulated the ways in which emotion can affect it. 
I argued that emotion can affect the decision-making inputs, can affect the decision-making mechanism itself, and can even bypass decision-making altogether. In sum, a new, complex but abstract cognitive "architecture" is needed, both for a new micro-foundation of human social behavior and for dealing with emotion.
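These three routes can be made concrete with a minimal sketch (in Python; all names here are hypothetical illustrations, not the formal model argued for in this paper). Emotion may re-weight the inputs (goal values), may change the mechanism (for example, the depth of deliberation), or may bypass deliberation entirely through a reactive route.

# Illustrative sketch only: hypothetical names, not a formal model from this paper.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    value: float               # subjective importance of the goal

@dataclass
class Option:
    name: str
    achieves: dict             # goal name -> believed probability of achieving it

def deliberate(options, goals, depth):
    """Belief/goal-based choice: score options against explicit goals."""
    considered = sorted(goals, key=lambda g: g.value, reverse=True)[:depth]
    def score(opt):
        return sum(g.value * opt.achieves.get(g.name, 0.0) for g in considered)
    return max(options, key=score)

def decide(options, goals, emotion=None):
    # Route 3: emotion BYPASSES deliberation (e.g., fear triggers flight directly).
    if emotion and emotion.get("reaction") is not None:
        return emotion["reaction"]
    # Route 1: emotion alters the INPUTS (here, the subjective value of goals).
    if emotion:
        for g in goals:
            g.value *= emotion.get("goal_bias", {}).get(g.name, 1.0)
    # Route 2: emotion alters the MECHANISM (arousal narrows deliberation depth).
    depth = 1 if (emotion and emotion.get("high_arousal")) else len(goals)
    return deliberate(options, goals, depth)

Even in this toy form the point is visible: with explicit goals, a variable mechanism, and reactive routes, emotion has several distinct entry points; in a bare expected-utility maximizer it could only perturb a single number.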

6.1 Formal architectures: against simplicity

These more sophisticated and complex models are not necessarily descriptive and empirically driven. I mean that they can be abstract and formal models, like the celebrated formal model of economic rationality. This is in fact what AI is building within the domain of "agent architectures and logics". Several kinds of architectures have been proposed: for very simple agents (rule-based, reactive, neural), for economically oriented agents, and also for more cognitively oriented agents. Both an operational (computational) and a formal (logic plus quantification) approach to mind modeling are possible and very promising.

I believe that Economics should abandon the alternative, and the comparison, between economic theory and psychological experiments, in favor of a four-party game: economic theory - experiments - computational models (architectures and platforms) - simulation. For several issues, simulation experiments are as enlightening as psychological experiments, and much more direct for adjusting models and for understanding the relationships between different factors. In particular, the dynamics between the micro and the macro layers can - in my view - be modeled and experimentally understood only through computer simulation. Currently, economic computer simulation is interested in macro effects and deals with very simple agents, which are claimed to be sufficient (Terna, 1998). I have argued that for understanding micro-macro issues, and for modeling economic decisions (for example, with or without emotions), more complex and cognitive models of the agents are needed, as the sketch below illustrates.
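Here is a minimal, purely illustrative example of the fourth party of that game (plain Python with hypothetical names; a sketch under my own assumptions, not SWARM or any platform cited here). The agents are "cognitive" in the minimal sense argued for above: they hold explicit, partner-specific beliefs about trustworthiness, revised by experience, and their decision to rely on a partner is a decision over those beliefs rather than a fixed propensity. The macro observable (how much reliance circulates) then emerges from, and feeds back into, individual minds.

# Illustrative agent-based simulation sketch; hypothetical names, standard library only.
import random

class CognitiveAgent:
    """Agent with explicit, partner-specific beliefs, not one scalar propensity."""
    def __init__(self, n_agents, reliability):
        self.reliability = reliability      # objective chance of honoring a reliance
        self.beliefs = [0.5] * n_agents     # believed trustworthiness of each partner

    def will_rely_on(self, j):
        return self.beliefs[j] >= 0.5       # default trust: rely unless distrusted

    def update(self, j, honored):
        # Simple belief revision from direct experience with partner j.
        self.beliefs[j] += 0.1 * ((1.0 if honored else 0.0) - self.beliefs[j])

def run(n_agents=50, rounds=2000, seed=1):
    rng = random.Random(seed)
    agents = [CognitiveAgent(n_agents, rng.uniform(0.2, 0.95)) for _ in range(n_agents)]
    for _ in range(rounds):
        i, j = rng.sample(range(n_agents), 2)
        if agents[i].will_rely_on(j):
            honored = rng.random() < agents[j].reliability
            agents[i].update(j, honored)
    # Macro observable: share of ordered pairs in which reliance would now occur.
    pairs = [(i, j) for i in range(n_agents) for j in range(n_agents) if i != j]
    rate = sum(agents[i].will_rely_on(j) for i, j in pairs) / len(pairs)
    print(f"share of dyads where reliance would occur: {rate:.2f}")

if __name__ == "__main__":
    run()

Replacing CognitiveAgent with a fixed cooperation probability would make the macro rate definable a priori; with belief dynamics, who ends up being relied upon is an emergent pattern, which is exactly the micro-macro loop that, as argued above, simulation makes experimentally tractable.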

Acknowledgments

I am grateful to my friends Rino Falcone, with whom I developed the theory of trust, and Maria Miceli, who is co-author of the "pseudo-goals" argument; and to Nicola Dimitri, who involved me in this exciting match with such brilliant economists and game theorists, and who pushed me to finalize this paper. I would also like to thank Francesca Marzo and Federica Alberti for comments and references.

References

Castelfranchi, C. and Conte, R., Limits of Economic and Strategic Rationality for Agents and MA Systems, Robotics and Autonomous Systems, special issue on rationality and MAS, to appear.
Conte, R. and Castelfranchi, C., Cognitive and Social Action, UCL Press, London, 1995.
Simon, H., Cognitive Architectures and Rational Analysis: Comment. In K. VanLehn (ed.), Architectures for Intelligence, Hillsdale, NJ: LEA, 1991.
[Bac-99] Bacharach, M. and Gambetta, D., Trust as Type Interpretation. In Castelfranchi, C. and Tan, Y.H. (eds.), Trust and Deception in Virtual Societies, Kluwer, in press.
[Bac] Bacharach, M. and Gambetta, D., Trust in Signs. In Cook, K. (ed.), Trust and Social Structure, New York: Russell Sage Foundation, forthcoming.
[Bis-99] Biswas, A., Sen, S., Debnath, S. (1999), Limiting Deception in Social Agent-Group, Autonomous Agents '99 Workshop on "Deception, Fraud and Trust in Agent Societies", Seattle, USA, May 1, 21-28.
[Bra-87] Bratman, M.E. (1987), Intention, Plans, and Practical Reason. Harvard University Press, Cambridge, MA.
[Bra-99] Brainov, S. and Sandholm, T. (1999), Contracting with uncertain level of trust, Proceedings of the AA'99 Workshop on "Deception, Fraud and Trust in Agent Societies", Seattle, WA, 29-40.
[Cas-98a] Castelfranchi, C. and Falcone, R. (1998), Principles of trust for MAS: cognitive anatomy, social importance, and quantification, Proceedings of the International Conference on Multi-Agent Systems (ICMAS'98), Paris, July, 72-79.
[Cas-98b] Castelfranchi, C. and Falcone, R. (1998), Social Trust: cognitive anatomy, social importance, quantification and dynamics, Autonomous Agents '98 Workshop on "Deception, Fraud and Trust in Agent Societies", Minneapolis/St. Paul, USA, May 9, 35-49.
[Cas-98c] Castelfranchi, C. and Falcone, R. (1998), Towards a Theory of Delegation for Agent-based Systems, Robotics and Autonomous Systems, special issue on Multi-Agent Rationality, Elsevier, Vol. 24, Nos. 3-4, 141-157.
[Cas-99] Castelfranchi, C. and Falcone, R. (1999), The Dynamics of Trust: from Beliefs to Action, Autonomous Agents '99 Workshop on "Deception, Fraud and Trust in Agent Societies", Seattle, USA, May 1, 41-54.
[Cas-00] Castelfranchi, C. and Falcone, R. (2000), Trust is much more than subjective probability: Mental components and sources of trust, 32nd Hawaii International Conference on System Sciences, Mini-Track on Software Agents, Maui, Hawaii, 5-8 January 2000, Electronic Proceedings.
[Col-94] Coleman, J.S. (1994), Foundations of Social Theory, Harvard University Press, Cambridge, MA.
[Dem-98] Demolombe, R. (1998), To trust information sources: a proposal for a modal logical framework, Autonomous Agents '98 Workshop on "Deception, Fraud and Trust in Agent Societies", Minneapolis, USA, May 9, 9-19.
[Deu-58] Deutsch, M. (1958), Trust and Suspicion, Journal of Conflict Resolution, Vol. 2 (4), 265-279.
[Gam-88] Gambetta, D. (1988), Can we trust trust? In Gambetta, D. (ed.), Trust: Making and Breaking Cooperative Relations, Oxford: Basil Blackwell, 213-237.
[Gam-90] Gambetta, D. (ed.) (1990), Trust, Basil Blackwell, Oxford.
[Gan-99] Ganzaroli, A., Tan, Y.H., Thoen, W. (1999), The Social and Institutional Context of Trust in Electronic Commerce, Autonomous Agents '99 Workshop on "Deception, Fraud and Trust in Agent Societies", Seattle, USA, May 1, 65-76.
[Jon-99] Jonker, C. and Treur, J. (1999), Formal Analysis of Models for the Dynamics of Trust based on Experiences, Autonomous Agents '99 Workshop on "Deception, Fraud and Trust in Agent Societies", Seattle, USA, May 1, 81-94.
[Ter-98] Terna, P. (1998), Simulation Tools for Social Scientists: Building Agent Based Models with SWARM, Journal of Artificial Societies and Social Simulation, Vol. 1, No. 2, <http://www.soc.surrey.ac.uk/JASSS/1/2/4.html>.
[Wil-85] Williamson, O.E. (1985), The Economic Institutions of Capitalism, The Free Press, New York.

[Agr89] Agre, P.E. (1989), The dynamic structure of everyday life. PhD Thesis, Department of Electrical Engineering and Computer Science, MIT.
[Bic90] Bicchieri, C. (1990), Norms of cooperation. Ethics, 100, 838-861.
[Bob91] Bobrow, D. (1991), Dimensions of Interaction. AI Magazine, 12 (3), 64-80.
[Bon89] Bond, A.H. (1989), Commitments: Some DAI insights from Symbolic Interactionist Sociology. AAAI Workshop on DAI, 239-261. Menlo Park, CA: AAAI.
[Bot98] Botelho, L.M. and Coelho, H. (1998), Artificial Autonomous Agents with Artificial Emotions. Autonomous Agents '98, Minneapolis, ACM Press.
[Bro89] Brooks, R.A. (1989), A robot that walks: Emergent behaviours from a carefully evolved network. Tech. Rep., Artificial Intelligence Laboratory, Cambridge, MA: MIT.
[Can97] Canamero, D. (1997), Modeling Motivations and Emotions as a Basis for Intelligent Behavior. Autonomous Agents '97, ACM Press, 148-155.
[Cas91] Castelfranchi, C. (1991), Social Power: A missed point in DAI, MA and HCI. In Demazeau, Y. and Mueller, J.P. (eds.), Decentralized AI, 49-62. Amsterdam: Elsevier.
[Cas92a] Castelfranchi, C. and Conte, R. (1992), Emergent functionality among intelligent systems: Cooperation within and without minds. AI & Society, 6, 78-93.
[Cas92b] Castelfranchi, C., Miceli, M., Cesta, A. (1992), Dependence Relations among Autonomous Agents. In Demazeau, Y. and Werner, E. (eds.), Decentralized A.I. - 3, Elsevier (North-Holland).
[Cas95] Castelfranchi, C. (1995), Guarantees for Autonomy in Cognitive Agent Architecture. In [Woo95b], 56-70.
[Cas96] Castelfranchi, C. (1996), Commitment: From intentions to groups and organizations. In Proceedings of ICMAS'96, San Francisco, June 1996, AAAI-MIT Press.
[Cas97a] Castelfranchi, C. (1997), Individual Social Action. In Holmstrom-Hintikka, G. and Tuomela, R. (eds.), Contemporary Action Theory, Vol. II, 163-192. Dordrecht: Kluwer.
[Cas97b] Castelfranchi, C. (1997), Challenges for agent-based social simulation: The theory of social functions. IP-CNR, TR, September 1997; invited talk at SimSoc'97, Cortona, Italy.
[Cas97c] Castelfranchi, C. and Falcone, R. (1997), Delegation Conflicts. In Boman, M. and Van de Velde, W. (eds.), Proceedings of MAAMAW '97, Springer-Verlag.
[Cas98a] Castelfranchi, C., Modeling Social Action for AI Agents. Artificial Intelligence (forthcoming).
[Cas98b] Castelfranchi, C. (1998), To believe and to feel: To embody cognition and to cognitize body. The case for "needs". In 1998 AAAI Fall Symposium "Emotional and Intelligent: The Tangled Knot of Cognition".
[Coh90] Cohen, P.R. and Levesque, H.J. (1990), Rational interaction as the basis for communication. In Cohen, P.R., Morgan, J. and Pollack, M.E. (eds.), Intentions in Communication. MIT Press.
[Con96] Conte, R. and Castelfranchi, C. (1996), Mind is not enough: Precognitive bases of social interaction. In Gilbert, N. (ed.), Proceedings of the 1992 Symposium on Simulating Societies. London: University College London Press.
[Cot90] Cottrell, G.W., Bartell, B., and Haupt, C. (1990), Grounding meaning in perception. In Marburger, H. (ed.), 14th German Workshop on AI, 307-321. Berlin: Springer.
[Dam94] Damasio, A.R. (1994), Descartes' Error. New York: Putnam's Sons.
[Den81] Dennett, D.C. (1981), Brainstorms. Harvester Press, New York.
[Eco75] Eco, U. (1975), Trattato di Semiotica generale. Milano: Bompiani.
[Els82] Elster, J. (1982), Marxism, functionalism and game theory: The case for methodological individualism. Theory and Society, 11, 453-481.
[Gas91] Gasser, L. (1991), Social conceptions of knowledge and action: DAI foundations and open systems semantics. Artificial Intelligence, 47, 107-138.
[Gas98] Gasser, L. (1998), Invited talk at Autonomous Agents '98, Minneapolis, May 1998.
[Gen94] Genesereth, M.R. and Ketchpel, S.P. (1994), Software Agents. TR, CSD, Stanford University.
[Gro95] Grosz, B. (1996), Collaborative Systems. AI Magazine, Summer 1996, 67-85.
[Hay67] Hayek, F.A. (1967), The result of human action but not of human design. In Studies in Philosophy, Politics and Economics. London: Routledge & Kegan Paul.
[Jen93] Jennings, N.R. (1993), Commitments and conventions: The foundation of coordination in multi-agent systems. The Knowledge Engineering Review, 3, 223-250.
[Jon96] Jones, A.J.I. and Sergot, M. (1996), A Formal Characterisation of Institutionalised Power. Journal of the Interest Group in Pure and Applied Logics, 4 (3), 427-445.
[Kur97] Kurihara, S., Aoyagi, S., Onai, R. (1997), Adaptive Selection of Reactive/Deliberate Planning for the Dynamic Environment. In Boman, M. and Van de Velde, W. (eds.), Multi-Agent Rationality: Proceedings of MAAMAW '97, LNAI 1237, 112-127. Berlin: Springer.
[Lev90] Levesque, H.J., Cohen, P.R., Nunes, J.H.T. (1990), On acting together. In Proceedings of the 8th National Conference on Artificial Intelligence, 94-100. Kaufmann.
[Loy97] Loyall, A.B. (1997), Believable Agents: Building Interactive Personalities. PhD Thesis, CMU, Pittsburgh, May 1997.
[Luc95] Luck, M. and d'Inverno, M. (1995), A formal framework for agency and autonomy. In Proceedings of the First International Conference on Multi-Agent Systems, 254-260. AAAI Press/MIT Press.
[Mac98] Macy, M. (1998), in JASSS, 1 (1), 1998.
[Mat92] Mataric, M. (1992), Designing Emergent Behaviors: From Local Interactions to Collective Intelligence. In Simulation of Adaptive Behavior 2. Cambridge, MA: MIT Press.
[McD87] McDermott, D. (1987), A critique of pure reason. Computational Intelligence, 3, 151-160.
[McF83] McFarland, D. (1983), Intentions as goals. Open commentary on Dennett, D.C., Intentional systems in cognitive ethology: The "Panglossian paradigm" defended. The Behavioral and Brain Sciences, 6, 343-390.
[Mic in press] Miceli, M. and Castelfranchi, C., The role of evaluation in cognition and social interaction. In Dautenhahn, K. (ed.), Human Cognition and Social Agent Technology. John Benjamins, in press.
[Rao92] Rao, A.S., Georgeff, M.P., and Sonenberg, E.A. (1992), Social plans: A preliminary report. In Werner, E. and Demazeau, Y. (eds.), Decentralized AI - 3, 57-77. Amsterdam: Elsevier.
[Ric97] Rich, C. and Sidner, C.L. (1997), COLLAGEN: When Agents Collaborate with People. In Proceedings of Autonomous Agents '97, Marina del Rey, CA, 284-291.
[Ros68] Rosenblueth, A. and Wiener, N. (1968), Purposeful and Non-Purposeful Behavior. In Buckley, W. (ed.), Modern Systems Research for the Behavioral Scientist. Chicago: Aldine.
[Rus95] Russell, S.J. and Norvig, P. (1995), Artificial Intelligence: A Modern Approach. Prentice Hall.
[Sic95] Sichman, J. (1995), Du Raisonnement Social Chez les Agents (On Social Reasoning in Agents). PhD Thesis, Polytechnique - LAFORIA, Grenoble.
[Sin91] Singh, M.P. (1991), Social and Psychological Commitments in Multiagent Systems. In Preproceedings of "Knowledge and Action at Social & Organizational Levels", AAAI Fall Symposium Series. Menlo Park, CA: AAAI.
[Ste90] Steels, L. (1990), Cooperation between distributed agents through self-organization. In Demazeau, Y. and Mueller, J.P. (eds.), Decentralized AI. Amsterdam: North-Holland/Elsevier.
[Suc87] Suchman, L.A. (1987), Plans and situated actions: The problem of human-machine communication. Cambridge: Cambridge University Press.
[Sun97] Sun, R. and Alexandre, F. (eds.) (1997), Connectionist-Symbolic Integration. Hillsdale, NJ: Lawrence Erlbaum.
[Tha96] Thagard, P. (1996), Mind: Introduction to Cognitive Science. MIT Press.
[Tuo88] Tuomela, R. and Miller, K. (1988), We-Intentions. Philosophical Studies, 53, 115-137.
[Tuo93] Tuomela, R. (1993), What is Cooperation? Erkenntnis, 38, 87-101.
[van82] van Parijs, P. (1982), Functionalist Marxism rehabilitated: A comment on Elster. Theory and Society, 11, 497-511.
[Vir96] Virasoro, M.A. (1996), Interview by Franco Foresta Martin, SISSA, Trieste.
[Wat67] Watzlawick, P., Beavin, J.H. and Jackson, D.D. (1967), Pragmatics of Human Communication. New York: Norton.
[Wei97] Weisbuch, G. (1997), Societies, cultures and fisheries from a modeling perspective. SimSoc'97, Cortona, Italy.
[Woo95a] Wooldridge, M. and Jennings, N. (1995), Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10 (2), 115-152.
[Woo95b] Wooldridge, M.J. and Jennings, N.R. (eds.) (1995), Intelligent Agents: Theories, Architectures, and Languages. LNAI 890, Springer-Verlag, Heidelberg.
