BEHAVIORAL AND BRAIN SCIENCES (1993) 16, 1-14 Printed in the United States of America

How we know our minds: The illusion of first-person knowledge of intentionality

Alison Gopnik
Department of Psychology, University of California, Berkeley, CA 94720
Electronic mail: [email protected]

Abstract: As adults we believe that our knowledge of our own psychological states is substantially different from our knowledge of the psychological states of others: First-person knowledge comes directly from experience, but third-person knowledge involves inference. Developmental evidence suggests otherwise. Many 3-year-old children are consistently wrong in reporting some of their own immediately past psychological states and show similar difficulties reporting the psychological states of others. At about age 4 there is an important developmental shift to a representational model of the mind. This affects children's understanding of their own minds as well as the minds of others. Our sense that our perception of our own minds is direct may be analogous to many cases where expertise provides an illusion of direct perception. These empirical findings have important implications for debates about the foundations of cognitive science.

Keywords: children; cognitive development; consciousness; epistemology; expertise; folk psychology; functionalism; incorrigibility; intentionality; perception; theories; theory of mind

1. The problem of first-person knowledge

As adults all of us have a network of psychological beliefs. We believe that other people have beliefs, desires, intentions, and emotions and that these states lead to their actions. Moreover, we also believe that we ourselves have analogous beliefs and desires that are involved in our own decisions to act. And we believe, at least implicitly, that beliefs, desires, and so on are what philosophers would call "intentional" states; we believe that they are about the world. However, we also believe that our relations to our own beliefs and desires are different from our relations to those of others. We believe that we know our own beliefs and desires directly, but that we must infer the beliefs and desires of other people. Are we right?

In trying to understand our commonsense psychological beliefs, and to test whether they are correct, it is helpful to distinguish between two different ways we think about mental states such as beliefs and desires. We sometimes think of mental states as the underlying entities that explain our behavior and experience. Describing such states is the goal of scientific psychology. I will call these underlying entities "psychological states." These psychological states are similar to other physically or functionally defined objects and events in the world: atoms, species, word-processing programs, and so forth.

In addition, however, we use mentalistic vocabulary to talk about conscious experiences with a particular kind of phenomenology, the Joycean or Woolfian stream of consciousness. I will call these "psychological experiences." These experiences are phenomenologically distinct from other types of experience, such as our experiences of trees or rocks or colors. Our experiences of our own beliefs and desires are also phenomenologically distinct from our experiences of other people's.1 In one sense, of course, all experience is first-person and psychological. In a different sense, however, we can use these terms to pick out a distinctive type of experience, for example, the experience I have as I sit motionless at my desk and thoughts, desires, emotions, and intentions fill my head.

The commonsense notion that our knowledge of our own minds is immediate and privileged can be construed in many ways. We can construe it simply as a claim about our psychological beliefs themselves. We might simply be saying that, as a matter of fact, most people do believe that we know our own minds. This means, for example, that they don't require justifications for first-person psychological assertions (see Davidson 1980). Construed this way, the claim is obviously true. We do have the beliefs outlined in the first paragraph and they form the background for the way we speak and act. Alternatively, we might construe the assertion of first-person privileged knowledge as a matter of phenomenology. It concerns the way our psychological experience feels to us. This assertion also seems incontrovertibly true.

We might, however, also construe the claim of first-person privileged knowledge as an epistemological, even a cognitive, one.2 Common sense itself appears to make this stronger claim, which does not just concern the phenomenology of our psychological experiences but also their relation to underlying psychological states. According to this interpretation of first-person privilege, our beliefs about our own psychological states do not come from the same source as our beliefs about the psychological states of others. In the case of our own minds, there is a direct link leading from our underlying psychological states to our psychological experiences. It is easy enough to imagine how we might be so wired that whenever we were in a particular psychological state we would have a particular corresponding psychological experience. Because we have no experience of other minds, this link cannot exist in that case, and so our beliefs about the psychological states of others must be indirect. According to this interpretation, we note the behaviors that result from our own psychological states, known directly through psychological experience, and then infer that others have similar psychological states when they produce similar actions.

© 1993 Cambridge University Press 0140-525X/93 $5.00+.00

This is an intuitively plausible idea and one that underlies our commonsense understanding as well as many philosophical and psychological accounts. It is also, however, an empirical claim about the cognitive relation between our psychological states and experiences and our beliefs about them. Are psychological states, experiences, and beliefs actually related in this way?

The answer might be different for beliefs about different psychological states. For simple sensations, the commonsense conviction that our beliefs about ourselves and our beliefs about others come from different sources may be very strong. In other cases, for example, very abstract emotional states such as jealousy or guilt or love, we may feel less certain that this picture is correct. In particular, we may notice that there are cases of self-deception where our beliefs about ourselves, and even our psychological experiences of ourselves, prove to be consistently inaccurate.

In this target article, I will be concerned with one particular belief: the belief that psychological states are intentional. As ordinary, unphilosophical adults we believe that our beliefs are psychological entities that refer to the world but that they can also be misleading. We may not make this belief about beliefs fully explicit, but it is clearly apparent in our ability to understand the complexities of the relationship between the world and our beliefs about it, and particularly in our ability to understand cases of misrepresentation. How do we develop these beliefs about beliefs? Our adult intuitions suggest that knowledge of intentionality, like knowledge of sensations, comes directly and reliably from our psychological experience. I know that my beliefs refer to the world - but that they may be false, they may change, or they may come from many different sources - simply by experiencing these facts about them.

Alternatively, our commonsense intuitions might be wrong. My beliefs about the intentionality of my own mental states and those of others might have a very similar cognitive history. I will argue here that evidence from developmental psychology suggests that this is the case. As young children we have psychological states, we have psychological experiences, and we have beliefs about our psychological states. Our beliefs about at least some of those states, however, are consistently incorrect and differ from the beliefs we will have later. As far as we can tell, our experience of those states is also different. Young children do not seem to believe that their own psychological states are intentional, nor do they experience them as intentional, in the way adults do. Since we were all once such children, what we think we know about ourselves changes radically.

Perhaps more important, these changes reflect our knowledge about the psychological states of other people. When we are children and our understanding of others is incorrect, our understanding of ourselves, even our experience of ourselves, is incorrect in the same way. Empirical findings show that the idea of intentionality is a theoretical construct, one we invent in our early lives to explain a wide variety of evidence about ourselves and others. This theoretical construct is equally applicable to ourselves and others and depends equally on our experience of ourselves and others.

This conclusion may seem to leave us with a puzzle. If the origins of first-person knowledge of intentionality are not profoundly different from the origins of third-person knowledge, why do we as adults think they are? I will suggest an analogy between our impression that we experience our own psychological states directly and similar phenomena in cases of expertise. In the case of the expert, phenomenological immediacy may be divorced from cognitive directness. Experts experience their knowledge as immediate and perceptually based; in reality, however, it depends on a long theoretical history. Similarly, we, as experts in commonsense psychology, may experience our theoretical knowledge of the intentionality of our psychological states as if it were the result of direct perception.

2. First-person knowledge and cognitive science

Aside from their intrinsic interest, questions about the nature of psychological beliefs, particularly beliefs about intentional states like belief and desire, and even more particularly beliefs about the intentionality of those intentional states, have played an important role in debates about the foundations of cognitive science. The accounts of the mind offered by cognitive science have generally been indifferent to questions of psychological experience. States that are not experienced, like unconscious representations, may be construed as intentional, and may be attributed to creatures like computers that have no psychological experience. Clearly, this indifference is not shared by our commonsense psychology. According to common sense, I suggest, psychological states and psychological experiences are closely related.

Philosophers of cognitive science have responded to this problem in different ways. One philosophical school (Churchland 1984; Stich 1983) has opted to abandon the idea of intentionality altogether. Other writers (Dretske 1981; Millikan 1984; some works of Fodor 1985) have tried to develop a notion of intentionality that would preserve our intuition that intentionality involves a special relation between psychological states and the world, but cut the link to psychological experience. This leads to such peculiarities as the "swamp-thing" (Davidson 1980; Millikan 1984), a random cloud of atoms that duplicates my physiology and my experience and yet fails, according to these views, to have intentional mental states. The swamp-thing clearly conflicts with common sense.

All these philosophers endorse, in different ways, the "theory-theory," the view that commonsense psychology, including the idea of intentionality, is a theory that explains experience and behavior. Like scientific theories, commonsense theories are "defeasible," that is, they are refutable and revisable. They change, often quite radically, in response to new evidence. Because theories come and go, there is no special reason to prefer our commonsense theory to any others. The debate is over the merits of the commonsense theory and its relation to scientific theories of the mind. Moreover, there is no reason why theory-based beliefs should apply any differently to the self than to others. For all these views the immediacy of our first-person experience of intentionality is problematic. In common sense, intentionality feels like something you perceive, not something you postulate, and the difference between our knowledge of ourselves and others seems profound.

Other philosophers deny that commonsense knowledge of intentionality is theoretical. One suggestion is that our ordinary notion of intentionality, rather than referring to some psychological reality also referred to by the terms of cognitive science, reflects a kind of stance that we adopt toward ourselves and others. We might think of intentionality, and many of the rest of our commonsense psychological beliefs, as matters of convention or language. Such matters are not defeasible in response to evidence in the way that theories are. They are, in a way, self-constitutive: If I and everyone else believe that forks go on the left, and we all act that way, then forks will go on the left. (Although their views are in other respects very different, this basic attitude is shared, for example, by Davidson [1980]; Dennett [1987]; Taylor [1985]; and Wittgenstein [1953].)

More relevant to the present case, Searle (1980; 1984; 1990) has suggested that the first-person aspect of intentionality is essential. Searle claims that the intentionality of psychological states is known directly in the first-person case; this is what Searle calls "intrinsic intentionality." According to Searle, states without such intrinsic intentionality, that is, those that cannot be known through psychological experience, should not be considered intentional at all. This emphasis on first-person experience is also implicit in recent "simulation accounts," such as those offered by Gordon (1986; 1992c), which elaborate on the commonsense picture; it is also quite explicit in the recent work by Goldman along similar lines (1989; 1992a). According to simulation accounts, we understand the minds of others by simulating them. We determine what we would feel in a similar situation. For Searle, Gordon, and Goldman, having psychological beliefs does not depend on having a theory of the mind at all. These beliefs are instead simply a consequence of having a mind of a particular sort, a mind that gives rise to psychological experiences.

These arguments turn on the relation between commonsense beliefs about the mind, first-person psychological experience, and scientific psychology. Each of these positions gives a different account of where the commonsense idea of intentionality comes from. According to the "theory-theory," intentionality is a theoretical notion constructed from evidence, which may change as the result of new evidence. The view that intentionality is a kind of stance implies that it is developed by a process of enculturation, a form of life rather than a form of knowledge. Searle and Goldman suggest that the notion of intentionality is a consequence of certain distinctive features of the human brain. It is a fairly direct first-person apprehension of a particular brute fact about human beings. This last view is more like the view of common sense, which probably explains much of its intuitive appeal.

Although it has been discussed mostly by philosophers, the question of whether our beliefs about intentionality are theoretical, enculturated, or derived from a more direct first-person apprehension is an empirical one. Resolving these questions will largely depend on our account of where the idea of intentionality comes from and how it is related to our underlying psychological states.

3. Developing an understanding of the minds of others

3.1. The evidence. In the last few years there has been a veritable explosion of interest in children's ideas about the mind (see Astington et al. 1988; Perner 1991b; Wellman 1990; Whiten 1990; see also Astington & Gopnik 1991a; Gopnik 1990, for reviews). Much of this investigation has centered on the period between about 2½ and 5 years of age.3 In this developmental period there are clear and consistent changes in the ways that children make inferences about the mental states of others. The area is a new one and there is some controversy over the details of various experiments. Nevertheless, there is also an emerging consensus about the general outlines of the developments.

By 18 months children show some capacity to generate representations that are not given to them perceptually, and to comment, implicitly or explicitly, on the fit between representations and reality. For example, they may use words like "no" to comment on the nonexistence of hypothetical objects, or (later) the falsity of hypothetical propositions; or they may use "uh-oh" to comment on the fact that their goals have not been realized (Gopnik 1982). In their pretend play they show the capacity to treat an object as something other than itself, and to demonstrate, by their laughter and delight, that they know they are doing this (Leslie 1988). These abilities suggest some very early capacity to distinguish between reality and representations, between physical objects and mental states.

During the second year children become increasingly able to refer to mental states linguistically (Shatz et al. 1983). By the age of 3, the capacity to understand the ontological difference between physical reality and mental states is clearly in place. The most dramatic evidence for this comes in a study by Wellman and Estes (1986). Three-year-old children were able to distinguish between dreams, images and thoughts, and real things. Moreover, as we might expect from the early emergence of pretend play, 3-year-olds appear to be able to differentiate other people's pretenses from reality. Flavell et al. (1987) found that 3-year-old children had little difficulty understanding that someone was pretending to be a dog yet was really a person.

At about the same time, children also show some ability to understand differences between their own mental states and the mental states of others. Children of this age seem to understand that there might be limits on how much of the world one might see or think of, and that these limits might differ for different people. For example, children as young as 2½ years appear to know that someone else may not be able to see something they themselves see, or vice versa (Flavell et al. 1981). (There is some evidence that this ability is itself gradually constructed in the second year; Lempers et al. 1977). Recent work by Wellman (1990) also suggests that young children may be able to make similar judgments about knowledge and belief. For example, 3-year-old children may know that if there are pencils in two locations, say, the shelf and the desk, and someone has only looked in the desk, they will not know about the pencils on the shelf.

An interesting, ambiguous case concerns children's ability to intentionally deceive others. Some have suggested that this ability might be a hallmark of general "metarepresentational" ability. However, it also appears difficult to distinguish genuine deception (an attempt to implant a particular false belief in the mind of another) from what one might call "behavioral deception," the acquisition of a pragmatic strategy for bringing about certain events, without a deeper understanding of the basis for that strategy. The developmental literature is also ambiguous. Some investigators have claimed to find deceptive behaviors in 3-year-olds (Chandler et al. 1989); others have failed to replicate these results (Sodian 1991).

These are impressive abilities in such very young children. However, the failures of these children are equally impressive. Three-year-old children consistently fail to understand certain other problems, or rather understand them in a way that is profoundly different from adult understanding. These problems involve very different questions and tasks but a common conceptual basis. All of them require that the child understand the complex representational process relating real objects and mental representations of those objects.

Two types of tasks have been extensively studied, the appearance-reality task (Flavell et al. 1986) and the false-belief task (Hogrefe et al. 1986; Perner et al. 1987; Wimmer & Perner 1983). In the appearance-reality task, children are presented with an object that appears to be one thing but is actually another: a "Hollywood rock" made of painted sponge; a "sucker egg" made of chalk; a green cat covered by a red filter that makes it look black. After extensive pretraining to ensure that they understand the questions, children are asked what the object looks like and what it really is. Typically, 3-year-olds give the same answer to both questions, saying that the object looks like a sponge, and really is a sponge or the cat looks black and really is black (whether they respond with the reality or the appearance depends on the particular object).

Flavell and his colleagues have gone to heroic lengths to ensure that the children's errors are genuinely conceptual and not merely linguistic. In one particularly convincing demonstration (Flavell et al. 1986) children are shown a white cardboard "flower" with a blue filter over it. The filter is placed over the flower and the children have a chance to see that the blue color is only apparent. The children say that the flower both appears blue and is really blue. The children next see the flower with the filter on top and see the experimenter cut away a small portion of the cardboard flower. They are then shown two pieces of cardboard, one white and one blue, and asked which piece came from the flower. Even though this question is not explicitly about appearance or reality, and only requires children to point to a piece, children err by choosing the blue patch.

In one version of the false-belief task, children are asked to predict how a deceptive object will appear to others. For example, children see a candy box, which turns out to be full of pencils (Perner et al. 1987). They are asked what someone else will think when they first see the box. Three-year-old children consistently say that the other person will think there are pencils in the box. They apparently fail to understand that the other person's beliefs may be false. Again, this finding has proved to be strikingly robust. Children make this error in many different situations, involving many different kinds of objects and events. They continue to make the error when they actually see the other person respond to the box with surprise, and even when they are explicitly told about the other person's false belief (Moses & Flavell 1990; Wellman 1990). Moreover, they make incorrect predictions about the other person's actions, which reflect their incorrect understanding of the other person's beliefs (Perner et al. 1987). They make similar errors in dealing with deceptive physical representations, such as misleading pictures or photographs (Zaitchik 1990).4

These are the best-investigated tasks, but there are also other indicators in the literature that an important conceptual change occurs at this point. One indication concerns children's ability to understand that people may see objects in different ways, what Flavell has called "level-2" perceptual perspective-taking. Three-year-olds, for example, appear to have difficulty understanding that a turtle on a table who looks right-side up to them may look upside-down to an observer on the other side of the table (Flavell et al. 1981).

There are also changes in children's ability to understand the sources of their beliefs. Wimmer has suggested that children have difficulty inferring where beliefs come from, knowing, for example, that someone who has had perceptual access to an object would know what it was, while someone who has not, could not (Wimmer et al. 1988a; 1988b). Others have failed to replicate this finding, particularly where vision is involved (Pillow 1989). There is, however, other evidence that children have difficulty understanding where beliefs come from. O'Neill et al. (1992) tested whether children understood that only certain kinds of information could be obtained from particular types of sources. For example, children were shown two objects which could only be differentiated by touch and two others which could only be differentiated by sight; they were also shown a person who either saw or touched the objects. Three-year-olds had difficulty predicting which objects the other person would be able to differentiate.

There is also evidence that 3-year-olds have difficulty understanding the notion of subjective probability. Moore et al. (1990) found that 3-year-olds were unable to determine that a person who knew about an object was a more reliable source of information than one who merely guessed or thought. Similarly, 3-year-olds, in contrast to 4-year-olds, showed no preference for getting information from people who were sure they knew what was in a box rather than those who expressed uncertainty about their knowledge. These children seemed to divide cognitive states into full knowledge or total ignorance; they did not appreciate that belief could admit of degrees.

A final piece of evidence for a developing understanding of belief comes from investigations of children's spontaneous references to such events in their ordinary speech. Bartsch and Wellman (cited in Wellman 1990), following an earlier study by Shatz et al. (1983), analyzed extensive spontaneous speech samples from the CHILDES corpus and discovered that there were virtually no genuine references to belief or knowledge (as opposed to formulaic expressions such as "I don't know") before the third birthday. In contrast, there were many references to desire and perception in the earliest transcripts. After the third birthday, references to belief and even occasional references to false belief began to appear, and these increased markedly in the period from ages 3 to 4.

The understanding of perceptions, of false beliefs, and of sources and degrees of belief requires an understanding of what Searle has called mental states with a "mind-to-world" direction of fit; states where the mind is altered to fit the world (Searle 1983). An interesting question, only recently being investigated, concerns children's understanding of "world-to-mind" mental states, states such as desire and intention where the world is altered to fit the mind. Classical philosophical accounts often describe our everyday psychology as a "belief-desire" psychology and have argued that both belief and desire are always implicated in our explanations of action. Moreover, such states are considered to be intentional in most philosophical accounts; what we desire or intend (by and large) is not a thing itself but the thing as represented in a particular way. For example, if the chocolate cake I am intent on obtaining turns out to be made of carob and tofu, I will be frustrated. My desire was less for this piece of cake, than for this piece of cake as I first represented it to myself, full of sin and cholesterol. At the same time, the differences in direction of fit may make desires rather different from beliefs.

There is evidence that even 2-year-olds have a simple nonrepresentational concept of desire. They understand, for example, that desires may not be fulfilled (Astington & Gopnik 1991b; Wellman & Woolley 1990). There is also evidence, however, that other aspects of desire are more difficult to understand. One particularly interesting recent finding concerns children's understanding of differences in judgments of value, judgments more closely related to desires than to beliefs (Flavell et al. 1990). Children appear to be better able to make the correct judgment in these cases than in the standard false-belief task. They appear to be more willing to say, for example, that they might think a cookie was yummy whereas another person thought it was yucky than that they might think the box was full of pencils whereas the other thought it was full of candy. Nevertheless, a fair minority of 3-year-old children (between 30% and 40%) still made errors on these tasks.

Similarly, there is some evidence that children have difficulty understanding some aspects of intention. Intentions may be construed simply as desires. We might also think of intentions, however, as more complex states that mediate between beliefs and desires and actions. Three-year-old children appear to understand intentions in the first way but not the second, just as they have difficulty understanding how representations mediate between beliefs and desires and the world. Astington has found that children identify intentions with either actions or desires rather than understanding them as mental states following desires but preceding actions (Astington & Gopnik 1991b).

These abilities consistently appear at around age 4. Given appropriately simple tasks, 4-year-olds have little difficulty understanding false beliefs, appearance-reality contrasts, and the other concepts that give 3-year-olds difficulty. Moreover, there is evidence for an association among these tasks; children who do well on one are also likely to do well on the others. Flavell et al. (1986) found that the level-2 perceptual task (knowing that the turtle would appear upside down to the other viewer) was significantly correlated with performance on the appearance-reality task when age was controlled for. Similarly, we found that, with age controlled, performance on false-belief and appearance-reality tasks was significantly correlated (Gopnik & Astington 1988). Moore et al. (1990) replicated this result and also found that judgments of subjective probability were correlated with all these other developments.

3.2. The shift to a representational model of the mind. How do we interpret these results? There is some consensus among these investigators that there is a quite general shift in the child's concept of the mind at around 3½ years. This shift involves central changes in the child's epistemological concepts, concepts of the relation between mind and world. The precise characterization of this shift has been a matter of some debate. Indeed, it is difficult to know exactly how to characterize a view of the mind that is profoundly different from our own. The essential idea, however, is that the 3-year-old believes objects or events are directly apprehended by the mind. In Chandler's (1988) striking phrase, objects are bullets that leave an indelible trace on any mind that is in their path. Different theorists have captured this idea in different ways. Flavell (1988) talks about cognitive connections; Wellman (1990) talks about a "copy theory of representation"; Perner (1991b) talks about connections to situations. We have suggested that it may be useful to think of this view as analogous to certain views of perception, such as a Gibsonian or Dretskean (Dretske 1981) view (Astington & Gopnik 1991b),5 in which the relation between real things in the world and our perception of them is a direct causal link, almost a transference. Indeed, it seems plausible that children originally apply this model to perception and then extend it to belief.

For 3-year-olds, there are two kinds of psychological states. In true 3-year-old spirit, we might call them "silly states" and "serious states." Silly states include images, dreams, and pretenses, whereas serious states are similar to what adults would call perceptions, desires, and beliefs. For the 3-year-old, silly states have no referential or causal relation to reality; they are neither true nor false. They are completely divorced from considerations of the real world (which largely explains their charm). The 3-year-old versions of perception, desire, and belief, however, involve a completely accurate, if sometimes limited, apprehension of the way the world really is. Objects exist in the world and people see, want, or apprehend them. One way of putting it might be that for the 3-year-old, all serious psychological states are "transparent" (Quine 1956). That is, children think of belief the way we as adults sometimes construe perception or desire, as a matter of a direct relation between the mind and objects in the world, not a relation mediated by representations or propositions. They think we simply believe x, tout court, just as, even in the adult view, we may simply see or want x, tout court, rather than seeing, wanting, or believing that x.

This view of the mind allows children to make many predictions and solve many problems. It allows them to see that if a mind is not in the path of an object it will not apprehend the object, and to understand how beliefs lead to action. It does not, however, allow them to understand cases of misrepresentation, such as false beliefs or misleading appearances (consider the similar difficulties for a Gibsonian [1979] or Dretskean account). In an important sense, if you cannot understand the possibility of misrepresentation, you do not understand representation at all. [See also Whiten & Byrne: "Tactical Deception in Primates" BBS 11(2) 1988.] And this is not all this Gibsonian view fails to allow children to understand. It also, in a quite different way, makes it difficult to understand that beliefs may come from many different sources, that they may come in degrees, and that there are intermediate steps between the mind and the world.

Three-year-olds believe that cognitive states come in only two varieties: total knowledge, when the world is related to the mind, and absolute ignorance, when it is not. In cases of misrepresentation, there is one object in the world and two people are both related to that object, but differences in the relations between the minds and the world lead to different representations of that object. In the sources and subjective probability cases (see sect. 3, paras. 11 & 12), there is one object in the world and two people arrive at the same representation of that object, but the relations between each person and the world are different. To distinguish between degrees of belief and sources of belief we need to understand that people's cognitive relations to the world may differ in significant ways even when both their ultimate beliefs and the objects in the world are the same. To understand both misrepresentation and sources and subjective probability requires "a representational model of the mind" (Forguson & Gopnik 1988). The absence of a representational model might also make it difficult for children to appreciate the intentionality of desire, the fact that objects are desired under a description, and that desires may vary as a result of variations in that description.

By the age of 4 or 5, children, at least in our culture, have developed something more like a representational model of mind. Accordingly, almost all psychological functioning in 5-year-olds is mediated by representations. Desires, perceptions, beliefs, pretenses, and images all involve the same basic structure, one sometimes described in terms of propositional attitudes and propositional contents. These mental states all involve representations of reality, rather than direct relations to reality itself. Perceiving, desiring, and believing become perceiving, desiring, and believing that. Rather than distinguishing different types of mental states with different relations to a real world of objects, the child sees that all mental states involve the same abstract representational structure. Many characteristics of all psychological states, such as their diversity and their tendency to change, can be explained by the properties of representations. This unified view provides predictions, explanations, and interpretations that were not possible earlier.

I have argued above and elsewhere (Gopnik 1990) that a representational model of the mind is an essential part of the commonsense adult notion of intentionality. Clearly, if we asked most adults whether their beliefs and desires were "intentional" they would not understand what we were talking about. The philosophical term is shorthand for a number of related commonsense beliefs. One of these is the belief that beliefs and desires are about things. More important, we also believe that the contents of beliefs and desires are not the things themselves but what we think about those things. As a consequence, beliefs and desires may vary as our understanding of the world varies. The intuitions that are captured by philosophical notions like "opacity" or "propositional contents" or the rest (Quine 1956) concern the mediated nature of representation and the possibility of misrepresentation that results. If common sense were Gibsonian, if we all thought that cognition was direct and unmediated, then, I suggest, we would not think that psychological states were intentional.6

4. Developing an understanding of your own mind

So far, I have argued that there is evidence for a deep change in children's understanding of the psychological states of other people somewhere between the ages of 3 and 4. Is this change in the child's concept of the mind applicable only to others, or to the self as well?

Suppose the commonsense and philosophical accounts of privileged first-person beliefs about the mind were correct. Then we should predict that, however erroneous children's views of the psychological states of others might be, they would not make similar errors in their understanding of their own psychological states. If they knew anything at all about their own minds, then what they knew ought to be substantially correct. This knowledge certainly should not be systematically and consistently wrong.

One could argue that the changes in the understanding of the mind we have described so far reflect the difficulties inherent in inferring the psychological states of other people, and indeed this argument has been made in the literature (Harris 1990; Johnson 1988). Children certainly seem to assume that their current beliefs about the object will be shared by others. There is a difficulty here, however, for our concept of what is real is constituted by our current beliefs. Children's errors might come from two rather different sources: They might believe (1) that everyone else believes what they do or (2) that everyone believes what is actually the case, where what is the case, for children as well as adults, is specified by their current beliefs. The problem with false beliefs might not be that children assume that particular beliefs are shared, but simply that they assume that others believe what is the case.

One way to differentiate between these possibilities is by looking at children's understanding of their immediately past beliefs or other psychological states, states they no longer hold. If children's problems genuinely stem from a kind of egocentrism, they should not have similar difficulties in understanding their own immediately past beliefs. These beliefs are, after all, as much their own as their current beliefs.

Moreover, our immediate recall of psychological states that occurred within the span of a few minutes ought to count as part of first-person psychological experience. Phenomenologically, first-person experience extends beyond the immediate moment (indeed, it is hard to see how it could exist, or at least how we could know it did, otherwise). In adults, the span of introspection, as it were, is at least this long.7 If I were to describe my psychological experience when I see a candy box and then discover that it is full of pencils, I would say that I experience my belief that the box is full of candy, and then the change in my belief that comes with the new discovery, with all its attendant phenomenological vividness and detail. The very psychological experience of the change in belief depends on the fact that I continue to remember the previous belief.

We have compared children's performance on "other minds" tasks like those we have just described with their performance on tasks that require them to report their own immediately past mental states. We find that children make errors about their own immediately past states that are similar to the errors they make about the states of others. This is true even though children ought to have direct first-person psychological evidence of these past states.

In our original experiment (Gopnik & Astington 1988) we presented children with a variety of deceptive objects, such as the candy box full of pencils, and allowed them to discover the true nature of the objects. We then asked the children both the false-belief question, "What will Nicky think is inside the box?" and the appearance-reality question, "Does it look as if there are candies in the box?" But we also asked children about their own immediately past beliefs about the box: "When you first saw the box, before we opened it, what did you think was inside it?" The pattern of results on all three tasks was similar: One-half to two-thirds of the 3-year-olds said they had originally thought there were pencils in the box. They apparently failed to remember their immediately previous false beliefs. Moreover, children's ability to answer the false-belief question about their own belief was significantly correlated with their ability to answer the question about the other's belief and the appearance-reality question, even with age controlled. This result was recently replicated by Moore et al. (1990), who also found significant correlations with children's understanding of subjective probability.

The children were given an additional control task. They saw a closed container (a toy house) with one object inside it; then the house was opened, the object was removed, and a different object was placed inside. Children were asked, "When you first saw the house, before we opened it, what was inside it?" This question had the same form as the belief question. It asked, however, about the past physical state of the house rather than about a past mental state. Children were included in the experiment only if they answered this question correctly, demonstrating that they could understand that the question referred to the past and remember the past state of affairs. Several different syntactic forms of the question were asked to further ensure that the problem was not a linguistic one.

Recently, this experiment has been replicated, with additional controls, by Wimmer and Hartl (1991) (who also draw similar philosophical conclusions). First, they phrased the control question as a "think" question ("When you first saw the box, before we opened it, what did you think was inside it?"), ensuring that the additional syntactic complexity of the embedded clause was not confusing the children. Moreover, they used exactly the same materials and question in the control and belief tasks. In the control task, children were shown a box whose contents were subsequently changed. Three-year-olds were again fully capable of answering the question when it referred to an actual change in the world rather than a change in belief. Finally, in the belief task the children were explicitly asked to identify the object when they first saw it. All the children initially said they thought there was candy in the box, confirming that they did, in fact, have the false belief initially. The results were similar to the previous results.

In a second set of experiments (Gopnik & Graf 1988) we investigated children's ability to identify the sources of their beliefs, elaborating on a question first used by Wimmer et al. (1988a). Children found out about objects that were placed in a drawer in one of three ways: they either saw the objects, were told about them, or figured them out from a simple clue. Then we asked "What's in the drawer?" and all the children answered correctly. Immediately after this question we asked about the source of the child's knowledge: "How do you know there's an x in the drawer? Did you see it, did I tell you about it, or did you figure it out from a clue?" Again, 3-year-olds made frequent errors on this task. Although they knew what the objects were, they could not say how they knew. They might say, for example, that we had told them about an object, even though they had actually seen it. Their performance was better than chance but still significantly worse than the performance of 4-year-olds, who were almost error-free. In a follow-up experiment (O'Neill & Gopnik 1991) we used different and simpler sources (tell, see, and feel) and presented children with only two possibilities at a time. We also included a control task to ensure that the children understood the meaning of "tell," "see," and "feel." Despite these simplifications of the task, the performance of the 3-year-olds was similar to their performance in the original experiment.

In a more recent experiment we have investigated whether children could understand changes in psychological states other than belief (Gopnik & Slaughter 1991). When the child pretends an object is another object, or imagines an object, there need be no understanding of the relation between those representations and reality. For young children, pretenses and images are unrelated to reality: They cannot be false - or true, for that matter. And, as we have seen, these "silly" mental states are apparently well understood by 3-year-olds. Similarly, we have seen that even without an understanding of the representational process, 3-year-olds can tell that someone else may not see something they see themselves and vice versa. These simple perceptual judgments should also be possible for young children.

To test this we placed children in a series of situations in which they were in one psychological state and that state was changed, situations comparable to the belief-change task. For example, we asked children to pretend that an empty glass was full of hot chocolate. Then the glass was "emptied" and the child was asked to pretend that it was full of lemonade. We then asked them, "When I first asked you, . . . what did you pretend was in the glass then?" We also asked them to imagine a blue dog and then a pink cat and asked them, "When I first asked you, . . . what did you think of then?" In both these cases, as in the belief-change case, the child is in one mental state and then another mental state, even though nothing in the real world has changed. In these cases, however, unlike the belief case, the mental states need not be interpreted as involving any representational relation to the real world. In a perceptual task, we placed the children on one side of a screen from which one object was visible and then moved them to the other side from which another object was visible and asked them to recall their past perception. The 3-year-olds were fully able to perform these tasks; only one out of 30, for example, failed to remember an earlier pretense. However, a majority of these same 3-year-olds were unable to perform the belief-change task.

We also tested children's understanding of changes in states with a world-to-mind direction of fit, such as desires and intentions. In three different tasks we presented children with situations in which their desires were satiated and so changed. Cases of satiation are particularly interesting because they induce both changes in desires themselves and changes in the representation of the desired object. Satiation not only changes our desires, it changes our very notion of whether the object is desirable. The delicious, tempting mousse becomes cloying and nauseating by the fourth portion; the exciting new toy becomes a bore. Moreover, these changes are not parasitic on belief changes but stem from the nature of the desire itself (see Astington & Gopnik 1991b for discussion).

In all three tasks a sizable minority of 3-year-olds (30%-40%) reported that they had been in their final state all along. Thus, for example, hungry children were fed crackers at snack time until they were no longer hungry and were then asked, "Were you hungry before we had snack?" A third of them reported that they were not.

Nevertheless, just as in the Flavell task, which measured children's understanding of the desires of others ("Does Ellie think the cookie is yummy or yucky?") (Flavell et al. 1990), so in our similar task ("Were you hungry before?") the children were better at reporting past desires than past beliefs. Indeed, the absolute levels of performance were strikingly similar in our task and Flavell's.

Finally, we examined children's ability to report their earlier intentions when they did not actually come to fruition. Children were given a red crayon and asked to draw a ball; halfway through, the experimenter said, "Why, that drawing looks like this big red apple; could you make it a big red apple?" Children complied. Then we asked the children to report their past intention; 50% of the 3-year-olds reported that they had originally intended to draw the apple.

5. Information-processing alternatives

Research with very young children raises the question of the information-processing demands of the tasks. Are tasks difficult for some reason other than a conceptual one? Such concerns are raised whenever a developmental study claims to have found an inability in children or a difference between children and adults. As Wellman has cogently argued (Wellman 1990), no individual task or experiment is immune from such criticisms. (For some recent examples of criticisms of individual experiments along these lines see Freeman et al. 1991; Lewis & Osborne 1991.) We can, nevertheless, consider whether a pattern of results, taken across a number of studies, is best explained in terms of some difference in the information-processing ability of 3- and 4-year-olds or in terms of a conceptual difference. Let me consider a few such arguments that might be applied to particular experiments and show how they are incompatible with other pieces of evidence.

5.1. Understanding questions about the past. Wimmer and Hartl (1991) have shown that children can readily understand questions identical to those asked in the experiment ("What did you think was inside the box?") when they refer to changes in the actual world rather than to strictly mental changes. Moreover, in our experiments children are able to understand questions about their past psychological states, such as pretenses, even when they are syntactically identical to the belief questions ("What did you pretend was in the glass?" vs. "What did you think was in the box?").

5.2. Remembering past events. Common observation suggests that 3-year-old children can remember events that occurred minutes earlier. The control tasks and the pretense, imagination, and perception tasks test this possibility experimentally. Children could understand the questions as well as recall the past mental states in these cases. Note that in the pretense, image, and perception tasks, there was no change of the objects in reality, only a psychological change. Children, nevertheless, had no difficulty remembering their past psychological states in these cases. Certainly, no simple memory problem can explain these results.

5.3. Reality seduction. One might argue that children are subject to a kind of "reality seduction," feeling compelled to answer questions by referring to what is actually true, although they are intellectually able to appreciate false belief. Children are not "reality seduced," however, in the pretense or image tasks. In both these tasks they could answer the question by referring to the reality, the empty glass or the objects actually in front of them, but they do not. Nor are they "reality seduced" in the perception task, where they are able to report their earlier limited vision. Also, this objection does not apply to the sources or subjective probability tasks. The "How do you know?" question does not ask the child to say anything about appearance or reality but simply how two events, an event in the world and the belief that results, are related. Similarly, in the subjective probability tasks the children do not know what the real state of affairs is. Finally, the objection also fails to apply to the desire or intention tasks.

5.4. Embarrassment. To an adult, the most likely explanation for the results in the belief-change task is embarrassed deception or perhaps self-deception: The child was too ashamed to say he had been wrong (the most likely explanation for similar phenomena in adulthood). This explanation could not apply to any of the other belief tasks, such as the classic false-belief or appearance-reality tasks, on which children behave in similar ways at a similar time. Nor could it apply to the desire or intention tasks. Changing your mind does not entail an admission of a mistake or embarrassment; there is no obvious reason, for example, why you should be embarrassed that you are no longer hungry after you have a snack. Nor, finally, does it apply to the sources task; here, in fact, the children are quite willing to confess their ignorance.

No doubt the BBS commentators will supply other possibilities for particular tasks. What is important, however, is the pattern of development across a variety of tasks. To be parsimonious, an information-processing account would need to demonstrate some information-processing complication that was common to the difficult tasks (false-belief, appearance-reality, sources, desires, intentions [both for self and others], and subjective probability) and not common to the easy tasks (pretense, perception, imagination, and physical-state change, both for self and others).

The conceptual account does provide such an explanation. Consider what a child with a Gibsonian (1979) model of the mind might and might not understand. Such children should understand that mental states may exist independently of physical ones (the image and pretense cases) and that such states are subject to change. They should understand that the spatial relation between a mind and a world may determine how much of the world the mind apprehends (the perception case). On the other hand, they should understand neither cases of misrepresentation (the false-belief cases) nor the mediated nature of cognition (the sources and subjective probability cases). In the world-to-mind cases they might fail to understand how satiation alters representation. In particular, they might see "desirability" as an objectively apprehended feature of the world. They might also fail to understand the way that intention mediates between desire and action (see Astington & Gopnik 1991b).

6. Developmental evidence as support for the "theory-theory"

The striking finding, from our present point of view, is the parallel between children's understanding of the psychological states of others and their understanding of their own immediately past psychological states. If the epistemological version of our commonsense intuitions is correct, the process of discovering our own psychological states is fundamentally different from the process of discovering someone else's states. We see the other person look in the box or feel inside it, or see him grimace at the taste of the cookie, and infer his beliefs, their sources, and his desires. In our own case we simply report the changes in our psychological states, or their sources, by referring to our psychological experience. We need not infer them; we need not, indeed, use any theoretical account of the mind at all.

If our findings correctly reflect the experience of young children, the situation is very different. In each of our studies, children's reports of their own immediately past psychological states are consistent with their accounts of the psychological states of others. When they can report and understand the psychological states of others, in the cases of pretense, perception, and imagination, they report having had those psychological states themselves. When they cannot report and understand the psychological states of others, in the cases of false belief and sources, they report that they have not had those states themselves. In the source case, they simply say they do not know the answer, or they respond at random when pressed. In the belief case they report that they have always held their current belief. Moreover, and in some ways most strikingly, the intermediate cases, such as the case of desire, are intermediate both for self and other.

These findings are consistent with other findings in social psychology (Nisbett & Ross 1980; Nisbett & Wilson 1977) and in neuropsychology, such as cases of agnosia and amnesia, in which there are similar dissociations between subjects' behaviors and their reports of their own psychological experience. The findings from children also differ, however. First, the social psychological cases typically involve rather abstract reports of the motivations for particular actions. The subjects in the Nisbett and Wilson experiments, for example, can report their past desires perfectly accurately, though they are mistaken about the underlying reasons for those desires. In these cases, even in the commonsense view, our adult psychological experience may be hazy or unclear or even nonexistent, and subjects may simply confabulate. The children, on the other hand, have first-person experiences different from those of adults in cases where the adult first-person experiences are crystal clear. Second, unlike the amnesics or agnosics, these children are perfectly capable of reporting their psychological states in some cases — precisely those that are consistent with their general theory of the mind.

The developmental evidence I have described here fails to support some views of the origins of commonsense psychology. In particular, there is little evidence that the commonsense psychological account of intentionality is an innately determined aspect of our understanding of the mind. Some aspects of commonsense psychology may indeed be innately given (see, for example, Baron-Cohen 1991b; Hobson 1991; Meltzoff & Gopnik 1993). But other crucial aspects of that understanding, and particularly the idea of intentionality, appear to be constructed somewhere between 3 and 4. Before this point children's accounts of the mind may be very different from adult accounts.8

At the same time, the evidence does not support the position that our commonsense psychological ideas are simply stances we adopt: the view of Dennett (1987) and Davidson (1980). If this were true, we would imagine that the process of acquiring those ideas would be less a process of knowledge construction than of enculturation. We would learn how to psychologize appropriately much the way we learn to eat politely or dress appropriately. This may indeed be the case for certain sophisticated esthetic or emotional states. It does not seem to be true, however, for the simple states we have investigated here. In particular, it is interesting that young children actually acquire an incorrect account of commonsense psychology, what we have called a Gibsonian or copy account. They adhere to this view with some stubbornness at about age 3, even in the face of error. This view is not the adult one; if it is a stance or what Wittgenstein calls a "form of life," it is one unique to 3-year-olds.

The developmental evidence, however, and particularly the evidence we have presented here, also fails to support the view that the intentionality of psychological states is discovered through first-person experience, as common sense, Searle's view, or the simulation view would suggest. The evidence suggests that there is a dissociation between the psychological states that cause the children's behavior and their sincere conscious report of their psychological experience. Either we have to deny that 3-year-olds have psychological states like ours, or we have to deny that their experience of those states is like ours. If we take the second option, we must also deny that children learn about these psychological states through direct psychological experience, because their experience is wrong. Three-year-olds are the converse of "swamp-things": The swamp-things think they have intentional states even though they do not; the 3-year-olds think they do not even though they do. But, like swamp-things, they make the point that intentionality may be divorced from phenomenology.9

We could, of course, take the first option and deny that 3-year-olds have psychological states like ours at all, given that their experience of those states is so different from our own. In fact, Searle's (1990) most recent views suggest that he might take this position. We might even want to deny that the psychological states or experiences of children have anything to do with our states and experiences as adults. Children, like computers in Searle's view, might just fail to be the sorts of things to which intentionality could be attributed.

This argument might seem plausible when we are comparing human beings and machines. We have no prima facie reason to suppose that things made of silicon will have the same properties, or be explained in the same way, as things made of blood and bone. It seems much less plausible when we are considering creatures that are made out of exactly the same stuff that we are, creatures that are grown and not engineered or made, creatures that talk, that reflect, that answer questions, that even have much of the same commonsense psychological terminology that we do. Moreover, these creatures do seem to have accurate beliefs about some aspects of their own psychological states, precisely the ones that are consistent with their accounts of the minds of others. Indeed, the case is even stronger: These creatures are not just like us, they are us, they are Searle himself, and me, myself (the first-person consciousness who formulates these lines), a few years ago.

The current findings support the view that has come to be called the "theory-theory": commonsense psychological beliefs are constructed as a way of explaining ourselves and others. A number of investigators have recently proposed more general parallels between theory change in science and some kinds of cognitive development (Carey 1985; 1988; Gopnik 1984; 1988; Gopnik & Wellman, in press; Karmiloff-Smith 1988; Karmiloff-Smith & Inhelder 1974/75; Keil 1987; Wellman & Gelman 1987). Some version of this theory is widely accepted among developmental psychologists investigating children's understanding of the mind (Flavell 1988; Forguson & Gopnik 1988; Perner 1991b; Wellman 1990; though see Leslie, 1987, and Harris, 1990, for opposing views). The developmental evidence suggests that children construct a coherent, abstract account of the mind which enables them to explain and predict psychological phenomena. Although this theory is implicit rather than explicit, this kind of cognitive structure appears to share many features with a scientific theory. Children's theories of the mind postulate unobserved entities (beliefs and desires) and laws connecting them, such as the practical syllogism. Their theories allow prediction, and they change (eventually) as a result of falsifying evidence. (For a detailed exposition and justification of these claims see Gopnik & Wellman 1992; in press.) Moreover, the child's theory of mind is equally applicable to the self and to others.

There may well be certain sorts of first-person psychological experience that serve as evidence for the commonsense psychological theory and are used in its construction. Unlike Ryle (1949), for example, I would not want to suggest that this theory is reducible to behavior; more strongly, I would also deny that it is based on behavior. Some experiences might indeed be directly related to certain psychological states, as is perhaps the case with simple sensations. Whether this is the case, what types of first-person psychological experience are involved in theory construction and what role they play in the development of the theory of mind are empirical questions. The important point is that the theoretical constructs themselves, and particularly the idea of intentionality, are not the result of some direct first-person apprehension that is then applied to others. Rather, they are the result of cognitive construction. The child constructs a theory that explains a wide variety of facts about the child's experience and behavior and about the behavior and language of others.

7. The illusion of expertise

So far I have argued that our beliefs about our own intentional psychological states parallel our beliefs about the intentional psychological states of others. If the sources of the two kinds of beliefs are really similar, however, why do we also believe that there is such a profound difference between them? The move from the phenomenological claim that our first-person psychological experience feels direct and immediate to the epistemological claim that our first-person knowledge of our psychological states is direct and immediate seems compelling. Yet the evidence suggests that our adult beliefs about the origins of our beliefs about beliefs are just as mistaken as our 3-year-old beliefs about beliefs.

I will suggest a speculative analogy between the illusion of privileged knowledge of our own psychological states and what might be called the illusion of expertise. In the case of expertise, direct and immediate experience may be combined with a long, indirect (and theoretical) cognitive history.

We know that certain kinds of expertise appear to cause changes not just in knowledge, but in perception (Chi et al. 1982). Master chess-players report that they no longer see the board in terms of individual pieces and squares but as a set of competing forces and powers. They need not calculate that an isolated king is vulnerable; they see he is (Chase & Simon 1973; De Groot 1978). The baseball player, asked why he got seven hits in a game, says he was seeing the ball well. Diagnosticians and fortune tellers see cancer in the face of a patient, or an unhappy marriage in the stance of a client. Dowsers feel the pull of water when, we believe, they are actually reading off subtle geographical cues. Expertise and immediacy go hand in hand.

Do the experts really perceive the king's strength or the cancer or the strike? The notion of perception is itself ambiguous in much the same way that other mental state concepts are. It might be used to capture a particular phenomenological quality, a sense of directness or immediacy. In this sense the chess-player does perceive the strength of the king. But it might also mean something about the cognitive relation between an experience and its object. It might mean that the experience is reliably, and reasonably directly, caused by the object. In the case of the expert, the experiences do not bear this cognitive relation to the objects they are about. In this second sense, the chess-player does not perceive the strength of the king, though he may think he does.

Consider this thought experiment: Suppose we lived in Kasparovia, a world in which everyone has been trained to play chess from an early age and has essentially mastered the game by age 5, and where chess is essential to any kind of social survival. Imagine that chess is so pervasive that no one can remember a time when the game did not exist. It seems rather likely that players in such a world would see their adult chess expertise in largely perceptual terms. They might indeed differentiate their knowledge of chess from their knowledge of other games like parcheesi, or dominoes, which they learn in a more ordinary way. They might say that when they choose the right move in parcheesi they calculate, infer, construct heuristics, and so on, but that when they choose the right move in chess they just see it on the board.

Our knowledge of our ordinary psychological states in this world might be like our hypothetical knowledge of chess in Kasparovia. As Ryle (1949) pointed out long ago, our expertise about minds and behavior is great; about our own minds and behavior, which, after all, we live with every day, it is even greater. The force of this expertise might be such that our beliefs about psychological states, particularly our own, would appear to be perceptually immediate, noninferential, direct.

One story of the relation between expertise and perception might run as follows: In developing forms of expertise, we construct an implicit theory of the realm in which we are expert. Various kinds of genuine perception act as important evidence for that theory. In applying it, we rely on our genuine perception of particularly common or crucial pieces of such evidence. The diagnostician really does see the patient's pallor, feel the pulse, and so forth. The dowser really does see the hill contours. Given this evidence, or even a single piece of it, the diagnostician draws on vast, nonperceptual, theoretical knowledge to make implicit inferences about the patient. He quite appropriately applies the theory: "the patient has cancer." The diagnostician is engaging in the same cognitive processes that a less experienced medical student might engage in, but more quickly and surely. To the diagnostician, however, none of this may be going on at all. From his first-person view, the cancer may simply be perceived.


How might we apply this story in the case of psychological states? We saw that one possible source of evidence for the child's theory may be first-person psychological experiences that may themselves be the consequence of genuine psychological perceptions. For example, we may well be equipped to detect certain kinds of internal cognitive activity in a vague and unspecified way, what we might call "the Cartesian buzz." Given the recurrence of such experiences in adulthood, and other appropriate contextual and behavioral evidence, the adult may now apply the full theoretical apparatus of the theory of mind, including the idea of intentionality, and so draw conclusions or inferences about his own psychological states. These inferences lead to a psychological experience with a particular complex phenomenological character. Given the effects of expertise, we may be quite unaware of these inferences, and so interpret these complex, theory-laden experiences as direct perceptions of our psychological states.10

The fact that a particular ascription of an intentional psychological state is based in part on psychological experience may mislead us into thinking that the entire theoretical apparatus itself is so given. This fact might also be one element in our sense that beliefs about our own psychological states are direct or privileged. For example, our genuinely special and direct access to certain kinds of first-person evidence might account for the fact that we can draw some conclusions about our own psychological states when we are perfectly still and silent. We might not be able to draw similarly confident inferences about another person. This is the sort of fact that lends credence to the commonsense picture.

The crucial point, however, is that the theoretical knowledge in all these cases does not actually come from the experience, even if we feel it does. A chess novice, no matter how keen-sighted and quick-witted, would never be able to see the strength of the king. The patient's pallor or pulse, all by itself, does not spell cancer. No matter how certain the dowser may be that he feels a tug on his wand, our knowledge of the physical world suggests that he is really using implicit geographical knowledge. It would be wrong to say that in these cases the source of the expert's knowledge is direct perception, rather than implicit theoretical construction.

It might not be possible for experts, even when pressed, to tell which parts of their belief come from direct perceptual experiences and which are the result of theoretical knowledge. Experts' first-person experience alone may be simply insufficient to make this differentiation, though they might attempt a "rational reconstruction" of the source of the belief. To really answer this question we would need to know something about the developmental course of the expert's knowledge, how the diagnostician moved from fumbling medical student to assured expert. We would need a developmental account of the diagnostician's expertise.

The theoretical nature of the expert's experience is particularly clear when the theory is wrong. Suppose a chess-player had not yet learned about a particular gambit. We might expect that the player's perceptions would be equally affected: the knight that was, in fact, about to capture the king would appear misleadingly weak and helpless. A diagnostician who had failed to keep up with the most recent literature might see cancer where there was none. This kind of failure argues against a genuinely perceptual account of these beliefs. Surely, what these experts would say in such cases is that they believed that the king was strong or the patient had cancer, even that it looked as if the king was strong or the patient had cancer, but that further evidence proved that this was wrong.

Similarly, in the case of psychological beliefs, 3-year-olds' misleading beliefs about their psychological states apparently affect their experience of those states. Just as the chess-player or the diagnostician might be mistaken, so 3-year-old children are mistaken about their own psychological states. Even if we use some psychological experience as evidence for our psychological state ascriptions, we can and do clearly override that evidence with great ease. We do so, moreover, in cases other than that of self-deception. Perhaps some experiential evidence was available to the children in our tasks. That evidence, however, was clearly outweighed for them by theoretical convictions about the kind of position they were in and what they could or could not have known or believed. This is perfectly proper if, for them, "beliefs" and "desires" are theoretical terms that are, in principle, equally applicable to themselves and others. If they are theoretical terms for them, they are theoretical terms for us. Remember again that these children are us, when we knew less about the mind than we do now.

8. Why I am not a behaviorist

One response to the sorts of arguments I have been making in this target article is to suggest that they are really old-fashioned behaviorism, but they need not imply behaviorism at all. I do not deny that there are internal psychological states; on the contrary, discovering the nature of such states is the fundamental task of psychology. Nor do I deny that there are full, rich, first-person psychological experiences of the Joycean or Woolfian kind. I even suggest that there may be cases in which psychological states do lead directly to psychological experiences, cases in which there is genuine perception of a psychological state.

What I do want to argue is that intentionality is not such a case. In this instance, the relationship between underlying psychological states, behavior, and experiences is rather different from what we had supposed. The commonsense picture proposes that we have intentional psychological states, then we have psychological experiences of the intentionality of those states, then we observe our own behavior that follows those states, and finally, we attribute the states to others with similar behavior. I suggest a different sequence: First we have psychological states, observe the behaviors and the experiences they lead to in ourselves and others, construct a theory about the causes of those behaviors and experiences that postulates intentionality, and then, in consequence, we have experiences of the intentionality of those states.

9. First-person knowledge and cognitive science revisited

We return now to the question of the role of first-person psychological experience of intentionality in the development of cognitive science. I have suggested there is empirical evidence that this experience is the result of the construction of an implicit theory. One characteristic of such theoretical structures, in children as well as in scientists, is that they are "defeasible," subject to change and revision. They are neither fundamentally different from scientific psychological claims nor are they epistemologically privileged. Whether they require revision or replacement, whether "intentionality" will survive in some modified form or go the way of "phlogiston," is an open question, one more likely to be resolved by the actual progress of cognitive science than by a priori speculation.

Moreover, not only are theoretical structures themselves defeasible, but so are the decisions about the types of evidence that are relevant to the theory. As the theory develops, pieces of evidence that were crucial at one stage of its construction may turn out to be irrelevant later. To take a hackneyed example, it seems likely that such features of objects as color and texture were important in identifying chemical elements at one point. Thus the yellowness and shininess of gold played a part in the construction of theories of the nature of gold. Even now we might use these features as useful identifying marks, rough guides for when the application of the theoretical term "gold" is appropriate. Nevertheless, it would be wrong to say, now, that color and texture were essential to the concept of gold, or that something that was not yellow and shiny could not be gold.

In the same way, genuine first-person psychological perceptions might play a role in the formation of the commonsense theory of intentionality. As adults we might also use such evidence as a way of identifying the occurrence of some mental state. Nevertheless, the theory itself might be applied without that experience. In fact, 3-year-old psychologists already seem to be willing to apply the theory to themselves on the basis of contextual and behavioral evidence, even when it is not supported by first-person experience. They must do this, because their direct, first-person experience could not, presumably, tell them that they had believed there were pencils in the box before they opened it. As somewhat older and wiser scientific psychologists, we might similarly identify unconscious states, or the states of a computer, as intentional, in spite of the absence of psychological experience, in much the same way that we might identify colorless gases as gold.

10. The moral: Listen to children

The most important point of this target article is not that one account or another of first-person knowledge is the right one. Rather, it is that in deciding among the possibilities sketched by philosophers, and inventing new ones, we ought to consider much more than adult experience. Experience does not wear its epistemological history on its sleeve. If our goal is to say how we actually develop knowledge of the mind, or indeed knowledge of any kind, we must look beyond an analysis of the conceptual structures that we have as adults, or "just-so" stories about how those structures might have arisen. We must look to actual evidence about how we develop such knowledge.


The experiments I have reported here might have come out differently. Children might instead have proved to be accurate in reporting their own psychological states, and to develop an understanding of the states of others by analogy with those states. If this had been the case it would have provided evidence for the commonsense view of first- and third-person knowledge. The "theory-theory" would have been challenged by such evidence as, I have argued, the commonsense view is challenged by the actual evidence.

It is of course true that no single source of evidence, developmental, psychological, neurological, or conceptual, can answer epistemological questions definitively. Moreover, developmental facts may not always reflect the acquisition of knowledge; sometimes they may be the consequence of other factors: maturation, information-processing changes, enculturation, and so on. Children are nevertheless an important and often neglected source of information about where knowledge comes from. Studying them may force us to reexamine the deeply rooted assumptions of our adult common sense and provide us with new and surprising answers to ancient questions. Two thousand years ago in Plato's Meno, Socrates combined one of the first-recorded philosophical discussions of the genesis of knowledge with the first-recorded experiment in cognitive development. Perhaps it is time to try it again.11

ACKNOWLEDGMENTS
The writing of this target article was supported by NSF grant no. BNS-8919916. The manuscript had the benefit of an unusually large number of unusually helpful comments. Earlier versions of the manuscript were presented at the Berkeley Cognitive Science Seminar and at John Flavell's Children's Theory of Mind Seminar at Stanford University; I am grateful to all the participants in those seminars who commented on it. Particularly helpful were comments by Irvin Rock, Steve Palmer, Bert Dreyfus, John Searle, Tony Dardis, and Charles Siewert. Among developmentalists, Janet Astington, John Flavell, Henry Wellman, Simon Baron-Cohen, and Andrew Meltzoff all read drafts and made many helpful suggestions. I am grateful to them and to other theory-of-mind investigators, particularly Josef Perner, Paul Harris, Heinz Wimmer, and Alan Leslie, for much stimulating work, thought, and conversation. On three separate occasions, Clark Glymour asked an apparently simple question which led me to rewrite the manuscript completely. I am grateful to him anyway.

NOTES
1. Ordinary language embodies the assumptions of common sense. When we try to categorize our experiences phenomenologically we typically talk about "experiences of trees" or "experiences of our beliefs" or "experiences of others' desires," but these phrases already make assumptions about the relations between experiences and objects. For my purposes it would be more appropriate to say "tree-experiences" or "own-belief-experiences" or "others'-desire-experiences" - appropriate, but unfortunately, not English.

2. Throughout this article I am adopting the position that has been called "naturalistic epistemology" in the literature: the idea that an account of the naturalistic connections between world and mind is the only thing that can tell us how knowledge is acquired. I will not argue for this general position here. The philosophical accounts I will discuss do make claims about the nature of the world and the mind, and the connections between them. If these accounts are instead construed as claims about ordinary language or phenomenology, the evidence I will present might not be relevant to them. On the other hand, they would also not be relevant to psychology and cognitive science.

3. Exact ages are not crucial. In fact, according to the theory-theory, we would expect to find, as we indeed do, wide variation in the ages at which successive theories develop. We would expect to find similar sequences of development, however. I will use age as a rough way of referring to successive developments.

4. Very recently, a number of studies have appeared in which it is reported that, under some conditions, more 3-year-olds will produce the "right answer" in false-belief-like tasks than in the standard tasks (e.g., Freeman et al. 1991; Lewis & Osborne 1991; Siegel & Beattie 1991). The conditions are very varied and often the results are difficult to interpret (for a detailed discussion see Gopnik & Wellman, in press). There are also questions about the replicability of some of these results. At best, it appears that there may be evidence of some fragile and fragmentary false-belief understanding in some 3-year-olds under some conditions, particularly when there is extensive contextual support. Some of this evidence suggests that children may have their first glimmerings of false belief when they are forced to confront counterevidence. Wellman and Bartsch (1988), for example, found that some 3-year-old children would, with prompting, produce some false-belief claims as explanations for anomalous behaviors. Similarly, in a recent study, Mitchell and Lacohee (1991) found that children in a belief-change task who selected an explicit physical token of their earlier belief (a picture of what they thought was in the box) were better able to avoid later misinterpretation of that belief. That is, these children seemed to recognize the contradiction between the action they had just performed (picking a picture of candies), which was well within the scope of their memory, and their theoretical prediction about their past belief. These results actually provide an interesting line of evidence in support of the "theory-theory." We have suggested that children may initially treat representation in general and false belief in particular as a kind of auxiliary hypothesis, invoked only to deal with particular anomalies (Gopnik & Wellman 1992). This is in sharp contrast to the more powerful "theory-like" predictive and general understanding of pretenses, images, and perceptions in these children, and the powerful, predictive, and general understanding of false belief in 4- and 5-year-olds.

5. According to the views of Gibson and Dretske, at least some kinds of perception are not viewed as representational in the usual sense. Instead, the idea is that there is a more direct causal link between the world and the mind. In Gibson's terminology, perception involves a kind of "resonance" between objects in the world and the organism; in Dretske's, information (in the technical sense) flows from the object to the organism. I suggest that 3-year-olds understand all relations between the world and the mind, including those that involve beliefs, in a similar way.

6. If we consider the philosophical stories that feature in discussions of intentionality, from the morning star and the evening star (Frege 1892), to Scott and the author of Waverley (Russell 1905), to Ortcutt the spy (Quine 1956), all of them involve cases of misrepresentation. In each case, an observer's initial beliefs about an object turn out to be misleading or seriously incomplete later on.

7. The "span of introspection" is probably longer. The first-person experience of, say, a patient with Korsakov's syndrome,who only has access to information within the short-term span,but not the long-term span, seems to be radically different fromour own (see Tulving 1985).

8. One possibility might be that the 3- and 4-year-old shift is the result of the maturation of an innately determined capacity. A number of authors have suggested that the child's theory of mind is indeed partly based on various kinds of innately specified knowledge (Baron-Cohen 1991b; Hobson 1991; Leslie 1987; Meltzoff & Gopnik 1993). Leslie (1987) in particular has argued for an innate "theory of mind module" or, in his most recent work, two innate modules, one maturing at 9 months and another at 18 months. So far as I know no one actively working in the field, not even Leslie, has suggested that the 3- to 4-year-old shift is the result of the maturation of such a module. (For other arguments against this possibility see Astington & Gopnik 1991a; Gopnik & Wellman, in press.)

9. The fact that 3-year-olds actually exist makes me think they are a more convincing example, but this may be an antiphilosophical prejudice.

10. An interesting problem concerns why, if the impression of perceptual immediacy comes from expertise, we do not extend this impression to our equally expert beliefs about other people. Two explanations come to mind. First, we do, I suggest, have precisely this impression when we are dealing with those we know very intimately, young children or lovers, for example. It is this experience that makes the notion of telepathy so plausible to common sense. There are others, however, who are genuinely strange to us, and with them, the impression of immediacy rapidly breaks down. The breakdown is then retroactively extended even to intimates. A second and related factor involves our commonsense notion of causality, which emphasizes spatial contact. In common sense, our own minds and brains, our psychological experiences and our psychological states, are located in the same place, inside our skins. Other people's minds are located in a different place. We assume that we cannot really see them because their skins get in the way. But no such difficulty arises with our own minds.

11. In the last 2,000 years we have had Socrates and Piaget. This discussion may sound Piagetian, and so it should. Almost all of Piaget's substantive claims about the child's conception of the mind have turned out to be wrong. The Piagetian notion of general stagelike, domain-independent changes is also not supported by the more recent research. Piaget's general constructivist approach, however, informs both the empirical research and the theoretical position outlined here. I, for one (and, I suspect, others in the field), would be pleased to think of the "theory-theory" as the genuine inheritor of the tradition of genetic epistemology.

BEHAVIORAL AND BRAIN SCIENCES (1993) 16, 15-28 Printed in the United States of America

The psychology of folk psychology

Alvin I. Goldman
Department of Philosophy, University of Arizona, Tucson, AZ 85721
Electronic mail: [email protected]

Abstract: Folk psychology, the naive understanding of mental state concepts, requires a model of how people ascribe mental states to themselves. Competent speakers associate a distinctive memory representation (a category representation, CR) with each mentalistic word in their lexicon. A decision to ascribe such a word to oneself depends on matching to the CR an instance representation (IR) of one's current state. As in visual object recognition, evidence about a CR's content includes the IRs that are or are not available to trigger a match. This poses serious problems for functionalism, the theory-of-mind approach to the meaning of mental terms. A simple functionalist model is inadequate because (1) the relational and subjunctive (what would have happened) information it requires concerning target states is not generally available and (2) it could lead to combinatorial explosion. A modified functionalist model can appeal to qualitative (phenomenological) properties, but the earlier problems still reappear. Qualitative properties are important for sensations, propositional attitudes, and their contents, providing a model that need not refer to functional (causal-relational) properties at all. The introspectionist character of the proposed model does not imply that ascribing mental states to oneself is infallible or complete; nor is the model refuted by empirical research on introspective reports. Empirical research on "theory of mind" does not support any strict version of functionalism but only an understanding of mentalistic words that may depend on phenomenological or experiential qualities.

Keywords: folk psychology; functionalism; introspection; mental concepts; propositional attitudes; self-attribution of mental states; sensations; theory of mind

1. Introduction

The central mission of cognitive science is to reveal the real nature of the mind, however familiar or foreign that nature may be to naive preconceptions. The existence of naive conceptions is also important, however. Prescientific thought and language contain concepts of the mental, and these concepts deserve attention from cognitive science. Just as scientific psychology studies folk physics (Hayes 1985; McCloskey 1983), namely, the common understanding (or misunderstanding) of physical phenomena, so it must study folk psychology, the common understanding of mental states. This subfield of scientific psychology is what I mean by the phrase "the psychology of folk psychology."

The phrase "folk psychology" often bears a narrower sense than the one intended here. It usually designates a theory about mental phenomena that common folk allegedly hold, a theory in terms of which mental concepts are understood. In the present usage, "folk psychology" is not so restricted. It refers to the ordinary person's repertoire of mental concepts, whether or not this repertoire invokes a theory. Whether ordinary people have a theory of mind (in a suitably strict sense of "theory") is controversial, but it is indisputable that they have a folk psychology in the sense of a collection of concepts of mental states. Yet people may not have, indeed, probably do not have, direct introspective access to the contents (meanings) of their mental concepts, any more than they have direct access to the contents of their concepts of fruit or lying. Precisely for this reason we need cognitive science to discover what those contents are.

The study of folk psychology, then, is part of the psychology of concepts. We can divide the psychology of concepts into two parts: (a) the study of conceptualization and classification in general, and (b) the study of specific folk concepts or families of folk concepts such as number concepts, material object concepts, and biological kind concepts. The study of folk psychology is a subdivision of (b), the one that concerns mental state concepts. It presupposes that mature speakers have a large number of mentalistic lexemes in their repertoire, such as happy, afraid, want, hope, pain (or hurt), doubt, intend, and so forth. These words are used in construction with other phrases and clauses to generate more complex mentalistic expressions. The question is: What is the meaning, or semantical content, of these expressions? What is it that people understand or represent by these words (or phrases)?

This target article advances two sorts of theses: methodological and substantive. The general methodological thesis is that the best way to study mental-state concepts is through the theoretico-experimental methodology of cognitive science. We should consider the sorts of data structures and cognitive operations involved in mental attributions (classifications), both attributions to oneself and attributions to others. Although this proposal is innocuous enough, it is not the methodology that has been followed, or even endorsed in principle, by philosophers who have given these questions the fullest attention. And even the cognitive scientists who have addressed these questions empirically have not used the specific methodological framework I shall recommend below.

© 1993 Cambridge University Press 0140-525X/93 $5.00+.00


Goldman: Folk psychology

In addition to methodological theses, this target article advances some substantive theses, both negative and positive. On the negative side, some new and serious problems are raised concerning the functionalist approach to mental-state concepts (as well as some doubts about pure computationalism). On the positive side, the article supports a prominent role for phenomenology in our mental-state concepts. These substantive theses are put forward tentatively because I have not done the kind of experimental work that my own methodological precepts would require for their corroboration; nor does existing empirical research address these issues in sufficient detail. Theoretical considerations, however, lend them preliminary support. I might state at the outset, however, that I am more confident of my negative thesis - about the problems facing the relevant form of functionalism - than of my positive theses, especially the role of phenomenology in the propositional attitudes.

2. Proposed methodology

Philosophical accounts of mental concepts have been strongly influenced by purely philosophical concerns, especially ontological and epistemological ones. Persuaded that materialism (or physicalism) is the only tenable ontology, philosophers have deliberately fashioned their accounts of the mental with an eye to safeguarding materialism. Several early versions of functionalism (e.g., Armstrong 1968; Lewis 1966) were deliberately designed to accommodate type-physicalism,1 and most forms of functionalism are construed as heavily physicalist in spirit. Similarly, many accounts of mental concepts have been crafted with epistemological goals in mind, for example, to avoid skepticism about other minds.

According to my view, the chief constraint on an adequate theory of our commonsense understanding of mental predicates is not that it should have desirable ontological or epistemological consequences; rather, it should be psychologically realistic: Its depiction of how people represent and ascribe mental predicates must be psychologically plausible.

An adequate theory need not be ontologically neutral, however. As we shall see in the last section, for example, an account of the ordinary understanding of mental terms can play a significant role in arguments about eliminativism.2 Whatever the ontological ramifications of the ordinary understanding of mental language, however, the nature of that understanding should be investigated purely empirically, without allowing prior ontological prejudices to sway the outcome.

In seeking a model of mental-state ascription (attribution), there are two types of ascriptions to consider: ascriptions to self and ascriptions to others. Here we focus primarily on self-ascriptions. This choice is made partly because I have discussed ascriptions to others elsewhere (Goldman 1989; 1992a; 1992b), and partly because ascriptions to others, in my view, are "parasitic" on self-ascriptions (although this is not presupposed in the present discussion).

Turning now to specifics, let us assume that a competent speaker/hearer associates a distinctive semantical representation with each mentalistic word, whatever form or structure this representation might take. This (possibly complex) representation, which is stored in long-term memory, bears the "meaning" or other semantical properties associated with the word. Let us call this representation the category representation (CR), since it represents the entire category the word denotes. A CR might take any of several forms (see Smith & Medin 1981), including: (1) a list of features treated as individually necessary and jointly sufficient for the predicate in question; (2) a list of characteristic features with weighted values, where classification proceeds by summing the weights of the instantiated features and determining whether the sum meets a specified criterion; (3) a representation of an ideal instance of the category, to which target instances are compared for similarity; (4) a set of representations of previously encountered exemplars of the category, to which new instances are compared for similarity; or (5) a connectionist network with a certain vector of connection weights. The present discussion is intended to be neutral with respect to these theories. What interests us, primarily, is the semantical "contents" of the various mentalistic words, or families of words, not the particular "form" or "structure" that bears these contents.

Perhaps we should not say that a CR bears the "meaning" of a mental word. According to some views of meaning, after all, naive users of a word commonly lack full mastery of its meaning; only experts have such mastery (see Putnam 1975). But if we are interested in what guides or underpins an ordinary person's use of mental words, we want an account of what he understands or represents by that word. (What the expert knows cannot guide the ordinary person in deciding when to apply the word.) Whether or not this is the "meaning" of the word, it is what we should be after.

Whatever form a CR takes, let us assume that when a cognizer decides what mental word applies to a specified individual, active information about that individual's state is compared or matched to CRs in memory that are associated with candidate words. The exact nature of the matching process will be dictated by the hypothesized model of concept representation and categorization. Because our present focus is self-ascription of mental terms, we are interested in the representations of one's own mental states that are matched to the CRs. Let us call such an active representation, whatever its form or content, an instance representation (IR). The content of such an IR will be something like, "A current state (of mine) has features φ₁, . . . , φₙ." Such an IR will match a CR having the content: φ₁, . . . , φₙ. Our aim is to discover, for each mental word M, its associated CR; or more generally, the sorts of CRs associated with families of mental words. We try to get evidence about CRs by considering what IRs are available to cognizers, IRs that might succeed in triggering a match.
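The CR/IR matching scheme just described can be sketched computationally. The following Python fragment is an illustrative toy, not a claim about actual cognitive architecture: it implements CR form (2) above (weighted characteristic features summed against a criterion), and every feature name, weight, and criterion value is an invented assumption.

```python
# Toy sketch of self-ascription as CR/IR matching, using the weighted-feature
# form of a category representation (option (2) in the text).
# All feature names, weights, and criteria below are illustrative assumptions.

# Each CR pairs feature weights with a criterion; a word is ascribed when the
# summed weights of the instantiated features meet that criterion.
CRS = {
    "headache": {"features": {"aching": 2.0, "in-head": 2.0, "throbbing": 1.0},
                 "criterion": 3.0},
    "itch":     {"features": {"tingling": 2.0, "urge-to-scratch": 2.0},
                 "criterion": 3.0},
}

def ascribe(ir_features):
    """Return the mental words whose CRs the instance representation matches."""
    matches = []
    for word, cr in CRS.items():
        score = sum(w for f, w in cr["features"].items() if f in ir_features)
        if score >= cr["criterion"]:
            matches.append(word)
    return matches

# An IR of one's current state: "a current state (of mine) has features ..."
print(ascribe({"aching", "in-head"}))  # → ['headache']
```

The point of the sketch is only to make the evidential strategy concrete: whatever IRs a cognizer can actually construct constrain what the CRs can plausibly contain.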

To make this concrete, consider an analogous procedure in the study of visual object-recognition; we will use the work of Biederman (1987) as an illustration. Visual object-recognition occurs when an active representation of a stimulus that results from an image projected to the retina is matched to a stimulus category or concept category, for example, chair, giraffe, or mushroom. The psychologist's problem is to answer three coordinated questions: (1) What (high-level) visual representations (corresponding to our IRs) are generated by the retinal image? (2) How are the stimulus categories represented in memory (these representations correspond to our CRs)? (3) How is the first type of representation matched against the second so as to trigger the appropriate categories?

Biederman (1987) hypothesizes that stimulus categories are represented as arrangements of primitive components, namely, volumetric shapes such as cylinders or bricks, which he calls geons (for "geometrical ions"). Object recognition occurs by recovering arrangements of geons from the stimulus image and matching these to one of the distinguishable object models, which is paired with an entry-level term in the language (such as lamp, chair, giraffe, and so forth). The theory rests on a range of research supporting the notion that information from the image can be transformed (via edge extraction, etc.) into representations of geons and their relations. Thus, the hypothesis that, say, chair is represented in memory by an arrangement (or several arrangements) of geons is partly the result of constraints imposed by considering what information could be (a) extracted from the image (under a variety of viewing circumstances), and (b) matched to the memory representation. In similar fashion I wish to examine hypotheses about the stored representations (CRs) of mental-state predicates by reflecting on the instance representations of mental states that might actually be present and capable of producing appropriate matches.

Although we have restricted ourselves to self-ascriptions, there are still at least two types of cases to consider: ascriptions of current mental states - "I have a headache (now)" - and ascriptions of past states - "I had a headache yesterday." Instance representations in the two cases are likely to be quite different, obviously, so they need to be distinguished. Ascriptions of current mental states, however, have primacy, so these will occupy the center of our attention.

3. Problems for functionalism

In the cognitive scientific as well as the philosophical community, the most popular account of people's understanding of mental-state language is the "theory of mind" theory, according to which naive speakers, even children, have a theory of mental states and understand mental words solely in terms of that theory. The most precise statement of this position is the philosophical doctrine of functionalism, which states that the crucial or defining feature of any type of mental state consists of its causal relations to (1) environmental or proximal inputs, (2) other types of mental states, and (3) behavioral outputs. (Detailed examples are presented below.) Since what is at stake is the ordinary understanding of the language of the mental, the doctrine is generally called analytic, or commonsense, functionalism. Another doctrine that fits under the label "functionalism" is scientific functionalism (roughly what Block [1978] calls "psychofunctionalism"), according to which it is a matter of scientific fact that mental states are functional states: That is, mental states have functional properties (causal relations to inputs, other mental states, and outputs) and should be studied in terms of these properties. I shall have nothing to say against scientific functionalism. I do not doubt that mental states have functional properties; nor do I challenge the proposal that mental states should be studied (at least in part) in terms of these properties. But this doctrine does not entail that ordinary people understand or represent mental words as designating functional properties and functional properties only. States designated by mental words might have functional properties without ordinary folk knowing this, or without their regarding it as crucial to the identity of the states. But since we are concerned here exclusively with the ordinary person's representations of mental states, only analytic functionalism is relevant to our present investigation.

Philosophers usually discuss analytic, or commonsense, functionalism quite abstractly, without serious attention to its psychological realization. I am asking us to consider it as a psychological hypothesis, that is, a hypothesis about how the cognizer (or his cognitive system) represents mental words. It is preferable, then, to call the type of functionalism in question representational functionalism (RF). This form of functionalism is interpreted as hypothesizing that the CR associated with each mental predicate M represents a distinctive set of functional properties, or functional role, FM.3 Thus, RF implies that a person will ascribe a mental predicate M to himself when and only when an IR occurs in him bearing the message, "role FM is now instantiated." That is, ascription occurs precisely when there is an IR that matches the functional-role content of the CR for M. (This may be subject to some qualification. Ascription may not require perfect or complete matching between IR and CR; partial matching may suffice.) Is RF an empirically plausible model of mental self-ascription? In particular, do subjects always get enough information about the functional properties of their current states to self-ascribe in this fashion (in real time)?

Before examining this question, let us sketch RF in more detail. The doctrine holds that folk wisdom embodies a theory, or a set of generalizations, which articulates an elaborate network of relations of three kinds: (a) relations between distal or proximal stimuli (inputs) and internal states, (b) relations between internal states and other internal states, and (c) relations between internal states and items of overt behavior (outputs). Here is a sample of such laws from Churchland (1979). Under heading (a) (relations between inputs and internal states) we might have:

When the body is damaged, a feeling of pain tends to occur at the point of damage.
When no fluids are imbibed for some time, one tends to feel thirsty.
When a red apple is present in daylight (and one is looking at it attentively), one will have a red visual experience.

Under heading (b) (relations between internal states andother internal states) we might have:

Feelings of pain tend to be followed by desires to relieve that pain.
Feelings of thirst tend to be followed by desires for potable fluids.
If one believes that P, where P elementarily entails Q, one also tends to believe that Q.

Under heading (c) (relations between internal states and outputs) we might have:


Sudden sharp pains tend to produce wincing.
States of anger tend to produce frowning.
An intention to curl one's finger tends to produce the curling of one's finger.

According to RF, each mental predicate picks out a state with a distinctive collection, or syndrome, of relations of types (a), (b), and/or (c). The term pain, for example, picks out a state that tends to be caused by bodily damage, tends to produce a desire to get rid of that state, and tends to produce wincing, groaning, and so on. The content of each mental predicate is given by its unique set of relations, or functional role, and nothing else. In other words, RF attributes to people a purely relational concept of mental states.
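RF's purely relational concept can be sketched as a toy data structure. The fragment below is an illustrative assumption, loosely following Churchland's sample laws: each predicate's CR is nothing but a syndrome of typical causes and effects, so classifying a state token requires information about its relata and nothing intrinsic to the state.

```python
# Toy sketch of RF: each mental predicate's CR is a purely relational syndrome
# of typical causes (types (a)) and effects (types (b) and (c)).
# The relation lists are illustrative, loosely following Churchland's examples.

FUNCTIONAL_ROLES = {
    "pain":   {"caused_by": {"bodily-damage"},
               "produces": {"desire-to-relieve-pain", "wincing"}},
    "thirst": {"caused_by": {"fluid-deprivation"},
               "produces": {"desire-for-fluids"}},
}

def type_identify(observed_causes, observed_effects):
    """Classify a state token purely by its observed relata, as RF requires."""
    for word, role in FUNCTIONAL_ROLES.items():
        if role["caused_by"] & observed_causes and role["produces"] & observed_effects:
            return word
    return None  # no relational information, no classification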

There are slight variations and important additional nuances in the formulation of functionalism. Some formulations, for example, talk about the causal relations among stimulus inputs, internal states, and behavioral outputs. Others merely talk about transitional relations, that is, one state following another. Another important wrinkle in an adequate formulation is the subjunctive or counterfactual import of the relations in question. For example, part of the functional role associated with desiring water would be something like this: If a desire for water were accompanied by a belief that a glass of water is within arm's reach, then (other things being equal) it would be followed by extending one's arm. To qualify as a desire for water, an internal state need not actually be accompanied by a belief that water is within reach, nor need it be followed by an extending of the arm. It must, however, possess the indicated subjunctive property: If it were accompanied by this belief, the indicated behavior would occur.

We are now in a position to assess the psychological plausibility of RF. The general question I wish to raise is, Does a subject who self-ascribes a mental predicate always (or even typically) have the sort of instance information required by RF? This is similar to an epistemological question sometimes posed by philosophers, namely, whether functionalism can give an adequate account of one's knowledge of one's own mental state. But the present discussion does not center on knowledge; it merely asks whether the RF model of the CRs and IRs in mental self-ascription adequately explains this behavior. Does the subject always have functional-role information about the target states - functional-role IRs - to secure a "match" with functional-role CRs?

There are three sorts of problems for the RF model. The first is ignorance of causes and effects (or predecessor and successor states). According to functionalism, what makes a mental state a state of a certain type (e.g., a pain, a feeling of thirst, a belief that 7 + 5 = 12, and so forth) is not any intrinsic property it possesses but its relations to other states and events. What makes a state a headache, for example, includes the environmental conditions or other internal states that actually cause or precede it, and its actual effects or successors. There are situations, however, in which the self-ascription of headache occurs in the absence of any information (or beliefs) about relevant causes or effects, predecessors or successors. Surely there are cases in which a person wakes up with a headache and immediately ascribes this type of feeling to himself. Having just awakened, he has no information about the target state's immediate causes or predecessors; nor need he have any information about its effects or successors. The classification of the state occurs "immediately," without waiting for any further effects, either internal or behavioral, to ensue. There are cases, then, in which self-ascription occurs in the absence of information (or belief) about critical causal relations.

It might be replied that a person need not appeal to actual causes or effects of a target mental state to type-identify it. Perhaps he determines the state's identity by its subjunctive properties. This brings us to the second problem confronting the RF model: ignorance of subjunctive properties. How is a person supposed to determine (form beliefs about) the subjunctive properties of a current state (instance or "token")? To use our earlier example, suppose the subject does not believe that a glass of water is within arm's reach. How is he supposed to tell whether his current state would have produced an extending of his arm if this belief were present? It is extremely difficult to get information about subjunctive properties, unless the RF model is expanded in ways not yet intimated (a possible expansion will be suggested in sect. 4). The subjunctive implications of RF, then, are a liability rather than an asset. Each CR posited by RF would incorporate numerous subjunctive properties, each presumably serving as a necessary condition for applying a mental predicate. How is a cognizer supposed to form IRs containing properties that match those subjunctive properties in the CR? Determining that the current state has even one subjunctive property is difficult enough; determining many such properties is formidably difficult. Is it really plausible, then, that subjects make such determinations in type-identifying their inner states? Do they execute such routines in the brief time-frames in which self-ascriptions actually occur? This seems unlikely. I have no impossibility proof, of course; but the burden is on the RF theorist to show how the model can handle this problem.

The third difficulty arises from two central features of functionalism: (1) The type-identity of a token mental state depends exclusively on the type-identity of its relata, that is, the events that are (or would be) its causes and effects, its predecessors and successors, and (2) the type-identity of an important subclass of a state's relata, namely, other internal states, depends in turn on their relata. To identify a state as an instance of thirst, for example, one might need to identify one of its effects as a desire to drink. Identifying a particular effect as a desire to drink, however, requires one to identify its relata, many of which would also be internal states whose identities are a matter of their relata; and so on. Complexity ramifies very quickly. There is no claim here of any vicious circularity, or vicious regress. If functionalism is correct, the system of internal state-types is tacked down definitionally to independently specified external states (inputs and outputs) via a set of lawful relations. Noncircular definitions (so-called Ramsey definitions) can be given of each functional state-type in terms of these independently understood input and output predicates (see Block 1978; Lewis 1970; 1972; Loar 1981; Putnam 1967). The problem I am raising, however, concerns how a subject can determine which functional type a given state-token instantiates. There is a clear threat of combinatorial explosion: Too many other internal states will have to be type-identified in order to identify the target state.

This problem is not easily quantified with precision, because we lack an explicitly formulated and complete functional theory; hence we do not know how many other internal states are directly or indirectly invoked by any single functional role. The problem is particularly acute, though, for beliefs, desires, and other propositional attitudes, which under standard formulations of functionalism have strongly "holistic" properties. A given belief may causally interact with quite a large number of other belief tokens and desire tokens. To type-identify that belief, it looks as if the subject must track its relations to each of these other internal states, their relations to further internal states, and so on until each path terminates in an input or an output. When subjunctive properties are added to the picture the task becomes unbounded, because there is an infinity of possible beliefs and desires. For each desire or goal-state there are indefinitely many beliefs with which it could combine to produce a further desire or subgoal. Similarly, for each belief there are infinitely many possible desires with which it could combine to produce a further desire or subgoal, and infinitely many other beliefs with which it could combine to produce a further belief. If the type-identification of a target state depends on tracking all of these relations until inputs and outputs are reached, clearly it is unmanageably complex. At a minimum, we can see this as a challenge to an RF theorist, a challenge that no functionalist has tried to meet, and one that looks forbidding.
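The combinatorial worry can be given a back-of-envelope illustration. The sketch below makes one simplifying assumption (an invented one, since no complete functional theory exists): type-identifying a state requires type-identifying each of b related internal states, each of which requires the same, until paths terminate in inputs or outputs after d steps. The count of required identifications then grows geometrically.

```python
# Back-of-envelope illustration of the combinatorial-explosion worry.
# Assumption (illustrative only): identifying one internal state requires
# identifying each of its `branching` relata, recursively, until every path
# terminates in an input or output after `depth` steps.

def states_to_identify(branching, depth):
    """Count the internal-state identifications the subject would need."""
    if depth == 0:
        return 0  # this path has bottomed out in an input or output
    # the immediate relata, plus everything their identification requires
    return branching + branching * states_to_identify(branching, depth - 1)

# Even modest numbers look hopeless for real-time self-ascription:
print(states_to_identify(5, 8))  # → 488280
```

The specific numbers carry no theoretical weight; the point is only that any nontrivial branching factor makes the task implausible within the brief time-frames in which self-ascriptions actually occur.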

Here the possibility of partial matching may assist the RF theorist. It is often suggested that visual object identification can occur without the IR completely matching the CR. This is how partially occluded stimuli can be categorized. Biederman (1987; 1990) argues that even complex objects, whose full representation contains six or more geons, are recognized accurately and fairly quickly with the recovery of only two or three of these geons from the image. Perhaps the RF theorist would have us appeal to a similar process of partial matching to account for mental-state classification.
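The partial-matching suggestion can be sketched on the Biederman analogy: classification succeeds when only a fraction of the full representation is recovered. The categories, component labels, and threshold in this fragment are all invented for illustration.

```python
# Toy sketch of partial matching on the Biederman analogy: a category is
# triggered when at least `min_match` of its components are recovered,
# even though the full model contains many more. All labels are invented.

MODELS = {
    "lamp":  {"g1", "g2", "g3", "g4", "g5", "g6"},    # six-geon arrangement
    "chair": {"g7", "g8", "g9", "g10", "g11", "g12"},
}

def classify(recovered, min_match=2):
    """Return the best category sharing at least min_match components, else None."""
    best, best_n = None, min_match - 1
    for name, components in MODELS.items():
        n = len(components & recovered)
        if n > best_n:
            best, best_n = name, n
    return best
```

A mental-state analogue would trigger a functional-role CR from only a few detected relations, which is the move assessed in the next paragraph.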

Although this might help a little, it does not get around the fundamental difficulties raised by our three problems. Even if only a few paths are followed from the target state to other internal states and ultimately to inputs and/or outputs, the demands of the task are substantial. Nor does the hypothesis of partial matching address the problem of determining subjunctive properties of the target state. Finally, it does not help much when classification occurs with virtually no information about neighboring states, as in the morning headache example. Thus, the simple RF model of mental self-ascription seems distinctly unpromising.

4. A second functionalist model

A second model of self-ascription is available to the RF theorist, one that assumes, as before, that for each mental predicate there is a functional-role CR. (That is what makes an RF model functional.) This second model, however, tries to explain how the subject determines which functional role a given state-token exemplifies without appealing to on-line knowledge of the state's current relations. How, after all, do people decide which objects exemplify other dispositional properties, for example being soluble in water? They presumably do this by inference from the intrinsic and categorical (i.e., nondispositional) properties of those objects. When a person sees a cube of sugar, he may note that it is white, hard, and granular (all intrinsic properties of the cube), infer from background information that such an object must be made of sugar, and then infer from further background information that it must be soluble in water (because all sugar is soluble in water). Similarly, the RF theorist may suggest that a subject can detect certain intrinsic and categorical properties of a mental state, and from this he can infer that it has a certain functional property, that is, a suitable relational and dispositional property.

Let us be a bit more concrete. Suppose that the CR for the word headache is the functional-role property F. Further suppose that there is an intrinsic (nonrelational) property E that mental states have, and the subject has learned that any state which has E also has the functional-role property F. Then the subject is in a position to classify a particular headache as a headache without any excessively demanding inference or computation. He just detects that a particular state-token (his morning headache, for example) has property E, and from this he infers that it has F. Finally, he infers from its having F that it can be labeled headache.4
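The second model's inference chain is simple enough to sketch directly: detect intrinsic property E, apply a learned E-to-F correlation, then label the state via the functional-role CR F. The property names and the single stored correlation below are illustrative assumptions, not part of the model itself.

```python
# Toy sketch of the second functionalist model's inference chain:
#   detect intrinsic property E  ->  learned correlation gives role F
#   ->  functional-role CR F gives the verbal label.
# All property names and correlations here are invented for illustration.

LEARNED_CORRELATIONS = {"E-headachy-feel": "F-headache-role"}  # E -> F
CR_LABELS = {"F-headache-role": "headache"}                    # F -> word

def self_ascribe(detected_intrinsic_property):
    """Infer a mental word from an intrinsic property via a learned E-F link."""
    functional_role = LEARNED_CORRELATIONS.get(detected_intrinsic_property)
    # returns None when no E-F correlation was learned for this property
    return CR_LABELS.get(functional_role)
```

As the next paragraph argues, the sketch makes the model's weak point visible: populating LEARNED_CORRELATIONS in the first place requires identifying F-instances by some other means.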

Although this may appear to save the day for RF, it actually just pushes the problem back to what we may call "the learning stage." A crucial part of the foregoing account is that the subject must know (or believe) that property E is correlated with property F - that whenever a state has E it also has F. But how could the subject have learned this? At some earlier time, during the learning stage, the subject must have detected some number of mental states, each of which had both E and F. But during this learning period he did not already know that E and F are systematically correlated. So he must have had some other way of determining that the E-states in question had F. How did he determine that? The original difficulties we cited for identifying a state's functional properties would have been at work during the learning stage, and they would have been just as serious then as we saw them to be in the first model. So the second model of functionalist self-ascription is not much of an improvement (if any) over the first.

In addition, the second model raises a new problem (or question) for RF: What are the intrinsic properties of mental states that might play the role of property E? At this point let us separate our discussion into two parts, one dealing with what philosophers call sensation predicates (roughly, names for bodily feelings and percepts), and the other dealing with propositional attitudes (believing that p, hoping that q, intending to r, etc.). In this section we restrict our attention to sensation predicates; in section 8 we shall turn to predicates for propositional attitudes.

What kinds of categorical, nonrelational properties might fill the role of E in the case of sensations? In addition to being categorical and nonrelational, such properties must be accessible to the system that performs the self-ascription. This places an important constraint on the range of possible properties.

There seem to be two candidates to fill the role of E: (1) neural properties and (2) what philosophers call "qualitative" properties (the "subjective feel" of the sensation). Presumably any sensation state or event has some neural properties that are intrinsic and categorical, but do these properties satisfy the accessibility requirements? Presumably not. Certainly the naive subject does not have "personal access" (in the sense of Dennett 1969; 1978b) to the neural properties of his sensations. That would occur only if the subject were, say, undergoing brain surgery and watching his brain in a mirror. Normally people do not see their brains; nor do they know much, if anything, about neural hardware. Yet they still identify their headaches without any trouble.

It may be replied that although there is no personal access to neural information in the ordinary situation, the system performing the self-ascription may have subpersonal access to such information. To exclude neural properties (i.e., neural concepts) from playing the role of E we need reasons to think that self-ascription does not use these properties of sensations. It goes without saying that neural events are involved in the relevant information processing; all information processing in the brain is, at the lowest level, neural processing. The question, however, is whether the contents (meanings) encoded by these neural events are about neural properties. This, to repeat, seems implausible. Neural events process visual information, but cognitive scientists do not impute neural contents to these neural events. Rather, they consider the contents encoded to be structural descriptions, things like edges and vertices (in low-level vision) or geons (in higher-level vision). When connectionists posit neurally inspired networks in the analysis of, say, language processing, they do not suppose that configurations of connection weights encode neural properties, but rather things like phonological properties.

There is more to be said against the suggestion that self-ascription is performed by purely subpersonal systems, which have access to neural properties. Obviously, a great deal of information processing does occur at subpersonal levels within the organism. But when the processing is purely subpersonal, no verbal labels seem to be generated that are recognizably "mental." There are all sorts of homeostatic activities in which information is transmitted about levels of certain fluids or chemicals; for example, the glucose level is monitored and then controlled by secretion of insulin. But we have no folk psychological labels for these events or activities. Similarly, there are information-processing activities in low-level vision and in the selection and execution of motor routines. None of these, however, are the subjects of primitive (pretheoretic) verbal labeling, certainly not "mentalistic" labeling. This strongly suggests that our spontaneous naming system does not have access to purely subpersonal information. Only when physiological or neurological events give rise to conscious sensations such as thirst, felt heat, or the like does a primitive verbal label get introduced or applied. Thus, although there is subpersonal detection of properties such as "excess glucose," these cannot be the sorts of properties to which the mentalistic verbal-labeling system has access.

We seem to be left, then, with what philosophers call "qualitative" properties. According to the standard philosophical view, these are indeed intrinsic, categorical properties that are detected "directly." Thus, the second model of functional self-ascription might hold that in learning to ascribe a sensation predicate like itch, one first learns the functional role constitutive of that word's meaning (e.g., being a state that tends to produce scratching, and so forth). One then learns that this functional role is realized (at least in one's own case) by a certain qualitative property: itchiness. Finally, one decides that the word is self-ascribable whenever one detects in oneself the appropriate qualitative property, or quale, and infers the instantiation of its correlated functional role. This model still depicts the critical IR as a representation of a functional role, and similarly depicts the CR to which the IR is matched.

We have found a kind of property, then, which might plausibly fill the role of E in the second functionalist model. But is this a model that a true functionalist would welcome? Functionalists are commonly skeptical about qualia (e.g., Dennett 1988; 1991a; Harman 1990). In particular, many of them wish to deny that there are any qualitative properties if these are construed as intrinsic, nonrelational properties. But this is precisely what the second model of RF requires: that qualitative properties be accepted as intrinsic (rather than functional) properties of mental states. It is not clear, therefore, how attractive the second model would be to many functionalists.

5. A qualitative model of sensation representation

Furthermore, although the second functionalist model may retain some appeal (despite its problems with the learning stage), it naturally invites a much simpler and more appealing model: one that is wholly nonfunctionalist. Once qualitative properties are introduced into the psychological story, what need is there for functional-role components? Why not drop the latter entirely? Instead, we hypothesize that both the CR and the IR for each sensation predicate are representations of a qualitative property such as itchiness (or, as I suggest below, some microcomponents of the property of itchiness). This vastly simplifies our story, for both the learning phase and the ascription phase itself. All one learns in the learning phase is the association between the term itch and the feeling of itchiness. At the time of self-ascription, all one detects (or represents) is itchiness, and then matches this IR to the corresponding CR. This is a very natural model for the psychology of sensation self-ascription, at least to anyone free of philosophical prejudice or preconception.

20 BEHAVIORAL AND BRAIN SCIENCES (1993) 16:1

Goldman: Folk psychology

Of course, some philosophers claim that qualitative properties are "queer," and should not be countenanced by cognitive science. There is nothing objectionable about such properties, however, and they are already implicitly countenanced in scientific psychology. One major text, for example, talks of the senses producing sensations of different "quality" (Gleitman 1981, p. 172). The sensations of pressure, A-flat, orange, or sour, for example, are sharply different in experienced quality (as Gleitman puts it). This use of the term quality refers to differences across the sensory domains, or sense modalities. It is also meaningful, however, to speak of qualitative differences within a modality, for example, the difference between a sour and a sweet taste. It is wholly within the spirit of cognitive science, then, to acknowledge the existence of qualitative attributes and to view them as potential elements of systems of representation in the mind (see Churchland 1985).

Although I think that this approach is basically on the right track, it requires considerable refinement. It would indeed be simplistic to suppose that for each word or predicate in the common language of sensation (e.g., itch) there is a simple, unanalyzable attribute (e.g., itchiness) that is the cognitive system's CR for that term. But no such simplistic model is required; most sensory or sensational experience is a mixture or compound of qualities, and this is presumably registered in the contents of CRs for sensations. Even if a person cannot dissect an experience introspectively into its several components or constituents, these components may well be detected and processed by the subsystem that classifies sensations.

Consider the example of pain. Pain appears to have at least three distinguishable dimensional components (see Campbell 1985; Rachlin 1985): intensity, aversiveness, and character (e.g., "stinging," "grinding," "shooting," or "throbbing"). Evidence for an intensity/aversiveness distinction is provided by Tursky et al. (1982), who found that morphine altered aversiveness reports from chronic pain sufferers without altering their intensity reports. In other words, although the pain still hurt as much, the subjects did not mind it so much. Now it may well be that a subject would not, without instruction or training, dissect or analyze his pain into these microcomponents or dimensions. Nonetheless, representations of such components or dimensions could well figure in the CRs for pain and related sensation words; in particular, the subsystem that makes classification decisions could well be sensitive to these distinct components. The situation here is perfectly analogous to the phonological microfeatures of auditory experience that the phonologist postulates as the features used by the system to classify sequences of speech.
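The idea that a classification subsystem can be sensitive to components the subject never introspectively dissects can be pictured in a toy sketch. Everything below (the dictionary encoding, the numeric values, the 0.5 cutoff) is an invented illustration of the dimensional proposal, not a model given in the article; only the dimension names follow the text's example.

```python
# Toy sketch: a pain state represented as a bundle of microcomponents.
# The dimensions (intensity, aversiveness, character) follow the example
# in the text; the numbers are invented for illustration.

pain_before_morphine = {"intensity": 0.8, "aversiveness": 0.9, "character": "throbbing"}
pain_after_morphine = {"intensity": 0.8, "aversiveness": 0.3, "character": "throbbing"}

def classify(sensation):
    """A classifier can respond to a single dimension even though the
    subject never analyzes the experience into its components."""
    return "pain" if sensation["intensity"] > 0.5 else "no pain"

# Mirroring the Tursky et al. finding as described above: aversiveness
# drops under morphine while intensity (and the classification) stays put.
print(classify(pain_before_morphine))  # pain
print(classify(pain_after_morphine))   # pain
```

The point of the sketch is only that the classifying subsystem and the introspecting subject need not share a vocabulary: the dictionary keys are available to `classify` whether or not anyone can report them.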

Granted that qualitative features (or their microcomponents) play some sort of role in sensation classification, it is (to repeat) quite parsimonious to hypothesize that such features constitute the contents of CRs for mental words. It is much less parsimonious to postulate functional-role contents for these CRs, with qualitative features playing a purely evidential or intermediate role. Admittedly, there are words in the language that do have a functional-style meaning, and their ascriptions must exemplify the sort of multistage process postulated by the complex version of functionalism. Consider the expression can-opener, for example. This probably means something like: device capable of (or used for) opening cans. To identify something as a can-opener, however, one does not have to see it actually open a can. One can learn that objects having certain intrinsic and categorical properties (shape, sharpness, and so on) also thereby exemplify the requisite functional (relational, dispositional) property. So when one sees an object of the right shape (etc.), one classifies it as a can-opener.

Although this is presumably the right story for some words and expressions in the language, it is not so plausible for sensation words. First, purely syntactic considerations suggest that can-opener is a functional expression, but there is no comparable suggestion of functionality for sensation words. Second, there are familiar difficulties from thought experiments, especially absent-qualia examples such as Block's Chinese nation (Block 1978).5 For any functional description of a system that is in pain (or has an itch), it seems as if we can imagine another system with the same functional description but lacking the qualitative property of painfulness (or itchiness). When we do imagine this, we are intuitively inclined to say that the system is not in pain (has no itch). This supports the contention that no purely functional content exhausts the meaning of these sensation words; qualitative character is an essential part of that content.

On a methodological note, I should emphasize that the use of thought experiments, so routine in philosophy, may also be considered (with due caution) a species of psychological or cognitivist methodology, complementary to the methodology described earlier in this article. Not only do applications of a predicate to actual cases provide evidence about the correlated CR, but so do decisions to apply or withhold the predicate for imaginary cases. In the present context, reactions to hypothetical cases support our earlier conclusion that qualitative properties are the crucial components of CRs for sensation words.

Quite a different question about the qualitative approach to sensation concepts should now be addressed, namely, its compatibility with our basic framework for classification. This framework says that self-ascription occurs when a CR is matched by an IR, where an IR is a representation of a current mental state. Does it make sense, however, to regard an instance of a qualitative property as a representation of a mental state? Is it not more accurate to say that it is a mental state, not a representation thereof? If we seek a representation of a mental state, should we not look for something entirely distinct from the state itself (or any feature thereof)?

Certainly the distinction between representations and what they represent must be preserved. The problem can be avoided, however, by a minor revision in our framework. On reflection, self-ascription does not require the matching of an instance representation to a category representation; it can involve the matching of an instance itself to a category representation. The term instance representation was introduced because we wanted to allow approaches like functionalism, in which the state itself is not plausibly matched to a CR, only a representation of it. Furthermore, we had in mind the analogy of perceptual categorization, where the cognizer does not match an actual stimulus to a mental representation of the stimulus category but an inner representation of the stimulus to a category representation.

In this respect, however, the analogy between perceptual recognition and sensation recognition breaks down. In the case of sensation there can be a matching of the pain itself, or some features of the pain, to a stored structure containing representations of those features. Thus, we should revise our framework to say that categorization occurs when a match is effected between (1) a category representation and (2) either (a) a suitable representation of a state or (b) a state itself. Alternative (b) is especially plausible in the case of sensations, because it is easy to suppose that CRs for sensations are simply memory "traces" of those sensations, which are easily activated by reoccurrences of the same (or similar) sensations.

This new picture might look suspicious because it seems to lead to the much-disparaged doctrines of infallibility and omniscience about one's own mental states. If a CR is directly matched to an instance of a sensation itself, is all possibility of error not precluded? And would it not be impossible to be unaware of one's sensations, because correct matching is inevitable? Yet surely both error and ignorance are possible.

The proposed change implies neither infallibility nor omniscience. The possibility of error is readily guaranteed by introducing an assumption (mentioned earlier) of partial matching. If a partial match suffices for classification and self-ascription, there is room for inaccuracy. If we hypothesize that the threshold for matching can be appreciably lowered by various sorts of "response biases" (such as prior expectation of a certain sensation), this makes error particularly easy to accommodate. Ignorance can be accommodated in a different way, by supplementary assumptions about the role of attention. When attentional resources are devoted to other topics, there may be no attempt to match certain sensations to any category representation. Even an itch or a pain can go unnoticed when attention is riveted on other matters. Mechanisms of selective attention are critical to the full story of classification, but this large topic cannot be adequately addressed here.
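The partial-matching proposal admits a minimal computational sketch. Everything in it (the feature sets, the 0.6 threshold, the size of the expectation bias) is an invented illustration of the idea, not a model drawn from the article:

```python
def match_score(ir, cr):
    """Proportion of the category representation's features present in
    the instance: a crude stand-in for 'degree of partial match'."""
    return len(ir & cr) / len(cr)

def self_ascribe(ir, cr, threshold=0.6, expectation_bias=0.0):
    """Classify when the partial match clears a threshold. Prior
    expectation of the sensation lowers the threshold, which is how
    'response biases' make error easy to accommodate."""
    return match_score(ir, cr) >= threshold - expectation_bias

# Hypothetical CR for "itch" as a small feature set.
ITCH_CR = {"tingling", "localized", "urge-to-scratch"}

print(self_ascribe({"tingling", "localized"}, ITCH_CR))   # True: 2/3 match suffices
print(self_ascribe({"tingling"}, ITCH_CR))                # False: match too weak
print(self_ascribe({"tingling"}, ITCH_CR,
                   expectation_bias=0.3))                 # True: expectation induces error
```

The third call shows the error case in miniature: a state too weakly matching to count as an itch gets self-ascribed anyway once prior expectation lowers the bar. Ignorance, on the attention story, would correspond to `self_ascribe` simply not being invoked.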

Even with these points added, some readers might think that our model makes insufficient room for incidental cognitive factors in the labeling of mental states. Does not the work on emotions by Schachter and Singer (1962), for example, show that such incidental factors are crucial? My first response is that I am not trying to address the complex topic of emotions but restricting attention to sensations (in this section) and propositional attitudes (in sect. 8). Second, there are various ways of trying to accommodate the Schachter-Singer findings. One possibility, for example, is to say that cognitive factors influence which emotion is actually felt (e.g., euphoria or anger) rather than the process of labeling or classifying the felt emotion (see Wilson 1991). So it is not clear that the Schachter-Singer study would undermine the model proposed here, even if this model were applied to emotions (which is not my present intent).

6. The classical functionalist account of self-ascription

Classical functionalists such as Putnam (1960), Sellars (1956), and Shoemaker (1975) have not been oblivious to the necessity of making room in their theory for self-ascriptions or self-reports of mental states. They thought this could be done without recourse to anything like qualitative properties. How, then, does their theory go, and why do I reject it?

According to the classical account, it is part of the specification of a mental state's functional role that having the state guarantees a self-report of it; or, slightly better, it is part of the functional specification of a mental state (e.g., pain) that it gives rise to a belief that one is in that state (Shoemaker 1975). If one adopts the general framework of RF that I have presented, however, it is impossible to include this specification. Let me explain why.

According to our framework, a belief that one is in state M occurs precisely when a match occurs between a CR for M and a suitable IR. (Since we are now discussing functionalism again, we need not worry about "direct" matching of state to CR.) But classical functionalism implies that part of the concept of being in state M (a necessary part) is having a belief that one is in M. Thus, no match can be achieved until the system has detected the presence of an M-belief. However, to repeat, what an M-belief is, according to our framework, is the occurrence of a match between the CR for M and an appropriate IR. Thus, the system can only form a belief that it is in M (achieve an IR-CR match) by first forming a belief that it is in M! Obviously this is impossible.

What this point shows is that there is an incompatibility between our general framework and classical functionalism. They cannot both be correct. But where does the fault lie? Which should be abandoned?

A crucial feature of classical functionalism is that it offers no story at all about how a person decides what mental state he is in. Being in a mental state automatically entails, or gives rise to, the appropriate belief. Precisely this assumption of automaticity has until now allowed functionalism to ignore the sorts of questions raised in this paper. In other words, functionalism has hitherto tended to assume some sort of "nonrecognitional" or "noncriterial" account of self-reports. Allegedly, you do not use any criterion (e.g., the presence of a qualitative property) to decide what mental state you are in. Classification of a present state does not involve the comparison of present information with anything stored in long-term memory. Just being in a mental state automatically triggers a classification of yourself as being in that state.

It should be clear, however, that this automaticity assumption cannot and should not be accepted by cognitive science, for it would leave the process of mental-state classification a complete mystery. It is true, of course, that we are not introspectively aware of the mechanism by which we classify our mental states. But we are likewise not introspectively aware of the classification processes associated with other verbal labeling, the labeling of things such as birds or chairs, leapings or strollings. Lack of introspective access is obviously no reason for cognitive scientists to deny that there is a microstory of how we make - or how our systems make - mental classifications. There must be some way a system decides to say (or believe) that it is now in a thirst state rather than a hunger state, that it is hoping for a rainy day rather than expecting a rainy day. That is what our general framework requires. In short, in a choice between our general framework and classical functionalism (with its assumption of automatic self-report), cognitive science must choose the former. Any tenable form of functionalism, at least any functionalism that purports to explain the content of naive mental concepts, must be formulated within this general framework. This is just how RF has been formulated. It neither assumes automaticity of classification nor does it create a vicious circularity (by requiring the prior detection of a classification event - a belief - as a necessary condition for classification). So RF is superior to classical functionalism for the purposes at hand. Yet RF, we have seen, has serious problems of its own. Thus, the only relevant form of functionalism is distinctly unpromising.

7. Dual representations

Thus far we have assumed that there is a single CR (however complex) paired with each mental word. This assumption, however, is obviously too restrictive. Indeed, there are many lines of research that suggest the possibility of dual representations for a single word. For example, Biederman (1987; 1990) hypothesizes that a single word (e.g., piano) is associated with two or more visual object-models (e.g., a model for a grand piano and a model for an upright). In addition, however, there may be a wholly nonvisual representation associated with piano (e.g., keyboard instrument that has a certain characteristic sound). Some neuropsychological evidence for dual representations has been provided by Warrington and her coworkers. Warrington and Taylor (1978) studied patients with right-hemisphere and left-hemisphere lesions. They were given two sorts of tasks: one involving perceptual categorization by physical identity and the other involving semantic categorization by functional identity. The right-hemisphere group showed impairment on the perceptual categorization task. The left-hemisphere group showed no impairment on this task but they did show impairment on the second task. For example, they could not pair (pictures of) two deck chairs that were physically dissimilar but had the same function. Warrington and Taylor accordingly postulate two anatomically separate, postsensory categorical stages in object recognition, which occur in sequence. In a somewhat similar vein, Cooper, Schacter, and their colleagues have found evidence of distinct forms of representation of visual objects: a purely structural representation stored in implicit memory and a semantical/functional representation available to explicit memory (Cooper 1991; Schacter et al. 1990; 1991). Other types of evidence or arguments for dual representations have been given by Landau (1982), McNamara and Sternberg (1983), Putnam (1975), and Rey (1983). [See also Bradshaw & Nettleton: "The Nature of Hemispheric Specialization in Man" BBS 4(1) 1981; and Shallice's précis of From Neuropsychology to Mental Structure, BBS 14(3) 1991.]

If the idea of dual representation is applied to sensation terms, or mental terms generally, it might support a hybrid theory that features both a qualitative representation and an independent, nonqualitative representation, which might be functional. The qualitative representation could accommodate self-ascriptions and thereby avert the problems we posed for pure functionalism. However, it is not clear how happy functionalists would be with this solution. As we have remarked, most functionalists are skeptics about qualia. Any dual-representation theory that invokes qualitative properties (not functionally reconstructed) is unlikely to find favor with them.

Furthermore, functionalist accounts of mental concepts face another difficulty even if the functional representation is only one member of a subject's pair of representations. According to functional-style definitions of mental terms, the meaning of each term is fixed by the entire theory of functional relations in which it appears (see Block 1978; Lewis 1972; Loar 1981). This implies that every little change anywhere in the total theory - every addition of a new law or revision of an old law - entails a new definition of each mentalistic expression. Even the acquisition of a single new mental predicate requires amending the definition of every other such predicate, since a new predicate introduces a new state-type that expands the set of relations in the total theory. This holistic feature of functionalism entails all-pervasive changes in one's repertoire of mental concepts. Such global changes threaten to be as computationally intractable as the familiar "frame problem" (McCarthy & Hayes 1969), especially because there is a potential infinity of mental concepts, owing to the productivity of "that" clauses and similar constructions. The problem of conceptual change for theory-embedded concepts is acknowledged and addressed by Smith, Carey, and Wiser (1985), but they are concerned with tracking descent for individual concepts, whereas the difficulty posed here is the computational burden of updating the vast array of mental concepts implicated by any theoretical revision.

8. A phenomenological model for the attitudes?

Returning to my positive theory, I have thus far only proposed a qualitative approach to sensation concepts. Let us turn now from sensations to propositional attitudes, such as believing, wanting, and intending. This topic can be divided into two parts (Fodor 1987): the representation of attitude types and the representation of attitude contents. Wanting there to be peace and believing there will be peace are different attitudes because their types (wanting and believing) are different. Intending to go shopping and intending to go home are different attitudes because, although their type is the same, their contents differ. In this section we consider how attitude types are represented; in the next we consider attitude contents.

Philosophical orthodoxy favors a functionalist approach to attitude types. Even friends of qualia (e.g., Block 1990a) feel committed to functionalism when it comes to desire, belief, and so forth. Our earlier critiques of functionalism, however, apply with equal force here. Virtually all of our antifunctionalist arguments (except the absent-qualia arguments) apply to all types of mental predicates, not just to sensation predicates. So there are powerful reasons to question the adequacy of functionalism for the attitude types. How, then, do people decide whether a current state is a desire rather than a belief, a hope rather than a fear?

In recent literature some philosophers use the metaphor of "boxes" in the brain (Schiffer 1981). To believe something is to store a sentence of mentalese in one's "belief box"; to desire something is to store a sentence of mentalese in one's "desire box"; and so on. Should this metaphor be taken seriously? I doubt it. It is unlikely that there are enough "boxes" to have a distinct one for each attitude predicate. Even if there are enough boxes in the brain, does the ordinary person know enough about these neural boxes to associate each attitude predicate with one of them (the correct one)? Fodor (1987) indicates that box-talk is just shorthand for treating the attitude types in functional terms. If so, this just reintroduces the forbidding problems already facing functionalism.

Could a qualitative or phenomenological approach work for the attitude types? The vast majority of philosophers reject this approach out of hand, but this rejection is premature. I shall adduce several tentative(!) arguments in support of this approach.

First, a definitional point. The terms qualia and qualitative are sometimes restricted to sensations (percepts and somatic feelings), but we should not allow this to preclude the possibility of other mental events (beliefs, thoughts, etc.) having a phenomenological or experiential dimension. Indeed, at least two cognitive scientists (Baars 1988; Jackendoff 1987) have defended the notion that "abstract" or "conceptual" thought often occupies awareness or consciousness, even if it is phenomenologically "thinner" than modality-specific experience. Jackendoff appeals to the tip-of-the-tongue phenomenon to argue that phenomenology is not confined to sensations. When one tries to say something but cannot think of the word, one is phenomenologically aware of having the requisite conceptual structure, that is, of having a determinate thought-content one seeks to articulate. What is missing is the phonological form: the sound of the sought-for word. The absence of this sensory quality, however, does not imply that nothing (relevant) is in awareness. Entertaining the conceptual unit has a phenomenology, just not a sensory phenomenology.

Second, in defense of phenomenal "parity" for the attitudes, I present a permutation of Jackson's (1982; 1986) argument for qualia (cf. Nagel 1974). Jackson argues that qualitative information cannot be captured in physicalist (including functionalist) terms. Imagine, he says, that a brilliant scientist named Mary has lived from birth in a cell where everything is black, white, or gray. (Even she herself is painted all over.) By black-and-white television she reads books, engages in discussion, and watches experiments. Suppose that by this means Mary learns all physical and functional facts concerning color, color vision, and the brain states produced by exposure to colors. Does she therefore know all facts about color? There is one kind of fact about color perception, says Jackson, of which she is ignorant: what it is like (i.e., what it feels like) to experience red, green, and so on. These qualitative sorts of facts she will come to know only if she actually undergoes spectral experiences.

Jackson's example is intended to dramatize the claim that there are subjective aspects of sensations that resist capture in functionalist terms. I suggest a parallel style of argument for attitude types. Just as someone deprived of any experience of colors would learn new things upon being exposed to them, namely, what it feels like to see red, green, and so forth, so (I submit) someone who had never experienced certain propositional attitudes, for example, doubt or disappointment, would learn new things on first undergoing these experiences. There is "something it is like" to have these attitudes, just as much as there is "something it is like" to see red. In the case of the attitudes, just as in the case of sensations, the features to which the system is sensitive may be microfeatures of the experience. This still preserves parity with the model for sensations.

My third argument is from the introspective discriminability of attitude strengths. Subjects' classificational abilities are not confined to broad categories such as belief, desire, and intention; they also include intensities thereof. People report how firm is their intention or conviction, how much they desire an object, and how satisfied or dissatisfied they are with a state of affairs. Whatever the behavioral predictive power of these self-reports, their very occurrence needs explaining. Again, the functionalist approach seems fruitless. The other familiar device for conceptualizing the attitudes - namely, the "boxes" in which sentences of mentalese are stored - would also be unhelpful even if it were separated from functionalism, since box storage is not a matter of degree. The most natural hypothesis is that there are dimensions of awareness over which scales of attitude intensity are represented.

The importance of attitude strength is heightened by the fact that many words in the mentalistic lexicon ostensibly pick out such strengths. Certain, confident, and doubtful represent positions on a credence scale; delighted, pleased, and satisfied represent positions on a liking scale. Since we apparently have introspective access to such positions, self-ascription of these terms invites an introspectivist account (or a quasi-introspectivist account that makes room for microfeatures of awareness).

One obstacle to a phenomenological account of the attitudes is that stored (or dispositional) beliefs, desires, and so on are outside awareness. However, there is no strain in the suggestion that the primary understanding of these terms stems from their activated ("occurrent") incarnations; the stored attitudes are just dispositions to have the activated ones.

A final argument for the role of phenomenology takes its starting point from still another trouble with functionalism, a trouble not previously mentioned here. In addition to specific mental words like hope and imagine, we have the generic word mental. Ordinary people can classify internal states as mental or nonmental. Notice, however, that many nonmental internal states can be given a functional-style description. For example, having measles might be described as a state that tends to be produced by being exposed to the measles virus and tends to produce an outbreak of red spots on the skin. So having measles is a functional state, although it clearly is not a mental state. Thus, functionalism cannot fully discharge its mission by saying that mental states are functional states; it also needs to say which functional states are mental. Does functionalism have any resources for marking the mental/nonmental distinction? The prospects are bleak. By contrast, a plausible-looking hypothesis is that mental states are states having a phenomenology, or an intimate connection with phenomenological events. This points us again in the direction of identifying the attitudes in phenomenological terms.

Skepticism about this approach has been heavily influenced by Wittgenstein (1953; 1967), who questioned whether there is any single feeling or phenomenal characteristic common to all instances of an attitude such as intending or expecting. (A similar worry about sensations is registered by Churchland & Churchland 1981.) Notice, however, that our general approach to concepts does not require a single "defining characteristic" for each mental word. A CR might be, for example, a list of exemplars (represented phenomenologically) associated with the word, to which new candidate instances are compared for similarity. Thus, even if Wittgenstein's (and Churchland & Churchland's) worries about the phenomenological unity of mental concepts are valid, this does not exclude a central role for phenomenological features in CRs for attitude words.

9. Content and computationalism

The commonsense understanding of the contents of the propositional attitudes is an enormous topic; we shall touch on it here only lightly. The central question concerns the "source" of contentfulness for mental states. Recent theories have tended to be externalist, claiming that content arises from causal or causal-historical interactions between inner states (or symbols in the language of thought) and external objects. It is highly doubtful, however, whether any of the most developed externalist theories gives an adequate account of the naive cognizer's understanding or representation of content.

Fodor currently advocates a complex causal-counterfactual account (Fodor 1987; 1990). Roughly, a mental symbol C means cow if and only if (1) C-tokens are reliably caused by cows, and (2) although noncows (e.g., horses) also sometimes cause C-tokens, noncows would not cause C-tokens unless cows did, whereas it is false that cows would not cause C-tokens unless noncows did. Clause (2) of this account is a condition of "asymmetric dependence" according to which there being non-cow-caused C-tokens depends on there being cow-caused C-tokens, but not conversely. It seems most implausible, however, that this sort of criterion for the content of a mental symbol is what ordinary cognizers have in mind. Similarly implausible for this purpose are Millikan's evolutionary account of mental content (Millikan 1984; 1986) and Dretske's (1988) learning-theoretic (i.e., operant conditioning) account of mental content. Most naive cognizers have never heard of operant conditioning, and many do not believe in evolution; nevertheless, the same subjects readily ascribe belief contents to themselves. So did our sixteenth-century ancestors, who never dreamt of the theory of evolution or operant conditioning. (For further critical discussion see Cummins 1989.) Perhaps Millikan and Dretske do not intend their theories as accounts of the ordinary understanding of mental contents. Millikan (1989), for one, expressly disavows any such intent. But then we are left with very few detailed theories that do address our question. Despite the popularity of externalist theories of content, they clearly pose difficulties for self-ascription. Cognizers seem able to discern their mental contents - what they believe, desire, or plan to do - without consulting their environment.6

What might a more internalist approach to contents look like? Representationalism, or computationalism, maintains that content is borne by the formal symbols of the language of thought (Fodor 1975; 1981; 1987; Newell & Simon 1976). But even if the symbolic approach gives a correct de facto account of the working of the mind, it does not follow that the ordinary concept of mental content associates it with formal symbols per se. I would again suggest that phenomenological dimensions play a crucial role in our naive view. Only what we are aware or conscious of provides the primary locus of mental content.

For example, psycholinguists maintain that in sentence processing there are commonly many interpretations of a sentence that are momentarily presented as viable, but we are normally aware of only one - the one that gets selected (Garrett 1990). The alternatives are "filtered" by the processing system outside of awareness. Only in exceptional cases such as "garden path" sentences (e.g., "Fatty weighed three hundred and fifty pounds of grapes") do we become aware of more than one considered interpretation. Our view of mental content is, I suggest, driven by the cases of which we are aware, although they may be only a minority of the data structures or symbolic structures that occupy the mind.

Elaboration of this theme is not possible in the present paper, but a brief comment about the relevant conception of "awareness" is in order. Awareness, for these purposes, should not be identified with accessibility to verbal report. We are often aware of contents that we cannot (adequately) verbalize, either because the type of content is not easily encoded in linguistic form or because its mode of cognitive representation does not allow full verbalization. The relevant notion of awareness, or consciousness, then, may be that of qualitative or phenomenological character (there being "something it is like") rather than verbal reportability (see Block 1990b; 1991).

The role I am assigning to consciousness in our naive conception of the mental bears some similarity to that assigned by Searle (1990). Unlike Searle, however, I see no reason to decree that cognitive science cannot legitimately apply the notion of content to states that are inaccessible (even in principle) to consciousness. First, it is not clear that the ordinary concept of a mental state makes consciousness a "logical necessity" (as Searle puts it). Second, even if mental content requires consciousness, it is inessential to cognitive science that the nonconscious states to which contents are ascribed should be considered mental. Let them be "psychological" or "cognitive" rather than "mental"; this does not matter to the substance of cognitive science. Notice that the notion of content in general is not restricted to mental content; linguistic utterances and inscriptions are also bearers of content. So even if mental content is understood to involve awareness, this places no constraints of the sort Searle proposes on cognitive science.

10. Empirical research on the theory-theory

The idea underlying functionalism, that the naive cognizer has a "theory" of mind, goes increasingly by the label "the theory-theory" (Morton 1980). Although originally proposed by philosophers, this idea is now endorsed by a preponderance of empirical researchers, especially developmental psychologists and cognitive anthropologists. Does their research lend empirical support to the functionalist account of mental concepts?

Let us be clear about exactly what we mean by functionalism, especially the doctrine of RF (representational functionalism) that concerns us here. There are two crucial features of this view. The first feature is pure relationalism. RF claims that the way subjects represent mental predicates is by relations to inputs, outputs, and other internal states. The other internal-state concepts are similarly represented. Thus, every internal-state concept is ultimately tied to external inputs and outputs. What is deliberately excluded from our understanding of mental predicates, according to RF, is any reference to the phenomenology or experiential aspects of mental events (unless these can be spelled out in relationalist terms). No "intrinsic" character of mental states is appealed to by RF in explaining the subject's basic conception or understanding of mental predicates. The second crucial feature of RF is the appeal to nomological (lawlike) generalizations in providing the links between each mental-state concept and suitably chosen inputs, outputs, and other mental states. Thus, if subjects are to exemplify RF, they must mentally represent laws of the appropriate sort. Does empirical research on "theory of mind" support either of these two crucial features? Let us review what several leading workers in this tradition say on these topics. We shall find that very few of them, if any, construe theory of mind in quite the sense specified here. They usually endorse vaguer and weaker views.

Premack and Woodruff (1978), for example, say that an individual has a theory of mind if he simply imputes mental states to himself and others. Ascriptions of mental states are regarded as "theoretical" merely because such states are not directly observable (in others), and because such imputations can be used to make predictions about the behavior of others. This characterization falls short of RF, because it does not assert that the imputations are based on lawlike generalizations and does not assert that mental-state concepts are understood solely in terms of relations to external events.

Wellman's (1988; 1990) concept of the theory-theory (TT) is also quite a weak one. A body of knowledge is theory-like, he says, if it has (1) an interconnected ("coherent") set of concepts, (2) a distinctive set of ontological commitments, and (3) a causal-explanatory network. Wellman grants that some characterizations of theories specify commitments to nomological statements, but his own conception explicitly omits that provision (Wellman 1990, Chap. 5). This is one reason why his version of TT falls short of RF. A second reason is that Wellman explicitly allows that the child's understanding of mind is partly founded on firsthand experience. "The meaning of such terms/constructs as belief, desire and dream may be anchored in certain firsthand experiences, but by age three children have not only the experiences but the theoretical constructs" (Wellman 1990, p. 195). Clearly, then, Wellman's view is not equivalent to RF, and the evidence he adduces for his own version of TT is not sufficient to support RF.

Similarly, Rips and Conrad (1989) present evidence that a central aspect of people's beliefs about the mind is that mental activities are interrelated, with some activities being kinds or parts of others. For example, reasoning is a kind of thinking and reasoning is a part of problem solving. The mere existence of taxonomies and partonomies (part-whole hierarchies), however, does not support RF, since mental terms could still be represented in introspective terms, and such taxonomies may not invoke laws.

D'Andrade (1987) also describes the "folk model of the mind" as an elaborate taxonomy of mental states, organized into a complex causal system. This is no defense of functionalism, however, since D'Andrade expressly indicates that concepts such as emotion, desire, and intention are "primarily defined by the conscious experience of the person" (p. 139). The fact that laymen recognize causal relations among mental events does not prove that they have a set of laws. Whether or not belief in causal relations requires belief in laws is a controversial philosophical question. Nor does the fact that people use mental concepts to explain and predict the behavior of others imply the possession of laws, as we shall see below.

The TT approach to mental concepts is, of course, part of a general movement toward understanding concepts as theory-embedded (Carey 1985; 1988; Gopnik 1984; 1988; Karmiloff-Smith & Inhelder 1975; Keil 1989; Murphy & Medin 1985). Many proponents of this approach, however, acknowledge that their construal of "theory" is quite vague, or remains to be worked out. For example, Murphy and Medin (1985, p. 290) simply characterize a theory as "a complex set of relations between concepts, usually with a causal basis"; and Keil (1989, pp. 279-80) says: "So far we have not made much progress on specifying what naive theories must look like or even what the best theoretical vocabulary is for describing them." Thus, commitment to a TT approach does not necessarily imply commitment to RF in the mental domain; nor would evidential corroboration of a TT approach necessarily corroborate RF.

A detailed defense of TT is given by Gopnik (this issue), who specifically rejects the classical view of direct or introspective access to one's own psychological states. However, even Gopnik's view is significantly qualified and her evidential support far from compelling. First, although her main message is the rejection of an introspective or "privileged access" approach to self-knowledge of mental states, she acknowledges that we use some mental vocabulary "to talk about conscious experiences with a particular kind of phenomenology, the Joycean or Woolfian stream of consciousness, if you will." This does not sound like RF. Second, Gopnik seems to concede privileged access, or at least errorless performance, for subjects' self-attributions of current mental states. At any rate, all of her experimental data concern self-attributions of past mental states; nowhere does she hint that subjects make mistakes about their current states as well. How can errorless performance be explained on her favored inferential model of self-attribution? If faulty theoretical inference is rampant in children's self-attribution of past states, why do they not make equally faulty inferences about their current states? Third, there is some evidence that children's problems with reporting their previous thoughts are just a result of memory failure. Mitchell and Lacohee (1991) found that such memory failure could be largely alleviated with a little help. Fourth, Gopnik's TT does not explain satisfactorily why children perform well on self-attributions of past pretense and imaging. Why are their inferences so much more successful for those mental states than for beliefs? Finally, how satisfactory is Gopnik's explanation of the "illusion" of first-person privileged access? If Gopnik were right that this illusion stems from expertise, why should we not have the same illusion in connection with attribution of mental states to others? If people were similarly positioned vis-a-vis their own mental states and those of others, they would be just as expert for others as for themselves and should develop analogous illusions; but there is no feeling of privileged access to others' mental states.

At this point the tables might be turned on us. How are we to account for attributions to others if subjects do not have a theory, that is, a set of causal laws, to guide their attributions? An alternative account of how such attributions might be made is the "simulation," or role-taking, theory (Goldman 1989; 1992a; 1992b; Gordon 1986; 1992a; Harris 1989; 1991; 1992; Johnson 1988), according to which one can predict another person's choices or mental states by first imagining himself in the other person's situation and then determining what he himself would do or how he would feel. For example, to estimate how disappointed someone would feel if he lost a certain tennis match or did poorly on a certain exam you might project yourself into the relevant situation and see how you would feel. You do not need to know any psychological laws about disappointment to make this assessment. You just need to be able to feed an imagined situation as input to some internal psychological mechanism that then generates a relevant output state. Your mechanism can "model" or mimic the target agent's mechanism even if you do not know any laws describing these mechanisms.

To compete with TT, the simulation theory (ST) must do as well in accounting for developmental data such as 3-year-olds' difficulties with false-belief ascriptions (Astington et al. 1988; Wimmer & Perner 1983). Defenders of TT usually postulate a major change in children's theory of mind: from a primitive theory - variously called a "copy theory" (Wellman 1990), a "Gibsonian theory" (Astington & Gopnik 1991a), a "situation theory" (Perner 1991b), or a "cognitive connection theory" (Flavell 1988) - to a full representational theory. Defenders of ST might explain these developmental data in a different fashion by positing not fundamental changes of theory but increases in flexibility of simulation (Harris 1992). Three-year-olds have difficulty in imagining states that run directly counter to their own current states; but by age four children's imaginative powers overcome this difficulty. ST also comports well with early propensities to mimic or imitate others' attitudes or actions such as joint visual attention and facial imitation (Butterworth 1991; Goldman 1992b; Harris 1992; Meltzoff & Moore 1977). Thus, ST provides an alternative to TT in accounting for attributions of mental states to others.

11. Psychological evidence about introspection and the role of consciousness

The positive approach to mental concepts I have tentatively endorsed has much in common with the classical doctrine of introspectionism. Does not this ignore empirical evidence against introspective access to mental states? The best-known psychological critique of introspective access is Nisbett and Wilson's (1977); so let us review briefly the question of how damaging that critique is and where the discussion now stands.

The first sentence of Nisbett and Wilson's abstract reads: "Evidence is reviewed which suggests that there may be little or no direct introspective access to higher order cognitive processes" (Nisbett & Wilson 1977, p. 231). At first glance this suggests a sweeping negative thesis. What they mean by "process," however, is causal process; and what their evidence really addresses is people's putative access to the causes of their behavior. This awareness-of-causes thesis, however, is one that no classical introspectionist, to my knowledge, has ever asserted. Moreover, Nisbett and Wilson explicitly concede direct access to many or most of the private states that concern us here and that concern philosophy of mind in general:

We do indeed have direct access to a great storehouse of private knowledge. . . . The individual knows a host of personal historical facts; he knows the focus of his attention at any given point of time; he knows what his current sensations are and has what almost all psychologists and philosophers would assert to be "knowledge" at least quantitatively superior to that of observers concerning his emotions, evaluations, and plans. (Nisbett & Wilson, 1977, p. 255)

Their critique of introspectionism, then, is hardly as encompassing as it first appears (or as citations often suggest). As White (1988) remarks, "causal reports could turn out to be a small island of inaccuracy in a sea of insight" (p. 37).

Nisbett and Wilson's paper reviewed findings from several research areas including attribution, cognitive dissonance, subliminal perception, problem solving, and bystander apathy. Characteristically, the reported findings were of manipulations that produced significant differences on behavioral measures but not on verbal self-report measures. In Nisbett and Wilson's position-effect study, for example, passersby appraised four identical pairs of stockings in a linear display and chose the pair they judged of best quality. The results showed a strong preference for the rightmost pair. Subjects did not report that position had influenced their choice and vehemently denied any such effect when the possibility was mentioned.

However, as Bowers (1984) points out, this sort of finding is not very damaging to any sensible form of introspectionism. As we have known since Hume (1748), causal connections between events cannot be directly observed; nor can they be introspected. A sensible form of introspectionism, therefore, would not claim that people have introspective access to causal connections, but this leaves it open that they do have introspective access to the mere occurrence of certain types of mental events.

Other critics, for example Ericsson and Simon (1980), complain that Nisbett and Wilson fail to investigate or specify the conditions under which subjects are unable to make accurate reports. Ericsson and Simon (1980; 1984) themselves develop a detailed model of the circumstances in which verbal reports of internal events are likely to be accurate. In particular, concurrent reports about information that is still in short-term memory (STM) and fully attended are more likely to be reliable than retrospective reports. In most of the studies reviewed by Nisbett and Wilson, however, the time lag between task and probe was sufficiently great to make it unlikely that relevant information remained in STM. A sensible form of introspectionism would restrict the thesis of privileged access to current states and not extend it to past mental events. Of course, people often do have long-term memories of their past mental events. But their direct access is then to these memories, not to the original mental events themselves.

In a more recent work, one of the two authors, T. D. Wilson, has been very explicit in accepting direct access. He writes: "People often have direct access to their mental states, and in these cases the verbal system can make direct and accurate reports. When there is limited access, however, the verbal system makes inferences about what these processes and states might be" (Wilson 1985, p. 16). He then explores four conditions that foster imperfect access, with the evident implication that good access is the default situation. This sort of position is obviously quite compatible with the one advocated in the present target article.

With its emphasis on conscious or phenomenological characteristics, the present article appears to be challenged by Velmans (1991a). Velmans raises doubts about the role of consciousness in focal-attentive processing, choice, learning and memory, and the organization of complex, novel responses. His target article seems to conjecture that consciousness does not enter causally into human information processing at all.

However, as many of his commentators point out, Velmans's evidence does not support this conclusion. Block (1991) makes the point particularly clear. Even if Velmans is right that consciousness is not required for any particular sort of information processing, it does not follow that consciousness does not in fact figure causally. Block also sketches a plausible model, borrowed from Schacter (1989), in which consciousness does figure causally. In the end, this debate may be misplaced, since Velmans (1991r), in his response to commentators, says that he did not mean to deny that consciousness has causal efficacy.

Velmans's views aside, the position sketched in the present article invites questions about how, exactly, qualitative or phenomenological properties can figure in the causal network of the mind. This raises large and technical issues pertaining not only to cognitive science but also to philosophical questions about causation, reduction and identity, supervenience, and the like. Such issues require an entirely separate paper (or more than one), and cannot be addressed here.

12. Ontological implications of folk psychology

These technical issues aside, the content of folk psychology may have significant ontological implications about the very existence of the mental states for which common speech provides labels. Eliminativists maintain that there are no such things as beliefs, desires, thoughts, hopes, and other such intentional states (Churchland 1981; Stich 1983). They are all, like phlogiston, caloric, and witches, the mistaken posits of a radically false theory, in this case a commonsense theory. The argument for eliminativism proceeds from the assumption that there is a folk theory of mind. This article counters the argument by denying (quite tentatively, to be sure) that our mental concepts rest on a folk theory.

It should be stressed that the study of folk psychology does not by itself yield ontological consequences. It just yields theses of the form: Mental (or intentional) states are ordinarily conceptualized as states of kind K. This sort of thesis, however, may appear as a premise in arguments with eliminativist conclusions, as we have just seen. If kind K features nomological relations R, for example, one can defend eliminativism by holding that no states actually instantiate relations R. On the other hand, if K just includes qualitative or phenomenological properties, it is harder to see how a successful eliminativist argument could be mounted. One would have to hold that no such properties are instantiated or even exist. Although qualitative properties are indeed denied by some philosophers of mind (e.g., Dennett 1988; 1991; Harman 1990), the going is pretty tough (for replies to Dennett and Harman, respectively, see Flanagan 1992, Ch. 4, and Block 1990a). The general point, however, should be clear. Although the study of folk psychology does not directly address ontological issues, it is indirectly quite relevant to such issues.

Apart from ontological implications, the study of folk psychology also has intrinsic interest as an important subfield of cognitive science. This, of course, is the vantage-point from which the present discussion has proceeded.

ACKNOWLEDGMENTS
I am grateful to the following people for very helpful comments and discussion: Kent Bach, Paul Bloom, Paul Boghossian, Robert Cummins, Carl Ginet, Christopher Hill, John Kihlstrom, Mary Peterson, Sydney Shoemaker, and Robert Van Gulick, as well as the BBS referees. I also profited from discussions of earlier versions of the paper at the Society for Philosophy and Psychology, the Creighton Club, CUNY Graduate Center, Cornell University, Yale University, University of Connecticut, and Brown University.

NOTES
1. Type-physicalism is the doctrine that every mental state type is identical with some neurophysiological state type.
2. Eliminativism is the philosophical view that certain types of mental states invoked by everyday language, especially the propositional attitudes, would not appear in a fully mature science of the mind and should therefore be "eliminated" from our ontology.

3. Since a person does not have direct introspective access to the contents of his CRs, he may be unable to report or even recognize FM as the content paired with M. This, at any rate, is what a defender of RF might point out. We do seem to have limited access, however, to the meanings or contents of our words. Hence, if ordinary people have no recognition whatsoever of functional roles as the meanings or contents of their mental predicates, this (in my opinion) would be some evidence, though not decisive evidence, against the hypothesis that these are their meanings or contents.

4. Note that our framework allows the RF theory to exclude E from the content of the CR for headache despite the fact that E is used as "evidence" for a headache. This shows that our framework is not verificationist: It does not equate something's being evidence for a type of mental state with its being in the CR for that type of mental state.

5. Block describes an imaginary scenario in which a system involving the entire Chinese nation (one billion people strong) is functionally equivalent for an hour to a single human mind. Although each Chinese person in this system has inner states with qualitative character, it is doubtful that the entire system in question would be conscious, or have inner states with qualitative character, despite the system's functional equivalence to an individual human mind.

6. Solutions to this apparent puzzle are proposed by Davidson (1987) and Burge (1988); but the former is less than transparent and the latter is convincingly shown by Boghossian (1989) to be inadequate. The considerations of this target article support Boghossian's contention, in contrast to Burge's, that knowledge of one's own mental contents cannot be merely automatic or "cognitively insubstantial" (as Boghossian puts it).


Open Peer Commentary

Commentary submitted by the qualified professional readership of this journal will be considered for publication in a later issue as Continuing Commentary on this article. Integrative overviews and syntheses are especially encouraged.

Causes are perceived and introspected

D. M. Armstrong
Department of Traditional and Modern Philosophy, Sydney University, New South Wales, Australia 2006
Electronic mail: [email protected]

[Gol] One reason that Alvin Goldman gives for rejecting a "simple functionalist model" for the mind is that the "causal-relational" properties postulated by such a model are not available to introspection. In section 11 he writes: "As we have known since Hume (1748), causal connections between events cannot be directly observed; nor can they be introspected." Goldman, of course, is not alone in this view. It is the orthodox view among analytic philosophers. I believe it is a mistaken view, and will so argue in this commentary.

In a recent book (1990, p. 12) Evan Fales remarks that if there is direct observation of causality the obvious place to look for it is in the experience we have when we push or pull other material bodies or when other bodies push or pull on our body. What has Hume to say about this matter? Fales says that, rather astonishingly, Hume takes up this question only in two footnotes in the Enquiry, Section 7. Fales goes into the matter carefully, and I will not follow him into the detail, but I think he shows how weak Hume's case is.

For myself, I have long thought we have a direct awareness of causality when we experience pressure on our bodies (see my 1968, Index, under "Pressure"). We speak of sensations of pressure and the natural hypothesis is that this is a direct (which does not of course mean infallible) awareness of physical pressure. Pressure, in turn, is the action of a force; that is, it involves causality. Thinking biologically, there can hardly be anything more important for an animal to know than the current action of forces on its body. One interesting point about sensations of pressure is that though they give us a rather clear awareness of forces acting on the body, the nature of that which acts is much less clearly presented. The experience seems primarily to be one of being acted upon. It also seems to have a "subjunctive" or nomically modal character. Often we seem to be aware not so much of the effect as of its tendency.

Fales goes further than I did. He suggests, very plausibly I think, that our experience of pushes and pulls is actually an experience of vectors. We are aware of the action of forces. We are aware to a degree of the magnitude of such forces. We are further aware in some degree of the direction of action of such forces. We even have some elementary capacity, he suggests, to resolve the components of such forces.

Going beyond this, it may be that we even have some knowledge (of a practical and inarticulate sort) of the laws that govern the operation of such forces and other causes. In a recent, very interesting article (1992), Roy Sorensen suggests that we have, probably innately and so put in us by evolution, a certain amount of modal knowledge concerning physical laws. Modal knowledge about the vector forces that act on our body and the forces that we exert on the bodies in our environment would seem to be eminently desirable. It is plausible that we actually have such knowledge.

Once we have in this way escaped the Humean epistemic (and ontological) perspective with respect to forces acting on our body and exerted by our body, we are in a position to reconsider the question whether we may not have introspective access to causal connections. As a matter of fact, that there is such access was long ago upheld by Thomas Reid (1788, Chapter 5), who was, incidentally, a great admirer as well as critic of Hume's thought.

Reid's idea is that when we are introspectively aware of the successful operation of our will, that is, that we did some bodily or mental thing as opposed to the thing merely happening, we are experientially aware of causality in the mental sphere. [See Libet: "Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action" BBS 8(4) 1985.] This seems to me to be very plausible. A causal theory of action is plausible. And our awareness that we acted is, very often, as persuasive as almost any other introspective datum so that, plausibly, our introspective awareness of causality is epistemically direct. I see no reason why the awareness should not involve tendency as well as actual effect. There will, of course, have to be mechanisms in the central nervous system that subserve such epistemically unmediated awarenesses. But there seems to be no more problem here than with other sorts of information gained by introspective access.

My claim is that the views I have just sketched are psychologically plausible and will assist us to a better understanding of the ordinary experience of our own minds. I recognize, of course, that these positions are controversial, to be argued about, and to be investigated empirically. But it would be nice if argument, observation, or experiment that tells against them could be presented, instead of unbacked-up appeals to the authority of Hume.

The concept of intentionality: Invented or innate?

Simon Baron-Cohen
Department of Psychology and Child Psychiatry, Institute of Psychiatry, University of London, Denmark Hill, London SE5 8AF, England
Electronic mail: [email protected]

[Gop] Gopnik's treatment of the link between first- and third-person knowledge of intentionality is persuasive: The developmental evidence she amasses for the notion that the two mature simultaneously, and thus in all likelihood reflect the acquisition of a new theory of mind (own and other's), is impressive. It is also in line with evidence from children with autism. For example, the same children who fail to attribute appropriate beliefs to others (Baron-Cohen et al. 1985) also appear unable to retrieve their own prior, outdated belief (Baron-Cohen 1991a) or to reflect on their own misidentification of an object's identity (Baron-Cohen 1989a). Equally, when children with autism pass tests of mental state attribution, they tend to do so in cases of attribution to self or other (Baron-Cohen 1989a). Gopnik's argument from the normal developmental data, together with the data from children with autism, combine powerfully to suggest that the cognitive mechanism that allows mental state attribution is indifferent to whether the target of such attributions is self, other, or indeed a thermostat (Dennett 1978b).

In this commentary, I shall focus on just one of the important issues Gopnik raises: the notion that children as theory-builders invent or postulate the concept of intentionality. Consider these two quotes from Gopnik:

1. "Empirical findings show that the idea of intentionality is a theoretical construct, one we invent in our early lives to explain a wide variety of evidence about ourselves and others" (sect. 1, my italics).

2. "Children's theories of the mind postulate unobserved entities (beliefs and desires). . . ." (sect. 6, my italics).

The implication from these quotations is that children, faced with the need to explain the actions of others, go individually through the intellectual grind of inventing the concept of intentionality. I suggest that this view may overextend the analogy of scientist as theorist to child as theorist.

I have no problem in applying the term theorist to the child in this case, as it is uncontroversial that the concept of intentionality, once acquired, works for the child in very theorylike ways - it allows explanation and prediction of behavior, by reference to unobservable entities (mental states). Moreover, the child's knowledge of the links between perception, mental states, and action appears (upon probing) to be highly lawful and systematized (Wellman 1990). Four-year-old children make clear, theorylike assertions ("If you haven't seen what it is, then you won't know what it is"; or, "If you want an x, and you think what you're getting is an x, then you'll feel happy," etc.).

Where I suggest there may be an overextension in the use of the analogy of scientific theorist to child theorist is in the assumption that, like the scientist, the only way the child can acquire the concept of intentionality is by inventing it. This may be how children get the concept, but I think the case for this is not yet made. In contrast, I propose that the evidence from autism suggests that the concept of intentionality (in at least some of its aspects) may be innate. Here is why.

I have proposed that the earliest evidence of the operation of this mechanism is in the 9-month-old infant's understanding of another person's mental state of attention (Baron-Cohen 1989b; 1989c; 1991b). By itself, this reveals the child's grasp of some of the key properties of intentionality. For example, the infant's representation "Mummy is attending to x" has the "aboutness" property of intentionality. The "aspectuality" property of intentionality is also present. This is evident, for example, in the infant's production of the protodeclarative pointing gesture, to direct another person's attention toward a particular object, or an aspect of it.

It is true that the 9-month-old infant's concept of attention is unlikely to have the third property of intentionality, namely the possibility for misrepresentation; and the bulk of evidence in this area suggests this is indeed not well understood until about 4 years of age. Given the early appearance of joint-attention behaviours, however, as well as their cross-cultural universality (Bruner 1983), it is unlikely that the concept of intentionality is invented. Rather, like joint-attention itself, it may well be acquired through the triggering of an innate cognitive mechanism. Additional evidence for the innate, neural basis of this mechanism comes from autism. These children not only fail tests of attribution of mental states like belief and intention, but they also fail to develop joint-attention behaviours (Baron-Cohen 1989b; Sigman et al. 1986). Given that autism is itself a neurobiological disorder (Bolton & Rutter 1991; Gillberg 1991), the strong implication is that the brain abnormalities in autism impair the development of the normal brain mechanism that allows the representation of intentionality.

In summary, Gopnik's argument for the "theory-theory" makes a major contribution. Future work needs to disentangle what in this theory is innately specified and what is "theorized."

Are false beliefs representative mental states?

Karen Bartsch and David Estes
Department of Psychology, University of Wyoming, Laramie, WY 82071
Electronic mail: [email protected]

[Gol, Gop] In their target articles about the roles of theoretical inference and first-person experience in folk-psychological understanding, both Gopnik and Goldman invoke evidence from empirical studies. Indeed, Gopnik's claim - that our understanding of the intentionality of mental states is the product of theoretical inference - is inspired by children's reasoning about misrepresentations, particularly their developing understanding of false beliefs. If such evidence is to be regarded as pivotal in this debate, it is important to examine it carefully. We wish to draw attention to some ways in which reasoning about false beliefs is different from reasoning about other sorts of mental states. These differences make us question the appropriateness of generalizing from the special case of false belief to "how we know our minds." In questioning this generalization, we echo Goldman's argument against overgeneralizing from evidence about particular mental states to conclusions about all mental states.

Gopnik argues that the notion that mental states are intentional ("about the world," by her definition, sect. 1, para. 1) is a theoretical construction rather than the result of direct experience. The most striking evidence for her claim is that children simultaneously acquire the ability to report their own past false beliefs and to predict another's action from information about that person's false belief. Gopnik suggests that such findings mark a developmental milestone, described by some as the acquisition of a representational theory of mind (e.g., Perner 1991b). Although other evidence, such as children's difficulty with the appearance-reality distinction (e.g., Flavell et al. 1986), is cited in support of this conclusion, young children's difficulties with false beliefs are arguably the catalyst for Gopnik's claim. In any case, most of the other evidence is relevant only insofar as it concerns children's understanding of misrepresentation; to that extent, it is also subject to the considerations spelled out below.

Two aspects of reasoning about false belief distinguish it from reasoning about other mental states. The first aspect accounts for its central position in current debates about children's understanding: When a child (or anyone) demonstrates an understanding of false belief by predicting an appropriate action, for example, scientists can attribute to the child the concept of mental states, confident that the child is not merely reasoning about the world itself. In predicting or explaining action in terms of false belief, moreover, a child demonstrates an understanding that such mental states represent the world. In this way, reasoning about false beliefs appears to constitute a sufficient demonstration of a conception of mental states, indeed of representational mental states.

It is tempting to suppose that such reasoning is also a necessary condition for attributing mental state understanding. Gopnik (sect. 3.2, para. 3) contends, "In an important sense, if you cannot understand the possibility of misrepresentation, you do not understand representation at all." Yet it seems possible that one might understand much about beliefs, such as that they are "about the world," without being able to grasp that they might be false about the world. Wellman's (1990) description of very young children as "copy theorists" is one version of such a conception: If young children conceive of beliefs as direct impressions from the world, they might understand that people have in their heads copies of the world while also thinking that such copies could not be false. Copies might either be present or absent, but if present, they would necessarily be "about the world," yet incapable of misrepresenting it. Gopnik herself (sect. 3.2, para. 2) gives a very similar characterization of young children's understanding of mental states.

Because we can in principle distinguish between the notion of mental states as being "about the world" and as being capable of misrepresentation, we must question the equation assumed in Gopnik's argument: Understanding false belief = understanding representation = understanding intentionality. In fact, some of the same empirical research invoked by Gopnik demonstrates some important ways very young children appear to understand that mental states are intentional in that they are "about" something or even "about the world." For example, the finding that 3-year-olds know that mental images, unlike their physical counterparts, can be mentally transformed, cannot be touched, and are not accessible to the public (Estes et al. 1989) suggests that the "aboutness" of mental images is clear to such children. The finding that 3-year-olds use information about limited knowledge to predict actions (Wellman & Bartsch 1988) suggests they understand that this psychological state is connected to the world, that it is in some sense "about the world." A 3-year-old's statement that he wanted to put a letter in the mailbox "so that the mailman can know where I are" (Bartsch 1990) appears to reflect the same understanding.

And what about children's understanding of the intentionality of states that are not beliefs? It can be argued that very young children who lack finesse in reasoning about misrepresentation nevertheless have a firm grip on the notion that desires are about the world. For example, they reason appropriately about persons said to have different desires (Wellman & Woolley 1990).

In sum, we suggest that the utility of false-belief reasoning as a litmus test for a representational theory of mind has led researchers, including Gopnik, to equate reasoning about false beliefs with understanding mental states in general. In view of the empirical evidence, her conclusion is more appropriately limited to the misrepresentational capacity of some mental states. This, of course, leaves intact her argument that the simultaneous acquisition of self- and other-oriented reasoning points to theoretical inference rather than phenomenological experience as the source of this new understanding (now characterized as understanding misrepresentation, not intentionality).

A second unique aspect of false beliefs concerns just this point, however, in one sense validating Gopnik's argument for theoretical inference and in another sense undermining it. The second aspect of false belief that deserves notice is the impossibility of experiencing it directly: Beliefs by their nature are experienced as true. Although one can experience (in some sense) a belief that is later recognized to be false, one never experiences the "falseness" of the belief currently held. Obviously, awareness of one's own false beliefs must result from inference. But it is the falseness that is inferred, not necessarily the content or the "aboutness." Here is another respect, then, in which caution is warranted before generalizing to all mental states, because a capacity for misrepresentation is only one property of some, but not all, mental states. Indeed, we can only suppose that this is what Gopnik is trying to argue. This point is not entirely clear, however, particularly in view of Gopnik's introductory statement (sect. 1, para. 1): "We believe that we know our own beliefs and desires directly, but that we must infer the beliefs and desires of other people. Are we right?" It sounds as if Gopnik is trying to argue that we are wrong - that we infer not only the falseness but indeed the existence and content of our own, as well as others', beliefs and desires. According to this interpretation, we do things such as observe our own actions (e.g., looking in an empty refrigerator) and then infer that we must have had a false belief (e.g., that there was food in the refrigerator).

But our reasoning about ourselves in such cases is surely not as tenuous or error-prone as our reasoning about others. For example, suppose we see a friend opening the empty refrigerator. We can only speculate as to what the friend's beliefs must have been - that there was beer, or apples, or perhaps both beer and apples in the refrigerator. The same limited action on our own part is surely understood with a qualitatively different level of certainty. We rarely wonder whether it was beer or apples that we thought was in the refrigerator. Moreover, a whole chain of mental events might have led up to this action: desiring a beer, deciding it was too early in the day for alcohol, considering the desirability of an apple, remembering that one was placed in the refrigerator the previous day, and so forth. Whereas our reasoning about another can only be speculation (without a first-person report), in our own case there is no question as to which of these mental events has occurred.

The intuition that we are certain in understanding our own notions (at least much of the time) appears to reflect the first interpretation of first-person privilege to which Gopnik refers (sect. 1, para. 4) - the notion that justification is not required for first-person psychological assertions. If we accept Gopnik's acknowledgment that the latter is "obviously true," then we must view with suspicion her argument against the epistemic interpretation.

So at least two aspects of false beliefs distinguish them from other sorts of mental states and, in different ways, raise questions about the wisdom of centering on them a debate about "how we know our minds." This situation is remarkably analogous to another one in which evidence regarding lack of introspective access to one type of mental state or cognitive process is overinterpreted or misinterpreted as pertaining to our knowledge of our minds in general. As Goldman notes in his target article, Nisbett and Wilson's (1977) influential critique of introspective reports has been wrongly interpreted to be much more encompassing than what either their evidence warranted or they probably intended. Whereas Nisbett and Wilson's evidence primarily concerned people's difficulty in reporting past causes of their behavior, most of Gopnik's evidence concerns children's difficulty in reporting their own and others' false beliefs. In both cases there is the implication that the evidence in question bears on the status of introspective access to mental states in general. But in both cases there are good reasons why we should not expect direct access to the mental state in question and why the demand for a self-report might require potentially erroneous inference.

Both Goldman and Gopnik suggest that empirical evidence can help to elucidate the status of our beliefs about the mind and to determine the extent to which we have privileged access to our own mental states. We agree. It is essential to recognize, however, that all mental states are not equivalent. Evidence regarding whether our knowledge of a particular mental state is a product of inference or direct experience is not likely to generalize to all mental states, because we know some things about our minds directly but we must infer others. In folk psychology as in scientific psychology, our knowledge is a complex interplay between theory and observation, inference and experience.

Towards an ecology of mind

George Butterworth
Division of Psychology, University of Sussex, Brighton BN1 9QR, England
Electronic mail: [email protected]

[Gol, Gop] Gregory Bateson (1972a) argued that a full understanding of the properties of mind is only to be obtained in relation to the social and ecological structures in which mind is immanent. He maintains that it is a fundamental epistemological error to separate the mind from the ways in which it is evidenced through the body, and through human relationships, in society.

This theoretical position stands in stark contrast with one of the central assumptions in the recent literature on the child's "theory of mind," the assumption that an understanding of mental states can only satisfactorily be demonstrated independently of their behavioural, social, and ecological instantiation. In my view, lack of a fully worked out ecological perspective is a problem common to the target articles by Gopnik and by Goldman. Both fail to make sufficient capital from the fact that minds are situated in bodies and in the social world.

In fact, Gopnik and Goldman do take different positions on the ecological issue. Goldman is clearly the closer to an ecological stance because he appears to be arguing that the roots of folk psychology lie ultimately in the data of everyday experience, especially where he argues that mental states are represented in relation to inputs, outputs, and internal states (sect. 10). He is also right that social behaviours observed in infancy, such as joint visual attention, or the comprehension of pointing, reveal some direct understanding of other minds. This early understanding is expressed through behaviour and is mediated through perception (Butterworth & Jarrett 1990). It does not require that the infant have a theory that other people have minds, nor that the baby simulate the experience of another, or even introspect. And the baby does not simply perceive behaviour and then impute mental states (just as we adults do not perceive disembodied minds). From the infant's perspective there is no duality of the bodily and the mental. From the perspective of ecological realism, the expressive behaviour of other bodies reveals the presence of other minds, even to babies. The essence of the ecological approach, as recounted by Gibson (1966), is that perception is a means of obtaining information about reality; his theoretical stance can be extended to include social information revelatory of other minds.

If we accept the Gibsonian position on perception, the question becomes: How does development progress from direct perception of other minds to the representation of mental states? Goldman's discussion of "sensation representation" (sect. 5) strays a long way from an ecological approach. The developmental path, from a Gibsonian ecological standpoint on perception, could not be from sensation to a mental lexicon; Gibson denies that perception is based on sensation. The developmental pathway would have to be from information about other minds obtained through perception to a representation which preserves the information.

Gopnik accepts that a Gibsonian account may be sufficient to make a first-order distinction between mental states and physical reality, but she argues that it would be insufficient to explain misrepresentation. The "false belief" task is critical to this argument, since it supposedly gets at specifically mental states. I believe that Gopnik is right that to understand "misrepresentation" requires further cognitive mechanisms in addition to those of direct perception. It may not be necessary, however, to argue that what is required is a theory of mind. Rather, a different type of representation may be necessary to ascribe false beliefs and true beliefs.

This argument can be illustrated through Wimmer and Perner's (1983) paradigm, where the young child is told a story, using props. The story concerns a child called Maxi who helps his mother unpack the shopping and places a bar of chocolate in a green cupboard, being careful to remember where he stored it so he can come back later and eat some. He then goes off to play. His mother then uses some of the chocolate in cooking and places the remainder in another, blue cupboard. Mother leaves the scene and Maxi returns. The child is asked where Maxi will search for the chocolate. Children of 4 years say that he will look for it in the blue cupboard, where it was hidden most recently, even though Maxi could not know this. By 5 years, the child says Maxi will look for it in the green cupboard, where he last saw it, thus revealing a knowledge of Maxi's false belief. Younger children's egocentric judgment is based on their own knowledge of the visible movements of the object. They fail to take into account that some of these movements were not visible to Maxi.

The false-belief test depends on representing the information available to the visual perception of another observer. Understanding of a false belief is inferred from specifying the position of the object within a represented reality. It is striking that the task resembles very closely a series of physical search tasks devised by Piaget (1954) for assessing a baby's understanding of object permanence. According to Piaget, babies become able to search manually for an object hidden under a cloth at about 9 months, followed by mastery of visible movements of the object by about 12 months, culminating at about 18 months, when the baby will search persistently for a hidden object after it has been invisibly displaced. In fact, Piaget might have argued that the failure of 4-year-olds and the success of 5-year-olds on false-belief tasks demonstrates a "vertical decalage" between the achievements of stages IV, V, and VI of the sensorimotor phase in object permanence. Between 4 and 5 years the child successively masters the consequences, for another observer, of visible and invisible movements of an object in represented reality.

The importance of all this is that the permanence of the object is at the root of mastering manual search and that it also enables otherwise unrelated minds to meet in referential communication about an object in infancy. So why do children consistently fail on the false-belief task, which requires specification of place from a viewpoint in a represented reality?

One crucial difference between object search tasks of infancy and the false-belief task may lie in the representation of negative information. Joint visual attention and searching for hidden objects are necessarily positive, in the sense that the object is always present (although it may be hidden), and we may surmise that the spatial location information as to its whereabouts is represented iconically (i.e., as it would be perceived). The false-belief task requires the child to reason that since Maxi did not see the movement of the object, Maxi would represent the object where it was first seen. That is, the child must be able to act as if the same object simultaneously exists and does not exist at two locations, depending on the perspective of each observer in the story. It seems possible that children may have no difficulty in the representation of positive beliefs as expressed through behaviour. The false-belief task, however, requires them somehow to represent the nonexistence of the object. The child must represent the place where the object is symbolically located if he is successfully to take the perspective of Maxi. In other words, the 4-year-old specifies the true physical location (where the object is really located) and fails on the task, whereas the 5-year-old specifies the symbolic location (where the object does not exist) and succeeds.

According to this analysis, passing the false-belief task may not reveal the child's first understanding of mental events. It may reveal the beginning of the ability to represent "nonexistence at a place," however. Bateson (1972b) discussed the "mysterious step from the iconic to the verbal" in describing the differences between the signalling systems of animals and men. The ability to perform correctly on false-belief tasks may involve a first level of integration between an "iconic" system of representation, which is adequate for "positive representation" and basic intentional communication, and a second, "symbolic" type of representation. Reasoning about the nonexistence of objects at a place may require a symbolic (noniconic) mode adequate to the task of representing counterfactual (negative) states of affairs.

Such an ecologically based developmental progression, involving the intercoordination of iconic and symbolic representation, may still yield the conclusion that children have difficulties with false beliefs. The difference is that the comprehension of false beliefs is not divorced from the perception of true beliefs, nor does it distance the developing child from what is real about minds. Such an analysis raises the possibility that the problem for the child may be one of coordinating different types of representation. This is not the same thing as constructing a theory that other people have minds, nor does it amount to introspection or the simulation of other minds. Such a fine-grained analysis of the representational processes involved could enable us to reunite research on reasoning about mental life with the ecological tradition that rightly insists that minds are embodied and in the world.


Knowing levels and the child's understanding of mind

Robert L. Campbellᵃ and Mark H. Bickhardᵇ

ᵃDepartment of Psychology, Clemson University, Clemson, SC 29634-1511, and ᵇDepartments of Philosophy and Psychology, Lehigh University, Bethlehem, PA 18015
Electronic mail: ᵃ[email protected] and ᵇ[email protected]

[Gol, Gop] We are pleased to see that the developmental study of children's conceptions of mind is beginning to have an impact on philosophy of mind. After all, if we take the program of genetic epistemology seriously, that is how things are supposed to work. Gopnik presents a solid summary of her own research, and of studies by Flavell, Wimmer, Perner, Wellman, and others, showing that there is a major transition in children's understanding of their own and other people's minds around age 4. Before this transition, children have trouble understanding their own and other people's false beliefs, deceiving other people with the express goal of getting them to accept false beliefs, and differentiating the way an object looks from the way it actually is. Gopnik, like a number of other investigators, proposes that children are developing a "metarepresentational" capacity to think about their own and other people's mental representations. Like most of these other investigators, she interprets this as a representational theory of mind.

From an interactivist standpoint, we would agree that a metarepresentational capacity develops around age 4, but we do not cast that capacity as a theory. Instead, we draw on a conception of levels of knowing (Bickhard 1973; 1978; 1980; Campbell & Bickhard 1986) whose basic intuition is as follows: Knowing is accomplished by a goal-directed system interacting with an environment. Knowing is irreflexive; the system can know properties of the environment by interacting with it, but it cannot know anything about itself, even though some of its own properties might be useful to know. A subsystem that interacts with the knowing system, much as the knowing system interacts with the environment, however, could know these properties. Specifically, a second-level system could know and learn about the first-level system by interacting with it. Once the hierarchy gets started, a third-level system could know things about the second-level system; a fourth-level system could know about the third, and so on, unboundedly.

According to interactivism, reflexive consciousness requires a second knowing level to interact with the first. In that manner, it is possible to acquire knowledge about knowledge and belief. The hierarchy of knowing levels is also implicated in development through stages; instead of being defined in terms of characteristic mathematical structures like the Piagetian stages, knowing-level stages are defined in terms of the level of knowing at which knowledge is being constructed. The process that is responsible for ascension to the next knowing level we call reflective abstraction, to borrow a term from Piaget (1977; 1986).

Understanding false beliefs, engaging in deliberate deception, sorting out appearance from reality - and very likely other developments not mentioned by Gopnik, like the emergence of autobiographical memory (Nelson 1992) - are all instances of the onset of level 2. So, we would claim, are changes in causal and classification reasoning that require reflective abstraction on prior understandings, but not thought about belief as such (Campbell 1992; Campbell & Bickhard 1986). All these changes begin around age 4.

The knowing-levels conception also has an important bearing on the role of maturation in the changes that happen around age 4. Gopnik states:

One possibility might be that the 3- to 4-year-old shift is the result of the maturation of an innately determined capacity. . . . So far as I know no one actively working in the field, not even Leslie, has suggested that the 3- to 4-year-old shift is the result of the maturation of such a module. (note 8)

From an interactivist standpoint, it is only possible to ascend to knowing level 2 if a physically differentiated subsystem is present that can interact with the level 1 knowing system. (Once level 2 knowing is possible, higher levels can be reached on a purely functional basis, without extra hardware.) An empirical consideration of the earliest possible signs of level 2 knowing in human development led Bickhard (1973; 1978) to place the earliest transition at age 4; consequently, interactivism predicts the maturation of the physical second-level knowing system (not a Fodorian module!) around this age. Obviously, the maturational hypothesis needs to be investigated at a neural level; we have been proposing and elaborating this idea for nearly twenty years (Bickhard 1973; 1978; 1980; 1992; Campbell 1992; Campbell & Bickhard 1986).

Now on to some secondary themes. Gopnik, like Perner (1991b), Wellman (1990), and apparently most others in this line of work, attributes to young children a "theory" of mind (sect. 6). The reasons for doing so have never appealed to us. As Goldman rightly points out (in his sect. 10), such talk of theories is extremely loose, ignoring, for example, any reference to nomological or lawful generalizations. Indeed, all that Carey (1985), Murphy and Medin (1985), Keil (1989), Perner (1991b), Wellman (1990), and others in this camp seem to require of theories is that they be coherent networks of concepts that can be used for predicting and explaining. This formulation borders on the vacuous; given what we know about human knowledge, even the limited sorts possessed by newborn babies, is there any of it that wouldn't qualify as a "theory"? Would a Piagetian scheme not qualify? A set of production rules? (In treating "information-processing (IP) alternatives" in sect. 5 as mere "performance" considerations, Gopnik, perhaps unwittingly, trivializes IP theory.) The only thing that would not qualify as a theory is a collection of atomistic, self-encapsulated, encoded concepts. From the interactivist standpoint, knowledge simply cannot take that form (Campbell 1992). For Goldman, however, it can, and he unwittingly lends credence to the "theory-theory" by presenting as his alternative an unorganized aggregate of mental state concepts. Our own preference is to follow those who study the use, modification, and testing of explicit hypotheses and theories later in development (notably Kuhn et al. 1988) and to restrict talk of theories to that arena.

We must also comment on Gopnik's peculiar and apparently distorted invocations of Gibsonian theory (sect. 3.2 and note 5), including her claim that 3-year-olds, who cannot yet think about representation, must therefore have a "Gibsonian" conception of the relation of mind to world. In her assertion that "the relation between real things in the world and our perception of them is a direct causal link, almost a transference," she is conflating Gibson (1966; 1977; 1979) with Dretske (1981). There is a similar conflation with Dretske in her assertion that a Gibsonian approach cannot deal with error in the form of false beliefs or misleading appearances; were this the case, why, for instance, did Gibson try to explain visual illusions?

More misunderstandings of Gibson and of perception show up in Gopnik's claim that our belief, as adults, in the special and privileged nature of our first-person knowledge of ourselves, is an "illusion of expertise" (sect. 7). Gopnik asserts that when a chess master claims to see the forces threatening a king, this cannot be real perception, because that would have to involve an

experience [that is] reliably, and reasonably directly, caused by the object. . . . In developing forms of expertise, we construct an implicit theory of the realm in which we are expert. Various kinds of genuine perception act as important evidence for that theory. . . . Given this [genuinely perceptual] evidence, or even a single piece of it, the diagnostician draws on vast, nonperceptual, theoretical knowledge to make implicit inferences about the patient. He quite appropriately applies the theory, "the patient has cancer" . . . . [But] from his first-person view, the cancer may simply be perceived.


Note again the emptiness of talk about implicit theories - what kinds of knowledge would Gopnik exclude from counting as theory? Moreover, this argument ignores the discussion of genuine, extended kinds of perception in Gibson. Indeed, Gopnik's strictures seem to rule out any and all perceptual learning! When chicken-sexers think they can see whether a newly hatched chick is male or female, is this just another "illusion of expertise"? At a deeper level lie questions like: What does it mean to say that perception is direct? Could perception be mediated by encodings? In what sense does perception involve inference? Gopnik just assumes that theory involves mediating encodings and perception does not (for more about these issues, see Bickhard & Richie 1983).

Gopnik has the advantage that her ideas are grounded in a solid, fairly well elaborated research program in developmental psychology. By contrast, we find Goldman's own research program to be considerably less promising. His convoluted investigations of functionalism and folk psychology strike us as largely beside the point, because they all rest on an inadequate model of categorization. In his section 2, he proposes that people ascribe mental states to themselves and to others by matching category representations (CRs) for mental state words to instance representations (IRs) of mental states (or, in his preferred alternative, to mental state instances as such). "The content of such an IR will be something like 'A current state (of mine) has features φ1, . . . , φn.' Such an IR will match a CR having the content: φ1, . . . , φn. Our aim is to discover, for each mental word M, its associated CR. . . ." Now the recourse to matching is widespread in cognitive science; conventional information-processing models like those of Anderson (1983) and Newell (1990) depend on it, because production rules fire only when a match is detected between their symbolic initial conditions and symbols present in working memory.

But matching is completely useless for explaining categorization or pattern recognition. Matching models raise the same epistemological questions all over again in microcosm. If pattern recognition has to be explained by the matching of component features in encoded representations, is not the matching of the features itself just as badly in need of explanation? How does φ1 in a CR come to represent "the same" φ1 in an IR? By parity of argument, this would require us to match up subfeatures φ11 through φ1k between φ1 in the CR and φ1 in the IR; those, in turn, would embroil us in the matching of subsubfeatures; and so on ad infinitum. This is, in any case, a style of argument that should be familiar to contemporary philosophers of mind, most notably via Kripke's (1982) interpretation of the later Wittgenstein.

The seduction here is that, since φ1 in the CR is in the same notation as φ1 in the IR, it would seem that this match could be performed purely functionally - by some sort of direct comparison, perhaps. But nothing in a CR is identical to things in an IR, so this match must be a judgment of similarity or identity in some respects, and it is this sort of judgment that was to be explained in the first place. Note that if judgments of match were naturalistically cashable this easily then the general problem of intentionality would be enormously simpler, if not already solved.

Besides, if connectionists have accomplished anything at all in recent years, they have shown how patterns can be differentiated without reliance on symbols or matching. Yet Goldman pays no heed to connectionism whatsoever. Interactivism (Bickhard 1973; 1980) offers an alternative as well, though we lack the space to expound it here.

In sum, ongoing research about children's understanding of mind has proven instructive and challenging for psychology and philosophy alike, and we hope it will prosper. At the same time, we hope that more attention will be devoted to thinking about the deep and sometimes slippery epistemological issues that are bound up with this kind of research. With the exception of Perner (1991), workers in this field have displayed more enthusiasm in designing and running empirical studies than care in thinking about their implications.

There's more to mental states than meets the inner "I"

Kimberly Wright Cassidy
Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104
Electronic mail: [email protected]

[Gop] An understanding of mental states consists of at least two components. First, one may know what one's own mental states are as represented by the ability to state or identify those beliefs. Second, one may have an understanding of the properties of one's mental states; for example, believing that beliefs are intentional, or understanding the origins of different aspects of knowledge. Gopnik offers very compelling evidence that to develop the second component of mental state understanding, children must construct a theory about mental states by utilizing evidence both from self and other. Specifically, Gopnik posits that children learn about certain aspects of mental states, such as intentionality, through the construction of a set of beliefs about psychological states which the child applies equally to both self and others. Gopnik's position contrasts with the commonsense view that children learn about this aspect of mental state understanding directly from their experience of their own mental states, which would make it different from the way in which they learn about others.

Although Gopnik provides compelling support for her theory in terms of the child's beliefs about mental states, her analysis does not address the first element of mental state understanding identified above. In particular, the basic knowledge of what one's current mental states are may not arise in the manner postulated by Gopnik. Simple reports of current beliefs and desires may not require the development of a representational theory of mind, but may instead arise directly from psychological experience. If so, knowing one's own mental states and those of another would involve different processes. Which alternative is correct is an empirical question left unresolved by the evidence presented by Gopnik.

Available evidence in fact suggests that there may be a difference between children's ability to report their own and others' mental states. Children use emotion words first in relation to self (Smiley & Huttenlocher 1989). In addition, work on children's understanding of the origin of knowledge suggests that they have difficulty in determining the knowledge possessed by others (Perner & Ogden 1988; Povinelli & DeBlois 1992; Wimmer et al. 1988a; for an opposing view, see Pillow 1989; Pratt & Bryant 1990); yet they have little trouble stating their own knowledge or using their knowledge in guiding their behavior (Povinelli & DeBlois 1992).

The typical experimental design takes for granted that children can accurately report their own current beliefs. To consider just one example, Gopnik and Slaughter (1991) "induced" a mental state in the child; this state was then adjusted and the child was asked to recall the prior mental state. Although 3-year-old subjects sometimes had difficulty remembering their own past mental states, they appeared to have little or no trouble reporting their current mental states, including their beliefs. It remains an empirical question, however, whether children are actually privileged about their own mental states in this regard, or whether they have this ability for others as well: Would these children do as well in reporting the current belief of others or the past pretenses of another when that pretense or belief was not directly reported? If so, this finding would be consistent with Gopnik's premise and would buttress her theory by extending it to the conditions under which a child comes to know the mental states of self or others. Specifically, further research may demonstrate that this aspect of a child's belief also emerges from the construction of a theory for both self and others that precedes the later representational theory of mind developed at age 4. The data currently available, however, suggest this will not be the case.

Why might children develop beliefs about their mental states differently from the way they identify their mental states? Perhaps evolution favored the development of certain aspects of mental state understanding in regard to others rather than the self. It is easy to imagine how an understanding of the formation of mental states in others, or an understanding of the possibility of misrepresentation in others, could provide a selective advantage to those that had it. An understanding that mental states such as belief are mediated by representations of the world allows the idea of misrepresentation. Understanding the sources of mental states like belief and knowledge also allows one to deliberately manipulate the mental states of others.

Recognizing that the mental states of others do not exactly and completely reflect what is in the world further allows one to engage in pedagogy and makes the transmission of information more efficient. For example, if an animal knows that a relative is ignorant, he can take explicit steps to redress the problem. In addition, if animals are aware of the nature of the knowledge of another, they can avoid transmitting redundant information. This is especially important when there is great cost in transmitting information. [See Whiten & Byrne: "Tactical Deception in Primates" BBS 11(2) 1988 and Cheney & Seyfarth: Multiple book review of How Monkeys See the World, BBS 15(1) 1992.]

In contrast, it is difficult to find reasons why an understanding of such aspects of mental states regarding the self would provide an individual with an advantage. It seems adequate to have access to one's psychological experiences in order to use them to direct behavior. The more elaborate beliefs about beliefs, such as their intentionality, do not appear to be particularly urgent in regard to the self. Such beliefs do appear to be necessary, however, both to determine the mental states of others and to alter them in desired ways. It is therefore at least reasonable to consider that these aspects of the understanding of mental states evolved because of the selective advantage they provided in regard to others.

A weak version of the evolutionary account would predict that an understanding of the properties of mental states would emerge simultaneously for self and others. The emergence of a theory of others would be selected but could then be applied equally to self and others. A strong version of this evolutionary account would predict that children may show an understanding of the properties of others' mental states before their own. Although Gopnik has presented many studies demonstrating that children have equal difficulty with certain aspects of both self and others' mental states, there are few studies that directly compare understanding of exactly the same aspect of mental states using the same procedure for both self and other. Gopnik and Astington (1988), however, compare children's performance on the false-belief and representational-change tasks. Both tasks require children to understand the concept that beliefs can misrepresent reality: for false belief the understanding is in regard to others' beliefs, for representational change it is in regard to one's own beliefs. Gopnik and Astington find that for each particular task, children consistently understand the false-belief question before they understand the representational-change question. It might be more direct to test whether children can report the origin of others' mental states before their own, or others' past desires and beliefs before their own.

In summary, while Gopnik has convincingly shown that we may learn about the properties of our mental states through theory construction rather than through first-person privilege, it is possible that the identification of current mental states is done differently for self and other. In regard to the understanding of the properties of mental states, however, evolution may have made others the primary focus of theories of mind.

Self-ascription without qualia: A case study

David J. Chalmers
Center for Research on Concepts and Cognition, Indiana University, Bloomington, IN 47405
Electronic mail: [email protected]

[Gol] In section 5 of his interesting target article, Goldman suggests that the consideration of imaginary cases can be valuable in the analysis of our psychological concepts. In particular, he argues that we can imagine a system that is isomorphic to us under any functional description but lacks qualitative mental states, such as pains and color sensations. Whether or not such a being is empirically possible, it certainly seems to be logically possible, or conceptually coherent. Goldman argues from this possibility to the conclusion that our concepts of qualitative mental states cannot be analyzed entirely in functional terms.

This thought-experimental methodology seems sound to me, and I agree with Goldman on the logical possibility of these absent-qualia cases (although many functionalists would not; e.g., Armstrong 1968; Dennett 1991; Shoemaker 1975). I think this methodology can be taken further, however, yielding conclusions that oppose those that Goldman draws elsewhere in the target article.

Consider: If it is logically possible that my functional isomorph might lack qualia entirely, it seems equally logically possible that there could be a qualia-free physical replica of me. We have already seen that there is no conceptual entailment relation from the functional properties of a system to the qualitative properties; it seems even clearer that there is no entailment relation from the nonfunctional implementational details to qualia. (What conceptual entailment could neurophysiological detail possibly provide that silicon, or even Chinese nations, could not?) So let us consider Zombie Dave, my qualia-free physical replica. Zombie Dave is almost certainly an empirical impossibility, but he is a conceptual possibility.

First, let us ask: Does Zombie Dave have beliefs? It seems to me that he does. If we ask him where his car is, he tells us that it is in the driveway. If we ask him whether he likes basketball, he tells us that he does. If we tell him that a basketball game is starting across town in half an hour, he immediately heads for the driveway, an action that seems to be best explained by the hypothesis that he wants to go to the basketball game, believes that his car will get him there, and believes that his car is in the driveway. All of the usual principles of psychological explanation sanction attributing beliefs to Zombie Dave; explaining his action without the attribution of beliefs would be a fearsomely complex task. (It might be objected that Zombie Dave lacks the external grounding required for belief contents, but we can avoid this problem by stipulating that his environment and history are physically indistinguishable from mine.)

Goldman argues in section 8 that beliefs, like perceptual states, are typically accompanied by qualia; but much more would be required to conclude that qualia are essential to a state's being a belief. (Searle [1990] has given an argument in this direction, but it does not seem to have been widely accepted.) Zombie Dave's beliefs may not be colored by the usual phenomenological tinges, but it seems reasonable to say that they are nevertheless beliefs. Beliefs, unlike qualia, seem to be characterized primarily by the role that they play in the mind's causal economy. (To illustrate the difference, note that it seems coherent to be an epiphenomenalist about qualia, whether or not one finds the position plausible; but there seems to be something conceptually wrong with the idea that beliefs could be epiphenomenal.) So qualia-free believers like Zombie Dave are quite conceptually coherent, and qualia do not seem to be an essential part of our concept of belief.

Even if we resist the idea that Zombie Dave has beliefs, we can still use him to show that qualia cannot be the primary mechanism in the self-ascription of our mental states. For Zombie Dave ascribes precisely the same mental states to himself as I do! By some process or other, he will tell you that he thinks that Bob Dylan makes good music. How can this ability for self-ascription be explained? Clearly not by appealing to qualia, for Zombie Dave does not have any. The story will presumably have to be told in purely functional terms. But once we have this story in hand, it will apply equally to proud possessors of qualia such as ourselves. The self-ascription mechanisms that Zombie Dave uses are equally the mechanisms that we use; at most, the difference consists in the fact that his ascriptions might be wrong, whereas ours are right. Therefore there is no need to invoke qualia in the explanation of how we ascribe mental states to ourselves. Zombie Dave does the job, presumably, either by reasoning from nonqualitative evidence or by simply being thrown into the appropriate state. It seems likely that we do it the same way, and that qualia are a red herring.

All this seems to lead to a rather epiphenomenalist view of qualia. Note, for example, that the argument in the above paragraph applies not only to the self-ascription of beliefs, but also to the self-ascription of qualia; so qualia do not seem to play a primary role in the process by which we ascribe qualia to ourselves! (Zombie Dave, after all, ascribes himself the same qualia; he is just wrong about it.) I am happy enough with the conclusion that qualia are mostly just along for the ride, but I suspect that Goldman and others will not be. It seems to me that the only way to avoid this conclusion is to deny that Zombie Dave is a conceptual possibility; and the only principled way to deny that Zombie Dave is a conceptual possibility is to allow that functional organization is conceptually constitutive of qualitative content. This is probably a step that Goldman does not wish to take, as it would negate many of his conclusions, but there may not be any tenable middle ground between functionalism and epiphenomenalism.

The naked truth about first-person knowledge

Michael Chandler and Jeremy Carpendale
Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada V6T 1Z4
Electronic mail: [email protected]

[Gop] Common sense has it that, although capable of a reflective turn that allows for the direct reading off of certain aspects of our own immediate psychological lives, we are more or less blind to other important features of our own past and present psychological functioning and so must speculate about ourselves, much as we are obliged to speculate about the private experience of others. In her target article, Gopnik settles for an altogether simpler picture by attempting to discredit as pure illusion the ordinary belief that we sometimes have direct or privileged knowledge of our own internal psychological states. This is tough talk. We live in a culture heavily committed to the notion that reflective moments are good, that knowing one's self is a worthy and attainable ambition, and that such self-insights will probably aid us in our attempts to understand others. None of this succeeds, of course, in demonstrating the actual possibility of such direct first-person knowledge, but it does underscore the importance of coming well armed when planning to call out our culturally sanctioned conviction that we do have privileged knowledge about certain of our private mental states. Merely announcing that the emperor has no clothes won't do.

Gopnik recognizes all of this and does come armed with certain congenial research findings from the literature on children's so-called developing theories of mind - evidence that she believes knocks down all opposing claims about first-person privilege by demonstrating that 3-, but not 4-year-olds typically fail when asked to report on their own recently outmoded beliefs. The kernel idea here, according to Gopnik, is that if people really did have direct access to the private workings of their own mental life then that fact ought to be evident right off the bat. If, by contrast, young persons prove to be initially wrong about themselves, and wrong in just the same measure that they are typically wrong about the mental lives of others, then, by Gopnik's lights, that would count in favor of the dismissive view that direct knowledge of one's own mental life is a myth and that first- and third-person knowledge have an identical inferential history. Several things, we suggest, are wrong with this picture; the most obvious ones are detailed below.

First and foremost, if one wanted to know whether people really do have direct first-person knowledge of certain aspects of their ongoing mental life then the right way to begin would be to ask them what they were currently thinking or feeling or desiring right then and there. The great bulk of psychological testing, including most of Gopnik's own attempts at studying children's developing theories of mind, typically proceeds in just this fashion, by asking subjects to report on their current thoughts and feelings. In the present case, however, Gopnik unaccountably attempts to approach the problem in a different and curiously roundabout way by asking children about certain of their own past beliefs (i.e., that Smarties are likely to be kept in Smarties boxes) which she has just worked to invalidate (by, for example, substituting pencils for the expected Smarties). Two-year-olds, it turns out, are too young to answer such questions. By 3½ they commonly get all such questions right. However, for a brief few months around 3 years of age most youngsters mistakenly claim that they always knew that the box contained pencils. That is interesting, but it does not go any distance at all toward challenging the commonsense belief that persons have direct first-person knowledge of certain of their mental states. Nor, as Harris (1991) has also pointed out, is there any compelling reason why it should. As usually understood, claims about first-person privilege are meant to be claims about the ability to read off certain foveal matters that are operative on the center stage of one's current mental life; they are not claims about our talents for remembering wrong-headed beliefs to which we were previously committed. Nor would it seem particularly adaptive if we did prove expert at stirring the ashes of our earlier cognitive mistakes.

Gopnik is clearly aware of this problem and tries to counter it in various ways. She points out, for example, that the specific past beliefs about which she inquires are not part of some remote past but rather "immediate" past beliefs. Still, past beliefs are past beliefs, especially when, as in Gopnik's procedures, they are currently mistaken and so deserve to be discarded. She also provides data meant to demonstrate that children of the same age are good at keeping track of recently satiated desires and earlier moves in games of pretense. None of this seems especially to the point, however, because, whereas such desires and as-if beliefs may lose their currency, they are not, like outmoded beliefs, seriously at odds with the way the world is presently taken to be. Remembering when one is full that one was previously empty, or that the building block that is today's pretend car was yesterday's pretend train invalidates nothing. By contrast, if left under foot, yesterday's wrong beliefs not only risk cluttering things up, but also actively contradict our more up-to-date views. For these reasons it does not seem implausible that novice contenders might selectively turn a blind eye to their own outmoded beliefs, but not their spent desires and fantasies. Or at least this is just as plausible, and does much less violence to common sense, than imagining, as Gopnik does, that we are all totally blind to the intentional nature of our own mental lives.

Of course, none of these objections demonstrates that young persons do use their own first-person experiences as analogical stepping stones toward a more adequate account of the mental lives of others. But neither do the oblique bits of data presented by Gopnik demonstrate that they do not; and it is she, after all, who is suggesting that the emperor has no clothes.

Categorization, theories and folk psychology

Nick Chater
Department of Psychology, University of Edinburgh, Edinburgh EH8 9JZ, Scotland
Electronic mail: [email protected]

[Gol] Goldman's argument against a functionalist or theory-based account of folk psychological terms is, I shall argue, both question-begging and fallacious. It is question-begging because Goldman begins by assuming, without argument, a categorizational view of concepts to the effect that to have a concept is to have an internal state (what Goldman calls a CR) which is active just when concept instances are present. He then argues that this assumption is incompatible with a theory-based view of concepts, according to which having a concept involves having an entire theory of the relevant domain. This is because the theoretical properties of a concept instance will generally not be available to the categorization system. An argument parallel to Goldman's, however, could start by assuming that concepts are defined in terms of theories and argue that, since it is not possible to distinguish instances from noninstances of a concept according to their theoretical properties, the categorizational account cannot be correct. Both arguments beg the question of whether a categorizational or theory-based view is more appropriate.

The argument is fallacious because, in any case, Goldman does not establish that the categorizational and theory-based views are incompatible. To see this, consider how Goldman's argument fares with a concept like mass, which is, after all, a paradigmatically theoretical term, connected with force, acceleration, gravitational laws, and so on. An object can be classified as having a certain mass purely in virtue of visual or tactile perceptual input, without any knowledge of causes or effects of that object which hold in virtue of that mass (Goldman's first difficulty); without any knowledge of the relevant subjunctive properties of the object, such as how it would move if various forces were applied (Goldman's second difficulty); and without knowing the type identity (i.e., category) of theoretically relevant properties, such as the forces acting on the object, and thus being sucked into a classificatory regress (Goldman's third difficulty). None of these problems arise, because classification is effected by detecting perceptual correlates of mass, rather than its constituent properties.

Goldman recognizes this possible rejoinder. He notes that a cube of sugar could be recognized as sugar by its whiteness, hardness, and granularity rather than its theoretical properties such as solubility. More generally, a theoretical property F can be detected by its correlated perceptible property E. Having recognized this possible way out, Goldman then gives a very puzzling argument against it. He claims that this correlation could only be learnt in the first place given that the learner has some independent way of recognizing Fs, thus bringing back the original problem. This argument assumes that the learning of a correlation must occur by induction from observed E, F pairs, but this is a very limited view of how learning can occur. For example, the learner could simply be told the theory relevant to Fs, the role that F plays within that theory, and the fact that F correlates with E. Goldman cannot retort that the learner cannot learn about Fs because having the concept of F presupposes the ability to distinguish Fs from non-Fs, as this would just beg the question against a theory-based account of concepts.

Finally, it is worrying that Goldman's argument makes no appeal to special properties of folk psychological concepts. If this form of argument were valid, we could conclude that no concepts are theory-based. The argument would be that for all concepts there must be some category representation (CR) which is activated when concept instances are present and not otherwise, and this will simply not be possible for theoretical terms. The conclusion that people cannot have the concepts "proton," "gene," or "force" is counterintuitive enough to provide a reductio of Goldman's argument, if one were needed.

How directly do we know our minds?

Maria Czyzewska (a) and Pawel Lewicki (b)

(a) Department of Psychology, Southwest Texas State University, San Marcos, TX 78666; (b) Department of Psychology, University of Tulsa, Tulsa, OK 74104

[Gop] In her target article, Gopnik argues that not all psychological states are subject to direct, "first-person" experience and that this is particularly evident in the case of "experience of intentionality." She does not "deny that there are full, rich, first-person psychological experiences of the Joycean or Woolfian kind" and "that there may be cases in which psychological states do lead directly to psychological experiences, cases in which there is genuine perception of a psychological state." She believes, however, that in the case of intentionality, we are subject to an "illusion of direct perception" produced by our implicit theory of mind. In this comment, we will focus on Gopnik's assumption that there is a possibility of "direct" and "genuine" access to one's own mental states.

From the perspective of cognitive psychology, all human "psychological states" and "psychological experiences" are indirect in the sense that they are only a final product of many stages of sophisticated cognitive processing. The purpose of these operations is to organize, interpret, and translate the "objective" stimulation into subjectively meaningful experience. Every stimulus that is about to be processed and "become an experience" for a subject (e.g., visual shapes, spoken words, or social events) first has to be preprocessed by a system of inferential rules (i.e., encoding algorithms, which guide the process of interpretation of stimuli). In other words, the specific aspects of incoming information (stimuli) consciously noticed by the subject and the subjective meaning of that information depend not only on the objective characteristics of the stimuli but also on the preexisting encoding algorithms used to translate the stimuli into subjectively meaningful representations. The outcome of this translation (e.g., emotional reactions, impressions, preferences, judgments, etc.) is determined by the specific inferential rules utilized by the encoding algorithms.

One of the elementary properties of encoding algorithms is their general independence from conscious cognition and consciously controlled belief systems (declarative knowledge). It has been repeatedly demonstrated that people often do not know what specific aspects of information and what kinds of inferential strategies were responsible for their (even very simple) judgments and impressions (e.g., Lewicki 1986; Nisbett & Ross 1980). For example, few people are able to articulate any of the encoding algorithms they use to determine whether a human face is attractive or looks "likable." Although most such inferential rules are unavailable to one's conscious awareness, we clearly must have some working knowledge that produces the meaningful output (i.e., first impressions) "automatically"

BEHAVIORAL AND BRAIN SCIENCES (1993) 16:1 37


Commentary/Gopnik/Goldman: Knowing our minds

and in so short a time that we do not notice any delay. The way these algorithms operate (i.e., the actual content of the inferential rules) as well as their outcome (e.g., emotional or behavioral reactions) may be independent from or even inconsistent with consciously held beliefs or preferences.

Although people cannot articulate most of the encoding algorithms they use, and these algorithms operate without mediation of perceivers' conscious awareness, the development of procedural knowledge results from experience and thus has been learned at some point. There is evidence that the development of encoding skills cannot be explained by mere automatization through conscious experience (Lewicki et al. 1992). They require learning processes qualitatively different from those operating under conscious control (e.g., subjects in the covariation acquisition experiments are not even aware that they acquired any knowledge, let alone able to articulate what they learned).

The implicit learning process which plays a fundamental role in the acquisition of encoding algorithms was demonstrated in research on nonconscious processing of information about covariations (for a recent review, see Lewicki et al. 1992). It has repeatedly been shown that the human cognitive system is capable of nonconsciously detecting and processing information about covariations (i.e., contingencies) between features and events in the outside world. The implicitly acquired information about covariations results in the development of procedural knowledge that participates in the encoding of relevant stimuli encountered subsequently. For example, the nonconscious processing of covariation between a facial feature x and a personality characteristic y results in the development of a tendency to interpret (encode) behaviors of subsequently encountered people who possess the feature x as indicative of the personality characteristic y.
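
The contingency mechanism described here can be sketched in code. This is an illustrative toy model, not an implementation from the Lewicki research program: it simply tallies co-occurrences of a feature with a trait during exposure and then uses the learned contingency to bias the "encoding" of newly encountered individuals. The class and method names are invented for the example.

```python
# Toy sketch of covariation-based encoding (hypothetical, for illustration only):
# joint counts of (feature, trait) pairings accumulate during exposure and later
# bias the interpretation of anyone who shows the feature.
from collections import Counter


class CovariationEncoder:
    def __init__(self):
        self.joint = Counter()    # counts of (feature, trait) co-occurrences
        self.marginal = Counter() # counts of each feature alone

    def observe(self, feature, trait):
        """Record one encounter pairing a feature with a trait."""
        self.joint[(feature, trait)] += 1
        self.marginal[feature] += 1

    def encode(self, feature, trait):
        """Estimate P(trait | feature) from past encounters -- the strength
        of the tendency to read the trait into a bearer of the feature."""
        if self.marginal[feature] == 0:
            return 0.0
        return self.joint[(feature, trait)] / self.marginal[feature]


enc = CovariationEncoder()
# A handful of incidental pairings is enough to create an encoding bias.
for _ in range(4):
    enc.observe("long_face", "likable")
enc.observe("long_face", "hostile")

print(enc.encode("long_face", "likable"))  # prints 0.8: a biased interpretation
```

Note what the sketch leaves out: the actual nonconscious processes are claimed to detect contingencies the subject cannot report and, via self-perpetuation, to keep strengthening a bias even without further supporting evidence, which a bare frequency counter does not capture.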

The processes responsible for nonconscious detection of covariations (and for storing them in memory in the form of encoding algorithms) appear to be ubiquitous. They have been demonstrated using many types of stimulus material, experimental situations, and subject populations. Moreover, one's ability to implicitly learn covariations was found to be superior to the ability to detect the same information in a consciously controlled manner (e.g., in one of the studies [Lewicki et al. 1987], subjects nonconsciously acquired information about a four-way interaction, which clearly went beyond the "complexity limits" of consciously controlled processing of the same information in those subjects).

The results of research on the so-called self-perpetuation effect and other related mechanisms (e.g., nonconscious indirect inference, nonconscious transfer and generalization, etc.; Hill et al. 1989; 1990; Lewicki et al. 1992) indicate that many encoding algorithms may develop in a manner that is relatively independent from, or even inconsistent with, the "objective" nature of the subject's environment. It has been demonstrated that the development of new encoding algorithms may be triggered by a very small number of instances which "happened" only incidentally to be consistent with some covariation. Even if the perceiver encounters no further supportive evidence, the encoding biases may continue to develop in a self-perpetuating manner, with no support in what the person encounters in the outside world. Considering the decisive role of encoding algorithms in generating the subjective meanings of one's experience, the processes of self-perpetuation may account for many individual differences in how people experience and respond to the environment.

In conclusion, the evidence from the cognitive research on nonconscious information processing (reviewed here) demonstrates that one of the fundamental properties of the human cognitive system is its ability to nonconsciously acquire procedural knowledge which is capable of automatically influencing subsequent perceptions. From that perspective, one's "psychological states" and "psychological experiences" are all determined by the same system of encoding algorithms that are functionally independent of consciously controlled knowledge. The "real work" of regulating different aspects of human cognitive activity (e.g., generating overt responses, mental states, or emotional reactions) appears to be done at a level which is inaccessible to our consciousness. Our phenomenological experiences (e.g., perception of psychological states) are the final products of these complex inferential operations governed by nonconsciously acquired encoding algorithms. Therefore, the "experience of intentionality" discussed by Gopnik is not an exception but rather a very good illustration of the general principle that the human perception of mental states is "indirect" and results from nonconsciously acquired knowledge which is beyond one's control. In other words, in the perception of their own "minds," all people are subject to "expert's illusions."

The anthropology of folk psychology

Steven Daniel
Center for Cognitive Studies, Tufts University, Medford, MA 02155
Electronic mail: [email protected]

[Gol] Any philosophical account of mental kinds and mental concepts will have to say something about our practice of ascribing mental states. But how much it will have to say is unclear, and it is far from obvious that it will have to tell us everything we are interested to know.

Classical functionalism provides an account of mental state ascription. As Goldman writes, it holds that "just being in a mental state automatically triggers a classification of yourself as being in that state" (sect. 6, para. 5). In other words, classical functionalism holds that it is part of the functional role of pains, for example, to produce (or tend to produce) in a subject a belief in their own existence, and this is supposed to explain how we ascribe pains to ourselves. Goldman, however, complains that this account of mental self-ascription is inadequate; that it fails to provide any kind of interesting microstory "of how we make - or how our systems make - mental classifications" (sect. 6, para. 6).

There is truth in this charge. Classical functionalism does not provide a micro-account of mental self-ascription of the kind Goldman here envisions, one that proceeds in terms of a person's matching IRs and CRs. Classical functionalism, in other words, is not the same thing as Goldman's representational functionalism. But why is this an objection to classical functionalism? Despite what Goldman says (sect. 6, paras. 3-4), the "automaticity" characterizing the functionalist account certainly does not entail the nonexistence of an interesting microstory, and there is no obvious reason to believe that it is incumbent upon functionalism to provide whatever microstory is to be had, so long as its own macrostory is correct. Goldman claims that folk psychology, along with its philosophical descendant functionalism, "requires a model of how people ascribe mental states to themselves" (abstract) that goes further than any simple macrostory. This is a substantive claim, however, one that presupposes a demanding and controversial view of the function of folk psychology and the explanatory aims of classical functionalism.

Of course, even if the functionalist account of mental self-ascription can stand on its own, Goldman is right in thinking that cognitive science should try to go deeper than the functionalist macrostory. He thinks it might even succeed if it utilizes the phenomenological model of mental self-ascription that comes with his preferred version of representational functionalism. This, Goldman thinks, is where we will find our interesting microstory about mental self-ascription. The problem with this particular microstory, however, is that by asking us to take phenomenology seriously in a scientific context, it opens a can of worms that we might later wish had remained unopened.


It is part of the business of cognitive science to limn the ontology of the mind. If cognitive scientists take phenomenology seriously, so seriously that they countenance talk of irreducibly qualitative features of experience, they have to carry on such talk in an ontologically committed tone of voice. This will lead many cognitive scientists to conclude that they are better off not talking about qualia at all, since it is notoriously difficult to reconcile the existence of qualia with physicalism. Goldman does not argue directly for the existence of irreducibly qualitative features of experience. He does cite a number of examples that are supposed to convince us of the existence of such features, but these examples are controversial, to say the least (cf. Churchland 1985; 1989; Dennett 1987; 1988; 1991a).

Is there another microstory in terms of matching IRs to CRs that does not rely on phenomenology? This is unclear. Goldman considers an alternative representational functionalist account according to which we detect that (1) particular mental state tokens have certain neural properties; and that (2) these neural properties are characteristic of a particular mental type; then we (3) attribute to ourselves a token of that type. He rejects this account, however, on the grounds that we have no personal access to the neural properties of our mental states. Nor does he think it will do any good to appeal to subpersonal access to such properties, since "cognitive scientists do not impute neural contents to . . . neural events" (sect. 4, para. 7). In effect, such an account would not fit into the general framework of Goldman's representational functionalism since it would not proceed in terms of the matching of IRs to CRs at the personal level.

Goldman gives us no reason to regard this as a defect, however. Indeed, he gives us no convincing reason to believe that the sort of account of mental state ascription he desires, involving a person's matching IRs to CRs, is at all possible within cognitive science. Perhaps it isn't. Perhaps, in other words, our psychological/intentional aspects simply do not "run deep enough" to find an echo in cognitive science. In that case, those of us who agree with Goldman that neural events do not have neural contents, but who remain unprepared to take qualia seriously in scientific contexts, might want to conclude that there is no microstory about mental self-ascription to be had by cognitive science.

This might smack of defeatism, but it should not. I am not suggesting that Goldman wants an account of mental state ascription that he is unwise to want. His mistake is simply in thinking that it is the job of cognitive science to provide it. If we want to know how people appear to rely upon the qualitative aspects of experience in ascribing mental states to themselves, perhaps we can investigate this without shouldering unwanted ontological commitments. Classical functionalism is free to endorse such an investigation so long as it is viewed, not as a psychology of folk psychology, but as a kind of anthropology of folk psychology that is not particularly concerned with ontological scruples. This would give Goldman the sort of account of mental self-ascription he is looking for. It differs from the account he provides only in that it would not view irreducibly qualitative features of experience as sufficiently real to be of concern to cognitive science.

One final comment concerns Goldman's attitude toward the theory-theory of folk psychology. Goldman's representational functionalism requires us to reject the theory-theory of folk psychology, especially the view that psychological interpretation involves our relying on a folk psychological theory. Goldman recommends that we reject these views in favor of a kind of simulation theory, according to which "one can predict another person's choices or mental states by first imagining himself in the other person's situation and then determining what he himself would do or how he would feel" (sect. 10, para. 9).

Setting aside the obvious questions this account raises, such as how much of myself I am to project into the other person's shoes, it raises the question of how, in the absence of a background theory, the envisioned simulation can provide an explanatory understanding of people's behavior. Simulations are sometimes explanatory, no doubt. We can simulate an airplane in a wind tunnel and perhaps explain why it crashed under the conditions it did. We can use a simulation in this way, however, only when we are already in a position to explain the behavior of the model (Churchland 1988b), and explaining the behavior of the model will require a background theory. This is of course not to downplay the importance of simulations. At times we might adjust the background theory in light of what our simulations reveal. The point is simply that resorting to a simulation presupposes that some theory or other is already in place, directing one's inquiry. Otherwise we could not get the simulation under way.

Intentionality, mind and folk psychology

Winand H. Dittrich and Stephen E. G. Lea

Department of Psychology, University of Exeter, Washington Singer Laboratories, Exeter EX4 4QG, England
Electronic mail: [email protected] and [email protected]

[Gol, Gop] Intentionality and theory of mind. Gopnik argues that the idea of intentionality as a cognitive construct is the most plausible interpretation of the empirical findings about children's understanding of intentionality. This argument lacks support for both methodological and structural reasons.

Methodologically, the idea of intentionality is given the status of a theoretical construct that is used to understand both itself and other constructs. That is the core idea of the concept of the "theory-theory." But Gopnik's attempt to confirm this construct by purely empirical means is highly problematic: What are the criteria by which children's intentionality can be recognized, and in any case, how can any hypothetical construct be confirmed?

Structurally, even if the experiences of self and of others do develop more or less together, it does not follow that both are based on the same underlying process (which Gopnik is assuming to be the cognitive construct of intentionality). Developmental parallelism is never conclusive evidence of common causation, and in any case the empirical findings presented can be interpreted in at least two quite different ways. The first is based on first-order knowledge, what Gopnik calls Searle's (1990) view, or the simulation view (Gordon 1986; Harris 1989, Ch. 3). According to this view, the development of an adequate knowledge of self requires a specific representational system which only develops at the age of 3-4 years and can then be used to understand the other-mind problem. Thus, what Gopnik described as the theory-theory is nothing more than the application or generalization of first-person knowledge in the domain of third-person knowledge. As Spitz (1957, p. 130) puts it: "The beginning awareness of the self is predicated on the awareness of the 'other'"; children attempt to simulate the other's mind by using the world of their own self. The identical relationship between the development of an understanding of self and other can be understood as a result of the developmental process in which an understanding of self is used to inform an understanding of others and vice versa. How children think of the self and how they conceptualize their experiences are intimately related issues. Gopnik rejects the simulation view because she has a concept of intentionality in which it is not the result of a psychological state but the result of the interpretation of a psychological state based on the experience of this state. This is like a latter-day version of the James/Lange theory of emotion (James 1884; Lange 1885).

Our second alternative to Gopnik's interpretation is not in contrast to the first but focuses on a different aspect. In this interpretation the shift in children's behaviour which is highlighted by Gopnik can be interpreted as synchronization between their representational abilities and their ability to express themselves adequately. Before this age there is a so-called developmental dilemma (Heckhausen 1983), in which representational ability and acting or communicating seem discrepant. This would explain why children can solve some tasks, which are mainly based on perceptual competence, and not others, which involve some competence in performance. It would also explain why representations of the self can be found at the age of about 2 years (18 months before Gopnik's behavioural shift), when the generalization to representing others does not seem to be possible, because the appropriate emotional basis is not fully developed. According to this view, too, the development of the representation systems for the self and for others is much more related than Gopnik assumed.

Both alternative interpretations imply that the question of what theory of mind children develop is less important than the question of the nature of the cognitive processes involved; and Gopnik's approach cannot answer this question. What is the evidence that children and adults have the same understanding of mental states or linguistic terms about psychological states, even if they use the same words? This problem can only be solved if the basis of our intentionality, namely the structure of objects or actors themselves, is taken into consideration. Gopnik's evidence is based mainly on examples from cognitive development. But the emerging sense of self as intentional agent is also seen in the infant's competence motivation, the precursor of achievement motivation (Heckhausen 1982), or the infant's ability for self-regulation (Kopp 1982). Information processing within the motivational, emotional, and communicative domains is just as important as its purely intellectual aspects for the development of children's conceptual knowledge about the physical and the mental world.

Intentionality and knowledge. In recent research on the perception of intentionality (Dittrich & Lea 1992) we have found that the identification of intentionality or meaning is a process of cognition but one grounded in perceptual input characteristics. It is based on the comparison between two sources of information - a visually offered one and a stored one. As in object recognition, if the comparison of the real object and its conceptualized representation yields equivalence, semantic recognition can take place. Thus, specific cognitive operations (e.g., "hybrid representations") must have developed or become available before comprehension can take place. One of these "hybrid" operations has to involve the relationship of the self to others or objects in the world. It is pointless to look for a specific definition or construct of intentional information. It receives its specific nature through the verification of perceptual and stored information as signs for something, not through the direct pick-up of intention from the stimulus, or through the specific meaning of certain signals or a specific "idea of intentionality." Perception of intentionality is possible because of the specific integration of different features; therefore, in order to understand the role of intentionality, our approach focused on the analysis of the cognitive processes involved. The traditional linear sequential model of object representation, with a linguistic stage at the end, should be extended toward a model of parallel processing of different visual aspects. The key feature of these operations is the use of concepts, and in this respect we agree with Goldman that folk psychology can be related to the psychology of concepts, and, in particular, to their developmental aspects.

Intentionality and mind. Goldman raises the important question of how people ascribe mental states to themselves. We agree with Goldman and Gopnik in their attempt to integrate into scientific theories psychological facts that are given only phenomenologically, facts that are strongly related to our everyday life and thus have ecological validity. We disagree, however, with the role they attribute to the phenomenology of the psyche. Both authors try to establish the primacy of psychological experience, compared to psychological states, for understanding our cognitive processes - Gopnik through the developmental history of the theory-theory and Goldman through the phenomenology of concepts. We suppose that the role of phenomenology is purely descriptive: It provides a body of psychologically realistic facts for which scientific psychology must seek explanations. In some cases scientific psychology and common sense may certainly be congruent, but in other cases they may diverge. Thus, phenomenology is of great value to scientific theories, but cannot become part of them and does not share their explanatory power. Scientific theories and theory-theories are different levels of understanding; they cannot be combined, nor are they rivals. The immediacy of the theory-theory makes it attractive, as it claims to be a purely empirical approach, but it may fail to understand the psychological processes involved, which are not in all cases directly accessible. Understanding requires a theoretical framework enabling us to get access to and integrate empirical phenomena. With such a framework, failures in children's performance may be as informative as the successes on which Gopnik's phenomenological approach inevitably focuses.

Goldman's attempt to understand commonsense mental representations by using an analogy from visual perception seems problematic for several reasons. First, why is the visual mode of processing privileged to provide the basis of the analogy? In general, Goldman's analogies are arbitrary and lack a substantial basis either in phenomenology or in the mode of cognitive processing itself. He tries to justify his emphasis on qualitative properties with the occurrence of qualitative sensations. Qualitative sensations, however, are dominant only in the gustatory modality; for most other modalities, the Weber/Fechner law is descriptively adequate, demonstrating the dominance of quantitative factors.
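
The law invoked here relates perceived sensation magnitude to physical stimulus intensity; the standard psychophysical formulation (supplied for reference, not taken from the commentary itself) is

```latex
S = k \,\ln\!\left(\frac{I}{I_0}\right)
```

where $I$ is the stimulus intensity, $I_0$ the detection threshold, and $k$ a modality-specific constant. The logarithmic dependence of sensation on intensity is what makes the relation a quantitative rather than a qualitative one.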

Second, Goldman stresses the difference between semantic content and the structure of words. To match the two aspects or to match single exemplars he introduces a homunculus, the cognizer. The cognizer is necessary because it is otherwise quite unclear how intentional aspects could be implemented in mental processes, which Goldman models as analogous to pure sensations. He seems to propose that the structural aspects are indeed bottom-up processes, whereas semantics is provided by a top-down process. This leaves a gulf between the data base and the interpretations erected upon it. Although Goldman, like Gopnik, argues for the purely empirical nature of his approach, the data do not by themselves determine an interpretation, but leave a gap to be bridged by inferential processes of various kinds.

In this sense, the intentional description of mental processes is theoretical vis-a-vis the behavioural one. Neither Goldman nor Gopnik answers the question whether there is a shared observational base for the intentional and behavioural descriptions. In explaining physical reality, folk physics and physics share the same data base. This does not always seem to be the case in human psychology, where third-person knowledge may interfere with a person's own understanding of his or her behaviour.

Goldman's attempt to establish a psychology of folk psychology is an attempt to reestablish the principles of introspection and apply them to mentalese. In both Goldman's and Gopnik's approaches, reality is the sum of the facts that are cognitively accessible to humans. Therefore, accessibility through direct observation or inference from observational facts (in most of Gopnik's examples) is absolutely essential. The main progress in understanding our cognitive processes and mental states, however, has been achieved by experimental methodology which made possible analyses independent of the question of accessibility, either at the psychological or at the neuroscientific level. Gopnik's demonstration of the development of the theory of mind in children is interesting because it illustrates the original formation and change of intuitive concepts. This approach has to be complemented, however, by studies of the development of the underlying processes and mechanisms.

ACKNOWLEDGMENTS
Preparation was supported by the Alexander von Humboldt Foundation and the University of Exeter through a Feodor Lynen Fellowship held by W. H. Dittrich.

Recall or regeneration of past mental states: Toward an account in terms of cognitive processes

K. Anders Ericsson
Department of Psychology, Institute of Cognitive Science, University of Colorado at Boulder, Boulder, CO 80309-0344
Electronic mail: [email protected]

[Gol, Gop] In two interesting target articles by Goldman and Gopnik respectively, different theories about our knowledge of mental states are discussed and related to empirical evidence. It is very exciting to note that empirical studies are becoming increasingly important in resolving theoretical issues in the philosophy of mind which until relatively recently were resolved primarily by theoretical analysis and first-person introspection. My commentary will focus on how to go a step further by relating empirical evidence relevant to claims about mental states to current theories of the cognitive processes that produce them. I will first discuss the difference between the controversial introspective reports and other types of verbal reports as well as their relations to cognitive processes. A process account of Goldman's concept of having a belief suggests two cases: In the first, a belief is generated as an explicit thought in attention; in the second, a belief is an implicit inference that could have been made in a given situation. In the first case the belief can be recalled at a later time, whereas in the latter it cannot be recalled directly (as it was never in attention) but can only be recovered by regenerating the earlier situation. This distinction is important to my discussion of different types of processes that might account for Gopnik's empirical evidence on accurate and inaccurate reports about past mental states by children. My commentary concludes with a discussion of Gopnik's argument for direct perception of complex attributes of chess positions by chess experts.

Both authors refer to introspective reports as attributions of, or reports about, mental states, thus implying a separation of the mental state and its reporting agent, with the associated issues of privileged first-person access. Following the rejection of first-person introspections as scientific evidence early in this century, researchers have searched for different ways to obtain verbal reports on cognitive processes. Herbert Simon and I (Ericsson & Simon 1984) examined a wide range of verbal reports on thinking to assess what kinds of valid information ordinary subjects could report. We found two types of verbal reports that were distinct from the rejected introspections: think-aloud reports, in which subjects concurrently verbalize thoughts in attention, and retrospective verbal reports, in which subjects retrospectively recall their sequence of thought for cognitive processes of shorter duration. Unlike introspection, our model does not require the individual to observe and reflect on thoughts and experience. Concurrent verbal reports simply involve verbalizing thoughts as they enter attention as part of the regular thinking process. Retrospective verbal reports involve retrieving past thought sequences and verbalizing them as they enter attention. Retrieving past thought sequences is frequently easy using retrieval cues still available in short-term memory. These two types of verbal reports provide an unobtrusive record of the sequence of thoughts, and their generation is completely consistent with both the known capacity limits of attention and short-term memory and the storage and retrieval characteristics of long-term memory. With brief instructions, adults can give these types of verbal reports, and the thoughts reported have been successfully validated against other types of data such as reaction times, sequences of eye fixations and observed actions, and formal analysis of how the tasks could be successfully performed (task analysis, Ericsson & Simon 1984).

According to a recent review (Ericsson & Crutcher 1991) of introspective research since Aristotle, there has been a clear consensus that thinking consists of a sequence of thoughts. The major difference between introspection and think-aloud or retrospective verbal reports is that introspection goes beyond simply verbalizing thoughts and involves a reflective analysis of thoughts or mental states to obtain further insights about their structure. Often earlier philosophers would analyze their own perceptions, sensations, or thoughts obtained through free association as they were sitting at their desk. In reflecting on the perception of a common object, such as a book, it is clear that an individual has many implicit beliefs about attributes that are not directly visible: For example, the book consists of pages ordered by page numbers and on the pages is printed text, and so on. By extended introspective reflection on this single mental state most of these inferences and beliefs can be generated sequentially and retrieved into attention to be recorded and enumerated. When a subject engaged in some normal activity for a moment sees an object, such as a book, however, few if any of these inferences and beliefs will enter attention with its limited capacity; the vast majority of beliefs will only be accessible and remain at the "fringe," to use William James's metaphor. The verbal-report studies of thinking in the laboratory are concerned with the normal mode of thinking, and thus subjects are given tasks to complete as efficiently as possible.

With reference to Goldman's target article, I want to address the question of what it means to have a belief. Within the Ericsson and Simon framework one can distinguish two cases. In the first, a particular belief has been accessed or generated and is currently in attention as a thought; the belief can either be verbalized concurrently or reported retrospectively at a later date if successfully retrieved. In the second case a subject is thinking about an object such as a book and has beliefs about this object that are not in attention but are "activated" or "primed" in the sense that if desired or asked by an experimenter, the attribute or belief could be immediately accessed from memory. In this case such a belief would not be reported concurrently or retrospectively. If one were asked afterwards, however, one could easily infer associated properties of the object that one could have accessed or thought about on the prior occasion. In fairness to Goldman I have to concede that the above distinction is not normally made and is thus not part of folk psychology. When asked, most college students insist they can see "very clearly" in most of their visual field (≈100°) at a given instant. In fact they can only see clearly enough to read a word in 1-2°. What seems to be happening is that subjects test the clarity of their vision by directing their gaze to various points, unaware that they are changing the visual fixation point. By using appropriate experimental controls and making specific behavioral measurements (RT and eye fixations) we can readily discriminate information that is currently perceived and heeded from information accessed following a change in the fixation point. We can similarly discriminate explicitly heeded knowledge from knowledge that is just readily accessible, and explicitly heeded beliefs from beliefs that can be readily accessed or generated.

This distinction between two different cases of having a belief is relevant to Gopnik's finding of inaccurate and accurate reports of past mental states. In the belief condition, the children were not asked about their beliefs, hence there is no way to know whether or not they actually generated thoughts expressing their beliefs about the relevant attribute of the object. The belief condition is therefore quite different from the other conditions, involving a comparatively long period of active imagination and perception during which relevant information was heeded and thus likely to result in retrievable memory traces. Gopnik argues that the span of introspection extends beyond several minutes, whereas most other reviews of memory retention, especially in regard to past thoughts (Ericsson & Simon 1984), suggest a much faster decay leading to considerable incompleteness of recall. It seems that even adults would have difficulty recalling an isolated thought about their relevant beliefs in this type of experiment.

A more plausible hypothesis is that at the time of questioning the older subjects are able to generate a belief that they had generated or, more likely, would have generated if asked, when they originally perceived the object. Akin to the simulation processes supported by Goldman, subjects have been found to regenerate their beliefs accurately under specified circumstances. There are, however, some interesting circumstances under which regeneration leads to systematically inaccurate verbal reports even with adults. Ericsson and Simon (1984) review many studies showing how subjects' reports of attitudes, interests, and opinions can be influenced by some activity that selectively retrieves information. For example, after reading a text discussing some issue adults are unable to give unbiased reports of their attitudes on that issue prior to the reading; asking easy questions about current politicians increases the reported level of interest in politics. These types of short-term influence do not seem to extend to experts, whose organization of knowledge is better integrated (Anderson & Wright 1988). Most of the research relevant to the complex integration of knowledge involved in generating beliefs, attitudes, and understanding has been done in the area of text comprehension. Comprehension of well-written text proceeds quite smoothly and gives ample evidence for processes yielding products (thoughts) in attention without reportable intermediate steps similar to Gopnik's direct perception of complex attributes in chess. Reading of more difficult texts or minor changes in the normal reading procedure, however, allows mediating thoughts and heeded information during reading to be assessed (Ericsson 1988; Trabasso & Suh, in press).

Gopnik's choice of chess to make her point about direct perception is a questionable one. Research on chess perception supports a more mediated construction of the representation of a given chess position based on studies of reaction time and verbal reports (Charness 1991). International chess masters spend 5-10 minutes before selecting a move in an unfamiliar chess position although they may have intuitions about the best moves within 5-10 seconds. During the subsequent systematic search, they occasionally discover new moves that are superior to the ones provided initially by intuition (de Groot 1978). The ability to systematically analyze and evaluate potential actions assures the accuracy of expert performance. Perception of meaningful chess relations does not appear to be automatic; Lane and Robertson (1979) found no evidence for it when chess experts were asked to count pieces on the chess board. For other types of processing, such as word recognition, all observable indicators (reaction time and verbal reports) support direct perception (Ericsson & Simon 1984). The cultural specificity and, thus, the acquired nature of these processes is well illustrated by comparing words in the native language to foreign words.

In conclusion, the new research on issues in the philosophy of mind has exciting potential and should benefit from closer integration with recent theoretical and empirical work on complex thinking and expert performance.

Goldman has not defeated folk functionalism

James H. Fetzer
Department of Philosophy, University of Minnesota, Duluth, MN 55812
Electronic mail: [email protected]

[Gol] Goldman's attack upon folk functionalism should not be confused with a rejection of scientific functionalism, which he is not inclined to criticize. He does not want to deny that scientists may be able to study mental phenomena from a functionalist perspective. What he wants to deny is that functionalism captures our ordinary beliefs about ourselves. If there is a continuum between them, however, then successful criticisms of one are likely to carry over to the other. My purpose here is to explain why Goldman's critique does not succeed against theories of either kind, thereby defending functionalism within ordinary and scientific contexts.

According to "causal role" functionalism of the type I advocate, the meanings of mental states are properly identified with their causal roles in influencing behavior. Meanings are dispositional properties that may have very diverse manifestations under different conditions. Since our actions are determined by our complete sets of motives, beliefs, ethics, abilities, capabilities and opportunities, the values of all of these variables have to be specified in determining the (possibly probabilistic) strengths of our dispositions toward behavior under various conditions. The theory of meaning thereby presupposes the theory of action (Fetzer 1989; 1990; 1991; 1992).

When someone is aware of the presence of a glass of water in his immediate vicinity, the behavior that person may display under various conditions can thereby be affected. If he is thirsty (and believes that the water is intended for him), he may drink it (though perhaps not if he is on a religious fast, has his arms in a cast, or someone beats him to it). If a mislaid cigarette were to set the sofa on fire (and he wanted to extinguish it), he might empty the water from his glass to put it out (although various interfering conditions can easily be imagined).

On this account, the meaning of a specific belief is fixed by the totality of dispositions for various kinds of behavior that someone with that belief would possess, relative to various (possibly infinite) combinations of those other beliefs, motives, ethics, abilities, capabilities, and opportunities whose presence or absence makes a difference to the behavior that would be displayed under those conditions. The complete meaning of that belief could therefore be formalized (were it possible) by means of a (possibly infinite) set of subjunctive conditionals specifying those outcomes in those contexts.

Goldman advances three reasons why the psychological ascriptions of ordinary persons to others and themselves are not properly envisioned as functionalist. The first is that we may be ignorant of causes and effects, that is, of the causes that bring about various mental states and the effects they bring about. Perhaps we have a headache, he suggests, yet do not know its specific causes. Presumably, however, our self-ascription of a headache is based on symptoms of having a headache, which are among its effects. Otherwise, how would someone know that he had a headache?

Goldman tends to emphasize introspection as an access route to mental states. Introspection thus provides access to internal states, just as perception provides access to external states. But both presumably depend on a causal interaction between the observer and the observed. If our internal states could not interact causally with our introspective abilities, introspection would not occur, just as perception cannot occur if external states do not interact causally with our perceptual abilities. Nothing here supports the conclusion that uncaused information has been transmitted.

Goldman is right that a technically adequate theoretical formulation of functionalism of this kind requires the use of subjunctive conditionals. But he is wrong in suggesting that ignorance of subjunctive properties is a reason to discount the causal role conception. We can act on the basis of our beliefs about what would happen if we were to drink the glass of water (or poured it on the fire, etc.), whether or not we ever realize that we are trading in subjunctive conditionals. It is not a condition of causal role functionalism that everyone must adequately understand it.

Indeed, we can have incomplete or false beliefs about various mental states, just as we can have incomplete and false beliefs about metals and compounds. That does not indicate that those beliefs themselves do not satisfy the causal role conception. Goldman therefore inadvertently obscures the difference between ontic questions about the nature of these states themselves and epistemic questions about our transitory states of knowledge and belief. Our ignorance either of specific causes and effects or of particular subjunctive conditionals does not discredit functionalism.

Goldman's emphasis upon introspection, I think, creates a highly misleading impression of our capacity to understand our situation in the world. He maintains that our ascriptions of psychological predicates to others are based on those we ascribe to ourselves, yet the opposite seems true. Charles S. Peirce (1868) advanced the conjecture that we learn about ourselves in the same way we learn about others, namely, by observation and inference. The reason we may know more about ourselves than we know about others is that we spend considerably more time in our own company.

As a result, our knowledge about ourselves tends to differ from our knowledge of others more in degree than in kind. Knowledge of ourselves is not epistemically prior to knowledge of others and does not necessarily depend upon any faculty of introspection. We are often able to acquire reliable knowledge about the (actual or potential) effects of our own (actual or potential) actions by means of abductive inference, a process of "inference to the best explanation" (Fetzer 1993), from the observed effects of the actions of others. We can acquire knowledge of the folk psychological kind on the basis of ordinary daily experience.

Goldman's combinatorial explosion objection is equally mistaken. He is right in thinking that there are infinitely many possible conditions under which various kinds of behavior might be displayed as a manifestation of the meaning of a mental state. But that does not mean we have to understand them all in order to understand them at all, any more than we have to have added every pair of numbers to understand how to add any pair of numbers. The potential for combinatorial explosion does not defeat the conception of folk psychology as a version of functionalism.

If these considerations are correct, Goldman's negative case against folk functionalism really carries no weight. Scientific functionalism may appropriately be viewed as an extension and refinement of its folk predecessor by assuming a transitional continuum between somewhat vague and incomplete ordinary beliefs and more precise and complete scientific knowledge. Scientific functionalism can be understood as arising from a more systematic and deliberate progression of observation and experimentation, which gradually overcomes our ignorance of the causes and effects and subjunctive conditionals that adequately display the meaning of our mental states.

Competing accounts of belief-task performance

Alvin I. Goldman
Department of Philosophy, University of Arizona, Tucson, AZ 85721
Electronic mail: [email protected]

[Gop] Gopnik claims that the only way to explain performance changes on belief tasks as children develop is to postulate a certain change in their theory of mind. Three-year-olds allegedly have a theory of belief that does not permit the notion of false belief; only when a more sophisticated theory emerges that permits false belief (a representational theory) does performance improve. I shall make three points about this claim. First, evidence strongly suggests that 3-year-olds do grasp the notion of false belief. Second, information-processing factors seem to be implicated in the poor performance by 3-year-olds, so the prospects for an information-processing explanation of the findings are good. Third, the first-person "introspective" account of the representation of belief is perfectly compatible with performance changes, contrary to what Gopnik asserts.

That 3-year-olds and even 2½-year-olds grasp the notion of false belief is supported by Chandler et al.'s (1989) study of the early deployment of deception. Even 2½-year-olds were found to be capable of using a range of deceptive strategies that trade upon an awareness of the possibility of false belief. There is also Bartsch and Wellman's (1989) finding that most 3-year-olds could generate a false-belief explanation of a story character's action. When asked why Jane is looking for her kitten under the piano although the kitten is in fact under a chair, 15 out of 23 3-year-olds generated false-belief explanations such as "She thinks the kitten's under the piano." Even more significant is Mitchell and Lacohee's (1991) recent finding that 3-year-olds perform well on the first-person false-belief task when they are given mnemonic help. After children expressed their false expectation of what was in a box (e.g., candy) they selected a picture of what they thought was inside and "mailed" it. With this manipulation, 20 out of 28 3-year-olds later reported correctly that they had previously thought that candy was in the box, although they now knew it was pencils. Similarly, when Siegal and Beattie (1991) changed the linguistic form of a false-belief question, they found great improvement in 3-year-olds' performance.

Mitchell and Lacohee's results strongly support an information-processing explanation of Gopnik's main findings. The result of the "mailing" condition was almost a complete reversal of the result of the control (nonmailing) group, where only 4 children out of 28 correctly reported their previous false belief. Apparently the overt mailing action strengthened subjects' memories of what they had previously thought, or enabled them to disentangle these memories from their current thoughts about the box. True, 3-year-olds' excellent performance on pretense, imagination, and perception tasks (Gopnik & Slaughter 1991) suggests that they do not have a simple memory problem in the belief case, but they may still suffer an information-processing difficulty in sorting out conflicting drafts of reality as they have conceived of it over a period of time. This is different from saying that they do not grasp the notion of false belief at all.

Finally, it is wrong to assume that only the theory-theory can explain developmental changes in mental ascription performance. Even if 3-year-olds do not fully comprehend the nature of representation, this hypothesis is compatible with their having a first-person phenomenological conception of mental states. Suppose that children (and adults) primarily understand belief as a distinct kind of phenomenological state from which content can be "read off." This may suffice to enable them to apply the belief concept to their own current states, but it does not suffice to enable them to apply it to other people or to their own past states. For these tasks they need additional information or skills that may not be perfected at age 3. For example, when they experience multiple belieflike states that conflict in content, some of which are in fact retrieved from memory, they may have trouble sorting these out and dating them appropriately. This could give rise to the observed deficit that Gopnik reports.

Gopnik supposes that a first-person phenomenological approach to mental concepts is incompatible with growth or change. In section 2 she discusses three accounts of the origin of the idea of intentionality and suggests that only the theory-theory permits change. In fact, the first-person approach coupled with the simulation theory (as an account of third-person ascription) can accommodate performance changes. For example, it can say that 3-year-olds are imperfect simulators of other persons because they allow their own beliefs to creep into their simulations, which leads them to make erroneous predictions of actions in the standard false-belief tasks. By age 4 they are better at "isolating" their own beliefs when they simulate others' thought and action (Goldman 1992b).

Moreover, it still is not clear to me how the theory-theory accommodates the virtual flawlessness of 3-year-olds' performance on their own current mental states. Such flawlessness is almost a methodological assumption of the research in this area. When Gopnik and Slaughter (1991), for example, wish to determine what initial mental states children are in, they ask them questions like "What do you think is inside this box?" or "Which book do you want to read?" or "Are you hungry?" They assume (as we all would) that the children's answers are correct indicators of their mental states. How is this assumption warranted on the nonintrospective, highly inferential picture drawn by the theory-theory? Must we assume that 3-year-olds have perfect knowledge of the laws with the help of which they can accurately infer their current mental states? But 3-year-olds' performance deficits must stem from poor knowledge of laws if the theory-theory is correct, since knowledge of laws, I presume, is the hallmark of the theory-theory. If their knowledge of laws is quite incomplete, however, it is a mystery how they can perform so well on tasks concerning their current mental states.

Theories and qualities

Alison Gopnik
Department of Psychology, University of California at Berkeley, Berkeley, CA 94720
Electronic mail: [email protected]

[Gol] I am completely in sympathy with Goldman's project to develop a psychologically plausible answer to traditional philosophical questions, and very glad to see a philosopher taking empirical issues seriously. I obviously disagree, however, with both his critique of the hypothesis that mental-state terms are theoretical and the positive alternative suggestion that they refer to qualitative aspects of our experiences. The trouble with the critical arguments is that they not only rule out the possibility that mental-state terms are psychologically real theoretical terms, but they rule out the possibility that there are any psychologically real theoretical terms. Consider a perfectly canonical use of a theoretical term: A chemist says of a liquid, "This is H2O." How could a scientist possibly invoke the full conceptual apparatus of theoretical chemistry, the elaborate network of theoretical relations in which this term is embedded, every time he identified a sample?

Clearly a scientist who uses this term is not required to invoke his entire theoretical apparatus on each occasion. In fact, on particular occasions, the scientist may make the identification on the basis of only one or two salient features of the object. Moreover, these salient features may not even be criterial for the correct theoretical application of the term. (Nothing in the theoretical definition of H2O specifies that it comes out of taps; nevertheless, a scientist might immediately identify the stuff coming out of a tap as H2O, without testing or even considering all the theoretical relations that term enters into.)

We might say, however, following Goldman, that this is merely pushing the question back to the learning stage. How could the chemist ever have learned the complex network of relations in which the term is embedded? As a student, for example, surely he must have been able to know what H2O was in the first place in order to understand the theoretical relations it implies? This is a good question. In fact, determining the nature of theory-formation and change in particular, and of conceptual change in general, is about as hard a problem as cognitive science faces. However, it is hardly a reason for thinking that theory construction never occurs. No matter how the chemist and the child do it, it is undeniable that they do do it. Human cognitive systems have the capacity for swift and often unconscious conceptual change, however mysterious or inconvenient that capacity might seem (unless, of course, like Jerry Fodor [1975] you think the concept of H2O is innate). The question of how it is done puzzles developmental psychologists, but no more than it puzzles philosophers of science.

To turn to Goldman's positive proposals, I do not see any a priori reasons why we could not have phenomenological mental-state terms of the sort Goldman describes. Perhaps we are so constructed that whenever we are in a particular physiological state a particular qualitatively distinct experience arises. I completely agree with Goldman that whether mental-state terms refer to such experiences is an empirical question. How could we tell?

The possible effects of expertise suggest that we could not tell simply by introspection. As I have argued in the target article, the fact that we had a similar experience whenever we ascribed a mental state to ourselves might be a consequence, rather than a cause, of such ascriptions. Similarly, the fact that the diagnostician had a similar experience of all cancer patients would not indicate that "cancer" was a phenomenological term.

One test for whether a term is theoretical or phenomenological might be whether the ascription could be made even when the qualitative bases were absent or contradictory. Another related empirical test is whether the ascription is, to use Pylyshyn's (1980) phrase, "cognitively penetrable." Do changes in our beliefs about the theoretical relations into which the term can enter change our ascriptions of the term itself? If mental-state terms are theoretical, we ought to expect (a) that they can be ascribed even when the appropriate qualitative experience contradicts them and (b) that the accumulation of evidence about the functional relations of behaviors and experiences does affect their use.

We have empirical reason to believe that this is true of many, perhaps most, mental-state ascriptions. In our experiments, for example, children ascribe mental states to themselves (within short-term memory on at least some definitions) even when their experience contradicts those ascriptions. Moreover, as their beliefs about the relational properties of mental states change, particularly their beliefs about the relations between mental states and the world, their self-ascriptions also change.

My target article was confined to considerations of intentional content. But, interestingly enough, the more you look, the harder it is to find mental states, including sensations, that are plausibly identified solely by their qualitative properties. Pain is a particularly interesting case because, while it has all the phenomenological vividness and immediacy we could wish for and is indeed the sensation term par excellence in philosophy, psychological evidence suggests that it is cognitively penetrable to an enormous degree. To take a simple yet dramatic first-person example (though one little considered in philosophy for obvious reasons), the qualitative sensation associated with uterine contractions in miscarriage and labor, presumably states with identical physiological bases, is radically different.

Goldman wonders whether my commitment to the existence of genuine psychological experience means that I am not really a representational functionalist. Surely it is possible to be a functionalist without being a behaviorist, without even being an enemy of qualia. It seems to me that experience plays a role in mental-state ascription in two ways. First, phenomenology may feature in the very functional definition of a term; part of its functional role may be its relation to particular phenomenological states. Second, the consequence of developing a theory of mind may be a psychological experience of a particular sort.


Simply as a matter of empirical fact, however, I do not believe that many mental-state term ascriptions (or, for that matter, ascriptions of almost any other "folk" terms) will turn out to be based solely on qualitative features of experience. I am a representational functionalist not out of methodological, or ontological, or philosophical conviction, but simply as a matter of empirical belief. In One-upmanship Stephen Potter suggests that a sure route to critical fame is to take the property for which an author is best known and attack him for not having enough of it ("the almost open sadism of Charles Lamb" and "the absence, in D. H. Lawrence, of the sexual element" are examples). I think Al Goldman should be even more empirical.

Self-ascription of belief and desire

Robert M. Gordon
Department of Philosophy, University of Missouri-St. Louis, St. Louis, MO 63121
Electronic mail: [email protected]

[Gol, Gop] Gopnik's concern is with the self-ascription of intentional states, particularly beliefs and desires. Goldman's discussion extends to the self-ascription of mental states in general. Although he acknowledges that his arguments are on firmer ground where sensation terms are concerned, Goldman also develops an introspectionist account of the propositional attitudes, covering both self-ascriptions of attitude type (that it is an attitude, e.g., of belief or desire) and self-ascriptions of content (what is believed or desired). The only head-on confrontation with Gopnik appears to be over the question "What is the primary source of self-ascriptions of propositional attitudes, particularly belief and desire?" Gopnik answers, "Theory," and Goldman answers, "Introspection." I will focus on this issue and argue that the correct answer is "Neither of the above."

Concerning a side issue, whether the development of an understanding of intentionality demands "fundamental changes in a theory" or (as the "simulation" theory maintains) "increases in flexibility of simulation," I of course side with Goldman. But unlike both authors (and a number of philosophers and nearly all psychologists who have written on the topic), I find no reason at all to tie the simulation theory to an introspectionist account of the self-ascription of propositional attitudes. My own version, at least, is anti-introspectionist (Gordon 1992c).

Both authors appear to assume that if self-ascription of beliefs and desires is to be generally reliable, it must involve a decision procedure that gives us some justification or basis for thinking we are in the state ascribed. Either we infer largely on the basis of theoretical knowledge that we presently have a certain belief or desire, as Gopnik would have it; or, as Goldman suggests, we just decide on the basis of direct, noninferential, introspective access, recognizing our own beliefs and desires by their (alleged) phenomenological or qualitative marks.

Goldman briefly considers an alternative view that does not involve a justificational decision procedure. This is the "classical" functionalist account, which holds that: "Classification of a present state does not involve the comparison of present information with anything stored in long-term memory. Just being in a mental state automatically triggers a classification of yourself as being in that state" (sect. 6, para. 5).

But Goldman dismisses this "nonrecognitional" account as useless for cognitive science because it tells us nothing about the process by which the system makes these "automatic" classifications. Because cognitive science must account for the reliability of the process, it would have to explain how the system "knows" what classification to make.

I will now briefly sketch a nonrecognitional account that readily illustrates the reliability of the process. We commonly train children to preface nouns like "banana" and "chocolate milk" with "[I] want . . ." under the appropriate circumstances: For example, when they look longingly at a banana, or whatever. Then we treat their utterance, "[I] want banana," as a request or demand for a (or the) banana. Whatever may be the case for adults, young children, at least - 2-year-olds, say - request or demand a φ only when they actually want a φ. So well-trained children will say, "[I] want banana," only when they want a (or the) banana. (Indeed, young children probably say this virtually every time they want a banana, given the right audience: They have not yet learned to show restraint in the face of conflicting desires.)

Such training obviously will not give children mastery of the concept of wanting or desiring and it will not even teach them that "I" refers to the speaker. So it is not sufficient for training children to make genuine ascriptions to themselves. It does, however, give them a remarkable head start on self-ascription. Given the right audience (and perhaps within certain other constraints), they are already using the linguistic form of a self-ascription of desire when and only when they have the corresponding desire. They have already got the reliability: Now all they need are the concepts! If this account is right, then reliability, at least in the self-ascription of particular desires, does not demand that we or our "system" somehow know when we are in the state ascribed. It requires neither theoretical knowledge nor introspective access.

I turn from desire to belief. Here is a question I invite you to take a moment to answer before reading further: "Do you believe the planet Neptune has rings?"

I doubt that in answering the question you examined your recent behavior, or indeed anything else, in the light of a theory. And I doubt that you introspected, searching for a telltale feeling or other experiential "mark" of belief. You probably reinterpreted the question as, "Does Neptune have rings?" Then, if you gave or were about to give an affirmative answer to that question, you thought "Yes, I do believe that." If you were not ready to give an affirmative answer, then you thought "No, I don't believe this," or perhaps, "I don't believe one way or the other."1 (I am not suggesting that the result of this procedure could not be overruled, say, by an assessment based on behavior; in a fuller account I will explain why it can sometimes be overruled.)

Because this procedure yields results that are generally taken to be reliable, it would even be possible to take children - say, 2-year-olds - who can make just a few simple assertions like "It's raining," and easily train them to sound very sophisticated. Just get them to preface their assertions, optionally, with "I believe." They will then say, for example, "I believe it's raining." What anomalous precocity! Children barely able to express simple first-order beliefs, and yet here they are expressing metabeliefs! Not only that, remarkably reliable metabeliefs! In fact, the children will not have learned that they believe it is raining, or even that they have beliefs. As far as they are concerned, they are just parroting a formula before saying what they really mean to say, namely, that it is raining.2 Again, the point is that reliability, in this case in the self-ascription of beliefs, does not need theoretical knowledge, as Gopnik thinks, or introspective access, as Goldman suggests.

Merely to train a child to preface assertions with the formula "I believe" fails to distinguish self-ascription of belief from assertion of fact. The child would not learn that the belief that p, unlike the fact that p, may be false, or at variance with fact; nor, for that matter, that someone may fail to believe that p even if p. The child would not have gained the conceptual competence that appears necessary for success in the "false-belief" tasks that have been extensively investigated in the past decade.

To gain this competence, in my view - and here I am asserting, not arguing - the child would need the "increases in flexibility of simulation" mentioned by Goldman. The great leap forward from the 3-year-old's inflexibly factive mode of explaining and predicting behavior to the 4-year-old's flexible mode requires that the child bring to these tasks the capacity to "picture" or "represent" things differently from the way they are - to modify the world in imagination, adding, omitting, or altering details - a capacity that children call on regularly in their pretend play, and particularly in role play. Children who remain locked into their own "point of view," who can only think "This is the way things are," without simultaneously thinking "And this is the way things are to my friend Sally," or "from Sally's point of view," actually remain in the inflexibly factive 3-year-old mode of explaining and predicting behavior, even if they theorize with the sophistication of a modern cognitive scientist that human beings, themselves included, are mechanisms guided by encoded representations of the world.

NOTES
1. As Gareth Evans wrote: "I get myself in position to answer the question whether I believe that p by putting into operation whatever procedure I have for answering the question whether p" (Evans 1982, p. 225). This is similar to the "answer check procedure" described in Wimmer et al. (1988b): If asked whether you know the answer to a question q (for example, about the location of something), simply check to see whether you have an answer to q (see also Gordon 1992a; 1992b).

2. Although it is common practice to train children to use the form of a self-ascription to "express a desire for something," it is neither a common nor a useful practice to train children to use the form "I believe that p" as a way of "expressing a belief," that is, as a way of asserting that p. (Adults often use the linguistic form of a self-ascription of belief as a way of making an assertion, although they typically do so in order to qualify the assertion, that is, to add a Gricean "implicature.")

On behalf of phenomenological parity for the attitudes

Keith Gunderson
Department of Philosophy and the Minnesota Center for Philosophy of Science, University of Minnesota, Minneapolis, MN 55455

[Gol] Alvin Goldman's conjecture that not only sensations (percepts and somatic feelings) but beliefs and thoughts, and so forth, may be hosts to qualia strikes me as altogether plausible and important. It seems important for two reasons: (1) The beleaguered notion of qualia (even with respect to sensations) would have its purported perimeters within the philosophy of mind extended (for most) and might freshen what has become a rather stale debate about its conceptual integrity; (2) Goldman's way of endorsing phenomenological egalitarianism across mental aspects, rather than sidestepping issues of consciousness, marks an attempt to welcome them into the purview of research strategies in cognitive science.

Goldman cites Jackendoff's friendly-to-qualia discussion of "tip-of-the-tongue" phenomena (Jackendoff 1987), where one seems aware of a conceptual structure in the absence of the phonological means for expressing it. Goldman's claim, consonant with Jackendoff's position, that in such cases "entertaining the conceptual unit has a phenomenology, just not a sensory phenomenology," seems to me correct. But the significance of these rather exquisite cases, I would assume, lies in the fact that they suggest a more pervasive phenomenological presence across our attitudinal and linguistic experiences. Support for such a suggestion can be found, I think, by examining commonplace speaker/hearer and writer/reader asymmetries with respect to disambiguation. (Paul Ziff first called my attention to these many years ago, claiming that they greatly complicated the task of devising algorithms that capture anaphoric or cataphoric information transfer across sentence boundaries.) Typically, because what I say is a function of what I mean to say, and what I mean to say is also, typically, something I am as fully conscious of as limb position or whatever occupies my visual field, I do not need to disambiguate my own remarks for myself in the manner that others may need to disambiguate them.

Consider the simplest sort of case: If I say to someone "I'm off to the bank," I need not, indeed cannot, query of my remark: "Going fishing? or making a deposit?" Another may need to ask, but I virtually always already know. As the encoder of what I say, what I mean (intend, want) to say monitors or determines what I say. Here too, like the tip-of-the-tongue cases, one is "aware of conceptual structure," but unlike those cases, the phonological means for expressing it is present, even though in some cases the means chosen turn out to be ambiguous for others. Sartre (1948, p. 1060), an unwitting ally in this context, may be construed as calling attention to a similar awareness present in writing, which he says "involves an implicit quasi-reading which makes real reading impossible. When the words form under his pen, the author doubtless sees them, but he does not see them as the reader does, since he knows them before writing them down." (Note: There is also, of course, the "tip-of-the-pen" phenomenon.) Now this not-(generally)-needing-to-disambiguate-for-oneself-one's-own-spoken-or-written-words can be likened, I think, to one's having a point of view with respect to the beliefs, desires, thoughts, or intentions that give rise to their expressions. And it is the firm sense of having this point of view that cancels out the possibility of self-disambiguation where one has said what one wished to say. There is, in other words, something it is like to have meant one thing ("going fishing"), rather than another ("off to make a deposit"), by what one has said. How could there not be?

Of course there can be exceptions - cases of mental impairment: aphasia, amnesia, severe short-term memory deficits, and so forth - where one loses touch, as it were, with what one is saying, and even, perhaps, with what one means or wishes to say, so that the aforementioned asymmetries - what I have elsewhere inelegantly called "Ziff-Sartre disambiguation asymmetries" (Gunderson 1990) - break down. But these are the sorts of exceptions that only prove the rule. Consider the case where a long passage of time is implicated in the breakdown. For example, you come across your entry in a diary written fifty years ago: "And so I went off to the bank." Due to time and fading memory you have fallen from the privileged (though not infallible) perch of not needing to disambiguate your entry by virtue of being the perpetrator of it. Here we can explain what is going on as we seek to understand anew our own inscription as being an attempt to recapture, if not relive momentarily, what it had been like to author it initially.

To supplement the foregoing, recall John Searle's (1980) famous Chinese Room argument. Searle's perspective, or point of view with respect to his constructions in Chinese, engenders no Ziff/Sartre disambiguation asymmetries, for he knows no Chinese. He has no privileged access to their meanings, for he has no access to their meanings whatsoever. There is a sense in which they are his, that is, he constructs the inscriptional strings. But they are not, following Sartre, part of Searle's subjectivity. He knows no Chinese, so how could they be examples of what he meant to say, and phrased in the way he wanted or was dissatisfied with, and so on? What he is conscious of directly - his own thoughts or intentions - neither monitor nor get expressed in his Chinese inscriptions. Only the Chinese recipients of his responses know how to read them and whether they are ambiguous and so on. If we imagine other English-speaking noncomprehenders of Chinese also receiving Searle's concatenations, they are in the same position vis-a-vis a comprehension of them as is Searle. Those squiggle-squoggles are, indeed, just a thing or object for Searle. They do not, in any way, shape, or form, embody his thoughts or beliefs; he has no awareness of what they are about - what they mean. They do not involve him in a Sartrean "quasi-reading which makes real reading impossible." Were he to learn Chinese and then read them, he would really read them, recognizing nothing of himself - his beliefs or intentions - in their content.

In summary, an inspection of examples of homely disambiguation asymmetries and their phenomenological flavor - more detailed than the above - might provide some supportive data and color for Goldman's extension of the notion of qualia to the intentional in general, and his, to my mind, convincing, though sketchy, permutation of Jackson's (1982; 1986) argument.

Know my own mind? I should be so lucky!

Jennifer M. Gurd and John C. Marshall
Neuropsychology Unit, University Department of Clinical Neurology, The Radcliffe Infirmary, Oxford OX2 6HE, England

[Gol, Gop] You can always recognize philosophers: They are the people who invariably know what our grandmothers think without ever asking them. Nonetheless, as it happens, we entirely agree with Goldman about Granny. The small sample of grannies we have consulted are all unreconstructed Cartesians. They believe that the body and the mind are two different "things," although they are unwilling to call the mind a "substance." And they are firmly convinced of "two-way interactionism" ("Of course it hurts if you put your hand in the fire"; "You'll never get well unless you have a more positive attitude"). As to whether one could exist without the other, there was some divergence of opinion.

Some grannies argued that bodies could exist without minds ("Just look in the morgue"); others argued that minds (usually confused with "souls") could go to heaven (or hell) after cremation. Yet others thought it an open question.

All grannies (in our sample) believed they were conscious (at least part of the time), and all were adamant about the existence and indeed the incorrigibility of qualia ("If I say it looks green to me, young man, then it looks green to me"). No granny had any problem (in principle) with introspection, but a few thought that if you overindulged you could go blind or morbid ("sicklied o'er with the pale cast of thought," as one granny put it).

Now, all of our grannies are (fairly) "ordinary people" (although they did object to being called "common" folk). They accordingly come within the domain of Goldman's analysis, an analysis which appears to show that Granny is right. Or better, that everyone else is wrong.

But then along comes Gopnik to study these matters empirically, although not in grannies. Gopnik's domain is 3- to 4-year-olds, creatures that she links to grannies (and Goldman's target article) by pointing out (no doubt correctly) that all grannies were once 3-year-olds.

According to Granny, "first-person knowledge comes directly from experience, but third-person knowledge involves inference" (Gopnik's abstract). Now, unless you believe in telepathy, everyone is agreed about the third-person case, but Gopnik claims to have experimental evidence (from developmental changes between 3 and 4 years of age) that first-person knowledge is similarly inferential, not direct.

There are three points to clear out of the way before we discuss the validity of the evidence. At first blush, Gopnik's hypothesis sounds ridiculous: If we stick a pin in our 4-year-old, surely he does not infer that it hurts? Happily, there is no controversy here: Some "things" are directly experienced in the "stream of consciousness" (and pain presumably is one of them). Gopnik's claim is restricted to only some kinds of beliefs about some of one's own mental states.

Second, it is logically possible that Granny is right and that 3-year-olds are different. It is conceivable that at the age of 4 (or thereabouts) an "introspectoscope" could mature in the mind/brain. If this were so, 3-year-olds would indeed not know some of their own mental states, whereas Granny and the 4-year-old could view their own states directly. We grant, with Gopnik (note 8), that Leslie (1987) has not yet postulated an introspectoscopic module that matures at 4 years, but that in itself is not evidence against the notion.

Third, once one moves beyond phenomenologically simple qualia ("I am in pain") and beliefs ("I hate opera/peanut butter"), Gopnik could well be right: It is not that clear how well adults (including grannies) know their own minds.

Consider the common state of mind called "I want to go to Venice for a long weekend." Can Granny be truly sure that she has a clear and distinct notion of being in this state of mind? ("Can I afford it? Would it be easier to go to Paris? Will there be too many people in Venice? . . .") Perhaps it is only when she is actually sitting in Florian's that Granny infers that this had indubitably been her state of mind.

Or, consider the following common exchange:

A: "Why are you so irritable?"
B: (snaps) "I'm not irritable, or at least I wasn't until you said I was!"

These arguments always end in each partner being totally convinced that he has been wronged and that the other one is psychotic. Although B is not (necessarily) "lying," his "direct access" will cut no ice with A. We mention all this simply to flag the possibility that children may be just like everyone else (only more so).

And now the experiments. All the tasks that produce bizarre results from 3-year-olds seem to have analogues that cause difficulties for adults. A "Hollywood rock" is indeed a prop, not a rock (for us), but how much of a "Potemkin village" do you have to construct before it becomes a village? Even Granny must learn that a (normal) blue filter is not an object that stains cardboard blue the second time you place it over the card.

Like Gopnik's 3-year-olds, we do not know where many of our own beliefs came from (and we often cannot even imagine how other people's peculiar beliefs arise). But as Freud pointed out, (very) young children believe that adults are omniscient and omnipotent. If this were so, the notion of attending to the adult's conditions of perceptual access to information would not occur to the child. For many of the experiments that concern inference from other people's access to knowledge, it may accordingly be helpful to use the "toy panda" control (Donaldson 1978). The young child might not so readily credit panda with such almighty powers.

In some respects, however, the 3-year-olds who "have difficulty understanding the notion of subjective probability" seem very adult already. Just try telling Salman Rushdie that all adults "appreciate that belief could admit of degrees." And similarly with respect to "judgments of value": Just as a 3-year-old cannot conceive of someone who does not love ice cream, so one of us still cannot believe that the other really does not like La Traviata.

Some of the "belief change" results are more difficult to interpret as "adultlike." We would, however, give rather more weight than Gopnik does to the rule that "Nobody likes to admit they were wrong (or worse, fooled)." Many of these tasks do involve admitting that you were fooled, and not merely "changing your mind" (as Gopnik phrases her answer to the "embarrassment" hypothesis). How many psychologists do you know who will admit that their theory was wrong after being presented with (someone else's) disconfirming data?

Even the results that appear to show that 3-year-olds fail to "remember" their own prior intentions have a strong "social" component. If we were told that our drawing of a ball looked like a big red apple, we too might interpret that as criticism and retort defensively, "Well, that's what I always intended to draw."

The "source of belief" experiments that have a memory component also have quite specific "demand characteristics." Much of the time, adults do not need to recall source information (and accordingly encode it weakly if at all). How often do you remember a particular experimental result and then have to spend hours trying to recall (or deduce) which journal you read it in? Very young children seem to remember sources when they need to:

Daddy: "Don't paint on that!"
Child: "Mummy said I could!!"

Is there not some language where it is part of the obligatory morphology to mark "I saw it with my own eyes" versus "I have it on reliable evidence" versus "It's just hearsay"? We would reference the source (if we could remember it).

We do, of course, take Gopnik's point that all this "explaining away" is conducted on a case-by-case basis. But it is not entirely ad hoc for all that: The explanations we have invoked all conform to well-established psychological principles.

And we do find it difficult to see why some 3-year-olds will not admit they were hungry before the snack. It is not that we will not change our minds under any circumstances; we just want to be fully convinced that Granny was not right about direct access in the first place (for some things some of the time).

Until that day, we will paraphrase a well-known exchange between Ernest Hemingway and Scott Fitzgerald:

Fitzgerald: Let me tell you about the very young. They are different from you and me.
Hemingway: Yes, they are much younger.

And we will not give Granny tutorials in philosophy lest she become as confused as Fodor's Auntie (Fodor 1985). The world is sometimes a buzzing, blooming confusion for all of us (3-year-olds, aunties, and grannies included). When it is, we all try to infer the best theory of what is going on.

First-person current

Paul L. Harris
Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, England
Electronic mail: [email protected]

[Gol, Gop] I applaud the efforts of Gopnik and Goldman to bring psychological evidence to bear on debate about the status and origins of folk psychology. However, I favour Goldman's interpretation. My doubts about Gopnik's claims center on three issues: 3-year-olds' success in reporting their current psychological states; the obstacle that they encounter in reporting false beliefs; and the reasons for 4- and 5-year-olds' superior reporting. I consider these three points and also the implications of Goldman's claims for research on children's reports of mental states.

Reports on current psychological states. Gopnik points out that 3-year-olds are poor at reporting selected past psychological states, notably their prior beliefs and desires. She omits to mention, however, that 3-year-olds are quite accurate in reporting their current beliefs and desires. Gopnik and Slaughter (1991) asked a series of three questions, two about current mental states and one about a past mental state. They induced a mental state, asked children to report it, changed that state, and asked children to report the new state; finally, they asked children to report their initial state once more. For example, in the case of belief, children were shown a familiar crayon box, asked what they thought was inside, then shown its actual contents (candles), and asked to say what they now thought was inside; finally, they were asked to say what they had thought initially. On this belief task, and a parallel desire task, all 3-year-olds were accurate on the two questions about current states. Inaccurate replies were obtained only for the question about prior states (approximately two-thirds reiterated their current belief; and approximately two-fifths, their current desire).

Gopnik would presumably argue that children can give reports of their current mental states without any appreciation of intentionality. Is that plausible? Conceivably, children identify their own current belief but do not recognise it for what it is. They might assume that whenever they are asked to report on their current psychological state, they are being asked to provide a reading of the world as it is. This cannot be right because when children are asked about what they are currently pretending, they do not offer a reading of the world as it is. They answer appropriately. Still, one might refine the defence further by saying that when children are asked questions about thoughts they focus on the way the world is. Wellman and Estes (1986), however, have shown that when 3-year-olds answer questions about thinking they do not regard thinking as a transparent representation of reality. For example, they realize that a boy who is thinking about a dog may not see or touch the dog.

A final defence might be that children take questions about "thinking that" to call for a direct reading of reality rather than a report on their mental state, even though they do not make the same error for "thinking of" or "thinking about." Yet this is tantamount to an admission that children recognise the intentional inexistence of thoughts. They recognise that thoughts are directed toward particular targets, and these targets may not correspond to the world as it is. Admittedly, what children do not appreciate with this model of thinking is the way beliefs misrepresent reality, but that does not show that 3-year-olds have no conception of intentionality. It shows simply that the conception of intentionality that they have does not include the notion of misrepresentation.

At this point, Gopnik might circumscribe her claim dramatically and acknowledge that children do have a concept of intentionality but do not realize its full application. Then, however, the theoretical revolution she postulates sheds much of its radicalism. In any case, as I argue below, it is doubtful that the change observed between 3 and 5 years is a theoretical one.

Why do 3-year-olds fail to report false beliefs accurately? Gopnik argues that 3-year-olds misdiagnose false beliefs (of self or other) because they do not conceive of beliefs as being mediated by a representational component. An alternative view is that children use a simulation process, but inaccurately. They imagine the situation that faced self (or other) and ask what conclusions someone would come to about that situation. This simulation will go wrong if current first-hand experience of the situation (unavailable to the other or to the self at the earlier point in time) is not temporarily overwritten (Harris 1991; 1992). Thus, the two views differ with respect to the source of the false-belief error: the failure to overwrite current knowledge according to the simulation account; the absence of an adequate conception of intentionality according to Gopnik.

Gopnik acknowledges (note 4) that 3-year-olds sometimes avoid the standard error but she argues that such findings are consistent with the theory-theory: Faced with counterevidence to their nonintentional theory, children show signs of theory revision. This proposal ignores two difficulties, however. First, counterevidence may contradict an existing theory but it does not provide any clue about the form a more adequate theory should take (Bryant 1982). More generally, Gopnik is silent about how children come to postulate a representational theory of mind. Second, 3-year-olds do not succeed only when confronted by so-called counterevidence. Zaitchik (1991) has shown that when 3-year-olds are not given first-hand perceptual experience of reality (e.g., they are not shown the transfer of an object from one box to another but merely told about the transfer) there is a sharp improvement in their ability to predict where another person (who knows nothing about the transfer) will think the object is located. This finding is hard to reconcile with the theory-theory: Telling rather than showing does not appear to offer any obvious counterevidence to an allegedly nonintentional theory. The finding does fit the simulation account. When first-hand experience of the transfer is attenuated by the substitution of verbal input for perceptual input, it is easier to imagine the now contrary-to-fact situation (the object in its initial location) without intrusion from recent conflicting perceptual input, namely, the visible transfer of the object to a new location.

The shift between 3 and 5 years: Alternatives to theory change. The difficulties exhibited by 3-year-olds are widespread but they reflect inadequate simulation and not a fundamental conceptual deficit. What do the 4- and 5-year-olds acquire, then? Gopnik's answer (along with that of several others) is that they acquire a representational theory of mind: In particular, they grasp that beliefs are mediated by mental representations.

This claim has provoked numerous efforts to show that even 3-year-olds hold such a theory (cf. Gopnik's note 4). I think such efforts are misguided because solving the false-belief task - whether at 3 or 5 years - is poor evidence for a representational theory of mind. Success on such tasks does show children's appreciation that the intentional object of a belief can be a contrary-to-fact situation - one that did obtain or appeared to obtain - rather than the real situation. Whether they think of that situation as being represented in the mind is unclear from this evidence.

It does seem likely from other evidence that children think of the mind (or head or brain) as a container in which nonreal situations may be pictured. That metaphor, however, is something they adopt independently of the 3 to 5 shift. The best evidence for its use is provided by Estes et al. (1989). When 3-year-olds were asked to visualize a nonexistent entity, they readily understood the novel phrase "make a picture in your head" and resorted to a container metaphor in their justifications. Does this then mean that a representational theory of mind is present at 3 years? Probably not. Rather, it suggests that children adopt such talk as a way of capturing their phenomenological experience: Imagining is like "looking at a picture in the mind." As Estes et al. (1989, p. 84) acknowledge, this type of evidence argues against "the claim that there is little basis in untutored experience to inspire introspective reports."

In sum, there is evidence that 3-year-olds have an intentional conception of mind and can provide accurate reports (including introspective reports) on their psychological experience. The age change in reporting false beliefs reflects an improvement in simulation rather than a theoretical insight. Finally, even 3-year-olds talk about the mind as a representational device, as shown by the way they report on their own visual imagery.

Goldman's claim and acquisition data. Goldman effectively distinguishes among three types of psychological state: (1) current states of the self; (2) noncurrent states of the self; and (3) states of the other (current and noncurrent). Phenomenological experience is available for (1) but not for (2) or (3). His account naturally leads to the prediction that attributions of type (1) will be different from, and probably more accurate than, attributions of types (2) and (3), and that parallel inaccuracies should arise for types (2) and (3). Gopnik rejects, or at least minimizes, the role of phenomenological experience. She asserts that the accuracy profile across various psychological states is very similar for (2) and (3), and offers no grounds for expecting (1) to exhibit a different profile.

Developmental data offer strong support for Goldman rather than Gopnik. In three separate experiments involving normal 3- and 4-year-olds (Gopnik & Slaughter 1991), normal 3- to 5-year-olds, and older autistic and mentally handicapped children (Baron-Cohen 1991a), the pattern is quite consistent: All groups are more accurate overall for type (1) than for type (2), with the pattern of error on type (2) being similar to that typically observed on type (3).

The emergence of this stable pattern offers an agenda for future research on natural language acquisition. Studies of the acquisition of mental terms (e.g., Bretherton & Beeghly 1982; Shatz et al. 1983) have examined when references to psychological states are used for both current and noncurrent states, and for both self and other. Although there is some evidence for an expansion of the temporal envelope (i.e., from current to noncurrent), evidence for any priority of self relative to other is mixed. The trichotomy mentioned above suggests a more theoretically driven approach. If we divide all references according to two criteria (Is it a reference to self or not? Is it a reference to a current state or not?), then references attracting a double yes will emerge as a special category. Analysis based on either criterion considered in isolation (the practice, hitherto) will yield less orderly data.
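Harris's two-criterion coding scheme can be sketched as a simple tagging procedure. The sketch below is an editorial illustration, not part of the original commentary: the field names and example codings are invented, and real coding of child speech would require hand annotation of transcripts.

```python
# A minimal sketch (assumptions flagged above) of the two-criterion scheme:
# classify each mental-state reference as (self vs. other) x (current vs.
# noncurrent); the "double yes" cell - self AND current - is the special
# category Harris predicts will behave differently.
from collections import Counter

def classify(reference):
    """Map one coded reference to one of the four cells."""
    self_tag = "self" if reference["about_self"] else "other"
    time_tag = "current" if reference["current_state"] else "noncurrent"
    return (self_tag, time_tag)

# Invented example codings, for illustration only.
references = [
    {"utterance": "I want juice",         "about_self": True,  "current_state": True},
    {"utterance": "I wanted the ball",    "about_self": True,  "current_state": False},
    {"utterance": "She thinks it's here", "about_self": False, "current_state": True},
]

counts = Counter(classify(r) for r in references)
print(counts)  # cell counts; ("self", "current") is the predicted special category
```

Crossing the two criteria, rather than analysing each in isolation, is the whole point of the proposal: the four cell counts can then be compared directly.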

Unraveling introspection

John Heil
Department of Philosophy, Davidson College, Davidson, NC 28036
Electronic mail: [email protected]

[Gol] One of Goldman's targets is "representational functionalism" (RF), a version of functionalism according to which (1) the content of every commonsense ("folk") mental-state concept "is given by its unique set of relations, or functional role, and nothing else" (sect. 3, para. 3); and (2) categorization involves the matching of a "category representation" (CR) with a suitable "instance representation" (IR) (sect. 5, para. 9). According to Goldman, RF lacks psychological plausibility. I agree. Goldman rejects (1), but is inclined to accept (2). I reject both (1) and (2).

Goldman thinks it unlikely that we could discover the contents of the mental-state concepts, CRs, we deploy merely by introspecting. Rather, "we need cognitive science to discover what those contents are" (sect. 1, para. 2). "A CR might take any of several forms" (sect. 2, para. 5). For example, it might be set-theoretical in character; it might consist of a representation of a prototype or exemplar; or it might have the form of "a connectionist network with a certain vector of connection weights" (sect. 2, para. 5). There are, Goldman contends, three reasons to think that neither CRs nor IRs could be what RF says they must be. (1) Self-ascribers are often ignorant of the causes and effects of particular states. "There are situations . . . in which the self-ascription of headache occurs in the absence of any information (or beliefs) about relevant causes or effects" (sect. 3, para. 6). (2) Self-ascribers are often ignorant of subjunctive properties: "Suppose the subject does not believe that a glass of water is within arm's reach. How is he supposed to tell whether his current state would have produced an extending of his arm if this belief were present?" (sect. 3, para. 7). (3) RF threatens "combinatorial explosion." "To identify a state as an instance of thirst, for example, one might need to identify one of its effects as a desire to drink. Identifying a particular effect as a desire to drink, however, requires one to identify its relata, many of which would also be internal states whose identities are a matter of their relata; and so on. Complexity ramifies very quickly" (sect. 3, para. 8).

These problems highlight the second component of RF, the notion that categorization is a matter of matching a state (or an IR) to a stored CR. Goldman takes this to mean that self-ascribers "determine" or "decide" what they are thinking or feeling by comparing that thought or feeling (or an IR of that thought or feeling) to a stored CR. If these representations have the structure proponents of RF take them to have, then the matching task will be difficult indeed. (Note: A criticism of this form would apparently apply with equal force to connectionist-inspired CRs.) In ascribing a headache or a belief that p to myself, I would need (in some sense) to decide that I am in a state with endless relational properties and then match this state against an equally complex CR harbored in long-term memory.

Goldman grants that "classical functionalism" (CF) sidesteps this difficulty. "According to the classical account, . . . it is part of the functional specification of a mental state (e.g., pain) that it gives rise to a belief that one is in that state" (sect. 6, para. 2).

BEHAVIORAL AND BRAIN SCIENCES (1993) 16:1 49


Commentary/Gopnik/Goldman: Knowing our minds

CF, however, is a nonstarter: "A crucial feature of classical functionalism is that it offers no story at all about how a person decides what mental state he is in" (sect. 6, para. 5). CF leaves categorization "a complete mystery. . . . There must be some way a system decides to say (or believe) that it is now in a thirst state rather than a hunger state, that it is hoping for a rainy day rather than expecting a rainy day" (sect. 6, para. 6).

I am skeptical that a rejection of Goldman's model of categorization has the disastrous results he thinks it does. Kahneman and Miller (1986), for example, sketch an alternative conception according to which CRs are constructed ad hoc. Such a model is, on the face of it, compatible with CF. I shall not argue that here, however; I will instead focus on two general points.

First, consider our commonsense conception of mentality. On that conception, it does indeed appear that, ceteris paribus, being in a given mental state, M, itself involves a capacity to recognize that one is in state M (see Heil 1992, Chapter 5). Your belief that you are (now) in state M appears not (typically) to be based on beliefs about anything else, in particular not on a belief or decision that M satisfies a certain criterion. Suppose you announce that you have a headache or that you believe it will rain. I reply: "How do you know?" In the absence of some special provocation, my question would be regarded as a joke by anyone other than a cognitive scientist with a theory.

Second, imagine that you are in state M. According to Goldman, in forming a belief that you are in state M you (or some part of you) must compare M (or an IR of M) to a stored CR, and decide that you are in state M. "Categorization occurs when a match is effected between (1) a category representation and (2) either (a) a suitable representation of a state or (b) a state itself. Alternative (b) is especially plausible in the case of sensations, because it is easy to suppose that CRs for sensations are simply memory 'traces' of those sensations, which are easily activated by reoccurrences of these same (or similar) sensations" (sect. 5, para. 10). This is the model aimed at avoiding the mystery-mongering inherent in CF. How does it work? You go into M. This activates a trace of some past M-like state. The state and the trace are compared and a verdict is issued. But what is the trace contributing? What enables you to decide that the trace is an M-type trace? Or is this another "complete mystery"? An obvious line of response here is to appeal to unmentioned causal features of the mechanism: When an M-type trace is activated, you are caused to believe that it is an M-type trace. But if Goldman is allowed this maneuver, he is in no position to question its use by his opponents.

Qualitative characteristics, type materialism and the circularity of analytic functionalism

Christopher S. Hill
Department of Philosophy, University of Arkansas, Fayetteville, AR 72701

[Gol] "The psychology of folk psychology" is admirably clear and on the whole convincingly argued. Moreover, in my judgment, its conclusions are substantially correct. It is, I feel, a splendid paper. I also feel, however, that a couple of sections are not fully satisfactory. I will describe what I take to be the flaws of these sections and sketch some ideas that can be used to repair them.

1. Goldman on qualitative characteristics. Goldman's theory of self-ascription relies heavily on the notion of a qualitative characteristic; that is, on the notion of the "subjective feel" of a sensation. He describes qualitative characteristics as intrinsic and categorical and says they are fully accessible to the subject on the "personal" level.

Some philosophers have identified qualitative characteristics with certain functional characteristics, but it is clear that this option is not open to Goldman. Nor is he willing to identify qualitative characteristics with neural properties: "Presumably any sensation state or event has some neural properties that are intrinsic and categorical, but do these properties satisfy the accessibility requirement? Presumably not. Certainly the naive subject does not have 'personal access' . . . to the neural properties of his sensations" (sect. 4, para. 6). It appears, then, that Goldman is prepared to embrace the view that qualitative characteristics are irreducible.

Goldman may not be as fully committed to this irreducibility thesis as the quoted passage suggests; but to the extent that he is committed to it, he is confronted by a serious problem. Property dualism makes a mystery of how it is that we come to have qualitative states, and it offends against plausible principles of simplicity.

Fortunately, property dualism is not forced by any of Goldman's main arguments, nor even - despite his contentions to the contrary - by Goldman's observations about the inaccessibility of neural properties. Let us say that neural information is information about neural properties of the sort that could be obtained from a neuroscience experiment or from a textbook of neuroscience. Goldman would be justified in saying that we do not have introspective access to neural information, but it by no means follows that we lack introspective access to neural properties. Even though water is identical with H2O, the information about water that is afforded by our unaided senses is quite different from the information about H2O that is provided by chemistry texts. It could be the same in the case of qualitative characteristics and neural properties. It could be that they are identical even though the information about pain, for example, that is afforded by introspection is very different from the neural information that is obtained from other sources.

Someone who embraces Goldman's theory of self-ascription is committed to accepting either property dualism or type materialism (i.e., the view that qualitative characteristics are identical with neural characteristics). Since most readers will presumably be less sanguine about the prospects of property dualism than Goldman, it may be of interest that in a recent book (Hill 1991) I respond at length to the main arguments against type materialism and also give positive reasons for thinking that type materialism is correct.

2. Goldman on analytic functionalism. In section 6 Goldman describes and criticizes a view about self-ascription that is in competition with his own account. According to this alternative view, which plays a role in paradigmatic versions of analytic functionalism, it is a defining feature of certain mental states that they have a tendency to cause one to ascribe them to oneself. I will call this view the definition thesis.

In my judgment, Goldman's objection to the definition thesis is unfair. He points out that its advocates understand it as entailing that "being in a mental state automatically triggers a classification of yourself as being in that state" (sect. 6, para. 5); he then construes the latter view as entailing, in effect, that there can be no interesting subpersonal story about self-ascription of pain. As I see it, this construal is unwarranted.

Fortunately, there is a different objection to the definition thesis - an objection that appears to be conclusive. According to the definition thesis, when we learn the meaning of the term "pain," part of what we learn is that the following principle is true:

(P) Pain has a tendency to cause one to believe that one is in pain.

Is this a reasonable view? It appears that the answer is no, for the view implies that the definition of "pain" is viciously circular. Thus, if it is true that (P) is an essential constituent of the definition of "pain," then it seems that I cannot fully appreciate the content that the definition confers on "pain" unless I have already mastered the use of the expression "believe that one is in pain." Moreover, given that the meaning of "believe that one is in pain" depends in part on the meaning of "pain," it seems that I cannot master the use of the former expression until I have mastered the use of the latter. In view of these considerations, it seems that the definition thesis entails that I cannot fully grasp the content of "pain" unless I have already grasped its content. If so, the thesis is wrong. It is absurd to say that "pain" is unlearnable.

It may be that analytic functionalists have failed to see this point because they have confused (P) with (P*):

(P*) In every case, pain is realized by a state X such that X has a tendency to cause one to believe that one is in state X.

(P*) is free from semantic circularity, but it is unsatisfactory for a different reason. Analytic functionalists want to claim that pain has a tendency to cause one to form a belief involving the concept of pain, where this concept is taken to be definable in functionalist terms. Unlike (P), however, (P*) does not entail this claim. (One can believe that one is in a state X, where X happens to realize a given functional state, without making use of a concept that is functionally definable.)

Analytic functionalism without representational functionalism

Terence Horgan
Department of Philosophy, Memphis State University, Memphis, TN 38752
Electronic mail: [email protected]

[Gol] Analytic functionalism (AF) is a view about the meaning of folk-psychological terms and about the nature of the mental states these terms express. The position Goldman dubs "representational functionalism" (RF), on the other hand, is a view about the psychological processes underlying the self-ascription of mental states. If I understand him correctly, Goldman maintains that AF is not a tenable semantic/metaphysical view unless RF is a tenable psychological view; and so he thinks the objections he raises against RF also undermine AF. I find his objections to RF quite persuasive, but I will argue that both his critique of RF and his own proposed psychological alternative to RF leave AF essentially unscathed.

Goldman's implicit argument against AF can perhaps be explicitly rendered as follows.

1. If AF is true, then (i) folk psychology is a theory and (ii) the folk-psychological terms of public language are functionally definable via this theory.

2. If folk-psychological terms are thus definable, then RF must be true; that is, any cognizer who has semantic mastery of mental terms must self-ascribe each mental state by means of a category representation (CR) which fully specifies the causal role definitive of that state.

3. RF is very likely false.
4. Therefore, AF is very likely false (from 1-3).

The first premise is not a substantive claim; it just unpacks the definition of AF. And of course the third premise is defended at length by Goldman - quite plausibly, in my view. So the heart of the argument is the second premise. The general idea lying behind it is surely right: No putative semantical account of mental terms in public language and no putative metaphysical account of the nature of mental states like desire and belief could be adequate or correct unless these accounts mesh with the right psychological account of how cognizers actually ascribe mental states like belief and desire. But this idea, sound as it is, does not really warrant accepting Goldman's second premise.

Why not? Because in general a category representation (CR) can reliably subserve the perceptual recognition of a certain state or property without representing the physical or functional essence of that property; this can happen if the presence of the features specified in the CR is reliably correlated with the property itself, across a suitably broad range of actual and counterfactual cases. For example, competent cognizers can reliably identify water perceptually without using CRs containing information about water's chemical composition. Presumably they do this by using CRs containing information about certain prototypical features of water: fluid, colorless, transparent, tasteless, found in lakes and streams, and so forth. Similarly, even if beliefs and desires are functional states, definable via the global theoretical structure of a folk-psychological theory, it could well be that competent cognizers self-ascribe such a state using a CR that contains certain prototypical features of the state - rather than by using a CR that specifies the state's functional essence.

Two aspects of Goldman's target article raise further questions about AF. One is his contention, in section 6, that the classical functionalist account of self-ascription is untenable. He writes:

A crucial feature of classical functionalism is that it offers no story at all about how a person decides what mental state he is in. Being in a mental state automatically entails, or gives rise to, the appropriate belief. . . . It should be clear, however, that this automaticity assumption cannot and should not be accepted by cognitive science, for it would leave the process of mental-state classification a complete mystery. (sect. 6, para. 5, 6)

But the classical functionalist has the following natural reply: Although folk psychology is a theory, it does not purport to be a complete psychological theory that offers full psychological explanations. Rather, folk psychology could perfectly well get absorbed into a fuller, more complete cognitive science that posits many more of, and many more kinds of, psychological states and processes than are posited by folk psychology itself. Hence mental self-ascription, although an "automatic" process from within the exclusively folk-psychological explanatory perspective, need not be an automatic process from within the more complete explanatory perspective of cognitive science.

A second apparent problem for AF is Goldman's own proposed positive (psychological) account of mental self-ascription. Given the importance of qualitative/phenomenological characteristics in his account, one might think that this account itself entails the falsity of AF. But a full-fledged analytic functionalist will simply deny this. Remember: AF is compatible with the denial of RF, and hence is compatible with claiming that mental self-ascription uses CRs that do not track a mental state by representing its total functional role. Furthermore, qualitative mental states do exist, according to AF; they are functional states themselves. So suppose that Goldman's account is right: Mental self-ascription involves (i) a CR that includes representations of qualitative states but does not represent a total functional role; (ii) an instance representation that is largely qualitative; and (iii) a suitable CR/IR match. Why could not such CRs and IRs, plus their various representational and qualitative/phenomenological constituents, plus the first-order mental states tracked by the CRs, all be functional states?

Finally, it should be noted that even if the psychological view Goldman calls representational functionalism is mistaken, there are various other ways the psychological processes underlying mental-state ascription might use a folk-psychological theory. Consider, for example, the cognitive mechanisms subserving the generation of belief/desire explanations of actions. Perhaps these mechanisms use mental representations of certain fundamental folk-psychological generalizations, for example, "If someone has a desire D and no competing desires, and believes that action A is easily within his power and would bring about D, then ceteris paribus, he performs A." Such mentally represented generalizations might operate cognitively to constrain the process of mental-explanation generation (and thus to constrain the mental-state attributions involved) - thereby helping to assure that the resulting attributions generally conform to the theoretical principles of folk psychology.


Qualia for propositional attitudes?

Frank Jackson
Philosophy Program, Australian National University, Canberra, ACT 2601, Australia
Electronic mail: [email protected]

[Gol] Alvin Goldman argues that if (and only if) we allow the qualitative rather than the functional properties of a mental state to settle its type qua mental state, we can understand how it is possible to self-ascribe mental states successfully. I will argue that there is a serious problem for this view in the case of the propositional attitudes.

In section 4 Goldman characterises a mental state's qualitative properties as "intrinsic, categorical properties that are detected 'directly'" that give the state's "subjective feel." He mentions in section 5 that "some philosophers claim that qualitative properties are 'queer,' and should not be countenanced by cognitive science," but he goes on to observe, quite rightly, that qualitative properties are commonly referred to in perfectly respectable, scientifically oriented psychology. What these philosophers are most strongly objecting to, however, are not qualitative properties per se, but qualitative properties conceived as properties that lie outside the naturalistic world view that dominates current cognitive science. It is qualitative properties regarded as quite distinct from the kinds of properties - neural, syntactic, structural, functional, or whatever - that could be realised by certain internal states (of the brain presumably, in the case of creatures like us) of a purely physical organism, qualitative properties thought of as nonphysical properties - qualia as they are often labelled - which they regard as queer.

Goldman's qualitative properties are qualia. This is clear from his discussion of Jackson (1982; 1986) and Nagel (1974). Moreover, if his qualitative properties were physical, his theory would simply be a version of the "second functionalist model" he rejects in section 4. Many (but not I) will regard this as enough to dismiss Goldman's story. Goldman does go beyond the usual position of those sympathetic to qualia, however, in a way which, to my mind at least, makes his position implausible.

Goldman holds that the propositional attitudes - those mental states whose paradigmatic members are belief and desire - and not just the so-called "raw feels" like pains and sensings of bright yellow, have distinctive qualia; and, moreover, that these distinctive qualia are definitive of the propositional attitudes they characterise. What makes a propositional attitude the attitude it is - what makes it, say, the belief that drinking water is good for you, rather than the belief that drinking gin is good for you, or the desire for a radish - is its qualia, its "feel." And the latter feature of his theory is central to it. If Goldman were to hold that it was the functional role occupied by, rather than the qualitative feel of, a propositional attitude which made it the attitude that it is, the problem of justifying self-ascription would of course be the same for him as it is for the functionalist.

Goldman discusses one well-known problem for holding that qualia are definitive of the attitudes, namely, that there does not, as a matter of introspective fact, appear to be a characteristic feel associated with, for instance, my belief that drinking water is good for you. That is one obvious difference between pain and belief. The objection I wish to press, however, is not mentioned by Goldman.

There can be no conceptual connection between a person's internal states' instantiating various qualia and that person's actual and possible behaviour in actual and possible circumstances. Qualia are intrinsic, categorical properties quite distinct from, and not supervening on, the properties that figure in current science. No complex of instantiated qualia can by their very nature point toward one bit of behaviour rather than another. It is, however, of the essence of the attitudes that suitable complexes of them do by their very nature point toward one bit of behaviour rather than another. Enough information about what I believe and desire tells you in itself what I am likely to do in certain circumstances. If I desire water strongly and believe that there is water available to me in the glass in front of me, then, other things being equal, I will drink the water in the glass. And we know this because of our very understanding of what it is to have those beliefs and desires. We do not need to do experiments to know that that set of beliefs and desires points toward drinking the water as opposed to, say, tipping it down the sink.

I am not saying that having that particular set of beliefs and desires logically entails drinking the water. Outside factors can always intervene; thus, it is essential to include the "other things being equal" clause. The same is true of poisons. Drinking a poison does not entail getting ill. You can always get lucky. Nevertheless, there is a conceptual connection between drinking a poison and getting ill. The rubric "Other things being equal, one who drinks a poison gets ill" is not the tautology "One who drinks a poison gets ill in those circumstances where one who drinks a poison gets ill," despite the notorious fact that it is difficult to spell out precisely what other things being equal comes to. Likewise, there is a conceptual connection between having certain sets of propositional attitudes and behaving in certain ways in certain circumstances, despite the difficulty of spelling out the connection in full detail. Goldman's position, it seems to me, cannot accommodate this central, conceptual fact about the attitudes.

Gopnik's invention of intentionality

Carl N. Johnson
Program in Child Development/Child Care, University of Pittsburgh, Pittsburgh, PA 15260
Electronic mail: [email protected]

[Gop] Gopnik is a constructivist, eschewing introspectionism as well as behaviorism. Our concepts of mental states, she tells us, cannot be derived directly from introspection. Yet she allows that first-person psychological experience may play a direct and important role in our developing understanding of mind.

Few psychologists or philosophers would disagree with this general framework. But within it, Gopnik presents a most incredible argument. She begins by expansively focusing her target article on "one particular belief: the belief that psychological states are intentional" (sect. 1, para. 8). This belief is conveniently equated with the conceptual achievements of 4-year-olds. Prior to this age, children are described as thinking they do not have intentional states, even though they do. Hence Gopnik concludes that "empirical findings show that the idea of intentionality is a theoretical construct, one we invent in our early lives" (sect. 1, para. 10, emphasis is my own).

Of course, the empirical findings show nothing of the sort. Gopnik will surely concede that her theorizing is itself highly inventive. My comments concern these intellectual leaps.

Gopnik's argument rests on unanalyzed assumptions about the concept of intentionality: that intentionality is a singular belief, a theoretical construct, part and parcel of a "representational model of mind." In fact, it seems more likely that the concept of intentionality is a multifaceted, foundational concept, only partly overlapping with a representational model of mind.

Historically, the concept of intentionality developed as a descriptive term, distinguishing a core property of certain mental states, namely, their aboutness or directedness. Thoughts, desires, or beliefs are about or directed toward some content. Although philosophers have been particularly intrigued with the representational/misrepresentational implications of intentionality (see Gopnik note 6), there is no reason to simply equate understanding intentionality with having a representational theory of mind. Surely 3-year-olds have a basic understanding of the directedness or aboutness of mental states: As Gopnik admits, they know about the world-to-mind direction of fit of mental states like desires and intentions. But they equally know something about the directedness of states like beliefs, namely, that beliefs are directed toward fitting the world (even though they may not be aware how this fit comes about). Some awareness of intentionality, it would seem, is intrinsic to any awareness whatsoever of intentional states, qua intentional states (Johnson 1988). What develops, hence, is not the concept of intentionality per se, but rather increasing knowledge about particular instances and operations of intentionality.

It is therefore a mistake to regard intentionality as a theoretical construct. Whereas children may be naively theorizing about all sorts of mental entities - desires, beliefs, representations - they are not theorizing about intentionality per se. Intentionality, like causality, is a foundational concept intrinsic to any folk psychological theorizing; it is not an object of psychological theorizing itself (see Kuhn's distinction between theoretical and foundational concepts cited by Carey 1983).

This brings us to Gopnik's claim that although 3-year-olds have intentional states, they believe that such states are nonintentional, having invented an erroneous Gibsonian theory of mind. Here Gopnik falls prey to what Sugarman (1987) describes as the negative-positive error, common to Piaget's theorizing. Whereas it is fair to grant that young children are ignorant of certain aspects of folk psychology, it is a mistake to conclude that such ignorance implies an alternative set of beliefs. Just as Piaget erred in taking children's ignorance of mind as evidence that they believe that their own view is absolute (egocentrism/realism), so too Gopnik errs in taking young children's ignorance of certain properties of mind as evidence that children believe that the mind is not intentional. The fact that children are ignorant of how beliefs come about does not imply that they have a theory that beliefs are acquired directly from reality.

In the end, the question is how children get from having mental states to a fuller ability to reason causally about such states. Gopnik insists that children could not learn from their own psychological experience because their experience is wrong. But by this token, neither could they learn from observing the states of others, about which they are equally wrong. What is it that children are to learn from? How is it that they ever come to know what they do not know already? Claiming that children "invent intentionality" is hand waving. Children do not have to invent intentionality. They have a big enough job figuring out the multiple and various ways it works.

"Good developmental sequence" and the paradoxes of children's skills

Brian D. Josephson
Cavendish Laboratory, Madingley Road, Cambridge CB3 OHE, England
Electronic mail: [email protected]

[Gol, Gop] In her target article, Gopnik addresses the question of whether our conscious knowledge of our psychological states is direct or indirect (the latter involving, for example, observing behavioural correlates to our psychological states and thereby discovering useful ways of sorting them into categories of believing, desiring, and so on). Her conclusion that in cases such as belief our knowledge is indirect rests heavily on the interpretation of observations made with young children; it is accordingly appropriate to ask whether one can account for these sometimes counterintuitive observations in some other way. Examples of surprising results are that hungry children at age 3 may deny after they have eaten that they were hungry previously; that children of this age, asked midway through a drawing to change what they are drawing, may report that their original intention was to draw the picture they drew finally; and that children who believed one thing originally about the contents of a box but learned when the box was opened that it contained something else cannot, when asked, recall that they originally believed the box did contain something else. On the other hand, as an indication that simple inability to remember is not the real problem, children of the same age are able to answer accurately questions about the past when asked about actual events rather than beliefs about those events, and can even answer questions concerning games they have played involving pretence. If children are concurrently conscious of things they cannot recall later, how do we explain these deficiencies?

Difficulty with task performance is not, I think, the issue here. Rather, we should ask what cognitive strategies are adopted by the child for the different tasks at various stages in its development and why. In the case of the recall of hunger task, it may be that hunger is experienced by the child at the time but the fact of being hungry is not recorded in memory (though the fact of having had hunger satisfied by eating a particular kind of food would be). The primary response to hunger should be to obtain food and to eat, and since being hungry is a condition that normally persists until the hunger is satisfied, no additional benefit, as far as this primary response is concerned, would be conferred by putting the fact "I am hungry" into memory. Pain is different: It may be beneficial to learn what circumstances are associated with pain, unlike with hunger. Again, with regard to intention and belief, it may be noted that it is much more relevant to know what one intends to do now and what one believes now than to know what one intended or believed at some earlier epoch. It may well be that when a 3-year-old child changes its intentions or beliefs it overwrites them in memory and that this is why its former intentions and beliefs are inaccessible. As regards memory in pretence tasks, to carry out a game of pretence successfully it is necessary to hold clearly in memory a specification of what one is pretending, so good memory of past events is indicated. Superficially, pretence and discarded beliefs are similar, but they actually fulfil (or fail to fulfil) quite different roles. I suggest, then, that the experimental observations can be explained logically on the basis of memory strategies alone, and that we do not need to make the somewhat implausible alternative assumption that some psychological states are routinely inaccessible to consciousness.

It may be asked, however, why the child does not make use of more advanced cognitive strategies at the same time as it uses the very simplistic ones postulated above. An answer can be found, perhaps, in the concept of "good developmental sequence" invoked by Josephson and Hauser (1981) to account for the developmental sequence in sensorimotor skills: Certain developmental sequences are more beneficial than others. For example, it may be theoretically possible for a child to learn to run before it learns to walk, but the outcome of such an exercise would almost certainly be more ungainly and more energy-consuming than if it first mastered certain skills of balance through learning to walk. Similarly, it may be disadvantageous for a child to store information indiscriminately in memory before it has learnt to judge which kinds of information are useful for which purpose.

I conclude with two issues that are essentially methodological, the first being concerned with what I see as false dichotomies in both Gopnik's and Goldman's target articles. Consider Gopnik's question: Are our beliefs about intentionality theoretical, enculturated, or more directly derived from first-person apprehension? As I see it, nature simply uses whatever methods work, and does not necessarily use in isolation a strategy advocated by some particular school of psychologists or philosophers. In the case cited, it may well be that when my way is blocked by a stream and I judge that it is narrow enough for me to be able to jump over it, I become aware that that is my psychological state (belief) directly, whereas if I visit a psychoanalyst I may learn things about my beliefs by developing a theoretical model of myself. Yet again, cultural factors may strongly influence my conscious beliefs about myself. I may tell myself I am not biased against women because my culture informs me I ought not to be, while the arguments I make at, for example, a selection committee suggest to observers the presence of a strong bias in my underlying psychological state.

My second methodological comment concerns terminology. I believe that statements such as that children "think" such and such, or "believe" such and such, should be made only with considerable caution. What do we mean, for example, when we say, as Gopnik does, "3-year-olds believe that cognitive states come in only two varieties"? The child can classify its states into two varieties, certainly, but "belief" that there are only two suggests a different order of cognitive skill. In the interests of clear thinking, it seems desirable in such circumstances to avoid using words like "belief."

Common sense and adult theory of communication

Boaz Keysar
Psychology Department, University of Chicago, Chicago, IL 60614
Electronic mail: [email protected]

[Gop] Gopnik presents an important and provocative thesis - that intentionality is a theoretical construct developing early in childhood. Even though common sense tells us that we have direct access to our psychological states, and even though we experience this access as unmediated, it is not different from the way we gain our knowledge about the psychological states of others. Both are mediated by inferential processes. Our common sense, then, is a theoretical construct that leads us to make such a distinction. Whereas the target article focuses on supporting this claim with evidence from the developmental literature, I will instead concentrate on adults.

What is the adult's theory of intentionality? In contrast to the elaborate body of studies that directly evaluate the child's theory of mind (e.g., Astington et al. 1988; Gopnik 1990; Wellman 1990), the adult theory of mind has not been directly evaluated in Gopnik's paper. Some studies that she mentions can address this question (e.g., Nisbett & Wilson 1977), but there is no systematic attempt to describe an adult "theory of mind" as such. In this regard some philosophical theories are basically elaborate accounts of adult common sense (e.g., Searle 1980). The resulting theoretical notion of adult common sense is thus overly based on intuitions. In the target article, then, there is an assumption that we as researchers have direct access to adult common sense.

Given this assumption, common sense may still reflect one of the following: It could be that our belief in direct access to our psychological state of intentionality is a theoretical construct, or it could be the case that first-person experience of intentionality is indeed accessed directly. To distinguish between these possibilities Gopnik suggests two lines of argumentation. First, a continuity argument: Because we are the same entities, only older, we probably have the same theory of mind that we developed as children. Second, an argument by analogy: We are fooled into believing that our sense of intentionality is direct and not theoretically mediated in the same way that experts are led to believe that they do not use inferences but simply "see" solutions. Both arguments are suggestive; neither is conclusive. It is possible that after the child develops a theory of mind, as described in the literature, further maturation may involve first-person access to intentional states, access that is not theoretically mediated. One can imagine how such a change may even be prompted by the implicit theory of mind itself. These two alternatives may also be empirically indistinguishable. It could be that intentionality is indeed a theoretical construct but so deeply ingrained that its products perfectly mimic those of nontheoretical, direct access to psychological states. This is not necessarily a problem for Gopnik's account. Instead, it suggests that it would be interesting to evaluate the theoretical status of adult common sense systematically, and that such an investigation could provide a direct test for the kind of arguments the target article puts forth.

Gopnik suggests that a possible source of evidence for the adult theory of mind may come from situations that lead to errors, as in some cases of experts' failure. Communicative behavior may be a prime example of a relatively complex activity that promises to be revealing about the nature of the adult theory of mind. According to the target article, one aspect of a theoretically driven common sense is the conjunction of two elements: (1) Self-experience is interpretative and not direct, and (2) we believe that self-experience is direct and not interpretative. Because the comprehension of utterances is interpretative by its very nature, language use may be a reasonable place to look for relevant evidence.

Some preliminary studies in language use may be consistent with Gopnik's claim about the adult's theory of mind. For example, it seems that speakers may not be aware of the interpretative nature of their own utterances. Elizabeth Hinkelman (personal communication) found that speakers are quite poor at reconstructing their own speech acts when they observe a videotape of a spontaneous interaction. Even when they are provided with the complete context, they are not always sure whether their utterance was a request, a suggestion, or a hint. Similarly, Anne Henly has collected data in my lab suggesting that speakers are not much better than chance in their attempts to identify their own intentions when they produced syntactically ambiguous sentences. For example, speakers who described a picture as "the man is chasing the woman on a bicycle" were not always able to tell whether they meant that the man or the woman was on the bicycle when presented with their own utterance the following day. Such examples suggest that when speakers are producing their own utterances they may not be fully aware of their interpretative nature. Similarly, when people are asked to evaluate another person's interpretation of utterances, they seem to rely on their own understanding as if it were not interpretative: When people understand an ambiguous utterance as sarcastic (e.g., "Thank you for the helpful advice") they believe that others would perceive the same intention even when they know that the others lack information that is crucial for the perception of sarcastic intent (e.g., information that the advice was not very helpful). It may be that they perceive their own interpretation as "direct," consequently underestimating the inferences that led to the perceived intention (Keysar 1991). In other words, they take their understanding to be noninterpretative.

Because adult theory of communication and language use is a subset of the general theory of mind, it provides a reasonable place to start. Whether or not the results will support a theory of mind in line with the arguments of the target article, Gopnik's target article does challenge us to investigate adults' theory of mind systematically and directly.

Self-attributions help constitute mental types

Bernard W. Kobes
Department of Philosophy, Arizona State University, Tempe, AZ 85287-2004
Electronic mail: [email protected]

[Gol, Gop] People have both (a) "object-level" sensations and beliefs and desires about the world and (b) ordinary practices of describing and explaining themselves and others mentalistically (folk psychology in Goldman's broad sense). What is the relation between the two? I shall argue that folk psychology, especially insofar as it includes self-attribution, is a special practice of description and explanation that helps constitute its own subject matter.

Goldman is a bit unimaginative in the way he saddles analytic functionalism (AF) with psychological baggage. He criticizes AF on the grounds that it ought to try to be psychologically realistic, in the manner of representational functionalism (RF). Now AF tries to distill out of ordinary (i.e., nontechnical, not philosophically self-conscious) usage and beliefs a network of causal interrelations that is definitive of each mental state's type-identity. Articulating these is an enterprise distinct from the psychology of folk psychology. It is conceivable, for example, that ordinary people make self-attributions directly on the basis of nondefinitive, heuristic criteria that are reliable often enough to be worthwhile but go systematically wrong under adverse conditions. AF may accommodate this in any of three ways. First, the more articulated functional roles that (according to AF) define mental-state types may be stored in long-term semantic memory but not be ordinarily used except as a fall-back when there is reason to double-check. Second, definitive functional roles may not be stored in the minds of many ordinary speakers at all, but only in the minds of some speakers to whom the many defer. AF may postulate a semantic division of labor. Third, definitive functional roles may be stored in a highly implicit and generalized form (I shall return to this point). So folk psychology need not explicitly represent causal relations definitive of mental types, and Goldman's assimilation of AF to RF seems, even from a broadly naturalistic perspective, something of a straw man.

Shoemaker (1975) argues that it is part of the very functional specification of pain (say), that it causes a belief that one is in pain. Thus, according to Shoemaker's view, in effect, a self-attribution of pain helps constitute the state it is about as one of pain. Does Goldman's schematic RF allow for this? Suitably construed, it does. On RF a belief that one is in pain occurs when a match occurs between a category representation (CR) for the mental state and a suitable instance representation (IR). Goldman infers from this that one could only come to believe one was in pain if one already believed one was in pain. But why should we acquiesce in the assumption of temporal priority? If coming-to-believe that one is in pain helps constitute the state one is in as one of pain, then we would expect the mental state and the self-attribution to be concurrent. This is not inconsistent with the matching model if the process of matching is allowed to include itself as part of the pattern being matched.

Compare the following piece of reasoning, which I shall call (R). Premise: (R) is a valid argument. Therefore, at least one valid argument exists. Evidently (R) is itself a sound, albeit self-referential, argument. Analogously, a mental match may take place which includes itself as a component of one of the items matched. Moreover, it is plausible that explicit noncircular models for the matching procedure could be constructed, especially if, as Goldman allows, a partial match may trigger an initial type identification of the mental state, which then receives a measure of "bootstrapping" confirmation from the type identification itself.

In my view we should be idealistic about the mind, not in the trivial sense that what is mental is mentally constituted, but in the more substantial sense that object-level mental states and events, the subject matter of folk psychology, are partly constituted by our self-attributions and hence indirectly by our ordinary public practices of mentalistic description and explanation. The view is not that whether I am in mental state M or not is indeterminate or nonfactual, nor that the question is somehow up to me to decide (the relevant self-attribution may be involuntary and automatic). It is rather that psychologists cannot assume that "object-level" mental-state tokens belong to determinate types prior to and independently of relevant self-attributions.

Goldman suggests that the Schachter and Singer (1962) data might best be construed as showing that cognitive influences help determine which emotion is actually felt, rather than merely the process of labeling or classifying the felt emotion. If so, then a self-attribution of anger (say) may help constitute the relevant emotion-instance as one of anger. This example also illustrates the impossibility in certain cases of distinguishing between phenomenological awareness and awareness of functional role. Goldman himself allows great latitude and variety in the objects of conscious awareness: Attitude types such as doubt and disappointment, even propositional-attitude contents, may serve. Might we not, then, be consciously but implicitly aware of functional roles? In being aware of a propositional-attitude content p we are implicitly aware of how coming to believe p would change our current beliefs and desires. We are also aware of the strengths of propositional attitudes such as belief and desire, and this too constitutes an implicit awareness of an aspect of functional role. Goldman endorses recent psychological work according to which pain has three microfeatural dimensions: character, intensity, and aversiveness. But aversiveness, how much the subject minds the pain, is surely a functional aspect of the experience, concerning the strength of its causal connection to actions seeking to diminish the sensation. So, given Goldman's own views, it is plausible that we can be consciously but implicitly aware of functional roles. Some phenomenological qualities may not be monadic but relational, in the manner of functional states.

This raises the question whether Goldman's phenomenological model is properly characterized as an alternative to functionalism. If the phenomenological quality of a mental state or event may count as conscious awareness of functional role, then that functional role may be matched against the functional role represented in the CR. Thus, when I wake up with a headache, I am quickly aware of being in a state that will tend to cause me to take steps to get rid of it - aware, that is, of its aversiveness. I have reliable and fast information about its likely behavioral effects under various possible circumstances. Of course, I do not represent each of the infinitely many possible counterfactual situations discretely and explicitly, but I do represent them in a generalized, unified form, and this may be sufficient to trigger a quick match with the CR for headache. So this approach makes headway against Goldman's arguments from the ignorance of causes and effects and of subjunctive properties.

We can also defuse the threat of combinatorial explosion, for in order to make the match between CR and IR the subject need not explicitly and discretely represent each of the other mental states to which the IR is causally linked. These countless states need be represented only once in long-term semantic memory as part of a single theory of mind, either explicitly or implicitly via some generative structure for propositional contents. The IR may be an experiential representation of functional role. Moreover, although the functional role of the instance includes a wealth of causal connections, many of these may be causally mediated by the very match between IR and CR and the concomitant self-attribution.

Consciousness is a sufficiently peculiar phenomenon that, antecedently, we should not be surprised if it turns out to be reflexive in the way I have sketched. The model is surely less mysterious than the direct detection of (nonphysical?) phenomenological properties to which Goldman apparently subscribes.

I want to close with some remarks about Gopnik's theory of first-person knowledge. I find much of what Gopnik says under this head plausible and I will assume that our theory of mind may have a bearing on the particular attributions we make, even self-attributions. But it is difficult to accept that the apparent authority of strictly present-tense self-attributions is just an illusion due to expertise. So I want to suggest an answer on Gopnik's behalf to Goldman's challenge on this score: "If faulty theoretical inference is rampant in children's self-attribution of past states, why do they not make equally faulty inferences about their current states?" (sect. 10, para. 8).


For attributions to others we have only external evidence, but for our recently past selves we have a higher grade of evidence, namely, fresh memories of inner states. Three-year-old children too have, presumably, good memories of the recent past. So it is not plausible that memory decay accounts for children's systematically erroneous beliefs about their immediately past beliefs. Rather, as Gopnik argues, the child's theory of mind prevents the child from giving this memory evidence its proper weight, or perhaps from interpreting it correctly. Yet it is remarkable that nobody of any age makes similar errors about strictly concurrent mental states. Why is this? High-grade evidence cannot be the whole story, for it is almost as good for recent-past self-attributions, even at age 3, yet strikingly insufficient in that case.

I have argued that self-attribution helps constitute the type-identity of the relevant object-level mental state - but plainly this holds only when the self-attribution is present-tensed. If a mental state is in the (even very recent) past, then its type is already fixed, regardless of subsequent self-attributions. But present-tense self-attributions help constitute the type-identity of the relevant object-level states in such a way as to tend to make them self-verifying. This is so even if all self-attributions are as theory-laden as Gopnik thinks. So if a 3-year-old (perhaps in consequence of theory-laden self-cognition) sincerely says or thinks that he thinks that there are pencils in the box, or that he wants chocolate mousse, then those self-attributions help constitute his present mental state as one in which he really does think there are pencils in the box, or really does want chocolate mousse. So this model yields an attractive account of the contrast between present-tense self-attributions and recent-past self-attributions in 3-year-olds.

Even a theory-theory needs information processing: ToMM, an alternative theory-theory of the child's theory of mind

Alan M. Leslie, Tim P. German and Francesca G. Happé
MRC Cognitive Development Unit, University of London, London WC1H 0AH, England
Electronic mail: [email protected]

[Gol, Gop] Although we endorse the theory-theory view in general terms, we think the specific "consensus" version Gopnik advocates is wrong. We do not believe that the preschool child's success on false-belief (FB) tasks reflects the construction of a representational theory of mind (RTM), nor do we believe that the child's theory of mind undergoes a radical conceptual shift around the age of 4.

Gopnik endorses the consensus RTM view of preschool development that is seen at its most explicit in Perner (1991b). The key notion is that success on standard FB tasks at 4 years is the result of a conceptual shift to RTM. The vital question for any developmental theory-theory is where the theory and its concepts come from in the first place. To the limited extent that advocates of preschool RTM address this question, the following answer can be gleaned. Because FB is a misrepresentation of a situation in the world, it can be understood by the child only in terms of a theory of representation. The child constructs a theory of representation by (somehow) learning about artefacts like pictures (models, maps, etc.), which being both public and observable are easier to learn about than beliefs. Having thus developed a theory of representation, the child applies it to the mind in the form of a pictures-in-the-head theory of mental states. Therefore, understanding public representations should occur earlier than understanding FB.

The above story can be given sophistication: Although a photograph is a representation, it cannot be false in the way a belief can be false. A photograph is simply an accurate representation of a situation (e.g., the chocolate sitting in the cupboard). If the situation changes, the photograph is simply a still-accurate representation of the old situation, not a misrepresentation of the new situation. In the FB task, Maxi's belief starts, like the photograph, as an accurate representation of the situation. When the situation changes, however, unlike the photograph, Maxi's belief does become a misrepresentation of the new situation. This is because Maxi mistakenly believes his representation (of the previous situation) accurately represents the current situation. The photograph cannot perform this trick because the photograph cannot believe anything.

Notice that the difference between the two cases above is precisely related to the special nature of believing (and more generally, to the nature of propositional attitudes) rather than to the general problem of the nature of representations. Understanding representation then could only be a subcomponent of understanding belief. Unlike the photograph itself, Maxi could mistakenly believe that the photograph depicts a current situation. In this account, the problem of understanding representations, like photographs or pictures-in-the-head that go out-of-date, is included as a subcomponent in the problem of understanding beliefs that go out-of-date. False belief includes all the conceptual complexities of representational pictures plus some other complexities specific to belief. Again, reinforced by the idea that public representational artefacts will be easier to learn about, this predicts that out-of-date pictures will be understood earlier than FBs.

Unfortunately for this account, the evidence from preschool development clearly contradicts the prediction. When tested in the same way, FB is reliably easier and "understood" earlier than pictures (Leslie & Thaiss 1992; Zaitchik 1990). Either (a) one must find an analysis in terms of general processes of theory construction in which FB is less complex than out-of-date pictures or (b) one abandons the assumption of purely general processes and looks for an account of FB understanding in terms of specialized, domain-specific mechanisms. If one opts for the first of these, one cannot account for the performance of autistic children, which is near ceiling on out-of-date pictures but severely impaired on FB. This leaves the second option, to which we return below.

Information processing and theory of mind. Gopnik has an idiosyncratic notion of what an information-processing theory should be. She stipulates (sect. 5) that an information-processing account of the shift in performance on FB tasks between 3 and 4 years of age must advert only to a single factor in explaining the differences. Despite Gopnik's worries about parsimony, this stipulation seems entirely arbitrary. A task analysis may well reveal that a number of specific and nonspecific mechanisms are involved across theory-of-mind tasks. The nonspecific mechanisms will also be involved in tasks outside theory of mind. Actually, Leslie and Thaiss (1992) do propose a nonspecific problem-solving mechanism that might neatly divide "easy" from "difficult" theory-of-mind tasks. However, this does not commit anyone to there having to be one, and only one, such component involved.

Gopnik discounts "information processing" largely because she conflates the rejection of the "shift-at-4" hypothesis with rejection of the theory-theory. But the former is only one version of the latter position; not all theory-theories need espouse the "shift-at-4" hypothesis. Indeed, is not attributing a single theory to both 3- and 4-year-olds more parsimonious than two theories plus a shift?

Finally, Gopnik assumes that an information-processing account is on a par with a theory-theory, that is, it would compete with a theory-theory. But an information-processing account is necessary whatever theory-theory one adopts (and indeed for any simulation theory too). Information processing provides the framework for cognitive science and thus for particular theories of cognitive abilities. Does Gopnik really believe that her theory-theory does not assume the processing of information?


The ToMM theory-theory. One version of the theory-theory (Leslie 1987; Leslie & Frith 1990; Leslie & Thaiss 1992) - the only one explicitly wedded to the information-processing framework - in its original form predates the "consensus" RTM. The theory postulates a domain-specific, processing theory-of-mind mechanism (called ToMM), which uses a specialized system for representing propositional attitude concepts. This system can express four kinds of information: It identifies an agent, an attitude, an anchoring aspect of reality, and a fictional state of affairs, so that the agent holds the given attitude to the truth of the fictional state of affairs in respect to the anchoring aspect of reality. For example, mother pretends (of) this banana (that) "it is a telephone." A device such as ToMM, then, can equip the child to process the behavior of agents in such a way that the effects of fictional states of affairs on actual behavior are made sense of via the agent's attitude without requiring the child to conceptualize a mind stocked with representations.
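The four-slot structure just described can be sketched as a simple record type. This is only an illustration of the agent-attitude-anchor-fiction scheme; the class and field names are inventions for this sketch, not notation from Leslie and colleagues:

```python
from dataclasses import dataclass

@dataclass
class MRepresentation:
    """One representation of the four-slot kind the commentary attributes to ToMM.

    Slots follow the text: an agent, an attitude, an anchoring aspect of
    reality, and a fictional state of affairs. Names are hypothetical.
    """
    agent: str     # who holds the attitude
    attitude: str  # e.g., "pretends", "believes"
    anchor: str    # the anchoring aspect of reality
    fiction: str   # the fictional state of affairs

    def describe(self) -> str:
        # Renders the slots in the "agent attitude (of) anchor (that) fiction" form
        return f'{self.agent} {self.attitude} (of) {self.anchor} (that) "{self.fiction}"'

# The commentary's own example:
example = MRepresentation("mother", "pretends", "this banana", "it is a telephone")
print(example.describe())
```

The point of the sketch is that the fictional content is quarantined in its own slot, tied to reality only through the agent and attitude, so nothing here requires modeling a mind stocked with representations.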

ToMM is evident at least from the time the child can understand pretense (around 2 years old). [See also Whiten & Byrne: "Tactical Deception in Primates" BBS 11(2) 1988.] The evidence for a subsequent radical theory shift around 4 years is less than compelling. What is well established is that there is a shift in performance on standard FB tasks. However, FB and representation tasks are not the only problems in which there is a performance shift between 3 and 4. We believe that a general processing component, the "selection processor," which plays a role in a number of tasks (e.g., false photographs and false maps), becomes functional during this period (Leslie & Roth, in press; Leslie & Thaiss 1992). For theory-of-mind tasks that do not require this general component, good performance is seen in 3-year-olds (e.g., Roth & Leslie 1992; Wellman & Bartsch 1988). The 3-year-old's difficulty, then, is due to this general component that can be stressed by tasks regardless of whether or not agents and attitudes are involved. Meanwhile, ToMM is intact. The autistic child, by contrast, shows poor performance on a wider range of belief-reasoning tasks (e.g., Baron-Cohen 1991a; Roth & Leslie 1991) alongside excellent performance on false photographs, maps, and drawings (Charman & Baron-Cohen 1992; Leslie & Thaiss 1992). This pattern reflects a relatively intact selection processor with an impaired ToMM. Developing a task analysis and associated models of information processing is not an optional extra but an integral part of understanding both the nature of the commonsense theory and where it comes from in development. It is also an essential first step in relating development of theory of mind to the brain (Frith et al. 1991).

Simulating simulation. We are unmoved by Goldman's attack on the theory-theory, which seems to rest simply on consideration of complexity, combinatorial explosion, frame problem, and so on. These are general challenges to cognitive science and apply equally to reasoning about the physical world, where a sensation/simulation account has no relevance.

A theory-theorist can happily allow the perception of bodily states (e.g., pain and thirst). However, a young child's perception of these is not clear, distinct, and just waiting to be labelled. The child comes to differentiate these perceptions by learning about functional conditions. Thus, a 2-year-old notices a nearly healed graze and complains loudly of having a sore knee. The child's soreness is purely functional, though he may yet come to recognize a class of associated bodily perceptions. The notion of bodily sensation cannot be plausibly extended to propositional attitudes (PA). We are positive we experience no sensations whatsoever accompanying PA, though perhaps Goldman does. Is this amenable to objective test?

We turn now to Goldman's discussion of early development (sect. 10). It is not true that 3-year-olds have a general difficulty imagining states that run counter to their own state, as shown, for example, by pretense (Leslie 1987) and by success on nonstandard FB tasks (e.g., Roth & Leslie 1991; Wellman & Bartsch 1989). In any case, imagining is not simulating. In general, solving problems by imagining requires a "theory" (i.e., access to a data structure) that guides the imagining, for example, imagining the result of mixing red and blue paint. What makes simulation different from a theory-theory is the claim that because we are people ourselves, we do not need a data base: We can use the action-decision system that normally guides our own behavior to simulate other people's behavior, if this system can run off-line. Note that this assumes that our normal action-decision system does not itself use a theory (see Stich & Nichols 1992).

We see problems accounting for the explanation of behavior. Either our action-decision system can somehow run backwards (as well as off-line) or it works by "analysis-by-synthesis." In either case, explanation should be harder than prediction. Yet, wherever they have been found to differ (e.g., Bartsch & Wellman 1989), explanation has been easier for children.

We suspect, pending detailed models, that simulation theory will have serious problems with higher-order mental states, since the decision-action system not only has to work "off-line" but also has to be able to simulate a simulation recursively. This is not a problem for a reasoning mechanism using recursive representations (e.g., theory-guided "imagining") but it is not clear that this could plausibly be a property of our normal action-decision mechanism unless it too uses a theory. Furthermore, "standard" simulation theory gives a fundamental role to pretending. When children first get the ability to pretend, they simultaneously acquire the ability to understand pretense-in-others (Leslie 1987). So the problem of recursiveness for the off-line, action-decision mechanism theory arises very early in development when it tries to account for how the child understands pretense in others. If the child has to pretend in order to understand a mental state (like someone else pretending), then he has to pretend that he is pretending. But then how does he understand his own pretending? A theory-theory (like ToMM) can put recursiveness in the representations but the simulation account, we suspect, has to put mechanisms within mechanisms. We doubt this will work.

Three inferential temptations

Alexander Levine and Georg Schwarz
Department of Philosophy, University of California at San Diego, La Jolla, CA 92093-0302
Electronic mail: [email protected] and [email protected]

[Gop] Introduction. We found Gopnik's survey of work by her and others on first-person knowledge of intentionality and the conclusions she draws of enormous interest, though not, perhaps, for the reasons intended by Gopnik herself. We were particularly struck by the fact that, although Gopnik astutely diagnoses the underdetermination of hypotheses by evidence to which a number of rival psychological and philosophical theories fall prey, her own interpretation of the evidence reveals precisely the same sort of hasty inference. Our brief attempt to draw attention to dangers threatening many accounts of psychological faculties proceeds by canvassing three pairs of orthogonal attributes, pairs for which the applicability of one member in no way implies that of its counterpart. What all three have in common is the frequency with which this (invalid) inference is taken for granted. In what follows, we shall, for the most part, take Gopnik's experimental evidence for granted; it is not our intention to challenge either the design of the studies on which she relies or her basic hypothesis that first-person knowledge of intentionality is somehow acquired and that its acquisition is probably not prior to the acquisition of less privileged, third-person knowledge of intentionality. We will, however, challenge some further conclusions Gopnik draws on behalf of her evidence, criticizing them not as obviously false, but as unwarranted.

Three pairs of orthogonal attributes

1. Innate versus fixed. It should be clear, on reflection, that having argued the innateness of one or another psychological faculty (e.g., Chomsky's generative grammar), stance, or belief, we are not thereby entitled to infer the fixedness of that faculty, stance, or belief in adults. Nor is the converse inference, from apparent fixedness to innateness, valid; so we find Gopnik challenging arguments from the apparent intransigence of commonsense psychology in the adult understanding of the mind to the innateness of that stance. Although she in no way denies evidence cited by Searle (1983) and others for the invariant, "intrinsic" nature of adult first-person intentionality, she rightly insists that such findings fail to constitute "evidence that the commonsense psychological account of intentionality is an innately determined aspect of our understanding of the mind" (sect. 6, para. 4). It is worth adding that just as the inferences from fixedness to innateness and vice versa are invalid, so is the contrapositive of each. Just because we find that some psychological faculty, belief, or stance is subject to change over time (is not fixed), we are not entitled to conclude that its transformations are not, nonetheless, somehow innately determined. Similarly, if the acquisition of some faculty is not determined by any innate predisposition, it still need not be subject to change. For example, although it is not innately determined which of the many natural languages infants will learn as their first language, they can never learn another with the same native proficiency. Native proficiency in a given language is not innate, but neither is it subject to change (barring head wounds and the like).
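The first pair's pattern of failed inferences can be summarized schematically (the rendering is ours, not Levine and Schwarz's):

```latex
% None of the four implications between innateness and fixedness holds:
\mathrm{Innate} \not\Rightarrow \mathrm{Fixed}, \qquad
\mathrm{Fixed} \not\Rightarrow \mathrm{Innate}, \qquad
\neg\mathrm{Fixed} \not\Rightarrow \neg\mathrm{Innate}, \qquad
\neg\mathrm{Innate} \not\Rightarrow \neg\mathrm{Fixed}
```

The last schema is the native-language case: which first language one acquires is not innately determined, yet native proficiency, once acquired, does not change.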

So we find ourselves taking issue with Gopnik's move from the assertion that ". . . crucial aspects of [the commonsense psychological understanding of intentionality] appear to be constructed somewhere between 3 and 4" to her rejection of the view in which "the commonsense psychological account of intentionality is an innately determined aspect of our understanding of the mind." Alternative explanations, such as the hypothesis that the "shift is the result of the maturation of an innately determined capacity," are, to be sure, mentioned in the footnotes. Our point is simply that this hypothesis derives as much confirmation from Gopnik's evidence as the "theory-theory" she prefers.

2. Acquired versus theoretical. From the fact that a given faculty can be shown, in some sense, to be acquired (as opposed to innate), it does not follow that that faculty is theoretical, at least on most ordinary conceptions of "theoretical." What all such conceptions have in common is their reliance on an analogy to scientific theories as paradigmatic theoretical structures, or a dependence on what Gopnik calls "general parallels between theory change in science and some kinds of cognitive development." The failure of the inference from "acquired" to "theoretical" is easily shown for two accounts of theoretical structures commonly used, at least implicitly, in developmental studies. On the so-called classical or propositional account of theory structure, acquired or constructed nonpropositional procedural knowledge - "knowing how" - is a standard counterexample. Notoriously, Fodor (1975) argues that all propositional knowledge must, in a rather strong sense, be innate. The reading of the analogy between scientific theories and commonsense psychology at work in Gopnik's study, however, appears to be closer to that of Keil (1989) than to the classical account. According to this increasingly important view in developmental psychology,

The knowledge that children come to have that unifies a domain . . . may be far too systematic and interconnected among concepts to be considered merely knowledge of definitions. Its relational nature and systematicity, in conjunction with its explanatory role, may be best described as theoretical knowledge. (Keil 1989, p. 118)

This account, too, leaves room for other forms of acquired or constructed knowledge, for not all knowledge is systematic or relational in this sense. To be sure, the acquired knowledge of intentionality described by Gopnik might plausibly fulfill this sort of domain-unifying function. It cannot, however, be said to do so simply in virtue of its being acquired rather than innate (even if we concede that it is the product of construction, rather than maturation).

3. Theoretical versus penetrable. Without wishing to embark on a detailed analysis of the aforementioned analogy between scientific theories and other knowledge structures, in virtue of which we call the latter theoretical, we pause to note one central aspect of the analogy. What early proponents of the theory-theory (such as Stich 1983) thought was important about their proposal that "folk psychology" was a theory of sorts was that all theories, by analogy to those of science, are defeasible, thus corrigible or eliminable. In Gopnik's words, "One characteristic of such theoretical structures, in children as well as in scientists, is that they are defeasible, subject to change and revision." But the whole notion of defeasibility, as it applies to scientific theories at least, is predicated on falsifiability in principle, a well-defined notion of old-time confirmational logic (see, e.g., Popper 1968). Although there is substantial disagreement over whether certain theoretical structures are really "subject to change and revision," it is important to note that both sides of this debate agree in distinguishing in-principle falsifiability from cognitive penetrability (see, e.g., Fodor 1984; 1988; Churchland 1988a). If a structure is theoretical, it is falsifiable in principle. It need not, however, be cognitively penetrable; if it is susceptible to change and revision at all, such change may not yield to any sort of conscious direction.

Functionalism can explain self-ascription

Brian Loar
School of Philosophy, University of Southern California, Los Angeles, CA 90089-0451
Electronic mail: [email protected]

[Gol, Gop] Alvin Goldman's endorsement of qualitative factors as central in our commonsense conceptions of the mental is, I believe, right. But in my view our English mental ascriptions also have functional or systematic implications - a claim that there is no space to try to vindicate here. This need not imply the "Ramsey sentence" view, that there are commonsense assumptions that, when Ramseyfied, yield necessary and sufficient conditions for the truth of ascriptions of, say, "X believes that ___." That is implausible for reasons Goldman does not go into. But because Goldman's arguments about self-ascription would also apply to weaker and more plausible functionalist and other systematic views of mental ascriptions, it is important to see that those arguments against the full Ramseyan view are not persuasive.

First, a working formulation of functionalism. A functional property F holds of a person X if and only if X has some lower-order property G that realizes F, that is, that has the relevant functional role. So the functional property of believing that Caesar crossed the Rubicon holds of X if and only if X has (say) a brain property G that realizes that belief property, that is, if and only if some such G bears the relevant subjunctive and other relations to other of X's properties.

A self-ascription process is one that generates X's self-ascriptive belief that X believes that p from X's belief that p. We can translate this into functionalist terms. Suppose that the brain property G realizes the belief that Caesar crossed the Rubicon - that functional property - in X; and suppose that the brain property H realizes X's self-ascriptive belief that X believes that Caesar crossed the Rubicon, that functional property. Then a self-ascription process is one that, given certain conditions, generates H from G and does the same for all (or most or . . . ) pairs of brain properties H and G that are similarly mentally related. ("Certain conditions" might mean, for example, a brain state I that realizes the functional property of turning one's attention to one's beliefs.) Such a self-ascription process maps nonsubjunctive local brain properties onto nonsubjunctive local brain properties. But it also thereby generates a state with the functional role of a self-ascriptive belief from a state with the functional role of the ascribed belief. (A functionalist theory of belief could well hold that it is an essential functional constraint on any pair of beliefs, one of which is a self-ascription of the other, that they be connected by some such process.)
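Loar's working formulation and the self-ascription process can be compressed into schematic notation (the notation and predicate names are ours, offered as a gloss rather than as Loar's own formalism):

```latex
% A functional property F holds of X iff X has some realizer G of F:
F(X) \iff \exists G \, \bigl[ \, G(X) \wedge \mathrm{Realizes}(G, F) \, \bigr]

% Given the attention-turning state I, the self-ascription process sa maps
% the realizer G of a belief onto the realizer H of its self-ascription:
\mathrm{sa} : (G, I) \mapsto H
```

On this gloss, sa operates only on local brain properties; the subjunctive content of F enters nowhere as an input or output, which is the point of Loar's reply to the combinatorial-explosion worry.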

This picture is similar to what Goldman refers to as "classical functionalism," which he rejects; and the self-ascriptive process is, I believe, of the type he counts as automatic. Goldman's reason for rejecting such processes is that they do not "use any criterion" or "involve the comparison of present information with anything stored in long-term memory" (sect. 6, para. 5).

Now things are complicated by the fact that the above process could involve qualitative properties. For the property G that realizes a given belief, that realizes that functional property, could itself be a complex qualitative property. Qualitative properties may (in my view) be identical with neural properties - which does not imply that one introspects them under neural descriptions. The self-ascription process then takes a neural/qualitative property G as input and yields H, also perhaps qualitative, as output. Would the qualitative property G thereby count as a "criterion" for the functional property, the property of being the belief that Caesar crossed the Rubicon? It is not clear to me how "criterion" is intended. In any event, this possibility is not among those that Goldman discusses. It is not the "second functionalist model" (sect. 4), for it does not involve learning correlations between qualitative and functional properties as phrased in subjunctive terms.

This raises an important point. That sentences of the form "believes that ___" have Ramseyfied functional explications does not imply that a self-ascription process has as input or output something in the head that is structurally isomorphic to a long conjunction of subjunctive conditionals. Even if we combine the Ramsey view with a language of thought view, a self-ascriptive belief - the output of the self-ascription process - could be just a mental sentence of the form "I believe that Caesar crossed the Rubicon." The Ramsey sentence would be entirely the product of philosophical explicative activity, pieced together from the conceptual roles of our mental ascriptions. Thus functionalism implies no combinatorial explosion.

Putting aside qualitative input, consider the bare functionalist self-ascription process. Given that this uses no criterion, Goldman says that "it would leave the process of mental-state classification a complete mystery" (sect. 6, para. 6). At this point I do not follow the argument. Goldman praises the representational functionalism (RF) model for at least attempting to do an honest bit of cognitive psychology. But, as he points out, the model is absurd, at least if it means that a process that tells one that one has a given belief does so by verifying a conjunction of subjunctive conditionals or the like. But in what way does the "automatic" self-ascription process shirk an essential explanatory task that the RF model gamely takes up? If my description of the abstract form of the self-ascription process is coherent, why not count it as explaining what needs explaining? I have not understood what constraint Goldman understands the normal practice of cognitive psychology to impose that is both reasonable and unsatisfied here.

One may, after all, describe the self-ascription process as explaining "personal access to functional information," if this is meant in the right sense. It cannot mean that something in the head registers functional information in the form of a conjunction of subjunctive conditionals - nothing like that is either input or output to the process. But it explains personal access to functional information in the following sense. Outputs of the process (such as H above) satisfy these two conditions: (i) They track certain functional properties of the inputs (like G above), such as being a belief that Caesar crossed the Rubicon. That is, they are reliable detectors of such functional properties - indirectly, by being caused by brain properties that realize those functional properties. They are not the result of computing information in subjunctive syntactic form. (ii) Those output properties - which in a language of thought view have a syntactic form that corresponds to "I believe that Caesar crossed the Rubicon" - themselves realize self-ascriptive belief properties, for example, believing one believes that Caesar crossed the Rubicon. This appears to explain "personal access to functional information" in the only sense it is reasonable to expect of a functional theory of belief. Why should more be required?

Suppose a Ramsey-style functionalism purports to be "wide." That is, it regards ascriptions of the form "believes that ___" as capturing externally determined propositional content, and it purports to explicate such ascriptions via functional-causal relations to external things and properties. Now it appears to me that such ascriptions are in fact wide, not because I hold some technical theory of content, but because of intuitions about the sensitivity of that-clauses to the shifting of external factors (cf. demonstratives, twin-Earth, etc.). But, Goldman suggests, there is a problem about self-ascribing such wide properties: "Cognizers seem able to discern their mental contents . . . without consulting their environment" (sect. 9, para. 2). Now intuitively it seems to me that I am authoritative about such properties: Without having consulted my environment, I have just noted that I believe that Caesar crossed the Rubicon. Of course I presuppose that Caesar and the Rubicon exist. Still, I know without having consulted my environment that, if they exist, I have that belief about them. And that implies authoritative knowledge of external causal relations. One sees how a worry arises, and I do not deny that there are interesting puzzles about self-ascribing wide content. (Indeed I have my own pet puzzle, and a solution: forthcoming!) But is there really a problem about one's detecting wide properties of one's beliefs just by virtue of having those beliefs - modulo only knowledge of existence? I find this quite unobvious. The viability of the above self-ascription model appears unaffected; nothing in the model turns on whether the properties realized by the brain properties G and H are themselves wide or narrow.

For all this, I agree with Goldman's thesis that internal, phenomenologically accessible content properties are central in psychology. But I do not see how these properties can avoid being functional, in part. Two examples: (a) That a thought is conditional - that is, has the form "if p then q" - is presumably an aspect of its internal content. But does this not imply some tendency to respond appropriately to the information that p, or that not-q, or that p & not-q, putting aside compartmentalizing? And is this not a functional fact? (b) Consider the content of my thought that chicken tastes best when roasted. Does ascribing that not imply that I also have appropriate additional beliefs about heat, food, phenomenal qualities, comparison? Any notion of mental content surely implies systematic connections; there must be some nonlocal constraints on internal content.

Alison Gopnik's argument that "believes" is theoretical rests on the fact that children go from having a conception of belief in which there is no room for falsity to a conception in which beliefs can be false. This development in understanding is counted as a theoretical development. I agree with Gopnik's thesis that "believes" is a theoretical term in her sense, which I take to be weak enough not to require that there be commonsense psychological laws. But I do not find it obvious that her experiments support this; indeed, the basic experiment might be interpreted phenomenologically. "These 3-year-olds," it might be said, "are very impressed with the phenomenology of their present beliefs, which serve as their paradigms. They can hardly regard their present beliefs as false; and given the paradigmatic status of such beliefs, the children cannot conceive of false beliefs, whether their own past beliefs or others' beliefs. Conceptual development occurs when they become aware of the difference between the phenomenological contents of their present beliefs and the external states of affairs that are presented to them in those beliefs, and that is what allows for falsity."

Is this point the one addressed in Gopnik's section 5.3, on "reality seduction"? If so, her answer is not clear. The contrasting results about imaginings of blue dogs and pink cats are consistent with the foregoing phenomenological interpretation; the phenomenology of a present belief is as of something real, and the phenomenology of imagining is not like that; this would explain the differences children exhibit with respect to belief and imagining. As for the experiment about belief sources, Gopnik does not make it clear why a change in the ability to identify sources is meant to show that the child's conception of belief is not phenomenological. Why should a child not have to learn that states of certain phenomenological types could have several different sources, or that certain phenomenologically identified states can come in degrees?

I found Gopnik's remarks about expertise compelling and clearly helpful in connection with problems about ascribing functional properties in the third-person case. But I feel skeptical about their explaining the first-person case. One's ability to self-ascribe a present belief of one's own is doubtless a more sophisticated ability than the ability to have an unreflexive belief. But that ability does not appear to me to be like the expertise of a chess master, for it seems too "automatic." Now of course the chess master's judgments appear automatic at the conscious level, and that is Gopnik's point, even though they rely on a subpersonal analysis of complex clues from the chessboard. The point is that one appears to be in a position to self-ascribe a present thought just in virtue of having it. Thus the move to the reflexive thought is automatic in a way beyond the chess master's: Unlike the chess master's subpersonal methods of sizing up the board, one's mechanisms for ascribing current thoughts do not look around for and analyze clues. At least, that is what I am inclined to say.

The fallibility of first-person knowledge of intentionality

Peter Ludlow and Norah Martin
Department of Philosophy, SUNY Stony Brook, Stony Brook, NY 11794
Electronic mail: [email protected]

[Gop] The central claim of Alison Gopnik's target article is that, contrary to our adult intuitions, we have no first-person knowledge of intentional states. The evidence for this comes from the study of children's reports of their mental states. The implicit argument in Gopnik's proposal is the following: If children have intentional states, they should have direct and infallible first-person experience of those intentional states. But in fact children are mistaken about their own mental states, so they cannot have first-person knowledge of their intentional states and therefore may not have intentional states.

Our primary concern with this argument is the questionable assumption that having first-person experience of a mental state implies infallibility. As Boghossian (1989) has argued, all legitimate forms of self-knowledge involve cognitive achievement. That is, all forms of self-knowledge require that an investigation take place. The phenomenology of first-person experience is not such that we immediately and completely apprehend the experience. To see this, consider our first-person experience of pain: Such experiences can be attended to more or less carefully. We might even be trained to observe certain features of our pains (e.g., whether they are shooting pains, throbbing pains, etc.). We should expect the same phenomenon to hold when we consider first-person knowledge of intentional states. If knowledge of our intentional states involves cognitive achievement, then we should expect that we can attend to and investigate our own intentional states with varying degrees of care - a view that has been held by advocates of intrinsic first-person intentionality from Freud to Searle (1990). In short, privileged access and authoritative self-knowledge do not imply infallibility or completeness. If we are right on this score, then it is natural to suppose that children will err in their understanding of their own mental states and indeed will err more often than adults, because they are less experienced investigators of self-knowledge.

We wish to emphasize that when we say children are less experienced investigators of self-knowledge, we are not saying that children are necessarily less adept at understanding task demands, or at reporting what they believe, or at recalling what they have experienced. Thus, in our view, the considerations raised in section 5 by Gopnik do not undermine the thesis that children are less experienced investigators of certain kinds of knowledge. In our view, first-person knowledge of intentionality is particularly complex and subtle, much more so than pretenses and images, and the difficulty of the task lies in the very nature of this kind of self-knowledge.

Apart from the case of children, Gopnik must explain away the intuitions of adults that they have direct perceptual access to intentional states. Against such intuitions, Gopnik (sect. 7) draws an analogy to the study of experts' intuitions that they have direct perceptually based experience of the nature of their expert knowledge. As a number of studies show, their expert knowledge has a long theoretical history. Gopnik, following a number of the studies, concludes that the experts fail to have direct perceptually based knowledge of the nature of their current expertise.

It seems to us, however, that the analogy to expert knowledge can be turned against Gopnik's central thesis. An illustration from work in knowledge engineering shows how. Typically, when one is building expert systems, one must acquire knowledge from experts in the field - domain experts. Most of the knowledge one must acquire is what knowledge engineers call "overlearned" or (drawing an analogy to compilers in computers) "compiled" knowledge - knowledge of rules that the expert does not consciously access in performing the task. Crucially, however, when called upon to do so, experts can be made to articulate the rules that underlie their competence in chess, machine operation, or whatever, although it may not always be easy for them to do so. There is actually a subspecialty in knowledge engineering, "knowledge acquisition," which seeks ways of eliciting compiled knowledge from domain experts (see Belkin et al. 1987; Greenwell 1990; Gruber 1989; Hoffman 1987; McGraw & Westphal 1990). Most methods of knowledge acquisition involve a kind of Socratic dialogue with domain experts; the key is not to invent new rules but to get experts to recall their actual domain knowledge more extensively (but see Compton & Jansen 1990 for a dissenting view here). In short, the case of expert knowledge could well serve as a model for cases like first-person knowledge of intentional states, in which one can introspect the nature of one's knowledge with varying degrees of care and detail.

In sum, Gopnik has assumed that first-person knowledge is immediate and infallible. Since children's self-knowledge appears to be neither, she concludes that they do not have first-person knowledge of intentional states. Perhaps a better conclusion would be that first-person knowledge of intentionality requires extensive investigation and is therefore neither immediate nor infallible.


Reporting on past psychological states: Beliefs, desires and intentions

Alfred Mele
Department of Philosophy, Davidson College, Davidson, NC 28036
Electronic mail: [email protected]

[Gop] Among the data to be explained by Gopnik's hypothesis that young children lack a representational model of the mind (RMM) are children's false reports about their immediately past beliefs, desires, and intentions. The hypothesis applies neatly to the false-belief reports, but its application to comparable reports about past desires and intentions is not made clear.

Three-year-olds, writes Gopnik, "think of belief . . . as a matter of a direct relation between the mind and objects in the world, not a relation mediated by representations or propositions. They think we simply believe x, tout court, just as, even in the adult view, we may simply see or want x, tout court, rather than seeing, wanting, or believing that x" (sect. 3.2, para. 2). This is a puzzling claim. When x is a person, believing x without believing that x is routine: I believe Ann when she tells me that she lives in Utah, but I don't believe that Ann (although I do believe that Ann lives in Utah). When, alternatively, x is a statement or proposition, to believe x is to believe that something is the case. Let x be the statement "Ann lives in Utah": believing x while not believing that Ann lives in Utah is unimaginable.

To grasp Gopnik's claim, then, one must look further. She contends that a view of belief typical of 3-year-olds "does not . . . allow them to understand cases of misrepresentation such as false beliefs or misleading appearances," and "makes it difficult to understand that beliefs may come from many different sources, that they may come in degrees, and that there are intermediate steps between the mind and the world" (sect. 3.2, para. 3). So perhaps her earlier claim can be put as follows: Three-year-olds view belief as directly and correctly mirroring "objects in the world." Their having such a view (rather than a representational model of belief) might help explain why they erroneously report that they believed all along that there were pencils in the candy box and that others would believe this too: If beliefs were always correct, no one would believe that the box contained candy.

Three-year-olds sometimes report falsely on their past intentions and desires, as well. Children who had started drawing a red ball but then complied with an experimenter's request to draw an apple reported that they had intended all along to draw the apple. Similarly, hungry children who ate crackers "until they were no longer hungry" subsequently said that they weren't hungry when they started eating. These children, now lacking a desire to eat crackers, in effect claimed that they had no such desire earlier either. Both cases provide analogues of children's false reports that they believed all along that there were pencils in the candy box.

The belief reports are allegedly explained by the 3-year-olds' viewing belief as correctly mirroring reality. If the children who said that they intended all along to draw an apple held the comparable view that reality correctly mirrors intention - or, less figuratively, that all intentions are executed - a parallel explanation of their intention reports could be offered. On the assumption that all intentions are executed, the young artists never intended to draw a ball: If they had so intended, they would have drawn a ball.

Gopnik reports, however, that even 2-year-olds "understand . . . that desires may not be fulfilled" (sect. 3.1, para. 15) and that 3-year-olds construe intentions as desires. So the latter presumably understand that "intentions" may not be fulfilled. If the 3-year-olds' viewing belief as correctly mirroring reality explains their false-belief reports in the candy box study, and they don't similarly view reality as correctly mirroring intention, why, one wonders, do they make the false-intention reports in the drawing case, and how is the alleged absence of an RMM supposed to explain those reports?

Gopnik's brief remarks on desire promise some illumination (she says less about intention). "The absence of a [RMM]," she writes, "might also make it difficult for children to appreciate the intentionality of desire; the fact [fact f] that objects are desired under a description and that desires may vary as a result of variations in that description" (sect. 3.2, para. 4). Does this help explain the false reports about earlier intentions and desires (or hunger)? Two observations are in order.

First, it isn't obvious that an appreciation of desire's description-sensitivity is required for understanding that, for example, although one doesn't want crackers now, one might have wanted some earlier. To be sure, a representation of one's crackers as dry and flavorless is less stimulating than a representation of them as crisp and tasty; but why can't a child who doesn't understand facts of this kind nevertheless appreciate that, at different times, one can have very different attitudes toward very similar objects (e.g., the crackers available earlier and the crackers available now)?

Second, it isn't clear how appreciating that desires may vary as a result of variations in descriptions can help one understand that one's not currently desiring behavior under an apt description like "eating crackers" is compatible with one's earlier having desired behavior under the same description. Arguably, what the children need to understand in order to avoid believing (falsely) that they didn't want to eat crackers earlier is that, at different times, one can have very different attitudes toward very similar objects (or toward similar descriptive contents). And this understanding is not directly provided by an appreciation of fact f.

There is much more to having an RMM, of course, than appreciating fact f. My question for Gopnik is this: What differences between a 3-year-old's model of the mind and an RMM do account for the false reports about past desires and intentions? Alternatively, in virtue of what features of an RMM is the frequency of such mistakes reduced once children possess a model of that kind? Plausible answers would significantly strengthen Gopnik's case for the hypothesis under consideration.

ACKNOWLEDGMENT
This essay was written during my tenure of a 1992/93 NEH Fellowship for College Teachers.

Knowledge of the psychological states of self and others is not only theory-laden but also data-driven

Chris Moore and John Barresi
Department of Psychology, Dalhousie University, Halifax, Nova Scotia, Canada B3H 4J1
Electronic mail: [email protected]

[Gol, Gop] Essential to Gopnik's thesis of the illusory nature of first-person knowledge of intentionality is the claim that our initial understanding of the intentionality of psychological states occurs in parallel for self and other, and that such understanding is theoretical. As such, Gopnik's target article is essentially an argument for the theory-dependence of observation in the case of intentional states attributed to both oneself and others. We believe that in focusing on the theory-dependence of understanding intentionality Gopnik underplays the equally important issue that theories themselves are built on data. What are the data on the basis of which young children build a theory of the intentionality of both their own and others' psychological states? Gopnik is unclear on this critical point. She suggests (sect. 8, para. 2), "First we have psychological states, observe the behaviors and the experiences they lead to in ourselves and others, construct a theory about the causes of those behaviors and experiences that postulates intentionality, and, then, in consequence, we have experiences of the intentionality of those states." According to this account, the data result from the psychological states that people have. The problem for this view, however, is that no account has been given of how theory builders know that the data they have from their own psychological states can be interpreted in terms of the same conceptual entities as the data they have from the psychological states of others.

To elaborate, experience of the consequences of psychological states, whether intentional or otherwise, differs radically depending on perspective. Take the simple psychological state of someone seeing an object, for example. The datum that is most obvious to theory builders when they are in such a state is the phenomenological experience of the object, the "first-person perspective" (FPP). The existence of the self as observer is not an essential part of this information. In contrast, the datum that is most obvious to the theory builder when observing another person in a similar state is the head/eye orientation of that person, the "third-person perspective" (TPP). The existence of the object of the psychological state is not an essential part of this information. How, then, could the theory builder possibly know that these two pieces of information are in reality instances of the same psychological state? For theory builders to understand that the experiences they have when seeing an object in fact relate to the same kind of event as when another person's gaze is oriented in a particular direction, there has to be some mechanism that provides the knowledge that both self and other exist in similar psychological relations to objects.

So how is the knowledge that self and other exist in similar psychological relations with objects constructed? We suggest (see Barresi & Moore 1992) that the existence of certain forms of social interaction in infancy such as joint visual attention, imitation, and empathy allows matching of psychological states in self and other. These behaviors can be thought of as simulation in action, in that actions on the part of one or other participant involved in a social interaction bring about a matching of the psychological relations that both participants share with some object or event in the world. As a result, the young infant has available at certain times information from the FPP and TPP of corresponding psychological relations. For example, in the case of joint visual attention, infants observe another's head/eye orientation (TPP), and in turning themselves, they have available the psychological experience of seeing the object (FPP). In this way, the TPP of another co-occurs with the FPP of the self. These two pieces of information may then be combined into a representation of the psychological relation involving both first- and third-person aspects but without its being associated exclusively with either the self or the other. Thus, from the point of view of the theory builder entrenched at this level of understanding, there would be no understanding that both self and other exist in separate psychological relations with objects or events. To achieve the latter understanding, another form of simulation is required - one that occurs in imagination. To understand the psychological state of another when observing that person's TPP, theory builders must imagine the appropriate FPP for the other; for this purpose they must use the self as a model. Conversely, in order to understand the psychological states of self when having some psychological experience, theory builders must imagine the TPP of the self; for this purpose they must use others as models.

To summarize, examination of the data available to the theory builder for understanding the psychological states of both self and other indicates the need for a form of simulation in psychological understanding. It is important to point out, however, that we advocate an account of simulation that is somewhat different from the others that are available (e.g., Goldman 1989; Gordon 1986, 1992c; Harris 1991). The typical view of simulation, against which Gopnik arrays the evidence, is that it is asymmetric in that self-knowledge is used to understand others. According to our account, not only must the theory builder imagine the FPP in order to understand the psychological relations of others, but in addition, and of equal importance, the TPP must be imagined in order to understand the psychological relations of self. The account is therefore consistent with the findings reviewed by Gopnik showing that the understanding of intentionality, and indeed even of simpler forms of psychological understanding, develops in parallel for self and other.

On the other hand, this view of theory acquisition implies an account of the "illusion of first-person knowledge" of intentionality different from that offered by Gopnik. It is not merely that we are typically more of an "expert" about ourselves than about others (cf. Gopnik, note 10), but rather that we attribute intentional states to self and other based on qualitatively different kinds of data (i.e., the FPP for ourselves contains more elements of "psychological experience" than the TPP for others, which is more behavioral). The illusion occurs because, as Gopnik is surely right in arguing, our act of attributing intentionality is not recognized as such. Instead, it inherits the quality of the data on the basis of which the attribution is made, and thus the intentionality presents itself as transparent in the case of the self but typically inferred in the case of the other.

Mismatching categories?

William Edward Morris and Robert C. Richardson
Department of Philosophy, University of Cincinnati, Cincinnati, OH 45221-0374
Electronic mail: [email protected] and [email protected]

[Gol] Goldman vacillates between two importantly different versions of "folk psychology" and "analytic functionalism." He maintains that the "chief constraint on an adequate theory of our commonsense understanding of mental predicates is . . . that . . . it should be psychologically realistic," explaining that a theory's "depiction of how people represent and ascribe mental predicates must be psychologically plausible" (sect. 2, para. 2). And he says, "What is at stake is the ordinary understanding of the language of the mental" (sect. 3, para. 1). These and numerous other passages are critically ambiguous between (A) a reading that has as its focus what ordinary people actually do - how they actually ascribe and use mental-state terms - and (B) a reading that focuses on what ordinary people think they are doing - what they reflectively think about ascribing and using mental-state terms. Neither (A) nor (B) serves the goals of "cognitive science."

(B) is a possible reading of Goldman, given passages which imply (B), such as his discussion of Churchland's (1979) eliminativism and his application of Block's (1978) Chinese nation example. Such a reading regards "folk psychology" as the "theory" that ordinary people rely on when asked to reflectively characterize mental states. This is similar to the way that "folk physics" has been used to characterize the "theory" that ordinary people rely on when asked to reflectively characterize why or how certain events occur in the physical world.

(B) is not what Goldman should want. It is irrelevant to what he regards as the "central mission" of cognitive science, namely, "to reveal the real nature of the mind" (sect. 1, para. 1). Though it is possible that ordinary people might come up with a reflective account of the "real nature" of the mind, this is no more likely than that they come up with a reflective account of the "real nature" of the physical world. Moreover, determining the nature of "folk psychology," so understood, would provide no independent reason for thinking that it is true - that it describes the "real nature" of the mind. Finally, it is implausible that any ordinary account of the ascription of mental-state terms would include anything like the category-matching scheme Goldman adopts. In this scheme, self-ascriptions of mental-state terms are based on matching instance representations (IR) - representations of one's own current mental states - with category representations (CR) - the distinctive semantical representations associated with a mentalistic word.

But what of (A)? If (A) is what Goldman intends, then it is appropriate to ask: How well does this account meet Goldman's own standards of adequacy? Is it psychologically realistic? Is it psychologically plausible? We think there are serious questions about Goldman's theory when it is measured by these standards. The distinctive feature of Goldman's account, and the central component in his critique of functionalism, is the view that

1. Category matching is the basis for using mental terms and determines their representational or semantic content.

In applying this account of category matching to self-ascriptions of sensation terms, Goldman argues that his "qualitative approach" is compatible with this "basic framework for classification" (sect. 5, para. 8). Goldman "hypothesizes" that CRs and IRs "for each sensation predicate are representations of a qualitative property" (sect. 5, para. 1). It follows that:

2. CRs are exclusively qualitative for first-person ascriptions.

But for ascriptions of the same sensation concepts to others, CRs cannot be qualitative. Goldman's methodology (sect. 2) requires that in discovering the CR associated with a word, any assignment of a CR must be consistent with what IRs are available to cognizers. For third-person ascriptions, this can't include qualitative features. So:

3. CRs are nonqualitative for third-person ascriptions.

But this creates a radical asymmetry between first- and third-person ascriptions of the same sensation terms:

4. CRs for first- and third-person ascriptions of sensation terms must be fundamentally different.

The "semantic content" of sensation words must be radically disjointed in these two applications. Goldman might accept this consequence, but the asymmetry is unattractive. It presents a problem that any account of mental concepts should attempt to avoid. Theoretically, Goldman could abandon any of the three commitments which give rise to (4). But since he can't reject (3) without abandoning the category-matching model, he must give up (1) or (2).

There are two ways Goldman might weaken (2). One possibility makes the qualitative component merely sufficient for category matching:

(2') Qualitative properties are sufficient for category matching and are criterial for first-person ascriptions.

(2') allows that functional or behavioral properties could also be sufficient. In any case, because nonqualitative properties are critical in third-person ascriptions, they must be sufficient even if they are not necessary. We suspect that this gives more to the functionalist than Goldman would willingly allow.

Another possible weakening of (2) makes the qualitative components necessary for category matching:

(2") Category representation is essentially qualitative for first-person ascriptions.

(2") is weaker and more plausible than (2). But it does not help Goldman. If (2") means that matching of category and instance must essentially include a match of qualitative features, then the problematic asymmetry remains, because third-person ascriptions do not include such features. If (2") means that qualitative features are essential for any category representation but allows that the matching between CR and IR need not include a match of qualitative features, then it reduces to (2'), which (again) gives the functionalist more than Goldman would probably admit.

Can Goldman abandon (1), the category-matching model? Goldman evidently assumes (in sect. 2) that any "psychologically realistic" approach must be consistent with his category-matching model. To give up (1) would be for Goldman to give up his main vehicle for attacking functionalism in the first place. At the same time, he would lose the central motivation for his own account: The category-matching model is its fulcrum. We think that the category-matching model and its several variants warrant a harder and more detailed look than Goldman attempts here.1

NOTE
1. We thank the Taft Faculty Board of the University of Cincinnati for their generous support of our research.

Heuristics and counterfactual self-knowledge

Adam Morton
Department of Philosophy, University of Bristol, Bristol BS8 1TB, England
Electronic mail: [email protected]

[Gol, Gop] Their rhetoric would make you think that Gopnik and Goldman were arguing for opposite conclusions about adult self-knowledge, which depends for Gopnik on the application of an explicit theory and for Goldman on the application of specific capacities to form representations of one's own states of mind. But the rhetoric is misleading. The two views are not so opposed.

Goldman makes a convincing - I would say conclusive - case for saying that functionalist theories cannot tell us anything very informative about how people manage to know things about themselves. One crucial reason is that the connections between states which are crucial to such theories add up to a pretty intractable theory, one which puts severe combinatorial obstacles in the way of drawing simple one-attribute conclusions. Now it could be that functionalist theories need to be augmented with accounts of self-ascription, or on the other hand parts of them might need to be replaced for this purpose. Goldman argues for replacement: The facts about self-ascription are incompatible with functionalism. His crucial argument is in section 6: According to functionalist theories, people in a state S acquire the belief that they are in S, and this is part of the concept of the state S. But, in his account, in order to know that you are in S you have to match your experience (IR) with your concept of S (CR). But then if both are right, in order to believe that you are in S you will have to establish first that you believe that you are in S. You can't get started.

To see how this might be wrong, go back to the insight about the unmanageability of folk psychology as construed by functionalism. That is in a way an attractive feature. It entails that people will find it hard to learn how to apply folk psychology to themselves and to others, thus fitting the observation that children take a long time becoming standard users of folk psychology. And it suggests that some of the strategies that are used to tame unmanageable scientific theories may be found in vernacular psychology too. In particular one could expect simplified theories and what I have elsewhere called "mediating models" (Morton 1993). The simplified theories could be both about feelings and about behaviour in this case. This would fit with Strawson's (1959) insight that the mental concepts we have in common sense are ones we are able to ascribe to others on the basis of their behaviour and to ourselves on the basis of our own experience. The behavioural simplifications would say what a person in a given state would do in given circumstances. So to know whether you are in S you might proceed by discovering what you would do in various circumstances. In particular - when S is a propositional attitude - you might wonder whether you would assert or defend some sentence s or be elated on hearing that s. Equally, for some S one might wonder whether one's experience at the moment fitted one's representation of some experience associated with S.

The point is that we can ascribe on the basis of heuristics and approximations. (And we can, indeed surely do, learn mental concepts in terms of them, on our way to picking up our culture's full concept of mind. I think this fact, together with the use children can make of adults' telling them they are in some state, overcomes the problems Goldman finds for RF at the learning stage in his sect. 4.) Moreover, heuristics and approximations can give us concepts of mental states, what Goldman calls CRs.

Now return to the argument in section 6. Suppose that some form of functionalism is right and that part of the full specification of some state S is that people in S believe that they are in S. The concept of S, the CR, may be based on something less than the full functional specification. You may ascertain that that heuristic CR matches your IR, and because you believe that that CR is a good-enough approximation, you conclude that you are in S. If you are sophisticated, you realise this is a defeasible conclusion. One thing that might defeat it is the realisation that the reasoning you just went through caused you to have a new belief that S. But if you were really in S you would have believed it all along. For some S and some people in some circumstances this seems perfectly possible.

Gopnik's target article also raises issues about heuristics and counterfactual self-knowledge. In section 4 she reports on work that suggests 3-year-olds misreport their beliefs of not long before. The details are striking and fascinating. They suggest that small children have an understanding of mental states very different from that of adults, and Gopnik's diagnosis of the difference is that adults have mastered the complex theory of folk psychology, whereas children have not. Is this a plausible diagnosis?

Skills as well as theories have to be learned and can be learned more or less well. Two skills are particularly relevant here. One is that of knowing what one would have done in given circumstances. As I argued above, this skill allows one to use behavioural approximations to a full folk-psychological theory. And in the case of the three-year-olds, it seems particularly relevant. Few recent writers have thought that people have direct introspective access to their beliefs. Rather, one has limited and fallible knowledge of what one would say and do, which one can use to ascribe beliefs to oneself (see Dennett 1978b). The 3-year-old children in the experiments reported by Gopnik are clearly not very good at saying what they would have said a short time before. Lacking that skill, they have to try to read their former beliefs off their former situation, resulting in an interesting symmetry with their ascriptions to others.

The other relevant skill is selectively suppressing one's own knowledge in predicting the psychological state of another person in given circumstances. The difficulties small children have with false-belief problems show that this too is not a skill to take for granted.

Though skills are clearly missing here, it is far from obvious what they are. They might just be the skills of understanding the adult folk-psychological theory. I doubt this, for children growing up in our culture pick up from their elders many complex theories particular to our culture and science. Many 3-year-olds know that everything is made of atoms and that dinosaurs once roamed the earth. The equally many adult beliefs they are not capable of understanding are usually marked by the fact that they require some cognitive or imaginative skill beyond that of simply absorbing syntactically packaged information. In fact, the evidence that small children attribute states of mind to themselves very differently from adults is, if anything, evidence that what is used by adults is not simply a theory. Something hard-to-acquire is needed in order to understand it.

Thus, Goldman has not shown that theory-theories exclude self-knowledge and Gopnik has not shown that failures of self-knowledge entail them. In fact, the idea of a folk-psychological theory is not clear enough for more than very general rhetoric. As Goldman notes in his section 10, different writers mean very different things by "having a theory of mind." It seems just about obvious that we do not understand ourselves in terms of an explicit theory that can be understood by the use of only purely general cognitive abilities. And in spite of Gordon's (1992d) heroic defense of the idea, very little of our understanding can be completely independent of any culturally based concepts and beliefs. Rather, I would suggest, we acquire theories that are shaped and constrained by many specific skills. The slowly acquired skill of knowing what one would do in hypothetical circumstances is one such skill, as are the loose family of capacities one can gather together as simulation. We should not ask, "Do we use a theory or an atheoretical skill?" Instead, developmental and cognitive psychology should ask "What kind of theories are found here? What constrains them and what skills are required to understand and make use of them?"

Developmental evidence and introspection

Shaun Nichols
Department of Philosophy, College of Charleston, Charleston, SC 29424
Electronic mail: [email protected]

[Gop] Sellars (1956) maintained that we use a psychological theory rather than introspection to determine our own mental states. Since then, social psychology has provided the central empirical battleground for the Sellarsian hypothesis (Ericsson & Simon 1980; Nisbett & Ross 1980; Nisbett & Wilson 1977). In her target article, Gopnik rightly notes that developmental psychology might be an equally revealing arena for investigating how we determine and report our beliefs. Indeed, developmental psychology seems particularly well suited to testing the Sellarsian hypothesis. However, the evidence Gopnik offers does not support the hypothesis.

Gopnik argues on the basis of developmental evidence that self-attribution of beliefs is informed by our folk psychology and not by direct introspection. As I read her, Gopnik's argument goes as follows:

Premise 1: Developmental evidence suggests that children (and, by extension, the rest of us) use a psychological theory and not introspection to determine and report their immediately past beliefs.

Premise 2: The span of introspection includes one's immediate past.

Conclusion: The developmental evidence suggests that we use a psychological theory rather than introspection to determine our beliefs.

One obvious Introspectionist response is to quarrel over Premise 2, claiming that the span of introspection is shorter than Gopnik supposes. The Introspectionist may then concede that we use a psychological theory to reconstruct our own immediately past mental states yet maintain that we use introspection to determine and report our own present beliefs.

Because she focuses on reporting past mental states, Gopnik's evidence is too indirect to convince the Introspectionist. However, Gopnik's hypothesis would also make predictions regarding present belief reports. If children use a psychological theory in determining and reporting their present beliefs, then we should expect their reports of their own present beliefs to parallel their reports of others' present beliefs. Since the claim is that they rely on their theory to accomplish both tasks, we should expect their performance on both tasks to be comparable. However, some of the results Gopnik cites on understanding sources of beliefs suggest that this is not the case (Wimmer et al. 1988a).1 In these studies the children apparently attribute a belief to themselves without having the theoretical apparatus required to attribute the belief to another in the same circumstance. In Wimmer et al. the children were asked:

Q: Do you know what is in the box or do you not know that? (p. 383)

Three-year-olds performed significantly better on this question than they did on the question about another's knowledge:

Q': Does [name of other child] know what is in the box or does she [he] not know that?

Assuming that correct responses to Q follow from genuine second-order beliefs, the Wimmer et al. evidence suggests that the children use some extratheoretical resource to determine and report their own present beliefs. For it is difficult to see how children could have determined what they believed given only their inadequate psychological theory.

Perhaps we can reject the assumption that the 3-year-olds' correct response to Q follows from a genuine second-order belief. One might argue that the child's saying, "Yes, chocolate," in response to Q is merely the product of the child's first-order belief. That is, the response is an unreflective act, not a genuine report of one's belief. One problem with this claim is that Q and Q' are syntactically identical. The only differences between Q and Q' are the substitutions for the pronouns in Q. Hence it seems ad hoc to maintain that the children are reporting the beliefs of others in responding to Q' but not reporting their own belief in responding to Q.

Gopnik is surely right to draw our attention to developmental psychology as a largely untapped source for research on the role of introspection in determining one's beliefs. However, the evidence she provides does not support her general claim that we use a psychological theory rather than introspection to determine our own beliefs. In fact, some of her evidence suggests just the opposite for our own present beliefs.

NOTE
1. These results need further confirmation. A number of studies challenge the research on children's understanding of sources (e.g., Wellman 1990). However, Gopnik herself accepts the Wimmer et al. results and has corroborated them to some extent (Gopnik & Graf 1988; O'Neill & Gopnik 1991). Furthermore, Gopnik relies on the Wimmer et al. results to argue that we do not introspect the sources of our beliefs. In the present context, then, it seems appropriate to appeal to these results.

The role of concepts in perception and inference

David R. Olson and Janet Wilde Astington
Centre for Applied Cognitive Science, Ontario Institute for Studies in Education, Toronto, Ontario, Canada M5S 1V6 and Institute of Child Study, University of Toronto, Toronto, Ontario, Canada M5S 1A1
Electronic mail: [email protected] and [email protected]

[Gol, Gop] The central problem for both Goldman and Gopnik is the relation between beliefs about the mind, specifically beliefs about belief, desire, and intention, and one's introspectively available phenomenological experience. Both agree that we experience our mental events in terms of the concepts represented in our folk theory of mind. The problem is the relation between these concepts and the phenomenological experiences represented by this set of concepts.

For Goldman, the relation is quite direct. Mental events such as beliefs and desires have properties or qualities analogous to pains and tickles that are directly experienced. Instances of these experiences are summed to form a concept of the state, which may then be referred to by such terms as thinking or intending. The process is not essentially different from forming the concept of a cup or a flower.

The question, of course, is the introspective access to these states and qualities. Goldman claims that their accessibility is what makes the learning of concepts for representing mental states possible, in the same way that perceptual accessibility of shape and size makes possible the learning of concepts for cup or flower. Consequently, understanding one's own mental states and ascribing mental states to oneself is unproblematic; ascription of similar states to others is achieved by simulation, a kind of "mental rotation" of mental states (Shepard 1978).

Four objections come to mind, two logical and two empirical. First, to claim that mental states such as beliefs have phenomenological properties or qualia is a simple act of faith. Pains may indeed have such perceptible properties, but beliefs and intentions seem not to. The problem is this: Beliefs, unlike pains, tickles, and even pretense, are perspectival, which is to say that while John's pain remains John's pain whether described by John or Sam, John's belief may be true for John but false for Sam. Even true beliefs lack this perspectivalism, for true beliefs correspond to the way the world is, a world that is common to all viewers. Consequently, the only secure basis for attributing an understanding of belief, or of attributing introspective awareness of a belief, is in understanding that the belief is, or at least could be, false. This is a far cry from pains and tickles.

Second logical point: Goldman argues that the ascription of beliefs to others is done by simulating the other's state on the basis of one's own. But, as mentioned, the only definitive evidence for ascribing belief occurs in the case of ascribing false belief. Yet one's own beliefs are never introspectively available as false beliefs, so how could false beliefs ever be ascribed to others? That is, how could one see in another what was never experienced in one's self?

First empirical point: As Gopnik points out, it is well established empirically that when children do come to an understanding of false belief, at about 4 years of age, these false beliefs are as readily ascribed to oneself as to another. If mental-state concepts are built from introspectively available qualities, one would expect them to be recognized in the self and only later ascribed either by inference or simulation to others. They are not; mental concepts are as readily applied to others as to self. Hence, the evidence weighs against the direct accessibility of one's own beliefs and intentions.

Second, if understanding of mental states derives from introspective access to one's own states, the understanding of other minds becomes a serious, perhaps unresolvable problem. If children solve false-belief tasks by simulation, one would also expect them to be able to report the other person's self-reflection, or the emotion the person would experience in that simulated state. The evidence suggests they cannot (Hadwin & Perner 1991; Perner & Howes, in press). Perception and ascription appear to depend heavily on the set of available concepts.

The aspect of Goldman's account that seems inescapable is that children do experience the emotions of surprise, pleasure, and disappointment on the basis of anticipations long before they acquire a theory of mind; and those experiences are, we agree with Goldman, absolutely fundamental to acquiring a theory of mind. Without that first-person experience to provide meaning for the mental-state concepts they are acquiring, such learning would be impossible. This points to the weakness in Gopnik's theory-theory argument.

Gopnik argues, following Quine (1956) and Churchland (1984), that theory of mind is just that - a theory analogous to any other theory, such as the theory of mechanics or the theory of planetary motion. Once the theory of mind is acquired, it is applied inferentially to both self and others. Once one is sufficiently practised in making the inference to one's own mental states, the illusion of direct introspective awareness is created.

Gopnik's proposal is that our beliefs about our own mental states, specifically, our belief in their intentionality, come from the same source as our beliefs about the intentionality of other people's states, namely, our theory of mind. She argues, contrary to common sense and the simulation theory, that the intentionality of mental states is not discovered introspectively through first-person experience. The apparent immediacy of introspective knowledge is the result of practice.

The theory has two strengths. It accounts for why it is noeasier to ascribe false beliefs, for example, to oneself than to

BEHAVIORAL AND BRAIN SCIENCES (1993) 16:1 65

Page 66: A Gopnik - How We Know Our Minds

Commentary/Gopnik/Goldman: Knowing our minds

others; both involve inference to the best explanation of talk and action. Second, it preserves the critical point that mental states are introspected in terms laid down by the available set of concepts, the theory-theory. Introspection, like ascription to others and reconstruction of one's own past, is mediated by the available set of concepts. No concepts, no introspectively available mental states.

The weakness of the theory-theory view is that it gives no special place to first-person experience. Above we argued that without experience of failed, fulfilled, and shared expectations and the accompanying emotions, children could never acquire the critical set of mental concepts. Indeed, this seems to be the preferred explanation of why autistic children fail to develop a theory of mind. They appear to manage other "theories" but not the theory of mind. Mental concepts are not like other concepts, contra both Goldman and Gopnik.

We conclude with a few suggestions as to how our understanding of these issues may be further advanced. One way is to pay closer attention to the role of concepts in perception. Gombrich (1960) long ago pointed out that perception is served up to consciousness in terms of the concepts available. Perception, he said, is like entering information into a form or formulary - if there is no slot available for certain kinds of information, it is just too bad for that information. Putnam (1981) made the same point in his discussion of internalism, namely, that perception is always in terms of available concepts. That, we suggest, is why introspection is governed by concepts, just as ascription and remembering are. But the fact that the perception of our own mental states is mediated by concepts does not make our knowledge of those mental states inferential, or the directness of our experience of those states illusory.

Second, we suggest that a more promising approach to the problem of perception and ascription of mental states was set out some time ago by Sellars and Strawson. Sellars (1963) offered an account, in his celebrated "myth of Jones," of the relation between first-person reports of mental states and the ascription of such states to others. Sellars imagines our "Rylean ancestors," who, without any knowledge of the mind, came up with the idea that people actually had ideas and beliefs. His "myth" was that Jones, who had access only to the talk and actions of his fellows, could, by adopting a theoretical stance toward such evidence, come up with the simplifying theory that quite diverse utterances or actions could be explained by assuming that they were products of a smaller set of beliefs and desires. Having found them appropriate for characterizing the talk and actions of others, he discovered that such a theory was useful for characterizing oneself as well (Sellars 1963, pp. 188-89).

This "myth" fits the developmental data quite well; children understand others' mental states just as well, or just as badly, as they do their own. The data are compatible with the claim that introspection relies upon the same concepts that are used for ascription.

Sellars suggests, rather, that adopting theoretical categories for self-reports is what makes such concepts as thinking intersubjective; self-reports presuppose this intersubjective status. Second, he points out that in self-reports "one is not drawing inferences from behavioral evidence" (p. 189) but rather the relation between behavior and inner states is built into the "very logic" of these concepts.

That inner logic was set out more precisely by Strawson (1964), who focused on the peculiar nature of mentalistic concepts. Rather than constructing an abstract theory of behavior and then learning to apply it to persons, including the self, Strawson suggested that in learning the concepts in the first place, the learner must assign quite different cases to the concepts. The state ascribed to the other theoretically must apply equally to the feeling or state directly experienced by the self. Thus, the very concepts that apply to others by inference apply to oneself directly. This view would give a proper place to feelings of empathy and intersubjectivity in the acquisition of

these concepts while also acknowledging the critical role of concepts in introspection.

Third, the issue as to whether understanding the mental states of others is done by simulation or by theory application may not be resolvable. Simulation requires some model of entities and relations; the theory-theory attempts to specify what those entities and relations are. The only difference is that simulation implies more or less continuous representations, whereas the theory-theory specifies categorical ones. The undecidability arises from the fact that adding enough properties and relations to a categorical model makes it indistinguishable from a continuous one. If calibrated finely enough, digital clocks become indistinguishable from analogical ones. This battle has already been fought in cognitive psychology over the issue of the transformation of mental images.

We have attempted to tread a middle path, arguing that whereas Goldman attributes too much to phenomenological qualities, Gopnik attributes too little. The problem may be more fruitfully addressed in terms of the more general role of concepts in perception and inference.

A plea for the second functionalist model and the insufficiency of simulation

Josef Perner
Laboratory of Experimental Psychology, University of Sussex, Brighton BN1 9QG, England
Electronic mail: [email protected]

[Gol, Gop] Psychologists who have been embracing the label "theory of mind" may start wondering whether by using this term they meant to have been buying into the "representational functionalism" (RF) for which Goldman raises three difficult problems (sect. 3). As an alternative, he suggests that our understanding of mental states is not based on functional properties but on intrinsic properties by which we directly perceive our own mental states.

As a somewhat reluctant user of the label "theory of mind" I am quite open to Goldman's suggestion, but I nevertheless find the compromise proposal that Goldman dubs the "second functionalist model" (SFM) more appealing. It differs from the basic RF in that it allows individual instances of mental states to be identified by intrinsic properties whose relationship with the defining functional properties was established in a learning period. Goldman, however, argues that this move to intrinsic properties does not solve the problems of the basic RF. SFM still suffers from the same afflictions because of the learning period. Goldman does not argue this point in detail, however.

When one tries to apply his three criticisms to the SFM, his case becomes less convincing.

Problem 1 (sect. 3, para. 6). "Self-ascription of headache occurs in the absence of any information . . . about relevant causes or effects." Clearly, individual instances of mental states are not identified by functional properties as RF requires. But there is nothing in this observation that makes it impossible - as required by SFM - that recognizing intrinsic properties as a headache depends on learning the relationship between intrinsic and functional properties.

Problem 2 (sect. 3, para. 7). Goldman points out correctly that instances of mental states could also not be identified by their subjunctive properties because such properties are hard to come by. But again, if one is given time in a learning period, testing for such properties - as SFM requires - does not seem out of the question.

Problem 3 (sect. 3, para. 8). Because type identity for token mental states depends on the type identity of their relata, of which some in turn are mental states whose type identity depends on the type identity of yet other mental states, and so on, there looms a combinatorial explosion of complexity, which is


obviously too large to be solved when identifying particular instances. Yet when it comes to learning these relations the implications are less obvious. Since we do not know how much complexity we can cope with over the years (in acquiring language it seems a lot), and because we do not know how complex a functional network of mental states we actually represent in our commonsense "theory of mind," complexity is not a knockdown argument against the SFM.

Having rescued the SFM from Goldman's criticism, I feel less need to follow him on his suggestion that intrinsic properties of one's own mental states are the primary basis for our understanding of mental concepts. I am wary of this claim because it leaves no room for conceptual development. By linking intrinsic properties with functional properties, SFM provides a sensible possibility for such development. Learning about new functional relations may change how we perceive things. Of course, the basic truisms of our "theory of mind" seem unchangeably clear. But Gopnik's well-chosen notion of the "illusion of expertise" is helpful here. Things seem so obvious regarding whether we know, think, or want something because we are such experts on these aspects of the mind. For the young child this may not be the case, however, and Gopnik describes some experimental findings that indicate that children have to learn to conceptualize their own mental states.

It is still early days, however, and I am inclined to agree with Goldman that the evidence is rather weak. My problem with these data is that the main paradigm for investigating children's understanding of their own mind relies on their memory for the content of mental states. Gopnik and Slaughter (1991) investigated several different states and found a scale of difficulty. However, this may not reflect conceptual difficulties posed by different types of state but rather a mere memory problem, according to the following two principles: (1) If the content corresponds to actual past situations, it is easy to remember; (2) if the content is a nonexistent situation, it is easier to remember if intentional effort was required in creating it.

Following these principles, we see that memory of a true belief, which reflects an actual past state of the world, is easy; memory of a pretend content is easy because conscious effort is required in creating it; but memory for a mistaken belief is difficult because it does not correspond to a real situation and was created automatically without conscious effort. In line with this explanation, Mitchell and Lacohée (1991) succeeded in improving children's memory for false belief by making them reflect on their automatic belief at the time it was held.

With my colleagues (Hutton et al. 1991) I have tried to disentangle children's ability to remember the content of their mental states from their understanding of the particular type of mental state with which they held that content. The data show that before 4 years of age children cannot clearly differentiate whether they acted in an unusual way because they pretended or because they mistakenly believed something to be the case. So I think clearer evidence for Gopnik's position will be forthcoming.

A further motivation to hold on to the SFM and not adopt Goldman's position comes from the difficulties it creates for explaining how we understand other minds. Goldman offers simulation (also known as role- or perspective-taking) as the solution. I am not inclined to believe in this remedy. My reasons (some outlined in Perner 1991a, Ch. 11; Perner & Howes 1992) are best summarized according to three categories of cases:

1. Possible and sometimes used (e.g., emotions): To figure out how somebody feels, we sometimes resort to simulation by imagining ourselves in a similar situation and observing which emotions this evokes in us. Even this is not pure simulation, however, because we have to make a "theoretical" decision about which aspects of the other person's situation (presumably functional properties of the to-be-simulated emotion) to include in our imagined situation. In other words, "theoretical" knowledge of functional properties is a prerequisite for simulation.

2. Possible but introspectively implausible (e.g., beliefs): If we are told that Maxi didn't see how his chocolate was unexpectedly transferred, we could use simulation: I think of myself as Maxi putting the chocolate in a particular place (blocking out any information about its unexpected removal from this place afterwards): Where will I look for the chocolate? Answer: where I put it! Correct. Although we could solve the task this way, I just feel that I never do. My intuition is sharpened by considering second-order beliefs as, for example, used by Perner and Howes (1992): Mary and John agree that he will put away the chocolate. In her absence John does so, but then, unbeknownst to both of them, it is transferred. If we now ask Mary, "Does John know where the chocolate is?" what will she say? To answer this question by simulation we would have to simulate John within a simulation of Mary. I find this almost impossible to do, yet I have little difficulty answering the question.

3. Plainly impossible (e.g., perceptual states): Another person looks at the same scene as I do but from a different vantage point. How can I figure out what that person sees? I cannot do it by simulation, that is, by assuming I sat in that position and then letting my visual system mock-operate on these hypothetical assumptions. Unlike our emotions, our visual system does not respond to imagined conditions.

For these reasons I think that simulation does not solve the problem of other minds for Goldman, and so I think it is a good thing that his criticisms of RF do not discredit the SFM, which accordingly remains a viable and plausible alternative.

Finally: How to be critical of Gopnik's target article? This is difficult when one shares roughly the same outlook and when she has put the basic case quite convincingly. So I resign myself to finding some additional evidence in order to strengthen her case for the theory-theory.

As Goldman points out, most psychological accounts of "theory of mind" in the animal (Premack & Woodruff 1978) and the developmental literature (Gopnik 1988; Wellman 1990) use a rather weak meaning of theory. In particular, they omit what is considered characteristic of theories in the philosophical literature, namely, the requirement that a theory contain the formulation of laws, that is, counterfactual-supporting generalizations. In fact, there is evidence that even children form a theory of mind in this stricter sense of the term "theory." I have pointed out (Perner 1991, Ch. 10) that one hallmark of theories is that one can distinguish the possible from the impossible, which is just a simpler way of saying that counterfactual-supporting generalizations must be made. Good examples of this ability can be found in children's understanding of the necessity of informational access for the acquisition of knowledge. When there is no conceivable way a person who knows something could have attained the necessary information, children are puzzled and put effort into resolving the puzzle. They persist in their questioning until some possible origin of the alleged knowledge emerges. So there is evidence that children entertain a theory of mind in even the strong sense of "theory."

First-person authority and beliefs as representations

Paul M. Pietroski
Department of Philosophy, McGill University, Montreal, Quebec, Canada H3A 2T7
Electronic mail: [email protected]

[Gol, Gop] There are cognitive and subjective aspects of first-person authority (FPA). The beliefs of an agent A concerning what A believes are, in general, considerably more likely to be correct than the beliefs of other agents concerning what A believes. Moreover, it seems to agents that they often know what they believe "directly" or "without mediation," and this


appears to distinguish self-ascriptions of belief from similar ascriptions to others. Ideally, theories of belief ascription should help explain these facts; at a minimum, such theories must be compatible with these facts. As Goldman suggests (sect. 10, para. 9), Gopnik's account of the subjective aspect of FPA is unconvincing. She owes an explanation of why we lack "expertise" in applying our folk-psychological theory to others. Moreover, defenders of the "theory-theory" must speak to the cognitive aspect of FPA. The Rylean proposal that agents-qua-theorists have more evidence about themselves is unsatisfying in this regard. There is little reason for thinking the requisite evidence is available to, or accessed by, subjects reporting on their beliefs. Goldman's account fares better. If agents have a subjective and "immediate" experience of their own thoughts, but only of their own thoughts, both aspects of FPA can be explained tidily. But I worry whether Goldman's strategy might be a case of theoretical overkill: Must we appeal to qualia to explain how we ascribe mental states to ourselves? I suggest that in both target articles, the assumption that beliefs are representations is confusing, rather than clarifying, the relevant issues. Moreover, as Davidson (1984; 1987; 1989b) suggests, FPA may tell against this assumption.

Let RTM (Representation Theory of Mind) be the thesis that humans have internal states that represent states of affairs and that these states figure prominently in the etiology of (at least some) human behavior. Representations are, it seems, semantically evaluable, about other things, and often - though not always - caused by the things they are about. Beliefs have these (peculiar) properties as well. So the idea that beliefs are representations of the sort posited by RTM is attractive. To have a belief about one's own belief, in this view, is to have a representation of one's own representation. Goldman adopts this position explicitly: "What an M-belief is, according to our framework, is the occurrence of a match between the CR for M and an appropriate IR" (sect. 6, para. 3). Gopnik's stance is more complicated, as she wants to distinguish the mental entities that psychologists seek to describe from mental experiences. But only the former explain behavior in her view, and the philosophers Gopnik sides with in advocating the theory-theory are precisely those who hold that beliefs (if such there be) are internal representations.

Nonetheless, other writers - for example, Davidson (1987; 1989b), Dennett (1991b), Stalnaker (1984), Putnam (1988) - insist that agents have beliefs, accept RTM, but remain skeptical of the idea that beliefs are representations. Moreover, if beliefs are representations, the cognitive aspect of FPA can be rendered as follows: If A has a representation R whose content is that p, then in general, it is (considerably) more likely that (i) ∃R* [A has R* & R* has the content that A believes that p] than (ii) ∃R* [B has R* & R* has the content that A believes that p], where B ≠ A. For reasons that Goldman provides in sections 3 and 4, functionalists - and not just "representational" or "analytic" functionalists - will be hard pressed to explain why this should be so. But it is not clear that (i) is more likely than (ii); and if (i) isn't more likely than (ii), we have good reason to reject the assumption that beliefs are representations. Perhaps if (i) is more likely than (ii), Goldman's model is attractive - although one wants to hear more about what will count as correctly re-identifying qualia over time. But before we invoke qualia here, we should be sure that the problem has been cast perspicuously. We ascribe mental states to ourselves, and we believe that we have mental states. But nothing about representation yet follows. One might think, regardless of what beliefs are, that agents clearly represent themselves and others as believers, and that verbal ascriptions of mental predicates must be the effects of such internal representation. But this is far from obvious. For one thing, agents might sometimes - I emphasize, sometimes - internalize assertions that are not readily characterized as the verbalizations of internal states with the same content. That is, we might think some things are so because we say they are so,

and not vice versa. I think Davidson and Dennett at least renderthis hypothesis palatable for restricted domains.
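The asymmetry behind the cognitive aspect of FPA can be displayed more explicitly. This is only a sketch of the rendering given above, using R as a neutral stand-in symbol for a representation token:

```latex
% Cognitive aspect of FPA, on the assumption that beliefs are representations:
% claim (i) is, in general, considerably more likely than claim (ii).
\begin{align*}
\text{(i)}\quad  & \exists R^{*}\,\bigl[\, A \text{ has } R^{*} \;\wedge\;
  R^{*} \text{ has the content that } A \text{ believes that } p \,\bigr] \\
\text{(ii)}\quad & \exists R^{*}\,\bigl[\, B \text{ has } R^{*} \;\wedge\;
  R^{*} \text{ has the content that } A \text{ believes that } p \,\bigr],
  \qquad B \neq A
\end{align*}
```

Pietroski's point is then that if (i) is in fact no more likely than (ii), the representational rendering of FPA fails, and with it a reason for taking beliefs to be representations.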

Let us put aside belief, for a moment, and consider FPA with respect to claims: In general, the claims of an agent A concerning what A claims are more likely to be correct than the claims of other agents concerning what A claims; and agents often think they have some kind of special knowledge of the content of their own claims. For example, I am more likely to correctly report the contents of my own claims than those of Russian speakers. But even with speakers of the "same language," the possibility of idiolectic differences ensures the kind of asymmetry that characterizes the cognitive aspect of FPA. I may be unsure that you mean what I mean by "elm," "liberal," "honest," and so on. But in general, I will not be unsure that I mean what I mean by terms in my idiolect: I know that by "liberal" I mean liberal. There is nothing problematic about this kind of "authority," which simply reflects the fact that I use my words to say what I mean by my words. I do bear a special epistemic relation to the contents of my claims. But we need not posit "claim-qualia" or expertise to explain this; all we need assume is that I speak my idiolect.

Could a similarly "deflationary" account of FPA for beliefs, where these are not internal representations, be correct? Consider a familiar analogy: Objects have weights in virtue of their mass and the local environment; and to ascribe a weight to an object is to relate the object to a number, for purposes of comparing objects along a common metric: A weighs twice as much as B and half as much as C. But weights are not object-internal entities. Similarly, perhaps agents have beliefs in virtue of their psychological makeup - where this makeup includes the representations posited by RTM - and the local environment; and to ascribe a belief to an agent is to relate the agent to a proposition, for purposes of comparing agents along a common metric: A believes what B believes and what C denies. It doesn't follow that beliefs are agent-internal entities, representational or otherwise. In Davidson's account, belief-ascribers use the contents of their own mental states as the propositions which form the basis for comparison. For example: Donald believes what I believe, but Jerry denies this.

In this kind of view, the correctness of A's ascription of a belief to B depends, not on B's having a certain representational entity, but on there being a "suitable similarity" between B's psychological makeup and A's; where specifying the relevant respects of similarity is to provide a theory of belief - no easy task on anyone's account. But we might get FPA "for free" on this approach, as a byproduct of the special case in which A = B. To correctly ascribe a belief to myself in such a view would be to correctly say (or think) that my psychological makeup is similar to my psychological makeup in certain respects. Such ascriptions will, of course, be correct in general. But they will not be infallible: I may not use all of my beliefs as the background against which I assess the beliefs of others; my beliefs may be inconsistent; there may be pockets of delusion, because my mental life is "fragmented" (see Stalnaker 1984). But to be a believer at all, according to Davidson, requires the (substantial) mental stability presupposed by intentional explanation. Davidson never says, however, that such stability is required for representational systems. I do not suggest that this account of FPA is completely satisfactory. As Goldman points out (note 6), current deflationary accounts face difficulties. But I think we should explore the prospects and limitations of this kind of account before we invoke qualia - or anything else, for that matter - to solve problems we don't yet know the extent of.

Finally, perhaps 3-year-olds are representational systems, but not full-fledged believers. If 3-year-olds lack the mental stability required for agenthood (and their behavior suggests they might), the question of what, precisely, 3-year-olds believe about their beliefs (or anything else) may be out of place - just as it may be out of place to ask what, precisely, a frog or a puppy believes. The psychological makeups of puppies and small children may be dissimilar enough to those of adult humans that


the practice of belief-ascription begins to break down. Perhaps, then, we should construe Gopnik's version of the theory-theory as a proposal about how we represent our mental states in so far as we do, and explicitly not as a proposal about beliefs. For again, accounts of representation are not obviously constrained by the facts about FPA noted at the outset. Perhaps we do have a representational capacity characterizable as a "theory of mental state ascription" that we apply indiscriminately to ourselves and others. Gopnik presents some evidence for thinking this is the case; and clearly something interesting is going on in that fourth year. But if the theory-theory is plausible on this construal, that is just one more reason to think that beliefs are not representations.

Limitations on first-person experience: Implications of the "extent"

Bradford H. Pillow
Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260
Electronic mail: [email protected]

[Gop] When Daryl Bem wrote, "To the extent that internal cues are weak, ambiguous, or uninterpretable, the individual is functionally in the same position as an outside observer, an observer who must necessarily rely upon those same external cues to infer the individual's inner states" (1972, p. 5), the key word was "extent." Knowing the extent of conscious access to informative cues about psychological functioning would tell us something about the extent to which knowledge of one's own mind is similar to or different from knowledge about other minds. The same applies to Gopnik's argument about the limitations of conscious experience and the development of knowledge about mental states. If the mind were entirely transparent to itself, then self-knowledge would be quite different from knowledge of others. At the other extreme, if there were no introspective access of any sort, then knowledge of self and other would be identical. Both would be based on observations of overt actions. There is, of course, a large middle ground, and Gopnik's position appears to fall somewhere in that middle ground.

Whether Gopnik's claims that (a) people know their own minds in the same way that they know the minds of others and (b) children do not acquire knowledge of mental states through firsthand psychological experience necessarily follow from the assumption that conscious access to underlying states is limited depends crucially upon the extent of that limited access. Even granting that there is no direct conscious experience of intentional states, as long as conscious experience is at all informative about those states, self-knowledge differs from knowledge of other minds. Gopnik allows that there is direct access to some first-person evidence and indicates that psychological states lead to psychological experiences. Also, psychological states, such as beliefs, are presumably influenced by psychological experiences, such as percepts. Because psychological experiences appear to be related to intentional states in some systematic way, they could be informative. Nevertheless, Gopnik asserts that theoretical knowledge does not come from these experiences and that 3-year-olds' experiences are "wrong."

A more subtle possibility is that these "wrong" experiences are in fact an informative basis for the conceptual understanding of belief that Gopnik designates as theoretical. The characterization of 3-year-olds' experience as "wrong" stems from evidence that children of this age fail to report their own past false beliefs accurately, but instead report what they currently know to be true. They do, as Gopnik points out, report other past psychological states (experiences?) accurately, including their own past true beliefs. Three-year-olds' ability to report past true beliefs indicates that they are able to report on, though not necessarily

experience, past intentional states. Their reports fail to be accurate when their past beliefs conflict with their current beliefs.

The distinction between children's reports and children's experiences is important. Children may be able to recall their previous experience, as suggested by their accurate reports of past true beliefs, but fail to report them when questioned. This failure may reflect an inability to reconcile the conflict between their past and present experiences. Gopnik, along with Flavell (1988) and Perner (1991b), may well be right that reconciling such conflicts becomes possible when children, around age 4, conceive of these experiences as alternative representations of the same object. That is, a concept of mental representation may enable children to coordinate past false and present true beliefs.

Assuming that the concept of representation plays this important role, the next question concerns how this concept might be acquired. According to Gopnik's account, it does not come from experience. However, although the concept of belief as a potentially mistaken representation may not be directly specified by first-person experience, this construct may be based on such experience. That is, experiencing conflict between expectations and actual states of affairs may provide the data that lead to a representational understanding of belief. Gopnik emphasizes the parallels between children's difficulty reporting their own false beliefs and their difficulty inferring the false beliefs of others. These parallels are intriguing, but they do not rule out the possibility that the understanding of one's own mental states mediates the understanding of others. Contrary to a common assumption, the claim that firsthand experience provides a basis for understanding does not necessarily imply a lag between understanding self and other. Instead, this understanding may generalize almost immediately from self to other.

To determine the relation between knowledge of self and other, and to decide what developmental story to tell about the acquisition of such knowledge, the extent and informativeness of first-person experience needs to be defined theoretically and assessed empirically. Characterizing experience as a "Cartesian buzz" is insufficient. Moreover, young children appear to have some potentially informative experiences. Although Gopnik views 3-year-olds as unable to report the sources of their own knowledge, in her experiments 3-year-olds performed above chance. When O'Neill and Gopnik (1991) used a straightforward two-choice question, 3-year-olds responded correctly 70% of the time. This evidence does not suggest a qualitative difference between 3-year-olds and older children. Gopnik relies on verbal reports as evidence of the extent of children's first-person experience, but there is a substantial metacognitive literature using more subtle and sensitive measures. For example, Cultice et al. (1983) reported evidence that preschool children experience the "feeling of knowing," and Flavell et al. (1981) found that children showed nonverbal signs of detecting comprehension problems at earlier ages than they were able to report comprehension difficulty. Such evidence indicates that children's experience of their own minds includes information not available about other minds. The issue, then, is not whether there is any potentially informative first-person experience, but what sort of experience is available and how it contributes to development. In addition to evaluating the utility, or limitations, of any single source of information in isolation, developmental accounts will eventually need to specify how first-person conscious experience, observations of overt action, and verbal references to mental states jointly lead to the acquisition of mentalistic concepts.

BEHAVIORAL AND BRAIN SCIENCES (1993) 16:1 69


Commentary/Gopnik/Goldman: Knowing our minds

Representational development and theory-of-mind computations

David C. Plaut and Annette Karmiloff-Smith

Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213-3890, and MRC Cognitive Development Unit, London WC1H 0AH, England
Electronic mail: [email protected] and [email protected]

[Gop] Cognitive developmentalists often jump on bandwagons. This was the case with Piagetian "conservation" in the 60s and 70s. Hordes of experimental variations on a theme swamped the developmental literature, with little or no theoretical interest other than to lower the age at which correct performance on conservation tasks could be demonstrated (see Karmiloff-Smith 1981 for critical discussion). The latest hot topic of the developmental 80s and 90s - theory of mind - stemmed from Premack and Woodruff's (1978) seminal BBS target article on the chimpanzee, together with the clever experimental adaptation to human children by Wimmer and Perner (1983). Every developmental journal and conference is now permeated with theory-of-mind experiments. Fortunately, however, this time there are some notable theoretically oriented exceptions to the usual experimental bandwagon, and these have made it possible to explore the biological versus cultural underpinnings of children's knowledge, the modular status of theory-of-mind representations and their impairment in autism, and the status of theory-of-mind knowledge as an explicit theory in the child's mind (Baron-Cohen 1991b; Baron-Cohen et al. 1985; Frith 1989; Gopnik & Wellman 1992; Leslie 1987; Perner 1991b). Gopnik's present target article is another step in this important direction.

Gopnik argues that children's understanding of their own beliefs (and other intentional states) is mediated by the same internal representations as their understanding of the beliefs of others, and hence that there is nothing "privileged" or "direct" about access to first-person intentional states. She goes on to claim that these common representations take the form of an explicit theory that is directly analogous to a scientific theory. This "theory-theory" (Churchland 1984; Gopnik & Wellman 1992; Stich 1983) is contrasted with a "simulation theory" (Goldman 1989; 1992a; Gordon 1986; 1992c), in which the intentionality of psychological states is gradually developed through experience. This dichotomy leads Gopnik to interpret the developmental evidence as indicating "a dissociation between the psychological states that cause the children's behavior and their sincere conscious report of their psychological experience" (sect. 6, para. 6). Implicit in this view is a claim that the psychological states of children are the same as those of adults; children simply have difficulty in accessing and reporting those states before age 4. Indeed, Gopnik explicitly endorses this claim in describing children's psychological states: "these creatures [children] are not just like us, they are us . . . a few years ago" (sect. 6, para. 8).

We do not fundamentally disagree with Gopnik's general thesis that common representations subserve theory-of-mind processes for both first- and third-person states. However, in this commentary we conceptualize changes in children's theory of mind (as well as changes in expertise) as involving representational change rather than more efficient access. We believe that this provides a more productive view on the nature of development in this area. Furthermore, let us be clear that we endorse the view that theory-of-mind computations involve second-order or meta-representations (Leslie 1987), that they find their roots in the proto-declaratives of infancy (Baron-Cohen 1991b; Gomez 1991), that the propositional attitudes they involve find an early behavioral manifestation in toddlers' pretend play (Leslie 1987), and that the capacity for meta-representation is specifically human (Karmiloff-Smith 1979a; 1979b). We claim, however, that the developmental pattern of success and failure on theory-of-mind tasks can be better understood in terms of progressive refinement and elaboration of the representations that underlie intentional states, rather than in terms of discrete stages of improvement in access to fully specified representations.

In support of her position, Gopnik places great stress on the fact that 3-year-olds can successfully solve a problem in which past knowledge is pitted against present knowledge as long as the past knowledge involves not mental states but physical states. Thus, 3-year-old children can succeed in an experimental setting where an object is hidden in location A, subsequently moved to location B, and is in B at the moment of questioning about its previous location (the physical location task). Three-year-olds, however, fail if mental states must be computed, such that one protagonist thinks an object is in location A while the child knows it to have been moved to location B (the mental location task). Children seem unable to attribute the false belief of location to the other protagonist's mind until they are 4 years of age. This has led several authors to invoke a fundamental change in theory of mind at age 4.

There are other data, however, that suggest 3-year-olds can attribute false beliefs to other minds under certain conditions and that 4-year-olds cannot under other conditions. In particular, when given the same theory-of-mind task but solely in verbal form (i.e., the child does not see where objects are hidden or moved to but is merely told so), 3-year-olds succeed (Zaitchik 1991). By contrast, when given the task purely in visual form with no verbal input - for example, in the form of a silent film (Norris & Millan 1991) - 4- and 5-year-olds find the task difficult, and it is only at close to 6 years that they succeed.

It is difficult to interpret this developmental pattern of performance in terms of differential access to equivalent beliefs. If 4-year-olds can access beliefs that are generated from both visual and verbal input (in the standard form of the false-belief task), why should they fail when given only visual input? How can 3-year-olds succeed with only verbal input when they fail on the standard task with both visual and verbal input? Clearly, more is needed to explain the results of these different experiments than simply differentiating between physical and mental location tasks.

Theory-of-mind tasks require the ability to simultaneously represent conflicting information: the protagonist's (or one's own) belief about a past situation and the current true situation. We believe the developmental results are best interpreted in terms of increasing capability in using and generating symbolic representations that are sufficiently well elaborated to override the otherwise compelling interpretations generated by direct experience. Furthermore, language is central to theory-of-mind processes precisely because it provides particularly effective "scaffolding" for symbolic representations. Critically, it requires less cognitive sophistication to merely maintain and use symbolic representations provided by others (in the form of verbal description) than it does to generate the appropriate representations oneself.

Three-year-olds succeed at the physical location task because they need only maintain a representation of a past physical situation, not a belief about a past situation. They succeed at the verbal-only mental location task because the experimenter provides the necessary symbolic encoding in verbal form and there is no direct perceptual experience generating conflicting representations. However, they fail at the standard theory-of-mind tasks because their rudimentary symbolic representations, even when verbally generated by the experimenter, are insufficient to override an experience-based interpretation of the true situation.

We should also explain why 3-year-olds, and even younger children, can engage in elaborate pretend play (Leslie 1987). Although it involves symbolic representations or meta-representations that conflict with current perceptual experience (e.g., a block that stands for a car), the demands on the symbolic representations per se are rather meager. In particular, during play the world is manipulated so that more complex relationships among real objects are consistent with the relationships among pretend objects.

Four-year-olds are able to perform the standard false-belief tasks because their symbolic representations are sufficiently developed to compete with the experience-based ones, so long as they are supported by external verbal description. But 4-year-olds have difficulty with the purely visual condition because they cannot yet spontaneously generate on their own the necessary symbolic representations from visual input.

In sum, we claim that theory-of-mind processes depend critically on symbolic representations to override more perceptual ones. Development involves progressive elaboration of these representations rather than simply efficient access to preexisting representations.

Gopnik suggests that what distinguishes first- from third-person intentional inference is analogous to what distinguishes experts from novices. Ironically, we find this analogy a rather useful one while disagreeing as to the nature of the progression in both. Indeed, we invoke the same arguments as above, and view the transition from novice to expert in terms of the development of more elaborate and effective representations (see VanLehn, 1989, for a review) rather than more rapid and efficient access to the same representations, as Gopnik suggests.

Finally, although we have taken as our point of departure Gopnik's criticism of simulation theory, we subscribe neither to it nor to the theory-theory in their current form. Rather, the sort of representational development we have outlined in this commentary is likely to exhibit characteristics of both: the gradual development of intentionality through experience, as suggested by the simulation theory, and the increasingly systematic inferences supported by symbolic representations, as suggested by the theory-theory.

Matching and mental-state ascription

Ian Pratt
Department of Computer Science, University of Manchester, Manchester M13 9PL, England
Electronic mail: [email protected]

[Gol] Alvin Goldman has presented us with an argument purporting to show that ordinary people's grasp of mental-state concepts cannot be understood as their having a theory specifying the causal-functional roles definitive of those mental states. The argument is conducted within a general framework in which the application of a concept C to an individual in a judgement is assumed to involve the matching of a representation of the individual to another representation, stored in long-term memory, which encodes the definitive features of the concept C. My strategy in this commentary will be to sketch, with recourse to a little fictional psychology, how concepts like belief and desire might be implemented in a reasoner. We shall see that, on the scheme I propose, the reasoner's judgements about his current beliefs and desires need not be the result of any sort of matching process.

First, suppose that for our reasoner, S, to believe that P is a matter of the tokening, in a suitable part of his brain - let us call it S's "belief-box" - of a certain (physical) type of data-structure A_P. We shall say that, for S, A_P encodes the proposition P. Similarly, let us suppose that for S to desire that X is a matter of the tokening, in a suitable part of his brain - let us call it S's "desire-box" - of a certain (physical) type of data-structure A_X, where A_X encodes, for S, the proposition X; and so on for all propositional belief-contents and all psychological modalities.

I now suggest a way for S to encode propositions about his own current psychological states. To encode "I believe P," take the data-structure A_P encoding the proposition P, and tag it with some special symbol denoting the modality of belief. By "tagging," I mean literally altering the physical properties of A_P in some small way, so that it is easy to recover, if necessary, the original from the tagged version. To encode the propositions "I believe Q," "I believe R," just tag the corresponding data-structures A_Q, A_R with the same symbol denoting belief. Similarly, to encode the proposition "I desire X," take the data-structure A_X encoding the proposition X, and tag it with some special symbol denoting the modality of desire. To encode the propositions "I desire Y," "I desire Z," just tag the corresponding data-structures A_Y, A_Z with the same symbol denoting desire. Let us denote the above tagged data-structures by A_P^1, A_Q^1, A_R^1, A_X^1, A_Y^1, A_Z^1, and so on. For S to believe "I believe P" is then a matter of S's having a token of A_P^1 in his belief-box; for S to believe "I desire X" is a matter of his having a token of A_X^1 in his belief-box; and so on.

Given this rather simple-minded representation scheme, it is easy to see how S might monitor his mental states: Monitoring, in this view, is just tagging. Notice that S does not match (representations of) his mental states against functional descriptions of state-types: He just takes the data-structures that come into his belief- and desire-boxes (or whatever) and tags them. Now, at this point, the reader may be wondering what right I have to describe these tagged data-structures as encoding (for S) propositions of the form "I believe that P," "I desire that X," and so on, for such a description will be illegitimate unless these data-structures participate in a certain range of inferences and other cognitive activities. That is correct: But the very point I want to make is that it is quite plausible that such tagged data-structures as A_P^1 and A_X^1 should participate in the requisite inferences and other cognitive activity to qualify them as representations with contents "I believe that P" and "I desire that X," respectively.

First of all, consider how S might come to beliefs about the sorts of circumstances that cause him to have beliefs and desires of various sorts. If S can monitor when he is in this or that psychological state - say, the belief that P - then he can, in principle, notice the conditions under which those psychological states tended to arise in the past. He can thence induce generalisations about the typical aetiology of those psychological states. In addition, S can assume that when it comes to psychological states the situation with other people will be much as it is with himself. If S has found, for example, that chancing on a cat strolling across his path in broad daylight usually causes him to believe that there is a cat in front of him, then he can assume that similar beliefs will be so caused in others.

How, then, might S encode propositions about the psychological states of other persons? Well, just as the proposition "I believe that P" is encoded by a tagged version A_P^1 of the data-structure A_P, so let the proposition "T believes that P" be encoded by a differently tagged version ^T A_P^1 of the same data-structure, where this time the tag incorporates a pointer to S's concept of the person T in question. The point about such an encoding is that it facilitates the process of S's generalisation about the causes (and effects) of his own beliefs to the case of other people. Under the suggested encoding, that process is essentially one of systematically transforming data-structures of the form A_P^1 to suitable data-structures of the form ^T A_P^1. In short: It is a process of tag rewriting.

It has often been suggested that reasoning about others' mental states might proceed by a kind of simulation. Suppose, for example, S has inferred that T believes P, Q, R and that T desires X, Y, Z, and suppose S wishes to know how that psychological state will develop. Here is one possibility. What if S imagines that P, Q, R were true and that X, Y, Z were desirable? In so doing, he would simulate the mind of one who really does have these thoughts. Can this sort of psychological reasoning be supported if beliefs about the psychological states of others are encoded using the tagged data-structures suggested above?




Extending our fictional psychological model, we might suppose that such an act of imagination involves setting up a "pretend" belief-box containing the data-structures A_P, A_Q, A_R and a "pretend" desire-box containing the data-structures A_X, A_Y, A_Z. (Recall that a tokening of A_P in S's belief-box would amount to his believing P, and so on.) In addition, S must copy into these pretend belief- and desire-boxes the data-structures corresponding to those of his background beliefs and desires which he assumes to be shared by T. Having cloned a suitable set of pretend beliefs and desires, he then lets those data-structures interact with each other and with his standard inferential machinery to yield new data-structures (corresponding to new "pretend" beliefs and desires), which he can then observe using his usual introspective powers as detailed above.

Again, the crucial point here is that the suggested encoding scheme fares well when it comes to such simulations. For suppose S encodes the above propositions about T's beliefs and desires using the tagged data-structures ^T A_P^1, ^T A_Q^1, ^T A_R^1 (T's beliefs) and ^T A_X^1, ^T A_Y^1, ^T A_Z^1 (T's desires), as suggested. Then, for S, the task of constructing a model of T's beliefs and desires is merely one of inserting, into his auxiliary belief- and desire-boxes, the untagged versions of those data-structures: namely A_P, A_Q, A_R and A_X, A_Y, A_Z. And, as we have said, systematically untagging data-structures is, by hypothesis, a simple operation. Similar remarks of course apply to encoding, as conclusions about T's likely thoughts, the results of the simulation: that, for S, is just a matter of tagging interesting data-structures produced in the simulacrum with labels indicating that they are the likely future beliefs of T, and of inserting the tagged data-structures into his own belief-box.
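The tagging scheme sketched in this commentary can be rendered as a small toy program. The following Python sketch is illustrative only (the names `Tagged`, `tag`, `retag`, and `untag` are mine, not Pratt's, and a string stands in for the physical data-structure A_P): self-ascription is tagging, ascription to T is tag rewriting, and setting up a simulation begins by untagging.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Tagged:
    modality: str            # e.g. "belief" or "desire"
    ascribee: Optional[str]  # None = first person ("I"); else a pointer to T
    content: str             # stands in for the underlying data-structure A_P

def tag(content: str, modality: str, ascribee: Optional[str] = None) -> Tagged:
    # Monitoring, on this view, is just tagging: no matching against
    # functional descriptions of state-types is involved.
    return Tagged(modality, ascribee, content)

def retag(t: Tagged, other: str) -> Tagged:
    # Tag rewriting: transform "I believe P" into "T believes P".
    return Tagged(t.modality, other, t.content)

def untag(t: Tagged) -> str:
    # Recover the original data-structure, e.g. to place in a "pretend" box.
    return t.content

# S self-ascribes a belief, generalises it to T, then builds a pretend box:
mine = tag("P", "belief")            # encodes "I believe P"
yours = retag(mine, "T")             # encodes "T believes P"
pretend_belief_box = {untag(yours)}  # the bare "P", ready for inference
```

The design point the sketch makes is the one argued in the text: every operation is a local rewrite of a tag, and at no stage is a state matched against a stored functional definition of belief or desire.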

To be sure, this account of how the concepts of belief and desire might function in S's thought can at best be a poor caricature of the real thing. But I hope to have made at least believable the possibility that such concepts may be so encoded that judgements about one's own current mental states do not involve the sort of matching process that Goldman assumes as part of his general framework. Thus, I agree with Goldman that ordinary self-ascription of current mental states cannot be understood as involving a process of matching against representations encoding a battery of definitive causal-functional properties; but that, I claim, is because ordinary self-ascription of current mental states should not be understood as involving a process of matching at all.

Theory-theory theory

Howard Rachlin
Department of Psychology, State University of New York at Stony Brook, Stony Brook, NY 11794-2500

[Gol, Gop] About the theory-theory of Gopnik and the anti-theory-theory of Goldman, my grandmother (folk-psychologist extraordinaire) might have asked: What are these theories? Where are they? When did they appear? and, most important (for her), Why were they published? and, last and least, How were they concocted? Like Aristotle before her, my grandmother was most interested in why questions (final causes) and only secondarily in how questions (efficient causes). So in reading Gopnik's article my grandmother would have perked up when she came to the section entitled "Why I am not a behaviorist." But she would have been disappointed. Instead of an answer that would have appealed to her, like "So my students can get jobs," she would have read: "First we have psychological states, observe the behaviors and the experiences they lead to in ourselves and others, construct a theory about the causes of those behaviors and experiences that postulates intentionality, and then, in consequence, we have experiences of the intentionality of those states" (sect. 8, para. 2). Gopnik has given an unsatisfying (to my grandmother) how answer masquerading under an interesting (to her) why rubric.

Folk psychology is the game being played here. Goldman wants us to listen to our inner voices (or at least our "mental words") and Gopnik wants us to "listen to children." I prefer to listen to my grandmother. Here are my questions and her answers (through a medium): Q: What are these theories? A: They are the ideas of these nice psychologists and philosophers. Q: What do they mean? A: What do you mean, what do they mean? You want me to say that their true meaning is Gopnik's "experiences of intentionality" and Goldman's CRs, IRs and Ms? Listen to me, sonny boy. If you want to find out more about these theories, their "true meaning," don't try to psychoanalyze these nice people. They already have enough troubles. Go look at their other books and articles, go look at what their teachers have written, go look at the interesting data they (Gopnik at least) collected and analyzed or, if you want to take a short cut, read the other commentaries. Q: But grandmother, where can I find the theories themselves? A: Where are they? Right in front of your nose, dummy. On the paper in front of you. Oh, I know you. You would like to avoid writing your commentary on the grounds that the real theories are hidden in the minds of Gopnik and Goldman, their private knowledge, and that the paper in front of you is only a pale reflection of those inaccessible theories. So why bother to comment? You can't fool me. Now get to work and write your commentary. Q: And grandmother, when did these theories come into being? A: They came into being (as you put it) when they were written. What do you want me to say - that Gopnik, when she sat down at her word processor, was just transcribing her experiences of intentionality (and Goldman, his Ms)? - that they did the hard work of inventing their theories first, before they wrote them down? Then why did they find it so difficult to sit down at those word processors? Why would they rather go for a nice walk in the park? And you, Howard dear, why do you have to force yourself to work and why do you erase so much? When you see it on the page - then you have a theory. Before that it's nothing. Don't tell me about those theories in your mind. Your grandfather, may he rest in peace, used to tell people he could sing like Caruso - and so he could - until he opened his mouth.

But enough. It should be obvious that one person's folk psychology is another's convoluted theory. My grandmother's folk psychology was entirely behavioristic. People like my grandmother ascribe mental terms to the patterns of behavior of other people and to themselves not as disinterested reporters of some theatrical performance in their heads but because by doing so they get along better in the world. The turn-of-the-century (the last century) biologist Herbert Jennings believed that if an amoeba (a predator) were the size of a whale, we would best avoid being eaten by talking about the amoeba's mental state. Albeit with less vivid motivation, the same goes for the ascription of mental states to other people and ourselves. The reason Gopnik repeatedly bends over backward to deny being a behaviorist is that her data so strongly suggest some such behavioristic conclusion.

I cannot characterize what is learned by children between 3 and 4 years old any better than Gopnik's characterization: a learning to perceive other people's mental states and one's own mental state. What makes Gopnik a mentalist is not her belief that children learn to use mental terms. (No one can deny that they do.) It is not even that children form theories about their own minds and those of other people. Contrary to Goldman, Gopnik's data suggest that they do form such theories. The difference between my behaviorism and Gopnik's mentalism lies in our differing conceptions of what those theories consist of, why they are formed, where they occur, and when they occur. According to Gopnik, children's theories consist of their internal experiences, they are formed as disinterested descriptions of interior mental states, they occur in the child's private mental arena, and they occur prior to their mere exhibition in the child's overt behavior. According to me (following my grandmother), they consist of overt discriminations between complex behavioral patterns (those of other people and the child's own), they are formed in response to environmental demands ("contingencies of reinforcement," "affordances"), and they occur, just as Gopnik's own theory occurs, in overt, temporally extended behavioral actions at the time, or rather over the period of time, during which those actions (and their environmental records) occur.

Of course, this is not to say that there exists a good behavioristic theory-theory theory, still less to present one. (Rachlin, 1993, does present a behavioristic theory-theory.) However, I did attempt a behavioristic account of pain (Rachlin 1985), a mental state often held not to be amenable to even a functional, still less a strictly behavioral, account. Goldman refers to that target article, but not to its central point - that pain behavior, overt behavior of whole organisms normally considered as expressive of pain, is in the long run all there is to pain. Goldman does seem to argue against this point if we take his argument against functionalism as applying to behaviorism as well. He says, "For any functional [read, behavioral] description of a system [whole organism] that is in pain . . . , it seems as if we can imagine another system with the same functional [behavioral] description but lacking the qualitative property of painfulness. . . . When we do imagine this, we are intuitively inclined to say that the system [organism] is not in pain." In its antibehavioristic version this is the familiar perfect-actor argument. I suggested that the perfect-actor argument is intuitively convincing (it is never rationally convincing) only over a brief period of time. A stage actor acting as if he were in pain is not really in pain if, when the curtain goes down, he stops acting that way. But our intuition collapses when we try to imagine a person who from birth to death reacts appropriately to normally painful stimuli and yet was never in pain (Putnam's super-super-Spartans are unconvincing). Why does our skepticism about a perfect actor's mental state fail to embrace a whole lifetime of perfect acting? Because retaining skepticism under such circumstances would undermine the usefulness of the mental concept.

The perfect-actor argument falls into a common category of antibehavioral arguments, another of which is the mechanical-dolly example: Suppose you met the woman [man] of your dreams and spent one ecstatic night together. The next morning it is revealed that your "lover" is not a human being but a machine made of transistors and latex. Wouldn't you be disappointed? Yes, by presumption, you would. But now suppose that the doll is not identified as such until much later. You have married this doll, spent a joyous life together, had children [half doll, half human] and grandchildren. It is now the night of your 50th anniversary. Your wife [husband] is on her [his] deathbed. She [he] whispers in your ear, "Don't let them do an autopsy." You forget. The next day it is revealed to you that your spouse of 50 years was just a machine - composed of inorganic rather than organic chemicals. Would that lessen your grief? No, I daresay. What we intuitively saw in the original perfect-actor and mechanical-dolly examples was the inconsistency between the brevity and particularity of the act and the mental term used to describe it. Mental acts are not brief and particular; they are temporally extended and abstract.

The problem with the mechanical-dolly example is the same as that of the perfect-actor example - neither goes far enough. We use mental terms to describe extended patterns of behavior, not brief individual acts. As Gopnik's data reveal, mental terms are no more and no less than labels for abstract behavioral patterns. The mechanisms that explain how we do this are matters for cognitive and physiological investigation. But a prior question, the more interesting question raised by the important research of Gopnik and her colleagues, is why we do this - what are the final causes, the reasons, underlying our mental lives?

Theories of mind: Some methodological/conceptual problems and an alternative approach

Sam S. Rakover
Department of Psychology, University of Haifa, Haifa, Israel 31999
Electronic mail: [email protected]

[Gop] A major goal of Gopnik's very interesting target article is to disconfirm the commonsense theory of mind empirically and to support an alternative one called theory-theory. The commonsense theory is that our knowledge of our own intentional states, such as desires and beliefs, is fundamentally different from our knowledge about others' intentional states. The former is based on first-person knowledge, the latter on third-person knowledge or analogical inferences. From this proposed epistemological difference, Gopnik infers that a child's knowledge of his own mind would be much more accurate than his knowledge of others' minds. In contrast, theory-theory predicts a parallel development of a child's knowledge of his own mind and others' minds, since it contends that a child develops a unified theory of his own mind as well as those of others. A child ascribes the same intentional states to himself and to others in an attempt to explain his and others' behavior; intentionality is conceived as a theoretical, explanatory construct linked indirectly to observations. Gopnik's empirical data (i.e., children's responses in certain tasks such as appearance-reality and false-belief) support the theory-theory's predictions.

I shall make three comments regarding Gopnik's attempt to decide between these two theories. The first concerns various methodological problems with the attempt to refute commonsense theory; the second suggests certain conceptual ambiguities in theory-theory; and the third offers an alternative approach.

Falsification of commonsense theory. Gopnik's derivation of the commonsense theory's prediction is based on a false assumption: that a child's knowledge about others' minds is based on only one source of information, analogical inference. I suggest that a child's inner world consists of additional evidence which enables him to develop hypotheses more easily about others' minds than about his own. A child's inner world (as well as an adult's) consists of two types of mental events. The first type (type I) consists of the sensations, feelings, images, and sounds of which a child is conscious. These mental events are initially private, and are made public through the child's verbal reports. The second type of mental event (type II) is the various pieces of information a child obtains through the processes of socialization and language acquisition. Most of these mental events are first public, then made private through internalization (see Rakover 1990, p. 196). It seems to me that in comparison to type I mental events, type II mental events would provide a child with much more information about others' minds and behavior than about his own mind, since a major purpose of the process of socialization and language acquisition is to inhibit a child's natural egocentric tendencies and to open his mind to others' needs and points of view. Hence, it is possible to predict the reverse of Gopnik's deduction: A child would do better on tasks that test his knowledge of others' minds than on tasks that test his knowledge of his own mind. This prediction has been confirmed (e.g., Gopnik & Astington 1988; target article). So, according to the above analysis and interpretation, commonsense theory is not refuted, but rather confirmed.

Another problem with the refutation of commonsense theory is Gopnik's implicit assumption that first-person knowledge is constructed independently of third-person knowledge. Had Gopnik not made this assumption, the justification for deriving two contrasting predictions from the respective theories would be weakened to a great extent; indeed, the conceptual distance between commonsense theory and theory-theory would be shortened considerably. But commonsense theory does propose
a complex interaction between these two methods of knowledge acquisition. In the words of Burt (1962):

I conclude that the observation of self and observation of others are both indispensable. Unless we study our own inner consciousness we cannot fully understand the behavior of others, and unless we observe others we cannot fully understand our own. (p. 242)

A final obstacle to the refutation of commonsense theory is related to what I call "the mind-body correlational problem" (Rakover 1990, p. 247). The essence of this problem is that the methodological difficulties encountered in attempting to establish empirically causal relationships among mental events, and between mental events and behavior, are comparable to the problems confronted in proposing causal interpretations for correlations. Empirical solutions to such issues require sophisticated experimental designs and techniques, such as path analysis. Similar problems arise in Gopnik's key proposition that the major difference between commonsense theory and theory-theory lies in the causal sequence of mental events and behavior:

The commonsense picture proposes that we have intentional psychological states [see definitions below], then we have psychological experiences of the intentionality of those states, then we observe our own behavior that follows those states, and finally, we attribute the states to others with similar behavior. I suggest a different sequence: First we have psychological states, observe the behaviors and the experiences they lead to in ourselves and others, construct a theory about the causes of those behaviors and experiences that postulates intentionality, and, then, in consequence, we have experiences of the intentionality of those states. (sect. 8, para. 2)

Clearly, the empirical data presented by Gopnik are not enough to help one decide between these two sequences.

Conceptual ambiguities. Gopnik distinguishes between two different meanings of intentional states. "Psychological states" refer to

. . . the underlying entities that explain our behavior and experience. . . . These psychological states are similar to other physically or functionally defined objects and events in the world: atoms, species, word-processing programs, and so forth. (sect. 1, para. 2)

"Psychological experiences" refer to our. . . conscious experiences with a particular kind of phenomenology,the Joycean or Woolfian stream of consciousness, (sect. 1, para. 3)

From these it follows that psychological states are explanatory theoretical concepts that one uses to account for one's own behavior and others'. For example, I might conclude that I ate the Big Mac because I was hungry and because I believed this sandwich was edible. Thus, one generates an explanation of one's behavior on the basis of certain conscious experiences, such as an awareness of hunger, which appeared in one's mind before one attempted to account for the behavior in question. In other words, one proposes a subjective hypothesis for explaining behavior by introspecting certain intentional states in one's mind (see Rakover 1983). If this interpretation is correct, then psychological experiences come first. This sequence contradicts Gopnik's (which is reminiscent of the chain of events in the James-Lange theory of emotion), wherein it is only after constructing a theory of mind (on the basis of initial psychological states) that one has intentional psychological experiences.

A possible explanation for the above ambiguity is the false assumption that a child's theory of mind is identical with a scientific theory. There are similarities between a child's theory of mind and a scientific theory; for example, both are susceptible to the process of falsification and confirmation, and in both, concepts are delineated in terms of their functions. However, there is at least one crucial difference: Explanatory intentional states are conscious experiences, whereas scientific concepts are not. Scientific theories are not like live creatures - likewise, I do not believe that highly complex computers produce conscious experiences (see Rakover 1990, pp. 308-9; Rakover, in press). In short, a subjective hypothesis is generated from certain existing conscious mental events, whereas in science observations are guided by theoretical propositions (see Rakover 1990, Ch. 4).

An alternative theory. In accordance with a previous paper (Rakover 1983), I would argue that a child hypothesizes from introspections about the causal relations between his (and others') mental states and behavior. These hypotheses would be based on existing type I and type II mental states and could be confirmed or disconfirmed so that the child could change his mind and propose new and better hypotheses. In contrast to Gopnik's approach, the present theory, which I call "hypothesizing from introspections" (HI), emphasizes that (a) conscious experiences are primary, (b) subjective hypotheses are generated from existing mental events, and (c) explanatory intentional states are conscious experiences and not identical to scientific explanatory concepts.

One important implication of HI theory is that the set of hypotheses held by a child is less broad and less coherent than what is implied by theory-theory. This leads to the following two predictions: First, the consistency among a child's hypotheses increases with age. Indeed, one could view some of the experiments reported by Gopnik as indirectly indicating an increase in coherence. The second prediction deals with transfer-of-learning experiments. In the first stage of such experiments, 3-year-old children would be trained until they reach a criterion on a task that would test for understanding of their own mind (or others' minds); in the second stage, they would be tested in tasks that measure understanding of others' minds (or one's own mind). For this design, a smaller degree of transfer of learning would be predicted by HI theory than by theory-theory.

ACKNOWLEDGMENT
This commentary was supported by Carleton University, Faculty of Social Sciences, Ottawa, Canada. I would like to express my gratitude to Teresa van den Boogaard, who read an earlier draft of the present paper and made helpful comments. For reprints write to Sam S. Rakover, Department of Psychology, Haifa University, Haifa, Israel 31999.

Why presume analyses are on-line?

Georges Rey
Department of Philosophy, University of Maryland, College Park, MD 20742
Electronic mail: [email protected]

[Gol] Goldman's target article displays his usual exemplary clarity of thought and prose: It is a pleasure to be able to locate disagreements so clearly.

Goldman is concerned with a specific version of functionalism - what he calls "representational functionalism" (RF) - which he regards as "a hypothesis about how the cognizer . . . represents mental words" (sect. 3, para. 2), about "what guides or underpins an ordinary person's use of mental words" (sect. 2, para. 6). In particular, Goldman claims RF is committed to its analysis of a term (or concept) serving as a "category representation" (CR) that is exploited "when a cognizer decides what mental word applies to a specified individual" (sect. 2, para. 7). Indeed: "RF implies that a person will ascribe a mental predicate . . . when and only when an [instance (IR)] occurs in him bearing the [proposed analysis]" (sect. 3, para. 2). Biederman's (1987) "geon" representations are, for example, CRs that seem to play such a role in object recognition. Goldman offers a number of entirely convincing reasons for supposing that functionalist analyses couldn't possibly play a similar role in the recognition of mental states. Consequently a "simple RF model of mental self-ascription seems distinctly unpromising" (sect. 3, last para.).

But, of course, the functionalist can reply that it is preposterous to assume that analyses - or, for that matter, any single sort of representation - are actually deployed in on-line processing. I don't know offhand what a correct analysis of [giraffe]
might be, but a geon arrangement certainly isn't one: Some giraffes don't look like (the geon arrangement for) giraffes, and some things that look like giraffes are not. Moreover, geons are hardly the only way someone decides whether something is a giraffe: Sometimes one reads labels or asks zoo-keepers. In making identifications, people use whatever works, and there is substantial evidence that they don't often use perfectly good analyses even when they know them, but instead use prototypes and other heuristics that they know are not perfectly reliable (see Armstrong et al. 1983; Fodor et al. 1975). In any case, as I emphasized in Rey (1983; 1985), it would be gross verificationism to suppose that these epistemological facts about what we use to determine what's what are constitutive of the metaphysical facts of what is what, or what the defining conditions for a concept might be.

So, it's no argument against functionalist analyses that people don't use them in day-to-day reasoning. What role do they play? Let me suggest an alternative to Goldman's story that is compatible with functionalism and all his claims about ordinary processing: Concepts are represented in an agent by files that have multiple addresses. Many of the addresses involve features (e.g., geon arrangements) that just happen to be good indicators of the concept's extension in the environments in which agents find themselves. Inside the file there may also be a bona fide, counterfactual-supporting analysis, or a space for one; but, being clumsy, it is there only as a backup, to "guide or underpin an ordinary person's use" in thought experiments beyond the usual cases (sect. 2, para. 6). Whether that analysis is purely functionalist, or involves reference to "intrinsic qualitative states," is an issue that needn't be settled by this model. The file, however, can also simply be called up "automatically," by brute noncognitive causation: no addresses, no definition; just the causal consequence of, say, being stuck with a pin, or, more generally, of being in a certain state (maybe even one involving "intrinsic" pain qualia! Again, beside the point). Such brute causal connections are probably a good idea for an organism that does not have time to check things out. In some cases they are quite probably innate.

Goldman considers something like this story in sections 4 and 6, but rejects it for reasons I find strange: "this automaticity assumption cannot and should not be accepted by cognitive science, for it would leave the process of mental-state classification a complete mystery" (sect. 6, para. 6). He presumes that the only reason anyone would believe it would be the weak one of lack of introspections to the contrary. But there is also the elegance of the above model. In any case, what is the least mysterious about an animal being rigged so as to call up a file automatically when, say, its c-fibers are firing, or when it is frightened? At some point, anyone's story - even the most sophisticated cognitive ones involving comparisons of IRs with CRs - has to take some processes as automatic. How, for example, does the cognizer know that one shape matches another? Surely not always by determining that an IR and a CR match a CR for matching! Moreover, as even behaviorists admit, some things can't be learned (sect. 4, para. 3). Basic matching is probably one of them. Why shouldn't the connection between being in pain and thinking you are be another? (Moreover, I should think a reliabilist - such as Goldman elsewhere? - could even add that that is precisely how people know so well when they are in pain; and an information semanticist - so compatible with a reliabilist - could claim that that is how the file for "pain" comes to have its meaning.)

Goldman does raise two other, standard objections to functionalism, which are, however, entirely unrelated to his processing arguments: what's come to be called the "holism" argument - "every little change anywhere in the total theory . . . entails a new definition" (sect. 7, para. 3) - and the familiar problem from Block (1980) regarding arbitrary realizations of functional states (sect. 5, para. 6). These are genuine problems for functionalism that have been discussed abundantly in the

literature, and Goldman might have replied to some of those discussions. Suffice it here to say that (pace the impression in Lewis 1972) a functionalist is free to abstract from the whole (folk or psychological) theory and claim that only specific relations are constitutive of a particular mental state (judgment, say, might be merely the relation an agent bears to the output of perception and reasoning that is the input to decision making). The correct level of abstraction that allows for some but not indefinite variability in psychology and physical realizations is surely in part a question for empirical theory (isn't it economics that settles what "capital" is, and how variable capitalism and realizations of it can be?). Notice, moreover, that, if either of these objections were valid, we couldn't provide functional definitions of anything - even of patently functional concepts like [can opener], [capital], or [carburetor]!

For anything Goldman has said, functionalism is still a live and important option in the analysis of mind. It is arguably a lot better off than its rivals, like Goldman's, that posit qualia over and above functional states (to which, n.b., it is by no means clear that scientific psychologists like Gleitman [1981] are in the least committed). But the arguments here concern analysis, not processing, and, I take it, only the latter was really the focus of Goldman's discussion here (see Rey, 1992, for further discussion of the former).

Qualities and relations in folk theories of mind

Lance J. Rips
Department of Psychology, University of Chicago, Chicago, IL 60637
Electronic mail: [email protected]

[Gol] Goldman's folk have trouble with functional theories for recognizing mental states. Theories have so many intertwined relations - and counterfactual ones at that - that they're harder to operate than a microwave oven. These folk hanker after the old simplicity and qualitative feel.

Well, good for them. Goldman may be right to call attention to the role that qualitative properties play in classifying sensations and attitudes. In my view, these properties can be accommodated by the theory-theory (TT) in the version that cognitive psychologists are coming to accept. At the same time, however, it seems possible to defend a role for relations within this same framework, raising questions for some of Goldman's arguments.

Functionalism versus folk theories of psychology. As it comes up in recent work in psychology (e.g., Gelman & Markman 1986; Keil 1989; Rips 1989), TT holds that people's beliefs about categories include (whenever possible):

(a) some account of what makes an instance a category member,
(b) information about the typical properties of category members, and
(c) some account of the relation between (a) and (b).

It is not worth debating whether (a)-(c) form a "theory" in a strict sense; if you prefer, call it a "schema" or just a set of mentalese sentences. In any case, a folk theory for a natural category like sparrows might consist in the information that a particular genetic structure (not further elaborated for most of us) makes something a sparrow; a specification of a sparrow's normal size, color, and habitat; and knowledge that the genetic structure causes these characteristics (though again the detailed causal path is not further developed for us nonbiologists). Thus, simple nonrelational properties of sparrows are included in (b) and could well be involved in recognizing visually presented sparrows or pictures of them. Their verdict might be overridden, however, by additional information that bears on (a) and (c). Similarly, folk theories of a psychological state like thirst might stipulate that it results from deprivation of fluids, that it is
accompanied by parched-throat sensations and desire for drinking, and that the deprivation causes the sensations and desire. In first-person attributions we might rely heavily on the sensations, again subject to revision.

Representing categories as theories does not by itself banish qualities; in fact, it is difficult to see how theories could predict observable phenomena without them. A ban on qualities would follow, however, if TT were necessarily committed to functionalism. Functionalism seeks to identify a mental state like thirst with its position in a certain causal network; so if TT is necessarily functionalist, then mental states are represented in TT as purely causal-relational entities. It's a bit unclear what Goldman believes the relationship is between TT and functionalism. On one hand, he regards functionalism as "the most precise statement" of TT for mental terms (sect. 3, para. 1), which suggests that functionalism is TT's best chance; and he takes himself to be denying "that our mental concepts rest on a folk theory" (sect. 12, para. 1). On the other hand, he holds that "commitment to a TT approach does not necessarily imply commitment to RF in the mental domain" (sect. 10, para. 7). It's the second stance that's right: If representational functionalism is false, what follows is not that folk theories aren't used in classifying mental states, but that folk theories can't be purely functionalist. A theory in the style of (a)-(c) is consistent with this latter eventuality.

Relations in representations of mental acts. Many psychological categories depend, at least in part, on relations. Consider, for example, the items listed in Table 1, which are the 30 most frequent responses that a group of undergraduates gave when asked to write down examples of "mental activities." These include items like feeling and wanting that are of the sort Goldman analyzes, but there are many others that have strong relational components. In the case of talking, reading, drawing, writing, and maybe creating, it seems clear that they do not apply just on the basis of phenomenal mental experience. You can't be said to be talking unless the subjective experience is tied in the right way to output. Much the same is true for the factive items, remembering and forgetting (Just & Clark 1973; Kiparsky & Kiparsky 1970). For although you might describe yourself as remembering that Sam paid Fred on the basis of qualitative information, you would revise this opinion under pressure of evidence that the payment was never made. Table 1 also includes activities, such as computing, planning, solving problems, reasoning, and perhaps analyzing, that involve certain arrangements of subsidiary acts. Solving problems, for instance, includes apprehending the problem, adopting the goal of solving it, carrying out additional activities that achieve the goal, and so on. Qualitative information might suffice to recognize the components, but they have to be arranged in the correct sequence for the ensemble to count as solving a problem.

Table 1 (Rips). The most commonly listed mental activities, with the proportion of subjects listing them

Mental Act       Proportion(a)    Mental Act         Proportion(a)
Thinking         .93              Planning           .27
Dreaming         .60              Questioning        .27
Remembering      .60              Solving problems   .27
Wondering        .47              Worrying           .27
Fantasizing      .40              Analyzing          .20
Talking          .40              Keeping calm       .20
Choosing         .33              Computing          .20
Concentrating    .33              Deciding           .20
Creating         .33              Drawing            .20
Daydreaming      .33              Forgetting         .20
Feeling          .33              Ignoring           .20
Imagining        .33              Reading            .20
Memorizing       .33              Reasoning          .20
Understanding    .33              Wanting            .20
Looking          .27              Writing            .20

(a) n = 15 subjects

Mental activities are not Goldman's targets. However, relational mental acts pose a question for his analysis: If relations to mental states are as difficult to assess as he claims, shouldn't this same difficulty carry over to mental activities? Goldman says that "Virtually all of our antifunctionalist arguments . . . apply to all types of mental predicates" (sect. 8, para. 2). Yet we encounter few problems in self-ascribing acts like remembering or solving problems, where the relational content is hard to deny. Goldman anticipates some ways around this problem (while denying that they salvage a functional theory): Qualitative properties may sometimes serve as proxies for relational ones (as in the use of shape to identify can-openers). We may also rely on just a few of the more important or accessible relations ("partial matching"), and we may have computed some of these prior to the classifying episode in question. Such possibilities short-cut the need to process an explosive number of potentially obscure relations.

These strategies reduce our dependence on relations in classifying, but they should not tempt us to say that relations aren't part of our folk theories of mental acts or that they play no role in classifying them. It is easy to come up with examples for these acts (like the remembering example above) in which relational facts become decisive. The same goes for can-openers, as Goldman notes. Although we can use shape as a clue that something is a can-opener, evidence that the object can't possibly open cans is usually enough to make us change our mind. This is, in fact, exactly the situation for which TT was designed: Surface properties that are normally sufficient for classifying are sometimes downgraded in the face of theoretical information. But if this is true for mental acts, why not also for attitudes? It's possible to imagine two situations that are the same qualitatively, but in one of them we would affirm a belief and in the other deny it on the basis of causal or relational facts. For example, I might affirm a belief that there are insects crawling on me on the basis of phenomenal information, but deny the belief in the same phenomenal setting if I also knew that I was participating in an experiment on drug tolerance. In short, we need a stronger argument from the premise that we use qualities in recognizing sensations and attitudes to the conclusion that we represent them in a purely qualitative way.

On leaving your children wrapped in thought

James Russell
Department of Experimental Psychology, Cambridge University, Cambridge CB2 3EB, England
Electronic mail: [email protected]

[Gop] According to Alison Gopnik, children begin life with a big task ahead of them: They have to construct a psychological theory of action and mental life. The one they "invent" is the "theory of intentionality." They first test inadequate theories against the data, but inevitably these are exposed to "falsifying evidence" and abandoned. Luckily, all children not only hit upon the same theory in the end but they all go through theoretical shifts at much the same time! When a little theorist grows up he may become a philosopher who thinks about the way that conscious states relate to reality and may argue thus: "All my beliefs may be wrong, but at least my belief that I have these beliefs is safe, which is more than I can say about other people's mental states." What we must do about such a deluded
creature is to say to him, "Look, you have forgotten how hard-won was your conception of belief. Don't you remember what it was like being a child, when you spent so much time wrapped in thought? This idea of 'belief' was one you invented (we all did!), so your believing that you are believing X is no more incorrigible than any other proposal in science. To cure yourself of this error, cancel your subscription to the Proceedings of the Aristotelian Society and begin one with Child Development."

This is a caricature, but I do not think it is unfair. In short, I would want to take issue not only with how Gopnik regards mental development but with how she views the relation between empirical and philosophical questions in cognitive science.

First, there is an uncontroversial sense in which children could be said to be "acquiring a theory" within the mental domain. Mental terms are theoretical terms because grasping them requires a more general grasp of the theoretical network in which they are embedded. Not all terms are theoretical, and recently Campbell (in press) has made the interesting proposal that terms which refer to "primary qualities" are theoretical and those which refer to "secondary qualities" are not. But whatever is the answer to that question, the apparent fact that the acquisition of the theory is gradual does not at all imply that developing approximations to the adult theory are themselves theories. It does not imply that the reason children think about mental concepts in one way when they are 3 and in a different way when they are 4 is that the older children have constructed a better theory for themselves. But we could reasonably say that it is because they are now capable of mental operations of which they were not capable at the earlier age. (The next two points about development will enlarge on this claim.)

Second, it is pretty certain that the brains of scientists in the Renaissance performed the same kind of mental operations as those performed by the brains of scientists today. The idea that one scientific theory replaces another because the practitioners just get smarter must be wrong. And is it not equally wrong to believe that younger children can perform all the mental operations of which older children are capable, but that they have been working at the problem of cognition for less time? In short: It is a reasonable conjecture that as the brain develops, newer and more adequate judgements about mental life become possible. Gopnik tries to deal with this kind of point under "information-processing alternatives." I agree that failures on theory-of-mind tasks cannot be explained away in terms of such processing factors as memory failure and reality seduction (to recant somewhat on Russell et al. 1991); but this is not the same as believing that 3-year-old brains process information in just the same way as 4-year-old brains. Of course we can say that 3-year-olds lack, in the uncontroversial sense, the right theory of mind, but we still have to explain why. I cannot see any other way of trying to explain why than by going "deeper" towards computational models of cognitive change expressed in terms of "information-processing" rather than "theory." Explaining development by theory change is either an uncontroversial description of what happens or it is a false parallel with the case of the scientist.

Third, let us consider the theory itself and let us agree with Gopnik and others, for the sake of argument, that we can regard the theoretical network of mental concepts as being similar to a scientific theory: This weakens, not strengthens, the parallel between child and scientist. The scientist - and in a sense the human race - is standing before Nature constructing a theoretical system to render her intelligible. But this is not how the child is placed at all. Children do not stand before Nature, but before Nature and the human conceptual system, a system (the constructed "theory") to which the child comes to adapt in the course of development. There is no scientific analogue to this: Doing science is not a matter of adapting to a conceptual system "out there" (unless it is what some physicists call "the mind of God"!). Gopnik would not approve of this talk of adaptation to a conceptual system and would probably regard it as "enculturation," but this would be a mistake: The view that children adapt to the conceptual system surely does not entail that the process is on a par with learning "to eat politely and dress appropriately." And it certainly does not mean that the adaptation cannot have a stagewise character - whose nature Gopnik eloquently describes. The stagewise character may be determined by the kind of endogenous, information-processing changes mentioned above. The plausible alternative to the adaptational view is some form of nativism (Fodor 1987). The view that all children heroically construct the human conceptual system for themselves is a good deal less plausible.

Finally: The principal aim of Gopnik's target article is to show how facts about the early course of mental development make a broadly Cartesian view of mind untenable, taking this view to be an empirical theory. But surely it is not an empirical theory. If it had turned out that young children fail to answer questions about their own mental states whilst brilliantly answering questions about the mental states of others, the Cartesian could say that they had learned verbal formulae without knowing their meaning. If the grown-up theorist in my first paragraph is wrong, then he is philosophically wrong. Better advice to him would be to read Chapter 3 of Individuals (Strawson 1959) or to look at some of the recent work on belief and the problem of "content" (Brown & Luper-Foy 1991, for a handy summary).

Disenshrining the Cartesian self

Barbara A. C. Saunders
Department of Cultural Anthropology, University of Utrecht, 3504 TC Utrecht, Netherlands

[Gop] According to Gopnik, young children do not rely on a given self characterised by intentionality, directly accessed by first-person privilege; they do not possess the crucial authorship or inner core required by the commonsense notion of self. They are, it seems, a counterexample to the cherished belief that "knowledge of intentionality, like knowledge of sensations, comes directly and reliably from our psychological experience."

Gopnik is right to make the analogy with sensation, for that too requires a theory, neither intention nor sensation being known by indubitable, incorrigible access (Saunders 1992). If intentionality is that which most fully characterises self-referential mental components permeating social phenomena (Searle 1991), then the subject of mental-predicate avowals - the "I" of "I believe, think, fear" and so on - might be thought a theoretical concept, structuring the propositional content of the mind of the actor. Although I agree with the general thrust of Gopnik's argument, I prefer to cast her account in a different mould. Specifically, I disagree with her implicit attribution of the source of this theory, for despite her caveat about maturation (note 8), she seems committed to the givenness of the individual organism directly acting on the physical environment and constructing representations out of this activity without the mediation of others - a solitary, central information processor, constructing its own theory of intentionality.

In contrast, I prefer to think of the subject of intentionalascription as being brought into being by appropriations ofresources provided by caregivers that promote the kind of de-velopment that issues in first-person mental-predicate avowals.Caregivers, as products and producers of intentionality, medi-ate their interactions with a child through linguistic and quasi-linguistic processes and practices attributing the mental re-sources for the mastery, management, and organisation ofintentionality, self, or mind (Shotter & Newson 1974). That is, asJohn/Jane howls, the caregiver says: Do you want (fear, feel,believe, hope, etc.) that X? (Do you want your diaperchanged/feel hungry/need a cuddle?) As all the fine grain of
higher mental functions probably requires linguistic mediation or the "artificial" development of a child (Vygotsky 1966), the apparent transparency of first-person privilege would be the result of the myriad, mundane processes of linguistic training and learning that are ingrained and operate throughout life-history in a way that is preconscious and hence not readily amenable to conscious reflection and modification - what Bourdieu (1991) calls the "habitus."

A symbiotic relation of caregiver and child scaffolds the gradual mastery of the "I" of intentional ascriptions, brought into being by the factual and reciprocally taken-for-granted adult superiority of linguistic competence and knowledge of the world. The adult partner, knowing what talked-about states of affairs are, and what words mean, inducts the child into the appropriate language games through the asymmetry of adult control of the emerging, sustained, shared, social reality and meaning (Rommetveit 1985). In this setting a child masters the intricacies of local first-person mental-predicate avowals in terms of communicative functions.

In this light, Gopnik's review of experimental work can be reinterpreted in terms of "the power of the question," which is at a maximum under conditions of unequivocal asymmetry, as in the dialogues reported in the experiments: The children merely stand corrected through "other-regulation" of asymmetric dyadic control, dependent as they are on adult truths. So Gopnik et al.'s 3-year-olds who predict that others will think there are pencils in the candy box have been corrected into mastery of something about the box (a new kind of pencil box similar to a candy box?). But what they have not yet mastered is the language game parasitic on the possibility of deceptive appearances and difficult adult talk.

Regarding the account of the origins of intentionality I favour, self-ascription of intentionality arises out of the joint intentional actions (intersubjectivity) of mindful creatures whose intentions are structured and stocked from a social and interpersonal reality (Harré 1983; 1989). Thus, beliefs are social phenomena, the result of an interpersonal history of surrounding language and culture, the intrapersonal history of the individual, and the interaction of the two. As a child gains mastery of local first-person mental-predicate avowals, it gains mastery of the key nodes of sociolinguistic practices or language games and the appropriate judgments for such avowals. There are then no Cartesian, Piagetian, or Popperian homunculi using hypothetico-deductive models of knowledge acquisition.

On the subject of intentionality inspired by Wittgenstein (1953) and Vygotsky (1986), Gopnik's illusion vanishes, directness becomes relative to the local theory, and the apparent dissonances take on new values in the explanatory and interpretational framework. Indeed, her appeal to the topological interiority and mental inwardness located in the activities of the brain starts to look like a confused notion (Shotter 1987). Her solitary, individualist, information-processing child, constructing labels (verbalising "beliefs") for the inner goings-on of the physically and functionally defined underlying entities, is faced with the deep philosophical problems of how intersubjectivity is possible, how there can be knowledge of other minds, and how beliefs are constructed about beliefs.

According to the account I prefer, the "I" of intentionality is the product of the local theory - a contingent cultural artifact - acquired in the course of its structurations (Giddens 1984; Harré 1989) - a conclusion borne out by linguists and anthropologists who have found no simple synonymy of mental-predicate ascription and concomitant pronoun systems. For example, Kwakw'ala-speaking infants (of Vancouver Island) may learn the inclusive "we" of "speaker and others, including person addressed" before the exclusive "we" of "speaker and others excluding the person addressed" (Boas 1911). This is an example of how the sense of self may differ cross-culturally and in the course of a lifetime.

Gopnik's striking finding that children understand their own psychological states when they understand the psychological states of others supports my contentions, for rather than being a finding it is precisely what one would predict on the Wittgenstein-Vygotsky model.

Finally, what of the directness or privilege of first-person mental predicates? Following Mühlhäusler and Harré (1990), the entire Cartesian underpinning of the commonsense notion of self should be abandoned, for neither child nor adult possesses a given inner core or ego that can be directly accessed. Following Davidson (1989a), privileged access should be demoted, for once the subject of intentional ascription is in place, thoughts are obviously private in that they belong to a single person, knowledge of which is asymmetrical, because the person who has a thought generally knows he has it, as others do not. But that is all there is to it.

Special access lies down with theory-theory

Sydney Shoemaker
Sage School of Philosophy, Cornell University, Ithaca, NY 14853-3201
Electronic mail: [email protected]

[Gol, Gop] Contrary to what Goldman frequently implies, what functionalism holds to be constitutive of something's being a mental state of a given kind is not what its actual causes and effects are but what states of its kind are apt to cause (or contribute to the causing of) and be caused by, which I will equate with what Goldman calls the "subjunctive" properties of the state. Functionalism is not committed to "pure relationism." So, although it keeps coming up, the unavailability to the self-ascriber of information about actual causes and effects is a red herring. Why does Goldman think that information about subjunctive properties is also not available? His discussion assumes that our self-knowledge is grounded on our being aware of, or somehow presented with, mental items that we must identify as being (or representing) states of this or that mental kind on the basis of their presented properties. If one thinks of this awareness on the model of sense perception (and in particular, vision), it will indeed be natural to conclude that the information it provides directly cannot itself be causal or subjunctive information and will often be insufficient to provide the basis for inferences to causal or subjunctive information of the required sort. But the appropriateness of such a perceptual model of introspective self-knowledge is one of the myths of foundationalist epistemology, and it is one that I would have expected Goldman to spurn.

The view Goldman calls "classical functionalism" (which, near enough, is my own) cuts out the middle man (quasi-perception of mental items presented as candidates for identification on the basis of quasi-perceived properties) and holds that it belongs to the nature of certain mental states that having them directly issues, under certain conditions, in the belief that one has them. The "certain conditions" will of course include having the concepts of the states. And if these are functional concepts, then of course self-ascriptions involving them will have causal or subjunctive implications, to which the self-ascriber is committed. But the awareness of these implications will normally only be tacit, and there is no question of the self-ascription of the state being grounded on prior knowledge of causal or subjunctive information about the state ascribed.

Goldman thinks this view leaves the process of mental-state classification "a complete mystery," because it denies that there is "a microstory about how we make . . . mental classifications." But this is a groundless charge. Classical functionalism does not deny that there is a microstory; what it denies is that there is a microstory of a highly implausible sort.

Compare another sort of case in which mental states "automatically" give rise to other mental states - that in which a set of
beliefs, B, give rise to a further belief, C, whose content is an obvious consequence (deductive or inductive) of their contents. No doubt there is a microstory (as yet unknown) about how this takes place. But one sort of microstory seems out of the question. It would be wrong-headed to suppose that, having identified the underlying mechanisms or structures in which the possession of the various beliefs (and the various concepts they involve) is implemented, one must postulate additional mechanisms, completely independent of these, to explain how it is that B gives rise to C. Given a neural or other subpersonal mechanism, nothing could justify regarding that mechanism as an implementation of a given belief if the nature of the mechanism is not such that the microstory of its existence and operation involves an implementation of the inferential role of that belief (and so involves relations to a larger system). It is equally wrong-headed, I think, to suppose that having identified the underlying mechanisms in which beliefs, thoughts, sensations, and so on, are implemented, and those in which the possession of concepts of these is implemented, one must postulate yet other mechanisms, independent of these, that explain how it is that these states give rise to introspective beliefs about themselves.

There are, I think, strong conceptual reasons for thinking that the introspective belief that one has a first-order belief or desire supervenes on the first-order state plus human intelligence, rationality, and conceptual capacity (see Shoemaker 1988; 1991b). And a version of functionalism that incorporates this view accords much better with the phenomenology than Goldman's view does. Certainly there is nothing we are aware of that answers to the description: being aware of something and identifying it as a belief that the cold war is over by noting its observable features. Introspective awareness of intentional states is normally awareness of mental facts unmediated by awareness of mental particulars.

Such intuitive plausibility as the perceptual model of introspection has derives entirely from the case of awareness of sensory states (and, I suspect, from an act-object construal of such states that, if adhered to consistently, leads straight to sense-datum theory). And here the considerations that make plausible the view that "qualia" are not functionally definable may lend plausibility to Goldman's view. I have had my say about these considerations elsewhere (see Shoemaker 1975; 1991a), and cannot repeat it here. I will only observe that Goldman scuttles his own case when he lists the "dimensional components" of pain. One of these is aversiveness; and he cannot maintain with any plausibility that this is a "qualitative" feature of the sort that something can have independently of its causal role.

Although Gopnik's position is in some ways very different from Goldman's, I think they share the assumption that self-knowledge must be grounded on something analogous to sense perception. Gopnik's developmental data do not directly support any view about the basis or provenance of first-person mental state attributions. Let us assume that she is right that they support some version of the "theory-theory." The theory-theory is not in the first instance an epistemological thesis at all; it is a thesis about the semantics of mental terms, or the nature of mental concepts - the thesis that the meanings of the terms, or the nature of the concepts, is some function of their role in some sort of theory. There is nothing in this, by itself, to contradict the commonsense view, opposed by Gopnik, that "our knowledge of our own psychological states is substantially different from our knowledge of the psychological states of others." The theory-theory is compatible with the view of "classical functionalism" that (oversimplifying it greatly) self-ascriptions of mental states are direct and automatic products of the states that are self-ascribed. But Gopnik thinks that self-ascriptions are somehow based on what she calls "psychological experiences." She sensibly shies away from the Rylean view that they are grounded entirely on the same sorts of evidence (behavior, etc.) that ascriptions to others are grounded on. But she apparently thinks that just because the concepts involved in these self-ascriptions are theoretical, as in the theory-theory, it cannot be the case that we directly experience our possession of the beliefs, desires, and so on, that are self-ascribed. Here, I think, we see the perceptual model of introspection at work. The assumption is that the self-ascriptions must be grounded on something like sense perception, and that where claims with theoretical content are grounded on perception they cannot be direct perceptual reports and must instead be the product of some kind of inference. In making this assumption she plays into Goldman's hands. Gopnik tells us basically nothing about what the immediate objects of "psychological experience" are that provide the basis for our theoretically grounded inference to intentional states, and she leaves it a mystery how these unspecified contents enable the child (a little Einstein already at 3½) to construct the theory-theory and make self-ascriptions based on it. If she wants to hold the theory-theory, she would do much better to abandon this assumption, and with it the phenomenologically implausible claim that self-ascriptions of intentional states are grounded on some kind of "experience."

Gopnik saddles the commonsense view that we have a special access to our intentional states with some highly implausible claims, for example, that it is by experiencing these states that we know such things as that they "refer to the world - but that they may be false, they may change, or they may come from many different sources" - as if these are things we simply read off from something we directly experience. But the special access view is not committed to such claims. The view is that once one has the concepts of the various mental states, having those states gives one immediate (noninferential) access to the fact that one has them - and the fact that such states can be false, and so forth, is something one knows in having a full grasp of the concept of them. What Gopnik's developmental data show is that the grasp of the concepts of such states proceeds by stages and is not completed until around age 4. This is entirely consistent with the special access view. (But this process of concept acquisition seems most plausibly thought of, not as theory invention by individual children on their own, but as either the internalization of concepts [and theory] already in play in the child's social and linguistic environment or a developmental process in which an innate conceptual structure matures - or some combination of these.)

Despite their disagreements, Goldman and Gopnik are alike in thinking that the special access view (favored by him, opposed by her) and the theory-theory (favored by her, opposed by him) are incompatible. Underlying this point of agreement, I think, is the fact that both take for granted a certain sort of perceptual model of introspective self-knowledge - a model "classical functionalism" rightly rejects.

Knowing children's minds

Michael Siegal
Department of Psychology, University of Queensland, St. Lucia, Brisbane, Australia 4072
Electronic mail: [email protected]

[Gop] According to Gopnik, children are so preoccupied with their own current mental states that, not only do they lack an understanding of others' beliefs, but they have little insight into their own past beliefs. This conclusion is an extension of the traditional Piagetian emphasis on childhood egocentrism and squares with the frequent proposal that a conceptual deficit exists in young children's "theory of mind" (e.g., Perner et al. 1987). In contrast to Gopnik's view, mine is that children's authentic understanding in many domains of cognitive development, including their knowledge of beliefs, is more extensive than is often estimated. It is merely masked by subtle conversational and contextual factors in specialized, experimental settings (Siegal 1991a; 1991b).

Linguists and philosophers such as Grice (1975) have noted rules or conventions that characterize the nature of communication. In short, these may be termed the maxims of quantity, "Say no more or no less than is required," quality, "Try to say the truth and avoid falsehood," relevance (or relation), "Be relevant and informative," and manner, "Avoid obscurity and ambiguity." In conversations between adults, the implications of violating the rules are usually mutually understood. This can occur, for example, when there is a desire to ensure that respondents are certain of their initial answers through the use of repeated or prolonged questioning that contravenes the quantity rule. [See BBS multiple book review of Sperber & Wilson's Relevance, BBS 10(4) 1987.]

For the scientific purpose of ascertaining what children know, well-meaning experimenters may inadvertently set aside conversational rules and use repeated or prolonged questioning or ask questions where the answer is obvious. Yet children's early conversational habits are consistent with the speech of caregivers who, for the most part, have not set aside conversational rules. In adjusting their speech to suit the characteristics of a learner who is not fully familiar with the complexities of adult language, caregivers are apt to say no more or no less than is necessary to sustain conversation, they are relevant and informative in referring to objects and events in the here and now, and they are concerned to correct truth value in the child's speech rather than errors of syntax (De Villiers & De Villiers 1978, pp. 192-98).

Young children therefore may not recognize the scientific purpose of test questions that depart from conversational rules. To interpret the questions, they may import relevance from local concerns and answer incorrectly. Another explanation for their lack of success was pointed out by Donaldson (1978) and goes beyond Gricean examples of ambiguity and obscurity in speech. It is that children and experimenters differ in the manner in which they represent the physical setting of the task. They may answer incorrectly not because they are ignorant of the answer but because they do not share the adult's conversational territory.

In "theory of mind" tasks, 3-year-olds are likely to fail unlesscare is taken to follow conversational rules and to ensure thattheir representation of the purpose, relevance, and setting ofthe task does not differ from that of the experimenter. Oneexample comes from false-belief measures (e.g. "Jane wants tofind her kitten. Jane's kitten is really in the playroom. Janethinks the kitten is in the kitchen. Where will Jane look for herkitten?") that are liable to contravene the quantity rule by sayingless than is necessary if children do not recognize that thepurpose of the questioning is to determine whether they canidentify the relationship between beliefs and outcomes. In somerecent experiments (Siegal & Beattie 1991), the test questionwas changed to "Where will Jane look first for her kitten?" toprevent children from wrongly inferring on the basis of theirown local concerns that it meant, "Where will Jane have to lookfor her kitten in order to find it?" Most answered correctly inpredicting the consequences of holding a false belief.

By comparison, Gopnik and Slaughter (1991) describe a belief task in which 3-year-olds originally think crayons are inside a box of crayons but are then shown that birthday candles are inside instead. Children scored no better than chance on the questions, "When I first asked you, before I opened up the box, what did you think was in the box then? Did you think there were candles inside or crayons inside?" However, having just viewed the candles inside the box, children are likely to use this physical information to answer the complex (and in my opinion, awkward) questions about beliefs. In reconciling the relevance of the context to the question, they regard the introduction of the information as pivotal. After all, they might reason, why would an adult go to the trouble to create and disclose the information unless the implication was to use it in answering the test questions? In doing so, children would simply reinterpret the questions about beliefs as the straightforward question, "What do you think is inside the box?" The 3-year-olds would probably have answered more correctly had they received the single, short question "What did you first think was in the box?" with care taken to ensure that they regarded each alternative as equally relevant - as, for example, in Mitchell and Lacohée's (1991) tasks, which endowed each alternative with a physical representation. But rather than endorsing Mitchell and Lacohée's claim that performance on belief tasks reflects a reluctance to identify representations without a physical counterpart, I contend that children's success requires situations in which the relevance of the correct alternative is not undermined by the physical setting or the form of conversation.

Gopnik and Slaughter contrast responses on their belief task to those on a pretend task where 3-year-olds easily answered questions about pretending that a stick was first a spoon and then a magic wand: "When I first asked you, before we moved over here, what did you pretend the stick was then? Did you pretend it was a spoon or a wand?" However, from an early age, children recognize that pretending is detached from physical reality, unlike thinking, which can be either detached or nondetached and continuous. They could not incorrectly reinterpret the questions about pretending to mean "What do you pretend the stick is?" because of their detachment from the two induced pretend states and because the experimenter had not highlighted physical information that could lead one of the pretend states to be construed as more relevant to the answer.

Elsewhere we detail how tasks that have purported to show conceptual deficits in young children's theory of mind, including those cited by Gopnik, can be attributed to a clash between the conversational worlds of adults and children (Siegal & Peterson 1992). The task for investigators is to examine conditions under which children do demonstrate understanding in their knowledge of beliefs as in many other domains.

The developmental history of an illusion

Keith E. Stanovich
Ontario Institute for Studies in Education, 252 Bloor Street West, Toronto, Ontario, Canada M5S 1V6
Electronic mail: [email protected]

[Gop] Although Gopnik is careful to restrict the scope of her target article and to limit her claims for general philosophical impact on some wider debates (see sect. 8 of the target article), her argument has, I believe, fairly broad implications for the ongoing debate about the status of folk psychology (Baker 1987; Bogdan 1991; P. M. Churchland 1989; Clark 1987; Garfield 1988; Greenwood 1991). I view Gopnik's argument as a blow to advocates of an autonomous folk psychology who wish to stave off eliminativism. Gopnik presents empirical evidence questioning a critical causal link in folk theories of the mental: that our belief/desire explanations about ourselves derive from direct and incorrigible first-person knowledge of internal experience.

Those wishing to deny that Gopnik's argument poses problems for the idea of an autonomous folk psychology might well argue that we do not actually know that a pivotal assumption her work contradicts - that knowledge of intentionality comes directly and reliably from psychological experience (call this the "central assumption" of Gopnik's analysis) - is in fact part of the theoretical structure of folk psychology. This rejoinder is possible because the intense debate about the philosophical status of folk psychology takes place largely in the absence of empirical evidence concerning just what folk psychological beliefs are and just how adequate they are as an explanatory system.

There is little evidence on the central assumption that intentional states derive directly from privileged access to personal psychological experiences because there is little systematic evidence on any aspect of folk psychology. Gopnik provides no data for the assumption and, if it is wrong, then some of the more revolutionary implications of her target article disappear. However, the sparse information available does suggest that the central assumption is part of commonsense psychology. D'Andrade's (1987) description of a folk model of the mind, based on interview material from five college and high school students who had never had psychology courses, contains many propositions consistent with the central assumption. The model of folk psychology derived from his interviews contained the assumption that "What one knows or believes is usually considered to be a creation from within, the result of the operation of the mind itself" (p. 117). The folk model of D'Andrade's subjects appeared to involve a self that "reads off" internal states, including beliefs, and consciously decides on a course of action.

My own questionnaire and interview studies (Stanovich 1989) suggest a folk theory of the mind that converges with D'Andrade's in this respect. An active, conscious "self" is assumed to be the agent of control that makes decisions based on beliefs gleaned from introspective access to information in a passive brain. Belief in privileged and incorrigible introspective access was strong among my subjects and strongly correlated with the tendency to endorse dualistic views of the mind/brain relationship. Thus, based on the limited data from the D'Andrade (1987) and Stanovich (1989) studies, Gopnik's central assumption about commonsense psychology appears to be correct. Gopnik's interpretation of the developmental evidence is, I believe, also largely on the mark; thus she ends up building a third-person theory of the development of aspects of mind (intentional states) that does not implicate first-person experience.

The implication of Gopnik's challenge to the folk-psychological assumption that psychological states and psychological experience are closely linked adds to the evidence that some of the foundational assumptions of folk psychology - not just its specific predictions about restricted behavioral domains - are not empirically sustainable. For example, Dennett (1991c) argues:

I suppose that if we look carefully at the ideology of folk psychology, we find it pretty much Cartesian - dualist through and through [questionnaire and interview data indicate that Dennett is right, although the layman's view appears to be an unprincipled amalgamation of substance and property dualism; see Stanovich 1989]. But notice that nobody in philosophy working on folk psychology wants to take seriously that part of the ideology. . . . We have apparently just decided that dualism . . . is an expendable feature of the ideology. The question that concerns us now is whether there are other, less expendable features of the ideology. (p. 137)

Gopnik has presented us with evidence questioning a "less expendable feature of the ideology": the idea that privileged first-person access is the mechanism that builds the intentional states of folk psychology. This, like folk psychology's ill-advised linkage with dualism, is a fundamental challenge to the integrity of the folk theory.

These problems surround fundamental premises about the processes that are supposedly at the center of mental life. They are the bedrock assumptions about what "common sense" takes the psychological world to be. Gopnik's challenging conclusion reveals - as does science's general rejection of dualism - a powerful illusion in the intuitive psychological theories that most people carry around. When we add to these two embarrassments the indication that the folk concept of consciousness embodies many conceptual confusions (Allport 1988; P. S. Churchland 1988c; Lyons 1986; Wilkes 1984; 1988) and recent hints that even our notions of "first-person experience" may be somewhat out of focus (Dennett 1988), we have, in total, a sizeable group of major contradictions about fundamental matters. I don't know what philosophers think of all this, but as a psychologist, it makes me wonder about the oft-repeated position (e.g., Clark 1987; Horgan & Woodward 1985) that, except for predictions in a few arcane behavioral domains, folk psychology gets "most things right." Really?

Categories, categorisation and development: Introspective knowledge is no threat to functionalism

Kim Sterelny
Department of Philosophy, Victoria University of Wellington, Wellington, New Zealand
Electronic mail: [email protected]

[Gol, Gop] I take Goldman to have constructed an antifunctionalist dilemma. A theory of folk psychology ought to explain how the folk recognize the mental states that they themselves, and others, are in. Neither functionalist option for doing this works.

Recognition might be generated by "matching" an instance to a kind. Matching assumes that the folk have some complex representation of, say, what headaches are, against which they match a representation of a particular instance of a mental state. If the match is close, or exact, they recognize this token state as a headache; otherwise, not. Goldman argues that a functionalist account of mental states undermines matching. For if functionalism is right, mental states are complex, recondite, and even partially counterfactual amalgams of causal threads. Yet, as Goldman points out, an agent can wake up and know immediately he has a headache though entirely innocent of that state's causes and consequences. Even if the causal information were in some sense available, on functionalist accounts the sufficient conditions of a state's being a headache are very complex. So matching may well be computationally intractable. Moreover, there is even a threat of paradox. Knowing one is in a mental state is one of the distinctive elements of being in that state. The representation of an instance of a mental state should elicit the judgment "headache!" only if it has already resulted in the self-knowledge that the judgment is supposed to produce. That is a paradox.

The problems of matching instances with kinds are less terminal than Goldman suggests. For one thing, Goldman's case partially relies on the recondite nature of knowledge of "subjunctive" properties. He claims these "are extremely difficult to get information about." But consider an example: This state would cause aspirin taking, were some available. There is nothing particularly hidden about this counterfactual truth. We all have immediate access to countless subjunctive facts about what we deal with. I know without trial that my partner would be irritated by being awakened by having icewater poured over her in the middle of the night. These are not recondite facts. Although there may be a general problem of explaining our access to subjunctives, this is no specific difficulty for functionalism. Subjunctive information is routinely available. Furthermore, there are independent reasons for resisting versions of functionalism that would be subject to a matching paradox. If mental kinds are causal kinds, they are causal patterns in which no single element is essential. No particular cause or effect is required for some state to be a headache. So a folk functionalist had better associate a mental term with a causal syndrome of relations between input, inner states, and behaviours, no particular element of which is necessary, so long as most are present. Such a theory has problems aplenty, but it does not suffer from a matching paradox. So I think this horn is blunt: An appeal to counterfactual knowledge, partial matching, and an antiessentialist form of functionalism leaves the idea that instances are identified through matching still in the field.

BEHAVIORAL AND BRAIN SCIENCES (1993) 16:1 81

Commentary/Gopnik/Goldman: Knowing our minds

Still, I think a better strategy is to sharply distinguish the constitutive from the epistemic. What an elephant is is one thing; how we recognize elephants is another. No doubt there are trunkless elephants, and trunked nonelephants. But if you are confronted with a trunk you will not go far astray in supposing it comes with elephant attached. An agent might thus recognize creatures via this symptom of elephanthood whether or not they had a complex cognitive understanding of what it was to be an elephant, whether or not this symptom is even part of what being an elephant is.

The folk functionalist should argue that mental concepts are terms in a language of thought whose meanings are understood denotationally.¹ The kinds denoted are constituted by their functional roles. The idea is to combine a denotational idea of understanding a concept with a functionalist account of the kinds so denoted. Having the concept of fear is just thinking in a version of mentalese that includes a fear-denoting concept; having a word for fear is just having a word that expresses that concept. I suspect that Goldman thinks there must be more to having a mental concept than that (see note 1), but given an account of how such concepts can be applied to their instances, I see nothing to debar this minimalist account of understanding. I do not think there can be a general objection to a denotational account of understanding, even though this idea is almost the only one Goldman does not consider in section 2. For the mental representations he does discuss are complexes whose primitive elements, presumably, derive their meanings from their denotations. Understanding cannot be from definition or definition surrogates all the way down.

The alternative to matching is to suppose that mental concepts are applied through detection of more or less reliable signs of the state. What might serve as reliable signs of a state's being a headache? We might exploit the intrinsic properties of the state, sensations, perhaps. Functionalists had better not think these constitutive of mental kinds, but they can perfectly well allow that headaches, for example, have intrinsic neural properties. The alternative is to appeal to its higher-order consequences: the thought that you have a headache. Goldman considers neither satisfactory; I think both might be. Let us consider first headache recognition mediated by the intrinsic features of that state.

Goldman rejects appeal to intrinsic neural properties in category judgments. He doubts that they are accessible to an agent. If accessible, they render functionalism redundant. Goldman denies neural properties are accessible qua neural properties. Agents do not know the neural states they exemplify, hence cannot exploit them to tell what mental states they are in. No appeal to physicalism is relevant: "All information processing in the brain is . . . neural processing. The question, however, is whether the contents (meanings) encoded by these neural events are about neural properties" (sect. 4, para. 7). Goldman argues they are not, by analogy with visual processing. Though visual states are neural states, they are about geometric features of the environment. I agree on teleofunctional grounds with his view of the content of visual sensation. But a functionalist here should buy Armstrong's (1968) model of internal perceptual organs; his "self-scanning" model of awareness. The teleofunction of internal detectors is to represent neural properties of the CNS; when conscious, perhaps, these representational states are sensations. So the very considerations that lead us to urge that visual percepts are about geometric properties lead us to urge that the content of sensation is of intrinsic neural features of the CNS. So intrinsic neural properties of mental state tokens are not unknowable; they are potentially exploitable in categorisation.

Intrinsic properties do not render functional categories redundant. Once we have introduced sensations into our story, Goldman wonders "what need is there for functional-role components in the first place?" (sect. 5, para. 1). This misses the point of distinguishing between the epistemic and the constitutive. Even though trunk-owning tracks elephanthood, having the concept of a trunk does not render having the concept of an elephant redundant. We appeal to sensation to explain an agent's self-knowledge. The functional category captures explanatory similarities across agents whose intrinsic states may be quite different.

An alternative is to suppose that there is a brute-causal connection between being in a state and thinking you are in it. Though I do not think this could be a fully general account of categorisation, I find Goldman's response to this idea puzzling. He thinks that it "would leave the process of mental-state classification a complete mystery" (sect. 6, para. 6). I do not see it. The claim that "being in the mental state automatically triggers a classification of yourself as being in the state" is just the claim that it is computationally primitive; it is not composed of cognitive or computational microsystems. It will have a more fundamental physical explanation. In any case, on pain of regress, some computational connections must be brute-causal. Goldman may think that a connection between your mental state and your thought that you are in that state is an unlikely candidate for a primitive connection. But that would require an independent argument that he does not give.

Goldman has a second strategy against this account of the application of the terms of folk psychology. He argues that it merely shifts the problem to an earlier learning stage. How does the agent learn that a particular sensation, or even thinking you have a headache, correlates with having one? I see no problem here. Agents do learn, for example, the symptoms of their own emotional states; the signs of their being tense or grumpy. The intuitive force of Goldman's headache example is its immediacy and speed; no calculation needed. But recognition can be swift and unselfconscious once learning is in place: Think, for example, of face recognition. The functionalist supposes we learn over time the headache syndrome; its causes and effects; the counterfactuals associated with it. Goldman's agent, identifying their own headache from just one effect, has brought to that task a good deal of experience. Now Goldman may well think all this crazy; no one ever has to learn about headaches, for you can feel you have one. But that only recycles standard qualia objections to functionalism.

I suspect Goldman is a standard qualiaphile in thin disguise. But though he is right in thinking that many versions of functionalism have epistemic worries, his dilemma has two blunt horns; functionalism need not render our inner life unknowable. So I think Gopnik goes in for a more radical rescue of functionalism from its epistemic worries than necessary. With her I have no deep disagreement; I agree with her on the theory-theory and on the importance of developmental considerations to the philosophy of mind. But I do have some reservations about the target article. Though development is undoubtedly relevant, I am not sure its relevance is quite so direct as Gopnik supposes.

I take her chief target to be an asymmetry thesis in first-person epistemology: Your knowledge of your own mental states is importantly different from your knowledge of the mental states of others. As Gopnik says, this claim has empirical consequences, for there could hardly be an epistemic asymmetry without a cognitive asymmetry. Gopnik's case against the asymmetry thesis is her case for developmental symmetry. Young children know as much of others' minds as they do of their own. Indeed, 3-year-olds may not have a theory of mind encompassing intentional notions at all. Their concepts are of detection; they do not have fully intentional concepts. The experimental evidence Gopnik reviews scarcely wears its correct interpretation on its face. Nevertheless, I found her arguments convincing, but still insufficient to directly establish the theory-theory or the cognitive symmetry of knowledge of self and others.

Consider first the theory-theory. The developmental facts (Gopnik thinks) support the theory-theory by showing that the knowledge of intentional states is not inevitably given in experience. It has a history; it must be learnt. Now she remarks in passing that this thought would be undermined if development were maturation rather than learning, but she clearly thinks such an idea is so ad hoc that it can be safely exiled to a footnote (no. 8). The idea seems reasonably plausible to me: Much of what Gopnik says about the development of mind strikingly parallels the development of language. A child's theory of mind develops without special instruction, at a similar pace and through similar stages across all children. Gopnik's own story seems to show this. I bet one could replicate Chomsky's argument: There are certain sorts of mistakes children do not make (in, e.g., Chomsky 1980). A theory of mind may well be the next best candidate after language for a cultural universal. Like language, intentional psychology is not an obvious inductive generalization from experience. The concepts in terms of which the explanatory generalizations are framed are not needed for mere description of the data. Does intentional psychology, like language, develop from "degraded data"? That, I suspect, depends on the extent to which rationality assumptions are built into intentional generalization. If they are, then competence will be exhibited in performance no more transparently in this domain than in language. Of course some hold that a theory-theory is right for language too. But though it is clear that a grammar must be internally represented in some form, it is very far from obvious that grammars are known, propositional, implicit theories. I agree with Gopnik in doubting that theory of mind could be modular. There is no plausible way of encapsulating the data on which intentional attributions are made. Moreover, we are talking here about beliefs about beliefs, that is, about central processes. But must a hypothesis about maturation be a modular one?

Next, symmetry. Gopnik's data bear first and foremost on competence, on capacity. But the asymmetry claim is about the psychological realization of that competence. In general, a given competence can be realized by quite distinct cognitive mechanisms; hence all the fuss about the "psychological reality" of transformational grammars. There may even be some reason to suspect that equivalent competences are mediated by distinct cognitive mechanisms. For Alexander (1987), Humphrey (1984), and others have argued that the function of knowledge of self is knowledge of others. The point of introspection is to be able to anticipate what others will think and hence do. If this very speculative hypothesis were right, though agents would not know more of themselves than of others, that knowledge would be the result of different processes.

The distinction between competence and its underlying realization here is probably picky (though what philosopher has ever resisted that temptation?); different mechanisms are unlikely to result in similar patterns of error. More seriously, the inference from development to current competence and organization seems loose. Distinct developmental pathways can lead to identical outcomes (first versus second language learning may be an example). Symmetries can break. Try this hypothesis: Gopnik is right: Three-year-old children do not have the concept of belief. They do not have a sufficiently rich theory of mind. Without the appropriate concepts, they are unable to exploit the resources that are in principle available from cognitive mechanisms designed to give access to their own cognitive states. There can be no perceptually based belief that Bessie is a cow without the concept of cows. Until Jason acquires that concept, Jason can see cows, but not see that something is a cow. Similarly, until Jason has the concept of a belief, he cannot detect his own beliefs. The acquisition of that concept might be symmetrical. Experience of others might play just the same role as experience of himself. But once acquired, the functional organization of Jason's mind induces a cognitive asymmetry between himself and others. I do not say this hypothesis is correct. I do say it is consistent with the symmetry in development that constitutes Gopnik's case.

Finally, I think there is at least some residual ambiguity about Gopnik's main thesis. I have referred to it, neutrally, as a symmetry thesis: There is no epistemically significant, or cognitively principled, distinction between knowledge of one's own intentional states and knowledge of others' states. I find Gopnik's own metaphor unhelpful. She denies that self-knowledge is "direct," or, at any rate, more direct than knowledge of others. But "directness" is an obscure notion. It might mean "encapsulated," "not mediated by intermediate representation," or "stimulus bound." These are all distinct notions, distinct, but conceptually obscure and empirically intractable.

I do not think the analogy with experts clarifies the issue. Gopnik may be right in thinking that self-knowledge is like expert knowledge. But what is that like? There is after all an asymmetry between Kasparov's knowledge of chess and his knowledge of cricket. Gopnik accepts very strong hypotheses about expert competence. For her, it comes into play only in the interpretation of perceptually based belief. So an expert's belief that a patient has cancer (sect. 7) does not count as perceptually based because some background beliefs would block the formation of that belief; a change in theory, for example. Do any beliefs (as distinct from percepts) satisfy that condition? The belief that I am sitting here writing does not; if I came to "understand" that I suffered endless delusions of being a BBS commentator . . . . Gopnik thus commits herself not only to the independence of perception from the cognitive changes that becoming an expert causes, but also to the still stronger claim that these changes have no impact on perceptually based beliefs. She buys into (e.g., sect. 7, para. 4-6) very strong "propositionalist" assumptions about the underlying nature of expert skill. Consider the skills of, say, a paleontologist reconstructing a three-dimensional form from brown smears in a rock. Perhaps the long history of learning and experience that goes into these skills affects only what happens after the formation of beliefs derived solely from perceptual processing. But you do not have to be a born-again connectionist to doubt that learning has such segregated consequences. If so, experience and learning might lead to qualitative differences in one's knowledge of self and others. Perhaps being an expert on yourself is such a difference.

Gopnik's experiments are fascinating and her conclusions plausible. But I do not think the conclusions are mandated by the experiments. I do, however, side with her on the fundamental question that divides her position from that of Goldman.

NOTE
1. I think Goldman puts these issues misleadingly. He writes of "our commonsensical understanding of mental predicates" (sect. 2, para. 2), "how people represent and ascribe mental predicates" (sect. 2, para. 2), "asking us to consider [functionalism] as a . . . hypothesis about how the cognizer represents mental words" (sect. 3, para. 2). But I do not think people represent mental predicates; they represent mental kinds. The theory-theory holds that the folk both have and satisfy a functionalist theory of mind, but not that the folk have a theory of the semantics of mental terms. They use mental terms; they need not in doing so have a metatheory of them.

Why Alison Gopnik should be a behaviorist

Nicholas S. Thompson
Departments of Biology and Psychology, Clark University, Worcester, MA 01610-1477
Electronic mail: [email protected]

[Gop] Alison Gopnik's account of privileged access as expert intuition is marvelous. Hers is the first account of consciousness I have read that corresponds to my own experience. Cognitive scientists as a rule seem to claim an infallible grasp of their own minds. Moreover, they seem to claim only an inferior access to the minds of others. I myself am often muddled about what is going on in my own mind and as often find myself in the grip of powerful, apparently "im-mediate" perceptions of the minds of others. Gopnik gives an account of consciousness for which first- and third-person and mediation and "im-mediation" are orthogonal. I am grateful to know that at least one other psychologist experiences the world in this way and grateful also for Kasparovia as a thought experiment with which to demonstrate this point. I predict that many a prominent philosopher of mind will dwell in Kasparovia before this millennium is out.

But having said this, I would like to take Gopnik to task for her renunciation of behaviorism, because on the evidence of her target article, she is a behaviorist and in renouncing behaviorism she lends her considerable authority to some silliness. She writes "I do not deny that there are internal psychological states," and a bit later, "First we have psychological states . . ." (sect. 8, para. 1-2). The notion of an internal psychological state, something that "we" can "have," is woefully confused. When we speak of machines or other systems and states, we speak of the system as being in the state. But for some reason, when we speak of psychological states such as emotions, feelings, and beliefs, we feel entitled to describe the state as being in the person. But surely this is misplaced concreteness. If my computer malfunctions I might say that it was in a malfunctioning state, but I would never say that there was an internal state of malfunctioning within my computer. Computer malfunction is a state whose specification requires knowledge about computers and their relationships to people; and while the malfunctioning state of the computer may be caused by something wrong inside it, the state itself is of the computer, not inside it. There is no way we could look inside a computer to determine definitively that it is malfunctioning, although, of course, we might discover there why it is malfunctioning.

Similarly, psychological states are states whose specification requires knowledge about people, their behavior, and their circumstances and, while a psychological state may be caused by events in the braincase or body of a person, the state itself is of the person, not inside it. We cannot look inside people and hope to determine definitively that they are in particular mental states, although, of course, we might discover there why a person was in a particular mental state. If something has to be inside something else (and I don't recommend it), then the human has to be in the psychological state - in the motivation, in the feeling, or in the belief - rather than the other way around.

This misplaced concreteness leads to yet another confusion, the notion that psychological states can lead to or explain behaviors and experiences. To say that a psychological state causes a behavior or experience is like saying that the brittleness of the glass caused its breakage. Easy breakability is a defining characteristic of brittleness and as such cannot be caused by it any more than being unmarried can be caused by bachelorhood. Brittleness is itself a lawful relation between impacts on some objects and the consequences of those impacts. The baseball causes the breakage. The physical nature of the glass causes its brittleness, that is, mediates the lawful relation between strength of impact and probability of shattering. But it is vacuous to say the brittleness causes the breakage. Similarly, emotional behaviors (for example) are constituents of emotions and as such cannot be caused by them. Emotions are lawful relations between persons, some events, and the responses they elicit. The events cause the emotional behaviors. The glands and the nervous system cause the emotion, that is, mediate the lawful relation between emotional stimuli and probability of emotional behavior. It is vacuous to say that emotions cause emotional behavior (Derr & Thompson, in press).

If Gopnik is determined to avoid behaviorism, she might consider resuscitating the New Realism. The New Realism was a philosophical movement that was organized at Harvard in the 1910s by students and associates of William James (Holt 1914; Holt et al. 1912). It held that each person's consciousness is a cross section of the world "out there" consisting of all the features of that world to which the person responds. According to the New Realists, observer and subject stand in the same relation to the world. No particular privilege is granted to first-person consciousness, since it is just one of an infinite number of possible cross sections through the real world. New Realists trained two psychologists who are important to Gopnik's exposition. One of these was J. J. Gibson, who held that perception, like the New Realists' consciousness, was a direct awareness of features of the world. The other was E. C. Tolman, who is the originator of the "theory-theory." The New Realism has had an effect on my own brand of behaviorism, one that treats mental terms as referring to different behavioral design properties (Thompson 1987; in press). For those unwilling to admit to behaviorism, a New New Realism might offer a way of talking about psychological states without locating them within the person or speaking of them as behavioral causes.

ACKNOWLEDGMENTS
This commentary was improved by discussion with several colleagues, including Gillian Barker, Michael Bamberg, Bernard Kaplan, Patrick Derr, Penny Thompson, and John Watson.

Where's the person?

Michael Tomasello
Department of Psychology and Yerkes Primate Center, Emory University, Atlanta, GA 30322
Electronic mail: [email protected]

[Gop] I agree with Gopnik's very cogent and convincing argument that persons' knowledge of their own intentional states is not in any sense primary. Gopnik's main evidence is a number of studies demonstrating that young children have no more insight into the structure of their own minds than they have into the minds of others. Adults cannot help but think they know their own minds more directly than they know the minds of others because they have become "experts" who directly perceive things in their domain of expertise - themselves. And expertise, as has been demonstrated in other cognitive domains, engenders a bad case of source amnesia for the long constructional process.

My only complaint about the target article is that in all the talk about representations, the "theory-theory," beliefs, expertise, and the like, the fact that what we are really talking about is knowledge of persons - of which the self is one instance - gets a bit lost. I would contend that Gopnik's basic argument can be strengthened considerably by taking account of the social nature of the knowledge of self and other persons (see Hobson 1987, on the concept of person). The basic point is that it is knowledge of others that is primary in human development, both ontogenetically and phylogenetically.

Ontogenetically, Gopnik says very little about the years of life prior to the emergence of a "theory of mind." But infants display a sophisticated knowledge of the intentional states of others (albeit not all kinds of intentional states) well before they show evidence of any kind of self-knowledge. At around 9 months to 1 year of age, for example, human infants engage in a variety of behaviors relying on their knowledge of the intentions of other persons toward the world: They look where others are looking (joint attention), they take on others' emotional states toward outside objects and persons (social referencing), they imitate the object-directing actions of others, and they actively direct the attention of other persons to aspects of their shared experience through various nonlinguistic gestures. Infants do all of this well before they refer to themselves by name or with a pronoun of any kind, before they recognize themselves in a mirror, or before any other measures of self-knowledge of which I am aware. One could easily imagine a scenario - which I presume is something like what Gopnik intends - in which knowledge of other persons combines with the categorization of self as a person to forever yoke the two kinds of knowledge.

I have recently put forward an ontogenetic account of this yoking that goes as follows (Tomasello, in press). Before 9 months of age the infant can engage in and enjoy face-to-face interactions with other persons, as well as "face-to-face" interactions with objects. When these come together at around 9 months of age into what Trevarthen calls secondary intersubjectivity (Trevarthen & Hubley 1978), the infant can follow into the adult's focus on an object and can even direct the adult's attention to objects by pointing, showing, and the like. The crucial point is that after this triadic awareness has been achieved, the face-to-face interactions of infants with other persons take on a new meaning. The infant is now for the first time monitoring the adult's attention to the outside world, but in this case that outside world is the infant herself. In true Meadian fashion, this is the looking-glass self in which we only know ourselves through our simulations of some outside perspective - that is, the perspective of other persons, either real or imagined.

I would also argue that knowledge of other persons is phylogenetically primary (cf. Tomasello et al. 1993). The growing consensus among behavioral biologists is that what is unique about primate intelligence, including human intelligence, is not its adaptations to the physical environment, but its adaptations to the social environment. Primates of all types engage in all kinds of behaviors demonstrating their sophisticated knowledge of the behavior of conspecifics - forming alliances with them, communicating with them, deceiving them, and the like (e.g., see Byrne & Whiten 1988; Cheney & Seyfarth 1990; de Waal 1982). [See also Whiten & Byrne: "Tactical Deception in Primates" BBS 11(2) 1988 and Cheney & Seyfarth: Multiple book review of How Monkeys See the World, BBS 15(1) 1992.] It is not clear precisely how one might test the self-knowledge of these animals, but the attempts using mirror self-recognition are uniformly negative in many monkey and some ape species who nevertheless display very sophisticated social intelligence vis-a-vis conspecifics (Gallup 1986) - and there are no other types of evidence I am aware of for the reverse in these species. An additional point is that social expertise has obvious adaptive and reproductive advantages involving both cooperation and competition, whereas self-knowledge does not seem to fit as neatly into evolutionary scenarios involving adaptive significance.

I would thus like to bolster Gopnik's thesis by arguing that processes of social cognition are most directly adapted for understanding the intentional states of others. These social-cognitive processes are then used in acquiring all forms of self-knowledge and self-reflection, especially those involving the mind. Just as Wittgenstein pointed to the impossibility of a private language once its deeply social nature is understood, so too the idea of a private self is unthinkable without some mechanism by which the individual can get outside itself to look at itself as an object (James's "Me"). Self-knowledge is thus a phylogenetic byproduct, an evolutionary "spandrel" as it were, of the evolution of social intelligence, and self-knowledge in human ontogeny depends on some prior understanding - different at different levels of self-understanding, perhaps - of the intentional states of other persons.

Common sense, functional theories and knowledge of the mind

Max Velmans
Department of Psychology, Goldsmiths' College, University of London, New Cross, London SE14 6NW, England
Electronic mail: [email protected]

[Gol, Gop] Conscious experiences remain a puzzle for those who wish to understand the mind purely as the functioning of a material system. Without conscious experiences, how could we know our own mental states or come to understand the mental states of others? Goldman argues persuasively that philosophical accounts need to be psychologically plausible - and the qualia of experience are intrinsic to any plausible account of how our own mental states are known. Functionalist accounts of the mind that deal only with causal relations between inputs, mediating processes, and outputs fail to capture the "itch" of itchiness, the "ache" of headache, and so on. In Velmans (1991a) I have argued similarly that a purely functional account of the mind can never be complete. But it does not follow that functional accounts have no place at all. As Aristotle saw, much of what we normally think of as "mental" relates to capacities or modes of functioning - exemplified in descriptions such as intelligent, forgetful, rational, empathetic, and so on. Consequently, any scientific psychology that aspires to inform or improve on folk psychology needs to account for mental capacities or modes of functioning and experience (Velmans 1990).

In spite of behaviourist claims to the contrary, the question of how mental functioning relates to experience has long been a focus of psychological research (in studies of perception, psychophysics, imagery, dreaming, and so on). The studies reviewed by Gopnik concerning how children acquire knowledge of their own mental states illustrate the increasingly subtle nature of this research programme. Philosophers have long wondered how we can know "other minds," but they have often assumed we have direct experience of our own (this appears to be Goldman's position). Gopnik shows, however, that knowledge of some mental states (intentional states such as belief and desire) has a developmental history. Rather than being "directly given" in experience (as common sense would suggest), they depend on the child's maturing theory of the mind. The studies reviewed by Gopnik provide a telling demonstration of the way classical philosophical questions can be teased apart by empirical research. However, whether they support the "theory-theory" as opposed to the "commonsense" theory of how children know their own mental states is a moot point.

It all depends on what one means by "directly given" or "direct access." For Gopnik, this means having an experience of one's own mental states, as opposed to having a theory about them. But Gopnik makes it clear that she is not suggesting that children have an explicit theory of their own minds. Rather, a theory is implicit in their judgements and behaviour. But whether the theory is explicit or implicit (tacit) is central to the issue. As Goldman points out, beliefs and desires are to varying degrees manifest in experience (they have their own phenomenology). Beliefs, for example, may be manifest in awareness not just as feelings (of believing or disbelieving something) but more explicitly, in the form of "inner speech." We have very little first-person access to the antecedent processes that produce inner speech or other manifestations of belief and desire, but it is reasonable to assume that such "inner" experiences, like exteroceptive experiences, may be cognitively driven as well as data-driven. For example, a belief, manifested as "inner speech," might result in part from some tacit theory. If so, changes in tacit theory might result in changes in how beliefs are experienced. In these circumstances, the theory-theory and the commonsense theory are not in opposition. Children's reports of their beliefs and desires might be theory-driven while at the same time being reports of what they experience, as common sense suggests.

None of this requires that children's knowledge of their own tacit mental states be "direct" in the sense of incorrigible. The cognitive or conative content of tacit mental states may be manifest in experience in only a partial, approximate way; translation of such experience into overt, verbal descriptions may require a further act of interpretation. In this, first-person knowledge resembles third-person knowledge of the mental states of others. However, this does not alter the fact that the sources of information for one's own mental states and those of others are different. What children (and adults) know of their own mental states comes primarily from "inner" experience; the mental states of others must be inferred from external sources such as verbal reports and behaviour.

BEHAVIORAL AND BRAIN SCIENCES (1993) 16:1 85


Commentary/Gopnik/Goldman: Knowing our minds

Nor are the parallel developmental changes in children's judgements of their own mental states and those of others inconsistent with the commonsense theory that the source of such judgements is first-person experience, extrapolated to others when they manifest appropriate behaviour (contra Gopnik, sect. 10). If children's judgements about others derive primarily from judgements about themselves, one would expect judgements about others to change in tandem with self-judgements.

If these arguments are correct there may be no inconsistency between Gopnik's developmental evidence and Goldman's case that first-person experience is intrinsic to (explicit) knowledge of one's own mental states, although the (tacit) theory-driven nature of some beliefs and desires would require more of a functionalist analysis of (tacit) mental states than Goldman would allow.

Goldman also invites a fuller discussion of how consciousness might figure in "the causal network of the mind" (sect. 11). He suggests that the prominence of phenomenology (in his analysis) seems to be challenged by my own BBS target article (Velmans 1991a) in which I argue that conscious phenomenology does not enter causally into human information processing. My conclusion, however, was that human information processing and other functional models view human beings only from a third-person perspective.1 Viewed from a first-person perspective, conscious phenomenology is central to any causal account of the mind. First-person accounts and third-person accounts are complementary. Neither perspective is automatically privileged. How first-person causal accounts relate to third-person accounts is dealt with in Velmans (1991r, sect. R9.2).

Although accounts of first-person experience can sometimes be translated into third-person accounts (of functioning, processing, etc.) experiences cannot be reduced to functioning. As Goldman himself is at pains to argue, a functionalist account inevitably misses something intrinsic to mental states - how they appear from a first-person perspective.

Given this, it is curious that Goldman supports the causal model suggested by Block (1991). Block reverses the conventional wisdom in cognitive psychology that focal attention is a necessary condition for consciousness, suggesting instead that information has to enter consciousness before it can be subject to focal-attentive processing (his executive system).2 For Block, phenomenal consciousness occupies a central "box" within an information processing model of the mind. How conscious phenomenology could be central to information processing without being a form of information processing is far from clear.3

Consequently, Goldman faces a dilemma. If phenomenal consciousness just is a form of information processing, then why is it not reducible to (a special set of) causal relationships within a functionalist analysis of the mind?

NOTES
1. See section 9.3 in the target article, and sections R8 and R9 of the response.
2. The many difficulties for Block's position are outlined in Velmans (1991r, sect. R5.4).
3. The possibility that consciousness might be "supervenient" or an "emergent property" would not affect this point. If it is central to information processing, it would still have to be a supervenient or emergent form of information processing.

Three questions for Goldman

Andrew Woodfield

Department of Philosophy, University of Bristol, Bristol BS8 1TB, England
Electronic mail: [email protected]

[Gol] 1. S is in token state m at time t, m belongs to a mental kind K, and S can/does judge straight off that m is a K. The task is to analyse and explain S's self-knowledge. Goldman argues that S recognizes m's K-hood by internally perceiving m's intrinsic qualities, these being a good guide (to S) of K-hood.

For how many mental states does he think this model will work? Prima facie it will falter in any of the following conditions: (a) where m has no phenomenology at all, (b) where m has a qualitative aspect, but it is insufficient, or insufficiently distinctive of state-type K, to enable S to judge that m is a K on that basis alone, and (c) where m has a phenomenology, which is in fact distinctive of K's, but S's grasp of the concept of Ks is such that S is unskilled at applying it to instances on the basis of phenomenological cues. (My grasp of the concept of pins and needles is inadequate in just this way.)

In such conditions, if S manages rapid, successful self-ascription he must draw upon information additional to the information currently provided by m.

This fact carries implications for the "simulation" model of other-ascription. The ascriber must sometimes draw upon evidence from outside his simulation, because the simulating exercise sometimes yields a (simulated) m-state which is phenomenologically insufficient to fix the other person's state as being a K.

S's reliability at self-ascribing K-states is relative to the kind that K is. One relevant dimension is taxonomic specificity. S may be good at recognising that he has a pain (genus), but not so good at diagnosing which species it belongs to. When I think that p, I can tell that my attitude is of the genus thinking, but I am not always sure if it is a believing (even when it determinately is). Phenomenology is enough for a rough classification, but not enough to ground a more subtle classification. Expert introspectors rely not just upon inner observation of their current states but also on supplementary evidence about themselves.

2. The second functionalist model in its qualia-friendly version (sect. 4) looks good as a first step toward reconciling functionalism about mental concepts with the immediacy of self-ascriptions. It needs to be developed further. It would be even better if it built privileged access into the very analysis of certain mental concepts (e.g., the concept pain) by saying that to know what pains are, to have the concept pain, one needs to know that they are immediately felt. In this analysis, a person who had never felt pain could have the concept of pain and could ascribe pains to other people correctly. He is unacquainted with their feel, but he knows that there is a way that they feel.

Goldman says this model has a problem with the "learning stage." Could he please elaborate?

He says that the difficulties that hinder S from getting knowledge about the relational properties of his current pains are present in the learning period. But surely the difficulties are mitigated if S has time to reflect upon the events surrounding his previous E-type experiences. Surveying these in retrospect, S might discern a common pattern in, say, four out of every ten: "Damage to my body occurred, then m occurred, then I yelled." S extrapolates that E-type events normally have similar causal properties. Like all inductive leaps, the data underdetermine the generalisation; but S takes the plunge anyway. For various reasons S discounts the six out of ten cases where he did not notice such a pattern. He was not paying much attention in some cases; the pattern might have been present but he cannot remember. In others, he remembers suppressing a yell; the pattern would have been there, if he had not blocked it.

On the next trial, having accepted the rule of thumb that Es are normally Fs, S feels a sudden E-event, ignorant of its causes and effects. He infers that it has F, and categorizes it as an F-type. This is his basic-level conceptual categorization of this experience. There is no reason to say that he first categorizes the experience conceptually as an E-event. He may lack the concept of E-event. The generalisation that Es are Fs was represented subconceptually.

3. In section 6, Goldman says that the classical functionalist account of S's judgement that he is in pain fails to explain how he arrives at this classification. This is true. In fact, the classical functionalist account implies that becoming aware that one is in pain is not a process of classifying, in the usual sense, at all; it rejects the model of "IR/CR matching" for cases of immediate self-knowledge.

Suppose that m does have the property of producing a certain m-referential cognitive effect. Suppose that functionalists exploit this fact. Are they obliged to spell out how m produced that effect? Surely not. Their aim is to specify the distinctive functional properties of various mental states; it is not part of their brief to specify causal routes, processes, or mechanisms as well.

It is a dangerous ploy for Goldman to insist that theories must specify processes, because he then imposes the same obligation upon his own theory and every other theory. Under such a regime, if you think that self-ascribing involves matching an IR to a CR, you are required also to spell out how the matching is done. If you think that self-ascribing involves matching a token mental state directly to a CR, you must explain how that is done. Goldman favours one or other of these hypotheses, but he does not explain how the operation is carried out in either case.

In truth, there is no such obligation. No methodological rules in cognitive science favour the hypothesis that self-knowledge is inner object recognition over the rival functionalist hypothesis. Both are equally OK methodologically. Two things surprise me about Goldman's espousal of the former, though. First, he says nothing about the main twentieth-century philosophical assault on this position, viz. Wittgenstein's attack on "the model of object and name" in Philosophical investigations. Second, he buys into the old AI-style picture of K-categorisation as a matter of matching an internal object against an internal representation of the conditions of K-hood. Cognitive science is even now overthrowing this picture!

Intentionality, theoreticity and innateness

Deborah Zaitchika and Jerry Sametb

aDepartment of Psychology, University of Massachusetts/Boston, Boston, MA 02125 and bPhilosophy Department, Brandeis University, Waltham, MA 02254
Electronic mail: [email protected] and [email protected]

[Gop] As we understand it, these are the central claims of Gopnik's target article. Adults "know" things about the mind that children do not grasp until they are between 3 and 4 years old. This knowledge is not "introspectively given" - one does not come to it simply by attending to one's beliefs. Instead, the child must figure it out and it becomes part of the child's "theory of mind." Some of the things younger children don't know: that beliefs have different sources, that beliefs can change, and that beliefs can misrepresent the way the world is. These claims may be right, but we have considerable misgivings about the way they have been made.

Intentionality. First, we are puzzled by Gopnik's deployment of the concept of intentionality. The empirical studies, to which Gopnik has made a significant contribution, have uncovered interesting phenomena about children's belief attributions and interesting arguments have been put forward regarding children's understanding of representation and misrepresentation. Gopnik, however, seems to think that these phenomena support her contention that intentionality is a theoretical construct, invented to explain certain mental phenomena. But what concept of intentionality is at work here? If by intentionality she means the "aboutness" of the mental, then the claim is scarcely believable. Surely children understand from the start that their beliefs are about the world. They might have a bad theory or no theory at all about what their beliefs are; but what would it mean to say that a child has a belief - for example, that her Mommy is downstairs - and does not realize that the belief is about Mommy and that Mommy is in the world? To not realize that beliefs are intentional is tantamount to construing them as being on a par with headaches or itches, as having no "aboutness." Does Gopnik suggest that children go through such a stage? We know of no evidence that would support such a claim.

What has gone wrong? Our diagnosis is that Gopnik has mistakenly run together the intentionality of belief with the fact that beliefs are representational, and this leads her to count such facts as that beliefs may have many different sources as part of the intentionality of the mental. Perhaps her underlying reasoning is this: Studies show that kids have odd problems with the appearance-reality distinction and with the attribution of false beliefs. So they don't grasp the possibilities of mental misrepresentation: They don't understand that beliefs and other mental states are representational: They must not have a representational theory of mind: They don't have an intentional theory of mind: They don't believe that beliefs, and the like, are about anything (the world). But this chain of reasoning is flawed. Although most cognitivists would find the position at least odd, one can consistently accept the intentionality of the mental and deny that there are mental representations. Representationalism would explain the intentionality of the mental; it cannot be identified with it. (The logic of these intertwined positions has been discussed extensively by Fodor and others.)

Gopnik's subsequent discussion of where knowledge of intentionality comes from (direct access or constructed theory) suffers because of this conflation. She argues, for instance, that contrary to common sense and philosophers like Goldman and Searle, we do not have direct experiential knowledge of the intentionality of belief. But given her too broad construal of intentionality, this translates into the claim that introspection does not automatically reveal the sources of belief or the possibility of false belief and misrepresentation. But this is not that radical a claim; why think that common sense (or Searle and Goldman) would oppose it? You can champion the qualitative aspects of belief without assuming that these qualia inform you of the fact that beliefs can be wrong. Given her broadened notion of intentionality, Gopnik can too easily show that we do not have immediate access to the intentionality of the mental and therefore conclude that our knowledge of our own mental states is theoretical. But this conclusion is unwarranted. It could be that we have direct access to some aspects of our mental states (their intentionality), and that others (e.g., the possibility of misrepresentation) are grasped only as the result of reflection, inference, learning, or whatever.

Empirical evidence. Gopnik makes an important contribution to the empirical literature in showing that 3-year-olds fail to report their own immediately prior false beliefs correctly just as they fail to attribute false beliefs correctly to others. The question remains, however, as to just what this failure means. Gopnik considers the possibility that children are "reality-seduced"; that is, they may feel compelled to report the real state of affairs even though they have the conceptual understanding necessary for false-belief ascription. Gopnik dismisses this possibility on the grounds that it predicts children's failure on the pretense and image tasks, and she is right that a general strategy of reporting reality would predict this. Instead, she favors a conceptual account along the following lines: The young child, failing to understand the representational relation of beliefs to reality, believes that all beliefs are true. Pretenses and imaginings, in this view, are considered by the child to be divorced from reality altogether; hence, these are easy for the child to reason about. There are, we think, several problems with this view. First, as we have argued elsewhere (Zaitchik 1991), the specific theory Gopnik attributes to the child (Chandler's [1988] "copy-theory") clearly is a representational theory. According to the copy-theorist, objects and events in the world "are bullets which leave an indelible trace on any mind which is in their path"; in short, they leave copies. It is true that the copy-theory specifies no mechanism for acquiring a false belief that is false at the moment it is acquired (as in the candy box task or appearance-reality task), but it certainly does provide a mechanism for acquiring a true belief that later becomes false because of a change in the world (as in the original false-belief task [Wimmer & Perner 1983] where an object's location is changed in the actor's absence). Indeed, if the child thinks one has to be "in the path" of an event to "get the copy," why does the child fail this sort of false-belief task too? After all, it is clear that the deceived actor was in the path of a different state of affairs; as long as our copy-theorists can remember who was where when - and the empirical evidence shows that 3-year-olds can - they should correctly attribute a copy of that former state of affairs.

Let's assume, however, that some form of a nonrepresentational theory - or at least a theory which wouldn't allow for the acquisition of false beliefs - could be attributed to the child (and this is no small assumption, since this theory would also need to explain children's successful reasoning about pretenses and imaginings). Isn't there a weaker claim that would account for children's problems on misrepresentation tasks just as well? In particular, might children just find it a useful and simple strategy for attributing beliefs to consider the true state of affairs? If most beliefs are in fact true, this would be a useful heuristic indeed. Our proposal, then, is that children hold a "veridicality assumption" about beliefs; that is, the default setting for beliefs is "true." The important point here is this: It is not children's theoretical commitments that cause their failure on the false-belief tasks; it is their failure to access or use the theoretical knowledge they have (Zaitchik 1991). They can attribute false beliefs, but in these tasks, they do not. In our view, children in the false-belief task are misled by their generally useful heuristic. It follows then that if the heuristic were not available - either because the true state of affairs is unknown to the child or because it does not uniquely specify a response to the test question - the child would succeed in attributing a false belief. The recent literature on false belief does provide a few striking examples of 3-year-olds' success under just these conditions (Wellman & Bartsch 1988; Zaitchik 1991). Our hypothesis would also accommodate the child's successful reasoning about pretenses and imaginings. Since these representations are not generally true to reality, there would be no such default setting.

A final point about matters empirical: Gopnik would be right to point out that our view does not explain children's failure on the sources task but, as far as we can see, neither does the account she defends. Clearly one could hold the strong view that all beliefs are true and still conceive of multiple channels for receiving those truths. There is nothing in any of the accounts considered in the target article that predicts the child's failure on the sources task. This is presumably because the understanding probed in the sources task is different from the knowledge tapped in all the misrepresentation tasks.

Theoreticity. Psychologists working on the child's theory of mind have been debating whether the child's conception can profitably be viewed as a theory. The correct answer is (as usual): In some ways it can be and in some ways it can't (for more discussion, see Samet 1993, sect. 6). But Gopnik takes the theoreticity claim too far and imports what we think are irrelevant features of scientific theories into the theory-of-mind context.

For example, Gopnik appears to take it for granted that the theory-theory is committed to the view that our conception of the mind is refutable and revisable (sect. 2, para. 3) and may one day be replaced. Perhaps it will be, but being a theory-theorist does not commit you to this possibility. One might have an "undefeatable" folk theory of x that will "do its thing" no matter what you may come to believe about x. A familiar example: If I tell you that the phrase in parentheses at the end of this sentence is not really English even though it looks like it, your parser might do its work anyway ("big dog"). The same general point holds for Gopnik's related claim that theory-theorists hold that the theory of mind is constructed from evidence. A theory might be innate, or acquired via implant, or by swallowing a pill, and still be a theory in some important structural or functional sense.

The most striking, and for us the most contentious, application of the theory-theory is the claim that beliefs and desires are theoretical entities and not experienced. Gopnik and others think children postulate beliefs and desires to play specific explanatory roles, in the way that cosmologists postulate the existence of black holes. But we find this claim implausible and unnecessary. Consider the analogy to a physical theory. Matter, motion, and so on, we might say, are all concepts that find their place in a commonsense (and ultimately scientific) theory of the world. But do we want to say that for the child, matter and motion are postulates, that they are concepts of unobserved things and events? Again, perhaps we do want to say this, but the theory-theory does not commit us to it. What, in Gopnik's view, do we observe? Sense-data?

Because she takes theoreticity too literally, she also oversimplifies the bearing of empirical findings on competing philosophical conceptions of mind and intentionality. She takes Dennett's intentional stance to imply that our understanding of the mind develops by enculturation. Why should a "stancist" accept this implication? Why can't we have an innate mechanism that applies the intentional idiom and machinery? Searle's view about intrinsic intentionality is treated similarly. Searle holds that it is a biological fact about the brain that some of its states are intentional. Gopnik is wrong to conclude that such a view implies that our knowledge of intentionality is a matter of direct first-person apprehension.

In the end, the concept of theoreticity supports what we think is a false dichotomy in Gopnik's analysis. For her, mental concepts are either directly apprehended in introspection or they are theoretical postulates "invented" to explain the evidence. Since the adult's concept of belief transcends the directly apprehended, she concludes that it is a theoretical postulate and that we do not experience our beliefs. Her conclusion may be correct, but the argument by elimination will not work. After all, we experience apples, even though our concept of an apple transcends our experience. The existence of a "folk botany" does not imply that we do not experience apples. (For further discussion of this point, see Samet 1993.)

Is the theory of mind innate? Gopnik claims the empirical evidence she surveys does not support the view that a theory of mind is innate and the fact that an understanding of intentionality only develops between the ages of 3 and 4 counts against the nativist view. In our view, the empirical evidence does not have much to say one way or the other; we are just on the road to investigating the innateness claim in this domain empirically. Still, it is clear that innate elements need not express themselves at birth; moreover, the fact that something develops at a critical period should be seen as positive, not negative, evidence for innateness. Given all the evidence we know of, the nativist can argue that the theory of mind is (to some degree) innate and that it is activated between the ages of 3 and 4. This hypothesis leaves us with the interesting question of what happens between 3 and 4. Is the innate theory of mind triggered somehow? Is it a matter of maturation? Is there some enabling condition that is only met at that point? Indeed, the nativist might go on the offensive and challenge Gopnik's claim that the notion of intentionality is constructed. What support is there for this conjecture? How is such a construction possible for the child? Using what construction materials? (See Samet 1993 for discussion of a similar "constructivism" in Wellman 1990.)

We suspect that part of what leads Gopnik astray about the innateness issue is her interpretation of what it means to say that our mature understanding of mind is a theory. If it is a theory, she seems to think, then it must have been arrived at via theorizing. If it were innate, then there would be no need for such theorizing. So it cannot be innate. Both premises of this argument are suspect. We have already discussed the misinterpretation of theoreticity that underlies the first premise. The second premise is also problematic, since it assumes without argument that there must be something automatic or immediate in the application or development of what is innate. Plato in the Meno (which Gopnik cites approvingly) argues for the innateness of mathematical theorems (among other things), and these are typically hard-won intellectual achievements: one must try to develop a proof to reach the innate knowledge.

The psychologist's fallacy (and the philosopher's omission)

Philip David Zelazoa and Douglas Fryeb

aDepartment of Psychology, University of Toronto, Toronto, Ontario, Canada M5S 1A1 and bDepartment of Psychology, New York University, New York, NY 10003
Electronic mail: [email protected]

[Gol, Gop] Common sense need not be nonsensical. Goldman makes a strong case that access to qualia plays an important role in our recognition of our psychological states. Goldman's argument stands in sharp contrast to Gopnik's attempt to dismiss the folk-psychological notion of privileged first-person access to our own beliefs.

On the surface, Gopnik's case seems attractive. First, she seems to indicate that if (a) "we know our own beliefs and desires directly" (sect. 1, para. 1), then (b) we should have direct access to a particular property of them, namely, their intentionality. In addition, she claims that if we have direct access to the intentionality of our mental states, then (c) children should perform better on tasks that require them to verbalize their changed beliefs than on tasks that require them to talk about the false beliefs of other people. For Gopnik, "If [children] knew anything at all about their own minds, then what they knew ought to be substantially correct" (sect. 4, para. 2). Finally, she concludes that because children do not perform better on representational-change tasks than on false-belief tasks, we must not have direct access to the intentionality of our mental states. She also concludes that because we do not have direct access to the intentionality of our mental states, we do not have direct access to our mental states (although we may have direct access to our psychological experiences): Our commonsense belief that first-person knowledge of our psychological states is direct is nonsense.

Each of her conditional premises is moot, and her rejection of the claim (c) that children do better on "self" than "other" tasks is at best premature - it is a confirmation of the null hypothesis based on a failure to find a difference: Recent evidence supports an alternative interpretation of the correlation between representational change and false belief.

All of the theory-of-mind tasks that children have difficulty with require them to reason first from one perspective and then from another, incompatible one. The complexity of these tasks alone, and the representational flexibility that they demand, may account for children's difficulties on them. In effect, these tasks require children to use embedded rules flexibly: If x, then if y, then z.

We have attempted to assess directly whether the complexity of the tasks per se is what makes both representational change and false-belief judgments difficult. We constructed a number of problems that had a logical structure similar to standard theory-of-mind tasks but did not involve reasoning about mental states (Frye et al. 1992; Zelazo et al. 1992). In one task, children were presented with colored shapes that would be categorized differently if one were sorting by color or by shape. Children were then required to sort cards first by (say) color (i.e., they were told, "If it's red, then put it here, but if it's blue, then put it there") and then by (say) shape. Another task involved a physical causality mechanism: a ramp with two input holes and two output holes. In one condition, marbles rolled straight down the ramp. In another condition, they rolled across the ramp, so that a marble inserted on the left would exit on the right. Children were asked to predict the behavior of marbles first under one condition and then under the other.

On the card sort, 3-year-olds were able to use the first set of rules they were given but could not switch between rule pairs. Likewise, in the causality task, they continued to make predictions based on one condition, despite being told explicitly that the condition was switched, and despite being told the relevant rules on each trial. On both tasks, 4- and 5-year-olds performed well. Analogous age-related changes were found for the standard theory-of-mind tasks, and there were positive correlations among tasks, with age controlled. The standard theory-of-mind tasks and the tasks we created are similar in that they require children to use a higher-order rule in order to select the condition (or perspective) from which to reason: (1) In the false-belief task, for example, they need to determine whether to make judgments from their own or another's perspective; (2) in the representational change task they must choose whether to reason from their present or their previous point of view; (3) in the card sort, they must decide to use the color rules or the shape rules; and (4) in the causality task, they need to know whether the marbles will roll across the ramp or straight down it. Incompatible rule pairs are nested within each of the two conditions (or perspectives). The logical structure of these tasks differs from the logical structure of the tasks on which 3-year-olds perform well. The latter tasks typically require children to use a simple pair of rules (e.g., "If before, then pretended hot chocolate, but if after, then pretended lemonade").

We concluded from these results that children acquire the ability to use embedded rules at about 4 years of age and that this domain-general cognitive change underlies increases in performance on standard theory-of-mind tasks (cf. Frye, in press). Performances on the representational-change task and the false-belief task are correlated because the tasks have the same logical structure. At the least, it is not possible for Gopnik to conclude from failures to find differential performances on these tasks that children do not first know about the intentionality of their own mental states.

More important, however, is the theoretical relation between direct access to psychological states and knowledge about intentionality. Although people may well have access to their psychological states, just as they have access to a Jamesean stream of consciousness, it seems trivially true that knowledge about intentionality is theoretical in the sense that it involves the postulation of a relation between entities. Intentionality consists in the relation between the form of a psychological state and the content of that state (i.e., what the state is about). To have intentionality is one thing, to know about it is another. Gopnik commits what James (1890) calls the psychologist's fallacy, the confusion of the psychologist's standpoint with that of the mental fact about which they are making their report. James writes, "What the thought sees is its own object; what the psychologist sees is the thought's object, plus the thought itself, plus possibly the rest of the world" (p. 197). He cautions, "We must avoid substituting what we know the consciousness is, for what it is a consciousness of . . ." (p. 197). When one simply has a thought, one is aware only of the content of the thought, not the form of the thought, or the relation between the form of the thought and the content of the thought, or the relation of the thought and its content to the world, or the relation of the thought to some "I," or Self. One can be aware of a painting, say, without having the additional reflective awareness of being aware of being aware of a painting (Zelazo 1992).

To understand the complex relation that is intentionality (and the relation between psychological content and the world), one must not only have an instance representation of a mental state (an IR, in Goldman's terms) but must also consider the relation between this IR and some other entity. Contrary to Gopnik's claim, it is the relation of one's psychological state to the content of that state, and the relation of the content to the world, and our idea of the world (considered as a thing in itself - e.g., as the thing that gives rise to our perceptions), that are hypothetical entities, not one's own direct awareness of the contents of one's mental states. The latter awareness is directly given - in fact, it is the only thing of which one can ever be directly aware: The rest is noumenal.

Gopnik's instantiation of the psychologist's fallacy most likely contributes to her misreading of Searle (1990). Searle claims that states that are not at least in principle accessible to consciousness lack intrinsic intentionality, and Gopnik takes this to mean that "the intentionality of psychological states is known directly in the first-person case" (sect. 2, para. 5). Conscious brain states surely are intentional (Zelazo & Reznick 1990); it is a different thing altogether to claim that their intentionality is known.

So, 3-year-olds may be directly aware of the content of their mental states. And these mental states are necessarily about something, and so are intentional (Brentano 1973). When 3-year-olds report that they think there are pencils in the box, they directly access and report the contents of a mental state. What they do not understand is that their mental states have content, and that there is some other reality (defined in terms of the noumenal world or in terms of other people's mental states) that the content of their mental state is a representation of. This is an understanding of a higher-order relation among terms (some of which are hypothetical) that is hypothetical, and it is a late development. The logical structure of tasks that assess this understanding is the same for tasks that require reasoning about one's own mental states and about other people's mental states. As such, it is not surprising that performance on these tasks is correlated.

There are differences, however, between reporting on one's mental states and reporting on other people's mental states. When one reports on another's mental states, one clearly does not have direct access to those states. Contra Goldman, one uses knowledge of functional relations to infer those states. Goldman is right that one may rely on simulation and on analogy to one's own IRs to understand another's mental states. However, simulation and analogy plainly rest on functional information. How does one know which situation to put oneself in to simulate another's mental state? One must identify the relevant input-internal state-output relations and imagine oneself as participating in those relations. The need for functional information becomes clearer when one cannot duplicate another's situation exactly, as when one says, "I have never experienced being discriminated against on the basis of race, but it may be something like . . . ," drawing an analogy to a similar (in the relevant functional respects) situation. Direct access to one's own past mental states is necessary here, but so is functional information and a certain level of logical sophistication.

A complementary case may be made for self-attribution. (Note that Goldman's account must apply to the self-attribution of past mental states: Introspection is really retrospection, because the moment of which we speak is already past.) Although we have direct access to some of our past mental states, access to others of our mental states would seem to rely on the use of functional information to set up the conditions for recall. This is analogous to setting up the proper simulation of another's mental states. Accessing the appropriate memory in representational change tasks requires (1) being able to handle embedded-rule reasoning, (2) using the proper functional information to access just the relevant memories, and (3) being able to exhibit a certain amount of representational flexibility (if the content of the present mental state is more salient, children may focus on this). In some cases, functional information overrides phenomenological information, so that one may conclude that one must have thought one thing, even though one remembers thinking another.

EDITORIAL COMMENTARY

One can argue about whether or not I have a "what it's like to believe that the cat is on the mat" quale, and whether or not having that belief is uniquely accompanied and identified by having that quale (I happen to think it is). But it seems much less disputable that that belief must at least be grounded in having qualia for "cat" and "mat," for otherwise it would be hanging from a contentless formal skyhook. Now it might be that as a matter of fact its actual and potential causal interactions with the world ground that formal skyhook functionally (I happen to think that's true too), but unless that functional grounding also gives it qualitative content, it is not clear why the belief should be regarded as mental at all. Purely functional "beliefs" can be had by coat hangers, too. Or, to put it another way, surely it cannot be a mere coincidence that the only creatures that have beliefs are also the only creatures that have qualia, and that qualia seem to be the only thing that puts mental flesh on the formal/functional bones of what those beliefs are about.

Author's Response

Theories and illusions

Alison Gopnik

Department of Psychology, University of California, Berkeley, CA 94720
Electronic mail: [email protected]

R1. Theories and illusions. My commentators raise a wide variety of issues ranging from highly empirical questions about the details of various experiments to highly abstract philosophical concerns. I have organized these into several general topics, hoping not to minimize the subtlety of particular arguments. I have tried to deal first with a number of technical questions about the experiments, many of them from other developmental psychologists. Then I have moved on to some more abstract conceptual and definitional questions and philosophical arguments. Finally, in the last sections I have tackled what seem to me the most interesting questions, substantive theoretical ones about the nature and origins of our understanding of the mind.

R2. Information-processing alternatives redux. At least one prediction of the target article was completely confirmed: Many commentators have alternative information-processing-like explanations for the children's difficulty with the false-belief task. These include Butterworth, Campbell & Bickhard, Dittrich & Lea, Goldman, Gurd & Marshall, Josephson, Leslie et al., Plaut & Karmiloff-Smith, Russell, Siegal, Zaitchik & Samet, and Zelazo & Frye. I am not too perturbed by this, however, for two reasons. First, all these commentators have different alternative explanations for the false-belief task. If there is an information-processing alternative, it certainly is not obvious. In contrast, the "theory-theory" accounts advanced by myself and others (such as Flavell [1988], Wellman [1990], and Perner [1991b]) have shown a striking degree of convergence (to the dismay, I sometimes suspect, of their independent-minded advocates).

It is significant that none of the commentators take up the challenge raised in the target article, to give an account that will not only deal with the false-belief task but explain the whole pattern of evidence, easy tasks as well as difficult ones. Plaut & Karmiloff-Smith describe only too accurately the tendency of developmental psychologists to fixate on a particular task; I fear this has indeed happened with the false-belief task and "theory of mind." Any reasonable explanation would have to give a coherent account not only of this task but of the other tasks as well. Some of the commentators attempt to deal with the pretense and imagination tasks, but none of them deals with other tasks that are solved by 3-year-olds, particularly our level-1 perceptual task (in which the children are asked about what they saw on the other side of the screen) and Wellman's "discrepant-belief" task. (In this task children are asked whether Johnny, who has only looked in the desk, will know that there are pencils in the desk and on the shelf; Wellman 1990; Wellman & Bartsch 1988.) Although these tasks are structurally similar to the "false-belief" tasks, they require only a "copy" or "Gibsonian" theory of the mind. In order to solve these tasks, the child only has to know that someone may fail to represent an object, not that they may misrepresent the object. Nor do the commentators deal with the other tasks that show changes between the ages of 3 and 4, such as the O'Neill et al. (1992) sources task, our own sources task, and the Moore et al. (1990) subjective probability task. The desire and intention tasks also need to be considered. (Mele raises a number of extremely interesting questions about these tasks; for a full discussion of the tasks and their relation to the representational model of mind, see Astington & Gopnik 1991b.)

Zaitchik & Samet wonder why I think the changes in the sources task reflect the same theory change as those in the false-belief task. The importance of the sources task is that it reflects the causal mechanism advanced by the representational theory of mind. Theories characteristically provide a framework for causal explanation. It is this causal story, a story about the specific complex representational processes that connect world and mind, that enables children to understand misrepresentation. In the children's case the major obstacle to understanding false belief is knowing how they themselves or another could ever come to hold such a belief. (Gurd & Marshall raise the question of languages that mark the source of information: The story about such languages is almost too good to be true from my point of view; children do not use these source markings correctly until they are precisely 4 years old; Aksu-Koc 1988.)

The early understanding of sources revealed in studies by Pillow (1989) and Pratt and Bryant (1990) provides exactly the right causal structure to sustain the early Gibsonian account of belief. Pillow and Pratt and Bryant found that 3-year-olds understand that if you look, then you will know, and if you do not, then you will not. To understand the more complex cases of false belief, you need to understand the possibility of knowing in different ways, by inferring or by assuming or by being told, and that those differences may lead to different representations. One must understand, for example, that the mistaken belief about the "candies" is due to a misleading conclusion from the exterior of the box. In the original Wimmer and Perner (1983) task you must know that seeing something in the past does not warrant the assumption that you will be right about it in the future, and so on. Trying to understand misrepresentation provides a motive for understanding sources, and understanding sources provides a mechanism for understanding misrepresentation.

Both developments are consequences of developing a fully representational model of mind. Understanding representational processes allows both correct predictions about when false beliefs will be generated and correct explanations about where beliefs come from. In the same way, Mendelian theory, for example, allows both correct predictions about future sweet-pea colors and at least a tentative mechanism for explaining how the sweet peas come to have that color. Logically, we could believe or predict that sweet peas have that color without knowing the mechanism, but in practice the predictions and the explanations seem to go together, and both follow the theory.

To consider at least some of the alternative explanations briefly, Siegal and Dittrich & Lea suggest that the child fails to understand the conversational conventions of the false-belief question. Siegal in particular claims that the experimenter's action in displaying the pencils inside the box leads the child to assume that the pencils are relevant to the question. This is of course a perfectly standard methodological concern in developmental psychology, one that every investigator is aware of. It was for this reason that we included the control task in our experiments. In that task precisely the same "awkward" question is asked, and the child also sees a change in the state of the object induced by the experimenter. Why does the child not also assume in that case that the introduction of new information (the physical change in the boxes' contents) was relevant to the answer? And why would the children answer the level-1 perceptual question correctly? Similarly, Siegal's explanation of failure in the false-belief task cannot deal with the children's success in the Wellman discrepant-belief task, which involves the same questions and conversational assumptions as the false-belief task but makes different conceptual demands. Nor does he explain the sources tasks.

If Siegal thought there was a single change in conversational understanding between ages 3 and 4 that was responsible for all the theory of mind changes, his position would at least count as a coherent alternative. In fact, he proposes, post hoc, different conversational failures for each task and presents no independent evidence for changes in conversational competence in this period.

A similar problem underlies Zaitchik & Samet's suggestion that children have a "default" assumption that beliefs are true. Why does this default not operate in the case of the Wellman discrepant-belief task? It is true, after all, that the pencils are in both locations. Why would children show deficits in desire tasks as well as belief tasks? Why would they fail appearance-reality tasks, which do not ask about beliefs? And on their own admission this will not do as an explanation for the sources task.


Butterworth's explanation works well for the original Wimmer and Perner task, but it will not work for the "changed content" false-belief tasks described here, nor for the desire or sources tasks. Nor will it explain the better performance in our perceptual task or in the Wellman discrepant-belief task. Zelazo & Frye suggest that the problem is the logical complexity of the tasks, but in what sense is the discrepant-belief task or the perception task less "logically complex" than the false-belief task? Why would the sources task, which asks a simple yes/no question about two alternatives, be more complex logically than the discrepant-belief task, which asks about a complicated series of alternatives? A similar question for Josephson: Why would it be "strategically advantageous" to remember past perceptions but not past beliefs? Or to know that seeing leads to knowing but not what the sources are for your particular beliefs? Campbell & Bickhard's suggestion is a nonstarter: It is clear from the natural language data that children can reflect on their own mental states well before 4 years of age.

Leslie et al.'s commentary is particularly baffling. Leslie simply does not discuss the wide range of other theory of mind tasks, aside from false belief, that is outlined at length in the target article, and that indicates that there is indeed compelling evidence for a conceptual shift between ages 3 and 4. He does, however, introduce another task - the Zaitchik (1990) task involving children's misunderstanding of false photographs - as a counterexample to the theory-theory. He does so by outlining what he thinks Perner (1991b) might think might be one route from the 3-year-old to the 4-year-old view, namely, an analogy to physical representations.

I have in fact argued and written elsewhere that the Zaitchik task provides a compelling piece of support for the "theory-theory" insofar as the failure to understand representational processes extends to physical as well as psychological objects (Astington & Gopnik 1991a; Gopnik & Wellman 1992). Indeed a paragraph to that effect was omitted from an earlier draft of the target article through sheer pressure of space! I would have expected Leslie et al. to be perturbed by the fact that the autistic children are able to solve the photograph task. On the face of it, this seems to deal a serious blow to their account of the children's deficit as a general, architectural inability to deal with metarepresentation. Leslie (1987) may of course want to modify his architectural proposals by restricting his "decoupler" and "expression-raiser" to a special set of contents, as well as adding a "theory of body" module at 9 months and a "selection-processor" at 4 years. This seems more than a little ad hoc.

A general point about these alternative explanations: The theory-theory would be of little interest if it simply said that children change from ages 3 to 4 because they "know more" about the mind, or if it presented a different proposed theory change for each task. The strength of the theory-theory is that it allows a quite specific proposal about what the child knows earlier and later, the shift from a copy theory to a representational theory, and that this proposal provides a unified explanation for performance on a wide variety of disparate tasks. The information-processing accounts will have to be equally detailed, with equally general consequences, to be interesting alternatives.

R3. Past and present. Cassidy, Chandler & Carpendale, Ericsson, Goldman, Harris, Kobes, Nichols, and Pillow raise the question of whether first-person privilege can be extended to the case of immediately past mental states, as well as currently held states. A number of the commentators point out that the children in our experiments give accurate reports when they are asked about their current mental states, suggesting that this supports the notion of first-person authority.

From the theory-theory viewpoint, however, this is not surprising. There is every reason to believe that the children would give equally accurate reports of the current states of other people, with a similar knowledge base. We have asked about past states for methodological reasons.

Part of the reason why the belief in first-person privilege is so pervasive is precisely that it is difficult to contrive circumstances in which an outsider will have more accurate theoretical information about your own mental states than you do (see the discussion of MA [Mommy Authority] below). Even scientific psychological theories rarely make more accurate predictions than folk theories. Adult psychologists like Nisbett and Ross (1980) rely on their knowledge of obscure bits of psychological wisdom, such as position preferences, of which ordinary folk are ignorant, to show that we do not always know our own minds. But, alas for psychology, there are few of these cases where we are sure we know more about the mind than Grandmother does, in areas where Grandmother has any opinions at all. (And when we do, Grandmother is usually quickly convinced by our arguments and becomes, for example, a lay Freudian. Folk psychology is a moving target.) These cases are essential to test the "first-person privilege" claim.

Even if we rarely know more than our grandmothers, however, we certainly know more than our grandchildren. That is the basis of the research program in the target article. It depends on the fact that the children's theories are less accurate than the adult theory in one important respect. They do not include a representational model of mind, nor do they make the various predictions, about beliefs and other mental states, that follow from that model. The children do have an alternative theory, however, namely, a copy or Gibsonian theory. This theory can deal with true beliefs. The children's account of their current beliefs will be correct since the children think these beliefs are true, just as their account of the true beliefs of others will be correct. We need to turn to other examples, such as past false beliefs, cases where our theoretical knowledge will be more accurate than the child's. Only these cases can test the "first-person privilege" claim.

Does the fact that these states are past states invalidate the argument against introspectivism? First, I should emphasize again that we are talking about the very recent past. In the candies/pencils task there is a gap of 30 to 45 seconds between the children's earlier belief and their answer to the question. For the sources tasks, the gap is even smaller. This is relevant to Ericsson's comments, for example, about the decay of accuracy in self-report with time. Surely he would not expect the subjects in his protocol studies to forget their reasoning after 30 seconds! Do Goldman and Harris really find it more phenomenologically plausible that they infer what they thought 30 seconds ago than that they infer what they think now? Try it at home; the certainty of your 30-second-old beliefs seems as phenomenologically secure as your current beliefs. If the children are introspecting, their introspective capacity is, at the least, radically different from that of adults.

Second, several commentators, particularly Chandler & Carpendale, raise the point that the beliefs in question are replaced or updated by the current beliefs. Again, try it at home; even when your beliefs have been updated 30 seconds ago you still feel that your knowledge of them is direct. Moreover, this updating argument only applies to false beliefs. In the sources case, children make similar errors about themselves and others, even though there is no question of updating. In fact, even though the sources question makes reference to immediately past events, it is actually phrased as a question about one aspect of the child's current belief ("How do you know?" not "How did you know?"). Nevertheless, children err on this task.

Nichols raises the sources case as an example in which children are accurate about their current beliefs, whereas they are inaccurate about the beliefs of others. I am so pleased to see a philosopher who really reads the developmental results that I hate to discourage him by referring to the complicated and sometimes messy details; but the trouble is that the particular sources question he discusses ("Do you know or not know what is in the box?") does not require a representational model of the mind. Instead, the question can be answered easily using a copy theory: If I had access to the box I know what is in it; if I did not, I don't know. It is not surprising, therefore, that subsequent studies (Pillow 1989; Pratt & Bryant 1990) found that 3-year-olds could easily answer such questions about both themselves and others. The difficult source questions are the ones that require an understanding of the complexity of the representational process; not just "Do you know?" but "How do you know?" And children misreport this information about their current belief, just as they make the wrong prediction about others.

Finally, in considering this question, it has occurred to me that there is another case in which children make errors about their current mental states as a result of their lack of a representational theory. This is the appearance-reality task. Although it is not usually phrased with reference to the child's current mental states, this question depends on the child's accurately reporting an aspect of his current state, namely, the way the object looks to him. Children report that the sponge-rock looks to them like a sponge. To us, the fact that the sponge looks like a rock is a part of our immediate phenomenology, not something we infer. Children can understand and correctly answer "looks like" questions in other contexts, when they consider, for example, similarities between pictures and pictured objects. The inability to understand the idea of false representations, however, seems to keep the child from accurately reporting perceptual appearances, even though those appearances are current mental states, just as it keeps them from accurately reporting sources or their immediately past beliefs.

R4. Intentionality. A number of commentators, almost all psychologists, including Baron-Cohen, Bartsch & Estes, Harris, Johnson, Tomasello, and Zaitchik & Samet, take issue with my use of the term "intentionality" in describing the child's accomplishment at age 4. They favor a wider use of the term that includes any sense in which a mental entity might be said to refer to or be about the physical world. In this sense, early joint attention behaviors, such as drawing a mother's attention to an object, might be said to imply an understanding of intentionality. Construed as a terminological dispute, this isn't very serious; let anyone use the term as they like so long as it is clear how they are using it.

I would suggest, however, that my narrower use, the use that implies the possibility of misrepresentation, has, in fact, been the standard use in philosophy. The term seems to stem from Brentano's (1874) discussions of false and nonexistent representations, and Frege's (1892) and Russell's (1905) discoveries in the analysis of meaning (the intentionality of thought being analogous to the intensionality of sentences). In both these contexts, the philosophers begin by considering the possibility that thoughts or sentences are simply connected to the world directly; they then point out the various possibilities of misrepresentation that make that idea unworkable. The ideas of intentional inexistence, content, intension, and so forth, are introduced to deal with the failures of the simpler model to deal with cases of misrepresentation. Similarly, in contemporary attempts to naturalize intentionality (e.g., Dretske 1981; Millikan 1984) it is taken for granted that merely setting up causal connections between objects and mental states will not do. To capture intentionality, the naturalistic story must also allow for the possibility of misrepresentation.

One difficulty with broadening the use of the term is that it becomes difficult to see how to distinguish merely causal relations between mental states and the physical world from intentional relations. This is precisely why Dennett (1978a) proposed the "false-belief" task as a way of determining whether primates really have the concept of belief. How could we distinguish, for example, between a simply causal story of joint attention, in which the baby learns to make certain gestures in response to objects and people, and a genuinely intentional one? Another problem is that it becomes impossible to define content. If we think that early joint attention implies an understanding of intentionality, what are the contents of the intentional states? Does the baby who points to a red ball "mean" "I see a ball"? "A red sphere"? "An object"?

R5. Knowing about intentionality and knowing the contents of your beliefs. There is another confusion that arises in several commentaries, notably those of Shoemaker, Bartsch & Estes, and Cassidy, that reflects a genuine confusion in the target article. It is not made completely clear in the target article whether the claim I am making is that children do not know their belief contents or that they do not know that those contents are intentional. Several commentators suggest that the second claim is uncontroversial and should not be construed as contradicting either common sense or Searle (1991). In fact, as I reflect on these comments and on the evidence, I would state my claim as follows: Children do not know that their beliefs are intentional, in the sense that they do not know that they may vary, be false, and so forth. They develop this knowledge between 3 and 4 years of age. This lack of general knowledge about belief, however, also leads them to misidentify the contents of specific beliefs in particular relevant circumstances. It is not just that children say "I always knew what I know now," although they do. They also say flat out, "I thought there were pencils in the box." They actually misreport their belief content.

R6. Am I a behaviorist? Some commentators, notably Thompson and Rachlin, but also perhaps Josephson, think I ought to be a behaviorist (actually I think they think I really am a behaviorist but am too timid to come out of the closet and admit it). There are, of course, several strands in behaviorism. The strand that Thompson and Rachlin emphasize is a mistrust of the postulation of mental entities, as opposed to dispositions, to explain behavior. This is quite distinct from the questions I raise in the target article, and I would give the standard functionalist answer. I do not see any reason to be more theoretically austere in psychology than in physics. Physicists do not feel obliged to translate talk of electrons into talk of the dispositions of chemicals and cloud chambers; why should psychologists? A second strand is a distaste for introspective evidence. Some of my best friends are psychophysicists, scientists with noses so hard they would put the most Teutonic physicist to shame, and their talks abound with elaborate statistical analyses of their introspections. I would hate to try to tell them to stop. The point of the target article is that introspection, like behavior, is simply one source of evidence about underlying mental states, not a particularly privileged source, and not the source that tells us about the intentionality of our beliefs. If I can be a behaviorist without giving up mental entities and introspection, then I am happy to come out. But I admit it does seem a little like trying to come out as a lesbian without giving up men.

R7. Expertise and the question of Mommy Authority. A number of commentators raise doubts about whether the difference in expertise between ourselves and others is sufficient to account for the appearance of first-person privilege. Goldman, in particular, in his target article raises the question of whether the illusion of expertise actually does lead to the appearance of immediacy, and, if so, why that impression is not extended to others. Bartsch & Estes, Loar, Ludlow & Martin, Pietroski, and Sterelny raise similar questions.

In fact, we should not underestimate the differences between our knowledge base about ourselves and most others. I have after all had access to my own behavior and language 16 hours a day, 365 days a year for at least 35 years. Compare this with my knowledge base about my very closest and oldest friend. I have no knowledge at all of the first 25 years of his life, and I talk to him for perhaps four or five hours a week. The striking thing is that we think we know anything at all about most other people, compared to ourselves, not that we do not know more.

Imagine the following experiment. Suppose we really did know nearly as much about someone else as we know about ourselves, in strictly quantitative terms. We knew the person from birth onward and observed their every waking hour. Suppose we watched them bathe and eat and dress, wake and fall asleep, and knew each salient event in their environment. Suppose, moreover, we were highly motivated to make accurate predictions about their behavior, and that they had no ability to deceive or manipulate us. Suppose, finally, that the internal states of this person, and their functional relations to behavior, were quite different from our own in important ways. Would our impression of perceiving mental states immediately and directly extend to such persons? Would our interpretations of them be particularly privileged? An introspectivist theory or a simulation theory would, I suggest, predict that they would not; the expertise theory would predict they would.

First-person reports may be self-interested, of course, but I have myself been a subject in three such experiments; I have raised three children. My psychophysical report would be that our knowledge of the internal states of persons to whom we are related in this way is also direct and immediate, in spite of the dissimilarity between their behaviors and our own. We might refer to this as MA (Mommy Authority). I suggest that many of the empirical observations that give rise to the notion of First-Person Authority apply with equal force to Mommy Authority. As a Mommy, you may occasionally need to make direct and conscious inferences about your infants, as you may indeed occasionally need to do about your own states, but on the vast majority of occasions there is no more impression of inference in the Mommy case than in the case of the self. One perceives fussiness, insight, puzzlement, an impending tantrum with the same immediacy and accuracy that one perceives similar states of the self. I would suggest, in fact, that much of the emotional satisfaction of parenting, the sense of closeness, understanding, and immediacy that is often referred to as "attachment" or "intimacy," derives from precisely this fact. Similarly, we would generally concede that, except in occasional cases of wishful thinking, Mommies are more accurate judges of their infants' mental states than non-Mommies. Even the cases of wishful thinking are similar; our Moms, like our reflective selves, tend to give us the benefit of the doubt.

In fact, as I suggest in note 10 of the target article, it is possible that a set of metaconvictions, mainly derived from scientific psychology itself, leads us to conclude that our knowledge of others' mental states is always and intrinsically indirect. This conclusion may not be based on a genuine difference in the phenomenology of our perception of ourselves and at least some others. Gurd & Marshall casually say "unless you believe in telepathy, the third-person case must involve inference." But I would bet a majority of common folk would say that they do believe in telepathy, and that they base this belief precisely on experiences of close, wordless communication with those they know extremely well. I would say that the sophistication of their philosophical beliefs deforms the intuitions of those commentators who are skeptical about expertise, rather than that the intuitions lead to the philosophical beliefs.

R8. Conceptual analysis and empirical claims. A number of the commentators, especially philosophers (Kobes, Loar, Olson & Astington, Pietroski, and Shoemaker), make claims of the following form: It follows from the very concept of x being a mental state of a particular kind that x will be self-ascribed. For example, it is somehow part of the very concept of belief that if we have beliefs we will know that we have them. If we do not know we have them, then they are not beliefs.

94 BEHAVIORAL AND BRAIN SCIENCES (1993) 16:1

My problem with these commentators is that I do not know, quite literally, what they are talking about. We might mean two things when we talk about the concept of a mental state, say, belief. We could be trying to explicate what "we," in the sense of ordinary commonsense folk, think beliefs are. Or we could be trying to say what beliefs really are (if anything). In this second sense the concept of belief would be one that played a role in some true account of the mind, presumably a psychological account, but possibly a mathematical or sociological or even literary one. It is unlikely that the "concept of belief" in the first sense will be identical to the "concept of belief" in the second sense. I do not know what else the concept of belief could be.

If the philosophical claims are about what we mostly think, then they are at least partly true, but not very interesting. What they amount to is that we believe in first-person privilege. In fact, we really believe in it a lot and very deeply. To make this version of the claim at all interesting it needs to be made a bit more circuitously, as it is in Davidson's (1987) account (cited by Pietroski and Saunders). On at least one reading of Davidson, he says that our deep belief in first-person privilege is not, as it seems to be, simply given by our experience, nor is it, as I would argue, the result of expertise. Instead, it is itself a consequence of other deeper beliefs about language. In particular, it is a consequence of a more fundamental conviction that we know what we mean, and that we know what we mean in a way that we do not know what other people mean. Davidson's insight is that these two commonsense claims are linked.

Davidson's claim that our commonsense beliefs about language are more fundamental than our beliefs about thought, and that they ground those latter beliefs, reflects a common tendency in late twentieth-century philosophy to treat linguistic facts as if they were somehow more basic than psychological facts. But why would phrasing first-person privilege as a claim about language make it any more convincing, or, as Pietroski says, "deflationary," than phrasing it as a claim about thought? Why would it be more obvious that we know what we mean than that we know what we think? It seems far more likely that common folk think they know what they mean because they think that they know what they think, rather than the other way around.

Moreover, the link between these two aspects of our ordinary understanding does not offer a compelling explanation of why we believe in first-person authority for either language or thought. The related claim about language is not only not necessary and obvious, as Davidson and Pietroski seem to think, but if it is taken as a claim about some psychological reality, it is, as we will see, actually false, as false as the claim of first-person privilege for beliefs. It seems a little strange to explain one false belief by referring to another. Suppose we were looking for an explanation of why people believe in witches when we have good reason to believe that witches do not exist. Someone replies that people believe in witches because they believe in the Devil and think that the Devil has agents on earth. This might be true, it might even count as some sort of insight, but somehow it does not seem like quite the explanation that was wanted.

Maybe Davidson's claim is not supposed to be about what people think about belief and meaning. In fact, it seems unlikely that it is about this, since ordinary folks (as Stanovich points out) almost certainly endorse a perceptual view that Davidson and other philosophers explicitly disavow. Could it be about what beliefs and meanings actually are like? Suppose we construe the claim psychologically, namely, that we think we know the content of our thoughts because we necessarily know the meanings of our words and sentences. If "meaning" refers to the psychological entities and processes that relate words and our knowledge of the world, then the claim is simply false. In fact, empirical evidence that people do not know what they mean is actually much easier to find than evidence that they do not know what they think. According to any psychological definition of meaning, a semantic network theory or a prototype theory or a spreading activation theory or whatever, most people will not know what they mean most of the time.

Similarly, if the general claim of first-person authority is a psychological one, then, as I suggest in the target article, there is good reason to believe it is false. On any reasonable scientific definition of belief, the 3-year-olds have beliefs, and they do not self-ascribe them. Indeed, I think Pietroski implicitly acknowledges this when he distinguishes between "representations," the scientific, psychological equivalent of belief, and "real" beliefs. I still cannot see what kinds of things the real beliefs are supposed to be, however. At one point Pietroski suggests they are relations, and particularly relations to propositions. But then what sorts of things are the propositions supposed to be? And what kind of evidence is there, or could there be, for or against this claim?

Shoemaker claims that it belongs to the nature of certain mental states that having them directly issues, under certain conditions, in the belief that one has them, the conditions in question including minimal rationality, conceptual coherence, and having the concepts of the states. Three-year-olds have beliefs; they do not have the belief that they have them. Sounds like a pretty good counterexample to Shoemaker's claim. Aha, Shoemaker says, then they must not have the concept of belief. What does having the concept of belief mean? Well, 3-year-olds use pretty much the same belief vocabulary as adults. They understand the words "think" and "know" and use them appropriately. They know that thoughts are different from things and that one person can have a thought about something while another person may not have a thought about the thing. There is, however, one important respect in which the 3-year-old's concept of belief differs from the adult concept; children do not know that beliefs may be false. This false belief about beliefs leads to the children's failure to self-ascribe those beliefs in certain circumstances. If we define "the concept of belief" broadly enough we can save Shoemaker's claim, but if we define it this broadly the claim hardly seems worth saving. What it amounts to is that if children believed all the things adults believe about belief then they would believe all the things adults believe about belief.

Consider an analogous claim. I will call this "species authority." I claim that it belongs to the nature of certain biological categories that the fact that an animal is a member of the category directly issues in the belief that it is a member of that category, under certain conditions. The conditions, of course, include having the concepts of the animal and the category. Lo and behold, we discover a counterexample. Johnny thinks whales are fish, though they are really mammals. But of course we can return to our original proviso about concepts. It is not that Johnny is wrong about whales; rather it is that his concept of whales (and fish) is incomplete. And, of course, in a way this is quite true. Johnny certainly fails to share certain fundamental beliefs about whales and fish with adults. He thinks whales are fish. The claim is saved.

If Shoemaker's claim is about our ordinary beliefs about beliefs, it seems an elaborate way of stating the obvious: Grown-ups' beliefs about beliefs lead them to believe in privileged first-person authority, and children have beliefs about beliefs different from those of grown-ups. As in the case of Davidson (1987), other parts of Shoemaker's account, his denial of the perceptual view in particular, are clearly not characteristic of common understanding. If the claim is about what beliefs really are like, invoking the "knowing the concept" proviso makes the claim unfalsifiable and circular.

Could the philosophical claims be logical or mathematical ones? A number of commentators invoke "logic" in a loose way. Olson & Astington, for example, cite Sellars (1963) to the effect that the relation between behavior and internal states is built into the "very logic" of the concepts, and Strawson (1964) as setting out that logic. However estimable Strawson may be as a philosopher, he certainly did not set out the logic of these concepts as a set of axioms, rules, and proofs. Shoemaker, similarly, draws an analogy between the way that holding certain beliefs, say, P implies Q, and P, inevitably leads to the belief that Q, and the way that holding a belief inevitably leads to the correct self-ascription of the belief. As a psychological claim, the first half of Shoemaker's analogy is clearly false; there is an enormous literature on failures of folk logic. It is true as a logical claim because of certain facts about the very structure of logic as a formal system. But there is no similar strictly logical sense in which belief implies or could imply self-ascription. Where is the proof? What are the axioms? "Belief" and "self-ascription" are not terms that play a role in some formal system; and to say they are logically connected is merely to say they seem very strongly linked intuitively.

R9. Whose concepts? My guess, however, is that these philosophical claims are not really intended to be about either our ordinary folk psychology or about scientific psychology, nor are they mathematical or logical claims. That is why they sound either so obvious or so peculiar and implausible when they are translated into these terms. These philosophers would say that their claims about first-person authority are "conceptual" ones, products of the "conceptual analysis" that many still see as the object of philosophy. Shoemaker, for example, says that his claim about first-person authority is made "for purely conceptual reasons." Kobes similarly describes analytic functionalism as trying "to distill out of ordinary . . . usage and beliefs a network of causal interrelations that is definitive of each mental state's type-identity"; he goes on to say that this is not to be identified with the psychology of folk psychology or presumably psychology itself. Similarly, Loar says that a definition of the functional role of a mental state would be "entirely the product of philosophical explicative activity, pieced together from the conceptual roles of our mental ascriptions." But what kind of activity is this supposed to be? What kinds of things are the mental states that are type-identified by this process? Whose concepts are they, anyway? And if they are neither our ordinary concepts nor the concepts of some reasonably plausible scientific account, why on earth should we care about them?

Suppose I said I was engaged in "trying to distill out of ordinary usage and beliefs a network of causal interrelations that is definitive of each physical state's type-identity" and that this enterprise was neither the psychology of folk physics nor physics itself. What kind of strange project would this be? I think it would be the project of Aristotelian scholasticism. The medieval scholastic philosophers believed, in precisely the same way, that by thinking hard about our ordinary understanding of physical objects one could arrive at an understanding of such objects, or at least a taxonomy of them.

Imagine a medieval analytic philosopher who argued as follows: "I have strong conceptual reasons for believing that weighing more implies falling faster. When I consult my intuitions and those of all my fellow philosophers in Oxford and Padua, not to mention even the common folk, I find that there is indeed an inextricable conceptual connection between the very idea of weight and the idea of fast fall. Therefore if one claims that objects that weigh more do not fall faster, then obviously, on conceptual grounds, one is not talking about weight at all. One may be right about something else, but not about weight."

The history of physics has been one of modifying and altering and sometimes even abandoning our commonsense physical concepts in the light of evidence and experiment. Similarly, the history of psychology should be one of modifying and altering our commonsense mental concepts, such as our concept of belief, in the light of psychological evidence and experiment. I have suggested in the target article that one important modification we may have to make is to abandon the idea of first-person privilege, no matter how deeply embedded this notion is in our ordinary concept of belief. To repeat at length that this notion is indeed deeply embedded in common folks' ordinary concept of belief, or even philosophers' ordinary concept of belief, is beside the point.

R10. Is the theory of mind socially constructed? There may be another way of construing some of these philosophical claims that makes them more substantive but also less plausible. Several of the philosophers implicitly (particularly Gordon, Morton, and Pietroski) and psychologists explicitly (e.g., Butterworth, Rakover, Saunders, and Tomasello) raise the possibility that "real" beliefs are not psychological entities but rather sociological ones, and that the fact of first-person privilege is the result of social relations. What kind of account could this be?

There is a weak and uncontroversial sense in which my view of the mind could be socially constructed, namely, that I live in a world of people who have accumulated knowledge about the mind, and much of my knowledge about the mind may come from these other people. Russell and Shoemaker raise this as an objection to the theory-theory. But of course even scientific theories do not arise in a social vacuum; any kind of theory formation will take place in a cultural context, at least if human beings are doing the theorizing.

A stronger version of the "social construction" view might be that I have privileged authority over my own beliefs, just in virtue of being their possessor, in the way that I have privileged authority over my own children, just in virtue of being their mother. There is nothing intrinsic to the children or to me that gives me this authority; it is simply a function of my social relation to them. In another culture, with other conventions, their father's sister, or the eldest cross-cousin, might be placed in authority instead. Davidson (1987) might be construed as claiming that my relations to my beliefs are like this, just a matter of social fiat. As I mention in the target article, Dennett's (1987) position that intentionality is a kind of stance also has this character. Saunders, for example, clearly thinks that both the idea of intentionality and the idea of first-person authority derive from this sort of social convention.

Such social relations often have some of the self-constitutive character that many philosophers find characteristic of notions like belief. Pietroski mentions the case where simply saying something is so itself makes it so, and this is true of some of these socially constituted relations; "I now pronounce you man and wife" or "I christen this ship the Queen Elizabeth" are examples. I suspect that part of why Davidson transforms the psychological argument into a linguistic one is that it makes a similar move more plausible. Much of the appeal of "the linguistic turn" in philosophy lies in the Wittgensteinian intuition that linguistic matters can be construed in a conventional or social way. It is after all an arbitrary and conventional matter that a particular sequence of phonemes means "cat" in English. Perhaps it is equally arbitrary and conventional that we know what we mean ourselves but not what others mean, and hence believe that we know what we think ourselves but not what other people think.

Such an account seems plausible, and interesting, for certain kinds of psychological attributions. Perhaps I could not truthfully ascribe the state of romantic married love to myself or others until the cultural conditions of the eighteenth century had arrived (La Rochefoucauld once said that nobody would ever fall in love if they had not read about it first); perhaps one cannot ascribe guilt in a "shame" culture like that of Homeric Greece or the Old South. Perhaps also the very nature of these psychological states dictates that only people in certain social roles can have them; perhaps it would be incoherent to attribute maternal love to someone who did not function as a mother, or noblesse oblige to someone who was not noble. Think about Shakespeare's discussions of kingliness; you can be a king without being kingly, but clearly you could not be kingly in a culture without kings, and even in such a culture, you could not be kingly without actually being the king, or at least a likely claimant to the throne. But are beliefs states of these kinds? And is first-person privilege like kingliness?

I do not think there are any knock-down empirical arguments against this possibility, but it does not strike me as highly plausible, largely for reasons outlined in section 6 of the target article. What kinds of cultural practices would be involved? Is knowing that you know what you think like knowing that forks go on the left or that you are the wife of your husband? Are the 3-year-olds behaving in a socially inappropriate way, or are they simply incorrect? One would like to see a fully articulated developmental account of how this socialization practice took place, because detailed accounts of the development of the child's theory are beginning to be formulated (see, for example, Gopnik & Wellman 1992; in press; Perner 1991b). In any case, this is an empirical question and not a conceptual one.

Furthermore, this argument is tangential to the questions I raised in the target article. My point was simply that one view of the concept of belief, the commonsense view and the view of philosophers like Searle and Goldman, namely, that our knowledge of intentionality is the result of special properties of our cognitive relations to our own mental states, is mistaken. The social construction view and the theory-theory would be in accord on this point. Whether intentionality is postulated to explain our behavior or invented to make our social life work better, it is not perceived or given in experience.

R11. Do children have a missing introspectoscope? Several commentators raise the possibility that representations of your own mental states are given directly, but that knowledge of others is necessary to articulate or notice that experience. Ludlow & Martin make this point particularly strongly, but it is echoed by Dittrich & Lea, Goldman, Gurd & Marshall, Johnson, Loar, Sterelny, and Velmans. The difficulty with this view is that although it might account for the absence of correct self-reports, it seems considerably more difficult to use it to account for actively mistaken self-reports. To take Ludlow & Martin's own example, you can imagine that I might not initially notice certain aspects of my pains. I may only notice that the pain in my side is a grinding pain after I read about my particular syndrome in a medical text. Suppose, however, I report that the pain is a grinding pain after I read one medical text and decide I am suffering from Whosis disease, characterized by grinding pains, and then report that it is a shooting pain after I read a more recent text and change my diagnosis to Whatsisname's syndrome, characterized by shooting pains. We might begin to suspect that I was not accurately reporting my pains at all, even if the second diagnosis was correct. This is, of course, a common argument against reading medical texts; they always make you feel sick. (It has been suggested that there are historical changes in the nature of widespread illnesses that have no discernible biological cause. Hysterical paralysis, the great syndrome of the late Victorian age, has practically disappeared, while various types of fatigue syndromes are currently common. I suspect that rather than being a sign of malingering, this phenomenon may reflect the cognitive penetrability of pain, yet another example of the illusion of expertise.)

Similarly, the point about 3-year-olds is not only that they do not accurately report their false beliefs, but that they actively and consistently misreport them, and do so with all the sunny assurance with which they report their true beliefs. They do not report their past mental states correctly because they do not have the representational theory of mind yet. But they do report them incorrectly because they have an alternative theory, the copy theory. It is not just that they do not know something about their mental states, but that they do know something that is not the case. They not only do not say they thought there were candies in the box, they also do say they thought there were pencils there. A missing telescope may make you unable to see the stars, but it cannot make you see stars that are not there.

R12. Is the theory of mind innate? A large number of commentators, including Baron-Cohen, Butterworth, Cassidy, Leslie et al., Levine & Schwarz, Moore & Barresi, Olson & Astington, Russell, Shoemaker, Sterelny, Tomasello, and Zaitchik & Samet, suggest that there may be an innately determined theory of mind rather than, as I suggest, a constructed one. In considering this possibility, I want to distinguish between two quite different kinds of nativism: what I have called "starting-state nativism" and "modularity nativism" (see also Astington & Gopnik 1991a; Gopnik & Wellman, in press). The first view would be that innately given cognitive structures provide the foundation for a theory of mind. We might even think of such structures as the very first theory of mind, the starting state for the later developments. According to this view, however, that initial theory is defeasible; it may be revised in the light of evidence, just as later theories will be. As children learn more about their own mind and the minds of others, they modify and revise this initial view, often in radical ways.

According to the modularity view, on the other hand, the child's native inheritance is a set of nondefeasible constraints on the form of a final "theory of mind." Nativist theories in syntax and visual perception, the accounts of Chomsky (1980) or Marr (1981), for example, typically have this character. It is not that the child begins with a particular cognitive endowment, a kind of knowledge that is susceptible to revision in the face of evidence, but that the final form of the child's knowledge is largely predetermined, with experience simply serving a triggering function. Sterelny suggests that an understanding of the mind might be maturational without being encapsulated; but notice that the modularity view automatically implies a kind of encapsulation, since only very limited types of experience will be relevant to the final form the knowledge will take. This is the view Leslie et al. have in mind in advancing a "theory of mind" module.

I think there are good reasons for supposing there is a rich innate "starting state." Many of the features of this endowment are suggested in the commentaries of Baron-Cohen, Butterworth, Moore & Barresi, and Tomasello. In particular, there is good evidence that infants are innately endowed with a sense of persons.

One interesting possibility is that our innate starting-state knowledge of the mind does not differentiate between the self and the other. That is, it maps the behavior of others, our own behavior, and certain types of internal states or sensations onto the same representations. The fact that newborn infants imitate the facial gestures of others suggests that we have this kind of representation of bodily expressions innately (Gopnik & Meltzoff, in press; Meltzoff 1990; Meltzoff & Gopnik 1993). This information comes from our own kinesthetic sensations and our visual perception of others, but both these types of information are represented in the same way. Other research suggests that this might also be true for certain emotions. It might be literally true that we initially represent the internal sensation of joy, our own smile, and the smiles of others in a common code. There may be similar innate links between, for example, our perception of eye movements and our experience of attention, as evidenced in Baron-Cohen's work. According to this view, then, our starting state would already bridge the self and the other, the body and the mind. This would in turn provide a foundation for our later assumption that further theoretically constructed notions should be applied to self and other equally. On this admittedly rather Rousseauian view, we would initially think of ourselves, quite literally, as being one with others, and we would have to revise this view as we appreciated the differences between ourselves and others. Tomasello and Moore & Barresi make suggestions along these general lines (see also Gopnik & Meltzoff, in press; Meltzoff & Gopnik 1993).

Whatever this rich initial endowment is like, when it is missing, as in the case of children with autism discussed by Baron-Cohen and Leslie et al., the child is unable to develop the same type of further theory of mind as a normal child; or developing such a theory is at least a long and painful process. I would interpret Baron-Cohen's data as suggesting that without an initial theory of persons and attention it is very difficult to develop the further theory of belief (just as without an initial understanding of speed it is difficult to go on to understand acceleration). This seems a simpler explanation of the autistic deficits than the notion that there is one maturational failure at 9 months and yet another one at 3 years.

All this, however, is quite different from proposing that there is a theory-of-mind module. In particular, it does not follow that the idea of intentionality, in the full sense, is part of that innate endowment, or that the change between ages 3 and 4 is the result of maturation. The relative uniformity of the children's theories could as easily be the result of the fact that they begin with a similar starting state, have similar theory-formation capacities, and receive similar patterns of evidence, as that an innate module matures. After all, the fact that a number of biologists independently converged on the theory of natural selection in about 1850 does not lead you to conclude that an innate "natural selection representation" was triggered at that point (at least not unless you are Jerry Fodor).

The crucial experiment would involve exposing children to different patterns of evidence. My prediction would be that children who lived in a world where misrepresentation never occurred, where everyone always agreed on the state of the world, would retain and elaborate a Gibsonian view of the mind, rather than developing a representational model. Unfortunately, as always in developmental psychology, the crucial experiments are impossible or immoral. There is, however, some fascinating though still very tentative evidence that children with older siblings, who experience conflicting visions of the world all the time, may be advanced in false-belief understanding compared to only children (Perner & Ruffman 1992).

Even without the crucial experiments, however, the pattern of development can give us some clues. As I have argued elsewhere in detail (Gopnik & Wellman 1992; in press), the pattern of development of theory of mind does not look much like the maturation of a single module. In the target article I concentrated on only one developmental transition, the change that occurs from ages 3 to 4.


However, this is only one of a series of changes in the child's understanding of the mind: a succession of increasingly accurate theories of mind. For example, there appears to be an important change at 18 months when children begin to understand pretense. We have begun to chart changes in children's understanding of perception between 18 months and 2½ years, following up earlier work by Lempers et al. (1977). There is a similarly significant change at around the time of the third birthday, when children begin to develop nonrepresentational accounts of belief (see Wellman 1990). Even at 4, after children develop a representational model of mind, there are still many aspects of the mind that they have not mastered; understanding inference, and understanding the emotional consequences of representation, seem particularly difficult.

Each of these changes builds on the changes that have taken place before. An understanding of perception provides a foundation for understanding belief; a nonrepresentational account of belief provides a framework for a representational account, and so on. Moreover, in 3-year-olds there are many transitional phenomena that occur as the child moves from one theory to another. As noted in the target article, children appear to understand representational aspects of desire somewhat before they understand similar aspects of belief; and they may show an initial understanding of false belief when they are confronted with counterevidence but still not use false belief productively in generating predictions about action.

The picture that emerges does not look like the maturation of a single theory-of-mind module. One might understand how evolution could select a particular view of the mind, even how that selected view might not mature until age 4. It is much harder to see how evolution could have selected a series of different modules, each a logical extension of the former one, each partially incorrect, and each maturing only to be replaced by another.

This sequence of developing theories of mind, each a modified version of the earlier ones, and each more closely fitting the evidence, fits a theory construction view much more naturally. However rich our innate endowment may be, the developmental evidence suggests that it does not include the concept of intentionality.

R13. Perception and inference. The most challenging questions raised by the commentators concern the relation between perception and inference, and particularly the contrast between directly perceiving or accessing a state and postulating that state as a result of some inferential or theoretical process. This question is particularly perspicuously raised in the commentaries of Czyzewska & Lewicki, Levine & Schwarz, Sterelny, Velmans, and Zaitchik & Samet; it is also raised by Butterworth, Campbell & Bickhard, Ericsson, Gurd & Marshall, Moore & Barresi, and Olson & Astington. Obviously this is an important contrast for my argument, since I want to claim that knowledge of intentionality is inferential rather than perceptual. I think the commentators are quite right in emphasizing how difficult it is to distinguish the two cases, and I am not sure I have a completely satisfying account of the difference myself.

The point of the expertise examples is that mere phenomenology will not do the trick. Knowledge that is, by most lights, inferential can feel perceptual. At the same time we may also be unable to make a clear distinction on structural or functional grounds. One of the great achievements of cognitive science has been the view of perception as a series of transformations of representations, often transformations that look very much like inferences (see, e.g., Rock 1983). We might say that many of the structural characteristics of theories - the fact that they involve coherent lawlike generalizations that allow predictions, that they go beyond the data of sense experience, and that they lead to a veridical account of the world - are also characteristic of perceptual systems. In perception we begin with an object, the object produces sensory information, and a long process of transformations moves us from the information to a veridical representation of the object. How is this different from the transformations of representations that are involved in inference or theory formation?

Is there a useful distinction to be made then? It will come as no surprise that I think the developmental history of the two types of knowledge may suggest an answer. I suggest that it is not the phenomenological or functional or structural features of theories that distinguish them from perception, but rather their dynamic features, the ways they change; however, I do not think the distinction involves the past history of representations so much as their future prospects. It seems likely that perceptual representations will usually be innately given whereas inferential or theoretical ones will usually be constructed. In principle, as Levine & Schwarz mention, there might be innate theories (such as the starting-state theories of mind referred to above) or learned perceptual representations (perhaps, for example, those involved in various kinds of perceptual adaptation or in some of the phenomena referred to by Czyzewska & Lewicki).

According to my view, the distinguishing feature of inferential or theoretical as opposed to perceptual representations is their defeasibility, what Pylyshyn (1984) calls their cognitive penetrability. (Although Levine & Schwarz may be right in identifying a logical distinction between falsifiability and cognitive penetrability, from a psychological point of view the two seem indistinguishable. What would it mean to say that human beings generated defeasible representations but could never actually defeat them?) The crucial difference, I suggest, is that genuinely perceptual representations will not be changed by further evidence, whereas inferential or theoretical representations may be. I agree with Zaitchik & Samet that a theory might be innate; I do not agree that it might be nondefeasible.

Fodor's notion of modules (Fodor 1983) and his distinction between peripheral and central processing may actually be helpful here. I would suggest that some kinds of cognitive processes operate on inputs to produce outputs in automatic and constrained ways. That is, given the inputs, these systems will always give particular representations as outputs. Other kinds of processes are not constrained in these ways. In particular, the final representations produced by such systems, perhaps even the processes by which these representations are derived, are not fixed, but may change radically as a result of new inputs and of information from other parts of the system. I would be tempted to call systems of the first sort perceptual and those of the second sort inferential.

The acid test, for me, would be whether a representation, once formulated, could be overridden by further knowledge. This is typically not true for perceptual representations; genuine perceptual illusions notoriously persist, for example, even when you know they are illusions. It is true for more theoretical knowledge, even when this knowledge is "compiled" in a way that leads to perceptual phenomenology. For example, I have suggested that reading medical journals or failing to do so could lead an expert diagnostician to "see" things quite different from those he had previously seen.

This is of course an extension of our usual meaning of "perception" and "inference." According to this view, for example, certain kinds of linguistic processing, perhaps even certain types of parsing, might be more like perception, whereas certain kinds of visual object-identification (to take Zaitchik & Samet's example, seeing a red, round figure as an apple) might be inferential or even theoretical.

How would these distinctions translate into claims about our mental life? A cognitive version of the first-person authority claim must say something like this: When I am in a mental state, there is some set of cognitive processes operating on some kind of internally given information that automatically leads me to form a veridical representation of that state. These processes do not operate similarly in the case of others. In that case I must make inferences from their behavior, and these inferences are cognitively penetrable in the same way that other inferences are. These inferences may also lead to veridical representations of that state, but they do so by a radically different route.

We may not want to think of these internal cognitive processes as perceptions; Searle (1991), for example, does not like that terminology, but I do not see what else his claims about "intrinsic intentionality" and the priority of the first-person case could mean. The crucial biological fact about intentionality, in his view, is that it necessarily causes conscious veridical representations of intentional states in the first-person case. Similarly, the core of Goldman's argument is that inferential processes would never allow us to attain veridical representations of mental states with sufficient speed and accuracy. Therefore, they must be supplemented by more perceptual processes of the sort outlined above.

I actually think, as I said in the target article, that we do genuinely perceive, in this sense, some of our internal states. Our kinesthetic sense of our own bodily position is a good example of such perception. I am not sure how far this genuinely perceptual apparatus extends, however. In particular, even some very phenomenologically immediate internal sensations seem highly cognitively penetrable. Consider the contrast between the aversiveness of pain and its location. Both sensations are phenomenologically immediate, but aversiveness seems to be cognitively penetrable in a way that location is not. The fact that I know that uterine contractions will lead to the birth or the loss of a baby may radically alter their aversiveness, but the fact that I know that a limb no longer exists cannot change the fact that the sensations are located there. The central point of the target article is that this is all the truer for such things as intentional contents. Other kinds of quite general and theoretical beliefs about representations, beliefs that must be based largely on behavioral evidence, lead to radical changes in children's representations of their own belief contents.

R14. But is it really a theory? The preceding discussion could, of course, apply to types of knowledge that were not fully theoretical, though they were also not perceptual. There could certainly be intermediate cases, a point raised by Zaitchik & Samet and Levine & Schwarz, who, along with a number of other commentators, ask what theories are, how they are formulated, and how they can be differentiated from other types of nonperceptual knowledge. These are very good questions, and I have tried to formulate detailed answers elsewhere (see especially Gopnik 1988; Gopnik & Wellman 1992; in press). Suffice it to say that I think we have a theory when we postulate a network of abstract, coherent, causally efficacious entities and laws that explains our experience, allows prediction, and leads to interpretation, and when these theories change and are replaced by new theories as a result of counterevidence, experimentation, and simplicity considerations. I do not know exactly how theory formation and change take place, but I know they can take place, because I see scientists forming and changing theories all the time, and they must be using some human cognitive capacity to do so. I do not see any reason to deny this cognitive capacity to children, who are, after all, the most impressive learners any of us will ever know.

I would defend in more detail the claim that children's cognitive structures, and particularly children's views of the mind, really are theories, but I cannot do this here. And though that claim is sufficient to make the point about the illusion of first-person privilege, it is not necessary. If children's knowledge of intentionality is really theoretical in this sense, and I think it is, then common sense is wrong. But even a much weaker and more general claim about children's view of the mind - simply that it is applied equally to the self and others and that when it is wrong it is equally wrong for the self and others - could deal the commonsense view an equally devastating blow.

R15. Listen to grown-ups, too. Finally, Czyzewska & Lewicki, Ericsson, Gurd & Marshall, Keysar, Velmans, and particularly Stanovich point to parallel findings in the adult literature that suggest that even adults may have considerably less access to their internal mental states than is commonly believed. These commentators also point out the peculiar fact that although folk psychology has been extensively studied in children, it is only beginning to be explored in adults. I welcome the supporting data and am excited by the possibilities for further research.


Author's Response

Functionalism, the theory-theory and phenomenology

Alvin I. Goldman
Department of Philosophy, University of Arizona, Tucson, AZ 85721
Electronic mail: [email protected]

The ordinary understanding and ascription of mental states is a multiply complex subject. Widely discussed approaches to the subject, such as functionalism and the theory-theory (TT), have many variations and interpretations. No surprise, then, that there are misunderstandings and disagreements, which place many items on the agenda. Unfortunately, the multiplicity of issues raised by the commentators and the limitations of space make it impossible to give a full reply to everyone. My response is divided into five topics: (1) Which version(s) of functionalism are candidates for an account of folk psychology, and is any such version satisfactory? (2) Where does TT stand in light of the two target articles and commentaries? (3) Does a phenomenological story provide a promising account of mental states, and is there special first-person authority about them? (4) What are suitable methodological assumptions for our subject matter? (5) How well does the simulation theory fare as an account of third-person mentalistic ascription?

R1. Functionalism and its problems

R1.1. Scientific functionalism and folk functionalism. One question we can ask about mental states is: What is their nature, or "essence"? This is a scientific and metaphysical question. If we give a functionalist answer to this question - contending that mental states are really (perhaps "essentially") functional states - we are endorsing scientific functionalism (SF). As explained in my target article (sect. 3, para. 1), I am not raising objections to SF. I do not doubt that mental states have functional properties (virtually all states and events have some functional properties), nor do I seek to deny that mental states are essentially functional states (although I have my doubts about essences). These questions and answers, however, have little to do with the topic of the target article, namely, the ordinary person's understanding and representation of mental state concepts. That mental states are in fact functional does nothing to show that this is how the ordinary person conceptualizes them, which is what the study of folk psychology is all about. We philosophers and cognitive scientists may have good functional theories about mental states; this is by no means evidence for supposing that the naive user of mentalistic words understands them in functional terms.

Several commentators appear to misunderstand this point because their favored form of functionalism is SF. For example, Loar gives the following "working formulation" of functionalism:

A functional property F holds of a person X if and only if X has some lower-order property G that realizes F, that is, that has the relevant functional role. So the functional property of believing that Caesar crossed the Rubicon holds of X if and only if X has (say) a brain property G that realizes that belief property, that is, if and only if some such G bears the relevant subjunctive and other relations to other of X's properties.

This formulation specifies conditions for a person X to have a functional property, but does not address the question of what it is for a (naive) speaker to understand a word like belief in a functional way. Person X might be in a functional state without conceiving of it functionalistically.

A similar point applies to Horgan. Horgan defends what he calls "analytic functionalism" (AF), but this version of functionalism seems closer to SF than to folk functionalism (FF) (the kind we need). This emerges when Horgan points out that the CRs (category representations) people use in identifying water need not express its essence, that is, its chemical composition. Even if beliefs and desires are essentially functional states, he says, competent cognizers could self-ascribe such states using CRs that contain no reference to their functional essences. Horgan clearly implies that AF is concerned with the essences of mental states; but then AF is just a version of SF, with which I have no quarrel. However, if the CRs people use in self-ascribing belief contain no reference to functional essences, why say that they understand or represent words like desire or belief in terms of such essences? Why say that the meaning they attach to these terms has anything to do with functional roles? It's our theory that characterizes desire and belief in functional terms, not theirs.

Another commentator who fails to take the distinction between SF and FF sufficiently seriously is Fetzer. He claims that there is a "continuum" between the two; I am not sure what he means by this, however, and I don't see that anything of the relevant sort is adequately supported. What he seems to mean by "causal role functionalism" is that beliefs and other mental states in fact have causal interactions with one another and with behavior. This is undisputed. He proceeds to say that these causal transactions can take place, and that subjunctive properties of our mental states can hold, "whether or not we ever realize that we are trading in subjunctive conditionals." True enough; but if we (the folk) don't realize (at any level) that our mental states have subjunctive properties, and if we don't type-identify them in terms of such subjunctives, what are the grounds for saying that the folk understand mental states in functional terms? Yet that is just what FF maintains.

Sterelny's main device for saving FF is to promote a denotational account of primitive meaning and to hold that the kinds denoted by mental terms are functional roles. Since meaning or understanding cannot come from definition or definition surrogates "all the way down," some primitive meanings must arise from a pure denotation (reference) relation. Why not say that mental terms themselves are such primitive terms, and that they directly denote functional roles which in fact comprise the nature or essence of mental states?

This proposal ventures very far from the core idea of FF. The core idea is: (1) People have an independent and unproblematic understanding of terms for inputs and outputs, and (2) mental terms are defined by their relations to the input and output terms (and one another). Thus, FF pictures mental terms as understood definitionally, not primitively. Furthermore, if Sterelny's device of endowing a cognizer with functional-nomological content by pure denotation were acceptable, where would it stop? If untutored cognizers perceive a chemical in a beaker for the first time, should their denotational representation of the chemical be endowed with a content that captures all the functional-nomological properties actually possessed by the chemical? That would be a miraculously easy way to learn science! Sterelny's approach, then, can hardly appeal to the usual defenders of FF. Nor can it appeal to the psychological defenders of TT, for how would it accommodate the developmental findings, which TT proponents interpret as the gradual refinement or change of a cognitively possessed theory?

R1.2. Classical functionalism and automaticity. If one starts with SF, it is easy to arrive at what I called "classical functionalism" (CF). We may simply add the idea that to be in a certain mental state is, among other things, to have a tendency to believe that one is in that state. Moreover, at this level of theorizing, we do not need to spell out how that tendency is realized; we do not need to give a "microstory" of the manner of realization. Thus, it is not surprising that various commentators, including Daniel, Hill, Loar, Rey, and Shoemaker, complain that CF should not be required to provide a microstory. However, if we do not start with SF, but instead with FF - the theory that the naive person's own understanding of mental terms is functionalist - then things do not flow so smoothly. To explain, some stage-setting will be helpful.

To study how ordinary speakers understand mentalistic words, we should start from the same psychological perspective used in studying the ordinary understanding of words like chair and bird. Although ordinary speakers cannot tell us (very reliably) what they mean by chair, we assume there is some content or other that they associate with this term: one content associated with chair, another with table, and so on. Furthermore, the distinctive content must sometimes be used in classifying objects as chairs; at least it must be conceptually linked to whatever else (such as a stereotype of a chair) is used in making classifications. It would be absurd from the standpoint of cognitive science to hypothesize that people "automatically" classify things as chairs without comparing information about the target object with any (stored) representation of what a chair is (or looks like). A "noncriterial" approach to chair classification is a nonstarter from the standpoint of cognitive science. It is equally implausible to hypothesize that people (or their cognitive systems) have no criteria by which they decide whether a current mental state is a case of "hoping" or "planning," of "guessing" or being "certain," and so forth. There must be some content-laden way that these verbal choices get made. And the criteria tacitly used are directly relevant to how people understand or represent the concepts of hoping and planning, guessing and being certain.

So if FF is true, some sort of functional criterion (CR) must be involved in a competent cognizer's understanding or representation of each mental predicate. In other words, (1) there must be a criterion (somehow mentally encoded), and (2) it must be a functional-style criterion. In short, the appropriate version of FF is RF (representational functionalism). Now if RF is to accommodate self-ascription in the manner of traditional functionalism, the functional criterion must include the condition that being in state M involves a tendency to believe that one is in M. This is where the threat of circularity arises. To self-ascribe state M, one must determine that one's present state tends to cause a belief that one is in M. But how can one do that without first self-ascribing M, that is, believing that one is in M?

To avoid this circularity, we might reject the contentful microstory about criterion-possession and criterion-matching. If a person can believe he is in M without determining that the state in question satisfies the criterion, the circularity can be avoided. To reject this microstory, however, as CF does, is to reject the only plausible framework from a cognitive science perspective. Both Shoemaker and Daniel question a contentful microstory, but I do not find their doubts compelling. Shoemaker compares mental classification to hardwired inferential procedures; that cannot be a good analogy, however, because we must learn a distinctive content for each distinct mental term. The target article overstated matters a bit in saying, or implying, that CF rules out any microstory; I stand corrected on that. There could be a purely neural microstory, as Shoemaker, Daniel, and others point out. What is true, though, is that CF rules out any contentful microstory of the sort sketched above on pain of circularity. This is damning enough from a cognitive science perspective.

There is a further problem for CF, which is nicely presented by Hill. CF involves what Hill calls the "definition thesis," namely, that learning the meaning of pain involves learning that pain tends to cause one to believe that one is in pain. If the definition thesis is correct, I cannot fully appreciate the content of the term pain unless I understand what it is to "believe that one is in pain." But since the meaning of the latter expression depends partly on the meaning of pain, I cannot fully grasp the content of pain unless I have already grasped its content. Here we have another, equally serious, form of circularity that CF generates.

Gordon defends "automatic" self-ascription, but his defense has two problems. First, Gordon himself admits that training children to say "I want a banana," instead of simply "banana!" whenever they want a banana, may be insufficient to give them the concept of wanting or desiring. If so, then the training story sheds no light on what it is to grasp the concept of wanting. Second, it is totally implausible, especially from a cognitivist perspective, to suppose that training occurs without the mediation of cognitive learning. How does the child learn to say "I want a banana" only when it wants a banana unless it learns some sort of (internal) cue by which it identifies its present state as a wanting? Although Gordon's description of the training process mentions no recognitional factors, the learning must surely be recognitional (or "cognitively substantial").

Kobes proposes a variant of CF according to which our self-attributions partly constitute object-level mental states and events. It disputes the assumption that object-level mental state-tokens belong to determinate types prior to and independently of relevant self-attributions. This proposal, however, seems unpromising from a developmental standpoint. Presumably children under 2 years of age (or perhaps 18 months) do not have well-developed concepts of desire and belief; they do not self-attribute these states to themselves. Nonetheless, don't we want to explain their actions in terms of desires and beliefs? Don't we think they have desires and beliefs? So self-attribution is an overexacting standard for the existence of such states. If this is right, it undercuts the fundamental contention of CF.

Finally, several commentators suggest that the CRs people use "on line" are not strict functionalist CRs, but various derived or associated heuristics. Rey speaks of "heuristics" and "indicators" of the functionalist analyses. Kobes speaks of using "nondefinitive, heuristic criteria," while the definitive functional roles are stored in a "highly implicit and generalized form." Morton talks about "heuristics and approximations," which are "something less than the full functional specification." Obviously it is impossible to exclude these possibilities, especially since they are stated so vaguely. I never supposed that my critique of the functionalist approach to self-ascription could rule out all variations in one fell swoop. However, to meet my challenge, any such proposal would have to pass three tests. First, it must accommodate the actual speed and accuracy of first-person ascriptions. Second, the user would have to be describable as having, in the main, a functionalist conception of the state in question. Third, it is not enough that the motley assortment of definitions and heuristics could work; there must be evidence that they really are what people have in their heads. If we knew a priori that functionalism is the right general approach, we might feel secure that the details and heuristics can ultimately be filled in. However, we are not entitled to this a priori assumption; (folk) functionalism may just be an entirely misguided hypothesis.

R2. The current status of the theory-theory

R2.1. Do my arguments prove too much? Folk functionalism is examined at length in the target article because it is the clearest and best developed version of TT. Even if my arguments against FF are correct, however, might there not be some alternative version of TT that can command our assent? Before considering this possibility, let me address a point made by three commentators about my antifunctionalist arguments. Gopnik, Chater, and Rey all object to my arguments on the grounds that, if correct, they prove too much. Being perfectly general, they would rule out any psychologically real theoretical terms. Yet surely this would be absurd. Scientists understand and apply theoretical terms, such as proton, mass, gene, and so forth.

In reply, we may first note the possibility that the understanding of theoretical terms in science does not proceed primarily by means of laws, but rather (especially in the case of the novice) by means of analogies with familiar objects in other domains. In the case of RF, however, the alleged understanding consists precisely in mentally represented laws, which are harder to acquire and arguably more cumbersome to manipulate than analogies. Second, the set of laws postulated by functionalism is far more complex in certain respects than laws of physics. Each separate mental predicate introduces its own bundle of laws, and there is a potential infinity of distinct mental states (especially among the attitudes). Third, I already acknowledged that relational and dispositional terms may have "secondary" means of identification via intrinsic and categorical correlates. These could certainly be used in applying theoretical and dispositional terms in general. In the case of mental terms, however, this poses a special difficulty for functionalism because the best candidates for such correlates, namely, qualitative properties, are precisely the sort that pure functionalism seeks to avoid. Finally, it should be stressed that in the sciences, students are explicitly taught laws that were mostly developed by others. By contrast, the putative mental laws are rarely formulated by anybody except a few philosophers. Thus, children could not acquire these laws by explicit tutelage; individual children must each construct their own set of laws. That is a more demanding cognitive task than the one confronted by students of science. Thus, the cases of mental terms and scientific terms are not entirely parallel.

R2.2. The theory-theory and conceptual change. We cannot survey all evidence that might support TT in its multifarious forms, but how does Gopnik's defense, in particular, stand in light of the commentaries? Her principal argument for TT is the alleged existence of conceptual change from 3 to 4 years of age. This kind of change, she appears to think, can only be explained as theoretical change. Her argument for conceptual change, moreover, hinges on changes in task performance, especially false-belief tasks.

As several commentators point out, however, the empirical evidence does not clearly support radical conceptual change between 3 and 4. The radical-conceptual-shift approach has trouble accommodating many findings. For example, as Zaitchik & Samet, Harris, and Plaut & Karmiloff-Smith observe, there is Zaitchik's (1991) finding that 3-year-olds succeed on the familiar false-belief task when it is given solely in verbal form. This, together with the literature cited in my own commentary, speaks against the notion that 3-year-olds have an entirely different conceptualization, or theory, of belief. Further, literature cited by Bartsch & Estes and Harris clearly suggests that 3-year-olds do grasp the "aboutness" or intentionality of mental states. As evidence that the factor accounting for performance change is not radical conceptual shift, the findings reported by Zelazo & Frye are particularly striking. If 3-year-olds have trouble with perspective shifts generally, as Zelazo & Frye's experiments indicate, performance change on false-belief tasks is explainable as a domain-general cognitive change, not a change in the specific content of a theory of mind.

R2.3. Weaker forms of the theory-theory. My target article raised problems for a strong version of TT, that is, functionalism. What about weaker versions of TT? Don't they stand a better chance of being true? According to Rips, TT is simply the thesis that people's beliefs about categories include: (a) some account of what makes an instance a category member, (b) information about the typical properties of category members, and (c) some account of the relation between (a) and (b). On this very loose version of TT, even a phenomenological or qualitative account of mental categories would qualify (as Rips notes). Obviously people are free to use the term theory as they

BEHAVIORAL AND BRAIN SCIENCES (1993) 16:1 103


Response/Goldman: Folk psychology

choose. If the term is used this loosely, however, then TT does not even imply that the process of ascribing mental states is "inferential" rather than "introspective" (or "simulative").

We should be wary of TT in such a weakened form. Whereas the functionalist version of TT gives clear indications of how a cognizer could go about inferring the mental states (and behavior) of others using functionalist laws, a law-free version of TT leaves this a mystery. The original attraction of (folk) functionalism was its delineation of inferential patterns by which third-person ascription might be accomplished. Alternative versions of TT ought to tackle the same problem. Unfortunately, weaker forms of TT have not addressed this problem squarely and seem ill-equipped to handle it.

Similarly, for all her endorsement of an inferential mode of self-knowledge, Gopnik gives us no indication of how a cognizer is supposed to make self-ascriptions. We are only told that the process must be inferential. But what are the data (premises) from which the ascriptions are inferred? And by what rules of inference? As Shoemaker correctly notes, "Gopnik tells us basically nothing about what the immediate objects of 'psychological experience' are that provide the basis for our theoretically grounded inference to intentional states." The offer of so nonspecific a form of TT, the present writer feels, is the offer of a pig in a poke.

R3. Phenomenology and privileged access

R3.1. The role of phenomenology. A keen recognition of the virtues of phenomenological egalitarianism across mental aspects is shown by Gunderson. He supports phenomenological parity for the attitudes by adducing the entirely plausible case of speaker/hearer asymmetries with respect to disambiguation. If I overhear Brown say to Jones, "I'm off to the bank," I may wish to know whether he means a spot for fishing or a place to do financial transactions. But if I say to someone, "I'm off to the bank," I cannot query my own remark: "To go fishing or to make a deposit?" I virtually always already know. As in the tip-of-the-tongue cases, I am aware of "conceptual structure" (in this case, what is meant or intended), which is something phenomenologically present. The target article mainly supported a distinctive phenomenology for the attitude types. Gunderson's example supports distinctive phenomenology for different contents, which was probably what Jackendoff (1987) intended as well.

Support for phenomenology also comes from the EDITORIAL COMMENTARY. The editor alone appears to appreciate my point that not all functional states qualify as mental, so we need some additional or different element to account for mentality (see sect. 8). Only qualitative character seems a promising line on this problem.

Hill also endorses the role of qualitative characteristics in the theory of self-ascription but rightly worries about the metaphysical ramifications. Doesn't it commit one to property dualism, a generally unpopular metaphysical position? I meant to sidestep this question for present purposes, not to endorse property dualism, as Hill interprets my suggestion. In any case, Hill proposes precisely the solution to the problem that I had in mind (roughly speaking) but did not broach. We can maintain that qualitative properties are identical with neural properties, thus avoiding property dualism, yet also maintain that a person can have introspective access to qualitative but not neural information (facts). Even though the property "water" is identical with the property H2O, we can get "watery" information about a fluid without getting specific chemical information about it. No doubt more should be said about the nature of "properties" and "information," an issue that deterred me from embarking on this topic in the target article. The general lines of Hill's solution, however, seem exactly right.

Velmans also agrees that the qualia of experience are intrinsic to any plausible account of how our own mental states are known. He adds that it does not follow that functional accounts of the mind have no place at all. Of course, I agree with this completely, having said that SF is entirely congenial. One illustration he gives of the mix between introspective and functional factors, however, leaves me perplexed. "A belief, manifested as 'inner speech,' might result in part from some tacit theory. If so, changes in tacit theory might result in changes in how beliefs are experienced." This passage is susceptible to two interpretations. It might just mean that beliefs can be affected by the cognizer's theories, an innocuous contention that everyone would accept. Or it might mean that, without changing the belief, a theory might affect how the belief is experienced. This is more problematic. I also remain perplexed by Velmans's positive view, which holds that conscious phenomenology is causally important from a first-person though not from a third-person perspective. Metaphysical facts, such as the causal relevance or irrelevance of certain events, are not perspectival; they either obtain or not. Velmans's metaphysical perspectivalism is puzzling.

Rachlin is skeptical about phenomenology and discusses thought experiments in that connection. He claims that the "perfect actor" thought experiment is intuitively convincing only over a brief period of time. Our intuition collapses when we try to imagine a person who from birth to death reacts appropriately to normally painful stimuli and yet never feels pain. A similar collapse would occur, he intimates, if you learned that your spouse of 50 years had been just a machine. That would not lessen your grief over her death, he says. Rachlin is right that it would be difficult to imagine that someone with whom one had lived normally for 50 years never experienced qualia. (This, I suggest, is because our tendency to "project" qualia and other subjective states is so powerful.) But if we were really convinced of this, I think it would lessen our grief. At least it would baffle and upset us; we would not take it in our stride. This strongly confirms my contention that our normal conception of people's mental life is a conception of a qualia-infused life. If our conception were purely functional, we would not be disarmed or dismayed to learn that our spouse had no qualia.

Chalmers challenges the role of qualia by means of a similar thought experiment. He asks us to consider a conceptually (or logically) possible replica of himself who has no qualia: Zombie Dave (ZD). He then argues, first, that ZD has beliefs, though they are unaccompanied by phenomenological tinges. Since I accept thought experiments as pertinent evidence about conceptual contents, I agree that people's intuitions about ZD are relevant to the contents of the attitude terms. My own intuition about




ZD, however, is not the same as Chalmers's. First, there are many attitude terms I find quite inapplicable to ZD. I could not ascribe to a zombie such terms as wondering, speculating, reflecting on, craving, enjoying, thinking about, pondering, deliberating, being astonished, imagining, and so forth. It is true that believes goes down more easily as applied to a zombie, but I suspect this is because believes is readily understood in the "stored" rather than "activated" sense, which does not imply (current) awareness or phenomenology (see the target article, sect. 8).

Chalmers has a back-up argument, however. If one resists the idea that ZD has beliefs, he says, we can still use ZD to show that qualia cannot be the primary element in our mental state concepts. For ZD ascribes precisely the same mental states to himself as Chalmers does. This capacity for self-ascription cannot be explained by qualia, since ZD doesn't have any. So Chalmers himself (and the rest of us) must not use qualia either. Here I think the argument goes astray. The explanation of how ZD self-ascribes "beliefs" might be quite different from the way Chalmers and the rest of us self-ascribe it. Perhaps we use qualia though ZD doesn't, and our qualia might be causally efficacious. (This may raise questions about the conceptual possibilities Chalmers tries to construct; this poses technical metaphysical issues we cannot pursue here.) In this case, although ZD's verbal behavior is isomorphic to ours, the meanings of his mental words (or the ways he represents them) are not equivalent to ours.

Jackson has a different sort of objection to qualia for the attitudes. He claims that it is "of the essence" of the attitudes that a suitable complex of them "by their very nature" points toward one bit of behavior rather than another. By contrast, no complex of qualia can point toward one bit of behavior rather than another. Jackson's claim, however, begs the question at issue. Is it part of the "essence" of the attitudes - that is, is it part of our CRs of them - that they point toward one bit of behavior rather than another? This would entail that we have functional definitions built into our CRs. Whether this is so, however, is precisely what the (empirical) framework I have sketched would have to determine. Jackson is not entitled to take it as a given. It may just be a "factual" rather than a "definitional" truth that certain complexes of desires and beliefs point toward certain behavior.

Olson & Astington raise more specific objections to phenomenological properties for the attitudes. Their "logical" points, however, seem confused. They first claim that beliefs are perspectival, although they then deny that true beliefs are perspectival. What remains unclear is why the (putative) perspectival nature of beliefs should keep them from having a phenomenology. What's the connection? Second, they ask how ascribing false beliefs would be possible if belief (self-)ascription were based on introspection. Again, I do not see the difficulty. Granted, you do not learn of a belief's falsity by introspecting it directly, but you can learn of its falsity (by getting new evidence). Introspection is relevant not to finding the truth value of a belief but to identifying its type and its content.

Olson & Astington also object that if mental state concepts were built from introspectively available qualities, they should be ascribed later to others than to oneself, which empirical evidence does not suggest occurs. This empirical claim is dubious, however. Cassidy's review of the evidence suggests that there is a temporal difference between children's ability to report their own versus others' mental states. In any case, my introspection-plus-simulation approach need not predict a large temporal discrepancy between self-ascription and other-ascription, although it is compatible with such a discrepancy. Why couldn't one immediately use simulation to extend an initially self-applied concept to others, as Pillow points out? Olson & Astington's second empirical point concerns the simulation heuristic, which I address below (sect. R5).

The first of Woodfield's questions concerns the intended scope of the phenomenological theory. This theory will falter, he says, if (a) a token state lacks phenomenology altogether, (b) a state's phenomenology is insufficiently distinctive, and (c) the cognizer is insufficiently skilled with the concept of K to apply it on a phenomenological basis. I agree that introspection would not yield correct (or any) classifications in all cases, but this is not an objection, because I made no such claim for introspection. Nor do I mean to deny that other, nonqualitative, information may be utilized in state classification. I regard it as an empirical question whether, and to what extent, qualitative characteristics play a role in mentalistic CRs. The details all remain to be discovered, although I have made my conjectures. (Recall that I explicitly allowed the possibility of multiple representations in sect. 7, so I am not committed to any single dimension, e.g., the qualitative, being the entire story of CR contents.) Moreover, even if a mental kind K is represented qualitatively, it may take developed skills, as Woodfield rightly says, to deploy this representation successfully. The terms think and certain, for example, may be paired with representations of feelings of confidence, and some retrieval and comparison processes may be involved in choosing exemplars to which a current token state is compared. Good retrieval and comparison may be a problematic (and very context-driven) affair. Thus, the frequent suggestion that classification by introspection is a "direct" method is misleading. (At the same time, nondirectness doesn't imply the use of a "theory" in any strict sense of that term.)

R3.2. Is there special first-person authority? A somewhat broader question than the role of phenomenology is the question of whether there is an asymmetry between first-person and third-person knowledge. Gopnik denies such an asymmetry whereas I, of course, endorse it. One might accept such an asymmetry, moreover, even if one didn't accept the role I tentatively propose for phenomenology.

Harris provides a convincing case from developmental data for the asymmetry. After distinguishing between (1) current states of the self, (2) noncurrent states of the self, and (3) states of the other, he indicates that experiments show even children to be more accurate for category (1) than for (2) or (3), which is just what the asymmetry thesis claims. Cassidy's review of relevant literature also supports this, as do the commentaries by Bartsch & Estes, Nichols, and Chandler & Carpendale.

Of course Gopnik tries to assimilate her cases of past belief to the category of present belief. This raises the question of how far past an episode may be and still qualify for category (1). Here Ericsson's commentary is very pertinent. Whereas Gopnik argues that the span of introspection extends beyond several minutes, Ericsson points out that most other reviews of memory retention, especially in regard to past thoughts, suggest a much faster decay, leading to considerable incompleteness of recall. Even adults have difficulty recalling an isolated past thought or attitude, and their recall can be biased by intervening events. This speaks strongly against Gopnik's inclusion of erroneous reports of past mental states in category (1) and it thereby helps to support the asymmetry thesis.

Pietroski agrees that there is first-person authority vis-a-vis beliefs but questions whether this should be explained in terms of qualia. There is special first-person authority about verbal claims, he says, yet we need not posit "claim-qualia" to explain this. However, as Gunderson's discussion suggests, first-person authority about claims may be grounded in first-person authority about intentions, which may well have a phenomenological basis. Pietroski apparently favors a highly relational construal of beliefs, such that having a belief does not involve having a particular inner entity but only a suitable psychological similarity relation to other agents. Doesn't such a similarity relation, however, have to be partly founded on something internal to the believer? And it seems plausible that the believer's special access to that internal something creates his peculiar authority. Furthermore, even if there is a relational dimension to the contents of beliefs, the differences between the attitude types (as well as sensation types, which should not be neglected) must surely reside mainly in agent-internal differences. Thus, an agent's distinctive ability to describe himself correctly as thinking that P rather than hoping that P must be given a substantial rather than a deflationary explanation.

Armstrong's strategy for deflecting my challenge to functionalism is to admit introspection, and hence privileged access to mental states, but deny that introspective knowledge is limited to intrinsic properties of a state. This strategy, if successful, could go a long way toward meeting my challenge. As Armstrong recognizes, though, it would not be enough for people to have direct introspective access to actual causal connections between token events. They must also have direct introspective access to subjunctive properties of states. How plausible is this? Direct awareness of physical forces, Armstrong's prime example, is a moderately plausible case of awareness of causality. But is it a case of direct awareness of a subjunctive property, or tendency, as he claims? This is quite dubious. We can be aware of a force as an actually operative cause, exerting a push or pull in a certain direction. That is not equivalent to awareness of a potential for pushing or pulling. Potentials are exactly the things that need to be identified for type-identification of mental states according to functionalism. Furthermore, although innate knowledge of mechanical and dynamical laws has some evolutionary plausibility, innate knowledge of state-specific mental laws of the kind postulated by functionalism is much harder to swallow.

R4. Methodological matters

R4.1. Categorization by matching. Several commentators, especially Heil, Campbell & Bickhard, and Pratt, question my basic framework for categorization, the "matching" framework, so a few words should be said in its defense. First, in order to test substantive theories of mental state representation, we need to start with some sort of framework. The matching framework has many virtues, including: (1) Some such framework is widely used in cognitive science, and (2) it is neutral among sharply contrasting substantive theories, such as RF (representational functionalism) and qualia-based approaches. The matching framework I have in mind postulates a matching (exact or approximate) of category representation (CR) and instance representation (IR) contents. How those contents are concretely represented or implemented, however, is deliberately left open. Thus, even a connectionist network can be thought of as forming a set of complex representations in its layer of hidden units, and of categorizing stimulus inputs by (in effect) matching the features encoded by the inputs to certain representations encoded in the hidden units.
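For readers who find a concrete rendering helpful, the bare logic of the matching framework can be caricatured in a few lines of code. This is only an expository sketch under invented assumptions: feature sets as stand-ins for CR and IR contents, a numeric overlap score as the stand-in for approximate matching, and hypothetical mental-state names. Nothing in the framework commits one to this, or any, particular implementation of contents.

```python
# Expository caricature of the matching framework, not a proposed model.
# CRs and IRs are represented here as feature sets (a stand-in only).

def match_score(cr_features, ir_features):
    """Degree of (approximate) overlap between CR content and IR content."""
    if not cr_features:
        return 0.0
    return len(cr_features & ir_features) / len(cr_features)

def classify(ir_features, crs, threshold=0.5):
    """Pick the best-matching category, if its match is good enough."""
    best = max(crs, key=lambda name: match_score(crs[name], ir_features))
    return best if match_score(crs[best], ir_features) >= threshold else None

# Hypothetical mental-state CRs built from qualitative features:
crs = {
    "doubt":     {"unsettled feel", "directed at a proposition"},
    "certainty": {"feeling of confidence", "directed at a proposition"},
}
# A hypothetical current-token IR:
ir = {"feeling of confidence", "directed at a proposition", "verbal imagery"}
print(classify(ir, crs))  # "certainty"
```

The point of the sketch is only that approximate content matching is a perfectly definite operation even when the code or medium of the contents is left open; a connectionist network computing overlap in its hidden units would serve as well.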

I cheerfully acknowledge that the entire framework takes (primitive) contents for granted. There is no attempt to explain how the "features" represented in the CRs and IRs get their content. This assumption, however, seems reasonable; we have to be allowed to start somewhere. No doubt, as Campbell & Bickhard indicate, how matching of features is executed needs explanation, or specification, but we can't be expected to do everything in one article. That simply was not one of the tasks this target article undertook. Heil suggests that the absence of a detailed account of matching leaves as big a mystery as I accuse CF of leaving. There is a difference, however. CF seems to deny that there is any content (or criterion) by which mental classifications are made, which indeed creates a mystery. I am not denying that there is some mode or manner by which content matching takes place; I just do not tackle that (admittedly difficult) problem in my target article.

Pratt constructs an alternative to a matching scheme of classification. As I understand it, the way the system judges that it believes that P is by waiting for a data-structure P to enter its belief box and then simply tagging this data-structure. At least three problems loom for this approach. First, it seems committed to the incorrigibility of mental-state beliefs; yet incorrigibility is a suspect doctrine. Second, it assumes that there are enough "boxes" for every mental predicate or category, a dubious prospect (as noted in sect. 8 of my target article). Third, it is not clear how the scheme could work for sensations, since there are no data-structures associated with them (nor has anybody ever proposed that there is a box for each sensation type). Pratt tries to embed his general scheme within a functionalist framework, but his brand of functionalism is just SF, not FF. His treatment of the attitudes in terms of data-structures gives no hint that the system conceptualizes the attitudes in functional terms, just that its states are functional states. This does not support FF.

Morris & Richardson are right to discern a problem in the matching-framework treatment of the third-person case. Actually, the same problem arises for first-person past-tense ascriptions. In neither case are there current qualitative states to match the qualitative CRs. There are several ways to handle this. One possibility is that there are IRs of qualitative states which represent them in a similar qualitative code, and which can match qualitative




CRs in a fashion analogous to the qualitative states themselves. Another possibility is dual representations, as mentioned in section 7.

R4.2. Other methodological issues. If I may look past Gurd & Marshall's humor (though not unappreciatively), we seem to be slightly at odds on methodology. They apparently think (or think I think) that the way to study folk psychology is to consult grannies and accept their verbal responses as definitive of how they understand mental states. But nobody thinks we should determine how grannies represent other concepts, such as grandmother or number, by asking them and accepting their responses. Why should this work for mental concepts? I propose a more refined methodology, which considers what their cognitive systems actually do in ascribing mental predicates. (This seems to be missed by Morris & Richardson too, who suspect me of being concerned with what people think they are doing.) Neither methodology for pursuing folk psychology, however, is a good way of getting at the (whole) reality of mental states. Even if the commonsense understanding of the mental is (in certain respects) Cartesian, it does not follow that the truth about mental states, especially the whole truth, is Cartesian. I reject "granny is right," if that means that granny is the fount of all metaphysical truth in the mental domain.

Several points need clarification in connection with Dittrich & Lea's commentary. First, the analogy with visual perception was merely chosen to illustrate CRs and IRs, and to show how the study of CRs could be aided by reflection on the available IRs. No further aspect of similarity between perception and self-knowledge of mental states was intended. Second, their comment about qualitative versus quantitative sensations betokens a misunderstanding of my sense of qualitative. As used by philosophers of mind, this refers to the "subjective feel" of mental states; it does not contrast with quantitative. Finally, contrary to their assertion, my matching framework does not introduce any homunculus.

Butterworth evaluates the two target articles by their approximation to an ecological methodology. I don't share the Gibsonian framework, but this is not the occasion to elaborate on that. Butterworth misreads me as favoring FF rather than opposing it. My opposition to FF does not breed dualism, however, so perhaps we are not so far apart.

R5. The simulation theory

My remarks on the simulation theory in the target article were tangential, so no extended discussion of it is in order. Let me respond, however, to a few specific problems raised by Perner and by Leslie, German & Happé. One problem mentioned by both commentaries is higher-order simulation. According to Perner, a simulation-based prediction of how Mary will describe John's knowledge of the chocolate would require us to "simulate John within a simulation of Mary." He finds this almost impossible, he reports, yet he has little difficulty answering the question. Similarly, Leslie et al. say that a simulation account of recursiveness would have to put "mechanisms within mechanisms," which they doubt will work.

Why shouldn't simulation work? Here is my scenario.

The first thing I do to simulate Mary, in Perner's little example, is generate some initial beliefs she would have about John. I put myself in Mary's shoes of agreeing with John that he will put away the chocolate. I feed an awareness of this agreement into my Mary simulation and allow an inferential process to operate on it. This inferential process outputs the conclusion that John will put the chocolate in some spot X and remember which spot it is. So I ascribe this belief to Mary. (Do I have to simulate Mary simulating John in order to arrive at this belief? I am not sure this is required by the simulation theory, because it's not necessarily part of the theory that agents know that they and others use simulation. But it's not a difficulty anyway. To simulate Mary simulating John, all I have to do is simulate John, being guided by pretend beliefs attributed to Mary.) I am now asked what Mary will say if she is asked whether John knows where the chocolate is. I again imagine myself in Mary's shoes by pretending to have the beliefs I initially attributed to her. I feed these beliefs plus the supposed circumstance of being asked whether John knows where the chocolate is into my decision-making system and it outputs the decision: say "yes." Hence I form the belief that Mary would say "yes." Nothing here seems terribly problematic.
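The scenario just described can be caricatured as ordinary recursion: nesting a simulation within a simulation is nothing more than running the same pretend-input routine again on different pretend inputs. The following is only an expository sketch with invented stand-ins (a toy inference function with hard-coded conclusions, string-labeled beliefs); it is not offered as a model of the actual simulation heuristic, only as evidence that no "mechanism within a mechanism" is required.

```python
# Expository caricature of higher-order simulation, not a cognitive model.
# "Beliefs" are string labels; infer() stands in for the off-line
# inferential process run on pretend inputs.

def infer(pretend_beliefs):
    """Run one's own inference system on pretend (attributed) beliefs."""
    derived = set(pretend_beliefs)
    if "John agreed to put away the chocolate" in pretend_beliefs:
        derived.add("John put the chocolate in spot X")
        derived.add("John remembers where the chocolate is")
    return derived

def simulate(pretend_beliefs, question):
    """Feed pretend beliefs plus a supposed question into the same system."""
    beliefs = infer(pretend_beliefs)
    if question == "Does John know where the chocolate is?":
        return "yes" if "John remembers where the chocolate is" in beliefs else "no"
    return None

# Step 1: attribute initial pretend beliefs to Mary.
marys_beliefs = {"John agreed to put away the chocolate"}
# Step 2: "Mary simulating John" is just another call on the same routine,
# guided by the pretend beliefs attributed to Mary -- ordinary recursion.
answer = simulate(marys_beliefs, "Does John know where the chocolate is?")
print(answer)  # "yes": so I predict Mary will say "yes"
```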

Perner further argues that one cannot use simulation to figure out what a third party sees, because the visual system cannot mock-operate on hypothetical assumptions. True, the visual system can't do this, but perhaps the imagistic system can. Just as one can rotate visual images, one may be able to picture a scene from a permuted perspective. Whether this ultimately involves "theoretical" information is moot, but it doesn't seem to require knowledge of how the imagistic system works. The task is just "handed" to the imagistic system for execution.

It is worth adding that I do not claim that all third-person ascription uses simulation and simulation only. I have always made room for some amount of inductively based information. Nor do I wish to exclude such information for the first-person case. Leslie et al.'s example of the 2-year-old who complains loudly when noticing a nearly healed graze is a case in point. I already (briefly) acknowledged in the target article (sect. 5) that judgments of one's own mental states can be affected by (inductively based) expectations. So I make no blanket rejection of "theoretical" inference in self- or other-ascription. I just doubt that that's where all the action is, or even most of it.

R6. Summary and conclusion

To sum up: We have seen that when the crucial distinction between scientific and folk functionalism is observed, the problems originally raised for the latter remain in force. When classical functionalism (CF) is understood as a version of folk functionalism (FF), and when the need for a contentful microstory of self-ascription is appreciated, CF still suffers from the circularity identified in the target article. Trying to defend the theory-theory (TT) without FF also incurs many difficulties. First, the radical-conceptual-shift version of TT, which Gopnik espouses, is challenged by a variety of recent experiments. Second, forms of TT that distance themselves from FF do not explain in any detail how self-ascriptions and other-ascriptions of mental states can be generated by theoretical inferences. The phenomenological approach to self-ascription has many virtues, and its alleged defects are by no means clear-cut or decisive. Furthermore, empirical evidence supports first-person authority as against a perfect symmetry between first- and third-person knowledge. Finally, the simulation approach to third-person ascription remains viable, and the basic framework I proposed retains its attractions.

Given the extent of the controversy, it would be foolish to conclude that any positive theory has been firmly established. But despite the current popularity of TT and FF, and the comparative unpopularity of phenomenology, we have seen that the former approaches have serious debilities and that a cognitive theory of folk psychology may greatly profit from taking phenomenology seriously.

References

Letters a, c, and r appearing before authors' initials refer to target article, commentary, and response respectively.

Aksu-Koc, A. (1988) The acquisition of aspect and modality. Cambridge University Press. [rAGop]
Alexander, R. D. (1987) The biology of moral systems. De Gruyter. [KS]
Allport, D. A. (1988) What concept of consciousness? In: Consciousness in contemporary science, ed. A. J. Marcel & E. Bisiach. Oxford University Press. [KES]
Anderson, J. R. (1983) The architecture of cognition. Harvard University Press. [RLC]
Anderson, U. & Wright, W. F. (1988) Expertise and the explanation effect. Organizational Behavior and Human Decision Processes 42:250-69. [KAE]
Armstrong, D. M. (1968) A materialist theory of the mind. Routledge & Kegan Paul. [aAIGol, DMA, DJC, KS]
Armstrong, S., Gleitman, L. & Gleitman, H. (1983) What some concepts might not be. Cognition 13(3):263-308. [GR]
Astington, J. W. & Gopnik, A. (1991a) Theoretical explanations of children's understanding of the mind. British Journal of Developmental Psychology 9:7-31. [arAGop, aAIGol]
(1991b) Developing understanding of desire and intention. In: Natural theories of mind: The evolution, development and simulation of second order representations, ed. A. Whiten. Blackwell. [arAGop]
Astington, J. W., Harris, P. L. & Olson, D. R., eds. (1988) Developing theories of mind. Cambridge University Press (Cambridge). [aAGop, aAIGol, BK]
Baars, B. (1988) A cognitive theory of consciousness. Cambridge University Press. [aAIGol]
Baker, L. R. (1987) Saving belief: A critique of physicalism. Princeton University Press. [KES]
Baron-Cohen, S. (1989a) Are autistic children behaviorists? An examination of their mental-physical and appearance-reality distinctions. Journal of Autism and Developmental Disorders 19:579-600. [SB-C]
(1989b) Perceptual role-taking and protodeclarative pointing in autism. British Journal of Developmental Psychology 7:113-27. [SB-C]
(1989c) Joint attention deficits in autism: Towards a cognitive analysis. Development and Psychopathology 1:185-89. [SB-C]
(1991a) The development of a theory of mind in autism: Deviance and delay? In: Psychiatric clinics of North America, special issue on pervasive developmental disorders, ed. M. Konstantareas & Beitchman, 14:33-51. Saunders. [SB-C, PLH, AML]
(1991b) Precursors to a theory of mind: Understanding attention in others. In: Natural theories of mind, ed. A. Whiten. Basil Blackwell. [aAGop, SB-C, DCP]
Baron-Cohen, S., Leslie, A. M. & Frith, U. (1985) Does the autistic child have a "theory of mind"? Cognition 21:37-46. [SB-C, DCP]
Barresi, J. & Moore, C. (1992) Intentionality and social understanding. Unpublished manuscript. [CM]
Bartsch, K. & Wellman, H. (1989) Young children's attribution of action to beliefs and desires. Child Development 60:946-64. [cAIGol, AML]
Bartsch, K. & Wellman, H. M. (1990) Children's talk about beliefs and desires: Evidence of a developing theory of mind. Presented at the 20th Anniversary Symposium of the Jean Piaget Society, May 31, Philadelphia. [KB]
Bateson, G. (1972a) Steps to an ecology of mind. Chandler. [GEB]
(1972b) Redundancy and coding. In: Steps to an ecology of mind. Chandler. [GEB]
Belkin, N., Brooks, H. & Daniels, P. (1987) Knowledge engineering using discourse analysis. International Journal of Man-Machine Studies 27:127-44. [PL]
Bem, D. J. (1972) Self-perception theory. In: Advances in experimental social psychology, vol. 6, ed. L. Berkowitz. Academic Press. [BHP]
Bickhard, M. H. (1973) A model of developmental and psychological processes. Doctoral dissertation, University of Chicago. [RLC]
(1978) The nature of developmental stages. Human Development 21:217-33. [RLC]
(1980) A model of developmental and psychological processes. Genetic Psychology Monographs 102:61-116. [RLC]
(1992) Commentary on the age 4 transition. Human Development 35:182-92. [RLC]

Bickhard, M. H. & Richie, D. M. (1983) On the nature of representation: A case study of James J. Gibson's theory of perception. Praeger. [RLC]
Biederman, I. (1987) Recognition by components: A theory of human image understanding. Psychological Review 94:115-47. [aAIGol, GR]
(1990) Higher-level vision. In: Visual cognition and action, ed. D. Osherson, S. Kosslyn & J. Hollerbach. MIT Press. [aAIGol]
Block, N. (1978) Troubles with functionalism. In: Minnesota studies in philosophy of science IX, ed. C. Savage. University of Minnesota Press. [aAIGol, WEM]
(1980) Troubles with functionalism. In: Readings in the philosophy of psychology, vol. 1, ed. N. Block. Cambridge University Press. [GR]
(1990a) Inverted Earth. In: Philosophical perspectives 4, ed. J. Tomberlin. Ridgeview. [aAIGol]
(1990b) Consciousness and accessibility. Behavioral and Brain Sciences 13:596-98. [aAIGol]
(1991) Evidence against epiphenomenalism. Behavioral and Brain Sciences 14(4):670-72. [aAIGol, MV]
Boas, F. (1911) Kwakiutl. In: Handbook of American Indian languages. Oosterhout, N.B.: Anthropological Publications (reprinted 1969). [BACS]
Bogdan, R., ed. (1991) Mind and common sense: Philosophical essays on commonsense psychology. Cambridge University Press (Cambridge). [KES]
Boghossian, P. (1989) Content and self-knowledge. Philosophical Topics 17:5-26. [aAIGol, PL]
Bolton, P. & Rutter, M. (1991) Genetic influences in autism. International Review of Psychiatry 2:67-80. [SB-C]
Bourdieu, P. (1991) Language and symbolic power. Polity Press. [BACS]
Bowers, K. (1984) On being unconsciously influenced and informed. In: The unconscious reconsidered, ed. K. Bowers & D. Meichenbaum. Wiley. [aAIGol]
Brentano, F. (1874/1973) Psychology from an empirical standpoint, ed. O. Kraus, translated by L. L. McAlister. Routledge & Kegan Paul. (Original work published 1874.) [rAGop, PZ]
Bretherton, I. & Beeghly, M. (1982) Talking about internal states: The acquisition of an explicit theory of mind. Developmental Psychology 18:906-21. [PLH]
Brown, C. & Luper-Foy, S. (1991) Belief and rationality. Synthese 89:323-29. [JR]
Bruner, J. (1983) Child's talk: Learning to use language. Oxford University Press (Oxford). [SB-C]
Bryant, P. E. (1982) The role of conflict and agreement between intellectual strategies in children's ideas about measurement. British Journal of Psychology 73:243-51. [PLH]
Burge, T. (1988) Individualism and self-knowledge. Journal of Philosophy 85:649-63. [aAIGol]
Burt, C. (1962) The concept of consciousness. British Journal of Psychology 33:229-42. [SSR]
Butterworth, G. (1991) The ontogeny and phylogeny of joint visual attention. In: Natural theories of mind, ed. A. Whiten. Basil Blackwell. [aAIGol]
Butterworth, G. E. & Jarrett, N. L. M. (1991) What minds have in common is space: Spatial mechanisms serving joint attention in infancy. British Journal of Developmental Psychology 9:55-72. [GEB]
Byrne, R. & Whiten, A. (1988) Machiavellian intelligence: Social expertise and the evolution of intelligence in monkeys, apes, and humans. Oxford University Press. [MT]
Campbell, J. (1992) The role of physical objects in spatial thinking. In: Problems in the philosophy and psychology of spatial representation, ed. B. Brewer, N. Eilan & R. McCarthy. Blackwell (in press). [JR]

108 BEHAVIORAL AND BRAIN SCIENCES (1993) 16:1

Page 109: A Gopnik - How We Know Our Minds

References/Gopnik/Goldman: Knowing our minds

Campbell, K. (1985) Pain is three-dimensional, inner, and occurrent. Behavioral and Brain Sciences 8:56-57. [aAIGol]
Campbell, R. L. (1992) A shift in the development of natural-kind categories. Human Development 35:156-64. [RLC]
Campbell, R. L. & Bickhard, M. H. (1986) Knowing levels and developmental stages. Karger. [RLC]
Carey, S. (1983) Are children fundamentally different kinds of thinkers and learners than adults? In: Thinking and learning skills: Research and open questions, ed. S. Chipman, J. Segal & R. Glaser. Erlbaum. [CNJ]
(1985) Conceptual change in childhood. MIT Press. [aAGop, aAIGol, RLC]
(1988) Conceptual differences between children and adults. Mind and Language 3:167-81. [aAGop, aAIGol]
Chandler, M. J. (1988) Doubt and developing theories of mind. In: Developing theories of mind, ed. J. W. Astington, D. Olson & P. Harris. Cambridge University Press. [aAGop, DZ]
Chandler, M. J., Fritz, A. & Hala, S. (1989) Small-scale deceit: Deception as a marker of two-, three-, and four-year-olds' early theories of mind. Child Development 60:1263-77. [aAGop, cAIGol]
Charman, T. & Baron-Cohen, S. (1992) Understanding drawings and beliefs: A further test of the metarepresentation theory of autism (Research Note). Journal of Child Psychology and Psychiatry 33:1105-12. [AML]
Charness, N. (1991) Expertise in chess: The balance between knowledge and search. In: Toward a general theory of expertise: Prospects and limits, ed. K. A. Ericsson & J. Smith. Cambridge University Press. [KAE]
Chase, W. G. & Simon, H. A. (1973) Perception in chess. Cognitive Psychology 4:55-81. [aAGop]
Cheney, D. & Seyfarth, R. (1990) How monkeys see the world. University of Chicago Press. [MT]
Chi, M., Glaser, R. & Rees, E. (1982) Expertise in problem-solving. In: Advances in the psychology of human intelligence, vol. 1, ed. R. J. Sternberg. Erlbaum. [aAGop]
Chomsky, N. (1980) Rules and representations. Basil Blackwell. [rAGop, KS]
Churchland, P. M. (1979) Scientific realism and the plasticity of mind. Cambridge University Press. [aAIGol, WEM]
(1981) Eliminative materialism and the propositional attitudes. The Journal of Philosophy 78:67-90. [aAIGol]
(1984) Matter and consciousness. MIT Press. [aAGop, DRO, DCP]
(1985) Reduction, qualia and the direct introspection of brain states. Journal of Philosophy 82(1):8-28. [aAIGol, SD]
(1988a) Perceptual plasticity and theoretical neutrality. Philosophy of Science 55:167-87. [AL]
(1988b) Folk psychology and the explanation of human behavior. Proceedings of the Aristotelian Society, suppl. 62:209-21. [SD]
(1988c) Reduction and the neurological basis of consciousness. In: Consciousness in contemporary science, ed. A. J. Marcel & E. Bisiach. Oxford University Press. [KES]
(1989) A neurocomputational perspective: The nature of mind and the structure of science. MIT Press. [SD, KES]
Churchland, P. M. & Churchland, P. S. (1981) Functionalism, qualia, and intentionality. Philosophical Topics 12:121-45. [aAIGol]
Clark, A. (1987) From folk psychology to naive psychology. Cognitive Science 11:139-54. [KES]
Compton, P. & Jansen, R. (1990) A philosophical basis for knowledge acquisition. Knowledge Acquisition 2:241-57. [PL]
Cooper, L. (1991) Dissociable aspects of the mental representation of visual objects. In: Mental images in human cognition, ed. R. Logie & M. Denis. Elsevier. [aAIGol]
Cultice, J. C., Somerville, S. C. & Wellman, H. M. (1983) Preschoolers' memory monitoring: Feeling-of-knowing judgments. Child Development 54:1480-86. [BHP]
Cummins, R. (1989) Meaning and mental representation. MIT Press. [aAIGol]
D'Andrade, R. (1987) A folk model of the mind. In: Cultural models in language and thought, ed. D. Holland & N. Quinn. Cambridge University Press (Cambridge). [aAIGol, KES]
Davidson, D. (1980) Essays on actions and events. Oxford University Press (Oxford). [aAGop]
(1984) First person authority. Dialectica 38:101-11. [PMP]
(1987) Knowing one's own mind. Proceedings and Addresses of the American Philosophical Association 60:441-57. [rAGop, aAIGol, PMP]
(1989a) The myth of the subjective. In: Relativism, ed. M. Krausz. Notre Dame University Press. [BACS]
(1989b) What is present to the mind? In: The mind of Donald Davidson, ed. J. Brandl & W. Gombocz. Rodopi. [PMP]
De Groot, A. (1946/1978) Thought and choice in chess. Mouton. [aAGop, KAE]

Dennett, D. (1969) Content and consciousness. Humanities Press. [aAIGol]
(1978a) Beliefs about beliefs. Behavioral and Brain Sciences 1:568-70. [rAGop]
(1978b) Brainstorms. MIT Press. [aAIGol, SB-C, AM]
(1978c) Toward a cognitive theory of consciousness. In: Brainstorms. MIT Press. [AMo]
(1987) The intentional stance. MIT Press. [arAGop, SD, PMP]
(1988) Quining qualia. In: Consciousness in contemporary science, ed. A. Marcel & E. Bisiach. Oxford University Press. [aAIGol, SD, KES]
(1991a) Consciousness explained. Little, Brown. [aAIGol, DJC, SD]
(1991b) Real patterns. Journal of Philosophy 87:27-51. [PMP]
(1991c) Two contrasts: Folk craft versus folk science, and belief versus opinion. In: The future of folk psychology, ed. J. D. Greenwood. MIT Press. [KES]
Derr, P. & Thompson, N. S. (1993) Reconstructing Hempelian motivational explanations. Behavior and Philosophy (in press). [NST]
De Villiers, J. G. & De Villiers, P. A. (1978) Language acquisition. Harvard University Press. [MS]
deWaal, F. (1982) Chimpanzee politics. Harper. [MT]
Dittrich, W. H. & Lea, S. E. G. (1992) Motion feature integration and the perception of intention. Department of Psychology, University of Exeter, Biological Psychology Research Group, Internal Report no. 91/05. [WHD]
Donaldson, M. (1978) Children's minds. Fontana. [JMG, MS]
Dretske, F. (1981) Knowledge and the flow of information. Bradford Books/MIT Press. [arAGop, RLC]
(1988) Explaining behavior. MIT Press. [aAIGol]
Ericsson, K. A. (1988) Concurrent verbal reports on reading and text comprehension. Text 8(4):295-325. [KAE]
Ericsson, K. A. & Crutcher, R. J. (1991) Introspection and verbal reports on cognitive processes - two approaches to the study of thought processes: A response to Howe. New Ideas in Psychology 9:57-71. [KAE]
Ericsson, K. & Simon, H. (1980) Verbal reports as data. Psychological Review 87:215-51. [aAIGol, SN]
(1984) Protocol analysis: Verbal reports as data. MIT Press. [aAIGol, KAE]
Estes, D., Wellman, H. M. & Woolley, J. D. (1989) Children's understanding of mental phenomena. In: Advances in child development and behavior, ed. H. Reese. Academic Press. [KB, PLH]
Evans, G. (1982) The varieties of reference. Oxford University Press. [RMG]
Fales, E. (1990) Causation and universals. Routledge & Kegan Paul. [DMA]
Fetzer, J. H. (1989) Language and mentality: Computational, representational, and dispositional conceptions. Behaviorism 17:21-39. [JHF]
(1990) Artificial intelligence: Its scope and limits. Kluwer Academic. [JHF]
(1991) Philosophy and cognitive science. Paragon House. [JHF]
(1992) Primitive concepts: Habits, conventions, and laws. In: Definitions and definability: Philosophical perspectives, ed. J. H. Fetzer, D. Shatz & G. Schlesinger. Kluwer Academic. [JHF]
Fetzer, J. H. (1993) Philosophy of science. Paragon House (in press). [JHF]
Flanagan, O. (1992) Consciousness reconsidered. MIT Press. [aAIGol]
Flavell, J. H. (1988) The development of children's knowledge about the mind: From cognitive connections to mental representations. In: Developing theories of mind, ed. J. W. Astington, P. L. Harris & D. R. Olson. Cambridge University Press. [arAGop, aAIGol, BHP]
Flavell, J. H., Everett, B. A., Croft, K. & Flavell, E. R. (1981) Young children's knowledge about visual perception: Further evidence for the Level 1 - Level 2 distinction. Developmental Psychology 17:99-103. [aAGop]
Flavell, J. H., Flavell, E. R. & Green, F. L. (1987) Young children's knowledge about the apparent-real and pretend-real distinctions. Developmental Psychology 23(6):816-22. [aAGop]
Flavell, J. H., Flavell, E. R., Green, F. L. & Moses, L. J. (1990) Young children's understanding of fact beliefs versus value beliefs. Child Development 61:915-28. [aAGop]
Flavell, J. H., Green, F. L. & Flavell, E. R. (1986) Development of knowledge about the appearance-reality distinction. Monographs of the Society for Research in Child Development, Serial No. 212, 51(1). [aAGop, KB]
Flavell, J. H., Speer, J. R., Green, F. L. & August, D. L. (1981) The development of comprehension monitoring and knowledge about communication. Monographs of the Society for Research in Child Development 46:5, Serial No. 192. [BHP]
Fodor, J. (1975) The language of thought. Crowell. [cAGop, aAIGol, AL]
(1981) Representations. MIT Press. [aAIGol]
(1983) The modularity of mind: An essay on faculty psychology. MIT Press/Bradford Books. [rAGop]
(1984) Observation reconsidered. Philosophy of Science 51:23-43. [AL]


(1985) Fodor's guide to mental representations: The intelligent auntie's vade-mecum. Mind 94:77-100. [aAGop, JMG]
(1987) Psychosemantics. MIT Press. [aAIGol, JR]
(1988) A reply to Churchland's "Perceptual plasticity and theoretical neutrality." Philosophy of Science 55:188-98. [AL]
(1990) A theory of content and other essays. MIT Press. [aAIGol]
Fodor, J. A., Fodor, J. D. & Garrett, M. (1975) The psychological unreality of semantic representations. Linguistic Inquiry 6:515-31. [GR]
Forguson, L. & Gopnik, A. (1988) The ontogeny of common sense. In: Developing theories of mind, ed. J. Astington, D. Olson & P. L. Harris. Cambridge University Press. [aAGop]
Freeman, N. H., Lewis, C. & Doherty, M. J. (1991) Preschoolers' grasp of a desire for knowledge in false-belief prediction: Practical intelligence and verbal report. British Journal of Developmental Psychology 9:7-31. [aAGop]
Frege, G. (1892) Über Sinn und Bedeutung (On sense and reference). Zeitschrift für Philosophie und philosophische Kritik 100:25-50. [arAGop]
Frith, U. (1989) Autism: Explaining the enigma. Blackwell. [DCP]
Frith, U., Morton, J. & Leslie, A. M. (1991) The cognitive basis of a biological disorder: Autism. Trends in Neurosciences 14:433-38. [AML]
Frye, D. (1992) Causes and precursors of children's theories of mind. In: Precursors, causes, and psychopathology, ed. D. Hay & A. Angold. Wiley (in press). [PZ]
Frye, D., Zelazo, P. D. & Palfai, T. (1992) The cognitive basis of theory of mind. Manuscript submitted for publication. [PZ]
Gallup, G. (1986) Self-awareness and the emergence of mind in humans and other primates. In: Psychological perspectives on the self, vol. 3, ed. J. Suls & A. Greenwald. Erlbaum. [MT]
Garfield, J. L. (1988) Belief in psychology: A study in the ontology of the mind. MIT Press. [KES]
Garrett, M. (1990) Sentence processing. In: Language, ed. D. Osherson & H. Lasnik. MIT Press. [aAIGol]
Gelman, S. A. & Markman, E. M. (1986) Categories and induction in young children. Cognition 23:183-209. [LJR]
Gibson, J. J. (1966) The senses considered as perceptual systems. Houghton Mifflin. [GEB, RLC]
(1977) The theory of affordances. In: Perceiving, acting and knowing, ed. R. Shaw & J. Bransford. Erlbaum. [RLC]
(1979) The ecological approach to visual perception. Houghton Mifflin. [aAGop, RLC]
Giddens, A. (1984) The constitution of society. Polity Press. [BACS]
Gillberg, C. (1991) What is autism? International Review of Psychiatry 2:61-66. [SB-C]
Gleitman, H. (1981) Psychology. W. W. Norton. [aAIGol]
Goldman, A. (1989) Interpretation psychologized. Mind and Language 4:161-85. [aAGop, aAIGol, CM, DCP]
(1992a) In defense of the simulation theory. Mind and Language 7(1-2):104-19. [aAGop, aAIGol, DCP]
(1992b) Empathy, mind, and morals. Proceedings and Addresses of the American Philosophical Association, vol. 66, no. 3. [acAIGol]
Gombrich, E. (1960) Art and illusion. Bollingen Foundation. [DRO]
Gomez, J. C. (1991) Visual behaviour as a window for reading the mind of others in primates. In: Natural theories of mind: Evolution, development and simulation of everyday mindreading, ed. A. Whiten. Blackwell. [DCP]
Gopnik, A. (1982) Words and plans: Early language and the development of intelligent action. Journal of Child Language 9:303-18. [aAGop]
(1984) Conceptual and semantic change in scientists and children: Why there are no semantic universals. Linguistics 20:163-79. [aAGop, aAIGol]
(1988) Conceptual and semantic development as theory change. Mind and Language 3(3):197-217. [arAGop, aAIGol, JP]
(1990) Developing the idea of intentionality: Children's theories of mind. Canadian Journal of Philosophy 20(1):89-114. [aAGop, BK]
Gopnik, A. & Astington, J. W. (1988) Children's understanding of representational change and its relation to the understanding of false belief and the appearance-reality distinction. Child Development 59:26-37. [aAGop, KWC, SSR]
Gopnik, A. & Graf, P. (1988) Knowing how you know: Young children's ability to identify and remember the sources of their beliefs. Child Development 59:1366-71. [aAGop, SN]
Gopnik, A. & Meltzoff, A. (in press) Minds, bodies and persons: Young children's understanding of the self and others as reflected in imitation and "theory of mind" research. In: Self-awareness in animals and humans, ed. S. Parker & R. Mitchell. Cambridge University Press. [rAGop]

Gopnik, A. & Slaughter, V. (1991) Young children's understanding of changes in their mental states. Child Development 62:98-110. [aAGop, cAIGol, KWC, PLH, JP, MS]
Gopnik, A. & Wellman, H. (1992) Why the child's theory of mind really is a theory. Mind and Language 7(1-2):145-71. [arAGop, DCP]
(in press) The "theory theory" of cognitive development. In: Domain specificity in culture and cognition, ed. L. Hirschfeld & S. Gelman. Cambridge University Press. [arAGop]
Gordon, R. (1986) Folk psychology as simulation. Mind and Language 1:158-71. [aAGop, aAIGol, WHD, CM, DCP]
(1992a) Reply to Stich and Nichols. Mind and Language 7:81-91. [aAIGol, RMG]
(1992b) Reply to Perner and Howes. Mind and Language 7:92-98. [RMG]
(1992c) The simulation theory and the theory theory. Mind and Language 7(1-2):11-35. [aAGop, RMG, CM, DCP]
(1992d) The simulation theory: Objections and misconceptions. Mind and Language 6:5-27. [AMo]
Greenwell, M. (1990) Knowledge elicitation: Principles and practice. In: Understanding knowledge engineering, ed. M. McTear & T. Anderson. Ellis Horwood. [PL]
Greenwood, J. D., ed. (1991) The future of folk psychology. MIT Press. [KES]
Grice, H. P. (1975) Logic and conversation. In: Syntax and semantics, vol. 3: Speech acts, ed. P. Cole & J. L. Morgan. Academic Press. [MS]
Gruber, T. (1989) The acquisition of strategic knowledge. Academic Press. [PL]
Gunderson, K. (1990) Consciousness and intentionality: Robots with and without the right stuff. In: Propositional attitudes: The role of content in logic, language, and mind, ed. C. A. Anderson & J. Owens. Center for the Study of Language and Information, Stanford University. [KG]
Hadwin, J. & Perner, J. (1991) Pleased and surprised: Children's cognitive theory of emotion. British Journal of Developmental Psychology 9:215-34. [DRO]
Harman, G. (1990) The intrinsic quality of experience. In: Philosophical perspectives 4, ed. J. Tomberlin. Ridgeview. [aAIGol]
Harre, R. (1983) Personal being. Blackwell. [BACS]
(1989) The "self" as a theoretical concept. In: Relativism, ed. M. Krausz. Notre Dame University Press. [BACS]
Harris, P. (1989) Children and emotion: The development of psychological understanding. Basil Blackwell. [aAIGol, WHD]
(1991) The work of the imagination. In: Natural theories of mind, ed. A. Whiten. Basil Blackwell. [aAGop, aAIGol, MCh, PLH, CM]
(1992) From simulation to folk psychology: The case for development. Mind and Language 7:120-44. [aAIGol, PLH]
Hayes, P. (1985) The second naive physics manifesto. In: Formal theories of the commonsense world, ed. J. Hobbs & R. Moore. Ablex. [aAIGol]
Heckhausen, H. (1982) The development of achievement motivation. In: Review of child development research, vol. 6, ed. W. W. Hartup. University of Chicago Press. [WHD]
(1983) Concern with one's own competence: Developmental shifts in person-environment interaction. In: Human development: An interactional perspective, ed. D. Magnussen & V. Allen. Academic Press. [WHD]
Heil, J. (1992) The nature of true minds. Cambridge University Press. [JH]
Hill, C. S. (1991) Sensations: A defense of type materialism. Cambridge University Press. [CSH]
Hill, T., Lewicki, P., Czyzewska, M. & Boss, A. (1989) Self-perpetuating development of encoding biases in person perception. Journal of Personality and Social Psychology 57:373-87. [MCz]
Hill, T., Lewicki, P., Czyzewska, M. & Schuller, G. (1990) The role of learned inferential encoding rules in the perception of faces: Effects of nonconscious self-perpetuation of a bias. Journal of Experimental Social Psychology 26:350-71. [MCz]
Hobson, P. (1987) On acquiring knowledge about people and the capacity for pretense. Psychological Review 97:114-21. [MT]
(1991) Against the theory of "Theory of mind." British Journal of Developmental Psychology 9:33-51. [aAGop]
Hoffman, R. (1987) The problem of extracting knowledge of experts from the perspective of experimental psychology. AI Magazine 8:53-67. [PL]
Hogrefe, G. J., Wimmer, H. & Perner, J. (1986) Ignorance versus false belief: A developmental lag in attribution of epistemic states. Child Development 57:567-82. [aAGop]
Holt, E. H. (1914) The concept of consciousness. Macmillan. [NST]
Holt, E. H., Marvin, W. T., Montague, W. P., Perry, R. B., Pitkin, W. P. & Spaulding, E. G. (1912) The new realism: Cooperative studies in philosophy. Macmillan. [NST]


Horgan, T. & Woodward, J. (1985) Folk psychology is here to stay. The Philosophical Review 94:197-226. [KES]
Hume, D. (1748) An enquiry concerning human understanding. Clarendon Press. [aAIGol]
(1777) Enquiry concerning the human understanding. Reprinted in: Hume's Enquiries (1902), ed. L. A. Selby-Bigge. Oxford University Press. [DMA]
Humphrey, N. (1984) Consciousness regained. Oxford University Press. [KS]
Hutton, D., Perner, J. & Baker, S. (1991) "Prelief" and "Betence": Children's concepts covering cases of pretence and false belief. Poster presented at the Annual Conference of the British Psychological Society's Developmental Section, University of Cambridge. [JP]
Jackendoff, R. (1987) Consciousness and the computational mind. MIT Press. [arAIGol, KG]
Jackson, F. (1982) Epiphenomenal qualia. Philosophical Quarterly 32:127-36. [aAIGol, KG, FJ]
(1986) What Mary didn't know. Journal of Philosophy 83:291-95. [aAIGol, KG, FJ]
James, W. (1884) What is an emotion? Mind 9:188-205. [WHD]
(1890/1950) The principles of psychology, vol. 1. Dover. [PZ]
Johnson, C. (1988) Theory of mind and the structure of conscious experience. In: Developing theories of mind, ed. J. Astington, P. Harris & D. Olson. Cambridge University Press. [aAGop, aAIGol, CNJ]
Josephson, B. D. & Hauser, H. M. (1981) Multistage acquisition of intelligent behaviour. Kybernetes 10:11-15. [BDJ]
Just, M. A. & Clark, H. H. (1973) Drawing inferences from the presuppositions and implications of affirmative and negative sentences. Journal of Verbal Learning and Verbal Behavior 12:21-31. [LJR]
Kahneman, D. & Miller, D. (1986) Norm theory: Comparing reality to its alternatives. Psychological Review 93:136-53. [JH]
Karmiloff-Smith, A. (1979a) A functional approach to child language. Cambridge University Press (Cambridge). [DCP]
(1979b) Micro- and macro-developmental changes in language acquisition and other representational systems. Cognitive Science 3:81-118. [DCP]
(1981) Getting developmental differences or studying child development? Cognition 10:151-58. [DCP]
(1988) The child is a theoretician, not an inductivist. Mind and Language 3(3):183-97. [aAGop]
Karmiloff-Smith, A. & Inhelder, B. (1974/75) If you want to get ahead, get a theory. Cognition 3(3):195-212. [aAGop, aAIGol]
Keil, F. (1987) Conceptual development and category structure. In: Concepts and conceptual development, ed. U. Neisser. Cambridge University Press. [aAGop]
(1989) Concepts, kinds, and cognitive development. MIT Press. [aAIGol, RLC, AL, LJR]
Keysar, B. (1991) The illusory transparency of utterances: A problem of perspective. In: The 32nd Annual Meeting of the Psychonomic Society, San Francisco, CA. [BK]
Kiparsky, P. & Kiparsky, C. (1970) Fact. In: Progress in linguistics, ed. M. Bierwisch & K. E. Heidolph. Mouton. [LJR]
Kopp, C. B. (1982) Antecedents of self-regulation: A developmental perspective. Developmental Psychology 18:199-214. [WHD]
Kripke, S. A. (1982) Wittgenstein on rules and private language. Harvard University Press. [RLC]
Kuhn, D., Amsel, E. & O'Loughlin, M. (1988) The development of scientific thinking skills. Academic Press. [RLC]
Landau, B. (1982) Will the real grandmother please stand up? The psychological reality of dual meaning representations. Journal of Psycholinguistic Research 11:47-62. [aAIGol]
Lane, D. M. & Robertson, L. (1979) The generality of the levels of processing hypothesis: An application to memory for chess positions. Memory & Cognition 7:253-56. [KAE]
Lange, C. G. (1885) Om Sindsbevaegelser: Et psyko-fysiologiske Studie. Rasmussen. (English version: James, W. & Lange, C. (1922) The emotions. Williams & Wilkins.) [WHD]
Lempers, J. D., Flavell, E. R. & Flavell, J. H. (1977) The development in very young children of tacit knowledge concerning visual perception. Genetic Psychology Monographs 95:3-53. [arAGop]
Leslie, A. (1987) Pretense and representation: The origins of "theory of mind." Psychological Review 94:412-26. [arAGop, JMG, AML, DCP]
(1988) Some implications of pretense for children's theories of mind. In: Developing theories of mind, ed. J. W. Astington, P. L. Harris & D. R. Olson. Cambridge University Press. [aAGop]

Leslie, A. M. & Frith, U. (1990) Prospects for a cognitive neuropsychology of autism: Hobson's choice. Psychological Review 97:122-31. [AML]
Leslie, A. M. & Roth, D. (1992) What autism teaches us about metarepresentation. In: Understanding other minds: Perspectives from autism, ed. S. Baron-Cohen, H. Tager-Flusberg & D. Cohen. Oxford University Press (Oxford) (in press). [AML]
Leslie, A. M. & Thaiss, L. (1992) Domain specificity in conceptual development: Neuropsychological evidence from autism. Cognition 43:225-51. [AML]
Lewicki, P. (1986) Nonconscious social information processing. Academic Press. [MCz]
Lewicki, P., Czyzewska, M. & Hoffman, H. (1987) Unconscious acquisition of complex procedural knowledge. Journal of Experimental Psychology: Learning, Memory and Cognition 13:523-30. [MCz]
Lewicki, P., Hill, T. & Czyzewska, M. (1992) Nonconscious acquisition of information. American Psychologist 47:796-801. [MCz]
Lewis, C. & Osborne, A. (1991) Three-year-olds' problem with false belief: Conceptual deficit or linguistic artifact? Child Development 61:1514-19. [aAGop]
Lewis, D. (1966) An argument for the identity theory. Journal of Philosophy 63:17-25. [aAIGol]
(1970) How to define theoretical terms. Journal of Philosophy 67:427-46. [aAIGol]
(1972) Psychophysical and theoretical identifications. Australasian Journal of Philosophy 50:249-58. [aAIGol, GR]
Loar, B. (1981) Mind and meaning. Cambridge University Press. [aAIGol]
Lyons, W. (1986) The disappearance of introspection. MIT Press. [KES]
Marr, D. (1981) Vision. MIT Press. [rAGop]
McCarthy, J. & Hayes, P. (1969) Some philosophical problems from the standpoint of artificial intelligence. In: Machine intelligence, ed. B. Meltzer & D. Michie. American Elsevier. [aAIGol]
McCloskey, M. (1983) Naive theories of motion. In: Mental models, ed. D. Gentner & A. Stevens. Erlbaum. [aAIGol]
McGraw, K. & Westphal, C., eds. (1990) Readings in knowledge acquisition. Ellis Horwood. [PL]
McNamara, T. & Sternberg, R. (1983) Mental models of word meaning. Journal of Verbal Learning and Verbal Behavior 22:449-74. [aAIGol]
Meltzoff, A. N. (1990) Foundations for developing a concept of self: The role of imitation in relating self to other and the value of social mirroring, social modeling and self practice in infancy. In: The self in transition: Infancy to childhood, ed. D. Cicchetti & M. Beeghly. University of Chicago Press. [rAGop]
Meltzoff, A. & Gopnik, A. (1993) The role of imitation in understanding persons and developing theories of mind. In: Understanding other minds: Perspectives from autism, ed. S. Baron-Cohen, H. Tager-Flusberg & D. Cohen. Oxford University Press (Oxford) (in press). [arAGop]
Meltzoff, A. & Moore, M. (1977) Imitation of facial and manual gestures by human neonates. Science 198:75-78. [aAIGol]
(1983) Newborn infants imitate adult facial gestures. Child Development 54:702-09. [aAIGol]
Millikan, R. (1984) Language, thought, and other biological categories. Bradford Books/MIT Press. [arAGop, aAIGol]
(1986) Thoughts without laws; cognitive science with content. Philosophical Review 95:47-80. [aAIGol]
(1989) In defense of proper functions. Philosophy of Science 56:288-302. [aAIGol]
Mitchell, P. & Lacohee, H. (1991) Children's early understanding of false belief. Cognition 39:107-27. [aAGop, acAIGol, JP, MS]
Moore, C., Pure, K. & Furrow, D. (1990) Children's understanding of the modal expression of speaker certainty and uncertainty and its relation to the development of a representational theory of mind. Child Development 61:722-30. [arAGop]
Morton, A. (1980) Frames of mind. Oxford University Press. [aAIGol]
(1993) Mathematical models: Questions of trustworthiness. British Journal for the Philosophy of Science (in press). [AMo]
Moses, L. J. & Flavell, J. H. (1990) Inferring false beliefs from actions and reactions. Child Development 61:929-45. [aAGop]
Mühlhäusler, P. & Harre, R. (1990) Pronouns and people: The linguistic construction of social and personal identity. Blackwell. [BACS]
Murphy, G. & Medin, D. (1985) The role of theories in conceptual coherence. Psychological Review 92:289-316. [aAIGol, RLC]
Nagel, T. (1974) What is it like to be a bat? Philosophical Review 83:435-50. [aAIGol, FJ]
Nelson, K. A. (1992) The emergence of autobiographical memory. Human Development 35:172-77. [RLC]
Newell, A. & Simon, H. (1976) Computer science as empirical inquiry: Symbols and search. Communications of the Association for Computing Machinery 19:113-26. [aAIGol]
Newell, A. (1990) Unified theories of cognition. Harvard University Press. [RLC]


Nisbett, R. E. & Ross, L. (1980) Human inference: Strategies and shortcomings of social judgment. Prentice-Hall. [arAGop, MCz, SN]

Nisbett, R. E. & Wilson, T. D. (1977) Telling more than we can know: Verbal reports on mental processes. Psychological Review 84:231-59. [aAGop, aAIGol, KB, BK, SN]

Norris, R. & Millan, S. (1991) Theory of mind: New directions. Social Psychology Seminar. [DCP]

O'Neill, D., Astington, J. W. & Flavell, J. (1992) Young children's understanding of the role that sensory experiences play in knowledge acquisition. Child Development 63:474-91. [arAGop]

O'Neill, D. & Gopnik, A. (1991) Young children's understanding of the sources of their beliefs. Developmental Psychology 27:390-97. [aAGop, SN, BHP]

Peirce, C. S. (1868/1955) Some consequences of four incapacities. In: Philosophical writings of Peirce, ed. J. Buchler. Dover. [JHF]

Perner, J. (1991a) On representing that: The asymmetry between belief and intention in children's theory of mind. In: Children's theories of mind, ed. D. Frye & C. Moore. Erlbaum. [JP]

(1991b) Understanding the representational mind. MIT Press. [aAIGol, arAGop, KB, RLC, AML, BHP, DCP]

Perner, J. & Howes, D. (1992) "He thinks he knows": And more developmental evidence against the simulation (role-taking) theory. Mind and Language 6:66-80. [DRO, JP]

Perner, J., Leekam, S. & Wimmer, H. (1987) 3-year-olds' difficulty understanding false belief: Cognitive limitation, lack of knowledge or pragmatic misunderstanding. British Journal of Developmental Psychology 5:125-37. [aAGop, MS]

Perner, J. & Ogden, J. E. (1988) Knowledge for hunger: Children's problem with representation in imputing mental states. Cognition 29:47-61. [KWC]

Perner, J. & Ruffman, T. (1992) Theory of mind: You catch it from your sibs. Unpublished manuscript. [rAGop]

Piaget, J. (1954) The construction of reality in the child. Basic Books. [GEB]

(1977) Recherches sur l'abstraction réfléchissante (2 vols.). Presses Universitaires de France. [RLC]

(1986) Essay on necessity. Human Development 29:301-14. (Originally published 1977.) [RLC]

Pillow, B. H. (1989) Early understanding of perception as a source of knowledge. Journal of Experimental Child Psychology 47(1):116-29. [arAGop, KWC]

Popper, K. (1968) The logic of scientific discovery. Harper & Row. [AL]

Potter, S. (1952) One-upmanship. Holt. [cAGop]

Povinelli, D. J. & DeBlois, S. (1992) Young children's understanding of knowledge formation in themselves and others. Journal of Comparative Psychology 106:228-38. [KWC]

Pratt, C. & Bryant, P. E. (1990) Young children understand that looking leads to knowing (so long as they are looking into a single barrel). Child Development 61:973-82. [rAGop, KWC]

Premack, D. & Woodruff, G. (1978) Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences 1:515-26. [aAIGol, DCP, JP]

Putnam, H. (1960) Minds and machines. In: Dimensions of mind, ed. S. Hook. New York University Press. [aAIGol]

(1967) The mental life of some machines. In: Intentionality, minds and perception, ed. H. Castaneda. Wayne State University Press. [aAIGol]

(1975) The meaning of "Meaning." In: Language, mind and knowledge, ed. K. Gunderson. University of Minnesota Press. [aAIGol]

(1981) Reason, truth and history. Cambridge University Press (Cambridge). [DRO]

(1988) Representation and reality. MIT Press. [PMP]

Pylyshyn, Z. (1980) Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences 3:111-32. [cAGop]

(1984) Computation and cognition. MIT Press/Bradford Books. [rAGop]

Quine, W. V. O. (1956) Quantifiers and propositional attitudes. Journal of Philosophy 53:177-87. [aAGop, DRO]

Rachlin, H. (1985) Pain and behavior. Behavioral and Brain Sciences 8:43-53. [aAIGol, HR]

(1993) Two sciences of psychology: The discovery of mental mechanisms and the description of mental life. Oxford University Press (in press). [HR]

Rakover, S. S. (1983) Hypothesizing from introspections: A model for the role of mental entities in psychological explanation. Journal for the Theory of Social Behaviour 13:211-30. [SSR]

(1990) Metapsychology: Missing links in behavior, mind and science. Paragon/Solomon. [SSR]

(1992) Consciousness explained? A commentary on Dennett's Consciousness explained. International Studies in Philosophy (in press). [SSR]

Reid, T. (1788) Essays on the active powers of man. Reprinted in: Thomas Reid's inquiry and essays (1975), ed. K. Lehrer & R. E. Beanblossom. Bobbs-Merrill. [DMA]

Rey, G. (1983) Concepts and stereotypes. Cognition 15:237-62. [aAIGol, GR]

(1985) Concepts and conceptions. Cognition 19:297-303. [GR]

(1992) Sensational sentences switched. Philosophical Studies. [GR]

Rips, L. J. (1989) Similarity, typicality, and categorization. In: Similarity and analogical reasoning, ed. S. Vosniadou & A. Ortony. Cambridge University Press. [LJR]

Rips, L. & Conrad, F. (1989) Folk psychology of mental activities. Psychological Review 96:187-207. [aAIGol]

Rock, I. (1983) The logic of perception. MIT Press. [rAGop]

Rommetveit, R. (1985) Language acquisition as increasing linguistic structuring of experience and symbolic behavior control. In: Culture, communication, and cognition: Vygotskian perspectives, ed. J. V. Wertsch. Cambridge University Press (Cambridge). [BACS]

Roth, D. & Leslie, A. M. (1991) The recognition of attitude conveyed by utterance: A study of preschool and autistic children. British Journal of Developmental Psychology 9:315-30. (Reprinted in: Perspectives on the child's theory of mind, ed. G. E. Butterworth, P. L. Harris, A. M. Leslie & H. M. Wellman. Oxford University Press.) [AML]

Russell, B. (1905) On denoting. Mind 14:479-93. [arAGop]

Russell, J., Mauthner, N., Sharpe, S. & Tidswell, T. (1991) The "windows task" as a measure of strategic deception in preschoolers and autistic subjects. British Journal of Developmental Psychology 9:331-49. [JR]

Ryle, G. (1949) The concept of mind. Barnes & Noble. [aAGop]

Samet, J. (1993) Autism and the theory of mind: Some philosophical perspectives. In: Understanding other minds: Perspectives from autism, ed. S. Baron-Cohen, H. Tager-Flusberg & D. J. Cohen. Oxford University Press. [DZ]

Sartre, J. P. (1948) Why write? In: Critical theory since Plato, ed. Hazard Adams. Harcourt Brace Jovanovich. [KG]

Saunders, B. A. C. (1992) The invention of basic colour terms. ISOR. [BACS]

Schachter, S. & Singer, J. (1962) Cognitive, social and physiological determinants of emotional state. Psychological Review 69:379-99. [aAIGol, BWK]

Schacter, D. (1989) On the relation between memory and consciousness. In: Varieties of memory and consciousness: Essays in honor of Endel Tulving, ed. H. Roediger III & F. Craik. Erlbaum. [aAIGol]

Schacter, D., Cooper, L. & Delaney, S. (1990) Implicit memory for visual objects and the structural description system. Bulletin of the Psychonomic Society 28:367-72. [aAIGol]

Schacter, D., Cooper, L., Delaney, S., Peterson, M. & Tharan, M. (1991) Implicit memory for possible and impossible objects: Constraints on the construction of structural descriptions. Journal of Experimental Psychology: Learning, Memory, and Cognition 17:3-19. [aAIGol]

Schiffer, S. (1981) Truth and the theory of content. In: Meaning and understanding, ed. H. Parret & J. Bouveresse. de Gruyter. [aAIGol]

Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3:417-57. [aAGop, KG, BK]

(1983) Intentionality: An essay in the philosophy of mind. Cambridge University Press (Cambridge). [aAGop, AL]

(1984) Minds, brains and science. Harvard University Press. [aAGop]

(1990) Consciousness, explanatory inversion, and cognitive science. Behavioral and Brain Sciences 13:585-642. [arAGop, aAIGol, DJC, WHD, PL, PZ]

(1991) Intentionalistic explanations in the social sciences. Philosophy of Social Science 21(3):332-44. [rAGop, BACS]

Sellars, W. (1956) Empiricism and the philosophy of mind. In: Minnesota studies in the philosophy of science 1, ed. H. Feigl & M. Scriven. University of Minnesota Press. [aAIGol, SN]

(1963) Science, perception and reality. Routledge & Kegan Paul. [rAGop, DRO]

Shatz, M., Wellman, H. M. & Silber, S. (1983) The acquisition of mental verbs: A systematic investigation of the child's first reference to mental state. Cognition 14:301-21. [aAGop, PLH]

Shepard, R. (1978) The mental image. American Psychologist 33:125-37. [DRO]

Shoemaker, S. (1975) Functionalism and qualia. Philosophical Studies 27:291-315. [aAIGol, DJC, BWK, SS]

(1988) On knowing one's own mind. Philosophical Perspectives 4:187-214. [SS]

(1991a) Qualia and consciousness. Mind 100:507-24. [SS]

(1991b) Rationality and self-consciousness. In: The opened curtain, a U.S.-Soviet philosophy summit, ed. K. Lehrer & E. Sosa. Westview. [SS]


Shotter, J. (1990) Knowing of the third kind: Selected writings on psychology, rhetoric and the culture of everyday life. ISOR. [BACS]

Shotter, J. & Newson (1974) How babies communicate. New Society 29:345-47. [BACS]

Siegal, M. (1991a) A clash of conversational worlds: Interpreting cognitive development through communication. In: Perspectives on socially shared cognition, ed. L. B. Resnick, J. M. Levine & S. D. Teasley. American Psychological Association. [MS]

(1991b) Knowing children: Experiments in conversation and cognition. Erlbaum. [MS]

Siegal, M. & Beattie, K. (1991) Where to look first for children's understanding of false beliefs. Cognition 38:1-12. [aAGop, cAIGol, MS]

Siegal, M. & Peterson, C. C. (1992) Knowing what to say: Experiments on false beliefs, truthfulness, and suggestibility in young children. Unpublished manuscript, University of Queensland. [MS]

Sigman, M., Mundy, P., Ungerer, J. & Sherman, T. (1986) Social interactions of autistic, mentally retarded, and normal children and their caregivers. Journal of Child Psychology and Psychiatry 27:647-56. [SB-C]

Smiley, P. & Huttenlocher, J. (1989) Young children's acquisition of emotion concepts. In: Children's understanding of emotion, ed. C. Saarni & P. L. Harris. Cambridge University Press (Cambridge). [KWC]

Smith, C., Carey, S. & Wiser, M. (1985) On differentiation: A case study of the development of the concepts of size, weight, and density. Cognition 21:177-237. [aAIGol]

Smith, E. & Medin, D. (1981) Categories and concepts. Harvard University Press. [aAIGol]

Sodian, B. (1991) The development of deception in young children. British Journal of Developmental Psychology 9:173-88. [aAGop]

Sorensen, R. A. (1992) Thought experiments and the epistemology of laws. Canadian Journal of Philosophy 22:15-44. [DMA]

Spitz, R. (1957) No and yes: On the genesis of human communication. International University Press. [WHD]

Stalnaker, R. (1984) Inquiry. MIT Press. [PMP]

Stanovich, K. E. (1989) Implicit philosophies of the mind: The dualism scale and its relationships with religiosity and belief in extrasensory perception. Journal of Psychology 123:5-23. [KES]

Stich, S. (1983) From folk psychology to cognitive science: The case against belief. MIT Press. [aAGop, aAIGol, AL, DCP]

Stich, S. & Nichols, S. (1992) Folk psychology: Simulation or tacit theory? Mind and Language 7:87-97. [AML]

Strawson, P. F. (1959) Individuals. Methuen. [AMo, JR]

(1964) Persons. In: Essays in philosophical psychology, ed. D. Gustafson. Anchor Books. [rAGop, DRO]

Sugarman, S. (1987) Piaget's construction of the child's reality. Cambridge University Press. [CNJ]

Taylor, C. (1985) Philosophy and the human sciences. Cambridge University Press. [aAGop]

Thompson, N. S. (1987) Natural design and the future of comparative psychology. Journal of Comparative Psychology 101(3):282-86. [NST]

(1992) The many perils of ejective anthropomorphism. In: Anthropomorphism, anecdotes, and animals: The emperor's new clothes? ed. R. W. Mitchell, H. L. Miles & N. S. Thompson. Nebraska University Press (in press). [NST]

Tomasello, M. (1992) The interpersonal origins of self-concept. In: Ecological and interpersonal knowledge of the self, ed. U. Neisser. Cambridge University Press (in press). [MT]

Tomasello, M., Kruger, A. & Ratner, H. (1993) Cultural learning. Behavioral and Brain Sciences 16(3) (in press). [MT]

Trabasso, T. & Suh, S. (1992) Understanding text: Achieving explanatory coherence through on-line inferences and mental operations in working memory. Discourse Processes (in press). [KAE]

Trevarthen, C. & Hubley, P. (1978) Secondary intersubjectivity. In: Action, gesture, and symbol: The emergence of language, ed. A. Lock. Academic Press. [MT]

Tulving, E. (1985) Memory and consciousness. Canadian Psychology 26:1-12. [aAGop]

Tursky, B., Jamner, L. & Friedman, R. (1982) The pain perception profile: A psychophysical approach to the assessment of pain report. Behavior Therapy 13:376-94. [aAIGol]

VanLehn, K. (1989) Problem solving and cognitive skill acquisition. In: Foundations of cognitive science, ed. M. I. Posner. MIT Press. [DCP]

Velmans, M. (1990) Is the mind conscious, functional, or both? Behavioral and Brain Sciences 13(4):629-30. [MV]

(1991a) Is human information processing conscious? Behavioral and Brain Sciences 14(4):651-69. [aAIGol, MV]

(1991b) Consciousness from a first-person perspective. [Response to commentary.] Behavioral and Brain Sciences 14(4):702-19. [aAIGol, MV]

Vygotsky, L. S. (1966) The development of the higher mental functions. In: Psychological research in the USSR, vol. 1, ed. A. N. Leont'ev & A. Smirnov. Progress. [BACS]

(1986) Thought and language. MIT Press. [BACS]

Warrington, E. & Taylor, A. (1978) Two categorical stages of object recognition. Perception 7:695-705. [aAIGol]

Wellman, H. (1988) First steps in the child's theorizing about the mind. In: Developing theories of mind, ed. J. Astington, P. Harris & D. Olson. Cambridge University Press. [aAIGol]

(1990) The child's theory of mind. Bradford Books/MIT Press. [arAGop, aAIGol, KB, SB-C, RLC, BK, SN, JP, DZ]

Wellman, H. M. & Bartsch, K. (1988) Young children's reasoning about beliefs. Cognition 30:239-77. [arAGop, KB, AML, DZ]

Wellman, H. M. & Estes, D. (1986) Early understanding of mental entities: A reexamination of childhood realism. Child Development 57:910-23. [aAGop, PLH]

Wellman, H. M. & Gelman, S. (1987) Children's understanding of the non-obvious. In: Advances in the psychology of intelligence, vol. 4, ed. R. Sternberg. Erlbaum. [aAGop]

Wellman, H. M. & Woolley, J. D. (1990) From simple desires to ordinary beliefs: The early development of everyday psychology. Cognition 35:245-75. [aAGop, KB]

White, P. (1988) Knowing more about what we can tell: "Introspective access" and causal report accuracy 10 years later. British Journal of Psychology 79:13-45. [aAIGol]

Whiten, A., ed. (1990) Natural theories of mind. Basil Blackwell. [aAGop]

Wilkes, K. V. (1984) Is consciousness important? British Journal of Philosophy of Science 35:223-43. [KES]

(1988) Real people: Personal identity without thought experiments. Oxford University Press (Oxford). [KES]

Wilson, M. (1991) Privileged access. Department of Psychology, University of California, Berkeley. [aAIGol]

Wilson, T. D. (1985) Strangers to ourselves: The origins and accuracy of beliefs about one's own mental states. In: Attribution: Basic issues and applications, ed. J. Harvey & C. Weary. Academic Press. [aAIGol]

Wimmer, H. & Hartl, M. (1991) The Cartesian view and the theory view of mind: Developmental evidence from understanding false belief in self and other. British Journal of Developmental Psychology 9:125-28. [aAGop]

Wimmer, H., Hogrefe, G. J. & Perner, J. (1988a) Children's understanding of informational access as a source of knowledge. Child Development 59:386-96. [aAGop, KWC, SN]

Wimmer, H., Hogrefe, J-G. & Sodian, B. (1988b) A second stage in children's conception of mental life: Understanding informational access as origins of knowledge and belief. In: Developing theories of mind, ed. J. W. Astington, P. Harris & D. Olson. Cambridge University Press. [aAGop, RMG]

Wimmer, H. & Perner, J. (1983) Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception. Cognition 13:103-28. [arAGop, aAIGol, GEB, DCP, DZ]

Wittgenstein, L. (1953) Philosophical investigations. Macmillan. [aAGop, aAIGol, BACS, AW]

(1967) Zettel, ed. G. Anscombe & G. von Wright. Basil Blackwell. [aAIGol]

Zaitchik, D. (1990) When representations conflict with reality: The preschooler's problem with false beliefs and "false" photographs. Cognition 35:41-68. [arAGop, AML]

(1991) Is only seeing really believing? Sources of the true belief in the false belief task. Cognitive Development 6:91-103. [rAIGol, PLH, DCP, DZ]

Zelazo, P. D. (1992) Primary experience: Towards a characterization of newborn consciousness. Manuscript submitted for publication. [PZ]

Zelazo, P. D., Palfai, T. & Frye, D. (1992) Embedded-rule use in sorting, causality and theory of mind. Infant Behavior and Development (Special ICIS Issue) 15:784. [PZ]

Zelazo, P. D. & Reznick, J. S. (1990) Ontogeny and intentionality. Behavioral and Brain Sciences 13:631-32. [PZ]
