
Lesser Minds


Lesser Minds © 2015 Joost van der Leij - [email protected]

Introduction

When it comes to the question of how the brain produces consciousness, no one [1] - not even scientists, cognitivists or philosophers - has a clue. Even if we, from a physical perspective, knew everything there is to know about the brain and called this X, we could still ask: how is it that consciousness emerges from X? This explanatory gap between our physical explanation of the brain and any explanation of consciousness is, within philosophy, called the hard problem of consciousness, or in short: the hard problem. In principle there are three ways to deal with the hard problem: acknowledge it (Block 2002), deny its existence (Dennett 1991) or change the subject (Noë 2009). This paper acknowledges the hard problem and investigates whether the way Noë changes the subject makes sense. The option of denying the hard problem is mostly ignored in this paper.

Noë argues for an extended mind (Noë 2009). Here a person has a brain and a body, and is embedded in an environment. Consciousness is an activity of this person and as such extended, in the sense that the environment is, in part, an element of that person's consciousness. In order to evade the hard problem, Noë wants to change it by putting pressure, on the one hand, on the idea that a physical explanation of the brain is enough for the hard problem, and, on the other hand, on the idea that consciousness is limited to the brain. In this way Noë wants to make the hard problem of consciousness a special case of the hard problem of life in general. Expanding consciousness to include more of the body and the environment is going to make the hard problem even more difficult. No, to me it seems that if progress can be made at all in solving the hard problem, we need to make the mind smaller rather than bigger - but not so small that, as with philosophers like Dennett (Dennett 1991), the hard problem disappears completely. The smallest version of the mind short of complete disappearance is Stich's Syntactic Theory of Mind (STM), a highly controversial idea that is not even well supported by Stich himself (Stich 1983). Fortunately, Stich hasn't formally rescinded STM either.

There is an argument from Block's harder problem of consciousness (Block 2002) - not to be confused with the hard problem of consciousness - which shows that the most interesting way forward on the hard problem is a syntactical approach. The harder problem of consciousness is that even if we somehow get a highly reliable story about how our human brain produces consciousness, we still need a better story of consciousness in general to cover future robots and/or computers and/or aliens from outer space that we are going to meet eventually, and who all impress us sufficiently that we start to believe that they are, like us, conscious too - or risk becoming human chauvinists and claiming that only people like us, or creatures with neurological brains like ours, can ever become conscious. In this paper I take it that no one wants to be chauvinistic like that.

[1] Our cluelessness about the brain is such that we don't even know which verb describes the relation between brains and consciousness. Does consciousness arise or emerge? Does the brain produce or make consciousness? My choice of words is as independent of any theory as possible.


My game plan for this paper is as follows. I will first explain the hard problem and the harder problem of consciousness. The second step will be to show that even though it is possible to have a physical explanation for each creature's particular form of consciousness, it is impossible to come up with a unified physical description of consciousness in general, due to the unitary character of the phenomenology of subjective experience. If you want a theory of consciousness in general, it has to be non-physical. Block proposes that the brain can be interpreted as "a syntactic engine driving a semantic engine". (Block 1995) Noë, on the other hand, has an argument against the idea that meaning, i.e. semantics, is in our heads. (Noë 2002) If that turns out to be the case, so I will argue, then we have no other avenue left for progress on the harder problem than a syntactical approach. Fortunately, Stich's concept of fat syntax has enough body to make a start.

The harder problem

In order to understand what the harder problem of consciousness is and why it differs from the hard problem, it is good to be clear about what the hard problem is. The hard problem is that even if you have a full understanding of the physical system that produces a phenomenological experience, the question remains why this particular physical system produces that phenomenological experience rather than another experience, or no experience at all. (Block 2002, p. 394) People can consciously see bundles of only a very few photons, as Hecht showed in an experiment in which weaker and weaker flashes of green light were sent to people in a pitch-dark room. (Hecht 1942) So in principle it would be possible to understand how these photons interact with our retina, how our retina sends a signal to the brain, and how the brain processes this signal so that we consciously see a faint flash of green light. Nevertheless, nothing in our physical description of this process will explain why the interaction of neurons through neurotransmitters and whatever else will result in our consciously seeing a faint green light at all, or why this light is faint and green rather than something else. The hard problem is so hard that we, i.e. scientists, philosophers and people in general, have no clue even where to begin to look for an answer.

Nevertheless, the harder problem is even harder than this. For even if we come up with an answer that explains why neurological activity in our brain produces a faint green light, we are still at a loss when we encounter other creatures whose bodies radically differ from ours in their physical description, but for whom we lack any rational ground for denying that they also see a faint green light in Hecht's experiment. Block's example is Commander Data from Star Trek, who is superficially functionally isomorphic to us and who claims to be conscious. (Block 2002, p. 401) Block is very careful not to jump to any conclusions and makes clear that the harder problem only exists if you think that:

a. consciousness can be scientifically explained;

b. there can be a functional description of consciousness, entailing that two different creatures can be superficially isomorphic to each other - what this entails is that superficially we can describe what another creature experiences functionally as somewhat the same as seeing a faint green light in Hecht's experiment;

c. there is a non-skeptical response to the problem of other minds, i.e. you accept that other people are conscious like you are;

d. there is a way for the same functional description to be realized or even constituted in different physical systems. (Block 2002, pp. 398-401)

If you accept these four conditions, then if we ever build a robot like Commander Data, he will present us with the harder problem. For Commander Data is superficially isomorphic to us in the sense that superficially he seems to reason and experience like us. If he sees raindrops he seems to come to believe that it is raining, and if he desires to stay dry he will use an umbrella. As we believe that other humans have minds like ours, it becomes an open question whether Commander Data is really conscious, even if the default view of our naturalistic and scientific outlook is that he is not. For we can't find any rational ground to deny Commander Data his consciousness. Nevertheless, Commander Data is a radically different physical realization of our superficially isomorphic functionality.

Now the harder problem makes its entrance. For in the situation described above, we not only need an answer to the hard problem of why we humans have phenomenological experiences, but we also need an additional theory of why Commander Data would or could have the same. In principle this could mean separate theories: one for humans, one for Commander Data, and one for whatever creature we find next that convinces us that it is conscious. This alone makes the harder problem harder than the hard problem, for having to do everything over again after we have done it once is trivially harder than only having to do it once. But this is not what makes the harder problem really harder. What makes it really harder is that there is little reason to expect that phenomenology is anything but unitary. If both we and Commander Data consciously see a faint green light in Hecht's experiment, it would be weird to think that these are radically different phenomenological experiences. So here is the harder problem: even if we have a complete set of separate physical explanations for every known conscious creature, we still lack a theory of consciousness in general that explains why all these creatures have a unitary experience, i.e. why they all see a faint green light if they are superficially functionally isomorphic to us. If we are clueless about the hard problem, then we are meta-clueless about the harder problem, for in order to solve the harder problem we first need at least two different physical descriptions of radically different physical creatures (one of them human).

In fact, I think that as long as we don't want to be chauvinistically human or chauvinistically neurological, we don't even need other creatures to be superficially functionally isomorphic to us. A human chauvinist would claim that only humans can be conscious. A neurological chauvinist has a slightly less extreme point of view, namely that only creatures who have a neurological organ like our brain can be conscious. Block sees that such an attitude would beg the question with regard to Commander Data's consciousness. (Block 2002, p. 413)

If only creatures who have a brain like ours can be conscious, then Commander Data, lacking such a brain, can't be conscious. Block thinks it is important that any creature we attribute consciousness to is at least superficially functionally isomorphic to us. But one can ask what the minimal level of superficiality is that still forces us into the harder problem. For if an alien from outer space comes to visit us and turns out to be only super-superficially functionally isomorphic to us, but is still able to move around, explore the environment and communicate with us, it would lead us to pretty much the same kind of doubt that we have about Commander Data's consciousness. Even in the case of merely super-superficial isomorphic functionality, the harder problem remains. Even if we have a full explanation of human consciousness and of the alien's consciousness, we still have the harder problem of getting a theory of consciousness in general. Non-human creatures can be as weird as anything, as long as there is some way they can communicate with us.

This may sound pretty much like the Turing Test for consciousness: if the machine can get a human judge, through communication, to think that he is dealing with a human rather than a machine, then we ought to grant that the machine is like us, i.e. is conscious. But this is wrong. First of all, for the harder problem there is no need for us to be convinced that the creature is conscious at all. All we need is that we lack a rational ground for denying the creature's consciousness. Second, if we have a completely determinate machine that passes the Turing Test, we know everything that we can know about the machine and we can rule out consciousness based on that knowledge. Both the hard and the harder problem only exist because we do not have a complete understanding of the creature. So the Turing Test is too strong, in the sense that there is no need for us to think the non-human creature is conscious; lacking rational grounds for denying consciousness is enough. And the Turing Test is too weak, in the sense that it doesn't allow physical knowledge about the non-human creature to play a role in determining whether a creature is conscious or not.

In fact, being anti-chauvinistic with regard to other creatures works pretty much the same way as the charity principle works in Quine's radical translation and Davidson's radical interpretation. (Davidson 1991) The only difference is that instead of translating or interpreting another human whose language you don't speak, you now do the same with a non-human creature. The charity principle states, in this case, that if you come to the conclusion that a non-human creature is communicating with you, most of its beliefs have to be correct or it wouldn't have a language at all. Languages are so holistic that although every thought expressed in a language can be wrong, most of the thoughts expressed in that language have to be correct or there would be no language. So any creature that communicates with us, and for which we lack rational grounds for denying that it is conscious, leads us to the harder problem, no matter how superficially functionally isomorphic it is. One could even say that if a creature a) communicates with us and b) we lack sufficient knowledge about its physical workings, we automatically lack rational grounds to deny its consciousness. Only if we have in-depth knowledge about the creature, for instance when the creature is a determinate machine, can we deny on rational grounds stemming from this knowledge that it is conscious when it communicates with us.

Fat syntax

Hopefully, I have made it abundantly clear how extremely difficult the harder problem is. Should we give up, seeing that the problems are so insurmountable? Of course not. The question becomes how to continue. One way is to deny that phenomenology exists at all, as Dennett does. (Dennett 1991, p. 365) But that option is excluded by the first of Block's four conditions, i.e. we are realists about our own phenomenology. A different route is to change the subject, as Noë does. He prefers to show that consciousness is not closely tied to the brain on the one hand (Noë 2009), and that interactions with the environment through sensorimotor knowledge answer the hard problem in part on the other. (Noë 2001, p. 1011-1012) His strategy is to put pressure on the idea that a complete physical description of the brain will help us solve the hard problem, while his extended mind puts pressure on the idea of what consciousness is. For him the hard problem of consciousness is only a special case of the hard problem of life - what is life? - and can better be ignored in favor of the hard problem of life. (Noë meeting 2015) Noë doesn't mention the harder problem explicitly, but for him it is an open question whether robots can achieve consciousness. (Noë 2009, p. 166)

For Noë consciousness is the activity of exploring the environment. (Noë 2002) It could well be that such a definition is wide enough to encompass all possible conscious creatures, for if they didn't explore the environment at all it would be hard to understand how we could meet and communicate with them. Nevertheless, it tells us next to nothing about the phenomenology of the experience. Noë doesn't deny that there is phenomenology, although he does deny that there are qualia in the sense of a quale being a property of a state, as Noë thinks that thinking in terms of states is unfruitful to begin with. (Noë 2001, p. 962) By extending the mind, parts of the environment become part of our consciousness, which is why Noë is happy when we are out of our heads. (Noë 2009, p. 183) Yet this move makes the hard problem more difficult, for to completely describe the physical system one would have to describe not only our brains but also the physical environment. Maybe in the end it turns out that such a move is indeed needed to solve the problem. But this would need additional argumentation, where there is currently none. Nor is it clear that Noë's proposal is the only way, let alone the best way, of including more than just the brain in the equation.

Block notices that the harder problem affects other philosophers as well. For instance, it affects Searle's Chinese Room Argument. (Block 2002, p. 406) Searle claims that a person who doesn't understand Chinese but matches Chinese-character inputs to Chinese-character outputs, as prescribed by some rules, from inside a closed room, doesn't understand Chinese, nor does this explain the human ability to understand. (Moural 2003) The Chinese Room Argument is often taken as an argument about consciousness. As such it states that there can't be consciousness solely on the basis of computation. If there is consciousness at all, it arises from the implementation of the computational rules in some physical system. For Searle it is an open question whether computers or robots can be conscious, but if they are conscious, more than just the computational rules they are following has to be involved. Searle thinks that the Chinese Room Argument shows that there is no consciousness in a system if the system only functions according to computational rules.

Or to be precise: if the system is a Turing machine, i.e. a completely determinate machine, then it doesn't understand Chinese. The problem for Searle with the harder problem is of course the other way around. What if Searle is confronted with a system that communicates with us while we lack sufficient knowledge about how the system works physically - it could be a Turing machine, but we don't know for sure? If Searle denies that this system is conscious, then he is easily accused of being a human or neurological chauvinist. The way out for Searle seems to be to claim that the physical realization of this system, if it really is conscious, adds something to the equation, and that it is this addition that is responsible for it being conscious - that if the system is conscious, it can't be a Turing machine. But this is to deny the full strength of the harder problem. For what Searle would be saying is that there is no problem as long as we can have a physical explanation of each type of conscious creature. Yet here Searle overlooks the unitariness of phenomenology. If we were completely satisfied with individual physical explanations per type of conscious creature, Searle would not have a problem. But the point of the harder problem is that we are anything but satisfied with different physical descriptions. What we want, after we have come up with these different physical stories, is a unifying theory of consciousness in general - one that is abstracted away from specific physical implementations. What the harder problem points at is that a purely physical story about consciousness is a nice thing to have, but unless it turns out that all conscious creatures in the whole universe have brains like ours, it isn't the last thing that can be said about consciousness. That we can already imagine such creatures to exist suggests that what we ultimately want, and what the harder problem is hinting at, is a non-physical or non-biological theory of consciousness.

A generalized theory of consciousness abstracts away from specific physical implementations. There are probably many ways in which this could be done, but a very common approach is the following. Say we would like to implement a logical AND operator in some physical system. The logical AND operator works such that its output is 0 unless both inputs are 1; only in that case is the AND operator's output 1 as well. The physical implementation of this logical operator can be an electronic circuit as built into our computers. But it can also be built with one cat and three mice who are either hungry (0) or fed (1): only if both mice are released so they can feed will the cat exert enough strength to catch both mice, so that a third mouse is released and can eat some cheese, as described by Block. (Block 1995) Block uses this example to show that the logical operator is "profoundly un-biological", i.e. it is abstracted away from its physical implementation. The idea is that if any physical system correlates well with logical rules like these, we can interpret that system as implementing those logical rules. Then a distinction can be made between the syntactical description of the logical rules - in our example the "AND", 0 and 1 - and the semantical description of the logical rules - in our example the idea that a conjunctive statement is only true (1) if both of its conjuncts are true (1). This way we get a correlation between a physical system and a syntactical structure, and a second correlation between the syntactical structure and the semantical description.
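To make this abstraction concrete, here is a minimal sketch in Python (the function names and the voltage threshold are invented for illustration, not taken from Block) of what it takes for an observer to interpret two very different physical systems - an electronic circuit and the cat-and-mice contraption - as implementing the same AND operator: nothing more than a mapping from physical states to the symbols 0 and 1 under which the input-output correlation holds.

```python
# Minimal sketch: interpreting two different "physical systems" as the same
# syntactic AND operator. All names (circuit_and, cat_and_mice) are invented;
# the point is only that the interpretation lives in the mapping chosen by
# the observer, not in the systems themselves.

def circuit_and(voltage_a, voltage_b):
    """An 'electronic' implementation: high voltage out only if both inputs are high."""
    return 5.0 if voltage_a > 2.5 and voltage_b > 2.5 else 0.0

def cat_and_mice(mouse_a_released, mouse_b_released):
    """Block's cat-and-mice implementation: the third mouse only gets fed
    if both mice are released, so the cat has to catch both."""
    cat_busy_enough = mouse_a_released and mouse_b_released
    return "fed" if cat_busy_enough else "hungry"

# The observer's interpretations: mappings from physical states to the symbols 0/1.
interpret_voltage = lambda v: 1 if v > 2.5 else 0
interpret_mouse = lambda state: 1 if state == "fed" else 0

def implements_and(system, encode_inputs, decode_output):
    """Check whether, under the observer's mapping, the system's behaviour
    correlates with the syntactic rule: output 1 iff both inputs are 1."""
    for a in (0, 1):
        for b in (0, 1):
            out = decode_output(system(*encode_inputs(a, b)))
            if out != (a and b):
                return False
    return True

print(implements_and(circuit_and, lambda a, b: (5.0 * a, 5.0 * b), interpret_voltage))  # True
print(implements_and(cat_and_mice, lambda a, b: (bool(a), bool(b)), interpret_mouse))   # True
```

On the semantical level the observer additionally reads 1 as true and 0 as false; nothing in the circuit, and certainly nothing in the cat, knows anything about truth.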

It is important to note that the syntactical and the semantical description only exist in the mind of an observer or interpreter. There are no physical 0s and 1s in a physical system that implements an AND operator. Nor is there any understanding in said system that the non-existent 0 and 1 stand for false and true at the semantical level. Only an observer or interpreter can see such a physical system as an implementation of some syntactical and semantical description, due to the strong correlations the physical system exhibits. For instance, in the cat-and-mouse example, if that system outputs a fed mouse in 99.999% of the cases in which statement A and statement B are both true, then we can use this correlation as a basis for thinking that the cat-and-mouse system implements the AND operator both on a syntactical and on a semantical level.

Almost everyone thinks that you need either both levels or only the semantical level to explain consciousness. Only a handful of people, if not just one, i.e. Stich, think that a syntactical level is good enough. Nevertheless, there are issues with the semantical level, the level of meanings. One of the reasons Noë likes to extend our mind is that language is "a cultural collective instrument" and that "meaning isn't in the head". (Noë 2009, p. 89) Wittgenstein can be interpreted (especially by Kripke) as giving an argument against the idea that a private language is possible at all. (Wittgenstein 1953, Kripke 1982) Putnam has shown with his Twin Earth thought experiment that if you take the semantical level to be the content of our consciousness, then whether the contents of our thoughts are true or false is context dependent. (Putnam 1975) These are the arguments that persuade Noë to extend the mind beyond the head, skull and brain. As this makes the hard and harder problem even more difficult, this is reason enough for me to turn the argument on its head: if an extended mind is needed for a semantical description of consciousness, then we are better off without such a semantical description. Maybe a syntactical description is good enough.

Nevertheless, the reason almost no one - with perhaps the exception of Stich - thinks that a purely syntactical approach is a good idea, is that there supposedly are very strong arguments against such an approach. But before I go into the weaknesses of a syntactical approach, let's first see whether there are any good reasons for using syntax rather than semantics. Block gives two reasons in favor of the Syntactic Theory of Mind (STM). The first is the case of Mrs. T., a senile old lady who knows that McKinley has been assassinated when you ask her about it, but who can't infer from the knowledge that McKinley was assassinated that McKinley is dead and buried. Semantically this makes no sense. But from a purely syntactical perspective it becomes clear that some of the syntactical rules are still working in Mrs. T: she knows McKinley was assassinated, but due to some malfunction she can't infer that he is dead and buried. Importantly, Block notices that STM is not only superior when it comes to senile old ladies, but also for "very young children, people with weird psychiatric disorders, and denizens of exotic cultures". The problem isn't that we can't assign content to these people's consciousness, but that "we cannot assign contents to them in our terms". (Block 1995) STM is one of our current best options when it comes to understanding creatures who differ radically from us, like Commander Data. Even though it might be a hard job to assign syntax to them, that beats the impossibility of assigning contents to them in our terms.
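Purely as an illustration (the token format and rewrite rules below are my own invention, not Stich's or Block's formalism), Mrs. T's case can be sketched as a store of uninterpreted sentence tokens closed under syntactic rewrite rules; her deficit then shows up as one missing rule, without any appeal to what the tokens mean:

```python
# Illustrative sketch of a purely syntactic belief store: tokens are rewritten
# by rules that care only about their shape, not their meaning. The rule names
# and token format are invented for this example.

RULES = {
    "assassinated(X)": ["dead(X)", "buried(X)"],  # from 'X was assassinated' derive 'X is dead', 'X is buried'
    "dead(X)": ["not_alive(X)"],
}

def infer(beliefs, rules):
    """Close a set of belief tokens under purely syntactic rewrite rules."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for belief in list(derived):
            predicate, _, argument = belief.partition("(")
            argument = argument.rstrip(")")
            for consequence in rules.get(f"{predicate}(X)", []):
                new_belief = consequence.replace("X", argument)
                if new_belief not in derived:
                    derived.add(new_belief)
                    changed = True
    return derived

healthy = infer({"assassinated(McKinley)"}, RULES)
# Mrs. T: the same store, but the rule taking 'assassinated' to 'dead' no longer works.
mrs_t = infer({"assassinated(McKinley)"}, {k: v for k, v in RULES.items() if k != "assassinated(X)"})

print("dead(McKinley)" in healthy)  # True
print("dead(McKinley)" in mrs_t)    # False: she still 'knows' he was assassinated, but cannot infer he is dead
```

Nothing in this sketch assigns McKinley, assassination or death any content; the prediction that Mrs. T will assent to "assassinated" but not to "dead" falls out of the shapes of the tokens and the rules alone.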


Basically, in favor of STM it can be said that "the syntactical perspective is far more general than the content perspective" and that it "allows more fine-grained predictions and explanations than the content perspective". (Block 1995) Block thinks he can undermine these strengths by pointing out that the physical perspective is even more general and fine-grained. The syntactical perspective is superior to the semantical perspective because it is more general and fine-grained, but it is superior to the physical perspective with respect to the harder problem, for it allows a theory of consciousness in general rather than only a theory of the consciousness of a specific conscious being. Or to put it differently: when it comes to the harder problem, the physical perspective is too fine-grained to explain the unitariness of phenomenology. Block can't have it both ways: either the physical perspective is superior, but then we can't have the harder problem; or the harder problem is our main concern, and then there are situations in which the physical perspective is inferior to other perspectives that explain the unitariness of phenomenology.

In fact we don't even need weird creatures like Commander Data, who might never exist. Other people are already enough to get to the harder problem. If you are a non-skeptic with regard to the problem of other minds, i.e. you accept that other people are conscious like you are, then the harder problem already emerges. For if we were to find a physical description of how person A's brain produces consciousness, this would involve a story about the specific physical stuff of person A. Person B's physical stuff would be different - not only in the sense that it is built from different molecules, but also in that people's brains differ in size and form in the same way their bodies differ. A physical description of how person B's brain produces consciousness would differ in detail from the description of person A. These two stories might be very similar, and although very interesting, what we really want is a story about how human brains in general produce consciousness. This general story about human consciousness is already abstracted away from its physical implementation. The story would probably still involve a lot of physical functions. But these physical functions could also be described as the syntax with which the physical implementations in person A and person B correlate.

Stich has a third reason why STM is a good idea: his principle of autonomy. This principle states a) that if I am replaced with an exact physical copy of me, this replacement doesn't influence any scientific explanation of my consciousness whatsoever, and b) that the content of thoughts like "water is wet" doesn't supervene on my "current, internal, physical state". (Stich 1983, 1991) Stich thinks that this is what Putnam's Twin Earth thought experiment shows. For that reason, content can't be part of any scientific explanation of consciousness, as it is too context dependent. For Noë this dependency is, as we have seen, a reason to extend consciousness to include the context. But that seems to do away with the whole idea of a context, and it merely displaces the problem to the context of the context. Stich, on the other hand, takes the context dependency of content as a reason to scrap content and focus on the syntactical perspective. Nevertheless, he is of course aware that there is plenty of interaction between the environment and the person. For this reason, when he speaks of syntax he means fat syntax, i.e. a syntax that includes the "interaction with stimuli and behavior". (Stich 1995, p. 146)
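A rough way to picture fat syntax (this is a toy rendering under my own assumptions, not Stich's formal proposal) is as a transition table whose internal states are uninterpreted tokens, but whose entries also mention stimuli and behaviors, so that the interaction with the environment is part of the syntactic description while no content is ever assigned:

```python
# Toy rendering of 'fat syntax': internal states are uninterpreted tokens (S0, S1, ...),
# but the transition table ranges over stimuli and behaviors as well, so interaction
# with the environment is part of the syntactic story. All names are invented.

# (current_state, stimulus) -> (next_state, behavior)
TRANSITIONS = {
    ("S0", "raindrop_on_skin"): ("S1", "none"),
    ("S1", "sees_umbrella"):    ("S2", "open_umbrella"),
    ("S2", "raindrop_on_skin"): ("S2", "none"),
}

def run(state, stimuli):
    """Trace the system's behavior without ever saying what S0, S1, S2 'mean'."""
    behaviors = []
    for stimulus in stimuli:
        state, behavior = TRANSITIONS.get((state, stimulus), (state, "none"))
        behaviors.append(behavior)
    return state, behaviors

final_state, behaviors = run("S0", ["raindrop_on_skin", "sees_umbrella", "raindrop_on_skin"])
print(final_state, behaviors)  # S2 ['none', 'open_umbrella', 'none']
```

An observer may gloss S1 as "believes it is raining", but nothing in the table itself assigns that content; prediction and explanation run over tokens, stimuli and behaviors alone.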


Conclusion

The hard problem is very, very hard. The harder problem is even harder. The way forward is not to make these problems harder still, as Noë does, but to make consciousness smaller, as STM does. Lesser minds may be easier to explain than extended minds. A syntactical approach seems the most promising route to a theory of consciousness in general - one that explains the unitariness of phenomenology. STM is probably the least researched option given the amount of criticism it has received. Nevertheless, given that so far no headway has been made towards even the beginning of a solution to the hard problem, this can be counted in favor of STM. Doing away with unnecessary ballast like content makes for a lesser mind and poses an easier problem, even if it still remains a hard problem. If content goes, of course, so goes intentionality. This is one of Stich's motivations for developing STM, for he has, influenced by Quine, "long been suspicious about the integrity and scientific utility of the commonsense notions of meaning and intentional content." (Stich 1995, p. 140) For Stich these notions are "projective, context sensitive, observer relative and essentially dramatic". So maybe less is more in the case of consciousness. Well, at least STM deserves a second chance.

Bibliography

Block, N. (1995). The Mind as the Software of the Brain. In Kosslyn, S. M., & Osherson, D. N. (Eds.), An Invitation to Cognitive Science. Cambridge, MA: MIT Press.

Block, N. (2002). The Harder Problem of Consciousness. The Journal of Philosophy, 99(8), 391. http://doi.org/10.2307/3655621

Davidson, D. (1991). Three Varieties of Knowledge. Royal Institute of Philosophy Supplement, 30, 153-166. http://doi.org/10.1017/s1358246100007748

Dennett, D. C. (1991). Consciousness Explained. Boston: Little, Brown and Co.

Hecht, S. (1942). Energy, Quanta, and Vision. The Journal of General Physiology, 25(6), 819-840. http://doi.org/10.1085/jgp.25.6.819

Kripke, S. A. (1982). Wittgenstein on Rules and Private Language: An Elementary Exposition. Cambridge, MA: Harvard University Press.

Noë, A., & Thompson, E. (2002). Vision and Mind: Selected Readings in the Philosophy of Perception. Cambridge, MA: MIT Press.

O'Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939-973. http://doi.org/10.1017/s0140525x01000115

Putnam, H. (1975). The Meaning of 'Meaning'. In Mind, Language and Reality: Philosophical Papers, 215-271. http://doi.org/10.1017/cbo9780511625251.014

Stich, S. P. (1983). From Folk Psychology to Cognitive Science: The Case Against Belief. Cambridge, MA: MIT Press.

Stich, S. P. (1991). Narrow Content Meets Fat Syntax. In Collected Papers. New York: Oxford University Press.

Wittgenstein, L. (1953). Philosophical Investigations. New York: Macmillan.