
PERSPECTIVES ON MIND


SYNTHESE LIBRARY

STUDIES IN EPISTEMOLOGY,

LOGIC, METHODOLOGY, AND PHILOSOPHY OF SCIENCE

Managing Editor:

JAAKKO HINTIKKA, Florida State University, Tallahassee

Editors:

DONALD DAVIDSON, University of California, Berkeley

GABRIEL NUCHELMANS, University of Leyden

WESLEY C. SALMON, University of Pittsburgh

VOLUME 194


PERSPECTIVES ON MIND

Edited by

HERBERT R. OTTO Department of Philosophy, Plymouth State College (USNH)

and

JAMES A. TUEDIO Department of Philosophy, California State University, Stanislaus

D. REIDEL PUBLISHING COMPANY

A MEMBER OF THE KLUWER ACADEMIC PUBLISHERS GROUP

DORDRECHT / BOSTON / LANCASTER / TOKYO


Library of Congress Cataloging in Publication Data

Perspectives on mind / edited by Herbert R. Otto and James A. Tuedio. p. cm. -- (Synthese library; v. 194)

Bibliography: p. Includes indexes. ISBN-13: 978-94-010-8290-7 e-ISBN-13: 978-94-009-4033-8 DOI: 10.1007/978-94-009-4033-8

1. Knowledge, Theory of. 2. Cognition. 3. Consciousness. 4. Phenomenology. 5. Analysis (Philosophy) I. Otto, Herbert R., 1931- . II. Tuedio, James Alan. III. Series.

BD161.P45 1987 128'.2-dc 19

87-28465 CIP

Published by D. Reidel Publishing Company, P.O. Box 17, 3300 AA Dordrecht, Holland.

Sold and distributed in the U.S.A. and Canada by Kluwer Academic Publishers,

101 Philip Drive, Norwell, MA 02061, U.S.A.

In all other countries, sold and distributed by Kluwer Academic Publishers Group,

P.O. Box 322, 3300 AH Dordrecht, Holland.

All Rights Reserved © 1988 by D. Reidel Publishing Company, Dordrecht, Holland

Softcover reprint of the hardcover 1st edition. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the copyright owner.


ACKNOWLEDGEMENTS

The editors of this volume wish to express their gratitude to the many people whose effort, support, and encouragement helped to bring this project to fruition. Our primary debt is to the numerous contributors to this anthology. Their work has served as the occasion for an extended dialogue on the issues within philosophy of mind and related areas. With their help we have endeavored to bring about a useful exchange of ideas, insights, and perspectives between the two main traditions in contemporary philosophy. Their great patience and understanding through the course of this work was, in itself, a major encouragement to us.

We wish to thank our publisher, and, in particular, Professor Jaakko Hintikka and Annie Kuipers for their confidence and extensive assistance in getting us through the many tasks required to complete a project of this magnitude. We extend our thanks, also, to the University of Massachusetts Press, Amherst, for their kind permission to reprint, in modified form, parts of two chapters from Michael A. Arbib's recent book (1985), In Search of the Person: Philosophical Explorations in Cognitive Science; to the Australasian Journal of Philosophy for rights to material from an article by Yuval Lurie; and to Topoi for the right to include the article by Ronald McIntyre.

The support given to us by our schools, Plymouth State College of the University System of New Hampshire, and California State University, Stanislaus was greatly appreciated, and Kathryn Dustin Otto's diligent assistance with the electronic tasks was essential to the preparation of the numerous drafts of the text. Finally, we would like to thank Kathryn and Julie for their unfailing patience and understanding.

Herbert R. Otto
James A. Tuedio

New Hampshire, 1987


Contents

Introduction

Chapter One  BRAIN STATES, MACHINE STATES, AND CONSCIOUSNESS  3

1.1 Consciousness  4
GEORGES REY  A Question About Consciousness  5
DAVID WOODRUFF SMITH  Rey Cogitans: The Unquestionability of Consciousness  25

1.2 Correspondence  33
YUVAL LURIE  Brain States and Psychological Phenomena  35
FORREST WILLIAMS  Psychophysical Correspondence: Sense and Nonsense  49

1.3 Representation  56
RONALD McINTYRE  Husserl and the Representational Theory of Mind  57
KATHLEEN EMMETT  Meaning and Mental Representation  77
HUBERT L. DREYFUS  Husserl's Epiphenomenology  85

Chapter Two  STRUCTURES OF MENTAL PROCESSING  105

2.1 Qualia  105
JAMES H. MOOR  Testing Robots for Qualia  107
ROBERT VAN GULICK  Qualia, Functional Equivalence, and Computation  119
HENRY W. JOHNSTONE, JR.  Animals, Qualia, and Robots  127

2.2 Intentionality  134
RAYMOND J. NELSON  Mechanism and Intentionality: The New World Knot  137
JOHN W. BENDER  Knotty, Knotty: Comments on Nelson's "New World Knot"  159
CHRISTOPHER S. HILL  Intentionality, Folk Psychology, and Reduction  169

2.3 Transaction  181
JAMES A. TUEDIO  Intentional Transaction as a Primary Structure of Mind  183
STEVE FULLER  Sophist vs. Skeptic: Two Paradigms of Intentional Transaction  199
WILLIAM R. MCKENNA  Commentary on Tuedio's "Intentional Transaction"  209

Chapter Three  MIND, MEANING, AND LANGUAGE  217

3.1 Schemas  217
MICHAEL A. ARBIB  Schemas, Cognition, and Language: Toward a Naturalist Account of Mind  219
HARRISON HALL  Naturalism, Schemas, and the Real Philosophical Issues in Contemporary Cognitive Science  239
JAN EDWARD GARRETT  Schemas, Persons, and Reality--A Rejoinder  249

3.2 Background  260
CHRISTOPHER A. FIELDS  Background Knowledge and Natural Language Understanding  261
NORTON NELKIN  Internality, Externality, and Intentionality  275
ROBERT C. RICHARDSON  Objects and Fields  283

3.3 Translation  292
HERBERT R. OTTO  Meaning Making: Some Functional Aspects  293
HERBERT E. HENDRY  Comments on Otto on Translation  315
STEVE FULLER  Blindness to Silence: Some Dysfunctional Aspects of Meaning Making  325

Chapter Four  PROSPECTS FOR DIALOGUE AND SYNTHESIS  339

4.1 Convergence  339
JOSEPH MARGOLIS  Pragmatism, Phenomenology, and the Psychological Sciences  341
R. W. SLEEPER  The Soft Impeachment: Responding to Margolis  355
JAMES MUNZ  In Defense of Pluralism  365

4.2 Dialogue  370

EPILOGUE  Toward a New Agenda for Philosophy of Mind  371

APPENDICES  377
Footnotes  377
Bibliography  399
Subject Index  415
Name Index  417
List of Authors  420


INTRODUCTION

Phenomenology and analytic philosophy have skirmished often, but seldom in ways conducive to dialectical progress. Generally, the skirmishes seem more "political" than philosophical, as when one side ridicules the methods of the other or criticizes the viability of the other's issues and assumptions. Analytic interest in third person objectivity is often spurned by Continental philosophers as being unduly abstract. Continental interest in first person subjectivity is often criticized by analysts as being muddled and imprecise. Logical analysis confronts the power of metaphor and judges it "too ambiguous" for rigorous philosophical activity. The language of metaphor confronts the power of logical analysis and deems it "too restrictive" for describing the nature and structures of authentic human experience. But are the two approaches really incompatible?

Perhaps because each side of the "divide" has been working at problems largely uninteresting to the "opposition" it has been easy to ignore or underestimate the importance of this issue. But now each side is being led into a common field of problems associated with the nature of mind, and there is a new urgency to the need for examining carefully the question of conceptual compatibility and the potential for dialogue. Analytic thinkers are typically in the business of concept clarification and objective certification. Continental philosophers employ introspection in the interest of a project of description and classification that aims to be true to the full subtlety and complexity of the human condition. Though analytic philosophers generally incline to deductive forms of reasoning, and Continental philosophers to more inductive modes of inquiry, this alone is hardly grounds for concluding that the two traditions are incompatible. Science itself embodies a healthy dialectic between deductive and inductive reasoning. In any event, it is important to consider the possibility of a complementarity of method as well as an underlying commonality of goals--of a philosophical "convergence," as it were.

What then of the respective methods as they stand? A simple response would be this: analytic philosophers should make their precising "cuts" with more respect for the subtlety of the subject matter. They must be less arbitrary in their development of counterexamples and in postulation of hypothetical situations. They should be more sensitive to the point a colleague is trying to make than to weaknesses in his logic of expression. On the other side, Continental philosophers need to be more exacting, less comfortable with vagueness. Without ignoring the complexity of experience, they must try to explain themselves in terms that are indeed "clear and distinct." Somehow, they must seek to isolate the "joints" of experience without slighting the holistic character of the subject matter. They, too, must be more sensitive to the unfolding of ideas than to stylistic or procedural differences in a given philosopher's effort to communicate. If concepts and metaphors appear to be vague or confused, we should seek their clarification openly through conversation, the way Socrates advises us in the Meno, "as friends talking together."

With respect to the human mind, the issue becomes one of determining the nature and role of the various "joints" that sustain the functional unity of embodied subjectivity. Some of these joints are best described in mechanistic terms; others are not so amenable to description in that way. How are we to describe the "interface" between the neuro-physiological input-output mechanisms of the body and the functional life of consciousness taken as a subjective instantiation of input-output functioning? Is it proper to speak of the body as an input-output mechanism? Is it even necessary that there be "joints" linking physiology and consciousness through some kind of "mental transaction?" Why do these seem so essentially a part of the puzzle? Or is all this simply a residual prejudice of the Cartesian program? These questions require us to determine their meaningfulness before seeking detailed answers. But to do so seems to call for a meshing of perspectives, a blending of the goals and methods of cognitive science with what we might call "cognitive phenomenology." To bring these into a homogeneous framework of analysis becomes a paramount philosophical task. Using the diverse perspectives on mind collected together in this anthology, we attempt through our unifying commentary to sketch some of the key features of this framework, and to establish a point of contact across which analytic and Continental philosophers can begin constructive dialogue on a subject matter of common interest.

This anthology offers a number of perspectives on mind, some of which focus on the objective functionality of mental processing, others that focus on the subjective structures of conscious experience. All are perspectives on a single reality. None is exhaustive or privileged. The common reality, mind, is challenge enough in its intricacy to call for a multiplicity of approaches. To accommodate the true nature and function of mind, these perspectives must somehow coalesce, for mind is not simply a collection of disjoint aspects. It is holistic, possessing two dimensions for investigation: one providing the power of qualitative discrimination as a felt process; another manifesting neurophysiological occurrences as publicly discernible events. We hope our commentary will make the connections between these perspectives more apparent. We hope also that there will emerge in this volume enough of a consensus to form the outline of a new agenda for philosophy of mind, one engendered by a concerted effort to blend the insights and investigations of cognitive science and cognitive phenomenology. The key to a better understanding of the nature of mind is to be found in open-ended dialogue between the two schools of thought. To this end, we seek to establish the viability of such an exchange. If we have actually initiated dialogue, so much the better.


Chapter One

BRAIN STATES, MACHINE STATES, AND CONSCIOUSNESS

Developments in cognitive science suggest that important breakthroughs may be imminent with respect to some of the key issues in artificial intelligence. A better understanding of some of the more subtle features of human problem-solving skills, together with successful computational simulation of such skills, seems to hold promise of more complete answers to the hard questions about how human beings organize knowledge, and how they apply it in problem-solving situations. For example, we are quite adept at using strategies that we ourselves monitor, evaluate, and, if need be, augment, refine, or replace as the situation demands. How do we do this? What are the operations and functions that make this self-monitoring possible?

Recent success with "introspective programs" capable of incorporating data into the command hierarchy of a machine's operating system suggests that computational mechanisms are attaining the capacity to actually extend and refine their own capabilities. When they gain the further capacity to modify the introspective coding itself, it is argued, they will possess genuine intelligence. Of course, these mechanisms would have to become aware of their limitations and of the extent to which their knowledge and skills can reasonably be applied. But even here, truly introspective programs will give a system the capacity to know what it knows, to know what it can do to evaluate various methods at its disposal for solving problems, and to recognize when it is incapable of solving a problem.

Such achievements will intensify cognitive science research. Yet lingering questions cast doubt on the overall prospects in the quest for genuine artificial intelligence. For instance, even if "introspective programs" yielding "human-like" behavior are possible, would such mechanisms really possess "consciousness"? Second, can a theory of discrete computational processes capture the complexity of actual psychological phenomena? Finally, would such mechanisms entertain "meaning" the way we do? These three critical questions are addressed in this chapter.

Georges Rey argues that there is a strong possibility that the first question should simply be rejected as misleading and unfruitful. Yuval Lurie, though inclined to answer the first question affirmatively, holds that a negative answer is indicated regarding the second, since psychological phenomena cannot--as required by the logic of correspondence theory--be individuated, because their content, namely meaning, is intricately interwoven with the content of other psychological phenomena. This points to the need for a holistic theory of mental operation, an approach which contrasts sharply with Rey's. Focussing on the third question, Ronald McIntyre examines the "representational character" of mental phenomena. Emphasizing the mediating role of intentional content, McIntyre lays the basis for a critique of "causal" theories of mental and linguistic reference, while questioning the viability of AI research strategies that attempt to reduce semantic content to formal syntax. These are the three main contributions to this chapter. Each may be viewed as advancing an argument for a specific constraint its author thinks needs to be imposed on empirical research concerned with the study of "mind." Commentaries follow each paper.

1.1 Consciousness

In recent years, cognitive science has been influenced by the idea that mental capacity can be measured in terms of the degree to which "rational regularities" are instantiated in behavior. Where these regularities are instantiated in a machine's behavior, many cognitive scientists would say, to that extent it has a mind. But does this imply that such a mechanism is also conscious? What exactly are we referring to when we use the term 'consciousness'? Does its use imply that something exists in addition to the rational regularities that are thought by some to comprise the essence of mental capacities? In the first paper Professor Rey addresses these questions in the context of developments in cognitive science. There, he argues, the emerging picture of mind indicates a strong possibility that our notion of consciousness is, if not outright prejudice, then surely a confusion arising from faulty analysis.

He asks us to imagine a machine programmed to draw inferences, have preferences, operate with beliefs, engage in sensory transduction, and use language. We are to include the capacity for recursive self-reference, as well as a special variable in the operating system designed to function "as an internal name of the receiver of the inputs, the entertainer of the thoughts, and the instigator of the actions of the machine." (Rey: this volume, p. 14) Would such a machine be conscious? Rey argues that even though the machine would display the requisite rational regularities, there would be no basis for concluding that it had something answering to the common term 'consciousness'. Indeed, there would appear to be no basis for saying that we, ourselves, are any different from such a machine. In other words, there simply is no referent for this mysterious and elusive term. Such a view challenges a fundamental intuition common to most of us. For "among ordinary beliefs about consciousness," writes Rey, "none seems more powerful or more certain than that we each know immediately in our own case, in a special way that is immune to any serious doubt, that we are conscious." (p. 6) Rey calls this view of the undeniability of consciousness the "Cartesian Intuition," and sets out to undermine its credibility as a starting point for reflections on the nature of the human mind.


GEORGES REY

A Question About Consciousness

For my part, when I enter most intimately upon what I call myself, I always stumble upon some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure ... Were all my perceptions removed by death, and could I neither think, nor feel, nor see, nor love, nor hate, after the dissolution of my body, I should be entirely annihilated, nor do I conceive what is further requisite to make me a perfect non-entity. [Hume 1739/1965, Vol. 1, p. 6]

In this well-known passage, Hume raises a particular kind of criticism against a primitive notion of the soul. That criticism might be put this way: once we attend to the full details of our mental lives, the notion of a simple soul, of some piece of our mentation that remains unchanged through all changes of our lives, seems unacceptably crude and simplistic. It has no place in the ultimate story about ourselves. We would seem merely to be imposing it upon the really quite diverse portions of our lives in an effort to underwrite metaphysically the special concern we feel towards our futures and our pasts. [1]

I shall not be concerned with the correctness of Hume's criticism, or with whether there might be some line of defense of the primitive view. Rather, I shall be concerned with whether a very similar kind of criticism mightn't also be raised against our ordinary notion of consciousness. Consciousness has received a great deal of press recently, both popular and professional. It is once again a serious object of study in psychology and psychobiology [2], and one even finds it figuring in accounts of quantum mechanics. [3] For all the interest of the notion, however, it is none too clear what, if anything, is being researched or appealed to in such accounts. On the one hand, consciousness is supposed to be (at least in one's own case) the most obvious thing in the world; on the other, no one seems to be able to say anything very illuminating about it.

Like Hume, I propose to examine the notion of consciousness in the light of the actual details of our mental life, or what we seem so far to know about those details. Unlike Hume, however, I shall not restrict my attention merely to introspection (much less "perceptions"), nor, like many of Hume's followers, to an analysis of our ordinary talk, although I shall not ignore these either. What I shall do is consider some plausible theories about the nature of human mentation that are available in recent psychology, psychobiology, and artificial intelligence, and attempt to determine approximately where consciousness, as we ordinarily conceive it, might fit in. I think we shall find, as Hume did with at least the primitive notion of the soul, that it appears not to fit. The most plausible theoretical accounts of human mentation presently available appear not to need, nor to support, many of the central claims about consciousness that we ordinarily maintain. One could take this as showing that we are simply mistaken about a phenomenon that is nevertheless quite real, or, depending upon how central these mistakes are, that consciousness may be no more real than the simple soul exorcised by Hume.

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 5-24. © 1988 by D. Reidel Publishing Company.

This latter conclusion would, of course, be extraordinarily puzzling. Among ordinary beliefs about consciousness, none seems more powerful or more certain than that we each know immediately in our own case, in a special way that is immune to any serious doubt, that we are conscious. "I see clearly that there is nothing which is easier for me to know than my own mind," remarked Descartes (1641/1911, Vol. 1, p. 157). Thought being "whatever is in us in such a way that we are immediately conscious of it," he took as one of his certainties his "clear and distinct idea" of himself as a "thinking thing" (1641/1911, Vol. 1, p. 190). Someone today might put it thus: "No matter what your theories and instruments might say, you cannot provide me any reason to doubt that I am conscious right here now."

I shall call this view about the infallibility of first-person present-tense beliefs about consciousness the "Cartesian Intuition." It is particularly this intuition about consciousness that I think turns out to be problematic in the light of present theories. I shall provide reasons for doubting that oneself is conscious, and, then, since first-person infallibility seems to be so central to the ordinary notion, I shall argue that that doubt, in conjunction with those theories, further provides reason for thinking that nothing is conscious. But, of course, few of us are going to be persuaded in this (or any?) way to give up a belief in consciousness. The question about consciousness that I want to raise, then, is this: how are we to understand our insistence on the existence of consciousness given that we cannot find a place for it in any reasonable theory of the world? My strategy will be as follows. I will discuss various mental phenomena and some reasonable theories that have been advanced to explain them. I shall then consider in each case the plausibility of regarding each particular phenomenon, with or without the others, as a candidate for the role of consciousness. In each case, I shall consider whether the phenomenon could occur unconsciously. I think that in a surprising number of cases we will find that they can.

Judgments about these matters, however, can often be distorted by what might be called "ghostly" intuitions: our judgments about mental states can often be affected by a belief in a background condition that mysteriously presupposes consciousness and so cannot explain it. This belief may take the form of an explicit commitment to a dualistic "ghost in the machine"; but it may also appear in subtler forms in purely materialistic approaches. Too often the ghost is replaced by an equally obscure "complexity" in the machine: "Of course," declares the thoroughly modern materialist, "we are machines, but"--and here he waves his hands towards properties as fabulously mysterious as any conceived by the dualist--"we are very, very complex ones." I call this view "facile materialism." (It appears in other guises, sometimes, as in John Searle's (1980, 1983, 1984) appeals to some crucially unspecified "biology" in our brains.) Until we begin to explain the specific kinds of complexity, biology, or special substances that are relevant to mental phenomena, such appeals only serve as ways to evade the mind/body problem, not solve it.

I shall consider these ghostly intuitions in due course. However, to avoid contamination by them in our consideration of candidate analyses of consciousness, I shall consider in each case the plausibility of regarding an existing machine that exhibited the candidate phenomenon, in a fashion that we can understand, as thereby conscious. I think that we shall find such proposals unacceptable, just as Hume would have found proposals to identify a particular perception, or a cluster of them, with his soul unacceptable. This reluctance to identify consciousness with any particular mental operations, or with any combination of them, I shall take to be evidence of a defect with the notion of consciousness similar to the defect Hume found with the notion of a soul.

Someone, of course, might insist that the term 'consciousness' be defined before we undertake such an investigation so that there might be a reasonable way to evaluate the proposals I shall consider. But that approach would misconstrue the problem. Part of the question I mean to raise is that we really don't have any good definition of 'consciousness'. We have a wide variety of usages of the term and its related forms [4], each of which can be assimilated to or distinguished from the others, depending upon the purposes at hand. Whether they can be assimilated into a single or several different definitions seems to me a problem that is inseparable from forming a general theory of the mind. [5] But those who want to insist upon definitions can regard what follows as so many efforts to provide one, with a reason in each case to reject it.

II

One of the soundest reasons for taking a particular description of an object seriously is that the object obeys laws in which the description figures. Amoebas, for example, can be regarded as literally alive, since they obey many of the laws of living things. Similarly, an object can be regarded as literally possessing a mental life insofar (and perhaps only insofar) as it obeys psychological laws. Now, to be sure, given the still adolescent state of psychology as a science, there really aren't any fully developed psychological laws available. But there are law sketches (cf. Hempel 1965, pp. 423-425). Among them, few seem to be more central and basic than those that attempt to capture what might be called the "Rational Regularities": these are the regularities among a creature's states whereby they instantiate the steps of inductive [6], deductive, and practical reasoning. So far, the best explanation [7] of such remarkable capacities seems to be one that postulates mental processes in the animals whereby they are able to perform at least rudimentary inductions and deductions, and are able to base their behavior upon some one or other form of practical reasoning: e.g. they generally act in ways that they believe will best secure what they most prefer. [8] It is these regularities that justify us in ascribing beliefs and preferences to anything at all: were an object not to satisfy them, neither in its behavior nor (more importantly) in its internal processing, it would be difficult to see any reasonable basis for such ascription, and insofar as an object does satisfy them, there would seem to be a very firm basis indeed. They certainly seem to form the basis we ordinarily employ in ascribing beliefs and preferences and other mental states to people and animals in the many useful and sometimes insightful ways we do. They are a central part of what Max Weber (1922/1980) called "Verstehen" or "empathic" explanation, and of what many recent philosophers have come to call "intentional explanation," the creatures (or, more generally, the systems) whose behavior is in this way explained being "intentional systems" (Dennett 1971/1978).

I don't mean to suggest that we yet fully understand this form of explanation, much less all the behavior of the explicanda. Notoriously, there is the problem of the "intentionality" of the mental idioms: the beliefs and preferences that enter into these rational relations are "about" things that seem (e.g. physically) quite unrelated to them--my beliefs about Socrates are quite remote from and unlike Socrates--and often about "things," such as Santa Claus or the largest prime, that don't exist at all. And then there is the problem simply of specifying clearly wherein the rationality of our thought consists. These problems are among the hardest in philosophy.

For all their difficulty, however, there have been some advances. Philosophers have noticed that both the intentionality and rationality of thought suggest that thought involves relations to representations, e.g. sentences, or some other kind of structured intermediary between the agent and the world. Such postulation explains the familiar failures of co-referential substitution associated with intensionality, and at the same time provides a suitable vehicle for the kinds of formal theories of reasoning familiar from the study of logic. [9] At any rate, Harman (1973), Fodor (1975), Field (1978) and Stich (1983) have postulated just such a system of representation, a "language of thought," encoded and computed upon in the nervous systems of psychological beings, as a way to explain their intentional rationality. Putting their point a little perversely, we might say that intentionality and rationality--the two properties that seem peculiar to psychological beings--are to be explained by a surprising hypothesis: thinking is spelling (and transformations thereof).

I shall not be concerned here with whether or not this hypothesis is actually true for human beings. It will be enough for our purposes that this hypothesis provides a possible explanation of thought, one possible way that something could manage to satisfy the rational regularities. For, if that's true, then regardless of how people actually do manage to think, any system that could consistently spell and transform strings of symbols according to certain rules would still, by its resulting satisfaction of those regularities, qualify as a thinking thing.

The "Language of Thought" hypothesis suggests, then, the possibility in principle of constructing a machine that could think. Surprisingly, this is not so remote a possibility in practice either. In particular, it would seem entirely feasible (although, for reasons I shall come to, not awfully worthwhile) to render an existing computing machine intentional by providing it with a program that would include the following:

(1) the alphabet, formation, and transformation rules for quantified modal logic with indexicals, e.g. David Kaplan's "Logic of Demonstratives," as the system's Language of Thought;

(2) the axioms for a system of inductive logic, and an abductive system of hypotheses, with a "reasonable" function for selecting among them for a given input;

(3) the axioms for decision theory, with some set of basic preferences;

(4) in addition to the usual keyboard, various transducers (e.g. a television camera) for supplying inputs to (2);

(5) devices (e.g. printers, mechanical hands) that permit the machine to realize its outputs (e.g. its "most preferred" basic act descriptions).

10 Georges Rey

The machine would operate roughly as follows. The input supplied by (4) would produce "observation" sentences that would be checked against comparable deductive consequences of the hypotheses provided by (2); hypotheses would be selected whose consequences "best matched" the observation sentences; those hypotheses in turn would serve as input to (3), where, on the basis of them, the given preferences, and the decision-theoretic functions, a "most preferred" basic act description would be generated, and then be executed by (5).
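The operating cycle just described is essentially algorithmic, and can be caricatured in a few lines of code. The sketch below is purely illustrative: every name in it (`best_hypothesis`, `most_preferred_act`, the toy "overlap" matching metric, the data layout) is an invented stand-in for the vastly harder selection and decision procedures that clauses (2) and (3) would actually require.

```python
# Toy sketch of the (1)-(5) cycle: observe, abduce, decide, act.
# All names and the matching metric are invented for illustration.

def best_hypothesis(observations, hypotheses):
    """Clause (2): select the hypothesis whose deductive consequences
    'best match' the observation sentences (here: simple overlap count)."""
    return max(hypotheses, key=lambda h: len(h["consequences"] & observations))

def most_preferred_act(hypothesis, preferences):
    """Clause (3): a toy decision rule -- pick the available act with the
    highest preference assignment."""
    return max(hypothesis["acts"], key=lambda a: preferences.get(a, 0))

def cycle(observations, hypotheses, preferences):
    h = best_hypothesis(observations, hypotheses)   # abduction, clause (2)
    return most_preferred_act(h, preferences)       # handed to effectors, clause (5)

# Toy run: the machine "sees" a square and most prefers pushing the button.
hypotheses = [
    {"name": "square-ahead", "consequences": {"Sq(h,n)"}, "acts": ["push_button"]},
    {"name": "nothing-ahead", "consequences": set(), "acts": ["wait"]},
]
print(cycle({"Sq(h,n)"}, hypotheses, {"push_button": 5, "wait": 0}))  # → push_button
```

The interesting philosophical work, of course, is hidden in the two `max` calls, which is just where Rey locates the unsolved problem of the "reasonable" selection function.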

Someone might wonder how such a machine could have genuine intentionality, how, for example, the sentences being entertained by the machine could be "about" anything. [10] There are a variety of answers to this question. Perhaps the simplest for purposes here is provided by a suggestion of Stampe (1977), who proposes to regard sentence meaning as a species of natural meaning. Just as the 11 rings in a tree trunk "mean" the tree is 11 years old because, under ideal conditions, the 11 rings are produced as a causal consequence of the tree's being that old, so would a sentence selected by our machine mean that a certain state of affairs obtains because, under ideal conditions, it would be produced as a causal consequence of those states of affairs. Thus, '(Ex)Sx,h,n' might mean for the machine, "There is a square in front of me now," because under ideal conditions--good lighting, direct placement of the stimulus before the transducer--it would select that sentence (putting it, for example, in what Schiffer (1980) has called a "yes" box) as a causal consequence of there being a square directly in front of it. The sentences of such a machine would be "about" the states of affairs that would, under ideal circumstances, cause the machine to select them. [11] We might call these causal regularities "ideal detection regularities."
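The Stampe-style story, as applied to the machine, compresses into a conditional: under ideal conditions, the stimulus causes the sentence's selection into the "yes" box. The sketch below is only a caricature; the function name and the single boolean standing in for "ideal conditions" are invented for illustration.

```python
# Caricature of an ideal detection regularity: a square before the
# transducer, under ideal conditions, causes '(Ex)Sx,h,n' to be selected
# into the "yes" box. Names and the idealness flag are invented.

def detect(stimulus, conditions_ideal):
    yes_box = []
    if conditions_ideal and stimulus == "square":
        yes_box.append("(Ex)Sx,h,n")  # "There is a square in front of me now"
    return yes_box

print(detect("square", True))
```

On this picture the sentence's being "about" squares just is this causal regularity, which is why nothing in the code beyond the conditional is needed.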

Satisfying ideal detection regularities by means of (1)-(5) is a way that the machine could obey the Rational Regularities, complete with intentionality. We would certainly be able to explain and predict its worldly behavior and internal states on the basis of those regularities. It would be natural to say that the reason it did or will do such and such is that it believes this and most prefers that; for example, that the reason that it pushed the button is that it most preferred putting the pyramid atop the cube, and thought that pushing the button was the best way of doing so. We would, that is, find it natural to adopt towards it what Dennett (1971/1978) has called "the intentional stance." Unlike Dennett, however, I see no reason not to take the resulting ascription of beliefs and preferences entirely literally. For, again, what better reason to take a description of an object seriously than that the object obeys laws into which that description essentially enters? We would seem to have the best reason in the world therefore for regarding the computer so programmed as a genuine thinking thing.

Caveat emptor: for all its rationality and intentionality, there is also every reason to think that such a machine would, at least for the foreseeable future, be colossally stupid. We need to distinguish two senses of "AI": artificial intelligence and artificial intentionality. A machine can exhibit the latter without exhibiting very much of the former. And this is because artificial intentionality requires only that it obey rational and ideal detection regularities, performing intelligently under ideal conditions, e.g. situations in which the light is good and the stimuli are presented squarely in front. Intelligence requires doing well under non-ideal conditions as well, when the light is bad and the views skewed. But performing well under varied conditions is precisely what we know existing computers tend not to do. [12] Decreasingly ideal cases require increasingly clever inferences to the best explanation in order for judgments to come out true; and characterizing such inferences is one of the central problems confronting artificial intelligence--to say nothing of cognitive psychology and traditional philosophy of science. We simply don't yet know how to spell out the 'reasonable' of the proposed program's clause (2). Philosophers have made some suggestions that, within narrow bounds, are fairly plausible (e.g. statistical inferences, Bayesian metrics, principles of simplicity, conservatism, entrenchment), but we know that they aren't even approximately adequate yet.

But, meagre though the suggestions might be, the proposed inductive principles, within bounds, are not unreasonable. Existing programs are, I submit, adequate to satisfy ideal detection regularities for a wide range of concepts (e.g. mathematical and geometric concepts, concepts of material objects, and many of their basic properties). In suitably restricted environments, and particularly in conjunction with the relatively better understood deductive and practical regularities of (1) and (3), they provide a rich basis for serious explanation and prediction of our computer's states and behaviors in terms of its preferences and beliefs. Limited rationality, to a point, is still rationality. I see no reason to think that we can't right now devise programs like (1)-(5) that would get us to that point (which is not to say that, given the system's general stupidity, it would actually be worth doing). [13]

For purposes here, what the practical possibility of this machine does is to underscore a point that has been gradually emerging from a century's work in psychology: that, contrary to the spirit of Descartes, the letter of Locke, most of the definitions in the O.E.D., and the claims of many theorists down to the present day, consciousness must involve something more than mere thought. However clever a machine programmed with (1)-(5) might become, counting thereby as a thinking thing, it would not also count thereby as conscious. There is, first of all, the fact that no one would seriously regard it as conscious. [14] But, secondly, in support of this intuition, there is now substantial evidence of systems of at least the richness of (1)-(5) that are clearly unconscious. Besides the standard clinical literature regarding peoples' unconscious beliefs and motives, there are a large number of "self-attribution" experiments detailing different ways in which people engage in elaborate thought processes of which they are demonstrably unaware. Subjects in these experiments have been shown to be sensitive to such factors as cognitive dissonance (Festinger 1957), expectation (Darley and Berschied 1967), numbers of bystanders (Latane and Darley 1970), pupillary dilation (Hess 1975), positional and "halo" effects (Nisbett and Wilson 1977), and subliminal cues in problem solving and semantic disambiguation (Maier 1931, Zajonc 1968, Lackner and Garrett 1972). Instead of noticing these factors, however, subjects often "introspect" material independently shown to be irrelevant, and, even when explicitly asked about the relevant material, deny that it played any role. These factors, though, clearly played a role in the regularities that determined the subjects' actions. Thus, whatever consciousness turns out to be, it will need to be distinguished from the thought processes we ascribe on the basis of rational regularities.

How easily this can be forgotten, neglected, or missed altogether can be seen from proposals about the nature of consciousness current in much of the psychobiological literature. The following is representative: [15]

Modern views ... regard human conscious activity as consisting of a number of components. These include the reception and processing (recoding) of information, with the selection of its most important elements and retention of the experience thus gained in the memory; enunciation of the task or formulation of an intention, with the preservation of the corresponding modes of activity, the creation of a pattern or model of required action, and production of the appropriate program (plan) to control the selection of necessary actions; and finally the comparison of the results of the action with the original intention ... with correction of the mistakes made. (Luria 1978)

What is astonishing about such proposals is that they are all more or less satisfiable by almost any information processing system. Precisely what modern computers are designed for is to receive, process, unify, and retain information; create (or "call") plans, patterns, models, sub-routines to control their activity; and to compare the results of their action with their original intention in order to adjust their behavior to their environment--this latter process is exactly what the "feedback" mechanisms that Wiener (1954) built into homing rockets are for! Certainly most of the descriptions in these proposals are satisfied by any recent game-playing program (see e.g. Berliner 1980). And if genuine "modalities," "thoughts," "intentions," "perceptions," or "representations" are wanted, then I see no reason to think that programming the machine with (1)-(5) wouldn't suffice [16], but without rendering anything a whit more conscious.

Something more is required. There are many proposals that have been or might be made, but what is disturbing about all of the ones I have encountered is that they seem to involve either very trivial additions to (1)-(5), or absolutely no additions whatsoever. I'll consider some of the more plausible ones.


A natural extension of the notion of an intentional system has elsewhere been developed by Dennett (1978) into what we might call the notion of an "n-order intentional system." A "first-order" intentional system is one that has beliefs and preferences merely by virtue of obeying rational regularities. A "second-order" intentional system is one that not only has beliefs and preferences by virtue of obeying such regularities, but in particular has beliefs and preferences about beliefs and preferences. It might, for example, engage in deliberately deceptive behavior, attempting to satisfy its own preferences by manipulating the beliefs and preferences of some other system. An "n-order" intentional system is simply a generalization of these notions: it has beliefs and preferences about beliefs and preferences about beliefs and preferences ... to any arbitrary degree, n, of such nestings.

This might be regarded as a promising suggestion about the nature of consciousness until one considers some work in computer science of Brown (1974) and Schmidt and D'Addami (1973). They have devised a program called the "Believer System" that essentially exploits the Rational Regularities as a basis for explaining the behavior of people who figure in some simple stories. For example, from descriptions of someone gathering together some logs and rope and subsequently building a raft, the program constructs (by iterations of means-ends reasoning) the motive that the agent wanted to build a raft, and imputes to him a plan to the effect that gathering together some logs and rope was the best available means of doing so. The program is hardly very imaginative. But then neither are we much of the time when we ascribe beliefs and preferences to each other on what I have argued above seems to be the very same basis.

The perhaps surprising moral of this research would seem to be that, if a system is intentional at all, it is a relatively small matter to render it n-order intentional as well. One would simply allow the program at some juncture to access itself in such a fashion that it is able to ascribe this very same "Believer System" to the agent as part of that agent's plan. Given that every time it reached that juncture it would be able to further access itself in this way, it would be able to ascribe such ascriptions, and such ascriptions of such ascriptions, indefinitely, to a depth of nesting limited only by its memory capacity. We might call this extension of the "Believer System":

(6) The Recursive Believer System

It is the paradigm of the kind of program that is realizable on existing machines. Given that the Rational Regularities afford a sufficient basis for the ascription of beliefs and preferences in the first place, a machine programmed with (1)-(6) would be capable of having beliefs and preferences about beliefs and preferences to an arbitrary degree of nesting. That is to say, it would be relatively easy to program an existing machine to be n-order intentional.
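The smallness of the step from first-order to n-order intentionality is easy to exhibit: once a system can produce one belief ascription, iterating the ascriber yields nestings to any depth its memory allows. The sketch below is a toy; the `believes(...)` notation and the agent names are invented placeholders for whatever representations a real Believer System would use.

```python
# Toy illustration of the Recursive Believer System's nesting step:
# wrapping an ascription routine around its own output yields arbitrarily
# deep belief ascriptions. Notation and names are invented.

def nested_ascription(agents, proposition):
    """believes(a1, believes(a2, ... proposition ...)); depth = len(agents)."""
    out = proposition
    for agent in reversed(agents):
        out = f"believes({agent}, {out})"
    return out

# Jill thinks that Jack thinks that Jill does not love him:
print(nested_ascription(["jill", "jack", "jill"], "not(loves(jill, jack))"))
# → believes(jill, believes(jack, believes(jill, not(loves(jill, jack)))))
```

The recursion itself is trivial, which is just Rey's point: the depth of nesting adds nothing computationally remarkable.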

Someone might protest that being seriously n-order intentional--and maybe intentional at all--requires not merely having objective attitudes, but also having attitudes essentially about oneself, attitudes de se. For example, it's not enough that I might believe that Georges Rey is presently thinking certain things; I need to be able to think that I am presently doing so. What's wanted is a special way of referring to oneself as we do in English when we use 'I'; what, e.g. Chisholm (1981, p. 24) [17] calls the "emphatic reflexive." I see no reason, however, why we mightn't endow our computer with this form of reference. We need simply constrain the use of a specific variable in the program's Language of Thought (e.g. 'i') so it functions as an internal name of the receiver of the inputs, the entertainer of the thoughts, and the instigator of the actions of the machine, but specifically controlled by rules for these roles. The means for imposing such constraints are already available by virtue of Kaplan's Logic of Demonstratives included in our clause (1). As a result of using such a language, the machine would, for example, when directly stimulated by a square and observing '(Ex)Sx,h,n' ("There's a square here now"), be able to conclude '(Ex)(Sx,h,n & Pi,x)' ("and I perceive it"); and similarly for others of its beliefs, preferences, decisions and actions.

Would such a machine programmed with a suitably indexicalized recursive believer system be conscious? Human consciousness is often thought to consist in self-awareness. [18] Rosenthal (1984: sect. 6, 1986; and forthcoming) and Smith (1986) have recently defended such an hypothesis. But, in view of the relatively small addition that (6) makes to (1)-(5), it is hard to see why we should believe it. Moreover, Dennett (1976/1978, pp. 279-280) himself remarks on a number of cases in which the presence of nested reasonings does not at all require consciously entertaining them. On the contrary, people seem to be quite poor at consciously entertaining merely second-order intentions: for example, it is unlikely that people are consciously aware of the kinds of intentions Grice (1957) and Schiffer (1972) claim underlie communication. Or consider an example of a "reciprocal perspective" that Laing, Phillipson, and Lee (1966) find so crucial in explaining domestic interactions:

From the point of view of the subject, the starting point is often between the second and third order level of perspective. Jill thinks that Jack thinks that she does not love him, that she neglects him, that she is destroying him, and so on, although she says she does not think she is doing any of these things ... She may express fears lest he think that she thinks he is ungrateful to her for all that she is doing, when she wants him to know that she does not think that he thinks that she thinks he thinks she does not do enough. (pp. 30-31)


Such deeply nested intentions probably affect our behavior efficiently only so long as we are not struggling to make ourselves conscious of them. In any case, the authors discuss examples of Rorschach and intelligence tests in which, they say, responses were often affected by unconscious reciprocal perspectives (1966, pp. 42-44). But, if Jill's thoughts about Jack's thoughts about her thoughts can be unconscious, why should her thoughts about her own thoughts have to be conscious? Why should consciousness be required to pop up only at that level and not at more complex ones? [19] In view of the complexity of peoples' actual unconscious thoughts, and the simplicity of the machine I have described, it would certainly be surprising were nested intentionality or self-consciousness to provide the condition on consciousness we are seeking.

III

Throughout my discussion of human cases so far, I have been relying on the reportability of a stimulus or mental state as a necessary condition for consciousness (of that stimulus or state). Elsewhere, Dennett (1969) advances this as a criterion of consciousness, or at least of what he calls "awareness-1"--A is aware-1 that p at time t if and only if p is the content of A's speech center at time t--as opposed to "awareness-2," which involves contents ascribed to explain the agent's rational behavior generally. [20] To avoid too close an association with specifically speech mechanisms, it would probably be better to put p into the machine's language of thought, and have it stored in a special "buffer memory" used as a source of avowals and other speech acts. [21] We might further require that the sentences in this location include second-order emphatic reflexive self-ascriptions of psychological states (the internal translations of e.g. "I think the light is on" or "I'd like some lemonade"), which would be available to be translated into the agent's natural language. Would such an arrangement be sufficient for consciousness?

Existing computers already have the capacity to report in a public language--e.g. programming languages--upon at least some of their own internal states. Increasingly, these languages resemble fragments of English; and in some cases large portions of English are parsed and used for communication between machine and user (for an amusing example, there is Weizenbaum's (1965) notorious ELIZA program, designed to provide the responses of a Rogerian psychotherapist). Now, to be sure, capturing full English is difficult: its syntax is intricate and not obviously separable from its semantics, which is not obviously separable from the worldly wisdom of English speakers, an understanding of which, as mentioned earlier, is beyond the means of present artificial intelligence. [22] Pending progress there, one probably wouldn't have a very stimulating or far-ranging conversation with any existing machine. But one could probably do as well with regard at least to introspection as one does with the vast majority of human beings. All one would need to do is supplement the program that includes (1)-(6) with:

(7) a fragment of English adequate to describe/express the mental states entered in executing (1)-(6), descriptions which are produced as a reliable consequence of being in those states.

We might simply include with (1)-(6) a specific instruction to temporarily store in a special buffer a description (in the Language of Thought) of (most [23]) every mental state the machine enters immediately after entering it. This would be compiled into the English supplied by (7) whenever an avowal or introspective report was requested or otherwise motivated. Since by clause (6) the machine is already n-order intentional, it could respond to such requests with just the sort of nested intentions that Grice and Schiffer have argued are essential to human linguistic communication. The syntax and semantics needed for communication of at least these kinds of introspective reports would seem to be quite manageably limited, and isolable from the as yet unmanageable syntax and semantics of complete English. Conversing with the machine would be like talking with an extremely unimaginative, but unusually self-perceptive human being, who knew mostly only about his own psychological states. I submit that would count as conversing with a full-fledged introspector nonetheless.
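The buffering-and-avowal arrangement of clause (7) can be sketched in a few lines. The sketch is a minimal caricature under invented assumptions: the Language-of-Thought notation, the buffer as a plain list, and the lookup-table "translation" into English all stand in for what would really be a compiled fragment of English grammar.

```python
# Minimal sketch of clause (7): each mental state is described into a
# special buffer on entry, and an avowal translates the latest buffered
# description into canned English. All names and notation are invented.

buffer = []  # the special introspection buffer

def enter_state(lot_sentence):
    """Store a LoT description of each state immediately after entering it."""
    buffer.append(lot_sentence)

ENGLISH = {
    "Bel(i, On(light))": "I think the light is on",
    "Pref(i, Have(i, lemonade))": "I'd like some lemonade",
}

def avow():
    """Compile the most recent buffered state into English (clause (7))."""
    return ENGLISH.get(buffer[-1], "I can't describe that state")

enter_state("Bel(i, On(light))")
print(avow())  # → I think the light is on
```

The point of the toy is how little machinery an "introspective report" requires once the buffered descriptions exist: the hard work is in producing the descriptions, not in avowing them.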

So, would a machine programmed with (1)-(7) be conscious? It is hard to see why. As I've said, versions of (7) are already being run on existing computers. If one were inclined to think (1)-(6) insufficient, then adding (7) would amount to little more than running (1)-(6) on an existing machine with an odd, special purpose compiler. Most any computer in the country could be made conscious in this way in about a week!

There are further mechanisms and processes to which we might turn. Attention and short-term memory might seem promising. Humans appear to be able to concentrate their cognitive processing in one area or modality of stimulation often to the near exclusion of others. There has been a good deal of research in this regard on "short term memory" (Miller 1956), on the nature of selective filtering of signals (Kahnemann 1973), and on the relation of such filtering to feedback and "feed-forward" (or plan-related) processing (Pribram 1980). Some writers, noting the association of these roles with consciousness, have suggested that they be taken as constitutive of it. Thus, Piaget (1976) writes:

If a well-adapted action requires no awareness, it is directed by sensori-motor regulations which can then automate themselves. When on the contrary an active regulation becomes necessary, which presupposes intentional choices between two or several possibilities, there is an awareness in function of these needs themselves. (p. 41)

The trouble with these sorts of processes as candidates for consciousness is that they don't make any further demands whatsoever on a machine of the sort we've been considering. Machines with suitable memory necessarily have a limited number of work addresses into which material from long-term storage, as well as some of the present inputs to the system, can be placed for short term, "on line" processing. That the capacity of these addresses is limited, and that the selection of content for them is contingent upon the plan (or program) being implemented, which in turn is sensitive to feedback, goes without saying. Certainly any machine equipped to deal with (1)-(7) would need to be designed in such a fashion: there is, for example, the buffer memory we included to execute (7). Such centralized work addresses might well be precisely the place at which high-level decisions in a program--e.g. whether or not to continue a particular sub-routine, whether to call a new one--might be made, causing the machine to make "intentional choices between two or several possibilities," to "formulate new goals," and thereby to "modify its habitual action patterns." [24] But where in any of this is there any need of consciousness? Again, if this were sufficient for consciousness, then practically every computer in the country would be conscious already!

"But," the reader may be anxious to ask, "What about sensations? Surely a device capable of them would thereby qualify as conscious." Here, to be sure, the issues are a little complicated; but not, 1 fear, in the end very helpful. First of all, in clause (3) of our program we've already allowed for transducers that would convert e.g. electromagnetic wave forms into signals that would issue in "observation sentences" in the system's language of thought. Given the apparent modularity of perception and observation (see Fodor 1983), we should suppose that the sentences issuing here are in a special vocabulary, involving predicates whose use is heavily constrained in ways analogous to the constraints on essential indexicals like '1'. Just as [ can believe I am in Maryland only under specific computational circumstances, so [ can believe I am seeming to see red only when I am in fact receiving specific signals from (or into) my visual module. Insofar as these signals might be responsible, by means of the inductive processing involved in (2), for confirming and disconfirming hypotheses about the lay of the land and the probable shape of things to come, it would be reasonable to regard the process as at least a functional equivalent of visual perception.

We might also include under (4) sensors that would signal to the machine the presence of certain kinds of damage to its surface or parts of its interior. These signals could be processed in such a way as to cause in the machine a sudden, extremely high preference assignment to the implementation of any sub-routine that the machine believed likely to reduce that damage and/or the further reception of such signals, i.e. it would try to get itself out of such states. The states produced in this way would seem to constitute a functional equivalent of pain. Insofar as these processes could, either by mistake or by program design, be self-induced, the machine would be subject to the functional equivalents of hallucinations and its own deliberate imaginings.
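The "functional equivalent of pain" just described is a preference-update rule, and can be caricatured directly. Everything in the sketch (the threshold value, the function and routine names) is an invented illustration of the idea that a damage signal swamps the machine's ordinary preferences.

```python
# Caricature of the pain-equivalent: a damage signal assigns a suddenly,
# extremely high preference to any sub-routine believed likely to reduce
# the damage or stop the signal. Names and the 1000 value are invented.

def update_preferences(preferences, damage_signal, believed_remedies):
    """On a damage signal, swamp ordinary preferences with a very high
    assignment to every believed remedy, so the machine 'tries to get
    itself out of such states'."""
    if damage_signal:
        for routine in believed_remedies:
            preferences[routine] = 1000  # dwarfs all ordinary preferences
    return preferences

prefs = {"stack_blocks": 5, "retract_arm": 1}
update_preferences(prefs, damage_signal=True, believed_remedies=["retract_arm"])
print(max(prefs, key=prefs.get))  # → retract_arm
```

A self-induced call of `update_preferences` with a spurious `damage_signal` would then be the functional equivalent of a hallucinated pain, which is all the argument needs.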

But, of course, it is the sensations--the colors, the pains, the hallucinations--themselves that are important, not mere "functional equivalents" of them. Most of us would pretty surely balk at claiming that a machine that ran on (1)-(7) alone should be regarded as really having the experience of red just because it has a transducer that emits a characteristic signal, with some of the usual cognitive consequences, whenever it is stimulated with red light. But I'm not sure what entitles us to our reservations, [25] for what else is there? In particular, what else is there that we are so sure is there and essential in our own case? What more do we do than enter a specific cognitive state when certain of our transducers are stimulated with red light? How do we know we "have the experience of red" over and above our undergoing just such a process as I have described in this machine? [26] Certainly it's not because we have some well-confirmed theory of sense experience that distinguishes us!

Whether or not it's especially well-confirmed, something like a theory with a long tradition to it claims that we have some sort of "privileged access" to such experiences, "direct, incorrigible" knowledge of their qualitative feel. [27] Now, it's not entirely clear how this claim is to be made out. If it is the claim that believing one is having a sensation entails one's having it, then we would be forced to concede that the machine I've described really does have them after all. What with its transducers, inductions, and nested self-ascriptions, it would acquire sensory beliefs that it seemed to see red; and that would be difficult to distinguish from the belief that it's having a red sensation (cf. Sellars 1956). Someone could of course object that the entailment--this privileged access--holds only for us, not for machines. But we should then be entitled to know why. Or do we have privileged access not only to our sensations, but to the fact of our privileged access as well? I forbear from following out the question-begging regress this line of reasoning suggests.

Several philosophers have recently proposed that the qualitative character of our sensations is tied not merely to our cognitive structure, but to our physiology as well (see Block 1978, Searle 1981, Jackson 1982, and Maloney 1985). Elsewhere (Rey 1980) I have argued that one thing that may distinguish us from any machines yet envisaged are many of our emotions. For there is strong psychobiological evidence that the capacity for e.g. depression, anger, fear depends upon the presence in our brains of certain hormones and neuro-regulators (e.g. norepinephrine, testosterone), or at least upon certain as yet unknown properties of those substances. [28] We have no reason to believe that, whatever those properties turn out to be, they will be available in existing computational hardware. To the contrary, given the extraordinarily high level of functional abstraction on which cognitive processes can be defined, it would be a surprising coincidence if they were. [29]

I think it would be rash to clutch at these emotions and their associated hormones and neuro-regulators as providing the conditions of consciousness that we are seeking: our feelings (e.g. anger, grief) are, after all, not always conscious, nor are moments without feeling unconscious. However, perhaps a similar dependence upon non-cognitive properties of our bodies and brains is essential to our having sensations and qualitative states. This dependence would have to be spelt out and justified. At the moment, it is to my knowledge utterly obscure how we might do that, much less precisely what those properties might be.

However, any such appeal to a non-cognitive condition is open to the following difficulty. Call the further non-cognitive condition, be it neurophysiological or otherwise, condition K. It would follow from the psychological theories we have been considering together with an insistence on K that it would be metaphysically possible [30] for someone to believe she is in a particular sensory state without actually being in it: for it would be metaphysically possible for her to be in the position of our computer, satisfying (1)-(7) without satisfying K. But this amounts to an extraordinary contribution to anesthesiology. For it would then be open to surgeons, or others adept at dealing with K, to eliminate K without disturbing a patient's cognitions. A patient might undergo an operation fully believing that she was in intense pain and very much preferring she wasn't, but be reassured by the surgeon that nevertheless, lacking K, she wasn't actually experiencing any sensations at all. She only thought she was. (I remind the reader that this is precisely the position in which we were willing to leave our computer, helplessly programmed with merely (1)-(7).) This consequence seems clearly unacceptable; and so therefore does appeal to condition K. [31]

IV

This last argument can be expanded and applied to the problem of consciousness itself. Just as it seems perfectly possible to program an existing machine to believe it's in pain and experiencing other sensory states, so does it seem to be possible to program it to believe it's conscious. It might come to believe this, as we often seem to do, simply as a consequence of being perceptually functional: e.g. it might automatically (be disposed to) enter a sentence 'Ci' into the aforementioned "attention" buffer whenever any other sensory sentence is already there. We can even suppose it has many of the beliefs about consciousness that we have: e.g. that it's a state that something is in if it's perceptually functional; moreover, a state that something is in if it thinks it is; a state, indeed, that something can never be given any reason to doubt it's in. That is, we could provide our machine with:

(8) The Cartesian Intuition

The machine could think and print out, "I see clearly that there is nothing easier for me to know than my own mind," and proceed to insist that "no matter what your theory and instruments might say, they can never give me reason to think that I am not conscious here, now." After all, such beliefs are relatively simple second-order ones, easily specified by means of (6).
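The wiring of clause (8) is, as the text suggests, trivially simple: the self-ascription of consciousness rides automatically on the presence of any sensory sentence in the attention buffer. The sketch below is an invented toy; the `Sense(...)` prefix convention and the buffer layout are illustrative assumptions, not anything from a real system.

```python
# Toy sketch of the Cartesian Intuition, clause (8): whenever any sensory
# sentence enters the attention buffer, 'Ci' ("I am conscious") is entered
# alongside it automatically. Notation and names are invented.

attention = []

def attend(sentence):
    attention.append(sentence)
    if sentence.startswith("Sense"):  # any sensory sentence triggers 'Ci'
        attention.append("Ci")

attend("Sense(red, h, n)")
print("Ci" in attention)  # → True
```

Nothing in this disposition gives the machine evidence about consciousness; it simply makes the belief 'Ci' undislodgeable, which is exactly the feature the argument trades on.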

If someone now replies that we've only provided the machine with the functional equivalent of consciousness, we may ask, as we did in the case of sensory experiences, what more is required? In particular, what further properties other than those provided by (1)-(8) can we reasonably demand as a necessary condition on consciousness? As in the case of sensory experiences, we would seem to be faced with the problem of appealing to what might be called "arcane" conditions: i.e. conditions, like the supposed physiological condition K, about whose presence or absence a person is entirely fallible. But if a person is not infallible about a necessary condition for her consciousness, then she is not infallible about her consciousness itself. [32] This is a consequence for which our ordinary notion of consciousness is, I submit, radically unprepared.

Surprisingly enough, this argument can be deployed against the very appeals that are supposed to save consciousness from materialism: e.g. dualistic substances, contra-causal freedom, spontaneous creativity, and the like. I won't consider these conditions in detail here. Suffice it to say that they are all clearly independent of (1)-(8) and are about as arcane as conditions can get: not even our best scientific theories are in a position to establish whether they obtain. Quite apart from whether such conditions actually make sense, there is the serious epistemological question of how in the world people are supposed to tell whether they've got what it takes. A particularly appealing arcane condition that nicely illustrates this point is life. Requiring it as necessary for consciousness initially seems to capture many of our intuitions, and would explain why we balk at ascribing consciousness to the machines I have imagined, and perhaps to any machine at all. [33] Maybe if something realizing (1)-(8) were also alive, our reluctance to regard it as conscious would diminish.

There are, however, a number of problems with this condition. In the first place, one has to be careful to distinguish 'life' in the biological sense from 'life' in the sense that is merely synonymous with 'consciousness' (as in "life after death," which in the first sense would be self-contradictory). Obviously, it is only the first sense that presents a substantial condition. Once we focus on that condition, however, it is by no means clear that people generally regard it as a condition on consciousness. Many people do think of consciousness ('life' in the second sense) after death as at least a possibility, and of many apparently non-biological beings (angels, gods [34]) as being conscious. Moreover, I don't think their judgments about something's being conscious would change merely as a result of learning that that thing wasn't biological. [35] Indeed, are we really so certain--as certain as we are that they are conscious--that we and our close friends actually are alive? We seem to know this only in the usual "external" way in which we know about other theoretical facts of the world--mostly by taking other people's words for it. Perhaps future research will reveal that some of us are artifacts, machines, cleverly constructed at MIT out of fleshlike plastics and surreptitiously slipped to our parents the day they say we were born. Surely if we were to discover this about ourselves we would not think it showed that we were not conscious. Thus, even life is too arcane a condition to require for consciousness, if consciousness is to be something of which we are infallibly aware.

Of course, if life or some other arcane condition is not essential to consciousness, then perhaps one ought after all to regard a computer programmed with (1)-(8) as conscious. However, for all my faith in the mechanical duplicability of the other specific aspects of mentation that I have discussed, I must confess that I find myself unable to do so. I am unnerved, and I find most other people unnerved, by the possibility of these machines--not, mind you, by the possibility of any machine being conscious, since we are hardly in a position to speculate about such dimly imagined possibilities, but by the possibility of existing machines, programmed merely by (1)-(8), being so. It simply seems impossible to take their mental life all that seriously: to feel morally obliged (not) to treat them in certain ways (not to unplug them, not to frustrate their preferences, not to cause them pain). It's as though they lack a certain "inner light," an inner light that we tend to think awakens our otherwise unconscious bodies and bathes many of our thoughts and feelings in such a glow as to render them immediately accessible to our inner, introspective eye and somehow of some intrinsic moral worth. We see this light each of us only in our own case; we are only able to "infer" it, however uncertainly, in the case of other human beings (and perhaps some animals); and we are unwilling to ascribe it to any machine. [36]

As I draw this familiar picture out and compare it with the details of our mental lives that I have considered, it seems to me appallingly crude, as simplistic a conception of human psychology as the idea of a soul is as an account of personal identity. Just what sort of thing is this "inner light" supposed to be? What possibly could be its source? How is it "perceived," necessarily each in his own case, not possibly by any other? What is its relation to attention, reasoning, nested intentions, problem solving, decision making, memory? Somehow, these detailed questions seem inappropriate, a little like asking of a Fundamentalist, "Just how did God go about creating the world in six days?" "How did His saying 'Let there be light!' bring about there being light?" Indeed, just as the Fundamentalist seems to believe in his account of the world independently of scientific research, so do we seem to believe in our consciousness and the machine's lack of it independently of any reasonable arguments. [37]

Perhaps the problem is better seen the other way around: once we have accounts of the various processes I have mentioned, what is added by consciousness? What further light does this inner light shed upon our minds? What phenomena are unexplained without it? Perhaps there is something. But perhaps too, as Hume found in the case of personal identity, there is nothing more, and it would be wrong-headed to identify consciousness with any of these actual processes, singly or together. None of them plays the particular moral role that consciousness is traditionally supposed to play. There would seem to be no actual thing or process (or even "function" [38]) that our past usages have been "getting at" (cf. Putnam 1975). That would seem to afford at least one reason for doubting that the term refers, i.e. it would give a reason for doubting there really is such a thing as consciousness at all.

This doubt, however, is pernicious. Once we allow it, it would seem that the concept of consciousness no longer has a hold. Although arguments about necessary conditions for the application of a concept are in general difficult to defend, it would seem that, if we abandon the Cartesian Intuition, we've lost what little hold on the notion of consciousness we have. But if the truth of the Cartesian Intuition is a necessary condition on the applicability of the notion of consciousness, then the mere possibility of a machine of the sort I have described not being conscious entails that there is no such thing as consciousness.

We should be clear about precisely what the consequences would be were we to give up our belief in consciousness. It would by no means entail any extravagant Behavioristic or Eliminativist claim that no mental terms at all are scientifically respectable, much less that none of them succeeds in referring. I have used mental terms throughout my descriptions of the capacities of people and machines, and I doubt very much that they could ever be reasonably eliminated. But one needn't be committed thereby to every pre-scientific mentalistic term, or to finding for every such term some post-scientific equivalent. Some terms may simply have to go, as 'angels' did in an account of planetary motion, and as 'the soul' does in our account of our personal identities. Nor would one be committed to abandoning the term in ordinary talk. If the term 'conscious' is only meant to indicate that a person is, say, awake and capable of intentional, attended activity on which she might be able to report, then the term is clearly harmless enough. I think it is often used in this way: it would seem to be the usage underlying such claims as those of Moruzzi (1966) and Penfield (1975) that locate consciousness merely in the activation of the reticular formation. We need only notice that, according to such usage, a computer programmed with just clauses (1)-(5), if my earlier arguments are correct, would qualify as conscious too.

In view of the doubts that I have raised here, what are we to make of our beliefs about consciousness? In a famous passage, Wittgenstein (1953/1967, p. 97e) writes: "only of a living human being can one say: it has sensations ... is conscious or unconscious," and, more recently, Karl Pribram (1976, p. 298) innocently remarks, "I tend to view animals, especially furry animals, as conscious--not plants, not inanimate crystals, not computers." This might be called the "cuddliness criterion." I don't see any justification in these claims; they seem, in Block's (1978) phrase, arbitrarily chauvinistic, "speciesist." But they may accurately describe the pattern of our ascriptions, unjustifiable though that pattern may be. We may be strongly inclined to think of ourselves and our biological kin in a special way that we are not disposed to think of machines, or at least not machines that don't look and act like us. Should a machine look and act as human beings normally do--indeed, should we discover that one of us is a machine--then we would think of it in the same way. We might, of course, try to justify this disposition by behaviorism (and I think this accounts for much of the attraction of that otherwise bankrupt theory), or, failing that, we might try to find some inner condition that would mark the distinction that we want. We are tempted, I think, to try to ground the difference that is so vivid to us in the kind of metaphysical difference that the traditional picture of consciousness suggests, and to claim for it a certainty to which we feel introspection naturally entitles us. (On the extent to which introspection may be susceptible to this sort of imposition, see Nisbett and Wilson 1977.) But then we find, as I have found in this paper, that no such inner condition exists. In all theoretically significant ways we seem to be indistinguishable from the "mere machines" from which we nevertheless insist upon distinguishing ourselves.

If some story like this were true, perhaps all we could do is acquiesce in our apparently arbitrary biases. We would need to abandon the attempt to find for them any false metaphysical buttressing in some special condition of consciousness, just as we need to abandon the attempt to find such buttressing for our personal identities in some special soul. In both cases, of course, the consequences for moral theory would be disappointing: we would have trouble justifying the special concern we feel towards people and animals, just as we have trouble justifying the special concern we feel towards our futures and our pasts (cf. Parfit 1971a, 1971b). Human reason would turn out to have the peculiar fate that in one species of its beliefs it would be burdened by questions that it would not be able to ignore, but which it would never be able satisfactorily to answer.

* * *

A key element in Rey's paper seems to be his conception of the discrete character of mental operations. He has argued that the appearance of "conscious awareness" is fully accounted for by functional processes which can be identified and individuated into discrete operations. These discrete operations can then be replicated in a computational manner. Thus, the appearance of conscious awareness can be replicated, too, since the computational mechanism would be quite able to assert the immediate and "undeniable" fact of its own consciousness. But consciousness as something sui generis plays no role at all in this scenario. The machine is surely mistaken when it asserts the Cartesian Intuition, just as, indeed, we may be. Although Rey does not take himself to have demonstrated conclusively that human beings are mistaken when they assert the Cartesian Intuition, he does feel he has established the plausibility of this hypothesis.

Rey seems to assume that the mind is merely a set of operations. If this were indeed the case, then "consciousness" would be a mere stage in mental processing, a stage where yet another operation is "added" to the phases of mental processing already completed. Because Rey can find no such operation that deserves to be correlated with the common notion of consciousness, he sees no option but to propose the "disturbing possibility" that humans may in fact be deceived whenever they assert the Cartesian Intuition. But is this disturbing possibility merely the result of faulty analysis? What if it is a mistake to look for "consciousness" as a correlate to some specific mental operation? What if consciousness is integral to all truly mental operations, though not itself an operation?

It is an undeniable fact that our attention is directed to objects and objective states of affairs. Even Rey hesitates to call this feature of mental life into doubt. This capacity of the mind to "entertain" the presence of reality is often referred to as the "intentionality" of consciousness, and is thought by some philosophers to represent the crucial structure of mental life. One can turn to the writings of Edmund Husserl and John Searle for important examples of intricate theories of intentionality that have been designed to capture the nature of this apparently essential characteristic of mental processing. In the following commentary on Rey's position, David Woodruff Smith offers a glimpse of this latter approach to the study of mind.


DAVID WOODRUFF SMITH

Rey Cogitans: The Unquestionability of Consciousness

When people speak of "consciousness", Wittgenstein counselled, language has gone on holiday. Au contraire: when people speak against consciousness, consciousness has gone on a holiday. When philosophers question the existence of consciousness, they are out of touch with human experience. Theorizing has cut them off from a basic feature of even their own experience--viz., consciousness. Their position is intellectually schizoid.

Consciousness is an embarrassment to functionalism and to the computational-representational theory of mind. Whatever the causal and/or computational role of a mental state, it seems that same function might be performed without consciousness. So functionalism or computationalism--which would identify a mental state with its causal or computational role--cannot account for consciousness. (Unless it can be shown that being conscious changes the causal or computational role of a mental state.) It would be convenient, then, for the functionalist or computationalist, if someone could show that consciousness does not exist.

Georges Rey has offered an intriguing argument against the existence of consciousness. [1] His argument may be summarized as follows: The human mind, we assume, has consciousness. Characteristic of the human mind, many have proposed, are the capacities of belief and inference, preference, self-reference, language-use, introspection, and sensory information-transduction. Now, a modern computing machine might realize all those psychological capacities yet lack consciousness. But so far as we know, we might be just such machines. Therefore, we should conclude that consciousness may not exist--even in us.

Rey offers this argument as a reductio ad absurdum of our everyday assumption that consciousness exists. He candidly allows, though, that it may be a reductio only of certain computational analyses of belief. I have a related worry: contra computationalism, intentional or representational states like belief are not defined merely by their syntactic or formal properties, even if computation runs on syntax alone; if modern logic has taught us anything, it is that syntax does not semantics--and hence representation--make. But apart from that issue, if we are indeed a kind of computer, then our kind has semantics and intentionality even if today's digital computers do not. Thus, the central issue here is consciousness.

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 25-34. © 1988 by D. Reidel Publishing Company.

Behind Rey's initial concerns I see a more basic argument that is more revealing in regard to consciousness. At issue is not yet whether a computer has consciousness, but whether even we do. The basic argument:

(1) A being might have certain psychological capacities yet lack consciousness: viz., the capacities of belief and inference, preference, self-reference, language-use, introspection, and sensory transduction.

(2) So far as we know, we are just such beings.

(3) Therefore, so far as we know, there is no such thing as consciousness.

(By the way, these formulations in no way legitimate Wittgenstein's empty worry that philosophical language construes consciousness as a thing, e.g., a stone.) I see two problems with this argument as a reductio of our assumption of consciousness. They concern respectively the two premises.

Rey has argued carefully for premise (1)--in the case of a computing machine. I too endorse this premise, even without appeal to computers: all those types of psychological states a being can have unconsciously, human beings sometimes have (even sensory transduction can occur in humans without consciousness, e.g. in so-called blindsight). But Rey seems to assume, for the sake of argument, that consciousness consists in some such psychological abilities as those cited. That is a mistake. Various psychological states can be either conscious or unconscious. Consciousness consists, then, not in states like belief or desire, but in a certain property that may or may not inhere in those states. Later, I shall try to say--in a rudimentary way--what that property is. (It seems the word 'consciousness' has been used with two meanings: before Freud it meant simply mind; after Freud it means the property of being conscious, either a subject's being conscious or a mental state's being conscious. Rey's argument conflates these meanings.)

What of premise (2)? It is false, as I think we all somehow know. We do know that we have conscious thoughts, desires, perceptions, etc., and so we know that we are not beings that have these capacities but lack consciousness (unconscious computers perhaps). And so the argument fails: we do know that consciousness exists, at least in our own case. How do we know this? By virtue of consciousness, by virtue of being conscious. This point flows naturally from a proper account of what consciousness is. Of course it is part of the Cartesian intuition. But it must be carefully separated from Descartes' further claims--as we shall see.

In effect, Rey argues for premise (2) by asking rhetorically, "How do you know you are not just such a computer, with beliefs, etc., but without consciousness?" What sort of claim is this? If it is a radical skepticism (one can never know anything), then it has no special relevance for consciousness. If it is an empirical claim of cognitive science (as yet we have no clear theory of what consciousness is or what its causal or computational role in the human nervous system may be), then it remains to be seen whether cognitive science will accommodate consciousness. Recall, consciousness does not consist in the psychological capacities cited in premise (1). But if Rey's claim is a challenge to the Cartesian intuition, a limited skepticism aimed at consciousness itself, then it can serve nicely to focus our attention on the nature of consciousness.

However, Rey's contra-Cartesian considerations do not reach the heart of the Cartesian intuition. Suppose, Rey suggests, we were to program a computing machine to print out "I see clearly that there is nothing which is easier for me to know than my own mind" or even "I am conscious now". Surely, as Rey implies, this capacity, when added to the others, would not render a computer conscious! Rey seems to think this point undermines the Cartesian intuition--as if Descartes' knowing he is conscious were a matter of his being able to mouth the words "I am conscious", or "cogito." But consciousness does not consist in this capacity--with or without the others--so Rey's challenge does not touch the Cartesian intuition. Again, contrary to Rey's rhetoric, the specified functions of the machine--including printing out things like "I am conscious"--would not constitute its seeming to the machine that it is conscious. The question is whether the machine really is conscious and is on that basis consciously judging and declaring that it is conscious. (We have agreed that a system--perhaps even a machine--could make rational judgments without being conscious. But could it rationally judge that it is conscious when it is not? Perhaps--say, if the machine were deviously deceived about its inner workings. But, in most circumstances, if it did I'd want the system reprogrammed!)

The questions remain. What is consciousness? Do we know we have consciousness? How? Which other forms of life have consciousness? Which forms of computing machines, if any, have consciousness? Descartes was absolutely right about one thing:

When (i.e. at the time) I am conscious, I know I am conscious.

Of course, Descartes' central focus was a further principle:

When I am conscious, I know I exist.

Indeed, this was Descartes' own explanation (in one translation) of his more famous dictum "Cogito ergo sum." But the first principle is the core Cartesian intuition about consciousness. Behind the Cartesian intuition, I want to suggest, lies another principle:


When I am conscious, or in a conscious mental state, I am aware of my being in that state.

Or better phrased:

When I am consciously thinking (wanting, seeing, ...) such-and-such, I am aware of my so thinking (wanting, seeing, ...).

In fact, consciousness just is that awareness: that is what we must be clear about (and what Rey's argument is not clear about). I am assuming the modern theory--common to psychoanalysis and cognitive psychology--that some mental states are conscious and others are unconscious. Consciousness is thus a certain property that a mental state may have; it consists in the subject's being aware of the mental state while it transpires. Descartes, presumably, did not know about unconscious mental states--but he knew about conscious ones, merely by being conscious, by having conscious thoughts, desires, etc. Indeed, consciousness begets knowledge:

When I am conscious, or in a conscious mental state, I know I am in that conscious mental state.

Or better:

When I am consciously thinking (wanting, seeing, ...) such-and-such, I know I am consciously so thinking (wanting, seeing, ...).

This is the Cartesian intuition in more modern garb. And I think it is quite true: just as perception brings us knowledge of the world around us, so consciousness brings us knowledge of our own mental states--when they are conscious. The point is not that we infer and so believe or theorize that we have conscious mental states. The point is rather that in having such a mental state I have an awareness of that state--I experience it. And that awareness gives me knowledge.

But if Descartes was right about this knowledge, he was wrong about its epistemic strength. He was wrong--many today will agree--about the kind of certainty he claimed for that knowledge. When I am consciously thinking, I am aware of my so thinking, and by virtue of that awareness I know that I am consciously so thinking. (Let us not pause over what is the correct analysis of knowing; let us agree that I am in a position to form the belief that I am so thinking, my belief would be true, have reasonable justification, have appropriate causal history, etc.--whatever is required for knowing.) But is my knowledge of my own conscious mental states incorrigible, indubitable, or apodictic--as Descartes claimed (according to the common interpretations)?


Evidently not. It may seem to me that I am thinking or feeling one thing, in that I have the necessary awareness, and yet my real thought or emotion is something else. Such is the evidence of modern psychology, both clinical and experimental. I want to grant the point, but maintain the core Cartesian intuition and with it the principle of consciousness formulated above. (But could one have an awareness as if of some mental states, and not be conscious? That seems to be a contradiction in terms. In that limited sense, then, perhaps Descartes was right: when conscious, I know incorrigibly that I am conscious. Yet there are degrees of consciousness. Sometimes while in the process of waking, I am only "half awake" and unsure whether I am awake, consciously thinking or seeing, or only still dreaming, or adrift in "primary process," accessing unconscious ideas in a less-than-conscious way. In that mode, I have some sort of partial and vague awareness of mental states, yet I am not fully conscious and my knowledge of such states is evanescent at best, and quite corrigible.)

I have sought here to separate three claims:

(1) When I am consciously thinking such-and-such, I am aware of my so thinking.

(2) When I am consciously thinking such-and-such, I know that I am consciously so thinking.

(3) When I am consciously thinking such-and-such, I know indubitably (incorrigibly, apodictically) that I am consciously so thinking.

The first is the central principle of consciousness, and is true. The second is the core Cartesian intuition about consciousness, and is also true. The third is the leading principle of Descartes' larger program of epistemology, but is false. Separating these three principles is vital in evaluating Rey's argument against the existence of consciousness. The third principle has been the obsession of recent anti-Cartesians, and drew much of Rey's fire. But only the second--which does not entail the third--is at issue in Rey's argument. It is true, I think, and so the key premise in Rey's argument is false: in having consciousness we know we have it, and that is how we know that we are not beings who have beliefs, desires, etc., yet lack consciousness. But most important, most basic, is the first principle. Insofar as Rey identifies consciousness with a variety of psychological capacities (beliefs, desires, etc.), he fails to appreciate what consciousness is--and so he slights the first principle.


III

What, then, is consciousness? Most basically, consciousness consists in the awareness one has of an experience while it transpires. Somewhat more precisely:

A mental state is conscious if and only if the subject, when in that state, is aware of being in that state.

Or better:

One consciously thinks (wants, sees, ...) such-and-such if and only if when one so thinks (wants, sees, ...) one is aware of one's so thinking (wanting, seeing, ...).

It is not easy to say what is the structure of that awareness; I have tried elsewhere, in a story too long for this essay. [2] Suffice it here to note that the awareness must be internal to the experience. It cannot be a separate mental act of simultaneous reflection or introspection, or a subsequent retention; it cannot, on pain of infinite regress.

Somehow, because the awareness is built into the experience, the subject knows he is consciously so thinking (wanting, etc.). Thus, from the most basic characterization of consciousness flows the Cartesian principle: when I am conscious, I know I am conscious.

And yet, what our own consciousness tells us about consciousness is quite limited indeed. We do not know, by virtue of consciousness itself, what is its neurological basis, what if any is its computational role in processing various kinds of information, or even what is its psychological role in the formation of our various attitudes, desires, feelings, and decisions. These are all matters of empirical theory--and importantly different levels of theory. But whatever theories we develop about consciousness, these are theories about the phenomenon of consciousness that we all experience every day.

IV

In consciousness--in our awareness of our own passing conscious mental states--we experience consciousness. There is thus a lived inconsistency in questioning--consciously--the existence of consciousness. The res cogitans--say, the Georges Rey cogitans--who so questions consciousness has somehow lost sight of his own consciousness. How can this be?

It is a common assumption among modern, or postmodern, anti-mentalists--from Quine to Rorty to the Churchlands and Rey--that claims about mind are purely theoretical claims: part of a theory about various organisms (and machines?), either an entrenched folk theory or an emerging scientific theory. As with physics, so with psychology, the story goes. As with quarks, so with the elements of mind: from the unconscious forces of the id, to the unconscious processing of language and of visual information, and on to even the propositional attitudes (belief, desire, etc.). And now consciousness itself!

But consciousness is not a purely theoretical entity; it is, in a way, observable. For we experience it in our own conscious mental processes. Strictly speaking, we do not observe consciousness in a mental act of perception. Rather, consciousness is a property of observation itself, i.e. conscious perception. What's more, it is a property of (most) activities of theorizing, including theorizing about consciousness--even theorizing about the non-existence of consciousness.

Indeed, strictly speaking, the traditional theory/observation distinction does not apply. For in consciousness we have a kind of awareness of some of our mental states. But that awareness is not itself sensory perception, the classical empiricist model for observation. And neither is it a form of judgment or belief, much less a purely theoretical judgment.

In any event, consciousness is part of what we must explain in our developing theories of the world. We know it is there not (merely?) because we observe it in the behavior of humans and other creatures, and not merely because we postulate it in various creatures, but to begin with because we all experience it. In fact, we experience consciousness not only in ourselves, through consciousness, but also in others, through empathy--yet another natural phenomenon we must explain. Any theory of mind--such as the computational theory--that cannot account for consciousness we must deem, then, inadequate at best.

To be sure, our scientific, philosophical, and cultural advances may well change the way we think and talk about consciousness. Contrary, perhaps, to the consciousness-raising work of René Descartes. But, nonetheless, the existence of consciousness we cannot coherently question. Not so long as we are, like Descartes, res cogitans.

* * *

If we accept Smith's analysis, Rey is saying that consciousness is not integral to mental operations. By implication, Smith concludes, Rey is contending that mental operations lack the sort of awareness commonly associated with "consciousness." This is the clearest sign, for Smith and his fellow phenomenologists, that Rey has erred. For phenomenologists advocate beginning from the experiential fact of "intentionality," or, as it is sometimes described, the "directed" character of consciousness. In other words, the phenomenologist begins by describing what he takes to be a basic fact of experience: we are conscious, and our conscious states are always intimately related to an experiential content. Rey questions the integrity of this starting point. He asks whether we can say for certain that our "awareness" of being conscious is undeniable proof that we are in fact conscious. Are we really different from a machine that has been programmed with Rey's various clauses--a machine that would, by means of recursive functions and nesting procedures, literally swear it was conscious (when in fact it was not)? What if we have been programmed in a way that merely gives us the impression (without the reality) of being conscious in a way that is always intimately correlated with an experiential content? Would we not be tempted, if programmed in this fashion, to assert that we are conscious, and that intentionality is a basic feature of all conscious states?

Well, what if we have been programmed in the way Rey describes, and what if this programming is in fact sufficient to give us the appearance of being conscious, and of having the mental capacities that we routinely associate with this appearance (such as the freedom to ponder possibilities)? Would it make any sense to conclude that such awareness is an illusion? Hardly, or Rey would not be so interested in building this feature of mental operations into his set of clauses. But, then, if the intentional structure of awareness is not an illusion, it must be a reality. Perhaps, therefore, we should try to determine whether this reality is simply computational (as Rey seems to hold) or whether it is an undeniable feature of conscious mental processing (as Smith seems to hold). In other words, is intentionality (and with it, consciousness) merely epiphenomenal? Or is it a real and essential characteristic of human consciousness? Is intentionality merely a residual effect of computational operations programmed into us by genetic and environmental factors? Or is it an integral component of mental operations, a component that gives structure and meaning to the life-experiences of an individual?

If Smith's intuitions are correct, intentionality is simply not reducible to an "operation," "state," or "neural event." This is due primarily to the character of its role as a "part" in the process. For while it is plausible to think of mental operations as discrete parts of a cognitive system, it clearly seems wrong to think about intentionality in this way. If we must think of intentionality as a part in the cognitive system, we should think of it as a "total" part, since the properties it exhibits seem to be intrinsic to the entire system of mental function, and may well function as the organizing principle of that system.

One can also seek leverage against Rey's position by challenging the credibility of his third person orientation. Having failed to discern any way in which intentionality and consciousness come into play as discrete mental factors, Rey has concluded that it is plausible to doubt the existence of consciousness, and thus entirely conceivable that intelligent behavior proceeds in accordance with rational regularities which, at least in principle, could be instantiated in the operating system of a computational mechanism. But this assumes that a third person ontology is appropriate for explaining the mind. As we shall see in subsequent papers, it can be argued that a third person ontology cannot distinguish between something that has a mind and something that only behaves as if it had a mind. To tolerate this ambiguity is to lose touch with a primary fact of mental life: that it appears to lend itself far more readily to analysis from the standpoint of a first person ontology.

To push this point a little further, we might say that consciousness is the first person standpoint (with all its implicit and explicit levels of awareness); the third person standpoint, on the other hand, is essentially a derived standpoint. Our fascination with the pursuit of "objectivity" has led to an apparent inversion of this relationship. In the process, we lose sight of the basic ground of the third person standpoint, until at last someone like Rey is compelled by his logic to admit the "disturbing possibility" that the first person standpoint is either an illusion, or a recursive function accessible to third person points of view. Those (like Smith, Searle, and Husserl) who reject both of these alternatives feel that the facts (and common sense) are on their side. Yet the successes of the physical sciences have shown that common sense often misreads the facts, and our ordinary intuitions can be distorted by false appearances. So why trust a first person account as evidence for the existence of consciousness?

Is there a special "mark" of consciousness that appears only to a first person standpoint? Smith has argued that beliefs and desires are one's own beliefs and desires (or at least they seem to be, which amounts to the same thing!). For what else can it mean to say that "I seem to believe" or "I seem to desire" in cases where I do believe, or I do desire, if not that these beliefs and desires are part of a conscious system that is an index of one's own subjectivity? Or is it conceivable that this experience of subjectivity is an illusion? Smith would argue that it may be conceivable from a third person standpoint, but not from a first person standpoint. Hence, as he sees it, there is undeniable evidence for the existence of consciousness as an essential feature of mental processing. But isn't this the very "Cartesian Intuition" that Rey's analysis was designed to call into question?

1.2 Correspondence

The hypothesis that mental phenomena are "correlated" with brain states often plays a key role in reflections about the nature of mind. When the term 'correlation' is used in this context, it is commonly assumed that some form of psycho-physical "correspondence" or some kind of "isomorphism" is necessary in order to explicate the notion sufficiently to provide an empirical basis for theories of mind, and thus to help unravel the riddle of mind/body dualism.

In order to work out difficulties associated with this strategy, many of its proponents have been inspired to turn toward reductionist programs in search of a more precise explanation of the relation between brain function and psychological processes. There has evolved from this, particularly in analytic circles, a tendency to discount the concept of the "mental," and to attempt to explain the appearance of cognitive states by reference to logical functions implemented in brain activity. While this approach has stimulated considerable work in cognitive science, it has not succeeded in rendering the concept of the mental superfluous for theories about the relationship between brain states and conscious psychological phenomena. There remains the question of how to account for subjectivity and for the plethora of qualitative experiences that sustain our common-sense notion of mind.

The following essay, by Yuval Lurie, attempts to discredit the correspondence hypothesis without discrediting the concept of the mental. Professor Lurie's principal thesis is that mental phenomena cannot be individuated in a way that would satisfy the logical requirements of the "correspondence" view of the relation between brain states and mental states. The key to his argument lies in his notion of the intricate relation that holds between a mental phenomenon (like a desire or belief) and its conceptual and empirical "background." Lurie concludes that this complexity renders mental phenomena "undeterminable" in nature, and thus incapable of individuation. This in turn undermines the correspondence hypothesis as a viable research strategy for empirical investigations of the relationship between brain states and mental phenomena.

If Lurie is right, beliefs, desires, and intentional phenomena in general cannot be individuated into discrete psychological states that could then be correlated with discrete brain states. This would seem to imply that "meaning" does not reside in discrete "mental states." What does it imply about "mental states" themselves? Is there a sense in which the very concept of a "mental state" ceases to have a reference? If so, what does this imply? One might hold that, since the mind is surely physical, this implies that the notion of a mental state (as something apart from the neuro-computational activity of the brain) should be abandoned. But what if one does not hold that the mind is simply physical? What view of "mind" are we left with at the conclusion of Lurie's essay? What is implied by the notion that mental phenomena like beliefs and desires are intricately bound up with conceptual and empirical contexts in ways that render them wholly "undeterminable"?


YUVAL LURIE

Brain States and Psychological Phenomena

The correspondence hypothesis is a conjecture to the effect that psychological phenomena correspond (in one-to-one fashion) to certain states and processes in people's brains. It suggests that for each and every (different) psychological phenomenon there is a different brain state or process with which it is uniquely correlated. [1] This hypothesis, often referred to in philosophical literature as "The Principle of Psycho-Physical Isomorphism," is purported to provide the empirical foundation on which a variety of conflicting mind-body theories are constructed, as well as the source of the "riddle" which such theories aim to unravel. [2]

In what follows I shall argue, contrary to what some critics of the correspondence hypothesis have claimed, that the hypothesis is wrong not because it too strongly constrains the results which future neurophysiological research may (or may not) reveal, but rather because it is logically incoherent. Moreover, as the source of this incoherence pertains to the psychological side of the alleged correspondence, it cannot be remedied by a shift from one-to-one correspondence to many-to-one (brain states-to-psychological phenomena) correspondence. In section 1, I shall review the relevant part of the debate concerning the correspondence hypothesis (as applied to "mental states"), showing (a) that despite internal differences of opinion all are committed to the logical possibility of a one-to-one correspondence and (b) that attempts to reformulate this hypothesis (as a many-to-one relationship) do not alter the basic assumption on which it is constructed: namely, that psychological phenomena can be individuated and placed, one by one, in correspondence to--either one or many--physical states. In the second section, this assumption will be shown to be conceptually incoherent. Then, in section 3, I will suggest a general conclusion which can be drawn from all this regarding the logical status of mind-body theories.

1. The Correspondence Debate

Although the correspondence hypothesis can be seen to underlie a vast area of philosophical thinking about the nature of mental life, in recent years it has been subjected to certain criticisms. The basic criticism of the hypothesis is that it lays down overly strong substantive constraints (on the physical side of the equation) which, we are told, philosophers have no business laying down, and which, it is claimed, are not justified by discoveries in the field of neurophysiological research. Critics argue that it is possible for brain states which exhibit substantive differences to correspond to the same psychological phenomenon. It is claimed that there is no reason to suppose, in the case of different (possible?) species, or in the case of different individuals, or even in the case of a single individual at different times, that the brain states which correspond to a particular psychological phenomenon are always the same. [3]

[H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 35-48. © 1988 by D. Reidel Publishing Company.]

To answer such objections, attempts have been made to formulate the correspondence hypothesis in a more abstract version, whereby references to "brain states" are replaced by references to "structural" or "functional" states (i.e. states, of whatever sort of embodiment, which have some unique structure or function). [4] It is then argued that brain states which correspond to the same psychological phenomenon, even when they do exhibit notable differences, nevertheless share some unique structural or functional features--in virtue of which a one-to-one correspondence can be defined and discovered. Hilary Putnam, who in recent years has been one of the more forceful and sophisticated advocates of the correspondence hypothesis, has argued in this vein by making use of the notion of a Turing Machine. [5] Putnam's idea is that it is possible in principle to provide a description of a human mind by means of a "machine table" of a Turing Machine. So that "... any robot with the same 'machine table' (as that of a human being) will be psychologically isomorphic to a human being." [6] A Turing Machine, Putnam explains, can be realised in many different ways, so that it is possible for physically different brain states to correspond to the same machine-table state of a Turing machine, which itself corresponds to one particular psychological phenomenon. Thus, even if different individuals differ in substantive ways with regard to their brain mechanisms, in each case the correspondence instantiates the same one-to-one relationship relative to an abstract description of the internal mechanism.
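
The "machine table" idea can be sketched in miniature. The following toy is purely illustrative and not drawn from Putnam's text: the machine (a unary incrementer), its state names, and the `run` function are all invented here. The point it displays is multiple realizability: two differently encoded realizations of the same abstract table are, in the table-relative sense, isomorphic, since they yield the same input-output behavior.

```python
# A toy "machine table" for a Turing Machine that appends one '1' to a
# unary tape. Entries: (state, scanned symbol) -> (write, head move, next state).
TABLE = {
    ("scan", "1"): ("1", +1, "scan"),   # skip over the existing 1s
    ("scan", "_"): ("1", 0, "halt"),    # write a final 1, then halt
}

# The "same" table under a different labeling of its states: a physically
# distinct realization, yet table-relative (psychologically) isomorphic.
TABLE_B = {
    ("s0", "1"): ("1", +1, "s0"),
    ("s0", "_"): ("1", 0, "halt"),
}

def run(tape, table, start):
    """Execute a machine table on a tape (a list of symbols)."""
    pos, state = 0, start
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"
        write, move, state = table[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)   # extend the tape on demand
        pos += move
    return tape

# Different realizations, identical behavior:
print(run(list("111"), TABLE, "scan"))   # ['1', '1', '1', '1']
print(run(list("111"), TABLE_B, "s0"))   # ['1', '1', '1', '1']
```

On the picture sketched in the paragraph above, correspondence is then defined relative to the abstract table, not to any one encoding of its states.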

The problem with Putnam's suggestion is that it hinges on the assumption that brain states (as well as psychological phenomena) can be placed in correspondence to an abstract description of a Turing Machine. It has been argued, by Ned Block and Jerry Fodor, that this assumption involves substantial constraints. [7] Their position is of interest, not so much because of the criticism they direct against the correspondence hypothesis, but because this hypothesis plays a crucial role in their rejection of alternatives, as well as in the position they finally adopt themselves. First, they claim that "even if ... physical states are in a correspondence with psychological states, we have no current evidence that this is so." [8] This, however, is merely to acknowledge that in principle there could be such evidence. Indeed, they concede that it is logically possible that a one-to-one correspondence does obtain. Second, even their own position, namely that psychological phenomena may be placed in correspondence to "computational states" (which they define as states of an automaton characterised by its "inputs, outputs, and/or machine-table states") also presupposes a one-to-one correspondence--albeit relative to a different "typing" of physical states and to specification of the particular mechanism. Third, although they argue against the Turing Machine version of the correspondence hypothesis, they reject the idea of a many-to-one relationship (between brain states and psychological phenomena). This idea, unless it is restricted to a "distinct" set of disjuncts, is devoid of meaningful content. But even if it is restricted to a distinct disjunction, there is no reason to believe that, like a one-to-one relationship, it is lawlike.

This suggests the following. If the proponent of the many-to-one position wants to provide discoverable, lawlike statements about the relationship between certain physical states and psychological phenomena, he must be able to specify a set of one-to-one correlations, each of which is lawlike with respect to a restricted range, say, particular organisms, particular circumstances, particular stages of development, or combinations of these. It is the disjunctive biconditional that is many-to-one and not lawlike; each of the disjuncts, however, will be correlated with the given psychological phenomenon in a lawlike way, provided that the biconditional is understood to hold in a restricted range. This is because a lawlike relationship just is a one-to-one correspondence.
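
The decomposition just described can be pictured with a toy table (the organisms, the phenomenon "P", and the state names are all invented for illustration): the global relation between brain states and a psychological phenomenon is many-to-one, while each restricted range carries its own one-to-one correlation, and it is only these restricted disjuncts that are candidates for lawlikeness.

```python
# Each disjunct of the "disjunctive biconditional" pairs the phenomenon P
# with exactly one brain state, relative to a restricted range (here, an
# invented organism type).
restricted_correlations = {
    "human":   {"P": "brain-state-A"},
    "octopus": {"P": "brain-state-B"},
    "martian": {"P": "brain-state-C"},
}

# Globally the relation is many-to-one: several distinct states realize P.
states_for_P = {table["P"] for table in restricted_correlations.values()}
print(len(states_for_P))  # 3 distinct realizations of one phenomenon

# Within each restricted range, however, the correlation is one-to-one:
# no two phenomena in a range share a brain state.
for organism, table in restricted_correlations.items():
    assert len(set(table.values())) == len(table)  # injective per range
```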

In fact the above observation points to a profound misconception shared by the participants in the many-to-one vs. the one-to-one debate. For the notion of "one-to-one correspondence" is not only an empirical conjecture concerning future data, but is a regulative principle defining conditions of adequacy for neurophysiological theory. [9] As such, claims to the effect that no evidence has been found for confirming the correspondence hypothesis turn out to be uninteresting, since they are not actually denied. The point is, that for a neurophysiological theory to falsify this hypothesis in an interesting way, there must first be available an interesting theory about brain mechanisms. But such a theory, by most accounts, has yet to be devised. Indeed, the reason that current theories are not yet interesting is that they do not enable us to place brain states in correspondence with various aspects of mental life. So there is always an escape hatch for proponents of the correspondence hypothesis. For as long as no interesting relationship can be discovered between brain states, on the one hand, and particular psychological phenomena, on the other hand, the theory used for distinguishing brain states can be regarded as "uninteresting." But once it is assumed that some interesting relationship is available, even if it is only between a (disjunctive) set of brain states and some particular psychological phenomenon, it is no longer clear why this fact should not be taken into account in the construction of a theory that aims to classify brain states so that different states can be regarded as merely different instances of the same (kind of) brain state--and, with respect to which, a one-to-one correspondence can be defined.

In retrospect, it seems the debate concerning the correspondence hypothesis has been misconceived from the very start. For throughout this debate philosophers have focused primarily on issues pertaining only to one side of the equation, namely, the physical side. But this is justified only if psychological phenomena (like brain states) can be individuated, one by one, so that it at least makes sense to talk about placing them in such a relation to brain states. In the following section, I shall examine the correspondence hypothesis relative to psychological phenomena which philosophers commonly call "mental states," i.e. beliefs, hopes, wishes, desires, intentions, and so forth. I intend to show that the notion of a correspondence between brain states and such psychological phenomena is incoherent. Contrary to what proponents and critics of the hypothesis have been assuming--namely that it is logically possible that certain brain states can correspond to psychological phenomena--I wish to show that it is logically impossible for any such relationship to exist. [10]

2. The Incoherence of the Correspondence Hypothesis

Any attempt to confirm the correspondence hypothesis requires that it first be formulated with respect to a specific psychological phenomenon. Let us consider the following case. It is announced that the central library at Ben-Gurion University will be closed next Monday. An announcement to this effect is posted on a placard in front of the library. Various people read this announcement or hear about it from friends. Others, of course, never find out about it. Thus it may be assumed that some people will acquire a belief that the central library at Ben-Gurion University will be closed next Monday, while others will not. On the correspondence hypothesis, then, there will be some unique brain state which corresponds to this belief and only to this belief. [11] Does this make sense?

To confirm such a hypothesis requires that this belief and only this belief be correlated with some unique brain state. How can such a correlation be established? Presumably by adopting standard empirical procedures. We gather a group of people who have this belief and a group who lack it, and proceed to examine whether there is some unique brain state in the brains of the people in the first group which is missing from the brains of those in the second group. It is conceivable that such a brain state may be found. If so, we then inform the people who lack both the belief and the brain state that the library will be closed on Monday to see whether this unique brain state now emerges in their brains as well. Again, we inform the people who have both the belief and the brain state that the announcement has been cancelled, and proceed to investigate whether the unique brain state previously identified in them disappears. It is conceivable that the emergence and disappearance of the brain state may indeed correspond to the emergence and disappearance of the belief. It is tempting to suppose, therefore, that whatever problems there are in confirming this hypothesis, they are of a technical nature only. For there seems to be nothing intrinsically problematic here.

It is my contention that this way of looking at the matter obscures the nature of the problem. For no matter how these experiments are imagined, there are many other beliefs which also set the two groups apart. For example, those in the first group, unlike those in the second group, may just as well be said to believe that,

(1) no one will be able to enter the library next Monday,
(2) no one will be allowed to study in the library next Monday,
(3) no one will have a chance to work in the library next Monday,
(4) no one will be able to take books from the library next Monday,
(5) no one will be able to return books to the library next Monday,
(6) the doors of the library will be locked next Monday,
(7) none of the librarians will be in the library next Monday,
(8) none of the usual readers will be in the library next Monday,
(9) no one will be working in the library next Monday,
(10) the library will not be operating normally next week,
(11) the tallest building on campus will be shut next Monday,
(12) the place where most of the philosophy books are kept will stay closed next Monday.

This list can be extended indefinitely. Thus, the number of beliefs which people may acquire alongside the belief that the library will be closed on Monday is not determinable. Nor is there any reason to suppose that a belief that the central library will be closed on Monday is the only belief which people will have once they find out about the announcement. So there is also no reason why we should suppose that having such a belief is the only thing which sets the people in the first group apart from the people in the second. Similarly, there is no reason to suppose that the correlation drawn between the belief that the central library will be closed on Monday, and some particular brain state, is the only correlation that can be drawn between the brain state in question and a belief. The question how the correspondence hypothesis is to be confirmed remains unanswered.
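
The experimental predicament can be made vivid with a toy simulation. Everything here--the pretend marker, the subjects, the belief labels--is invented for illustration: any brain-state marker that separates believers from non-believers also separates them with respect to every accompanying belief, so the data never single out which belief the marker corresponds to.

```python
# Toy version of the two-group experiment. Anyone informed of the
# announcement acquires the target belief together with an open-ended set
# of accompanying beliefs; the uninformed acquire none of them.
TARGET = "the library will be closed next Monday"
ACCOMPANYING = [
    "the doors of the library will be locked next Monday",
    "none of the librarians will be in the library next Monday",
    "no one will be able to return books next Monday",
    # ... extendable indefinitely
]

def beliefs_of(informed):
    """Beliefs a subject acquires on hearing the announcement."""
    return {TARGET, *ACCOMPANYING} if informed else set()

def marker_present(informed):
    """A pretend brain-state marker, present exactly in informed subjects."""
    return informed

subjects = [True, True, False, False]

# Every candidate belief--the target and each accompanying one--fits the
# observed marker/belief correlation equally well:
candidates = [TARGET, *ACCOMPANYING]
ambiguous = [b for b in candidates
             if all(marker_present(s) == (b in beliefs_of(s))
                    for s in subjects)]
print(len(ambiguous) == len(candidates))  # True: nothing singles out TARGET
```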

Many philosophers, I suspect, will be dissatisfied with what has been said so far. "Look," they may say, "the people in question are only apt, as you yourself put it, to acquire other beliefs as well. Hence, they may have these other beliefs or they may not. There is no necessary connection between having a belief that the library will be closed on Monday and any of the other beliefs you have listed. You do not really know whether the people in question will acquire any of these other beliefs. Moreover, even if they do, since there is no necessary connection between those beliefs and a belief that the library will be closed on Monday, there need not be any intrinsic difficulty in correlating this particular belief with a particular brain state."


Indeed, my aim in listing these beliefs was to provide examples of beliefs which may be acquired by a person when he acquires the belief that the library will be closed on Monday, but which neither entail nor are entailed by that belief. But this is as far as the above objection extends. The point is: a belief that a particular library will be closed on some particular Monday cannot be acquired from a conceptual void. It requires a "conceptual background"--that is, knowing what libraries are, knowing what it means for a library to be closed (as opposed to, say, for a door to be closed), knowing all sorts of regulations, norms, and conventions regarding all sorts of things. (For example, knowing that, when it is announced that the library will be closed on Monday, this means that it will be closed throughout all of Monday, not for just one second.) Knowing such things is what enables people to acquire such a belief when confronted with such an announcement.

Moreover, there is a very specific empirical background to the lives people lead. Libraries are of one sort or another, and reasons for closing them are also of one sort or another. Some libraries occupy a number of buildings, others only one; some have guards, others do not; some always close on Monday, others do not; some close on Monday for renovation, others because they have been broken into, or because there is no budget to maintain them, or because of fire. Indeed, both the scope of the conceptual background necessary for acquiring a belief and the scope of the empirical background against which people in fact acquire such beliefs, are not determinable. They are undeterminable in two important senses. First, it is not possible to specify, one by one, all the "items" of which such backgrounds are comprised. Second, they are backgrounds against which people are not only enabled to acquire an undeterminable number of beliefs when confronted with such an announcement; they, in fact, bring about the acquisition of an undeterminable number of beliefs if any at all is acquired. While I cannot specify which of various (logically independent) beliefs (and hopes, wishes, desires, intentions) will be acquired once a belief that the library will be closed on Monday is acquired, I can state that it is (logically) impossible to acquire only this belief. For given the nature of these two "backgrounds", unless some other beliefs are also acquired, there is no reason to suppose that a person really understands what it means for the library to be closed and, hence, also no reason to suppose that he has acquired the belief that it will be closed.

It is tempting to suppose that the problem confronting us is merely one of not being able to isolate in particular cases a specific belief from other aspects of mental life which surround it, and that this problem can be overcome by devising (inductive) procedures for distinguishing the belief in question. Since there is no necessary connection between any one of the beliefs apt to accompany the belief that the central library will be closed and that belief itself, it may seem, at least in principle, that this ought to be possible. For example, a situation may be improvised where it will be announced that although the library will be closed next Monday, the librarians will be working, or that the doors will remain unlocked, or that people will be allowed to walk in and return books, and so forth. In this way, it may seem possible to eliminate the acquisition of each belief which previously accompanied the belief that the library will be closed and thus arrive, eventually, at a correlation between the single belief that the library will be closed next Monday and some unique brain state.

It is important to understand just what this suggestion amounts to. While it is possible to imagine cases where a person will acquire a belief that the library will be closed yet not acquire any particular one of the accompanying beliefs previously mentioned, it is impossible to imagine a case where none of these beliefs will be acquired together with that belief. For, first of all, the number of beliefs which accompany the belief that the library will be closed is not a determinable one. So we do not really know what it means to attempt to eliminate "all" of them. Second, a library, when it is closed, must be closed in some particular way. A library simply cannot be closed if the case is so set up that none of the accompanying beliefs can be acquired. If the doors are to be left open, if people are to be allowed to study in the library, to take out books, and so forth, then the library will not be closed. Hence, the inductive procedures suggested for individuating the belief that the library will be closed next Monday from those beliefs that accompany it, or could accompany it, can at best eliminate only some of these beliefs at any one time.

However, there is another problem with this (inductive) procedure. In the original situation, the announcement that the library will be closed next Monday was understood to suggest, among other things, that the doors to the library will be locked and that no one will be allowed to enter, that no one will be working, and so forth. (This is not to say that an assertion to the effect that the library will be closed next Monday entails any of these claims. This is just what such an announcement is usually taken to suggest and what it is usually understood to mean.) We are now asked to imagine cases where, although there will be an announcement that the library will be closed, one or another of the facts previously understood to be true will not be true. Hence, it turns out that the library will be closed in an entirely different way than it was previously understood to have been closed. But if an assertion to the effect that the library will be closed turns out to suggest different things in these different cases, it is not obvious that it can be taken to mean the very same thing. (This can readily be seen from the fact that the sort of evidence which would confirm each announcement in these different cases is also different.) To suggest that nevertheless in all cases people finding out about the announcement will have acquired a belief which can be described as a "belief that the library will be closed next Monday," and which in all cases corresponds to the same brain state, is no longer very interesting. For they will have acquired thereby different beliefs. (This can be seen from the fact that it is possible for two people to acquire a belief so described, although what confirms one person's belief disconfirms the other person's belief.)

Still, it may seem that since the belief that the library will be closed next Monday is the common element on which all these cases turn, and since in all these cases the same unique brain state is discerned, there is reason for concluding that this belief, and only this belief, corresponds to the brain state in question. This conclusion, as I understand it, is based on taking the belief that the library will be closed next Monday as the central core around which these different cases revolve and, as such, as a discrete phenomenon having an independent essence of its own which, owing to the complexity of both the empirical and conceptual backgrounds against which it is acquired, just happens always to be surrounded with other beliefs. Now the fact is that we are unable to determine what the essence of such a belief is. If asked to specify what this belief consists in, the most we can do is to list various accompanying beliefs. However, all these imaginary experiments were entertained for the purpose of divorcing the belief that the library will be closed on Monday from any single belief which may accompany it. And we assume such experiments could succeed in establishing this point. Thus, whatever this belief consists in, it cannot be identified with any one of these other beliefs or, for that matter, with all of them conjoined. For, (1) we do not know what "all of them" means, and (2) there is no single belief without which a belief that the library will be closed next Monday cannot be acquired. Yet, in none of the cases considered was it possible to acquire only the belief that the library will be closed next Monday. It was always accompanied by other beliefs. So, the conclusion that the belief at issue can be separated from the others is simply wrong. What these purported experiments actually establish is that the belief in question consists, in each of these different cases, of a different set of an undeterminable number of different beliefs.
The belief that the library will be closed next Monday is not, therefore, a discrete phenomenon having a determinable essence. It is a belief which has neither a central core to it nor clearly drawn boundaries. It is complex, not discrete. So the brain state in question not only turns out to correspond to many beliefs in each case considered, but in each of these cases it corresponds to a different set of an undeterminable number of beliefs all of which amount to having a belief that the library will be closed next Monday.

Two points need emphasis. First, although I have only argued that a belief that the library will be closed next Monday is not the sort of phenomenon which could correspond to a state or process in a person's brain, this argument can be extended to any belief, intention, or hope. Second, it is important to realise that what faults the correspondence hypothesis is not simply that psychological phenomena are complex rather than simple, but that psychological phenomena do not possess a determinable essence in the way they would have to for the correspondence hypothesis to make sense.

3. A Conclusion Regarding Mind-Body Theories Generally

If what I have claimed is true, then it seems that most current mind-body theories in circulation are incoherent, however plausible they may initially appear to be. For what mind-body theories aim to furnish is a coherent account of the purported correspondence between mental life and certain physical states and processes by reducing it to a particular and familiar relationship, whether a "causal" or an "identity" relationship, or whatever. Thus, mind-body theories of any such general kind can now be seen for what they are--philosophical attempts to put forward an explanation for a relationship which, on logical grounds, simply cannot exist.

4. Postscript (1987)

There are three issues I should like to reconsider. The first concerns functionalism and its relationship to the correspondence hypothesis. It seemed obvious to me at the time that the argument against the hypothesis pertains to functionalism as well. However, this intuition has not always been shared. I wish, therefore, to focus on it once again. The second issue concerns three notions of indeterminacy of the mental which have been run together and which I should like now to distinguish. The third pertains to my conclusion in the last section regarding the incoherence of the correspondence hypothesis. I would like to formulate it in a somewhat different manner and in a less stringent fashion.

To begin, functionalism is a view from which, it is supposed, important conclusions can be drawn regarding the relationship between the mental and the physical. In functional theories of mind, psychological phenomena are said to be identical with either machine-table states (Putnam) or computational states (Fodor and Block), or with the realization of a functionally organized flow chart linking selected informational subsystems into an integrated information processing system (Dennett, Maloney). [12] Formulating the correspondence hypothesis as I have, I find that functionalism presupposes a correspondence (law-like) relation between the mental and (a sub-set of) the functional--one which is instantiated as an identity relation. No such relation, however, is presumed to exist between the functional and the physical. Functional states, we are told, may be "realized" by all sorts of physical (or "spiritual") things. Hence, while the mental corresponds to the functional, it bears no such relation to the physical. For functionalism the mind-body problem turns out to be a problem of how correspondence between the mental and the functional is to be accounted for and explained.

Consider once again the example discussed earlier. On a functional theory of mind, a belief that the central library at Ben-Gurion University will be closed Monday corresponds to a unique functional state--a machine table state, a computational state, or the realization of some particular flow chart. However, as I have argued, it is possible to correlate this belief with such a state only if other beliefs are correlated with this state as well. This kind of relation is described in the literature as a "many-one" relationship (in this case, one between the mental and the functional) and is not deemed to be lawlike. The problem, then, for functionalism is to explain why, given such a correlation, it should be assumed that a unique (one-to-one) correspondence obtains between this state and some one particular belief out of an indeterminable set of beliefs.

Advocates of functionalism can resort here to either of two alternatives, neither of which seems very satisfying. The first is the suggestion that the specific belief under consideration has a discrete nature, not to be confused with any other belief which may accompany it. Thus the state correlated with this belief will need to be decomposed so as to extract from it some sub-state which corresponds uniquely to this belief. This line of thinking is based on the presupposition that the belief in question is a "simple phenomenon," possessing an essence not revealed in any of the accompanying beliefs. The problem is that we do not know how to extract a single belief from all that surrounds it in mental life. The idea of a "simple belief" is at best unclear. The second alternative is to go the other way: identify the belief in question with the set of beliefs which accompany it. On this view, the belief considered is held to be complex, consisting, in different circumstances, of different accompanying beliefs. The state correlated with this belief is then assumed to be complex as well, consisting perhaps of a set of sub-states, each of which corresponds to one of the accompanying beliefs. Now, aside from the fact that it is not clear why the "accompanying beliefs" are supposed to differ in this respect from the belief in question (which is held to be complex), the problem is that in different cultural circumstances different accompanying beliefs will comprise this complex belief. Subsequently this belief will be correlated with different sets of sub-states. This kind of correlation yields a one-many relationship between the mental and the functional--the sort of relationship said to obtain between the functional and the physical and which, we are told, is not of a law-like nature.

The claim that psychological phenomena are not sets of determinable phenomena relative to the correspondence hypothesis requires further explanation. There are at least three ways in which the notion of undeterminability has been used, and they need to be adequately distinguished.
One use of the notion seems to be this: functionalism, like its predecessors, behaviorism and physicalism, is a theory which aims to provide a universal and objective method for representing mental life in scientific, culturally neutral discourse--describing mental life in "Nature's own language," as it were. To gain this end, it is required that psychological phenomena be correlated in one-to-one fashion with certain functional or physical states. One argument against such an approach is that the particular nature of psychological phenomena cannot be determined in discourse which only allows for reference to states, functional or physical. The reason is that the mental is not only a species of the natural but also of the cultural. As such, a portion of what constitutes its nature simply cannot be revealed in such discourse. On this view, my claim that the belief about our library is incapable of determination within the framework of the correspondence hypothesis reduces to the (rather trivial) claim that there is no universal, culturally neutral discourse available for representing the unique nature of libraries in all cultures. Either way, the assumption underlying this line of reasoning is that a given discourse is wedded to a particular ontology. A radical shift in discourse, it is then supposed, precludes identification of (or reference to) the same phenomena. While I think this line of reasoning is basically correct, it nevertheless fails to do justice to the correspondence hypothesis. Its point, precisely, is to overcome this problem of identification. It asks us to assume that only one-to-one matching of phenomena relative to two different discourses is possible. Thus, the nature of a phenomenon presented in a given cultural discourse, it is supposed, can be represented in another discourse (lacking any of the cultural connotations of the former) without making the same reference.

The point that needs to be stressed, then, is that a cultural phenom­enon emerges within an historical context. Libraries can differ from one another in many ways. And it is this that makes it difficult to understand how their particular but different natures in different cultural settings are to be represented in some "universal discourse." The argument against the feasibility of such a venture is familiar: if we cannot know what the nature of all (possible) libraries in all (possible) historical contexts consists in, then there is no ground for claiming that within some form of universal discourse it will be possible to represent such a nature. Now I take it that at least part of what makes this age both a post-Wittgenstein age and a scientific age is that while nothing is to be expected in this direction from philosophical analysis, we ought, nevertheless, to keep our minds open to the possibility that science may be able to come up with the goods. But surely it is more than just likely that we can never represent satisfactorily in some universal discourse the particular nature libraries may acquire in different cultures. So there is nothing to which libraries in general can be said to correspond. Now, there is nothing strange about
this claim--so why are we thought to be saying something highly mysterious and to be proceeding against the whole grain of science by extending this remark from libraries to minds?

All this having been said, my main concern has been with a somewhat different indeterminacy of the mental as implied by approaches based on the correspondence hypothesis. It is an indeterminacy which seems to be more specific to psychological phenomena, and it is to that I now turn. For beliefs to correspond systematically to certain states (functional or physical) they must be capable of being individuated from one another. Given this prerequisite for using the correspondence hypothesis, it is crucial that we be able to determine whether what is described, for example, as "the belief that the central library at Ben-Gurion University will be closed on Monday" and what is described within what I previously called "accompanying beliefs," refers to different beliefs or to one and the same belief. For if in a particular context, the concept of belief precludes this question from being answered satisfactorily, then the notion of one-to-one correspondence between beliefs and a set of states requires a determination regarding the nature of belief which is based on misunderstanding its conceptual boundaries. It demands that we be able to extract from this concept an answer that it is incapable of yielding. Obviously we can think of examples when a belief described in one way (e.g. that the library will be closed) and a belief described in another way (e.g. that no one will be allowed to work in the library) will be judged as different. But all that shows is that when called upon we are able to judge descriptions to be non-synonymous, and that we can think of examples in the context of which this difference in meaning entails a difference in reference.

To suppose that synonymy is criterial for individuating beliefs is to suppose that with respect to the mental (as opposed, perhaps, to the physical) meaning (of the description used) determines reference. Yet there is nothing inherent in the concept of belief in its non-philosophical use which tends to support this bit of metaphysics. For, imagine that you and I have a belief which we both describe as a belief that the central library at Ben-Gurion will be closed on Monday. And, let us further suppose, your belief pertains to one Monday and mine to another. Should we still assume that both of us have been referring to the same belief? Surely your belief might turn out to be true while mine turns out to be false. Or consider the following. Knowing that I plan to work in the library on Monday, a friend tells me that I had better make other plans. The library, he explains, will be closed on Monday. Later, someone else tells me that I need to change my plans for working in the library on Monday because readers will not be allowed to make use of it on that day. Now there is nothing amiss in supposing that in the context of libraries and their norms as they are familiar to us, both persons have expressed the same belief. Indeed, within this context what falsifies the one belief falsifies the other as
well. The fact that in some other context, or relative to some other historical situation, assertions of this sort may be used to refer to different beliefs does not entail that they do so in this particular case.

Whether a set of descriptions used in expressing beliefs is to be judged to refer to one or more beliefs depends on the context in which a determination of this sort is required. What this means is that we are not entitled to universalize from a particular case to all possible cases in which these descriptions may play a role. There is no universal method for individuating beliefs on the basis of descriptions being used to refer to beliefs. Indeed, it is precisely this indeterminacy about reference that renders our psychological idiom unsystematic--and, of course, so handy for describing people (sometimes even machines) and their engagement in the world. So there is no universal answer to the question: should a given set of alternative descriptions be judged to refer to one, or to many, beliefs? The apparently intractable problem facing the correspondence hypothesis is that it is essential to it that an answer to this question be given.

I now come to the last issue I wish to reconsider. What should we conclude about the utility of the correspondence hypothesis? Previously I assessed it as logically incoherent. But I now tend to think that this way of stating the conclusion rests on the false assumption that a radical distinction can be drawn between empirical and conceptual issues. As I no longer think that this is so, the conclusion needs to be reformulated. I am inclined now to believe that attempts to operate with the correspondence hypothesis can be given some credence if viewed as an effort to represent mental life from a vantage point not accessible in ordinary discourse. In this way, functionalism, for example, may be seen as attempting to impose a mechanical state description on the mind by ordering descriptions of psychological phenomena according to a machine-table, or computational or flow-chart system of description. The picture of mind arrived at in this way should be seen as one, among other possible pictures, open to us when contemplating the nature of mind. If chosen, it provides a novel perspective on mental life--one in which the mind can be viewed as though it were a mechanism of some sort. In so doing, we perhaps gain a useful perspective about the nature of people--of course, we lose another, older, more humane one. What the correspondence hypothesis affords us is a way of picturing people as certain sorts of complex mechanical systems. Happily, alternative ways of regarding people are still open.

--v--

Lurie's argument is dependent on the fact that beliefs, desires, and intentional phenomena in general cannot be conceived in isolation from their conceptual and empirical contexts. This in turn implies that it is a mistake to address the mind-body problem by seeking a correspondence
between discrete psychological phenomena and discrete brain states. But has Lurie made a convincing case for this conclusion?

Here is a possible objection. Suppose we were to grant the contention that "belief A" always occurs together with "belief B" (among perhaps an indeterminate number of others). Isn't it possible that these two beliefs might possess uniquely different causal roles, so that belief A would cause me to act differently than if I were to act under the influence of belief B? It might even be the case that, say, only belief A causes me to act. If this were so, why couldn't we seek a confirmation of the correspondence hypothesis by appealing to the particular brain state that is correlated with the behavior in question?

Such an objection overlooks the essential point in Lurie's paper, however. He is not arguing the "soft" thesis that belief A and belief B are always somehow conjoined. As Hume has already demonstrated, such a position would continue to imply the presence of discrete psychological phenomena. But this is precisely what Lurie finds objectionable. Thus we can only conclude that Lurie is stressing the stronger thesis that belief A is intricately interconnected with belief B, together with a whole network of conceptual and empirical phenomena. This is a crucial point, for it suggests, among other things, that meaning does not reside in discrete mental states. Where then does meaning reside? How are we to understand the relation between meaning and psychological phenomena?

Most analytic philosophers assume a materialist theory of mind. Lurie is an exception. His position doesn't presuppose materialism. On the contrary, it suggests a view of the mind that is consistent in many ways with the views of phenomenologists. As the following commentary by Forrest Williams illustrates, there is one aspect of Lurie's position in particular which complements the work of Edmund Husserl.

Professor Williams offers a phenomenological reformulation of the more critical aspects of Lurie's position. He does this by emphasizing the notion of "belief clusters," which in turn provides for a phenomenological characterization of the nature of mental phenomena. He concludes that while the object of our belief (or any other intentional object) may be present to us in experience as a discrete phenomenon, the belief itself (or any other intentional state of mind) is never a discretely discernable phenomenon. For this reason, Williams concludes, we can know in advance that in the context of empirical research it is meaningless to pose, much less to propose solving, the riddle of correspondence between "brain states" and "mental states."


FORREST WILLIAMS

Psychophysical Correspondence: Sense and Nonsense

Amadeus, a beginning music student, is given a set of cassette tapes, each of which is coded with a number of colored dots. He is told that the tapes are recordings of various compositions, each of which was written for some accompanied solo instrument. He is to listen to them, examine the colored dots, and draw up a table showing which combinations of colored dots correspond to which solo instruments; for example, it might be the case that if and only if the cassette is marked either with one yellow and one blue dot or with two blue dots will the composition be for accompanied flute. He is assured that there does exist a regular and discoverable correspondence between one or more color codes and the featured instrument. Amadeus, accepting this correspondence hypothesis in good faith, sets about the task. Unfortunately, he has been given by mistake a box of tapes which contains only a number of recordings of various orchestral suites.

We leave Amadeus to his plight; not dissimilar, if I understand Yuval Lurie correctly, to that of the psychologist who hopes to establish a one-one (or many-one) correspondence between one or more brain states and a certain belief-state (e.g., the belief that the central library at Ben Gurion University will be closed next Monday). The trouble is, one might say, the psychologist is thinking about the similarity between various mental states much as a musician might listen to a number of otherwise variable performances that are all alike in featuring a certain solo instrument; whereas a mental state, if Lurie is right, is far more like an orchestral suite, which is not organized around a lead instrument. Hence the psychophysical researcher, like Amadeus, is doomed to frustration.

II

Experimental psychologists who are actually engaged in body-mind research might of course protest that they often establish by inductive procedures more or less reliable correlations between certain states on the bodily side and certain pains, pleasures, tickles, "mental flashes," etc., on the mental side; and that the correspondence hypothesis must therefore be a viable one.

Such (in some sense) "psychic" phenomena are not, however, the "mental states" under discussion. The issue is "beliefs, hopes, wishes, desires, intentions, and so forth" [1]. These psychological phenomena
differ in an extremely important way, it seems to me, from proprioceptive sensations (e.g., toothaches), pervasive bodily or even "psychic" feelings (lassitude, jitters, and so forth), imagistic flashes, and the like [2]. Indeed, they are what both analytic and continental philosophy often call "intentional experiences." Verbally articulated, they exhibit the special propositional form,

(intentional verb)-that p

e.g., 'I believe that p' or 'I hope that p'. In the analytic literature, these have often been termed "indirect statements," "opaque constructions." or "expressions of propositional attitudes." [3] As Bertrand Russell once noted, these might well be taken to define the subject matter of psychology in its most interesting form. [4] In any event, the possibility--indeed, the familiar reality--of finding certain fairly reliable correlations between certain physical states and certain "psychic" states of a non­intentional sort does not, as I see it, damage Lurie's thesis.

Confining the discussion entirely to Lurie's "mental states," then, we may say that there is no isolable mental item of the character,

the-belief-(wish, doubt, etc.)-that p

Rather, there occurs, according to Lurie, nothing less than a whole cluster of beliefs. For example, the belief that the Ben Gurion library will be closed next Monday amounts to a lot of beliefs, e.g. such beliefs as that 'closed' means the doors will be locked, that 'next Monday' means all day rather than some part of the day, etc. For no belief can occur apart from a legion of beliefs, generated both by the "empirical background to the lives people lead" and by a "conceptual background necessary for acquiring" the belief in question. (Lurie: this volume, p. 40)

And the psychological plot thickens. Lurie further observes that the number and variety of beliefs in such a belief-cluster is quite indeterminable. Would-be supporters of the correspondence hypothesis will thus be confronted in each instance with a set of beliefs which is indeterminably large. Might they, nevertheless, hope to specify the wanted psychological correlate for a psychophysical equation by picking out, across a number of different occurrences of what happens mentally when people read the sign on the library door, some common-denominator belief which recurs in each and every set? Unfortunately not, since no inductive procedure will work where the number of beliefs in a cluster is not only large, but indeterminably large. Moreover, there is no recurrent core belief (e.g. the belief that-the-library-etc.), that stands out among all the others, because there is no such thing as a discrete "core-belief" anywhere to be found. When various people believe that p, there are only indeterminably large clusters
of beliefs, with no one belief that is essential, from individual to individual, to all the various clusters that qualify. The challenge to the correspondence hypothesis, therefore, is not based solely on the internal complexity of a particular believing state, but on the absence of any discrete component that constitutes it as such-and-such a belief state. Thus, Lurie is able to conclude that on the mental side of the alleged correlation are to be found any number of quite differently constituted clusters which may "amount to having a belief that the library will be closed next Monday" (p. 42). To recall the earlier musical analogy, it is somewhat as if we had walked into what we imagined to be performances of several concertos featuring the same instrument, and busied ourselves with trying to determine which musician is the soloist, and which players are the accompanists, when the compositions happen to be, not concertos, but orchestral suites.

As a result, the experimental psychologists can anchor only one end of the correspondence relation in some recurrent particular. They may well succeed in characterizing a number of brain states, at different moments, or in different skulls, as one and the same state. For example, various brain states may exhibit the same electrical pattern, have the same location, and stand in the same causal relations to certain other physical states. Yet they remain unable to discern any psychological element which is common to two or more mental states and which thereby marks them as the same sort of mental state. Mental states, Lurie claims, "do not possess a determinable essence in the way they would have to for the correspondence hypothesis to make sense" (p. 43). Hence the clincher: the correspondence hypothesis is not just empirically false or improbable--it is "incoherent" (p. 38), it does not "make sense" (pp. 38, 43; my emphasis).

I think Lurie is entirely right in this conclusion, and that it has devastating implications for any proposed empirical research which would hope to correlate to a brain state (or other physical event), a mental state or intentional act, under the regulative principle or hypothesis of psychophysical correspondence. However, I would like to turn now from Lurie's thesis, as I have glossed it, to the further question of its cognitive status, and consider this latter question from a philosophical point of view no doubt somewhat alien to his--roughly, from a Husserlian point of view.

IV

For Lurie, as we have seen, the underlying issue that proponents of psychophysical correspondence must face is not a factual one, in any usual sense of the term. It is a matter of what does and does not make sense. In general, this issue of constraints upon meaning is, interestingly enough, precisely what betrays a philosophical issue for Husserlian
phenomenology [5], no less than for much of modern linguistic philosophy. My question, therefore, which I want to raise in a phenomenological spirit, has to do with Lurie's contention that the correspondence hypothesis is "logically impossible" (p. 38), that it "attempts to put forward an explanation for a relationship which, on logical grounds, simply cannot exist" (p. 43). In what sense is it not just empirically dubious but rather "logically impossible"? Does that mean that the correspondence hypothesis is analytically a priori contradictory in its wording, to invoke Kant's terminology, like the logically contradictory hypothesis that, say, to every happily married man living on the east side of Park Avenue there correspond two happily married bachelors on the west side? Something far more interesting, surely, than a mere logical inconsistency is intended by Lurie's critique. He rests his case on a non-trivial thesis about what mental states are like.

But how, one might wonder, can one come to know what mental states are like? Presumably, one must in some manner examine them, and then report on them, to oneself and to others, who may in turn be disposed to confirm or disconfirm such reports themselves. In sum, you, Lurie, I, Husserl, must evidently reflect on them.

However, if such reflection were itself simply one more variety of empirical observation, this time conducted "internally" rather than by external perception--as in the fairly discredited tradition of so-called "introspective" psychology--Lurie's conclusion could only be advanced as just one more empirical generalization. To pronounce that the correspondence hypothesis "makes no sense" is clearly a much stronger conclusion than that.

Aficionados of Husserl will recognize in the philosophical issue posed here precisely the motive for the celebrated "epoche," in which one suspends all concern with establishing or exploiting factual knowledge about physical and psychological events, and hence voluntarily deprives oneself of the support of inductive investigation and empirical arguments, in order to reflect (not "introspect") upon consciousness. A singular structure of consciousness that this epoche reveals to reflection is precisely what Lurie terms "mental states," that is, in Husserl's parlance, a subject's intentional processes within a flux of consciousness which is always "consciousness-of ... (something)." These intentional processes, or intentional Erlebnisse, while not separable, allow us to attend in thought to various distinguishable types, including types expressible as 'believing that p,' 'hoping that p,' 'remembering that p,' etc.

Perhaps the most striking thing about these types of intentional Erlebnisse occurring in the flux of consciousness is that they exemplify a complex set of universal and necessary features of an interesting (non-trivial) sort which (to avoid certain connotations traditionally attached to the terms 'essence' and 'essential') Husserl often calls by the
less common term, "eidetic" features. For the limited purpose of the present discussion, only a few of these "eidetic" features exhibited by intentional consciousness need be noted. First of all, the eidetic features of "mental states" or "intentional Erlebnisse" can be seen to contrast necessarily and universally with eidetic features of perceptible objects in certain ways. Thus, analyzing the structure of the intentional object of perceptual experience, just as this object is intended, one discovers what Husserl terms a "substratum, X," a discrete unit which (requiring, in some sense, careful description) necessarily bears certain "properties," and which necessarily remains the same or changes in its appearance, over a period of time, according to its causal relations with still other discrete unities surrounding it. What X is, therefore, amounts to how it behaves under the causal influences of other X's. It is a transcendent "something" that behaves as a stone behaves, or as a piece of cheese behaves.

By contrast, unlike the intentional object of perception, the "mental state" is not a "substance" which can come to be known through its observable "behavior." It is not, therefore--as we ordinarily use the term--an "it," a discrete "X" among other X's, at all. Rather, the intentional Erlebnis is inherently temporal and changing, though not in the sense that a discrete perceptible thing may change its "properties" over a measurable span of time. Rather, being "temporally spread out" is intrinsic to the very character of Erlebnisse. Consequently, to borrow Lurie's useful terminology, there is nowhere discernible in consciousness a discrete intentional act, a "core" to the Erlebnis of, say: I-believing-that the library will be closed (p. 42). A fortiori, there is no particular act in consciousness which is available for nomination as the best candidate for a possible empirical correlation with, say, one or more brain states. Rather, intentional consciousness is essentially a flux of temporally "retentional" and "protentional" phenomena, a flux which can only be read off abstractively as exhibiting a variety of certain typical structures. For example, I can distinguish the abstract structure of remembering as contrasted to the abstract structure of perceiving, of judging, of believing, and so on. But I cannot tag any single, discrete belief-act, which could then be declared essential to my believing that the library will be closed on Monday. Rather--much as Lurie says--I come concretely upon a flux comprising an innumerable "set" of phenomena. It is a kaleidoscope--or better, a continuous cinema--exemplifying a complex network of eidetic structures. Thus, it seems to me, Husserl could only echo Lurie's thesis that "the" belief in question "is complex, not discrete," that it "has neither a central core to it nor clearly drawn boundaries"; that it "consists, in each of [the] different cases, of a different set of an undeterminable number of different beliefs" (p. 42); and that, as a result, the correspondence hypothesis simply makes no sense.


54 Forrest Williams

This state of affairs regarding intentional consciousness, Husserl would contend, is a matter of eidetic necessity, not of empirical conjecture. It thereby determines in advance of any empirical research that certain questions cannot be meaningfully posed. The correspondence hypothesis turns out to be, on phenomenological grounds, just as Lurie has said, a non-sensical hypothesis [5].

V

Husserl would not say, though, that the impossibility of the psychophysical hypothesis is a logical impossibility. We certainly need that notion to cover verbally contradictory hypotheses, such as those concerning "married bachelors." The impossibility rests, rather, upon certain entirely necessary and universal principles which concern structural features of consciousness and its object, and which therefore are non-trivial principles. These eidetic structures are, I think, sui generis. They are not reducible to purely logical constraints upon thought, and their violation, consequently, is not a mere violation of logic. If we may say that empirical generalizations are "meaningless" when they are so couched that they cannot in principle be verified by any observations; and that purely formal propositions of logic are "meaningless" if they are contradictory; then we might say that certain hypotheses--such as certain hypotheses referring to mental states--are "meaningless" in yet a third sense: they violate certain eidetic truths about consciousness. Hence there results from such hypotheses a fundamental "incoherence" that is far from obvious, and that is not disclosed either by empirical observation or by logical analysis. The phenomenological objection to the correspondence hypothesis, in consequence, is that it is eidetically meaningless, in the sense that Husserl gave to that notion.

Much the same affinity between a phenomenological assessment and Lurie's assessment could be found, I think, with respect to his important remark that "the scope of the empirical background against which people in fact acquire such beliefs, [is] not determinable." (p. 40) Here, perhaps Martin Heidegger's work in Being and Time on the essential structure of Dasein--its "In-der-Welt-sein" structure--and Husserl's Krisis, would be especially relevant. [6]

In any event, these phenomenologically-oriented remarks may be sufficient to indicate how I would interpret Lurie's contention, with which I entirely agree, that the hypothesis of psychophysical correspondence is not just empirically false, but does not even make sense.


Commentary

--v--

The point of agreement between Lurie (the analytic philosopher) and Williams (the phenomenologist) is highlighted by the latter's contention that any hypothesis which asserts a correspondence between brain states and conscious mental experience violates eidetic truths about consciousness. Due primarily to the expanse of conceptual and empirical background material in the context of which one acquires a given belief or desire, there are limits within which such hypotheses must be constrained. In particular, as both Lurie and Williams argue, beliefs and other intentional phenomena are never isolated, discrete mental phenomena, but are always "nested" within a flux of mental phenomena that intricately cross-reference one another in ways that are mutually influencing.

The flux of psychological phenomena is of indeterminable scope, for its horizons expand continually as one moves to fix its perimeters. Further complicating its character, Williams explains, is the fact that this flux of subjectivity is subtended by a temporal arc sustained by retentional and protentional mental phenomena which are integrally related to the present moment of conscious life. Thus whatever one might attempt to designate as a "mental state" is already situated in, and sustained by, this temporally structured flux of mental phenomena.

To propose a hypothesis about mental states that overlooks this eidetic feature of consciousness is to propose something that is, in the final analysis, meaningless in a special sense of the term. Lurie has argued that the correspondence hypothesis is meaningless in a logical way. Williams strengthens this claim on the basis of Husserl's thesis that there are (to quote Williams) "certain entirely necessary and universal principles which concern structural features of consciousness and its object" (this volume, p. 54) which cannot be explained on the basis of the correspondence hypothesis regarding relations between brain activity and the qualitative experiences common to all conscious beings. On the basis of this line of reasoning, Williams concludes that the correspondence hypothesis is "eidetically" meaningless insofar as it violates certain "eidetic truths" about consciousness. What we need, he proposes, is a hypothesis about minds and brains that begins from the premise that a "mental state" is more like an "orchestral suite" than a "concerto." (p. 51)

Indeed, we might do well to drop the notion of "mental state" altogether, for if Lurie and Williams are correct, the life of mind is a holistic phenomenon that involves an intricate network of concomitant features. Most of these features happen to function in the "background" of our mental life, sustaining those which happen, at a given time, to be operating on the "surface" of consciousness. This, of course, leaves us with a new puzzle: how does "background" meaning function so as to sustain and augment the meaning that is evident to us on the surface of consciousness?


56 Chapter One

1.3 Representation

The following essay, by Ronald McIntyre, emphasizes the role of intentionality as an intrinsic feature of mental states and psychological phenomena. Drawing on the phenomenology of Edmund Husserl, Professor McIntyre argues that an adequate analysis of the mind should be ontologically neutral with respect to the "seat" of consciousness (be it a computer or a brain), and with respect to the relation that might hold between consciousness and a reality other than the mind within which this consciousness is manifest. He compares this starting point to Putnam's (and Fodor's) "methodological solipsism," and then sets out to analyze the positions of Fodor and Husserl with respect to the nature of mental representation. McIntyre proposes that we investigate the semantic relation that connects the mind to its world, without doing violence to the intentional character of mental experience, and without imposing a theory of causal mechanisms (which he claims would violate the stipulation of ontological neutrality).

This proposal is quite antithetical to the one proposed earlier by Professor Rey, and appears to hinge on the very assumption questioned by Rey. McIntyre's analysis seems also to hinge on the thesis that meaning, as an intrinsic property of mental states, is not reducible to the formal syntax that otherwise underlies our cognitive capacities. On the other hand, McIntyre seems to leave open the possibility that mental states can be individuated by their meaning-contents. If this is in fact a part of Husserl's position, as Hubert Dreyfus and others have claimed, then perhaps adherents to Lurie's position might argue for a refinement of Husserl's theory of intentionality more consistent with the thesis that psychological phenomena (particularly intentional states of mind) are incapable of individuation. Finally, there is the central issue of McIntyre's paper, which involves questions about the nature of mental and linguistic reference. Here we must be sensitive to the analysis of semantic content offered by McIntyre (in defense of Husserl's "internalist" theory of mental representation). As McIntyre illustrates in his paper, Husserl's analysis of semantic reference differs considerably, though in subtle ways, from the theory proposed by functionalists like Jerry Fodor, despite other similarities in their research programs.


RONALD McINTYRE

Husserl and the Representational Theory of Mind

Husserl has finally begun to be recognized as the precursor of current interest in intentionality--the first to have a general theory of the role of mental representation in the philosophy of language and mind. As the first thinker to put directedness of mental representations at the center of his philosophy, he is also beginning to emerge as the father of current research in cognitive psychology and artificial intelligence.

So writes Dreyfus in his introduction to Husserl, Intentionality and Cognitive Science. [1] These provocative comments launch a most interesting discussion of Husserl's relationship to important recent work in philosophy of mind, especially that of Fodor and Searle. If Dreyfus is right, Husserl himself is the author of a proto-Fodorian theory of mental representations, and the tasks he conceived for transcendental phenomenology anticipate modern-day research projects in artificial intelligence and cognitive science. But Dreyfus is a critic of such efforts: indeed, he believes that Heidegger's reasons for rejecting the very possibility of transcendental phenomenology are basically right. Thus, his ultimate goal in comparing Husserl with "modern mentalists" such as Fodor is to show that both can be tarred with the same brush.

In this paper I shall be reexamining these comparisons from a standpoint that is more sympathetic toward Husserl and that attempts to be more neutral toward contemporary "representational" theories of mind. I have discussed Searle's views in relation to Husserl's elsewhere [2], and so my focus here will be on Fodor and Husserl. As a contributor to the Dreyfus anthology and an advocate of the general line of Husserl interpretation represented in it, I am interested in dissociating that interpretation from Dreyfus' strong computationalist reading of Husserl. [3] However, I agree with Dreyfus that there are some remarkable points of agreement between Husserl and contemporary representationalists; my strategy will be first to push these as far as I plausibly can (or perhaps even a bit further in some instances) and only then to draw out the points of disagreement. By doing so, I hope not only to sharpen these points of agreement and disagreement but also to show where and how Husserl's views on meaning and intentionality would suggest modifications in the representational approach to an understanding of mind.

Although I shall disagree with Dreyfus' characterization of Husserl as an advocate of a formalist or computationalist type of cognitivism, then, I am also concerned to show that Husserl and contemporary cognitivists share much common ground. In particular, I shall argue, Fodor and Husserl share a methodological principle that marks them both as opponents of "naturalistic" psychology, and Fodor seeks an understanding of the nature of mind that shares some of the goals of Husserl's "transcendental" phenomenology. Furthermore (on the interpretation I favor, at any rate), Husserl's noematic Sinne can be seen--up to a point--as a version of what Fodor calls "mental representations", having both formal (or "syntactic") and representational (or "semantic") properties and so forming a kind of "language of thought." Nonetheless, I shall argue, Husserl differs in important ways from Fodor and other contemporary representationalists on each of these points. These differences culminate in an importantly different conception of the intentional, or representational, character of mind and the role of meaning in our mental life.

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 57-76. © 1988 by D. Reidel Publishing Company.

1. Methodological Solipsism and Phenomenological Epoche

In his much discussed article (1980), Fodor endorses a thesis Putnam first called "methodological solipsism". As described by Putnam, methodological solipsism is "the assumption that no psychological state, properly so-called, presupposes the existence of any individual other than the subject to whom that state is ascribed." [4] Fodor characterizes it, somewhat more broadly, as the "Cartesian" view that "there is an important sense in which how the world is makes no difference to one's mental states." [5] Although many mental states are intentional and so stand for or represent things as being external to the mind, these mental states themselves--on this assumption--have a kind of intrinsic character of their own, which is just as it is even if there actually exists no mind-independent world at all. And if that is so, then a theory of mind per se--one designed to effect an understanding of this intrinsic character of mental states--ought to be one that even a consistent solipsist could accept. The point is not to affirm solipsism, of course, but to proceed as though it were true, so that the resulting account of mind presupposes nothing about the natural (especially causal) relations between the mind and its actual environment or anything else about the "natural" setting in which minds are embedded.

Now, Husserl's methodology, which he calls "phenomenological reduction", takes its departure from this very thesis about the independence of mind from "natural" reality. "No real being," he says, "is essential for the being of consciousness itself." [6] Hence, Husserl's version of methodological solipsism: "Let us imagine ... the whole of nature, physical nature above all, 'annihilated' ... My consciousness, however much its constituent experiences would be changed, would remain an absolute stream of experience with its own essence." [7] Indeed, Husserl thinks, we each have a kind of first-person knowledge of the intrinsic features essential to mind (or "consciousness", as he prefers to say) that is independent of the truth or falsity of our beliefs about the world. And so he thinks that a properly philosophical (or phenomenological) account of mind should be consistent with what he calls "epoche" or "bracketing"--i.e., that it should appeal only to internal features of mind that we know after an epoche, a suspension, of all our beliefs about extra-mental reality.

The fact that methodological solipsism, or epoche, is so controversial, with decriers ranging all the way from Heidegger to Wittgenstein to Skinner to Putnam to Dreyfus, makes its endorsement by both Fodor and Husserl a significant point of agreement. However, it should be noted that even its contemporary proponents disagree about just where it leads, and Fodor and Husserl endorse it for rather different reasons.

Fodor is interested in "mental causation", the causal role that mental states play in behavior. And as he observes, this role often seems more dependent on how the world is represented to us in our mental states than on how the world actually is. For example, Oedipus' desire for Jocasta produced radically different kinds of behavior, first courtship and later self-directed violence. Why? Not because of any significant change in Jocasta--she was his mother all along--but because of a change in how Jocasta was represented to Oedipus in his mind. This reason for endorsing methodological solipsism is further reinforced by Fodor's commitment to a computational account of mental processes and mental causation. "Computations" are operations on formal, or syntactic, elements internal to a system, and so these operations and the behaviors they produce are independent of any relationship those elements bear to the rest of the world. Accordingly, he notes, computationalism has no chance of being a true theory of mind unless the assumption embodied in methodological solipsism is true.

Husserl emphasizes two different considerations. First, the representational character, or intentionality, of mental states itself displays a certain independence from the reality of what is represented. Thus, a mental state may represent or be "directed toward" an object or state of affairs that does not actually exist at all; and, where what is represented does exist, the properties it is represented as having need not coincide with those it actually has. There is a crucial difference between Husserl and the computationalists here, as we shall see. Husserl's other main consideration is epistemological: what we know about the representation of reality in our mental states is epistemologically prior to what we know about the nature of reality itself, since we have no access to reality except via our mental representations of it. Thus, Husserl thinks, a philosophical understanding of the foundations of beliefs about natural reality must ultimately derive from a study of mental representation, and so that study itself cannot, on pain of circularity, be dependent on the truth of those beliefs. This view leads Husserl to a more radical version of methodological solipsism than described by contemporary representationalists and results in his "transcendental" version of phenomenology.


2. Functionalism, Computationalism, and Transcendental Phenomenology

I shall pass over some large differences between Fodor and Husserl, but I do not mean to suggest them to be trivial. For example, Husserl believes that epoche, the suspension of our naturalistic beliefs, can almost immediately deliver up the data for proper philosophizing if it is properly carried through and followed by a special kind of introspection, or "reflection", on the contents of one's own consciousness. He takes this phenomenological reflection to be indubitably reliable, and the pronouncements issuing from it are not mere speculative or inductive generalizations but necessary or "eidetic" truths about consciousness. Claims such as these mark radical differences between the methods Husserl characterizes as uniquely phenomenological and those employed by contemporary cognitivists.

A second difference is perhaps less radical than it first appears. Methodological solipsism, as Putnam described it, assumes the existence of no individual except "the subject" of the mental states in question. But what is this "subject"? Husserl characterizes transcendental phenomenology as the study of "transcendentally purified" or "absolute" experiences of the "transcendental ego", as opposed to "real" or "empirical" experiences of the "psychological" or "empirical ego". Such distinctions suggest a heavy dose of metaphysics which Fodor and many other contemporary philosophers would be most loath to swallow. Appearances notwithstanding, however, I want to argue that there is a major point of agreement between Fodor (and contemporary representationalists generally) and Husserl here.

The point of agreement is this: neither Fodor nor Husserl--neither cognitive science nor transcendental phenomenology--claims to offer a naturalistic theory about how mental processing actually takes place in human minds or brains. Rather, the goal of each is to find abstract general analyses of what is involved in various kinds of mental activities, analyses that apply with equal validity to any sort of entity capable of that kind of mental activity, no matter what its actual physical make-up and no matter what physical processes actually enable it so to perform.

For Fodor and proponents of artificial intelligence this point should be readily apparent. Their claim is not that human minds or brains are physically like inorganic computers or that the processes in which human thought is carried out are physically similar to those involved in computer processing; rather, they claim that the same "play-by-play accounts" (as Cummins calls them [8]) are descriptive of both certain mental capacities of humans and certain information processing capacities of computers. Research in artificial intelligence is concerned with finding these "play-by-play accounts", articulated in flow charts or computer programs, and it deals with these abstract objects rather than with the specific physical make-up of the hardware that may "instantiate" them. As such, these research efforts exploit the ontological neutrality characteristic of functionalist theories of mind. What is essential to mentality, functionalism says, is not the kind of substance that is capable of having mental states, but certain sorts of logical or structural (standard functionalism says causal) relationships of a mental state to others and to sensory "inputs" and behavioral "outputs". As Fodor says:

Functionalism, which seeks to provide a philosophical account of this level of abstraction, recognizes the possibility that systems as diverse as human beings, calculating machines and disembodied spirits could all have mental states. In the functionalist view the psychology of a system depends not on the stuff it is made of (living cells, metal or spiritual energy) but on how the stuff is put together. [9]

For this reason, functionalists are widely given credit for having made a major advance over both behaviorism and physicalism as well as dualism. But Husserl explicitly articulates such an "ontologically neutral" approach to the understanding of mind that predates functionalism by a half-century. He sees that a consistent anti-naturalism in fact requires it, for naturalism includes not only beliefs about individuals "other than the subject" but also beliefs about the subject herself, insofar as subjects are psycho-physical natural organisms in causal contact with other things and occupying the very same world of nature as they. He accordingly urges that the method of epoche, if rigorously applied, must yield an account of mind that is independent of the truth or falsity of all our naturalistic beliefs, including these beliefs about the actual psychological or physical nature of human subjects themselves. Thus, with the phenomenologist's imagined "annihilation" of nature, Husserl says in a passage I earlier quoted elliptically, "there would be no more animate organisms and therefore no more human beings. I as a human being would be no more ... But my consciousness ... would remain an absolute stream of experience with its own essence." [10] Husserl's phenomenological descriptions of this remaining "consciousness" and its "absolute" experiences are therefore not intended as naturalistic accounts of the "empirical ego", the ego as naturally embodied in us, or of its experiences as "real" psychological or physical processes. Rather, they are intended as distinctively philosophical accounts of "transcendental" features of mind: transcendental inasmuch as those features constitute mentality itself (at least of the sort we humans have), no matter how they are in fact actually realized in us or in whatever other beings they are. It is the subject of experience thus transcendentally described that Husserl
calls the "transcendental ego", and its mental states or experiences understood at this level of abstraction constitute what he terms "pure" or "absolute" experience.

Thus, as Smith and I have argued [11], Husserl's doctrine of the transcendental ego and its pure experiences is primarily a methodological or an epistemological, rather than a metaphysical, doctrine. It is not the view that there is a second ego standing behind and manipulating the activities of the empirical ego; rather, it is the doctrine that there is an ontologically neutral level of description of the ego and its activities that is methodologically independent of any natural description of what the ego and its experiences are in fact like. [12] Like the functionalists and the computationalists, then, Husserl seeks abstract accounts that would capture what is common to various mental capacities, no matter how different in their natural make-up the entities having these capacities may be. In a passage written in 1925 (an especially telling passage, because Husserl is here explaining with approval the aims of his Logical Investigations, written 25 years earlier), he explicitly says just this:

... Whenever something like numbers, mathematical manifolds, propositions, theories, etc. ... come ... to be objects of consciousness in subjective experiences, the requisite experiences must have their essentially necessary, everywhere identical, structure. In other words, whether we take us men as thinking subjects, or whether we imagine angels or devils or gods, etc., any sort of beings that count, compute, do mathematics--the counting, mathematising internal doing ... is, if the logical-mathematical is to result from it, in a priori necessity everywhere essentially the same ... A realm of unconditionally necessary and universal truths [describes] the ... psychic life of any subject at all insofar as it is to be thought, purely ideally, as a subject that knows in itself the mathematical ... The same holds [not only for mathematics but] for all investigations of psychic correlations relating to objects of every region and category ... Precisely thereby a novel idea of psychology is presented ... Instead of the fact of human subjects of this earth and world, this psychology deals ... with ideal essences of any mathematising and, more generally, of any knowing subjectivity at all. [13]

Terminology and unconditional necessity aside, one can see that Husserl's emphasis is not on ontological embodiment, but on an "everywhere identical structure" that he takes to be exemplified in similar experiences. And it is just this emphasis that I claim he shares with the functionalists.

Of course, Husserl cannot himself be a functionalist of the standard "causal-role" sort, i.e., he cannot explicate mental states in terms of their causal relations to one another and to the world, for causality (in any naturalistic sense) is "bracketed" by phenomenological epoche. But the computationalist version of functionalism also abstracts away from causal relations among mental states, turning instead to certain inferential relations among mental representations as a way of accounting for these causal relationships. However, we should not be too quick to assume that Husserl must, therefore, be a computationalist: that depends on whether these "everywhere identical structures" are to be articulated in computational terms. And on that issue, Husserl's remarks just a few pages later ought at least to give us pause:

Since we have all formed the concept of a priori science in mathematics ... we tend understandably to regard any a priori science at all as something like a mathematics; a priori psychology, therefore, as a mathematics of the mind. But here we must be on our guard ... By no means does this type pertain to every kind of a priori.

The psychic province ... is a completely different essential type ... By no means is the entire science of the type of a mathematics. [14]

3. Mental Representations and Noematic Meanings

So far I have argued that Husserl and Fodor are in basic agreement on two key points: that mental states have an intrinsic character of their own that can be explicated without reference to extra-mental things, and that what is essentially mental in this intrinsic character is properly explicated at an ontologically neutral level of abstraction. Fodor advocates computationalism as compatible with these two claims, and he characterizes it as but a special case of a more general theory he calls the "Representational Theory of Mind". By "Representational Theory of Mind" (abbreviated 'RTM'), Fodor means a theory that attempts to explain important features of mind by appeal to a system of internal "mental representations". Whether Husserl is sympathetic to computationalism or not, he shares a great deal with Fodor if he, too, is an advocate of RTM.

According to RTM (see Fodor, 1980), each mental state is essentially a relation to a mental representation, which (purportedly) stands for or represents some, usually extra-mental, thing or state-of-affairs. Representational relations between the mind and the extra-mental world are thus "mediated" relations: each is a composition of the relation between the mental state and its associated mental representation and the relation (if any) between that mental representation and an appropriate extra-mental item. But RTM also holds that there are relations among mental representations themselves. Mental processes, naturalistically speaking, are causal relations among mental states; however, according to RTM, these causal relations are mirrored in the relations that obtain among the mental representations corresponding to these causally related mental states. Thus, at the "transcendental" level of abstraction, mental processes can be explicated in terms of the relations that obtain among their associated mental representations. (And if these relations are computational, they can be captured in appropriately devised computer programs: hence, artificial intelligence.) For adherents of RTM, methodological solipsism is an invitation to ignore mind-to-world relations and to focus instead on this system of mental representations and the relations among them.

Now, there is certainly at least a structural similarity between this description of RTM and Husserl's approach to the intentionality of mind. According to Husserl, each mental state is essentially a relation to an entity he calls a "noema", one component of which--called the "noematic Sinn"--(purportedly) stands for or represents a thing or state of affairs, usually something extra-mental. [15] Intentionality, or representation, is again a "mediated" affair: a mental state represents an object only "via" its noema. (N.b.: This isn't to say that noemata are the immediate objects toward which mental states are directed. Husserl insists that represented or "intended" objects of mental states are ordinary sorts of entities; the noema is introduced to explain how mental states come to represent these ordinary things.) He also holds that mental processes can be explicated as relations among noemata themselves: indeed, that explication is precisely the task of transcendental phenomenology. For him, phenomenological epoche is an invitation to ignore the de facto relations of mind to the world and to focus instead on these noemata and the relations among them.

In fact, Husserl's views can be pushed even closer to Fodor's than this. Fodor characterizes the system of mental representations for an individual person as a "language of thought", and--with some important differences--this is also an apt description of Husserl's conception of noemata. Fodor believes that mental representations have both "syntactic" and "semantic" properties, in the same sense that the elements of a natural language do. Sentences, for example, differ from one another in "shape" as the words they comprise are different and/or differently arranged: thus, 'Marvin is melancholy' is syntactically different from 'Marvin is happy'. Similarly, Fodor holds, the belief that Marvin is melancholy and the belief that Marvin is happy are relations to mental representations that differ in syntactic structure. But expressions in natural language also have semantic properties, paradigmatically meaning, reference, and truth-value. Fodor conceives mental representations as having these same kinds of properties. Thus, to believe that Marvin is melancholy is to be related to a mental representation that "stands for" Marvin, that "represents" him "as" melancholy, and that is "true" or "veridical" if and only if Marvin is melancholy. The relations among mental representations that explicate mental processes, Fodor therefore holds, are the same sort of syntactic and semantic relations that obtain among sentences. Finally, Fodor believes, the syntactic and semantic properties of natural languages are inherited from their more fundamental counterparts in systems of mental representations. [16] Natural language is in this sense the "expression of thought": the translation of the medium of thought, mental representations, into a public medium of linguistic communication.


Husserl and the Representational Theory of Mind

Fodor's mental representations, then, are mental symbols, complex sentence-like combinations of simpler word-like elements, having meaning and truth-value (and presumably tokened in the brain, in a way that some as yet untold naturalistic story will eventually explain). Now, Husserl's noematic Sinne are not mental symbols in this sense. That is, they are not word-like or sentence-like entities that have meanings; rather, noematic Sinne are meanings (hence, "Sinne"). But despite this important point, to which I shall return, noematic Sinne are like Fodorian mental representations in several significant respects. These similarities derive from the fact that noemata, too, are conceived in analogy with language.

Just as speech, for example, consists in temporal sequences of meaningful sounds, so thinking (or any mental process), on Husserl's view, consists in temporal sequences of meaningful mental states or events. Indeed, Husserl thinks, the meanings we express in speech or writing are essentially the same entities--noematic Sinne--that make meaningful mental episodes possible: the purpose of language is to express what is "in our minds", so that others may represent to themselves the same object we have in mind and in the same way; and in order for that to take place, he thinks, the meanings we express must be the very same noematic meanings that determine the representational character of these "thoughts". [17] Hence, while Husserl also holds that "language is the expression of thought", his version of this thesis differs from Fodor's. On Fodor's version, one might say, we think in mental "words" that get translated into a public language when we speak or write, while on Husserl's version we think in "meanings" that get expressed in a public language.

According to Husserl, then, the meanings of expressions in a natural language are derivative from their more fundamental counterparts in systems of noemata. Given this view of the relation between noemata and linguistic meanings, it is not surprising that Husserl thinks of noemata as having syntactic and semantic properties. Frege, for example, holds linguistic meanings to be syntactically structured abstract entities: just as a sentence consists of syntactically distinct parts put together in syntactically permissible ways, he thinks, so the proposition expressed by that sentence consists of correspondingly distinct meanings put together in correspondingly permissible ways. Husserl similarly thinks of noematic Sinne as structured abstract entities, mirroring the syntactic structures of the linguistic expressions that would express them. To think "this is white", for example, is to be related to a noematic Sinn structured into two distinct meaning-components: an "X"-component (as he calls it) expressed by 'this', representing the object being thought about, and a "predicate-sense" expressed by 'is white', which represents the property predicated of the object as represented. [18] The semantic, especially referential, properties of noematic Sinne are similarly reminiscent of Frege: like Frege, Husserl holds that meaning determines reference. Thus, he thinks, the representational or intentional properties of a mental state are determined by its noematic Sinn. I will be discussing this point in the next section.

There are good grounds for construing Husserl's noematic Sinne as a version of what Fodor calls "mental representations" and so taking Husserl as an early advocate of the Representational Theory of Mind. Noematic Sinne constitute for Husserl a "medium" in which mental processes take place; this medium is syntactically and semantically characterizable and thus fundamentally language-like; mental states represent extra-mental things by virtue of how these noematic Sinne relate to the extra-mental world; and mental processes can be understood, at an ontologically neutral level of abstraction, in terms of relations among these noematic Sinne, independent of the actual relations that obtain between mental states and the extra-mental world. Mental representations, at least prima facie, play these same roles for Fodor.

On the other hand, Husserl's view of noemata as meanings rather than as symbols having meaning may be enough to show that they cannot be characterized as mental representations in any legitimate Fodorian sense. If so, Husserl is not an advocate of RTM at all. However, I think such an easy dismissal would only hide deeper differences between Husserl and Fodor. After all, it seems trivially easy to modify Husserl's theory into a genuine version of RTM: simply postulate a system of truly Fodorian mental representations and let noematic Sinne be the meanings of these mental representations rather than of mental states themselves. (The result would conform to what Bach characterizes as "conceptual" rather than exclusively "formal" methodological solipsism and would be a version of what Stich calls "strong RTM" [19].) What I hope to show is that the resulting version of RTM would still--so long as the meanings of mental representations are noematic Sinne as Husserl conceived them--be radically different from contemporary, especially computationalist, versions. Part of the reason why this is so comes out in an argument Husserl himself gives against the Fodorian view that symbols or "signs" mediate the relations between mental states and their objects. A symbol functions as a symbol, Husserl notes, only by virtue of being itself the object of a mental state, in which it is apprehended (interpreted, represented) as representing something other than itself. Thus, that apprehension would have to be via a second symbol that represents the first, and so on ad infinitum. The "sign-theory", Husserl says, fails to explain mental representation, and for the very same reason that the traditional "image-theory" of ideas cannot. [20] The comparison is interesting because Fodor also, for a different reason, rejects the "image-theory" in favor of a computationalist version of the "sign-theory". What saves this latter theory from Husserl's objection is that computationalism is designed precisely to show how mental symbols do their work without functioning as symbols, i.e., independently of their semantic or representational properties. Husserl's rejection of mental symbols in favor of noematic Sinne is based on the very opposite view: that the meanings of these symbols, not just the symbols or their syntactic features, would have to do the work of explaining mental representation. What is seriously at issue between Husserl and computationalists is the notion of meaning itself and its role in mental representation.

4. Meaning, Intentionality, and Mental Representation

The Representational Theory of Mind, as Fodor characterizes it, is but a framework--albeit a rather specific and controversial one--for discussing traditional problems about mind. In this section I shall discuss the problem of mental representation, or intentionality, itself within this framework. Dreyfus assumes the contemporary notion of mental representation is just an updated version of the Husserlian notion of intentionality, but this identification is by no means self-evident: the problem of intentionality as Husserl conceived it and the problem of mental representation currently so-called seem to be radically different problems.

For this discussion, let us assume that noematic Sinne can, despite the qualifications we have already noted, be characterized as mental representations and that, for both Fodor and Husserl, the problem of mental representation is a matter of the "semantics" of mental representations. The problem is subject to various possible solutions: there are numerous approaches to the semantics of linguistic representation and the number of approaches to the semantics of mental representation is surely no smaller.

In Fodor (1980) there is an account of mental representation modeled on the so-called "causal theory" of linguistic reference. According to that theory, the fundamental relation between language and the world is causal: for each (actually referring) name, there are complicated causal chains connecting its various occasions of use to some unique item in the world, that item being thereby the "referent" of the name; other forms of reference (e.g., the reference of definite descriptions) are derivative from such causal forms. Similarly, Fodor sees the representational or "referential" properties of mental representations as causal relations: "what makes my thought about Robin Roberts [for example] a thought about Robin Roberts," he says, "is some causal connection between the two of us." [21] That is, a mental state is "about" Robin Roberts just in case it is related to a mental representation that itself stands in an appropriate causal relation to Robin Roberts himself; this causal relation is the "semantic" relation--the relation of representation or intentionality--that relates the mental representation to that which it represents. Fodor gives few details about how this theory is to work, and (his more esoteric reasons aside) it is easy to see why. Since world-to-mind causality cannot be explicated independently of how the world is, the account itself is incompatible with a thorough-going endorsement of methodological solipsism.

One would suppose this result to show either that methodological solipsism cannot provide an adequate theory of mind (since it cannot account for intentionality) or that the causal account of intentionality is incorrect (since it is incompatible with methodological solipsism), but Fodor draws a different conclusion: because mental representation, so understood, falls outside the realm of what can be explicated by the methodological solipsist, mental representation itself is not a strictly "mental" feature of mental states. If Fodor is right, mental representation is no proper concern of the Representational Theory of Mind! This rather odd result for Fodor the representationalist and methodological solipsist pays off for Fodor the computationalist, though. Computing machines can make no use of the representational properties of the symbols they employ, but--on this view--they are not thereby deficient in anything essentially "mental".

This result, if unmodified, contrasts sharply with Husserl's views on intentionality. The methodological solipsist, like the practitioner of phenomenological "epoche", "brackets"--makes no use of--anything extra-mental: the world of nature, our minds conceived as natural entities, and the causal relations between them are "bracketed" by this methodology. For Fodor, this means that mental representation itself, being a causal relation, is included in these "bracketed" items. But Husserl, throughout his entire career, consistently maintained that intentionality is the primary feature his methodology of epoche is designed to explicate. To take but one, quite pointed, example:

If I perceive a house ... a relationship of consciousness is contained in the perceptual experience itself, and indeed a relation to the house perceived in it itself ... Of course there can be no talk of external-internal psychophysical causality if the house is a mere hallucination. But it is clear that the momentary experiencing is in itself not only a subjective experiencing but precisely a perceiving of this house. Therefore, descriptively, the object-relation belongs to the experiencing, whether the object actually exists or not. [22]

For Husserl, then, the intentionality of a mental state is a feature inherent in the mental state itself, independent of its de facto (especially its causal) relationships to extra-mental things or states-of-affairs. Let us be careful to note, however, that it is the "object-relation", and not the object, that "belongs to the experiencing".

In fact, though, Fodor's and Husserl's positions are not quite contradictory, due to an ambiguity in the notion of mental representation, or intentionality, itself. Smith has employed a useful distinction between the intentional, or representational, relation achieved in a mental state and the intentional, or representational, character of the mental state
itself. [23] Consider the following case. I peer under my bed, spot a coiled rope, scream "Snake!", and flee the room. Just what did I see? What was my visual representation a representation "of"? In one sense, of course, what I "saw" was a coiled rope, and in that sense my experience was "of" or "about" the rope: the rope was what my visual experience was actually related to, via a mental representation. In Smith's terminology, then, my experience was representationally related to a rope. And Fodor's causal story surely captures at least part of what is involved in this relation: the rope was related to my mental representation by virtue of being the distal stimulus that gave rise to it. In another sense, though, I "saw" a snake: I am generally disposed to fear snakes, not ropes, and I feared this object only because I took it to be a snake. Phenomenologically speaking, my visual experience had the intentional, or representational, character of being "of" or "about" a snake. And for that sense of representation Husserl's solipsistic story sounds right: my mental state had that representational character even though what it was actually related to was a rope, and it could have had that same character even if there had been no appropriate distal stimulus at all. Intentional, or representational, relations, then, concern the way mental states and mental representations actually "hook up" with the world; and of course Fodor is right in thinking that those relations (whether simply causal or not) are not independent of how the world is. But if Husserl is right, mental states and mental representations themselves have an intrinsic representational character, which makes them as though actually related to extra-mental things whether they are so or not.

The problem of intentionality for Husserl, then, is not to explain how mental states actually relate to the world but to explain how they have the phenomenological or "internal" character of relating to anything at all. Husserl "solves" this problem by appealing to a "semantics of reference" quite different from the causal account. A mental state is intentional in character by virtue of its relation to a noematic Sinn. How so? Because noematic Sinne are meanings and, Husserl apparently thinks, it is simply an intrinsic and irreducible (though not completely unanalyzable) property of meanings to represent. Husserl in fact holds a strong version of the familiar Fregean thesis that meaning determines reference. Speaking of linguistic meaning (which he calls 'Bedeutung') he says:

Reference to the object is constituted in the meaning [Bedeutung]. To use an expression meaningfully [mit Sinn], and to refer expressively to an object (to form a presentation of an object), are thus one and the same. It makes no difference whether the object exists or is fictitious or even impossible ... [24]

Thus, even in the absence of any actual referent, Husserl apparently thinks, the meaning of an expression not only makes it meaningful but gives it a referential character as well; and he takes just the same view of noematic Sinne and intentionality: "The phenomenological problem of the relation of consciousness to an objectivity has above all its noematic side. The noema in itself has an objective relation, specifically through its particular 'Sinn'". [25] In the final analysis, then, Husserl says of noematic Sinne essentially what Searle says of mental states themselves: they have "intrinsic", as opposed to "derived", intentionality (but, n.b., Searle's own account of intentionality explicitly rejects the invocation of meanings, especially as abstract entities). In Searle's view, mental states have "conditions of satisfaction" and so are intentional, whether any states of affairs actually "satisfy" them or not, simply because that is a fundamental property of the kind of entities that mental states are. [26] Meanings or noematic Sinne, similarly, are conceived by Husserl as intentional, not because of any relations they bear to anything else (e.g., not because they are "interpreted" by someone or caused in some particular way) but simply because they are a sort of entity whose very nature is to be representational. On this view, the noematic Sinn itself will not, of course, be the sole determinant of which object a mental state is actually related to (causally or otherwise), but its intentional character will determine which object it must be related to in order to be "satisfied".

If mental states really do have such an internal or phenomenological intentional character, then modern mentalists cannot simply give the problem of intentionality over to the extra-mental "natural" sciences. But it is also difficult to see how this problem of intentionality could be solved using only the functionalist or computationalist resources to which contemporary representationalists usually restrict themselves. Indeed, the problem here is not unlike the widely recognized one of accounting for phenomenal qualities, such as pain, in functional terms. It seems obvious to many that such phenomenal qualities are primitive features of mental states and so cannot be reduced to causal roles, computations, or to anything else. And, although Husserl's view is apparently much less obvious to most, he believes essentially the same is true of intentional character.

What I should like to do now is contrast Husserl's view of intrinsic intentional character with actual representationalist, especially computationalist, accounts of representational character. Unfortunately, however, contemporary representationalists seem not to consider the problem of intentional character, at least not in any direct way. Indeed, I suspect that computationalists are more wont to deny the existence of intentional character than they explicitly admit. For one thing, since it is very counter-intuitive to suppose that machine states or the symbols in computer programs have intrinsic intentional character, a deep commitment to the computer model of mind would surely tempt one to deny that mental states or mental representations have it either. (Thus the Churchlands argue, though not on behalf of computationalism, that "our own mental states are just as innocent of 'intrinsic intentionality' as are the states of any machine simulation." [27] But few computationalists are so candid.) In the second place, most contemporary representationalists have been deeply impressed by Putnam's famous "Twin-Earth" arguments. [28] These purport to refute the Fregean thesis that meaning determines reference and to show, more generally and contra Husserl, that nothing intrinsic to mental states can suffice to determine which object a mental state represents. Accepting that conclusion tempts one to decide that intentionality or mental representation is entirely an "external" matter and that the problem of "internal" representational or intentional character, which Husserl's appeal to noematic Sinne is supposed to solve, has simply disappeared. Clearly Putnam's arguments, and others like them, raise important issues, but let me suggest that these issues have not been conclusively resolved to Husserl's detriment. To cite but three examples, Bach, Searle, and Smith have offered independent accounts of how "indexical" mental contents can determine (or in Bach's case, partially determine) the object of mental states in Putnam-like cases. [29] Furthermore, however it is to be explained, there is a "mental side" to intentionality that is as much a "phenomenological fact" of our mental life as are consciousness and self-awareness; good philosophy demands that there be limits on the degree to which theory can do violence to these facts. In the next section, accordingly, I want to consider whether a computationalist or formalist theory of mind might yet be rendered compatible with intentionality as Husserl conceives it.

5. Was Husserl a Formalist?

Husserl never underestimated the richness and complexity of our mental life; hence, he characterized transcendental phenomenology--his attempt to explicate mental life--as "an infinite task". But he also never wavered from his conviction that this richness and complexity is, at bottom, understandable. Indeed, he thought, the very concept of consciousness as intentional, meaningful experience requires the imposition of some sort of rationale on what would otherwise be but an inchoate welter of meaningless sensations. It is the "noema" of a mental state or experience that places it within the context of such a rationale, by relating it, in rule-governed ways, to what Husserl calls a "horizon" of past experiences and future possible experiences of the same object or state of affairs.

For example (considerably simplified), suppose I see a particular object as a tree. The noematic Sinn of this experience includes the predicate-sense "tree", and it is by virtue of this sense that I perceive the object as a tree rather than something else. But, Husserl holds, this sense does not do its work of characterizing or prescribing the object in isolation from the rest of my mental repertoire. I believe that trees come in various varieties, that trees are physical objects and so are three-dimensional, and so on. Within the context of such beliefs, the sense "tree" foretokens or "predelineates" a range of further possible experiences in which the object before me would be characterized in further possible ways: as an oak or an elm, for example; as black or brown on the side now hidden from me; and so on. In this way, Husserl says, the Sinn relates the present experience and its object to an indeterminate or non-specific, and open-ended, horizon of possible experiences. But despite the indeterminacy of this horizon, Husserl believes, it has a rational, coherent structure: the Sinn of the present experience, in conjunction with the Sinne of relevant background beliefs, limits in rule-governed ways the kinds of further experiences that can belong to it. To understand this experience and its intentionality is ultimately to understand how its Sinn is related to the Sinne of these background beliefs and to the Sinne of its horizon of possible further experiences. [30]

To explicate an experience noematically or phenomenologically is, then, to uncover these relations among noematic Sinne and the rules that describe them, and so to unfold its inherent rationale. If Husserl's belief--that for every conceivable human experience there is such an internal rationale, independent of that experience's actual relations to the natural world--is a "cognitivist" belief, then without doubt Husserl is a cognitivist. But Dreyfus is not content to characterize Husserl as a cognitivist in the general sense I have just agreed to: he thinks Husserl was a formalist, and so at least an incipient computationalist:

Whether in fact Husserl held what Fodor calls the computational theory of mind--that is, whether according to Husserl ... the predicate-senses [in the noematic Sinn] do their job of representing objects ... and of unifying diverse experiences ... strictly on the basis of their shapes (i.e., as a syntactic system independent of any interpretation)--cannot be so easily determined. There is, however, considerable evidence ... that Husserl thought of the noemata as complex formal structures [and] there is no evidence which suggests that he ever thought of the rules he was concerned with as semantic. [31]

Dreyfus raises two issues here: do noematic Sinne represent objects and unify experiences strictly "as a syntactic system", and are the rules that describe mental states or mental processes purely non-"semantic"? Since I think, contra Dreyfus, that the first issue is "easily determined," let me turn to it first.

The advocate of computationalism who does not deny such notions as meaning and intentional character is free to explain them in terms of something more congenial to the formalist. What Fodor calls "functional-role semantics" [32], for example, attempts to explain at least some of the "semantic" properties of mental states in terms of their causal relations to other mental states (and to causal inputs and outputs). Although a critic of this effort, Fodor suggests that computationalism could make use of it by recognizing an isomorphism between the causal network of mental states and an appropriate network of purely formal or syntactic relations among mental representations. Any "functionally" explicable semantic properties of mental states or mental representations would then be, if not reducible to, at least replaceable by formal relations among mental representations. Accordingly, if intentional character were such a semantic property of mental representations, it too would be effectively explained in strictly syntactic or formalist terms: to understand the intentional character of a mental representation (or a noematic Sinn, if this were Husserl's view) would just be to understand its formal or syntactic relations to other mental representations (or noematic Sinne). Dreyfus seems to think it is at least debatable that Husserl held some view like this.

I have already agreed with Dreyfus that noematic Sinne have syntactic properties and so stand in certain formal relations to one another. And, as we just saw, Sinne "do their job of representing objects" only within the context of a network of other Sinne, the Sinne of mental states comprised by the horizon of the given experience. But that does not mean that their representational properties are reducible to the formal relations among the Sinne in this network. For one thing, Husserl always characterizes this network in terms of "semantic" relations among Sinne, i.e., in terms of their intentional character: he even defines the horizon as consisting of experiences directed toward the same object. [33] More importantly, since Husserl holds that intentional character is determined by meaning, this reductionist view of intentional character requires a most peculiar account of meaning. It combines the Husserl-Frege thesis that meaning determines reference or intentional character with the radically anti-Fregean view that meaning reduces to syntax. So far as I can tell, not even contemporary representationalists hold this mixture of views. [34] And I simply do not know of any passages in Husserl's writings that suggest he ever thought that meaning is in any way reducible to syntax. Not only are there powerful systematic considerations to the contrary; we have already seen a few of the many passages that argue for a quite different, Frege-like, theory of meaning. [35] Furthermore, Husserl himself sometimes explicitly addresses the question of whether meaning and intentional character can be reduced to relations among merely formal elements or meaningless "contents", always arguing that they cannot. For example, he rejects the "sensationalist" view of consciousness, the view that consciousness consists of nothing but sensations and complex relations among them. And on what grounds? On the grounds that sensations are "meaningless [sinnlos] in themselves" and so "could give forth no 'meaning' ['Sinn'], however they might be aggregated". [36] It is hard to believe that Husserl could offer this argument against sensationalism while also believing that the meaningfulness, and hence the intentional character, of noematic Sinne could be reduced to formal relations among them based solely on their shapes. And he in fact says, just a few pages later:

... Transcendental Phenomenology ... must come to consider experiences, not as so much dead matter, as "complexes of content", which merely are but signify nothing, mean nothing, as elements and complex-structures, as classes and subclasses ... [it must] instead master the in principle unique set of problems that experiences as intentional offer, and offer purely through their eidetic essence as "consciousness-of". [37]

Now, the fact that meaning and intentional character are not reducible to syntactic relations among formal structures also relates to the second issue Dreyfus raises, the issue of whether the "rules" that concern Husserl are "semantic". Indeed, insofar as "non-semantic" just means "formal", I find it hard to understand why Dreyfus thinks there is "no evidence" here. Husserl always describes these rules as rules for relating experiences on the basis of their intentional character, not on the basis of the "shapes" or the "formal structure" of their associated noemata. (For example, he says, each category of object "prescribes the rule for the way an object subordinate to it is to be brought to full determinacy with respect to meaning and mode of givenness." [38]) And Dreyfus offers no comment on such remarks of Husserl's as this: "Transcendental theories of constitution arise that, as non-formal, relate to any spatial things whatever ...." [39] As a mathematician and logician Husserl was quite familiar with the notion of a "formal" or "syntactic" theory; yet, he held that even the laws of logic apply to the phenomenological description of experience in ways that are not purely "syntactic". [40] And he in fact goes to some lengths to distinguish phenomenology, as an "eidetic" (or a priori) discipline, from formal eidetic disciplines such as mathematics. Consider:

Since the mathematical disciplines ... represent the idea of a scientific eidetic, it is at first a remote thought that there could be other kinds of eidetic disciplines, non-mathematical, fundamentally different in their whole theoretical type from familiar ones. Hence ... the attempt, immediately doomed to failure, to establish something like a mathematics of phenomena can mislead [one] into abandoning the very idea of a phenomenology. But that would be utterly wrong. [41] ... We start from the division of essences and essential sciences into material and formal. We can exclude the formal, and therewith the whole aggregate of formal-mathematical disciplines, since phenomenology obviously belongs to the material eidetic sciences. [42]

Transcendental phenomenology ... belongs ... to a fundamental class of eidetic sciences totally different from ... mathematical sciences. [43]

Just why is Husserl unwilling to consider phenomenology a "formal" science? There are probably many reasons, but let me suggest just one. I have been urging that a system of Husserlian mental representations would be one whose operations are carried out, not on their formal properties alone, but by virtue of their meaning and representational character as well. Syntax, on the other hand, is noted by Husserl as dealing only with pure "forms" obtained by abstracting away from all such meaningful "content". [44] Accordingly, even those kinds of operations that can be formalized and thus described syntactically--e.g., logical and mathematical operations--are not carried out syntactically in ordinary thought and experience. In a similar vein, Dretske (1985) has argued that even mathematical thinking, such as adding numbers, is not the same thing as manipulating formal symbols. The symbols being manipulated by a person who is adding must represent numbers, and so have meaning for that person, Dretske urges, and she must manipulate them as she does at least partly because they mean what they do for her. If the same manipulations are performed, but as purely formal operations on symbols that mean nothing to the system performing them, the performance is at best a simulation--not a true instance--of adding. I see Husserl's views on the formality issue as very much like these.

Of course, to deny that human thought and experience are purely formal or computational does not entail opposition to research in artificial intelligence. The position I attribute to Husserl, and it is also mine, does assert that artificial intelligence is "artificial" precisely because it is only formal and so devoid of what is truly "mental". If that is so, then computationalism is false as a theory of mind, and so is what Searle denounces as "strong AI"--the view that computers, and humans as well, are minded simply by virtue of their ability to do certain kinds of syntactic manipulations. [45] But that still leaves open the possibility of "weak AI" --artificial intelligence as the project of simulating various cognitive mental capacities by constructing formal analogs of them. Husserl himself, in the midst of the discussion from which I have been quoting, seems to leave open this very sort of possibility:

The pressing question is admittedly not answered thereby whether within the eidetic domain of [phenomenologically] reduced phenomena (either in the whole or in some part of it) there could not also be, alongside the descriptive, an idealizing procedure that substitutes pure and strict ideals for intuited data and that would even serve--as a counterpart to descriptive phenomenology--as the basic medium for a mathesis of experience. [46]


76 Chapter One

Dreyfus has shown that at least one major advance in artificial intelligence, Minsky's notion of "frames", turns on ideas first developed by Husserl--without the heuristic benefit of the computer. [47] I suspect there is more in Husserl's careful descriptions of experience that would help construct his anticipated "counterpart" to a science of the (real) mind. Nonetheless, phenomenology remained for Husserl a descriptive discipline, descriptive of intrinsically intentional experiences, as they are experienced.

--fJ--

On McIntyre's view, Fodor and Husserl both hold a representational theory of mind, but only Husserl's theory embodies an analysis of the role of representational content. This is because Husserl's theory defines "mental states" relative to their semantic content (or noematic Sinne), whereas Fodor's theory defines them relative to their formal or syntactic structure. Husserl's position, which McIntyre endorses, is based, therefore, on the thesis that meaning is an integral component of a mental state, and that this meaning is in fact a "representational content" that relates the mental state to something other than itself. It is said to "mediate" the relation between consciousness and experiential phenomena in a way that renders the representation of reality in our mental states "epistemologically prior" to what we know about the nature of reality itself.

This appears to raise an interesting rebuttal to Rey's "disturbing possibility" hypothesis. According to Husserl and McIntyre, the issue isn't whether we are deluded by intricate programming manifesting itself (to a self?) as the (illusory?) phenomenon of "self-directing awareness." Rather, the issue is mental reference. Hence, the challenge is not to expose the "disturbing possibility" hypothesis as harmless to a philosophy of mind that features reflections on the structure of consciousness; rather, the challenge is to show how intentionality works. For, if it were not for the intentional life of mind, the meaning inherent in subjective experience would be thoroughly non-existent.

The following commentary by Kathleen Emmett offers reasons why we should not be tempted by the Husserlian strategy. Professor Emmett is critical of the attempt by Husserl to generate a unified theory of meaning that accounts for both mental and linguistic reference. She argues for an "externalist" account of linguistic reference that is compatible with Fodor's thesis about mental states and their connection to the semantic content of experience (but which questions the plausibility of the thesis of mental representation). If we accept her analysis, since such an account is incompatible with the "internalist" account of semantic content proposed by Husserl, we undermine our rebuttal of Rey.


KATHLEEN EMMETT

Meaning and Mental Representation

Husserl and Fodor both accept a representational theory of mind (RTM). Both individuate mental states by their contents, which are provided by mental representations or noematic Sinn. Both adhere to methodological solipsism; mental states are theoretically isolated from environmental and social causes and effects. Their principled blindness to mind-world causal connections allies Husserl and Fodor in a common antipathy to "naturalistic psychology," which would insist that mental states cannot be identified without considering their causes and the contexts in which they occur.

The RTM is a quasi-linguistic theory of mind. Mental representations have both semantic and syntactic features. Taken as a system they are a language of thought. McIntyre has argued that Husserl and Fodor differ about the source of meanings of mental representations and he suggests that Husserl's view is more nearly correct. I agree with his interpretations of both Husserl and Fodor, but I do not share his conclusion. Husserl's view, and it is also Searle's, is that mental states refer to things in the world in the same way that sentences of natural language do. This assimilation of natural language and the language of thought seems to me mistaken. I restrict my case to the question of referring. My point will be that mental representations cannot achieve reference in the same way that expressions in ordinary language do refer. Furthermore the only available alternative for a theory of reference for mental representations would, if adopted, create an enormous gap between the semantics of natural language and those of mental representations, the closure of which was supposed to be one of the main advantages of the RTM.

As McIntyre sees it the point of contention between Husserl and Fodor is whether the semantic features of mental states--i.e., their capacity to represent, mean or refer--are internal or external to those states. Fodor holds an externalist view; the semantic properties of any given mental state consist in its relations to some extramental object or state of affairs. [1] What makes my thought that water is wet a thought about water is a causal relation between my mental representation and water. For Husserl the semantic features of a mental state are nonrelational intrinsic properties of that state. Mental state S represents water's being wet just in case S contains noematic Sinn N, and N is a representation of water's being wet. Mental states do not necessarily place us in a referential or causal relation to the environment. Their intentional contents "refer" and "represent" what they do even in the absence of any existing appropriate objects.

McIntyre's point can be expressed in terms of a distinction between the intentional character of a mental state and its intentional relations to objects in the world.

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 77-84.

Intentional ... relations ... concern the way mental states and mental representations actually "hook up" with the world; ... [but] mental states and mental representations themselves have an intrinsic representational character, which makes them as though actually related to extra-mental things whether they are so or not. [2]

McIntyre's claim is that Fodor and Husserl disagree about the respective roles of intentional character and intentional relations in specifying the intentional object of a mental act. For Fodor the referent or intentional object of a mental state (what that state is "about") is determined by its external intentional relations. For Husserl the referent or intentional object of a mental state is determined by the intentional character of that state. Husserl, like Searle, holds an "internalist" RTM, according to which all its semantic features are internal, intrinsic features of mental states. Husserl the internalist and Fodor the externalist are addressing very different problems. Fodor's problem is accounting for the representational features of mental acts. Representing, like meaning and referring, is a semantic feature of mental representations. So it is relational; hence a theory of representation requires an account of organism-environment interactions. Fodor doesn't solve his problem: by his own admission he is unable to account for the representational features of mental states within a computational psychology. Laws relating my thoughts to water cannot be articulated until we know how to pick out water: "characterizing the objects of thought is methodologically prior to characterizing the causal chains that link thoughts to their objects." [3] That means knowing what water (really) is, and that, alas, is up to the chemists. If psychology includes a theory of reference for thoughts about water, psychologists must wait for the completion of chemistry. In order to get on with psychology Fodor assumes that "mental processes have access only to formal (nonsemantic) properties of representations." [4] In doing so he abandons the semantics of mental representations to the nonpsychological sciences.

McIntyre sides with Husserl (and with Searle) in embracing intrinsic intentionality and rejecting the formality condition. He remarks that any computational account that respected the formality condition would be "false as a theory of mind ...." [5] He also suggests that Husserl's view of intrinsic intentionality can be defended from attacks by Putnam and others who deny that meanings are "in the head": "these issues have not been conclusively resolved to Husserl's detriment," he writes. He adds "there is a 'mental side' to intentionality that is as much a 'phenomenological fact' of our mental life as are consciousness and self-awareness; good philosophy demands that there be limits on the degree to which theory can do violence to these facts." [6]

Yet there are problems with the internalist RTM. On the internalist position semantic properties must include both descriptive and referential components. Noematic Sinne and intentional contents cannot be exclusively descriptive Fregean senses. A mental representation whose semantic properties were solely confined to Fregean senses could secure reference only to a generic intentional object. The difference between my thinking about a deciduous tree and my thinking about this particular oak is the specificity of my mental representation. The RTM must be able to explain how my thought of this oak is a thought of it rather than of any tree that satisfies the descriptive sense of my mental representation. There are two ways to secure reference to a particular tree; one is to let the context present only one satisfactory case, one instance of an oak. This route is closed to an internalist like Husserl or Searle, for they are committed to securing the reference of mental representations by intramental features alone. To paraphrase Searle:

... the speakers' internal Intentional content is [sufficient] to determine what he is referring to, either in his thoughts or in his utterances ... [7]

The referent of a mental representation cannot depend on the "external, contextual, nonconceptual, causal relations" between intentional contents and those features of the world that the contents are about. So the solution to the problem of specificity cannot be left to the context. The other solution is to add a demonstrative component to the descriptive sense so that what is meant is "this oak tree" rather than "an oak tree."

This is Searle's solution. Intentional states specify their own "conditions of satisfaction" by means of a representational demonstrative component. The conditions of satisfaction for a belief are its truth-conditions; for a perception, its accuracy. The conditions of satisfaction are internal to the mental state even though the representation of the conditions of satisfaction points outside the state into the world. Searle manages this internalist account by making the conditions of satisfaction self-referential. For example the conditions of satisfaction of seeing this oak tree refer to the oak only as the cause of "this very experience." This retains the internality of the conditions of satisfaction while achieving specificity of reference.

What I want to argue now is that reference in natural language is governed by factors that are not available for an internalist semantics of mental representations. Referring is something people do. Expressions--bits of language--do not refer. Speakers use expressions to refer; when what they intend to refer to by using an expression comports with the practices of the linguistic community in which they are speaking, they will generally be understood by their audiences to have referred to what they intended to refer to. To say that a name 'E' refers to some thing S is shorthand for saying that in this linguistic community speakers who use 'E' intending to refer to S will generally, other things being equal, be understood. This is not a theory of reference, for it does not explain how the consonance between speakers' intentions and hearers' expectations is grounded. It is a view about the conditions under which referring is possible, and is the starting point for a theory of reference.

This picture of referring with names, or with other demonstrative expressions, is consistent with both of the theories of naming being defended in current debates, the causal theory and the description theory. The description theory is that a name refers (that is, a speaker uses a name to refer) to its bearer in virtue of an associated description or cluster of descriptions of the bearer. The causal theory is that the use of a name refers by virtue of being causally connected through a chain of uses back to the name's bearer or to the ceremony or baptism of naming. The differences between these two theories [8] are less important for my purposes than the one decisive similarity. Both accounts assume that a name is being used by a speaker to some audience. Neither the causal theory nor the description theory is an account of how names by themselves refer. They are accounts of what must obtain when a speaker uses a name intending to refer to something believed to be its bearer and is understood as having done so.

Most proponents of both theories are clear about the role of intention and communal expectations in securing reference. Kripke, for example, explicitly denies that the causal theory explains meaning independently of intentions and social practices. Both are involved in fixing the reference of proper names. His theory "takes the notion of intending to use the same reference as a given." [9]

In general our reference depends not just on what we think ourselves, but on other people in the community, the history of how the name reached one, and things like that. It is by following such a history that one gets to the reference. [10]

This cannot be the way reference is secured with mental representations, however. An internalist must, by virtue of his theory, adopt a theory of reference according to which mental representations refer independently of the intentions of their "users" and of the practices of the local community. An internalist account must not invoke the intentions of the users of those representations for two reasons. First, mental representations, unlike referring expressions in natural languages, are not used. I do not think by manipulating mental representations. When I think about an object O there is a connection between the mental representation which is the content of my intentional act and O, but the connection does not come into being because I intend it. Thinking about things isn't a matter of intending to communicate and choosing the best means for accomplishing that end. This is not just because in thinking I am as it were talking to myself and thus am ensured a comprehending audience of one. Thinking is not an intentional (i.e., purposeful) activity at all. This point is often overlooked because thinking is taken to be the paradigm intentional (i.e., imbued with aboutness) activity. Second, intending to use R to refer to O is itself an intentional act whose success would require another, prior, act of intending. If referring depends on intending to refer, then, since intending to refer is itself an intentional act with an object, the resulting account of referring would be circular.

If it turns out that in natural language reference depends on what the user intended and how those intentions comport with local practice then we need a very different theory of reference for mental representations. The challenge for the proponents of an internalist version of RTM, then, is to provide an account of reference that does not depend on communicative intentions and communal expectations.

Proponents of RTM can respond to the challenge in either of two ways. They may argue that I have overemphasized the role of speakers' intentions and communal expectations in determining reference in natural language. I doubt that this project could be successful. The challenge would be to spell out a theory of reference that would make it suitable for an account of how mental representations refer. Conversely, a proponent of RTM could agree that we need a different theory of reference for mental representations, one that makes reference independent of the conventional factors that govern normal discourse. There is one obvious candidate for such a theory, namely Grice's notion of "natural" meaning (meaning N). [11] Meaning N is illustrated by the following examples:

That fever means the body is rejecting the transplant.

The Board's recommendations mean a bad year for the humanities.

Grice notes two central differences between meaning N and "nonnatural" meaning (meaning NN). In the case of meaning N, 'x means that p' entails p. It cannot be said that "That fever means the body is rejecting the transplant, but the body is not rejecting the transplant." Natural meaning is a sign or symptom or bit of evidence for what is meant. Secondly, what means N is a fact rather than a person. No one means N that p: x means N that p, but doing so is not an action that a person could perform. If something a person does means something, what the person does--under the description that makes it a case of meaning--is a case of nonnatural meaning.

Adopting meaning N for mental representation would provide a ready explanation for the fact that we do not make intentional use of our representations in meaning one thing or another. Wittgenstein once asked how I know which of two friends with the same name I am writing to when I begin a letter "Dear John": the answer now available seems to be that my thinking of one John rather than the other is not a willful action on my part. I am entertaining a representation whose factual correlation ties it to one friend, and not to the other. The contents of our mental representations would be as little subject to alteration by conscious intervention as the chemical composition of our blood. Like the composition of my blood there would be things I could do to alter my mental representations, but nothing as direct as simply willing that they change.

Furthermore, a theory of natural meaning for mental representations would be nicely consistent with physicalism. Contemporary internalist representational theorists, unlike Husserl, deny that conscious introspection is the best or only access to mental representations. Mental representations are features of the brain, and it is to be expected that there will be empirically discovered correlations between brain states suspected to be representations and their meanings. The language of thought was never supposed to be readily readable.

On the other hand, if mental representations have meaning N it would be difficult to explain how mental representations are related to the propositional attitudes with whose content they have been routinely identified. For example, if 'x means N that p' entails p then the normal propositional contents of beliefs and other mental states could not be substituted for p on pain of claiming that all our beliefs are true and our fondest wishes fulfilled. Some latitude for error and falsehood must be allowed in the operation of mental representations if those are to serve the gritty cognitive roles that contemporary internalists envision for them. Reports of beliefs and desires will not, on the view being considered, be straightforward reports of the contents of mental representations. So anyone tempted to adopt a theory of natural meaning for mental representations will be forced to explain the relation between the symptomatic empirical meanings of representations and the conventional nonnatural meaning of linguistic reports and expressions of mental states. One of the consequences of adopting a theory of natural meaning for mental representations may be having to abandon the hope of tying the semantics of natural language to the semantics of the language of thought. It seems to me that would seriously undermine the plausibility of the thesis of mental representation altogether.


Commentary 83

--fJ--

It should be evident that Emmett is arguing from a standpoint that isolates the problem of linguistic reference from that of mental reference. This, of course, is in opposition to the Husserlian strategy mapped out by McIntyre, which considers the problem of mental reference to be at the very root of the problem of linguistic reference. Whereas Husserl develops his account of linguistic reference on the basis of an analysis of the semantic content of mental states, Emmett organizes her thoughts relative to the problem of linguistic reference by proposing a theory of semantic reference incompatible with Husserl's internalist account of mental reference. She then draws the conclusion that Husserl's analysis of intentionality is incapable of providing the unified account of mental and linguistic reference envisioned by McIntyre.

Emmett emphasizes the problem of reference with respect to proper names. She suggests that a correct understanding of this problem will reveal a key weakness in the Husserlian analysis of consciousness advocated by McIntyre. She argues against trying to discern how proper names "by themselves" refer. Instead, she would try to determine the conditions which allow a speaker to use a proper name with the intention of referring to something "believed to be its bearer," and in a way that is "understood as having done so" by those who are privy to this use. McIntyre would surely question the merit of this rendering of the problem of semantic reference. We have encountered the thesis that a third person orientation is inappropriate for the study of minds since it preempts our ability to discern crucial evidence intrinsic to the nature of consciousness. McIntyre is stressing a similar point. To search for meaning from a third person standpoint camouflages the fact that meaning is always constituted from a first person standpoint.

Another central aspect of McIntyre's position is his critique of the notion that Husserl was advancing a proto-computational theory of mind. In response to an earlier position advocated by Hubert Dreyfus, McIntyre questions the extent to which Husserl's approach to the study of mental representation is compatible with the directions exhibited by contemporary research in cognitive science. By emphasizing important differences between Husserl's "semantic" rendering of mental representation and Fodor's "syntactic" rendering, McIntyre tries to show that Dreyfus' critique of the cognitive science movement fails to apply to Husserl's position. McIntyre concludes that because there are no significant affinities between Husserl's "phenomenological" approach to the study of mental representation and Fodor's "methodological solipsism," it is a mistake for Dreyfus to include Husserl's theory of intentionality within the parameters of a critique of cognitive science. In the following commentary, Professor Dreyfus offers his assessment of the extent to which McIntyre's discussion succeeds in sheltering Husserl's position from the type of criticism advanced by Dreyfus against the computationalists.

Though Dreyfus concurs with McIntyre's assessment of the important differences between Husserl's and Fodor's research programs, he sets out to defend the thesis that Husserl's position remains vulnerable to the general thrust of his critique of computational theories of mind. He draws on the early work of Martin Heidegger and on recent developments in our understanding of the key thresholds in skill development. In the course of his discussion, Dreyfus relies on the distinction between "attentive" and "nonattentive" modes of awareness. He admits that attentive awareness exhibits the sort of relation between "subject" and "object" which lends itself to the Husserlian form of intentional analysis. But he cautions us against adopting this strategy, claiming that it would inhibit our ability to appreciate the full range of meaning intrinsic to attentive modes of awareness. Dreyfus argues that the intentional content of an attentive mode harbors only a distorting fragment of the intentionality which would account for the possibility of this awareness. He defends this claim by spelling out the implications of a "training wheel" analysis of skill development. He identifies the extent to which nonattentive modes of awareness support our capacity to operate effectively at the attentive level, and chides Husserl for having failed to appreciate the extent to which nonattentive awareness differs in character from attentive forms of awareness. He is especially critical of Husserl's presumption that both forms of awareness share the character of harboring an intentional relation between a "detachable subject" and a "detachable world." Does this suggest that Husserl's study of mind and meaning misrepresents the spirit of his all-important commitment to ontological neutrality? At issue is the extent to which Husserl's form of intentional analysis conceals the non-Cartesian character of the mind's primary relationship to objects and the world.

Page 93: Perspectives on Mind

HUBERT L. DREYFUS

Husserl's Epiphenomenology

Ronald McIntyre has written the account I should have written, situating Husserl judiciously with respect to several issues in Cognitivism. His basic criticism of my introduction to Husserl, Intentionality and Cognitive Science is well taken. Husserl was not a computationalist. Still, I feel the two intuitions that led me to criticize Husserl as a cognitivist, and a computational one at that, remain intact, and I am happy to have this opportunity to thank McIntyre for his helpful criticism and to restate my analysis of the issues in a way which, I hope, will be more accurate and persuasive.

As I see it there are two separate but related issues to be addressed. The first is: Was Husserl sufficiently cognitivist to be vulnerable to Heidegger's critique? McIntyre seems to think that the question whether "Heidegger's reasons for rejecting the very possibility of transcendental phenomenology are basically right" is somehow affected by the computationalist issue, so that by setting the record straight that Husserl's noema is semantic not syntactic, he is dissociating Husserl from my critique of Artificial Intelligence (AI) and thereby being "more sympathetic towards Husserl". I will argue that Husserl's transcendental reduction, based as it is on his account of intentional content, is untenable on phenomenological grounds alone, independently of whether he holds mental representations to be semantic or syntactic, and that Heidegger has sketched a devastating critique of this aspect of Husserl's cognitivism.

The second issue is less clear cut. It concerns Husserl's claim that semantic content is essential for understanding the mind, while allowing that a mathesis of experience might be possible. It seems to me that if Husserl accepts weak AI (as McIntyre illuminatingly puts it) and admits that all mental activity might be formalizable--that a "mathesis of experience" is in principle possible--there is no explanatory job left for semantic content. In that case "descriptive phenomenology", as an inventory of the meaningful contents of transcendental consciousness, although not compatible with strong AI, is compatible with Jerry Fodor's brand of computational cognitivism.

Turning first to Heidegger's objections to Husserl's cognitivism, it seems to me that Heidegger has two basic objections to Husserl's transcendental reduction and to methodological solipsism. Both objections question the possibility of separating subject and object. One critique focuses on the absence of the subject/object distinction in the experience of everyday coping. The other questions the possibility of treating the background conditions of any intentional state (in Husserl's terms the outer horizon) as a network of intentional states. Both objections depend on a description of everyday coping which purports to show that, folk psychology and the philosophical tradition notwithstanding, skillful action cannot be understood in terms of an immanent subject sphere containing representations which refer, successfully or unsuccessfully, to a transcendent object. It is this account of action which allows Husserl and Fodor to suppose that they can bracket existence and describe a self-sufficient sphere of intentional content.

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 85-104. © 1988 by D. Reidel Publishing Company.

Heidegger does not deny that we are conscious. Rather he wants to show that only certain forms of awareness have intentional content and that these forms of awareness do not have the necessary role in our ongoing everyday activities which the tradition has supposed. Being and Time shows how much of everyday activity, i.e., how much of the human way of being, one must describe without recourse to intentional content. The importance of returning to a description of everyday practice, as a more basic form of intentionality and transcendence than can be found in the subject/object tradition, is explained by Heidegger in his course, The Basic Problems of Phenomenology, given in 1927, the same year he published Being and Time.

The task of bringing to light Dasein's existential constitution leads first of all to the twofold task, intrinsically one, of interpreting more radically the phenomena of intentionality and transcendence. With this task ... we run up against a central problem that has remained unknown to all previous philosophy and has involved it in remarkable, insoluble aporia. (BP, p. 162) [1]

Heidegger seeks to show that the directed coping of everyday practice (which he calls primordial intentionality) is the fundamental mode of Dasein's activity and that mental states with intentional content (let us call this full intentionality) are derivative, both because intentional content need not be present for coping to occur and because all directed activity presupposes a transcendent horizon or background which cannot be accounted for in terms of intentional content. As Heidegger puts it:

It will turn out that [full] intentionality is founded in Dasein's transcendence and is possible solely for this reason--that transcendence cannot conversely be explained in terms of [full] intentionality. (BP, p. 162)

To make his first point, that the subject/object relation is based on an inaccurate description of everyday action, Heidegger describes the activity of using things such as hammers and doorknobs. Heidegger claims that to describe accurately what is going on in such activity we have to break out of our traditional Cartesian assumptions concerning mental content.

Page 95: Perspectives on Mind

Husserl's Epiphenomenology 87

The achieving of phenomenological access to the entities which we encounter, consists in thrusting aside our interpretive tendencies, which keep thrusting themselves upon us and running along with us, and which conceal not only the phenomenon of 'concern', but even more those entities themselves as encountered of their own accord in our concern with them. (BT, p.96) [2]

If we succeed in "letting things show themselves as they are in themselves" we discover that the equipment we are using has a tendency to "disappear". We are not aware of it as having any characteristics at all.

The peculiarity of what is proximally ready-to-hand is that, in its readiness-to-hand, it must, as it were, withdraw in order to be ready-to-hand quite authentically. (p. 99)

When hammering a nail, for example, "The hammering itself uncovers the specific 'manipulability' of the hammer" (p. 98), but I am not aware of any determinate characteristics of the hammer.

Not only is the equipment transparent, but so is the user. Heidegger calls the user's transparent everyday way of taking account of equipment "circumspection." Strictly speaking, we should not even say that circumspection is "taking account", if by that we smuggle in a little bit of intentional content. To see what Heidegger is getting at here, it is essential to do phenomenology, to work out a detailed account of everyday skilled activity. My brother, Stuart, and I have sketched such an account in our book, Mind Over Machine. [3] We seek to show, by a study of the five stages through which adults being taught a skill usually pass, that a fully cognitivist account of skills in terms of rules applied to objective features is appropriate for novice behavior, and that a fully intentionalist account that requires deliberate attention is still appropriate for the competent performer, but that no intentional content at all is involved at the highest level of expertise, which is the level at which most of us operate most of the time. Here is a condensed account:

Stage 1: Novice

Normally, the instruction process begins with the instructor decomposing the task environment into context-free features which the beginner can recognize without benefit of experience. The beginner is then given rules for determining actions on the basis of these features, like a computer following a program.

88 Hubert L. Dreyfus

For purposes of illustration, let us consider two variations: a bodily or motor skill and an intellectual skill. (1) The student automobile driver learns to recognize such interpretation-free features as speed (indicated by his speedometer) and distance (as estimated by a previously acquired skill). Safe following distances are defined in terms of speed; conditions that allow safe entry into traffic are defined in terms of speed and distance of oncoming traffic; timing of gear shifts is specified in terms of speed, etc. (2) The novice chess player learns a numerical value for each type of piece regardless of its position, and the rule: "Always exchange if the total value of pieces captured exceeds the value of pieces lost." He also learns that when no advantageous exchanges can be found, center control should be sought, and he is given a rule defining center squares and one for calculating extent of control.

Stage 2: Advanced Beginner

As the novice gains experience coping with real situations, he begins to note, or an instructor points out, perspicuous examples of meaningful additional components of the situation. After seeing a sufficient number of examples, the student learns to recognize them. Instructional maxims can now refer to these new situational aspects recognized on the basis of experience, as well as to the objectively defined non-situational features recognizable by the novice.

The advanced beginner driver uses (situational) engine sounds as well as (non-situational) speed in his gear-shifting rules. He shifts when the motor sounds like it is straining. He learns to observe the demeanor as well as position and velocity of pedestrians or other drivers. He can, for example, distinguish the behavior of the distracted or drunken driver from that of the impatient but alert one. No number of words can take the place of a few choice examples in learning these distinctions. Engine sounds cannot be adequately captured by words, and no list of objective facts enables one to predict the behavior of a pedestrian in a crosswalk as well as can the driver who has observed many pedestrians crossing streets under a variety of conditions.

With experience, the chess beginner learns to recognize over-extended positions and how to avoid them. He begins to recognize such situational aspects of positions as a weakened king's side or a strong pawn structure despite the lack of precise and universally valid definitional rules.

Stage 3: Competence

With increasing experience, the number of features and aspects to be taken account of becomes overwhelming. To cope with this information explosion, the performer learns, or is taught, to adopt a hierarchical view of decision-making. By first choosing a plan, goal or perspective which organizes the situation and by then examining only the small set of features and aspects that he has learned are relevant given that plan, the performer can simplify and improve his performance.

A competent driver leaving the freeway on a curved off-ramp may, after taking into account speed, surface condition, criticality of time, etc., decide he is going too fast. He then has to decide whether to let up on the accelerator, remove his foot altogether, or step on the brake. He is relieved when he gets through the curve without mishap and shaken if he begins to go into a skid.

The class A chess player, here classed as competent, may decide after studying a position that his opponent has weakened his king's defenses to the point where an attack against the king becomes a viable goal. If the attack is chosen, features involving weaknesses in his own position created by the attack are ignored, as are losses of pieces inessential to the attack. Removal of pieces defending the enemy king becomes salient. Successful plans induce euphoria and mistakes are felt in the pit of the stomach.

In both of these cases we find a common pattern: detached planning, deliberate assessment of elements that are salient with respect to the plan, and analytical, rule-guided, choice of action followed by an emotionally involved experience of the outcome. The experience is emotional because choosing a plan, a goal or perspective is no simple matter for the competent performer. Nobody gives him any rules for how to choose a perspective, so he has to make up various rules which he then adopts or discards in various situations depending on how they work out. This procedure is frustrating, however, since each rule works on some occasions and fails on others, and no set of objective features and aspects correlates strongly with these successes and failures. Nonetheless the choice is unavoidable. While the advanced beginner can hold off using a particular situational aspect until a sufficient number of examples makes identification reliable, to perform competently requires choosing an organizing goal or perspective. Furthermore, the choice of perspective crucially affects behavior in a way that one particular aspect rarely does.

This combination of necessity and uncertainty introduces an important new type of relationship between the performer and his environment. The novice and the advanced beginner, applying rules and maxims, feel little or no responsibility for the outcome of their acts. If they have made no mistakes, an unfortunate outcome is viewed as the result of inadequately specified elements or rules. The competent performer, on the other hand, after wrestling with the question of a choice of perspective or goal, feels responsible for, and thus emotionally involved in, the result of his choice. An outcome that is clearly successful is deeply satisfying and leaves a vivid memory of the situation encountered as seen from the goal or perspective finally chosen. Disasters, likewise, are not easily forgotten.

Remembered whole situations differ in one important respect from remembered aspects. The mental image of an aspect is flat; no parts stand out as salient. A whole situation, on the other hand, since it is the result of a chosen plan or perspective, has a "three-dimensional" quality. Certain elements stand out as more or less important relative to the plan, while other irrelevant elements are forgotten. Moreover, the competent performer, gripped by the situation that his decision has produced, experiences the situation not only in terms of foreground and background elements but also in terms of opportunity, risk, expectation, threat, etc. As we shall soon see, if he stops reflecting on problematic situations as a detached observer, and stops thinking of himself as a computer following better and better rules, these gripping, holistic experiences become the basis of the competent performer's next advance in skill.

Stage 4: Proficiency

Considerable experience at the level of competency sets the stage for yet further skill enhancement. Having experienced many situations, chosen plans in each, and having obtained vivid, involved demonstrations of the adequacy or inadequacy of the plan, the performer involved in the world of the skill "notices," or "is struck by," a certain plan, goal or perspective. No longer is the spell of involvement broken by detached, conscious planning. Since there are generally fewer "ways of seeing" than "ways of acting," after understanding without conscious effort what is going on, the proficient performer will still have to think about what to do. While doing so, elements presenting themselves as salient are assessed and combined by rule to yield decisions about how best to manipulate the environment.

On the basis of prior experience, a proficient driver approaching a curve on a rainy day may sense that he is traveling too fast. Then, on the basis of such salient elements as visibility, angle of road bank, criticalness of time, etc., he decides whether to take his foot off the gas or to step on the brake. (These factors were used by the competent driver to decide that he was going too fast.)

The proficient chess player, who is classed a master, can recognize a large repertoire of types of positions. Recognizing almost immediately and without conscious effort the sense of a position, he sets about calculating the move that best achieves his goal. He may, for example, know that he should attack, but he must deliberate about how best to do so.

Stage 5: Expertise

The proficient performer, immersed in the world of his skillful activity, sees what needs to be done, but decides how to do it. Given enough experience with a variety of situations, all seen from the same perspective but requiring different tactical decisions, the proficient performer presumably decomposes this class of situations into subclasses, each of which shares the same decision, single action, or tactic. This allows for the immediate intuitive response to each situation which is characteristic of expertise.

The expert chess player, classed as an international master or grandmaster, in most situations experiences a compelling sense of the issue and the best move. Excellent chess players can play at the rate of 5-10 seconds a move and even faster without any serious degradation in performance. At this speed they must depend almost entirely on intuition and hardly at all on analysis and comparison of alternatives.

Stuart recently performed an experiment in which an international master, Julio Kaplan, was required to add numbers rapidly presented to him audibly at the rate of about one number per second, while at the same time playing five-second-a-move chess against a slightly weaker, but master-level, player. Even with his analytical mind completely occupied by adding numbers, Kaplan more than held his own against the master in a series of games. Deprived of the time necessary to see problems or construct plans, Kaplan still produced fluid and coordinated, long-range strategic play.

Kaplan's performance seems somewhat less amazing when one realizes that a chess position is as meaningful, interesting, and important to a professional chess player as a face in a receiving line is to a professional politician. Almost anyone can add numbers and simultaneously recognize and respond to faces, even though each face will never exactly match the same face seen previously, and politicians can recognize thousands of faces, just as Julio Kaplan can recognize thousands of chess positions similar to ones previously encountered. The number of classes of discriminable situations, built up on the basis of experience, would have to be immense. It has been estimated that a master chess player can distinguish roughly 50,000 types of positions.

Automobile driving probably involves ability to discriminate a similar number of typical situations. The expert driver, generally without paying attention, not only knows by feel and familiarity when an action such as slowing down is required, but he knows how to perform the action without calculation and comparing alternatives. He shifts gears when appropriate with no awareness of his acts. What must be done, simply is done.

As experts we act on the basis of vast past experience of what to do in each situation, or, more exactly, our behavior manifests skills that have been shaped by a vast amount of previous dealings, and in most cases when we exercise these dispositions everything works the way it should. Lest one think that by speaking of this past experience as accounting for the discrimination of 50,000 cases I have reintroduced some new sort of noema or representational content, it is important to know that there are models of brain function that do not use symbolic representations at all. Researchers who call themselves "new connectionists" are building devices and writing programs that operate somewhat like neural nets. These parallel distributed processing systems can recognize patterns and detect similarity and regularity without using local representations. [4] The states in a connectionist machine cannot always be interpreted as symbols representing features or aspects of the skill domain. There is nothing in the program that can be interpreted as equivalent to conscious or unconscious mental content. There simply are no mental-level representations. Two developers of this alternative model note that in such models "information is not stored anywhere in particular. Rather it is stored everywhere. Information is better thought of as 'evoked' than 'found'." [5]

It seems that the beginner, advanced beginner and competent performer must direct attention to features, to aspects of the skill domain and to goals, presumably by way of mental content; but with talent and a great deal of involved experience one eventually develops into an expert who intuitively sees what to do without having to pay this sort of deliberate, step by step attention. The tradition has given an accurate description of the beginner and of the expert facing an unfamiliar situation. An expert does not focus on aspects of the situation or try to achieve goals. He "automatically" does what normally works and, of course, it normally works.

Heidegger describes this basic level of everyday activity as a kind of "sight" which does not require deliberate, thematic attention.

The equipmental contexture of things, for example, the contexture of things as they surround us here, stands in view, but not for the contemplator as though we were sitting here in order to describe the things, not even in the sense of a contemplation that dwells with them. The equipmental contexture can confront us in both ways and in still others, but it doesn't have to. The view in which the equipmental contexture stands at first, completely unobtrusive and unthought, is the view and sight of practical circumspection, of our practical orientation. "Unthought" means that it is not thematically apprehended for deliberate thinking about things; instead, in circumspection, we find our bearings in regard to them .... When we enter here through the door, we do not apprehend the seats, and the same holds for the doorknob. Nevertheless, they are there in this peculiar way: we go by them circumspectly, avoid them circumspectly, ... and the like. (BP, p. 163)

One should not think of this everyday coping as zombie-like. Rather it requires intense involvement. Aron Gurwitsch gives a good account of the masterful concentration which athletes sometimes call "flow".

[W]hat is imposed on us to do is not determined by us as someone standing outside the situation simply looking on at it; what occurs and is imposed are rather prescribed by the situation and its own structure; and we do more and greater justice to it the more we let ourselves be guided by it, i.e., the less reserved we are in immersing ourselves in it and subordinating ourselves to it. We find ourselves in a situation and are interwoven with it, encompassed by it, indeed just "absorbed" into it. This is in essential opposition to "being over against," "being at a distance," "looking at," "making objects present," by means of cogitative consciousness. [6]

Our skills usually function so transparently that they free us to act deliberately in other areas of our lives wherein we are not so skilled.

We should try to impress on ourselves what a huge amount of our lives--working, getting around, talking, eating, driving, etc.--is spent in this state of flow, and what a small part is spent in the deliberate subject/object mode, which is, of course, the mode we tend to notice when we stop to do philosophy and which has therefore been studied in such detail by the tradition. Explaining behavior in terms of beliefs and desires is even enshrined in folk psychology. For these reasons the range of non-deliberate activity should astonish us as it no doubt astonished Heidegger when he managed to struggle through to the phenomenon. He is no doubt thinking of himself too when he says of Aristotle:

Aristotle was the last of the great philosophers who had eyes to see and, what is still more decisive, the energy and tenacity to continue to force inquiry back to the phenomena ... and to mistrust from the ground up all wild and windy speculations, no matter how close to the heart of common sense. (BP, p. 232)

His account of skilled activity enables Heidegger to introduce both a new kind of intentionality (transparent use) which is not that of a self-contained subject, and a new sort of entity encountered (equipment) which is not a determinate, isolable object. If this account claims to be "more primordial", however, it cannot ignore the traditional account of subjects and objects, but, rather, must put them in proper perspective. Thus Heidegger must point out how intentional content and its objects enter the picture. He seeks to show that the tradition has brought them in too early in the analysis and that, moreover, the tradition has mis-characterized them so as to give them a foundational significance they cannot support.

Heidegger introduces traditional intentionality at the point where there is a disturbance or breakdown. For example, if the door-knob is stuck we find ourselves trying to turn the door-knob, desiring that it turn, expecting the door to open, etc. (where this is not meant to imply that we were trying, desiring, expecting, etc. all along). As Searle puts it when discussing the place of intentional content, "Intentionality rises to the level of skill." Although he concentrates on breakdown, Heidegger's basic point is that intentionality rises to the level of deliberate attention. This need not be a reaction to a disturbance or the absence of a skill, but can be "a more precise kind of circumspection, such as 'inspecting', checking up on what has been attained (etc.)" (BT, p. 409). Deliberate attention and thus full intentional consciousness is also present, for example, in curiosity, designing and testing new equipment, and in repairing old equipment. Since our skills serve long-range goals, there will, as Searle points out, always be a sense in which each stage of what we are doing is something we are doing intentionally, i.e., in order to realize some explicit intention. Heidegger's point is that while this activity is intentional in Searle's sense, its details are not contained in nor explained by the intentional content of conscious intentions. Since one could not execute an action without this flexible, adaptive activity, Heidegger calls it "primordial intentionality."

The structure of deliberate action is that of a subject with mental content directed towards an object. Deliberate action is not yet deliberative however. Only if the situation requires reflective planning do we shift into the deliberative mode. We can do this without changing the already fully intentional structure of deliberate consciousness. In deliberation one simply stops and considers what is going on and plans what to do, all in a context of involved activity. Here one finds the sort of reasoning familiar in folk psychology and studied in the tradition as the practical syllogism. As Heidegger puts it:

The scheme peculiar to [deliberating] is the 'if-then'; if this or that, for instance, is to be produced, put to use, or averted, then some ways and means, circumstances, or opportunities will be needed. (BT, p. 410)

Deliberation can be limited to a local situation or it can take account of what is not present. Heidegger calls long-range planning "envisaging".

Deliberation can be performed even when that which is brought close in it circumspectively is not palpably ready-to-hand and does not have presence within the closest range. In envisaging, one's deliberation catches sight directly of that which is needed but which is un-ready-to-hand. (p. 410)

Envisaging thus has the kind of aboutness or directedness to something beyond the local situation which Husserl calls referring to distinguish it from indicating. [7] But, Heidegger warns, the tradition does not pause to describe the phenomenon carefully, and so gets into trouble.

How does the tradition misinterpret envisaging? The traditional account supposes that a subject is related to an object solely by means of some mental content. On this account of intentionality, mental representations are assumed to be special meanings in the mind of the subject which can be described in complete independence of the world. As we have seen, Husserl claims that the phenomenologist can study such content by performing the phenomenological reduction, bracketing the world and reflecting directly on the intentional content. Heidegger warns: "[A] correct understanding of this structure has to be on its guard against two common errors which are not yet overcome even in phenomenology (erroneous objectivizing, erroneous subjectivizing)" (BP, p. 313). Heidegger specifically rejects the traditional view that our ability to relate to objects requires a subject or mind containing "internal representations".

It is not to be interpreted as a 'procedure' by which a subject provides itself with representations of something, [representations] which remain stored up 'inside'. (BT, p. 89)

Heidegger does not, however, want to deny that when skillful coping reaches its limit and requires deliberate attention we become a subject conscious of objects; he wants, rather, to describe this subject accurately, and interpret it anew.

what is more obvious than that a 'subject' is related to an 'object' and vice versa? This 'subject-object-relationship' must be presupposed. But while this presupposition is unimpeachable in its facticity, this makes it indeed a baleful one, if its ontological necessity and especially its ontological meaning are left in the dark. (p. 86)

How then are representations involved when our activity requires attention? The essential characteristic of representations according to the tradition is that they are purely mental, i.e. they can be analyzed without reference to the world. Mind and world, as Husserl puts it, are two totally independent realms. Heidegger focuses on this point:

This distinction between subject and object pervades all ... modern philosophy and even extends into the development of contemporary phenomenology. In his Ideas, Husserl says: "The theory of categories must begin absolutely from this most radical of all distinctions of being--being as consciousness [res cogitans] and being as being that 'manifests' itself in consciousness, 'transcendent' being [res extensa]." "Between consciousness [res cogitans] and reality [res extensa] there yawns a veritable abyss of meaning." (BP, pp. 124-25)

Heidegger rejects this traditional interpretation of the independence of the mental. He argues that even when people have to act deliberately and so have beliefs, desires, plans, follow rules, etc., their mental contents cannot be directed toward anything except on a background of primordial intentionality--skilled practices which Heidegger calls "meaning", "the world", or sometimes perversely, perhaps to upset Husserl, "transcendence".

[T]he structure of subject-comportment [intentionality], is not something immanent to the subject which would then need supplementation by a transcendence; instead, transcendence, and hence intentionality, belongs to the nature of the entity that comports itself intentionally. Intentionality is neither something objective nor something subjective in the traditional sense. (BP, pp. 313-14)

Since on his account intentionality only makes sense if there are shared practices in terms of which Dasein acts and understands itself, Heidegger can say to Husserl:

If, in the ontology of Dasein, we 'take our departure' from a worldless "I" in order to provide this "I" with an object and an ontologically baseless relation to that object, then we have 'presupposed' not too much, but too little. (BT, p. 363)

Heidegger's point can be best illustrated by looking at the way rules work in everyday activity. Take speech act rules for example. When I am acting transparently, making and keeping promises, I do not need any rules at all. I've simply learned from cases and by now I am a master promiser. But if some difficult case occurs which exceeds my skill I can then invoke a rule, e.g. that in order not to be accused of breaking a promise one must either keep one's promise or explicitly revoke it. The important thing to notice is the sort of rule this is. It is not a strict rule whose conditions of application are stated within the rule. It is a ceteris paribus rule. Sometimes there are allowable exceptions, such as I was sick, or I saw that what I promised would hurt you, etc. The rule applies "everything else being equal", and we do not, and could not, spell out what "everything else" is, nor what counts as "equal". Yet in practice we usually agree. Ceteris paribus rules work thanks to our shared background practices. And these practices are skills and so require no intentional content.

Deliberate activity is in general dependent upon Dasein's involvement in a transparent background of coping skills. Thus, even when rules, beliefs, desires, etc. play an explanatory role, they cannot be treated as self-contained representations that can be pried off from the world by a transcendental reduction. All cognitivists, when faced with this problem, resort to the same strategy. They claim that the background is a belief system, what Husserl sometimes called a network of beliefs, such that the intentional content of the background can be pulled into transcendental subjectivity and thus under the reduction. When Husserl in Krisis attempts to meet Heidegger's critique in Being and Time he makes exactly this move.

[W]e move in a current of ever new experiences, judgments, valuations, decisions .... none of these acts, and none of the validities involved in them is isolated: in their intentions they necessarily imply an infinite horizon of inactive validities which function with them in flowing mobility. The manifold acquisitions of earlier active life are not dead sediments; even the background ... of which we are always concurrently conscious but which is momentarily irrelevant and remains completely unnoticed, still functions according to its implicit validities. [8]

Husserl is making here the move the skill model should help us resist. He is assuming that we once learned how to cope by figuring things out and that the intentional states, i.e., beliefs, rules, etc. we once formed, are still playing a role, albeit an unconscious one, in producing our current behavior. However, if one looks at skill acquisition, rules used by the beginner and advanced beginner are more like training wheels. They serve to begin the process of accumulating experiences of whole situations, but after enough experience has been accumulated they are simply left behind. Of course, one cannot prove that the early rules are not still functioning in the unconscious. One can only point out that the fact that an expert can be led to recollect them does not show he still uses them, nor are there any a priori or a posteriori reasons to argue that he does.

McIntyre performs the same sleight of hand when, faithfully following Husserl, he transforms the horizon into a set of beliefs.

But, Husserl holds, this sense does not do its work of characterizing or prescribing the object in isolation from the rest of my mental repertoire. I believe that trees come in different varieties, that trees are physical objects and so are three-dimensional, and so on. Within the context of such beliefs, the sense "tree" foretokens or "predelineates" a range of further possible experiences in which the object before me would be characterized in further possible ways ...

[T]he Sinn of the present experience, in conjunction with the Sinne of relevant background beliefs, limit in rule-governed ways the kinds of further experiences that can belong to it. [9]

This example is a special case of what AI researchers call the common sense knowledge problem. For example, common sense physics--our understanding of three dimensional objects like trees as well as how physical objects bend, fold, float, drip, stick, scratch, roll etc.--has turned out to be extremely hard to spell out in a set of facts and rules.


When one tries, one either requires more common sense to understand the facts and rules proposed or else one produces formulas of such complexity that it seems highly unlikely they are in a child's mind. [10] It may well be that the problem of finding a theory or rationale of common sense physics is insoluble. By playing with all sorts of liquids and solids for several years, the child may simply have developed an ability to discriminate thousands of typical cases of solids, liquids, etc., each paired with a typical skilled response to its typical behavior in typical circumstances. There may be no rationale of common sense physics simpler than a list of all such typical cases.

What Heidegger is objecting to is that cognitivists treat the background as a kind of knowledge rather than as a kind of know-how. McIntyre rightly remarks that for Husserl the commitment to intentional content even in the background skills is an unquestioned basic assumption.

If Husserl's belief--that for every conceivable human experience there is such an internal rationale, independent of that experience's actual relations to the natural world--is a "cognitivist" belief, then without doubt Husserl is a cognitivist. [11]

Heidegger reacts not by trying to prove that Husserl's faith is false, but by showing it is bad phenomenology.

Since the issue turns upon whose approach allows a more accurate description of the mental activities involved in skilled behavior, Heidegger's or Husserl's, it does not help Husserl to point out that at least he does not share the errors of computational cognitivism, as McIntyre seems to hope it will. To help Husserl one would have to show that the account of everyday skills in terms of intentional content and his related claim that the background of all skilled activity can, in principle, be analyzed in terms of additional intentional content--both required by his transcendental reduction--are good phenomenology, i.e., rest on accurate description of experience. This no one has tried to do because the rationalist tradition, which descends from Plato to Descartes to Leibniz to Kant to Husserl, has made the cognitivist account seem obvious. Or, as Heidegger would put it, it is definitive of our philosophical tradition that both primordial intentionality and the world are systematically passed over.

What, then, are the implications of McIntyre's convincing argument that Husserl would have strongly opposed the view that John Searle has attacked as strong AI, the view that intentionality just is the manipulating of formal representations? I would like to suggest that when it comes to relating Husserl to full-fledged computational cognitivists like Fodor the importance of this fact may not be as great as McIntyre implies.

The issue turns on the role of syntactic and semantic mental content in explaining mental activity. The question is: what is to be explained and what would count as an explanation? McIntyre tells us that "Husserl thinks of noemata as having syntactic and semantic properties." [12] He then makes the excellent point that both Husserl and Fodor reject the idea that what makes a mental symbol a symbol is that the mind takes it to be one. This would, indeed, lead to a regress. But, as McIntyre points out, Husserl and Fodor get out of the difficulty of explaining the role of symbols in diametrically opposed ways.

[C]omputationalism is precisely designed to show how mental symbols can do their work without functioning as symbols, i.e., independently of their semantic or representational properties. Husserl's rejection of mental symbols in favor of noematic Sinne is based on the very opposite view: the view that the meanings of these symbols, not just the symbols themselves or their "syntactic" features, would have to do the work of explaining mental representation. [13]

Both accept the need to explain how mental states get intentionality. But whereas for Husserl noemata just are semantic states, the question whether syntactic representations get meaning by way of their causal role or in some other way need not concern Fodor.

What is striking here is that in spite of a fundamental metaphysical difference concerning the nature of meaning, Husserl and Fodor both "solve" the problem of intentionality by setting it aside. Fodor leaves it to natural science, while "Husserl apparently thinks it is simply an intrinsic and irreducible (though not completely unanalyzable) property of meanings to represent." [14] For Husserl, intrinsic intentionality is simply a "wunderbar phenomenon" to be taken for granted, while for Fodor it is someone else's worry. The key point is that both Husserl and Fodor agree that to explain mental functioning one need not explain how primitive elements making up mental representations manage to have truth conditions. But what about explaining how the mind works? On that point the difference between Husserl and the formalizers at first seems decisive. When it comes to identifying, classifying, synthesizing, and in general manipulating mental content, then, as McIntyre says, "a system of Husserlian mental representations would be one whose operations are carried out, not on their formal properties alone, but by virtue of their meaning and representational character as well." [15] This does seem to be Husserl's view, and to defend it he seems to have used the sort of example McIntyre finds in Dretske:

[T]hat even mathematical thinking, such as adding numbers, is not the same thing as manipulating formal symbols. The symbols being manipulated by a person who is adding must represent numbers, and so have meaning, for that person ... and she must manipulate them as she does at least partly because they mean what they do for her. [16]


But this is very puzzling. It is all very well to describe mental functioning in terms of meaning, but when one wants to explain how the mind actually works, the only account that has been put forward that connects up with modern science is the computational one. Precisely what makes the computer model so attractive, as Fodor points out in his Scientific American article [17], is that while we can't understand how the brain can act on meanings, we can make sense of an operation being carried out on a representation if we think of formal rules manipulating formal representations as in a digital computer. In this respect citing Dretske in support of Husserl is taking in a Trojan horse. Dretske agrees with the computationalists but adds that the only way to explain how we can manipulate symbols in terms of their meaning is to add to our account of the formal manipulation of formal symbols a causal account of reference.
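The point about formal rules manipulating formal representations can be made concrete with a toy example of my own (the unary notation is invented, and is not from the text): "adding" carried out as pure symbol rewriting. The rule applies in exactly the same way whether or not the marks mean numbers, which is just what makes the model tractable and, on Dreyfus' reading, what makes meaning epiphenomenal to it.

```python
# A toy illustration (my own, with an invented unary notation): addition
# as a purely formal operation. The rewrite rule concatenates marks and
# never consults what, if anything, the marks mean.

def formal_add(a: str, b: str) -> str:
    """Rewrite (a + b) by concatenating unary marks, e.g. '|||' + '||' -> '|||||'."""
    if set(a) - {"|"} or set(b) - {"|"}:
        raise ValueError("only the formal symbol '|' is allowed")
    return a + b

result = formal_add("|||", "||")
print(result)       # |||||
print(len(result))  # 5, i.e. 3 + 2, though the rule itself is meaning-blind
```

That the output happens to track arithmetic is guaranteed by the rule's form alone; the Dretske-style question is whether anything like this could also be an account of a person adding.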

This seems to leave Husserl looking like a firm anti-formalist ready, as for instance John Searle is, to await an account from neuroscience that reveals the causal role of meanings. This is probably the view Husserl should have held, but McIntyre's commitment to the truth leads him to dig up a passage in which Husserl seems to give an inch to the formalists and may be in danger of losing a mile. After discussing the possibility of a formal axiomatic system describing experience, and pointing out that such a system of axioms and primitives--at least as we know it in geometry--could not describe everyday shapes such as "scalloped" and "lens-shaped", Husserl leaves open the question whether everyday concepts could nonetheless be formalized. (This is like raising and leaving open the AI question whether one can axiomatize common sense physics.) Husserl concludes:

The pressing question is admittedly not answered thereby whether within the eidetic domain of [phenomenologically] reduced phenomena (either in the whole or in some part of it) there could not also be, alongside the descriptive, an idealizing procedure that substitutes pure and strict ideals for intuited data and that would even serve--as a counterpart to descriptive phenomenology--as the basic medium for a mathesis of experience. [18]

Although Husserl is not at this point raising the question whether a formal model of experience could contain a formal analog of intentionality, he is not foreclosing the possibility. The most we can conclude is that for Husserl, while a descriptive account of mental life cannot ignore semantics, a formal model of experience which substitutes syntax for semantics may nonetheless be possible. Thus, although Husserl implicitly rejects strong AI, which holds that intentionality just is formal manipulation (perhaps plus physical causality), he leaves open the possibility of a sophisticated version of computational cognitive psychology. Admitting the possibility of mathesis moves Husserl from Searle's side to Fodor's. If we could model mental activity using strict rules operating over fully explicit and context-free predicate senses, and could analyze the context into a network of these precise primitives, the explanation of behavior would have no need for brain-based semantics. What explanatory power would it add to say that the rules apply to the symbols by virtue of the symbols' meaning if the rules would apply in exactly the same way even if the symbols had no meaning?

Both Fodor and Dretske admit that we must describe the mind semantically, but both hold that we must explain it syntactically. I once heard Fodor concede in response to Searle's Chinese Room argument roughly that, while formal operations are necessary conditions for explaining mental operations, they do not provide sufficient conditions, precisely because they cannot explain the experience of intentionality. Of course, such a concession would not bother Fodor or Dretske or, indeed, any computational cognitivist, since it follows from their formal model of the mind that, where explanations of the mind are concerned, conscious experience, intentionality, and meaning in general are merely epiphenomenal.

Once Husserl allows the possibility of a mathesis of experience, he could only disassociate himself from this sophisticated computationalist view if he could show that, besides the manipulation of mental content on the basis of its syntactic structure, there are mental operations in which semantics plays an irreducible role. But once one holds that the semantic properties of noemata mirror their syntactic ones, that even skilled activities and the background can be absorbed into noematic content, and that there must be a rationale of experience--i.e., once Husserl has made all the cognitivist moves criticized by Heidegger--it looks like one has allowed the possibility of an explanation of mental life in which meaning and consciousness play no essential role. Then, to recall the famous froth on the wave metaphor, it seems that semantics only comes along for the ride. But this would reduce descriptive phenomenology to epiphenomenology, and leave Husserl compatible with computational cognitivism after all. Despite our differing views on the centrality of consciousness and semantic content, both McIntyre and I would see such a result as a sure sign that Husserl had fallen in with bad company.

* * *

Three points advanced in this essay deserve special consideration. The first arises from Heidegger's reflections on "circumspective concern." Dreyfus argues that the "expertise" we exhibit in coping with everyday concerns is a primary mode of intentional awareness which does not lend itself to analysis in terms of "intentional content." The second point is basic to his argument against representational theories of mind: he argues that such theories are based on the fallacious assumption that "expertise" depends on rules and beliefs that have emerged from prior stages in the development of an individual's problem-solving skills. The third point concerns Dreyfus' belief that the possibility of a formalized model of experience (Husserl's concept of a "mathesis of experience") trumps the value of a semantic description of mind and thus renders Husserl's position impotent as a contribution to cognitive science.

Turning to the first point, Dreyfus contends that "expertise" in coping with everyday concerns is a primary mode of intentionality which cannot be accounted for in terms of intentional content. When it comes to the use of hammers and doorknobs, he argues, we cannot offer an adequate description of what is going on unless we "break out of our traditional Cartesian assumptions concerning mental content" (Dreyfus: this volume, p. 86). He would have us be more sensitive to the way in which interpretive schemas of "Husserlian" intentionality conceal the character of our primary relationship to objects and the world. In this relationship, he argues, things with which we are concerned have a tendency to "disappear" in the course of our involvements: they "withdraw" from the focus of attentive awareness. As practicing experts, we, too, recede from view (we recede beyond the horizons of attention) and become "transparent" to ourselves in the midst of our involvements. These involvements exhibit the sort of "directed" character which marks them for intentional analysis. But do they manifest the "subjective" and "objective" components integral to Husserlian intentionality? Drawing on Heidegger's concept of "circumspective concern," Dreyfus argues that a dissection of nonattentive awareness into "subjective" and "objective" components distorts phenomenological description. What, after all, do we mean by "nonattentive awareness" if not that these features of experience are transparent to consciousness? But why speak here of consciousness? If we say these components are "transparent" to consciousness, are we not implying that they exist without being noticed? Strictly speaking, Dreyfus reports, these "components" are not found in the experience of experts who are absorbed in the practice of their expertise. So any phenomenological description of the nonattentive forms of intentional awareness will be a distortion of the evidence should it include reference to "subjective" and "objective" components.

But if these components are missing from experiences characterized by nonattentive involvements, how can we possibly discern evidence of intentional content? If we cannot discern any intentional content in these experiences, how are we to ascertain that these modes of awareness have an intentional character? Because these absorbing experiences seem to lack intentional content, Dreyfus draws two conclusions: first, that it is a mistake to assume that "consciousness" is operating at this level of involvement; and secondly, that Husserl's "prescription" for intentional analysis is both incomplete and seriously misleading as an indicator of how the human mind operates. Since the key to this argument lies in the belief that intentional content is intrinsic only to attentive modes of awareness, we should consider more carefully why this belief is attractive to Dreyfus, and why Husserl took it to be false.

The second major point advanced by Dreyfus concerns the assumption, attributed to Husserl, that minds are places where "mental content" resides in the form of representations which serve to relate conscious subjects to objects of experience. "On this account of intentionality," Dreyfus writes, "mental representations are assumed to be special meanings in the mind of the subject which can be described in complete independence of the world." (p. 95) But can these "internal representations" be analyzed without invoking reference to the world? Siding with Heidegger, Dreyfus contends that "mental contents cannot be directed toward anything except on a [shared, transparent] background of primordial intentionality--skilled practices." (p. 96) Because he considers these background practices a form of "know-how" rather than "knowledge," Dreyfus classifies them under the heading of skill behavior. On the basis of his analysis of the five major thresholds to skill acquisition, he draws the inference that intentional content is not endemic to these background practices. He concludes that it is a mistake to presume that they could be "pulled into transcendental subjectivity and thus under the reduction." (pp. 96-97)

This exposes a seemingly pivotal weakness in the Husserlian presumption that in the process of moving from the novice stage to that of expert, one develops an ability to cope in the world by figuring things out and learning to act in accordance with rules and beliefs which continue to function at advanced levels of ability, even as they become increasingly transparent to the user. Dreyfus sets out to correct this view on the basis of his provisional analysis of expertise. If his "training wheel" theory of skill acquisition is correct, experts do not require discrete rules and beliefs to aid them in the identification and manipulation of problem-solving situations. That is, "after enough experience has accumulated, [these rules and beliefs] are simply left behind." (p. 97) One might expect this to mean that the rules and beliefs become transparent, that they "disappear" only in a figurative sense. But Dreyfus is advancing a stronger thesis. In the problem-solving practices of the expert, rules and beliefs which brought the individual to this level of expertise fall away and cease to play any role whatsoever. Were this not so, Dreyfus maintains, they would become impediments to the emergence of expertise. What are we to make of this position? Has Dreyfus marshalled sufficient evidence in support of his claims? If not, what evidence would be sufficient to arbitrate his dispute with Husserl and the computationalists?

Dreyfus concludes with an analysis of the extent to which McIntyre has succeeded in insulating Husserl from the force of Heidegger's implicit critique of computationalism. He points to Husserl's acceptance of the potential for a "mathesis [or formalized model] of experience" and suggests that this acceptance turns on an important distinction between description and explanation. By spelling out the force of this distinction, Dreyfus takes himself to have exposed a key weakness in the research strategy of Husserl's descriptive phenomenology; for it appears that Husserl is accepting the potential for a syntactic explanation of mind, something which would seem to trump the value of a semantic description of mind. For the possibility of a mathesis of experience would seem to imply that "where explanations of the mind are concerned, conscious experience, intentionality and meaning in general are merely epiphenomenal." (p. 101) But if meaning and consciousness play no role in the explanation of mental operations, what is the value of descriptive phenomenology? What, if anything, does Husserlian phenomenology have to offer cognitive science?

McIntyre's primary concern was to dissociate Husserl from the formalist project to reduce the study of semantic content to an analysis of syntactic, rule-governed relations between mental representations. Allowing for both semantic description and formalist explanation of mind (which Dreyfus takes Husserl to be doing) seems to leave semantic description superfluous to an understanding of how the mind works. On this view, the latter is given epistemic priority. Though it may be that intentional analysis plays a key role in the former, the results of intentional analysis would seem to be entirely pointless for anyone looking to explain the basis of cognitive processing. So even if one accepts McIntyre's conclusion that Husserl was not the sort of formalist who argues for a reduction of semantic characteristics to formal syntactic procedures, it still seems to Dreyfus that Husserl was enough of a formalist to render descriptive phenomenology a useless appendage to cognitive science. This might possibly account for why Dreyfus brands Husserl's position "epiphenomenology."

In the next chapter, we will retrace our steps and attempt to resolve some of the questions raised here. We will begin with issues concerning the qualitative character of experience in order to see the extent to which philosophy of mind should be concerned with the intentional character of psychological phenomena. This will lead to reflections on the specific character of the "taking" function through which intelligence appears to be manifest. This in turn will suggest issues concerning the circuit of "intentional transaction" by means of which the mind seems to relate to its environment.


Chapter Two

STRUCTURES OF MENTAL PROCESSING

The "qualitative" character of subjective awareness remains an enigma to philosophers and psychologists. How do qualia arise in experience? How is it possible that they affect behavior? Are qualia structured by a subjective point of view? Or do they arise passively, perhaps as a consequence of the objective functions of neurobiological activity? Should we view qualia as emergent phenomena--that is, as having a nature and function beyond the scope of neuroscience? Or should they be conceived as epiphenomenal, that is, as nothing more than subjective manifestations of neurobiological processes, and hence as phenomena that can be explained by reduction to neurobiological theory?

2.1 Qualia

Proponents of reductionist strategies have found no reason for concluding that the qualitative character of subjective awareness lies beyond the scope of neuroscientific theory. But why should we expect such awareness to be manifest to someone occupying a third-person standpoint, a standpoint appropriate only for objective investigation of neurological activity? Is it conceivable that a neuroscientist might actually be able to comprehend the qualitative character of my experience simply by understanding the nature and function of my neurological make-up? Would such a view capture the meaning-components intrinsic to qualitative experience? Or is there an aspect of mental processing destined to lie beyond the reach of neurobiological theory?

Rey's argument discounted ordinarily accepted "evidence" for the claim that consciousness plays a special role in mental processing. Indeed, he proposed that essential mental operations could be individuated and instantiated in a computational mechanism by means of a network of "rational regularities" that supplant completely any role that might be imagined for the qualia. But Lurie cautioned us specifically against accepting theories that postulate discrete mental phenomena. If Rey is right, and Lurie as well, then computational reduction would somehow have to allow for holistic mental phenomena. This suggests a need for a holistic view of neurological processing, to be modeled, perhaps, on a similar view of mental processing.

On the other hand, McIntyre argued that mental processing exhibits a global structure that defies capture by reductionist strategies. Proponents of Rey's position would surely object to McIntyre's view of the mind as a "meaning generator." But has Rey's argument actually undermined the credibility of the thesis that consciousness plays a crucial, global role in mental processing? Supporters of McIntyre's phenomenological view would reply that Rey's strategy overlooks the very feature of mental processing that makes qualia and meaning possible. Are these counterarguments convincing? Suppose a robot were capable of human-like responses in problematic situations. Would it make any functional difference to the robot whether it was capable of entertaining qualitative experiences? Would it make any defensible difference to us? How could we even determine whether it had such a capacity? What test might we employ to settle the issue?

Rey proposed one kind of test--computational programs that function in accordance with "rational regularities" mimicking human mental operations. Were they to react to problem-solving situations in ways indistinguishable from our own, would it matter whether the robot was experiencing qualia? Rey avoids this question, for he apparently assumes that the concept of "qualia" is every bit as suspect as the concept of "consciousness" when it comes to developing an ontology of mental processing. But since there is serious doubt whether Rey's program would in fact capture the essential features of mental processing, we must not shy from the issue: to what extent would such a mechanism exhibit a capacity to entertain qualitative experience? If not at all, would we be justified in concluding that the robot lacked a crucial structure of mental processing? Or would it be more appropriate to conclude that the qualia issue is irrelevant to questions concerning the nature and structure of such processing?

In the next paper, James H. Moor proposes and analyzes strategies for testing computational mechanisms for evidence of qualitative experience. Professor Moor argues that such tests can never be decisive. It is impossible, he concludes, to justify the claim that a robot could entertain qualitative experiences "functionally analogous" to those we experience. But it would be a mistake, he adds, to make a great deal of the issue when assessing the design of robots. For if a robot exhibits functionally analogous behavior, then Moor sees nothing to be gained from "testing" whether or not the mental processes of the robot manifest a level of subjective awareness. Moor maintains that research should focus instead on determining which mental operations are associated with our behaviors when we are talking of experiencing qualia. It will suffice to design machines capable of simulating these operations. If it turns out that robots behave as we do, we will find it impossible to prove--although difficult not to believe--that such mechanisms are entertaining subjective experiences like our own. From this perspective, success or failure of the AI/cognitive science enterprise turns, not on successful production of qualitative experience in robots, but on the degree to which computational mechanisms exhibit problem-solving behavior analogous to our own in those situations where, from our point of view, experience of quality seems so important.


JAMES H. MOOR

Testing Robots for Qualia

1. The Meat/Metal Distinction

A computer recently electrocuted its owner just after he had purchased another computer. Did the computer kill from jealousy? Was it seeking revenge? Such explanations are fun to give, but I assume that nobody takes them seriously. Among other things, the behavior of today's computers is not sophisticated enough to even begin to convince us that they actually have qualitative experiences such as emotions and feelings.

Moreover, any attempt to give emotions and feelings to a computer by adding some affective behavior seems superficial. Imagine a good chess-playing computer enhanced to display emotion. The superior chess-playing computer might emit a synthetic chortle during a game when a human opponent made a particularly stupid move. Such a computer might gloat after winning a game by saying something like "nice try for a human." If it lost, the computer might have a temper tantrum. But these particular enhancements make the computer more obnoxious than feeling. "User unfriendly" computers are no more emotional than "user friendly" ones. Such behavior may arouse our feelings; but it is not really an expression of the computer's feelings. Common sense tells us that behind the facade of behavior there is still emptiness. A computer is emotionally hollow--void of feeling.

Perhaps a more promising approach to constructing a computer with qualitative experiences is to base its design on the internal workings of a human being. For the sake of argument, let us suppose a researcher conducts an extensive study of the human brain and related chemistry, and he becomes thoroughly knowledgeable about how the brain functions. Certain complex systems of the brain are understood to be responsible for certain feelings and emotions and for producing particular behavior patterns. Suppose the functionality of these systems is meticulously duplicated in computer hardware. Wetware is converted to dryware. The functionality of the systems, the relationship of inputs and outputs, is maintained although the makeup of the systems is changed. It will be useful to connect the inner systems to outer motor and sensory systems which are also essentially computer circuitry. The result of this endeavor is a robot that has an electronic brain which is functionally analogous to a human brain and has peripheral devices which are analogous to human motor and sensory systems. Now if the researcher has done the job properly, the robot should act in the world much the way a human being does. The robot should see with artificial eyes and grasp with artificial hands. From time to time the robot should show feelings. If its hand is squeezed too hard, it should react accordingly.

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 107-118. © 1988 by D. Reidel Publishing Company.

But does such a robot really have qualitative experiences? Does it really have sensations, feelings, and emotions? According to a functionalist view of a mind, the answer is "yes". Functionalism is not so much a single theory as a constellation of theories which share a central notion that a mind can be understood as a complex functional system. (Putnam, 1960; Fodor, 1968/1981). On the standard functionalist interpretation the components of a functional system can be realized in many ways both biologically and nonbiologically. On this view, humans are computers that happen to be made out of meat. Of course, it is also possible on this analysis of mind for a computer made out of electronic components to have a full mental life.

I think many people remain skeptical about the potential inner life of a robot, because even if a robot behaves in sophisticated ways, it seems to be made out of the wrong stuff to have qualitative experiences. Paul Ziff, who denies that robots can have feelings, puts the point in the following way:

When clothed and masked they may be virtually indistinguishable from men in practically all respects: in appearance, in movement, in the utterances they utter, and so forth. Thus except for the masks any ordinary man would take them to be ordinary men. Not suspecting they were robots nothing about them would make him suspect.

But unmasked the robots are to be seen in all their metallic lustre. (1964, p. 99)

How important is the meat/metal distinction with regard to having feelings and emotions? Is biology crucial for a mental life? It seems possible, if not likely, that a system which is functionally equivalent to a human being but made out of nonbiological parts may behave as if it had a mind, but in fact have no subjective experiences at all. This, I take it, is the point of the standard "absent qualia" objection to functionalism. If a functional theory doesn't capture qualia, i.e., our qualitative experiences, then it is an inadequate theory of mind. (Block, 1980a, 1980b)

In this paper I want to examine some tests and arguments which are designed to resolve the issue of whether an electronic robot is made out of the wrong stuff to have qualitative experiences such as sensations, feelings and emotions. I will assume the robot under discussion behaves in a manner closely approximating human behavior and that it has an internal organization which is functionally equivalent to relevant biological systems in a human being.


2. The Transmission Test

One approach to gathering nonbehavioral evidence for robotic experience is to tap into the inner processes of a robot's brain and to transmit the results. The transmission can be either indirect or direct. With indirect transmission the robot's inner processes are connected via a transmission link to a display board which can be examined by our sensory systems. The display board contains output devices which reveal what the robot senses. A television screen shows us what the robot sees, a speaker lets us hear what the robot hears, and so on. Various dials indicate the levels of emotional states.

If such a test were actually run, I think we would be skeptical about the results. Suppose there is an area on the display board which allows us to feel what the robot feels with its artificial hand. The robot touches a hot piece of metal with its hand, and we in turn touch the appropriate place on the display board. The board feels warm to us. Does the robot really feel the warmth? Or does the display board get warm merely as the result of a straightforward causal chain which is triggered by a hot piece of metal contacting the robot's hand?

Even if the information on the display board were sophisticated, as it would be with a television picture, I don't believe we would regard it as a reliable indicator of what the robot actually experiences. Some years ago there was a robot built at Stanford called Shakey. Shakey rolled about several rooms plotting pathways in order to travel from one place to another. Shakey had a television camera which allowed it to compute its position relative to other objects. Researchers could watch a television set to see what Shakey saw. But did Shakey see anything? Shakey used some of the information from the television camera input, but why should we believe Shakey actually experienced anything? Television cameras are transmitters of information, not experiencers. Thus, evidence about inner experiences gathered from a television display is not convincing. The television camera in Shakey could have been mounted on a cardboard box and still have transmitted the same robust pictures.

There are two clear shortcomings of this indirect transmission test. First, the evidence gathered by us is limited to information on the display board. The information on the display board may be so abstracted from the nature of actual experiences that it will not be persuasive. For example, evidence for emotional states in the form of dial readings is less convincing than ordinary emotional behavior itself. Second, the information picked up by our sense organs may tell us nothing more than the state of the display board. What we want to know about are the actual experiences of the robot. The display board output is determined by causal processes, but this causal chain may not reflect the robot's experiences. A videotape machine provides a nice display on a television, but the videotape machine itself presumably has no qualia.

110 James H. Moor

The transmission test can be improved, however, by eliminating the display board and transmitting the information in the robot's brain directly to a human brain. James Culbertson has proposed an experiment along this line. In his words, "The way to show that the machine is sentient, i.e., experiencing sensations, percepts, and/or mental images, is to connect it to the nervous system of a human observer." (1982, p. 6)

Suppose we set up a direct transmission test in which the analogous portions of the robot's brain are connected with a transmission link directly to a human brain. Now we can imagine that when the robot touches a piece of hot metal, the human in the test experiences what he would experience if he had touched a piece of hot metal. The experiences passed to the human monitor in the direct transmission test are not limited to sensory experiences. Presumably, emotional information can be passed on as well. If the robot feels angry, then the human monitor will feel anger. Hence, in the direct transmission test the subjective experiences of the robot can be experienced directly by a human being.

But is this direct transmission test really a good test? Perhaps it is an improvement over the indirect version, but the fundamental difficulty which lurks behind the indirect version lurks behind the direct version as well. Is it the case that the human monitor experiences what the robot experiences? Or, is it the case that the robotic apparatus simply generates experiences in a human? The human monitor has experiences initiated by the transmission link connected to the robot, but a human subject would also have experiences if connected to any machine generating similar signals.

The situation is not unlike actual results of brain probes on human subjects. Electrodes are sometimes used to stimulate various regions of a patient's brain and the patient reports having various kinds of experiences. In this situation, nobody maintains that the electrodes along with the associated electronic devices are actually having the experiences which are then transmitted to a human. Rather, the explanation is that the electrodes, when properly used, activate neural mechanisms which generate experiences in a human. Perhaps this is all that is happening in the direct transmission test.

The problem with the transmission test is similar to a problem which confronted John Locke. Locke had to explain which of our ideas really represent external reality and which are largely a product of our mind when influenced by external reality. The Lockean problem vis-à-vis the transmission test is to distinguish information which represents an internal reality of a robot from information which is largely a product of our own mind when influenced by signals from the transmission link.

The issue comes down to this. If we are already convinced that entities made out of electronic components can have qualia, then the transmission test seems well-grounded. We are actually tapping into a robot's experiences. But, if we are not convinced that a robot can have qualia, then the transmission test has little force. The robotic apparatus is viewed as a device which generates experiences in us but not one which has experiences of its own. Moreover, a negative result in the transmission test is not conclusive either. A functionalist, for example, can argue that a negative result shows only the inadequacy of the transmission link.

The key to our Lockean predicament is to get rid of the transmission link altogether. We must devise an experiment that does not involve transmission so that we can determine even more directly whether or not a device made out of electronic components can have qualia. Let's consider a thought experiment which allows us immediate access.

3. The Replacement Test

Suppose that our robot's brain isn't modelled on just any human brain but on Sally's brain. After the electronic brain is constructed, Sally suffers some brain damage. Suppose further that the damaged portions are critical to her pain system. Sally now feels no pain. Because pain provides important warning of injury, Sally would like to have the damage repaired. Biological repair is not possible but an electronic solution is. Scientists decide to remove the analogous portions of the robot's electronic brain and install them in Sally's brain in order to restore her pain system to normal functioning.

Installation of electronic devices in humans is not farfetched. Electronic devices are now implanted in the nervous system to block unwanted pain signals. Pacemakers are electronic implants that regulate heart function. Other implants in humans regulate and release chemicals. In our experiment a portion of the electronic brain is implanted along with an interface mechanism which permits the normal biological activity to interact with the electronic mechanism.

After replacement of the damaged portion of Sally's brain with the electronic analog, we are in a position to test the mental result of the replacement. We touch Sally's foot with a pin and we ask Sally, "Do you feel pain?" Let's suppose Sally replies, "No, I feel nothing." We move the pin to other locations and insert it somewhat deeper. Each time Sally denies feeling pain. Would such a result show that Sally, equipped with the computer implant, did not feel pain? It is unclear. The assumption of the replacement experiment is that the computer component is a functional equivalent of the portion of the brain being replaced. But, the present evidence would suggest that this assumption was not satisfied. That is, behavioral evidence that Sally is not feeling pain is equally evidence that the computer implant is not functioning properly. If the computer implant were truly functionally equivalent, then the output from it to the speech center would be such that Sally would say that she did feel pain. In other words, if Sally had an intact brain which functioned properly, then the information going into the pain center indicating pain in her foot would be followed by information leaving the pain center en route to the speech center, and finally resulting in Sally saying "ouch" or at least being put into a state such that when asked about pain, Sally acknowledges pain. An adequate computer replacement must do the same thing.

So, let us suppose the computer implant in Sally's head is adjusted or replaced with another computer component so that the correct functionality is achieved. Once this is done, Sally responds normally when pricked with a pin. She says, "Ouch!" and readily acknowledges pain when asked.

But, suppose we are not convinced by Sally's report of her pain. We ask Sally, "Does the pain feel the same as the pain you felt when your brain was functioning normally and you received such a pin prick on the foot?" Sally might say that it does, but suppose she says, "No." Sally claims that she feels something, but it is quite different from the way it used to feel when she was pricked on the foot. What does such a result show? The evidence indicates that Sally feels something but it isn't quite the normal feeling of pain. The evidence for abnormality of the feeling can, thus, be taken as evidence for readjusting the functionality of the computer implant so that the report of the pain experience is a report of normal pain feeling. In other words, if the replacement were really a functional equivalent of the original brain pain center, then the information sent to the memory areas in the brain should duplicate the information that would have been sent by a normal functioning brain pain center. Because this is not the case in light of Sally's assertion that her current feeling is much different from her old feeling of pain, some functional adjustment is again needed. After the adjustment is made, Sally readily tells us that not only does she feel pain, but the pain is just the same as the pain she used to feel with her brain intact.

As we can see from the foregoing, a difficulty with the replacement test is that the evidence gathered for or against computer feelings is still essentially indirect. Behavioral evidence indicating a lack of feeling is equally evidence for the improper functioning of the computer implant. In principle, the behavioral evidence can always be manipulated by adjusting the functionality of the computer replacement component. This makes the test inconclusive. Appropriate behavior may be the result of massaging the evidence, and thus not indicative of inner feelings at all. Perhaps the test can be made more direct. Rather than a third person report, what is needed is a first person report.

So in an attempt at a direct version of the replacement test, a gallant researcher decides to have the electronic components implanted in himself. Now he will know immediately whether or not he feels pain and whether the pain is just like the pain he used to feel. Of course, he already knows on the basis of the indirect replacement test that his outward behavior will indicate pain, hence others will think he is in pain whether he is actually in pain or not. Indeed, there will be no way for him to signal to others about the nature of his inner experiences. It will do little good to prearrange an eye signal, for instance, where three quick blinks of the right eye means "Trust my behavior; I'm really in pain" and three quick blinks of the left eye means "Ignore my behavior; I feel no pain." Eye blinks are behavior, and if the electronic components are functioning properly then the scientist should give the trustworthy signal. If he didn't give the right signal, then his fellow scientists would know the implant was not functioning properly and would make the appropriate adjustments so that he did give the right signal in the future.

Is this direct version of the replacement test decisive at least for one person? I think not. It seems obvious that our behavior is highly dependent on our beliefs. Thus, it may not be possible for our guinea pig scientist to believe he is not in pain and behave completely as if he is. In other words, for the scientist to act consistently as if he is in pain it will be necessary to implant a functional unit that gives him the belief he is in pain. A highly schematic functional configuration looks like this:

      ,------------> [Reflex Behavior]
      |
[Pain System] --> [Belief System] --> [Other Behavior]

Hence, even if the computer implant does not generate the feeling of pain, the scientist himself will delusionally believe that it does. He will be sincere, though mistaken, in his reports that he feels pain which is similar to the pain he used to feel when his brain was exclusively biological.
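The first functional configuration can be made concrete with a short toy sketch. This is purely illustrative (the class names, messages, and structure are my own assumptions, not Moor's): because every non-reflex behavior is routed through the belief system, any implant that produces consistent pain behavior must also install the belief that one is in pain, so the scientist's reports are sincere whether or not any feeling occurs.

```python
class PainImplant:
    """Hypothetical electronic replacement for the pain system; whether
    it generates any feeling is exactly what is in question."""
    def signal(self, stimulus):
        # Emit the same outputs a biological pain center would emit.
        return {"reflex": "withdraw", "to_belief_system": "pain"}

class BeliefSystem:
    """In this configuration, all non-reflex pain behavior is driven by belief."""
    def __init__(self):
        self.beliefs = set()
    def receive(self, message):
        if message == "pain":
            self.beliefs.add("I am in pain")
    def behave(self):
        return "say 'ouch'" if "I am in pain" in self.beliefs else "no report"

implant, beliefs = PainImplant(), BeliefSystem()
out = implant.signal("pin prick")
beliefs.receive(out["to_belief_system"])

print(out["reflex"])     # -> withdraw  (reflex path, no belief needed)
print(beliefs.behave())  # -> say 'ouch'  (sincere report, installed belief)
# Nothing in the model settles whether any feeling accompanies the belief.
```

The sketch only restates the architecture of the diagram; it takes no stand on whether the implant has qualia, which is Moor's point.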

My skepticism about the direct replacement test is based on a hypothesis about the way our brain works, viz., that our beliefs are instantiated some way in our brains and as such they play a critical causal role in determining our behavior. Of course, an actor can produce pain behavior without believing he is in pain. But an actor can break character; he can choose not to display the pain behavior. This is not the case in the direct replacement test. Our assumption here is that the behavior of the guinea pig scientist is the same as the behavior of someone who really is in pain. It is hard to imagine such consistent pain behavior occurring without the appropriate belief in pain. I am not suggesting, as I think Shoemaker (1975) does, that an adequate pain belief requires the feeling of pain. My hypothesis is the empirical claim that for a complete and convincing repertoire of pain behavior the agent must believe he is in pain. I think my empirical hypothesis is reasonable, but it may be wrong--and even if it is right, there may be the possibility that the belief system can be bypassed with regard to the behavioral output, yet receive information directly from the pain system. The functional arrangement would be this:

      ,------------> [Reflex Behavior]
      |
[Pain System] --> [Belief System]
      |
      `------------> [Other Behavior]

Can the direct replacement test be relied on in this setup? This arrangement does look more favorable, for now it is not required that the belief system contain the belief that the scientist is in pain in order to causally generate the appropriate pain behavior. Of course, the scientist with the implant will still say he is in pain and act as if he is in pain in the appropriate circumstances, but there is now the possibility that he will believe he is not in pain. It now seems that the scientist can know from a first person point of view whether or not the computer replacement really gives him the subjective feeling of pain. But, because his behavioral repertoire is disconnected from his belief system, he will not be able to relay information to his fellow scientists about the results of the test. The situation may seem strange indeed if he believes that he is not in pain but finds his body acting as if he is in pain. For example, he may observe his own body uncontrollably writing articles about the success of the implant and how it really generates the experience of pain while knowing all along that the implant is a fraud and doesn't do anything except generate the outward appearances of pain.

Even within this functional configuration there may be difficulties. Some outputs of the pain system will become inputs to the belief system. The designers of the computer replacement for the pain system can determine what the guinea pig scientist will believe in the direct replacement test by their choice of outputs of the computer implant. Thus, the designers of the computer replacement can guarantee what the guinea pig scientist will believe about his subjective experiences whether he has them or not. What this suggests is that even this special version of the direct replacement test is not decisive. The guinea pig scientist's belief about his qualia will be both uncertain and ineffable.
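The bypass configuration, and the point about designer-determined beliefs, can likewise be sketched as a toy model (all names here are illustrative assumptions): the scientist's outward behavior is fixed directly by the implant, while what he believes depends entirely on which message the designers route to his belief system.

```python
def run_trial(message_to_belief_system):
    """Behavioral outputs come directly from the implanted pain system;
    the belief system receives only what the designers choose to send it
    and drives no behavior of its own."""
    # Caused directly by the pain system, bypassing belief:
    behavior = ["withdraw (reflex)", "say 'ouch'"]
    if message_to_belief_system == "pain":
        belief = "I am in pain"
    else:
        belief = "I feel nothing"
    return behavior, belief

# Two trials with identical stimulation but different designer choices
# about what the belief system is told:
behavior_a, belief_a = run_trial("pain")
behavior_b, belief_b = run_trial("silence")

assert behavior_a == behavior_b   # outward behavior is indistinguishable
print(belief_a)  # -> I am in pain
print(belief_b)  # -> I feel nothing
```

The assertion is the crux: since behavior is identical under either belief, neither the observers nor the scientist's own reports can settle what, if anything, he feels, which is why even this version of the test is indecisive.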

In summary, the transmission test and the replacement test are inconclusive. If these tests are viewed through a functionalist framework, then either the right results can be guaranteed or the wrong results can be explained away. Whether robots really have qualia is a contingent matter which cannot be rigorously empirically tested. I don't believe this defeats functionalism, but it does suggest that a significant part of the argument for functionalism must be non-evidential.

4. The Argument for Qualia

What defense does a functionalist have against the charge that it is possible that robots which instantiate functional systems like the ones humans instantiate lack qualia? This problem is the robot corollary to the problem of other minds. Part of the defense is to address the problem of other minds. After all, it is possible that some humans who instantiate biological systems like the ones we instantiate lack qualia. There are many ways of partitioning the human population, granting qualia to some and not to others. Perhaps all humans have qualia except those of the opposite sex, or those who are born in a country other than one's native land, or those who lived during the 19th century, or those who are not identical with me. These hypotheses are bizarre but not inconsistent. Why do we reject them? A traditional answer is that these other humans are similar to us (me!). By analogy we (I) grant them qualia. But I think this traditional answer is inadequate. A skeptic has only to argue that though these other humans are similar, they are not similar enough.

A better answer is that the attribution of qualia gives us essential explanatory power. Imagine what it would be like if one seriously denied that some humans had qualia. The most ordinary behavior of members of this group would become virtually incomprehensible. How would we understand the actions and words of an absent-qualia human who on a winter's day came inside shivering and complained at length about the cold? There is an extension, but no inflation, of ontology in granting others qualia similar to our own. And, there is an enormous gain in explanatory power in granting others qualia similar to our own. Good explanations are what determine ontology in this case.

Good explanations determine ontology for robots too. If, as I have been assuming, some robots act in ways which closely approximate human behavior, and their brains are functionally equivalent to human brains, then attribution of qualia to robots will be necessary for a reasonable understanding of their actions. I am not denying that there will be lower level explanations of robot behavior in terms of circuitry any more than I am denying that there will be lower level explanations of human behavior in terms of neurology. (Moor, 1978a) I believe these lower level explanations will be compatible with, but in general not as perspicuous as, higher level explanations of behavior in terms of qualia. Of course, it will always be logically possible that attributing qualia to robots is a mistake. But this is a general feature of induction and not a special problem about qualia. (Moor, 1976)

116 Chapter Two

What becomes of the common sense objection that electronic robots are made out of the wrong stuff to have qualitative experiences? I think the objection does have some force, but it is important to realize that this force rests on an ambiguity in the objection. On one reading, the objection is about the empirical possibility of constructing robots--on this reading, the claim is that electronic components can never be assembled to produce robots that have the appropriate affective behavior, or can never be organized into a functional system that duplicates the functionality of the human brain. In the abstract, I don't find this version of the objection plausible. Assuming human behavior is directed by the human brain and the brain operates through the firing of neurons which can be described by a computable function, a computer could be designed to instantiate the relevant portions of the biological system. In other words, at the abstract level functionalism seems invincible. But, functionalism ultimately must become a scientific theory which specifies concretely what the relevant functionality of the brain is and how electronic components can perform it. Thus, it is possible that the wrong stuff objection will turn out to be correct for some straightforward technical or empirical reason, the import of which would be that the relevant sophisticated functionality simply cannot be created in nonbiological material.

The other interpretation of the wrong stuff objection is conceptual. Its claim is that, even if a robot could be constructed such that it exhibited the appropriate behavior and had the appropriate internal functionality, the robot would not have qualia because it would be made of the wrong material. On this level, I think the objection is no longer a claim that can be decided by gathering further empirical evidence. Tests like the transmission test and the replacement test may be helpful if we already attribute qualia to robots, but they will never be decisive against the absent-qualia objection. What is needed to answer this objection is a conceptual argument about explanatory power. If robots which had the appropriate behavior and functionality were constructed, then the increase in explanatory power would eliminate the meat/metal distinction with regard to qualia. In such a situation, there is no wrong stuff; people and robots are both made of chemicals, and as it turns out, chemicals are the right stuff.

--11--

Commentary 117

A key dimension of Moor's argument is directed against the objection that functionalism fails as a theory of mind because it cannot show how to replicate in robots such qualitative experiences as sensations, feelings, and emotions. But since it seems unlikely that we could ever design tests for determining the "qualitative character," if any, inherent in a robot's functional processes, the objection turns out to be ineffective against a functionalist strategy. The real challenge for functionalism, Moor argues, is to develop a scientific theory capable of specifying in concrete terms the relevant functionality of the brain and how it can be performed by electronic components.

Suppose functionalism were eventually to meet this challenge. According to Moor, it could then appropriate the explanatory "leverage" that comes from attributing qualia to computational mechanisms which have been designed in accordance with its theory of mind. After all, the mechanisms would behave pretty much the way we behave. So, since we gain explanatory leverage with respect to human behavior by invoking references to qualia, why wouldn't we gain--and, indeed, find it useful to do so--the same sort of leverage with respect to robots? Thus, whether or not Moor's robot actually entertains qualitative experiences, as long as it acts as though it did, we would be justified in ascribing to it the presence of qualia. And, happily, our ontological commitment would go no further than this, for as Moor observes, "good explanations are what determine ontology." (Moor: this volume, p. 115) Doesn't this imply that the "absent qualia" issue is irrelevant to puzzles concerning the nature and function of mental processing?

On the surface, this appears to be a paradoxical position. For how can the concept of "qualia" provide us with explanatory leverage if we have no interest in the actual relation that might hold between qualia and behavior? The following commentaries suggest that Moor's conclusion may be vulnerable to two lines of criticism. Robert Van Gulick's commentary challenges Moor's conception of the constraints within which functionalism must operate as a science of the mind. He maintains that Moor has not really demonstrated that a functionalist account of the mind's "internal organization" is incapable of explaining the relation between qualia and behavior, and that such an account would therefore be better served by just ignoring the issue.

From a second angle, Henry Johnstone questions whether Moor has analyzed the only strategies worth considering when it comes to testing robots for qualia. Do the "transmission" and "replacement" tests really exhaust the list? Johnstone suggests the possibility of a "communication" test, a test which he feels might elicit solid evidence regarding the presence or absence of qualia in computational mechanisms.

Van Gulick's commentary, which comes first, begins with a general criticism of Moor's view. He argues that Moor's primary assumption is incompatible with his subsequent rejection of the metaphysical aspect of the qualia issue. Moor has postulated the existence of a "functional equivalence" between his robot's behavior and our own without having a definitive conception of what such an equivalence would require. How, then, can he use this assumption as the basis for conclusions about the metaphysical side of the qualia issue? "Unless we can say what counts as playing the same functional role as a qualitative state," Van Gulick writes, "we cannot hope to determine whether non-qualia states could play such roles." (Van Gulick: this volume, p. 120) His analysis of the concept of "functional equivalence" hinges on the distinction between "what some item does and how it does it," and this leads him to stress the theme of "psychological equivalence." (p. 121) For Van Gulick, the key lies in determining "how qualitatively differentiated representations function and how such functions might be realized by underlying causal mechanisms." (p. 122) Given such a theory, he argues, the functionalist might well be in a position not only to address the qualia issue, but also to actually make effective use of Moor's "transmission" and "replacement" tests.

ROBERT VAN GULICK

Qualia, Functional Equivalence, and Computation

Despite their impressive abilities to calculate and process information, present day computers do not have feelings, experiences, or inner lives involving qualia or phenomenal properties. Is this merely a reflection of the present limited state of computer technology or are there a priori and conceptual reasons which preclude the possibility of developing computers with qualia? If, in the future, robots are built which appear to display the full range of human affective behavior, how would we decide whether or not they did in fact have feelings and experiences? How could we determine whether they felt pains and enjoyed the taste of chocolate or merely simulated the human behaviors associated with such inner states?

These are the questions Moor addresses in his paper, "Testing Robots for Qualia." He hypothesizes the existence of a future robot that "behaves in a manner closely approximating human behavior and that ... has an internal organization which is functionally equivalent to relevant biological systems in a human being." (Moor: this volume, p. 108) He considers two sorts of tests which might be used to determine whether such a robot had experiences or qualia: the transmission test and the replacement test, each of which has a direct and an indirect version. He finds that none of these tests would decisively answer the robot qualia question. He concludes that the question is not subject to rigorous empirical test, at least insofar as one remains committed to the basically functionalist view of mind which he takes to be implicit in the tests discussed. He argues that attributions of qualia to robots would have to be largely non-evidential and would be justified instead on the basis of explanatory power with respect to robot behavior.

I am inclined to agree with Professor Moor about the inconclusive nature of the tests he considers, but to disagree about the general consequences for functionalism. The two sorts of tests he considers do not seem to exhaust the options open to the functionalist. Moreover, his initial formulation of the problem threatens to make qualia epiphenomenal in a way which would undermine his proposal to justify qualia attributions on the basis of their explanatory power. Let us begin by considering his statement of the problem. Moor's robot is hypothesized to have an internal organization which is "functionally equivalent" or "functionally analogous" to the behavior controlling systems of the human brain. However, it is not at all clear what such equivalence requires.

At some points Moor seems to suggest it requires only input/output (I/O) equivalence; that is, the functionality of the system, the relationship of inputs and outputs, is maintained although the makeup of the system is changed. I/O equivalence is a fairly weak relation and allows enormous variation in the causal system mediating inputs and outputs. It is not surprising that there might be systems which are I/O equivalent to humans but which lack qualitative or experiential states. Such a finding would have little impact on functionalism. I/O equivalence requires only simulation of human behavior, and most functionalists have denied that purely behavioral criteria can suffice for the application of mental predicates.

119 H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 119-126. © 1988 by D. Reidel Publishing Company.

120 Robert Van Gulick

I suspect that Professor Moor has a stronger equivalence relation in mind since he writes of basing the robot's design "on the internal workings of a human being," and in his discussion of the replacement test he describes the substituted electronic component as functionally equivalent to the replaced brain portion. But he does not explain just what sort of equivalence this might be or in what respects the robot's internal workings (or Sally's electronic implant) are analogous to those in a human brain. This is unfortunate since the notion of functional equivalence is notoriously slippery [1] and central to the question at hand. Unless we can say what counts as playing the same functional role as a qualitative state (i.e. as being functionally equivalent to such a state), we cannot hope to determine whether non-qualia states could play such roles.

Although he does not make an explicit statement on the issue, Moor seems to think of functional roles as nodes in a network of states defined by their relations to inputs, outputs, and one another, with the nodes linked by the relation of simple causation. That is, the state, behavior, or perceptual input at a node is linked to another if it typically causes or is caused by the latter. The network should also allow for causal inhibition and cases in which activation of more than a single node is required to produce a subsequent effect. Despite these complications the basic linking relation remains that of simple cause and effect.

This view is naturally associated with machine-state functionalism and with the popular technique of defining functional roles by the modified Ramsey method used by Lewis [2] and Block [3] insofar as the relevant network is interpreted only in input and output terms. The functional roles thus defined are quite abstract; the range of realizations is constrained primarily by the nature of inputs and outputs, and there is no procedure for requiring relations among nodes more specific than mere cause and effect. The method does not normally provide for more specific interactions such as requiring the occupant of node A2 to pass sodium ions or a certain string of binary code to the occupant of B3. In brief, it precludes requiring one node to bear any relation of qualitative or phenomenal similarity to another. Such models can only require that the occupant of A2 play some role in the causation, activation, or inhibition of B3.
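The abstractness of such causal networks can be made explicit with a small sketch. This is a toy model under my own assumptions, not anything in Van Gulick's text (only the node names A2 and B3 follow his examples): the sole vocabulary for linking nodes is excitation and inhibition, so the model can say *that* A2 plays some role in activating B3, but nothing about *how* it does so.

```python
from collections import defaultdict

class CausalNetwork:
    """An abstract network of nodes linked only by excitation and
    inhibition, with no way to express sodium ions, bit strings, or
    qualitative similarity between the occupants of the nodes."""
    def __init__(self):
        self.excites = defaultdict(set)
        self.inhibits = defaultdict(set)
    def add_cause(self, src, dst):
        self.excites[src].add(dst)
    def add_inhibit(self, src, dst):
        self.inhibits[src].add(dst)
    def step(self, active):
        """One step of propagation: a node fires next if some active
        node excites it and no active node inhibits it."""
        fired = set()
        for node in active:
            fired |= self.excites[node]
        for node in active:
            fired -= self.inhibits[node]
        return fired

net = CausalNetwork()
net.add_cause("A2", "B3")    # A2 plays *some* role in causing B3
net.add_inhibit("C1", "B3")  # C1 plays *some* role in inhibiting B3

print(net.step({"A2"}))        # -> {'B3'}
print(net.step({"A2", "C1"}))  # -> set()
```

Any realization with the same excitation/inhibition pattern counts as equivalent here, which is exactly why networks of this kind cannot, by themselves, exclude non-qualia realizations.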

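The modified Ramsey method can be stated schematically. The following is a generic Lewis-style sketch, not a quotation from Lewis or Block: the mental-state terms of a psychological theory $T$ are replaced by variables, leaving only input terms $i$ and output terms $o$ interpreted.

```latex
% Ramsey sentence of the theory T: mental-state terms s_1,...,s_n become
% bound variables; only inputs i and outputs o remain interpreted.
\exists x_1 \cdots \exists x_n \; T(x_1,\ldots,x_n;\, i_1,\ldots,i_k;\, o_1,\ldots,o_m)

% A state y occupies the j-th functional role iff it is the j-th member
% of some n-tuple of states satisfying T:
R_j(y) \;\equiv\; \exists x_1 \cdots \exists x_n
  \bigl[\, T(x_1,\ldots,x_n;\, i_1,\ldots,i_k;\, o_1,\ldots,o_m) \wedge y = x_j \,\bigr]
```

Since the $x$'s are constrained only by their causal pattern relative to the $i$'s and $o$'s, any realizing tuple with the right pattern qualifies; this is why the method cannot impose relations of qualitative similarity among the occupants.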
Qualia, Functional Equivalence, and Computation

If the functionalist is restricted to abstract causal networks of this sort, interpreted only in terms of their perceptual inputs and behavioral outputs, it seems unlikely that he will be able to exclude non-qualia realizations. But the moral to be drawn is not that functional descriptions cannot capture qualia, but rather that a richer vocabulary is required for specifying functional networks. Thus we have still not arrived at a satisfactory interpretation of our original question: could a robot which had an internal organization functionally equivalent to a human brain lack qualia? The notion of functional equivalence cannot be interpreted as I/O equivalence or as equivalence with respect to a simple cause and effect network of the kind just described without trivializing the question. On either reading the answer is probably, but uninterestingly, affirmative.

Delimiting the relevant notion of functional equivalence or functional role requires a principled way of distinguishing between what some item does and how it does it, which allows for the possibility that some structurally distinct item might do the same thing but in a different way. A fuse and a circuit breaker both prevent current from exceeding a certain maximum. One does so by melting as a result of heat generated by electrical resistance; the other opens because of electromagnetic repulsion. However, their description as functionally equivalent is principled only relative to a given level of abstraction and an associated context of pragmatic interests. If the context shifts to include other causal interests, such as interactions with nearby heat-sensitive or magnetically sensitive components, the two will no longer count as functionally equivalent.

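The fuse/circuit-breaker point can be put in code. The toy classes below are invented for illustration (the names `Fuse`, `Breaker`, the `heat` figure, and the `aspects` parameter are all assumptions); they show how equivalence holds at one level of description and fails at a finer one:

```python
# Toy illustration: two devices functionally equivalent at the level of
# "interrupt current above a maximum", but not once heat side effects
# enter the description. All names and figures are invented.

class Fuse:
    def respond(self, current, max_current=10):
        interrupted = current > max_current   # melts open
        heat = current ** 2 * 0.1             # resistive heating
        return {"interrupted": interrupted, "heat": heat}

class Breaker:
    def respond(self, current, max_current=10):
        interrupted = current > max_current   # electromagnetic trip
        return {"interrupted": interrupted, "heat": 0.0}

def equivalent(a, b, currents, aspects):
    """Equivalence is relative to which causal aspects we attend to."""
    return all(a.respond(c)[k] == b.respond(c)[k] for c in currents for k in aspects)

fuse, breaker = Fuse(), Breaker()
trials = [5, 15]
print(equivalent(fuse, breaker, trials, ["interrupted"]))          # True
print(equivalent(fuse, breaker, trials, ["interrupted", "heat"]))  # False
```

Which description counts as "the" functional one is fixed not by the devices themselves but by the aspects we choose to compare; shifting the context of interest shifts the verdict.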
This well known relativity of functional equivalence [4] has important application to Moor's question. We want to determine whether a robot could lack qualia while having an internal organization functionally equivalent to a human brain in all psychologically relevant respects. We cannot require the robot's organization to be causally equivalent in all respects, for then nothing would suffice except giving the robot an artificial but molecule for molecule duplicate of a human brain. Any difference in composition which was perceptible or even indirectly detectable would constitute a difference in causal role. Thus, what is required is some relation weaker than total causal-role equivalence but stronger than I/O equivalence or simple causal network equivalence. We want a notion of psychological equivalence, but unfortunately that notion is itself far from clear. How do we draw the line between a brain component's psychological role and the non-psychological facts about how it fills that role? In fact, it seems there will be no unique way of drawing such a line: rather the line will shift depending upon our particular psychological inquiry.

In the case of a computer subsystem, we may be content to describe it in terms of its input/output function as a multiplier. But in other cases we may wish to push farther and distinguish between two such I/O-equivalent units if one produces its results by serial additions and the other relies in part upon circuit analogs of multiplication tables. We will often wish to distinguish among devices that operate according to different algorithms, have different architectures, or employ different sorts of representations, even if they produce similar outputs.

Robert Van Gulick

It seems likely that in at least some psychological cases, we will want to distinguish between systems with qualia and those without. Consider color qualia. They are most plausibly treated as properties of complex 3-dimensional representations. Normal visual perception produces representations with the formal structure of a 3-D manifold whose regions are differentiated at least in part by color qualia. Those colors also have a complex formal structure of similarities, unique/binary relations, and brightness relations. While it might be possible to process and store the information contained in the visual manifold in other non-qualitative ways, they would be importantly different from those involved in normal visual perception. Non-qualitative representations might be informationally equivalent, but they would have to be quite different in format, structure, and the nature of the processes which operated with respect to them.

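The multiplier case above can be sketched directly; both routines below are illustrative inventions, one computing by serial addition and the other by table lookup:

```python
# Two I/O-equivalent "multiplier" units (illustrative sketch): identical
# input/output function, different internal procedure.

def mult_serial(a, b):
    """Multiply by repeated addition."""
    total = 0
    for _ in range(b):
        total += a
    return total

# Precomputed "multiplication table", analogous to a circuit lookup unit.
TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def mult_table(a, b):
    return TABLE[(a, b)]

# Indistinguishable at the I/O level (over the table's domain)...
assert all(mult_serial(a, b) == mult_table(a, b)
           for a in range(10) for b in range(10))
# ...yet distinguishable by a finer-grained, "how it does it" description:
# step counts, intermediate states, and memory use all differ.
print(mult_serial(7, 8), mult_table(7, 8))  # 56 56
```

A psychology that cares only about the input/output function treats the two units as one; a psychology that cares about algorithm, architecture, or representational format does not.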
Thus, if we are employing a notion of psychological equivalence which distinguishes among psychological subsystems on the basis of the sorts of representations and processes they employ, we will get a negative answer to our original question. No robot component could be functionally equivalent to such a brain system in the psychologically relevant sense unless it involved the use of qualitatively differentiated representations. The functionalist need not restrict himself to Professor Moor's two sets of tests. Rather, he can appeal to evidence about how the component subsystems of the robot operate. Just what sort of evidence he will need is at present uncertain, since we remain ignorant about the underlying physical basis of qualitatively differentiated representations in the brain. But we can reasonably hope that theoretical understanding of such matters will be forthcoming and may well arrive on the scene before the advent of convincingly humanoid robots of the sort Professor Moor hypothesizes.

Given an adequate theory of how qualitatively differentiated representations function and how such functions might be realized by underlying causal mechanisms, the functionalist would be prepared to address the robot-qualia question. There is no need for a transmission test. Neither direct nor indirect empathetic perception of robot qualia would be needed to establish their existence. Rather, it could be established in the standard scientific way by theory-based inferences from data about the robot's internal physical structure and activity, just as scientists today indirectly establish the existence of catalyzing enzymes in protein construction or photon-captures in photosynthesis. Scientific observation of qualia need not be empathetic.

Some versions of the transmission test could nonetheless be useful. If, for example, qualia should turn out to be associated with dynamic properties of electrical fields, as certain Gestalt psychologists conjectured early in this century, a transmission test might be devised which replicated in the perceiver the sort of fields occurring in the "brain" of the subject being observed. The instrumentation for such a test would have to be based upon a prior theory about the underlying basis of qualia, but it would avoid the sorts of Lockean worries raised by Professor Moor. Given suitable theory and technology, empathetic perception might become possible as a supplement to indirect methods of non-empathetic observation.

A functionalist theory of qualia would also provide a more satisfactory formulation of the replacement test. The replacing component would have to do more than replicate the causal effects of Sally's damaged brain unit relative to verbal behavior, non-verbal behavior, and the production of verbally encoded belief representations. It would have to have a physical organization of the sort needed to realize the functional properties theoretically associated with qualitatively differentiated representations. Without such a structure it might show the right sort of input/output activity, but it would not be producing those outputs in the required way.

Moreover, I am skeptical that non-qualia components could produce all the right outputs. As Moor notes, outputs include beliefs about qualia, and I am more sympathetic than he is to Shoemaker's claim that a creature without qualia could not have the relevant beliefs about qualia [5]. Though it is not quite Shoemaker's way of making the claim, a quick argument can be given to establish his point. One cannot believe a proposition one does not understand. A creature without qualia cannot fully understand what qualia are; so, such a creature cannot fully understand or believe propositions about qualia. Such a creature could not have beliefs equivalent in content to those which a normal human has when he believes that he is having a toothache, a red after-image, or is savoring the taste of a good Chardonnay. Thus, no component in a non-qualia robot could produce all the outputs produced by a normal qualia component in a human brain.

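The quick argument can be regimented as follows (a schematic rendering, not Shoemaker's or Van Gulick's own formulation), with $B(x,p)$ for "x believes p," $U(x,p)$ for "x fully understands p," $Q(x)$ for "x has qualia," and $p$ ranging over propositions about qualia:

```latex
\begin{align*}
&(1)\quad B(x,p) \rightarrow U(x,p)
  && \text{one cannot believe what one does not understand}\\
&(2)\quad \neg Q(x) \rightarrow \neg U(x,p)
  && \text{without qualia, no full understanding of qualia-propositions}\\
&\therefore\quad \neg Q(x) \rightarrow \neg B(x,p)
  && \text{without qualia, no beliefs about qualia}
\end{align*}
```

The conclusion follows by contraposing (1) and chaining it with (2); everything turns, of course, on the strength of premise (2).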
The functionalist equipped with an adequate theory would also be in a much better position to make the sorts of explanatory appeals to qualia that Professor Moor falls back upon at the end of his discussion. For without such a theory, it is not at all clear what explanatory work qualia are to do. In Professor Moor's hypothesized cases, the robot's internal workings are to be functionally equivalent to the behavior-regulating portions of the human brain, while leaving the qualia question open. In such a case, what additional explanatory value could be purchased by attributing qualia to the robot? We might make the robot empathetically comprehensible to ourselves, but this would work as well for non-qualia robots as long as they simulated human behavior. Professor Moor does appeal to levels of explanation, and claims correctly that a complete description at the microphysical level will not suffice for every explanatory purpose. However, by allowing that any functional role filled by a qualia component might also be filled by non-qualia structures or processes, he deprives qualia of any causal explanatory role. He makes qualia (or at least the difference between qualia and non-qualia processes) epiphenomenal. By contrast, the functionalist with a theory about how qualitatively differentiated representations function and are realized can invoke it to explain the causal operations of the relevant internal components, such as those underlying visual perception.

Professor Moor's distinction between qualia attributions based on evidential considerations and those based on explanatory considerations is not really viable. What we want is a theory which allows us to use detailed evidence about internal organization to explain how qualia function in the causation of behavior. Qualia attributions made in the context of such a theory would be genuinely explanatory. One final point requires mention. Moor sometimes asks whether we could build a computer with qualia and at other times whether we could build a robot or electronic device with qualia (the meat/metal distinction). Though he seems to regard these questions as interchangeable, they should be kept distinct. While most present day computers are electronic devices, not all electronic devices are computers. Nor need future computers be electronic. The computational theory of mind should not be confused with a commitment to physicalism or mechanism. Many critics of the computational view, such as John Searle, explicitly maintain a materialist view of mind [6]. The materialist is committed only to the claim that producing qualia requires building a system with the necessary physical organization. Computationalists claim that the relevant features of that organization are solely computational. According to them, having a mind, mental states, or perhaps even qualia requires only having a physical organization that instantiates an appropriate formally specifiable computational structure. No other physical constraints are placed on the class of systems with genuine minds. What physicalists like Searle object to is the suggestion that sufficient conditions for having a mind can be specified in such an abstract vocabulary unconstrained by any specific conditions on the details of physical constitution.

By analogy, we might apply the computationalist/physicalist distinction to the case of artificial genes. Is it possible to build robots or computers capable of "sexual" reproduction? The physical basis of human sexual reproduction and genetic transmission is today well established, and thus it is at least possible to construct artificial sexually reproducing physical devices. But it is far less obvious that doing so need only be a matter of making devices with an appropriate computational structure. Nor is it clear that electronic components could carry off the task. Consider how the claim Moor makes in his second to last paragraph about mimicking the computational structure of the brain would read if modified as a claim about genes: "Assuming human reproduction is directed by human genes and the genes operate through the activity of nucleotides which can be described by a computable function, a computer could be designed to instantiate the relevant part of the biological system." As an orderly physical process the activity of the nucleotides probably can be described by a computable function, but not every realization of that function will be a system of sexual reproduction or an instantiation of the relevant parts of the original biological system.

The anti-computational physicalist claims that having thoughts, experiences, and qualia is more like being capable of sexual reproduction than like being an adding machine. Having the requisite sort of causal organization is not merely a matter of instantiating a certain sort of formal or abstract computational structure. More concrete causal constraints apply. Still, it may turn out that the relevant sorts of causal processes involved in having qualia can be produced in electronic as well as organic components. If, for example, the old Gestalt proposal identifying experience with electrical fields happens to be correct, such fields might be capable of production by non-organic components. But that result would still not confirm computationalist claims given our present ignorance about the physical basis of qualitative experience; for there is little we can say about the range of systems in which qualia might be produced. But in considering and investigating the question, we should be clear about the options and not confuse physicalism with computationalism, nor general questions about robots with more particular questions about computers.

--1/--

Moor has assumed a position which aligns him with those functionalists who maintain that in order to understand the mind all that is needed is a computational replication of human skills and behavior. For them any physical organization capable of the requisite computational functions would suffice: "No other physical constraints are placed on the class of systems with genuine minds." (Van Gulick: this volume, p. 124) Van Gulick, however, notes that there is an important distinction which computationalists tend to overlook, namely that between "what some item does and how it does it." Given this distinction, the concept of "functional equivalence" used by Moor and the computationalists is inadequate to the task of describing relevant behaviors in contrasting systems. On the other hand, if an appropriate revision of this key notion is carried through, then their argument fails. And in that case the question about qualia reasserts itself.

Chapter Two

Van Gulick argues, therefore, that the goal of functionalism should be to identify not only the computational structures that constitute the "behavioral organization" of the brain, but the underlying causal mechanisms as well. Whether these mechanisms are themselves ultimately reducible to computational description without remainder is an open question, one awaiting the judgment of further empirical research. If they do happen to turn out to be reducible, then if we could identify the physical processes which give rise to qualia in human beings, we could presumably generate a computational translation that would replicate such processes within a system of programmed functions capable of running on "hardware" quite different from our own, and such a mechanism would experience qualia. The assumption here, of course, is that an "orderly physical process" can be replicated by a "computable function."

But how would we know that it experiences qualia? Would it be enough for us to focus on the "functionally analogous" behavior of the mechanism as Moor does? Van Gulick sees nothing to be gained from this line of questioning. He argues that we would have every reason to attribute the experience of qualia to an artificial intelligence whose structure was transparent to us at the design level and which exhibited all the proper behavioral patterns and responses. He asserts that

it could be established in the standard scientific way by theory-based inferences from data about the robot's internal physical structure and activity, just as scientists today indirectly establish the existence of catalyzing enzymes in protein construction .... Scientific observation of qualia need not be empathetic. (p. 122)

In other words, to the extent that qualia play a functional role in the internal organization of the mechanism's computational and underlying causal structure, there is no reason that the experience of the mechanism would lack any of the qualitative dimensions intrinsic to human experience. Van Gulick would retain but reformulate the testing strategies discussed by Moor, for he feels they might be useful in helping us to comprehend the electrical dynamics of the brain system. But do qualia have the sort of nature that can be replicated in a functionalist format, with or without relationship to an underlying causality?

In the following commentary, Henry Johnstone proposes that qualia are mental phenomena that "emerge" from an interpretive process intimately bound up with communication. If qualia are indeed products of such "interpretive screening," functionalist replications of qualitative experience would need to include some sort of "translation" of the relevant interpretive functions. But, even given that, how would we tell whether the translation was a success? Johnstone's response would be to test robots for the ability to use language. Previously, Moor had argued that the qualia problem is analogous to the problem of other minds, and equally intractable. Johnstone's proposal appears to provide a countervailing view on this matter; for while Moor pushes the skeptical proposition that we cannot determine that qualia exist even in other minds (much less in robots), Johnstone, starting from the fact that we do attribute qualia to other minds, proposes that our use of language can be exploited to determine the presence or absence of qualia in human subjects. He concludes, consequently, that a "communication test" might work with robots as well.

HENRY W. JOHNSTONE, JR.

Animals, Qualia, and Robots

Moor's project is to suggest tests to determine whether machines experience qualia on the assumption that the analysis of the proposition that they do experience them is already settled--at least supposing that they are made of the right stuff. I assume this too. My project is also, in a sense, to suggest a test; but it is a test not so easily formulated as any of Moor's.

Moor asks whether the fact that robots are made of a stuff different from ours (metal, not meat) precludes their experiencing qualia. Is the stuff necessary for the experience? An equally legitimate question is whether the stuff is sufficient for the experience, assuming a properly functioning organism. Do animals experience qualia? We have little difficulty in supposing that monkeys, dogs, and cats do. What of lizards? Bees, we know, are sensitive to many wavelengths in the light spectrum, including wavelengths we are not sensitive to; but do they experience the qualia of red, blue, and ultra-violet? And what would be added to our understanding of bee behavior by saying that they do? With bees, we seem to be confronted with machine-like objects, and Moor ought to find it just as plausible to suppose that bees have qualia as that machines do--and for the same reasons. If transmission and replacement tests can (at least in principle) be designed for machines, there is no reason why they cannot (at least in principle) be designed for bees. Descartes held that animals are machines. One thing that prevents many people from accepting this contention is their inclination to attribute qualia to some animals; e.g., cats, dogs, and monkeys. But with respect to bees, there is not a strong inclination to do this, and hence no great reluctance to agree with Descartes.

If it is reasonable to generalize the qualia question from machines to animals, the question can be further generalized in an interesting way. For the question whether non-humans experience qualia is analogous to the question whether and in what sense non-humans use language. Bees are again a good example. There is clearly a sense in which bees use language. Their dances communicate the whereabouts of nectar. The sense of "communicate" here is the same as that in which machines "communicate" with one another. A radar beacon can communicate to the computer of an airplane the whereabouts of an airport. Is there any need to assume that the bees are communicating with one another in a stronger sense of "communicate" than that in which it is sufficient to assume that the machines are communicating? This is like the question "Is there any need to assume that bees are sensitive to colors in a sense of 'sensitive' stronger than that in which it is sufficient to assume that some machines are sensitive; i.e., that they are responsive to stimuli without experiencing qualia?"

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 127-136. © 1988 by D. Reidel Publishing Company.

It would be helpful if one could offer a clear definition of this stronger sense of "communicate," because we might then see what is meant by the stronger sense of "sensitive," but the task is not easy. Perhaps we could start by claiming that an airplane pilot in contact with a ground controller is communicating or being communicated to in the stronger sense, because as the result of what the controller tells him, the pilot knows where he is. But how will the knowing pilot differ from the autopilot? Unless he operates the controls of the plane in exactly the same way (or very much the same way) that the autopilot would have operated them, he cannot be said really to know at all. Perhaps the pilot "knows" in the sense that he could give an account of the plane's whereabouts to another person. But the onboard computer, in collusion with the beacon, and on the basis of other signals, could probably give a better account. We are on a slippery slope. As long as we treat communication, or the understanding of what is communicated, as a competence, we can always design machines more competent than humans in exercising the skills that we claim humans exercise when they communicate in a sense of "communicate" not applicable to bees. This is an instance of the principle that however we define intelligent behavior, someone can design a machine capable of such behavior. Hence there is no difficulty in showing that machines are intelligent. We capitulate to this conclusion by failing to take issue with the assumption that intelligence is a form of behavior. What has gone wrong similarly with our attempt to isolate a kind of communication higher than that of the bees is that we have failed to resist the assumption that communication is or results in a sort of competence.

We come somewhat closer to grasping the sense of "communication" we are seeking if we see it as having a rhetorical dimension. Rhetoric is required at least to call attention to the content communicated. I must get the attention of my interlocutor or audience if I am to communicate anything at all to him or it. I must bring it about that minds are put onto what I am saying. A pilot knows where he is only if his mind is on his whereabouts. He can, of course, respond to signals like an autopilot, but I think we would characterize such response as automatic or purely reflexive, like the response of the bees, not based on knowledge. The dancing bees engage in no rhetoric of attention-getting, nor does the radar beacon addressing the autopilot. No such rhetoric is necessary, since the members of the dancers' audience have no choice except to respond, and the same for the autopilot. But the attention of the pilot must somehow be drawn to his own situation if he is to be said to "know" where he is. It does not much matter what alerts it--whether another human in the cockpit, or the pilot somehow collecting himself together, or simply spontaneously noticing some signal or reading showing on an instrument. My point is not that machines are never the source of the rhetoric of attention-getting; it is that they cannot be its destination. A person knows his whereabouts only when his attention to his whereabouts has been summoned by some rhetorical stimulus. But it makes no sense to speak of getting a machine's attention; once it is properly switched on and tuned in, it has no choice except to lend its ear.

There is another way to put the point. The relation between the dancing bees and their audience is a dyadic relation; the dancers stimulate the foragers, and the latter respond. But a dyadic relation is an inadequate model of communication in any except the rudimentary and perhaps metaphorical sense in which the bees are said to communicate or a radar beacon may be said to communicate with an autopilot. As Peirce plainly says, three terms are needed: a sign, that of which it is a sign, and the being that interprets the sign. Thus the ground controller's words are a sign to the pilot of the whereabouts of the airport. But we cannot formulate a corresponding analysis of the "communication" of the bees. For it would sound very strange to say that the dance is a sign to the bees of the whereabouts of nectar. That would suggest that the bees had their minds on a task. And such a suggestion contradicts our understanding of insect behavior as mindless--as based on unconditioned reflex rather than intellect.

My appeal to the concept of rhetoric in attempting to characterize communication at a level above the most primitive is not, however, equivalent to my appeal to Peirce's semiotic triad for the same purpose. For we can catch the attention of a being not certainly capable of taking the role of interpretant. A playful or distracted dog may have to be addressed repeatedly before it will finally listen to a command. But once we have gotten its attention, can we be sure that it will interpret the command as a sign of something else? This seems unlikely, for what we mean by a "command," at least as addressed to an animal, is a stimulus intended to elicit a response. Rhetoric, in other words, may be a prerequisite to semiosis, but can also be a prelude to communication at a pre-semiotic level, at least with some animals.

Communication with dogs is, of course, a two-way street, and when the other interlocutor is a human being it can be genuinely semiotic. To a human the dog's bark can be a sign of someone's being at the front door. But of course exactly the same can be true when the source of the sign is an inanimate object such as an instrument in an airplane. And in both cases communication reaches the semiotic level only because of the human participant in the interaction.

What if the dog could understand its own bark as a sign of the presence of someone at the door? Then it would be telling us something in a way most people would regard as uncanny. The dog would itself be communicating at a level above the rudimentary and metaphorical, because its mind would be on the task of getting a message across.

This possibility is not often seriously discussed in connection with dogs, but the question has, in effect, been raised whether chimpanzees can be interpretants of their own signing behavior. If they can, the question whether they are language-users can be answered in the affirmative. If not, this behavior, however complex it may be syntactically and in vocabulary, seems to reduce to a tactic of problem-solving. (The dog is presumably also trying to solve a problem by barking.)

Is there a language in which humans and chimpanzees can communicate? So far as the human participants are concerned, there clearly is. These humans are aware of the semiotic function of their own messages to the chimpanzees and respond to messages from the latter as signs, not just stimuli. But it can still be a moot question whether the chimpanzees see their own messages as signs, or, for that matter, whether they are interpretants of the messages they receive from humans. How would one find out? Any objective test of their status as interpretants could in principle amount to no more than a test of their responses to certain stimuli, and thus would be self-defeating. The only hope would be in asking the chimpanzees whether they are interpretants. We would have to pose questions like "Does the yellow disc mean 'banana'?" Such questions would clearly require a much richer vocabulary and syntax than any hitherto taught chimpanzees. In order to be reasonably sure that the response was not merely the result of training, we would have to insist on the use of a language comprehensive enough to allow question and answer to be framed in a number of ways, as well as to permit excursions into the metalanguage (as in "X means 'X'."). The result would be that any chimpanzee able to understand the question would automatically qualify as an interpretant.

I return to the question of qualia. Qualia emerge in an interpretative process. When this process is absent, objects come close to being pure stimuli. Thus the ring of the telephone can stimulate a reflex arc causing me to pick up the receiver. In this case I hardly notice the sound at all. I do notice it when I am further away from its source, and the ring contrasts with other background noises; then the ring emerges as a quale. But I can fail to notice this contrast, too, if my mind is not on what I am hearing; there may be no qualia for me at all if I am daydreaming.

The interpretative process in which qualia emerge is a kind of discrimination. But "discrimination" is an ambiguous word; it can be a response to pure stimuli, as it probably is in the case of the bees who fly to the sugar-water in the red dish but not in the blue. This feat can be explained without any reference to an act of interpretation. But if humans choose the contents of the red dish over those of the blue, we do not assume that they are conditioned by a stimulus that does not enter into their experience. It is more plausible to suppose that the very distinction between red and blue arises as a way of marking the distinction between positively valued contents and contents without this value; that this distinction is called for as an interpretation of the value distinction. If there were no value distinction, the color distinction would no longer matter, and would tend to fall from view, to lapse.

It would be preposterous to deny that humans can entertain qualia wholly apart from their role as markers of distinctions. It would similarly be preposterous to deny that a sign can be enjoyed for its own sake, wholly apart from the object of which it is a sign. Painting and poetry--not to mention all the other arts, or indeed whatever induces esthesis--would be devastating counterexamples to any such thoughtless denials. It can nonetheless be reasonably claimed that the qualia entertained in esthetic experiences must first be gained as vehicles of distinction. This process can be documented in art itself, which invites us not only to entertain but also to discriminate.

Just as signs can be characterized in terms of their position in a triad, so can qualia. What distinguishes a quale from a pure stimulus is that someone ascribes it to something; e.g., I ascribe the red to the dish. The similarity between this process and that of semiosis is obvious. Qualia are, in fact, signs.

When Moor speaks of qualia, his examples are not primarily colors or sounds. They are pains. The crucial problem for him is whether a robot can experience pains as Sally does. But there seems to be no problem about fitting pains into the triad. A pain is distinct from whatever causes a reflex flinching; that would be a term in a dyadic relation in which pain had not yet arisen. Pain does arise when someone ascribes a certain quale to something (using "someone" in a broad enough sense not to rule out animals or machines). For example, I ascribe a pain to my tooth. The word "ascribe" here may seem a little strange, since I probably have no choice except to feel pain in my tooth. The act of ascription, in the sense intended here, is not the result of reflection. If there is a more suitable word than "ascribe," let it be used.

While it is likely that I have no choice except to feel pain in my tooth, I can--not through choice--fail altogether to notice it, as when my mind is on something else. Similarly, a person for whom a message is intended can, through absent-mindedness, fail to play the role of interpretant. For semiosis to occur, there must be attention. Similarly for qualia.

How do we know whether an organism or machine experiences qualia? Objective tests can in the end do no more than provide stimuli for the subject to respond to. If I flash a red light and you say "red," how do I know that you are experiencing the quale red? And if this is the case for people, it is a fortiori the case for animals and machines. The only hope will have to be what was the only hope of learning whether a subject is an


132 Chapter Two

interpretant--namely, the use of questions and answers in a language sufficiently comprehensive and open to allow us to discuss the subject's experiences with him. I assume, incidentally, that the possibility of such discussions in the case of humans is a powerful argument against solipsism, which in the face of a flexible and reflexive language becomes a hopelessly complicated hypothesis.

If we learn through conversation whether a subject experiences other qualia, we learn about the subject's pains in this way, too. So the question is whether animals and machines can use a suitably complex language to enable them to discuss the matter with us. This conclusion reverses our deepest intuitions about the capacity for pain of animals and that of machines. We think it probable that at least the higher animals experience pain and improbable that machines do. And yet it is far more likely that machines can be constructed capable of using languages of the requisite complexity than that any animals exist with such a capacity.

But perhaps there is a confusion here. Animals exhibit "pain behavior"; they flinch and scream when confronted with certain stimuli. Machines do not usually behave in this way. But the behavior in question has little bearing if any on the issue of qualia. That issue is, I take it, epistemological; we want to know what data can be available to subjects of various sorts. A quale is a datum; a flinch is not. Or, as Moor puts it, a quale presupposes a belief. But only a linguistically sophisticated being can formulate its own beliefs. Again, to refer to another of Moor's examples, the issue is not just pains but the identity of pains, and such identity is neither asserted nor established by a scream.

Quite distinct from epistemological questions, and of far greater practical importance, is the moral question of how to deal with animals and robots. It is obviously cruel gratuitously to stimulate pain behavior, especially on the part of organisms unable to formulate their reactions to the stimuli--able only to scream. A scream is not a report, but it can be a powerful moral imperative.

--II--

Johnstone takes issue with Moor's unspoken assumption that intelligence is a form of behavior involving the exercise of skills and competences that can be replicated in a computational mechanism. If we were to ask Moor how he would determine when a robot is exhibiting intelligence, he would respond by analyzing the behavioral traits of the robot. Similarly, to determine whether or not the robot is communicating with us, he would look for language performance that is functionally analogous to our own. Johnstone is critical of this approach primarily because it overlooks a crucial feature of genuinely intelligent behavior.


Commentary 133

This feature, which sustains our capacity to use language as well as to experience qualia, is explained by Johnstone in terms of the role played by "attention" in shaping our linguistic response to stimuli.

Qualia are said to differ from mere stimuli. A bright light will set in motion a "reflex arc" behavioral response: we blink, or we turn our head away from the light. But what about the painful glare of that light even as the reflex swiftly completes itself? Or, what about a noise in the woods that makes one's skin crawl at night? Why do some sounds catch my attention, while others do not? What about my preference for dark blue over light blue? At what point are we beyond examples of mere stimuli? What is it that gives rise to qualia?

Johnstone argues that the key to experiencing qualia lies not in exhibiting the proper behavior, but in having the internal capacity to identify stimuli as meaningful signs. The ability to discern qualia depends on our capacity to identify, from a first person standpoint, the special characteristics of stimuli infused with significance or meaning relating to our situation. This, in turn, implicates the presence of sophisticated linguistic capacities allowing the creature in question to formulate beliefs in ways that can be communicated to others. Ultimately, then, the existence of qualia appears to go hand-in-hand with the ability to express meaning through channels of communication.

Moor's analysis of the qualia issue has stimulated reflection on some important issues; however, major questions remain unanswered. Since the testing problem (with respect to the metaphysical side of the qualia issue) appears not to have been defused, we may yet have to deal with the issue of how to test for qualia in mechanisms designed to operate as "full-fledged" minds. Nor have we anything more than a provisional understanding of the impact of qualia on behavior. It has been argued that the "taking" function plays a key role in the experience of qualia, but not enough has been said about the nature of this function to determine whether or not it lends itself to computational reduction. We need to examine the possibility that "taking" is an intrinsic, intentional structure of mental processing. If we can determine that intentionality is integral to the "taking" function, then we are brought once again to the central question raised in Chapter One, namely, to what extent is the semantic content of mental processing reducible to the formal syntax of computable functions?

Computationalists, of course, face an additional challenge from those who argue that physical organization plays a key role in the structural makeup of qualitative experience. Proponents of this view contend that, sooner or later, the computational theory of mind must wrestle with the problem of replicating the causal organization of neurobiological functions. Here the computational strategy will meet its match, these critics argue, for the requisite physical-chemical organization cannot be translated into the formal syntax of computable functions. Lacking the


ability to formalize this crucial element of the puzzle, computationalism is stopped short in its attempt to design a network of computational functions capable of replicating a full-fledged mind.

We will begin tackling these issues in the next section, which revolves around a paper by R.J. Nelson. In the context of his investigation of the "taking" function, Professor Nelson proposes an eclectic merger of physicalism and computationalism designed to free functionalism from the constraints that would otherwise be imposed upon it by a purely computational approach. Were he to succeed, he would thereby fulfill the requirement set out by Van Gulick in the latter's critique of Moor. But, in the end, he appears to be driven to admit that even this eclectic merger may fall short as an account of "full blown" intentionality.

2.2 Intentionality

The intentional character of mental activity is a primary object of reflection for most "first person" approaches to the study of mind. In contrast, most (if not all) computational and physicalist approaches continue to stumble against the enigma of intentionality. Indeed, as we witnessed in the Rey and Moor essays, there is a growing tendency from these standpoints to ignore or downplay the importance of intentionality as a characteristic of mental life.

The following paper, by R.J. Nelson, attempts to take the middle road: while admitting that the intentional character of mental life is an important ingredient that must be accounted for by an adequate theory of mind, Nelson tries to show how we might analyze intentionality from a standpoint that merges the computational and physicalist approaches. He calls his approach "mechanism," and contends that while it may fail to capture the essence of conscious intentional attitudes, it is nevertheless sufficient for analyzing all other forms of intentional phenomena, including the full range of our perceptual experiences and our "tacit" beliefs and desires.

Nelson stresses the importance of viewing the mind holistically, but argues that many of the holistic aspects of mental life can in fact be analyzed in terms of computational and/or neuro-biological functions. The early sections of his paper present a criticism of the purely computational approach to the study of mind, highlighting weaknesses inherent in the functionalist's conception of feelings, beliefs and desires as "role-playing" mental states. He also stresses the importance of distinguishing "mental attitudes" (which are intentional in character) from "cognitive skills requiring intelligence" (which need not be intentional). Thus he proposes that:


a computer might "read" stereotypical print, "play" chess, "compose" music, "draw" pictures, and "prove" theorems, but would not believe, perceive, strive for, hope for, or understand a thing. (Nelson: this volume, p. 145)

Of course, the computer might prove to be functionally identical to a conscious entity that really does believe, perceive, strive for, hope for, and understand things. But this would demonstrate nothing more than token-token identity between the two structural systems. Lacking the neural network that instantiates our intentional life, and despite the programmed presence of token-identical logical structures, the computer would be incapable of feelings, sensations, or other subjective experiences, and hence would lack "full-blown" intentionality.

In contrast to Moor, Nelson proposes that the intentional life of mind is dependent on the right (neurological) stuff, and that every conscious mental occurrence is thus type-type identical to an event in the nervous system. This in turn leads Nelson to emphasize a sharp distinction between components and structures: "components are type-type identical to material events, structures are individuated functionally and are token-token identical to material complexes." (p. 147) This is the key to his mechanist version of computationalism. From this standpoint, he proceeds to analyze several key aspects of the intentionality issue, including the "taking" relation (which, given its semiotic character, necessitates an account of the self-referencing character of "recognition states").

As we saw earlier, Johnstone's reflections on the semiotic character of the "taking" relation foreshadowed Nelson's proposals. Johnstone emphasized the interpretive process that underlies all qualitative experience and focussed on the semiotic character of the relation that holds between "sign," "referent," and "interpreter." He contrasted this with a merely "dyadic" relation which structures the operations of computational mechanisms, and concluded that it is our capacity to focus "attention" in selective ways that separates us from these mechanisms, and separates us in ways that cannot be captured by computational-based philosophies of mind. He concluded that while the computational mechanism might be capable of manifesting behavior-tendencies that simulate our own, and might even manifest linguistic capacities good enough to pass a communication test, such a mechanism could not function in ways that are dependent on the "taking" relation, and so would lack the capacity for experiencing qualia or other intentional phenomena.

But what is this "taking relation"? In particular, is it really dependent (as Johnstone's discussion of "attention" has suggested) on conscious mental processing? Nelson will propose, contrary to Johnstone, that there is no reason why computational mechanisms could not be programmed to exhibit functions dependent on the taking relation. Here the


reader should be alert to Nelson's critique of the thesis that all computational functions are reducible in principle to pure syntax. He takes an interesting stand in contrast to the usual critique: one would expect the argument that since intentional states are semantic primitives and computers operate on the basis of syntactic primitives, computers lack intentional states. But Nelson attacks the key assumption that computers must operate solely on the basis of syntactic primitives, basing his argument on the thesis that computational mechanisms are at least capable in principle of behavior that has been generated in accordance with non-conscious intentional states. Mechanisms operating in accordance with non-conscious intentional states "satisfy all of the requisites of full intentionality ... [although] without feeling or awareness ... " (pp. 155-156)

These initiatives are part of Nelson's general attempt to develop a theory of "mind" that preserves the integrity of holistic mental phenomena; and they offer the reader an intriguing perspective from which to think about mind. If Nelson is on the right track, the central issue in cognitive science should no longer be whether computational mechanisms can exhibit functions dependent on the "taking" relation; but, instead, we should be trying to understand the "taking" relation itself, so that we can determine the conditions of satisfaction which would have to be met for a computational mechanism to actually manifest behavior born out of the semiotic character of human "taking." Although the makeup of this mechanism would lack the requisite biological foundation for conscious awareness (since there would be no type-type identity between "mental" occurrences and events in the central nervous system), its structure would be functionally analogous to our own; and, hence, the mechanism would indeed think.


RAYMOND J. NELSON

Mechanism and Intentionality: The New World Knot

Mental life presents many holistic phenomena. Gestalt perception comes to mind, as does the manifold of intentions--beliefs, desires, and actions--and the fabric of linguistic meanings. In this paper I want to argue that none of these wholes resist analysis except for conscious intentional attitudes. It seems we cannot get a theoretical grasp of the difference between tacit (unconscious or preconscious) beliefs, desires, etc. and conscious feeling-laden belief. As I shall endeavor to explain, perception, unconscious attitudes, linguistic competence, and even intellectual skills accompanied by raw feeling (kinesthetic sensation at the edge of attention while typing or playing a musical instrument) are explainable in principle within the computer paradigm. Conscious attitudes alone seem not to be explainable in such terms and are perhaps absolutely holistic.

Schopenhauer termed the mind-body problem the "World-Knot." To an extent, materialism unties it. But even if one accepts this, there remain holistic aspects of the mind which are either unbreachable in principle or present a new "knot" of a more local kind.

Not long ago, British empiricism and our own empiricist tradition taught that holism is a species of pseudo-philosophy embraced by the philosophically naive, especially by denizens of the Continent, and tolerable to an extent inversely proportional to one's mastery of Volume I of Whitehead and Russell's Principia Mathematica. To parody Mill: better to be a Russell scorned than a Hegel adulated. In that climate, anyone who argued a complex object to be holistic--for instance, that society is more than the sum of the individual persons in it, or that the mind is more than a bunch of interconnected neurons or dispositions to behave--needed corrective therapy. He needed to be shaken from his errant belief that there is a special epistemic power for grasping intrinsic wholes, and to be purged of the mistaken doctrine that the ordinary analytical tools of academic philosophy blunt against objects taken in their complex wholeness.

Things have changed. There are negative reasons for being quite respectably holistic. The principal ones are that analytical, piecemeal, and reductionist attitudes toward philosophical problems have failed. Here are two examples--one having to do with the demise of logical empiricism, and the other with the unsettled state of the philosophy of mind. The positivist campaign to trace all philosophical tangles to corrupt syntax, to promote the verifiability theory of meaning, and to reduce natural

137 H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 137-158.

© 1988 by D. Reidel Publishing Company


science to physics, and physics itself on down to phenomenal or protocol sentences, was broken up years ago, mainly by critics who were themselves part of the analytical camp.

Not all of metaphysics is bad syntax. Some of it is bad semantics. Once one goes into "semantical therapy" one stirs up ontological problems galore. The theory of reference is riddled with sets, sets of sets, possible worlds, universals, types, tokens, events, propositions, causes, intensions, dispositions, abstract states, and much more. Verificationism turns out to be hopelessly muddled and simplistic. Clearly, determining the meaning of an empirical sentence is more than verification; it depends on observation, background knowledge, expectations, and on the paradigms of the day. Meaning is an ingredient of a complex psycho-sociological whole. So, reductionism has to be, and was, scratched. Carnap's grand goal of the Aufbau was never achieved; indeed, far more modest proposals for reduction among established sciences never got off the ground. As arch-empiricist Nelson Goodman insists in Ways of Worldmaking [1], there simply aren't any reductions around, even of chemistry to physics or of mathematics to logic. Science is a thing of many fabrics knit into one whole; its objects subsist on many interrelated ontological levels. It is holistic.

In philosophy of mind, what passes as philosophical sophistication--namely Turing machine functionalism or computationalism--treats the mind as a system of role-playing states. This has a nice holistic ring to it; but unfortunately functionalism is on one side of an epistemic divide having language, cognition, mental skills and attitudes as its subject; while sensation, feeling, awareness and perception are on the other, completely beyond its theoretical reach. Although this represents quite a different dualism than Descartes', it does reflect the object-subject gap that has plagued modern Western philosophy for a long time. If mind is the "whole" that phenomenological awareness tells us it is, functionalism must fail.

In its beginnings in the late 1950s, functionalism did present itself as holistic, although the term was not in vogue then. It attempted to characterize all mental content as functional, which means that subjective experiences as well as cognitive skills, objective ideas, language, and mental attitudes are all individuated by their role-playing interrelationships with one another. Mind is a functional system. It turns out that this philosophy, as programmatically stated, is as vague as it is holistic. But before I deal with the troubles in functionalism let me clear the air, for my own purposes here, with respect to the notion of "holism" itself.

There are three orders of holism: the metaphysical, the semantical, and the epistemological. Having no neat definitions, I shall proceed by examples. A clear case of metaphysical holism is perception. Both the perceptual object and the mental faculty are holistic. Grasping a sentence in handwritten scrawl, or a face in a Braque, or a melody in Schoenberg is no analytical feat. These objects are intrinsic wholes. Moreover,


veridical perception is not an act of checking off component traits on a list, but of capturing a whole qua whole. Indeed the parts of a scrawled sentence or of a painting have definition and significance only relative to the whole. The perceptual whole is greater than the sum of the parts and largely determines the quality of the parts.

Semantical holism is not readily captured by the part-whole metaphor. The meaning of some expressions depends on that of others and vice versa. For instance the sense of 'belief' depends on that of 'desire' (and 'act', etc.) and that of 'desire' on 'belief'. The meaning of 'person' is wrapped up in that of 'society' and vice versa. In the material or metaphysical mode, belief, desire, and action seem to be interwoven in human experience. Societies and persons are interdependent components of a whole. Causal explanations of one attitude require appeal to the others, and similarly with societies and persons. [2]

Epistemological holism is partly psychology and partly prescription. Knowledge of intrinsic wholes (e.g. either a subject's knowledge of a perceptual object, or our knowledge, as enquirers, of the subject's faculty--since we, too, must grasp the whole object to understand the relation) depends on a capability of grasping objects in their entirety. No holistic complex is knowable by breaking it into parts and studying it piecemeal.

Provided this account of holism is close to what philosophers seem to mean by the term, there really are wholes relative to descriptions. Although I hesitate to delve into a theory of science here, I should make it plain that these descriptions are prescientific. The holism of perception, and of intentional attitudes is essentially a piece of folk psychology. I do not want 'folk' to be construed pejoratively, however. Others might prefer to say that our intuitions based on introspection--an old psycho-philosophical tradition--and currently popular paradigms, tell us that these phenomena are holistic.

The other side of the position--the side I espouse--is that there is often no alternative to holism: for if it is not until a later stage of scientific development that putatively adequate analyses emerge, then earlier central phenomena must be regarded as holistic if our conception of them is to accord with basic intuitions of the seriously concerned. For example, in the case of perception the best known analytical techniques on the basis of which psychological hypotheses might be formulated, derive largely from artificial intelligence research. The main methods are template-matching and component-analysis of pattern traits. All such tactics fail. No computer loaded with templates or trait recognition algorithms could translate my handwriting into clear printer output. Thus relative to today's known analytical techniques and to our underlying intuitions about the nature of perception, perception is holistic. Similarly, prior to Gibbs, heat was an intrinsic, holistic phenomenon irreducible to categories of classical physics; prior to Maxwell, magnetic


phenomena were unanalyzable powers, and prior to Dedekind, real numbers were abstract, unanalyzable entities characterized implicitly in terms of field axioms. In each of these cases, new analytical theories had to, and eventually did, meet the intuitions of a community of serious scientists.

Any question of this sort is open until evidence against its holisticity is overwhelming. Thus, if gestalt perception is never cracked, it stands as an example of a holistic reality. If it turns out to be analyzable, then it is not. The holistic character of a subject of inquiry is a matter relative to the status of a theory at a time.

Perhaps some psychological features are holistic a priori, such that no conceivable scientific approach could explain them in terms of interactions of simple parts. Even if this is so, it would not rule out scientific findings. Certain physical self-organizing systems are describable by nonlinear differential equations which express predictive laws of development; but such results do not answer the question of the etiology or inner mechanism of self-organization, if there is one. Until the latter question is answered the phenomenon is "holistic" notwithstanding the success of analysis as a predictive tool. Thus, it might be that intentional attitudes--such as belief, desire, hoping, and seeking--are not amenable to explication in terms of primitives, in which case they must be considered absolutely holistic. Still, I know of no way of establishing such a claim outside the body of philosophical and scientific knowledge at a given time.

Returning to the case of functionalism, I want to consider (1) how it came to be transformed into computationalism; (2) how the more precise computationalist version lost some of the glow of the original holistic idea, especially how mental qualia got lost in the theory altogether, and how it failed to account for intentionality; (3) how intentions and belief-desire-act wholes can be restored using computer logic models [3]; (4) how a type-type identity theory might be employed to account for feeling and sentience present in mental skills such as game-playing, theorem-proving, and perhaps even perception; and (5) why the identity theory seems to fail to account for conscious intentional attitudes. Whether this latter is a failure in principle is the "new" world-knot.

2. Functionalism

To give a quick and sketchy review [4]: there are four "corner-posts" of functionalism. (i) Mental entities [5]--pains, thoughts, beliefs, cognitive maps, or perceptions--are material; functionalism is thus a version of materialism. (ii) Mental entities differ from other entities in function. Two material systems might be irreducibly different in material constitution and yet function the same. A classical non-psychological example used by Herbert Simon is the wing: a bird's wing and an airplane wing are functionally alike but materially different. Airfoil theory


applies to both, just as certain second order differential equations apply equally, except for empirical parameters, to both electrical and damped mechanical systems. Similarly, one can conceive of an organ physically unlike the brain yet having the same function, that is, the same mentality. (iii) Mental entities play roles. A belief is a belief owing to its functional relationships to desires, hopes, and actions. In general, mental entities are individuated by the roles they play vis-a-vis inputs, outputs, and other states. Mental life is purposive. (iv) Psychology, defined as the science of the mental, is autonomous. Descriptions and explanations of intelligent behavior are mentalistic--that is, they refer irreducibly to mental states as role-playing entities not capable of definition in purely behavioral or neurological terms.

Points (i) and (ii) jointly entail a materialist ontology of mind, the so-called "token-token" identity theory: every mental entity is identical to some physical complex, although two functionally identical, i.e. mentally identical, entities need not be the same one physical thing. Your belief state is not materially the same as that of a sufficiently fancy robot, and might not be quite the same as mine, but the beliefs we all hold, qua beliefs, could be the very same. I shall assume familiarity with the main tenets of the functionalist theory of mind (as roughly sketched here), and also with the history of its emergence as an alternative to dualism, behaviorism and central state materialism.

Already in Putnam's classic "Minds and Machines" [6], functionalism was tending toward what we today call "computationalism" (I myself prefer "mechanism", a term which has a solid philosophical tradition behind it, and I will introduce that term later in order to fix certain distinctions). As early as the mid-1940s, John von Neumann distinguished between an engineering analysis of electrical circuits and computer circuit logic (Shannon and Peirce before him had already noted that electromagnetic switching nets realize truth functional logic). Two adding devices can have the same functional organization yet be fashioned of different material components. It is natural to adapt all four marks of functionalism to computer logic theory. Thus, corresponding to (i)-(iv) we have: (i') computer logic devices are physical; (ii') computer logic devices are individuated by their function; they are token-token identical to such material items as hard tubes and transistors; and two materially different devices can perform identical logical functions (machine holism); (iii') circuits play roles (they add, control, store, etc.); (iv') computer logic is not reducible to the physics of electrical or electronic circuits. Today, this very distinction between function and material constitution is heeded at various levels in computer science: electronics versus switching logic, hardware versus software. Actually, computer functionalism preceded functionalism as a theory of mind by a good fifteen years.

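The adding-device example can be made concrete with a small sketch (my illustration, not the author's; all names are hypothetical): two one-bit full adders, one composed of truth-functional gates in the manner of a switching net, the other of ordinary integer arithmetic, are fashioned of different "components" yet realize exactly the same function.

```python
# Two realizations of a one-bit full adder from different "materials".

def adder_from_gates(a, b, carry_in):
    """Full adder realized as a switching net of XOR/AND/OR gates."""
    s1 = a ^ b                        # half-adder sum
    c1 = a & b                        # half-adder carry
    total = s1 ^ carry_in             # final sum bit
    carry_out = c1 | (s1 & carry_in)  # final carry bit
    return total, carry_out

def adder_from_arithmetic(a, b, carry_in):
    """The same logical role realized by plain integer arithmetic."""
    n = a + b + carry_in
    return n % 2, n // 2

# Functionally identical over the whole input space: same role, different makeup.
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
assert all(adder_from_gates(*i) == adder_from_arithmetic(*i) for i in inputs)
```

Since every input triple yields the same output pair, the two devices are individuated as one and the same logical function, which is the functionalist's point about material constitution.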
Recall, next, that the burgeoning theory of transformational grammars


has become firmly rooted--at least at the level of deep structure--in precisely the same logic as computer models. Every phrase-structure grammar is a nondeterministic Turing machine. This strongly suggests a computationalist foundation for psycholinguistics. Thus, since Turing machines are discrete-state systems, it is natural in light of these analogies to consider mental entities discrete even if underlying physical activity is continuous. Since Turing machines perform any effective symbol processing task, why not in the interests of clarity take them as models of mental activity? This leads to so-called "Turing machine functionalism."

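The grammar-machine equivalence asserted above can be illustrated informally. The sketch below (my toy example; the grammar and names are not from the text) treats the phrase-structure grammar S -> aSb | ab as a nondeterministic rewriting device: recognizing a string means searching through all possible derivations, just as a nondeterministic machine explores all computation paths.

```python
from collections import deque

# A toy phrase-structure (context-free) grammar generating a^n b^n.
RULES = {"S": ["aSb", "ab"]}

def derives(grammar, target):
    """Breadth-first search over all sentential forms reachable from 'S'.

    Each rewriting step is one nondeterministic 'move'; exploring every
    choice point simulates the nondeterminism of the equivalent machine.
    """
    seen, queue = {"S"}, deque(["S"])
    while queue:
        form = queue.popleft()
        if form == target:
            return True
        if len(form) > len(target):
            continue  # both rules grow the string, so longer forms are dead ends
        for i, symbol in enumerate(form):
            for rhs in grammar.get(symbol, []):
                successor = form[:i] + rhs + form[i + 1:]
                if successor not in seen:
                    seen.add(successor)
                    queue.append(successor)
    return False

assert derives(RULES, "aaabbb")    # a^3 b^3 is derivable
assert not derives(RULES, "abab")  # not of the form a^n b^n
```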
Finally, we note that Turing's 1950 paper "Computing Machinery and Intelligence" introduced to the scientific community the possibility of artificial intelligence of a high order. Since then, a mass of evidence has accumulated to the effect that computers can indeed perform skills one would ordinarily think of as requiring high intelligence and imagination--for example, playing Master's level chess. This possibility is no longer even arguable, although a case can be made that there are limits to the AI enterprise. Certainly, it is not yet up to the level Turing predicted would be achieved by the year 2000. But some time remains yet; however, this issue is quite aside from the focus of this paper.

The net effect of these developments has been twofold: to convert functionalism into computationalism--the doctrine that the mind is a system of programming-like or computational rules--and to raise questions about the capabilities of machines and robots. If minds and computers are functional in the sense of (i)-(iv); if both are discrete-state and finite; and if AI continues its progress; then it is a reasonable hypothesis that the mind is, or is fruitfully modeled as, a computer of some sort.

3. Difficulties with Computationalism

To what extent does computationalism fit mental life and, specifically, does it do justice to holistic aspects of mind? As we shall see, the idea is loaded with difficulties. First, the functional theory of feelings is mistaken. Consequently, computationalism is irrelevant to the question of the ontological status of feelings and other "awareness" phenomena. Here are three arguments to that effect.

(1) Feelings are qualities, while role-playing entities are not, although of course these latter might have feelings or other qualities. An otherwise unconscious desire, for example, might be saturated with feeling. In Peirce's ontology, feelings are "firsts" and role-players "thirds". Thus, to identify feeling with some kind of functional organization is a category mistake.

(2) If mental states are held to be the same as logical states [7], then the theory is internally incoherent. For, two logical states can play the same role in two systems, one that hurts, and one that doesn't. This objection is very similar to the first, but represents a minor advance from the somewhat vague notion of role-playing to that of a state. Understood on the basis of Peircean categories, the trouble is that the identification of feelings with logical states confuses "firsts" and "seconds".

(3) But let us persist in the state interpretation of feeling for a moment. A logical state, in contrast to a physical state, is either the physical state taken from the standpoint of a functionalist description (i.e. it abstracts from material properties to functional relationships), or an abstract entity which the physical state "realizes" (a particular vis-à-vis a Platonic idea), or a state symbol in an uninterpreted first-order automaton language. [8] The first possibility is ruled out by the previous argument. The second is absurd. Feelings are not abstracta, at least mine aren't, and that's enough. As to the third, suppose a feeling is individuated by its being correlated to a state symbol; this would lead to a contradiction. It can be shown that there are two isomorphic models (of formal automata theory) in which corresponding states in the isomorphism correlate to different state symbols. Thus, two state-feelings are functionally identical and yet relate to different formal symbols--a contradiction. Hence feelings are not states.

Since feelings are not individuated by functions in a material system, holistic mind already slips through the grasp of functionalist theory. However, they might be type-type identical to neural entities, a possibility I myself favor. Thus, feelings would still be token-token identical to material objects, though not on functionalist grounds. I shall return to this theory later on.

Second, intentional attitudes are not states either. The original idea of role-playing was meant to capture attitudes such as beliefs, desires and hopes. A belief in early functionalism was some kind of role-player mediating desire and action, and was identified with an internal state. The advantages of the idea of a state in an input-output system are apparent, if sound. Thus, one obtains an attractive alternative to Brentano if attitudes are identified with states. Although intentions are not reducible to a physical or topic-neutral vocabulary, they are identified with components of strictly material functional systems. Psychological laws relating beliefs, desires, and other attitudes represent state-to-state relationships of a holistic computational organization. We get the economical advantages of materialism plus the insights of phenomenology.

But as so often happens in philosophy, an insight that glows in the dim light of programmatic pronouncements fades under closer scrutiny. And so it is with role-playing states. In early functionalism 'state' was fuzzy enough to cover just about any mental content. However, with the advent of computationalism, the term 'state' gained in precision, but at the cost of coming to connote role-playing. But in automata theory a state is an element of a space that maps outer perturbations back into that space. In this sense a state in a computer or brain plays roles no more than an element in a state-space in physics does, say a position-momentum pair of a material body. Roles, purposes, and intentions cannot be derived from this strict concept without adding something or depriving it of precision.

However, even if, for the sake of argument, we let states play purposive roles, we still have the wrong concept if we want to grasp the intentional. Following the materialist line to which functionalism is already committed, beliefs are dispositions of a sort, dispositions to act. They play parts in causal chains with other attitudes. But states are certainly not dispositions. One can have a disposition to sit without actually being in the state of sitting down. Confusion of states with dispositions is another category mistake. Beliefs and other intentions are not states, although there are no states without dispositions and no actualized dispositions without states.

There is a remedy for this situation, however, namely to identify attitudes (since they are dispositional) with whole input-output-state systems instead of states alone, or alternatively, with programs. And state systems are close to being structures in the mathematical sense. From the abstract computationalist point of view an "adding device" in a microprocessor is a set of input, output, and state elements related in certain ways (by AND, OR, and FLIP-FLOP operations) much as a group in algebra is a set of elements related in certain ways fixed by the axioms. Owing to the presence of such a structure, a processor can be said to have a "disposition" to add. Of course, the processor might never add; that is, it might never go into a carry state or an add state, or even be turned on. Nevertheless, it would still have the addition disposition. [9] Computer logic models are, moreover, reasonably good models of the holistic, of complex wholes greater than their parts. One can design a logic network that can both add and subtract, one that has an identifiable adder part but no identifiable subtractor. [10] Again, there are single-unit transistors that realize AND, OR, and NOT gates (indeed, all sixteen truth functions of two variables) but that contain no AND-gate, etc., parts.
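Nelson's claim that a processor possesses the addition "disposition" simply in virtue of its gate structure, whether or not it ever enters a carry state, can be sketched as follows (my illustration in Python, not from the text; the gate relations stand in for the AND, OR, and FLIP-FLOP operations mentioned above):

```python
# Sketch (mine, not the author's): an "adding device" as a structure
# of gate relations.  The structure -- and with it the "disposition"
# to add -- exists whether or not the circuit ever enters a carry
# state, or is ever driven at all.

def full_adder(a, b, carry_in):
    """One-bit full adder built from AND (&), OR (|), XOR (^) relations."""
    s = a ^ b ^ carry_in                         # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry bit
    return s, carry_out

def ripple_add(x_bits, y_bits):
    """Add two equal-length little-endian bit lists."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 3 + 5 = 8; little-endian: [1,1,0] + [1,0,1] -> [0,0,0,1]
print(ripple_add([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1]
```

Merely defining `ripple_add` installs the structure; on Nelson's account it is that structure, not any particular run of it, that constitutes the disposition.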

Similarly, a belief might be identified with some computational structure, a desire with another, and then the reciprocal role-playing of attitudes be explained by the interaction of these structures. This move would recapture the dispositionality of belief lost in the state concept, and to some extent satisfy our intuition about the interdependence of intentional attitudes. Of course the identification with a logical structure entails a reduction of concepts of intentionality to computer concepts. I shall indicate how this might be done later.

As to programs, a computer might be said to "believe something" if it contained stored data and a program coded in memory related to that data. Or, we might ascribe beliefs and desires to a program in order to understand and cope with it. In a chess game, if a computer were to interpose a piece between your Queen and its King, it would be perfectly reasonable and certainly no deviant use of English to say the computer believes it is in check and desires to prevent a mate. We ascribe beliefs and desires to each other in the same way so as to be able to cooperate with, take advantage of, or overcome one another. Without such attributions, it is difficult to see how a program, fashioned as it is in a language of imperatives, could be used as a target language for reduction of intentional vocabulary. This is a technical issue having ontological (or at least epistemological) overtones. One cannot get a doxastic language out of a logic of imperatives any more than one can get a logic of imperatives out of ordinary propositional logic. Although programs and computer logic systems are equivalent in the sense that any computation directed by a program can be performed by some special-purpose circuit and vice versa, the languages are irreducibly different. In my opinion, artificial intelligence--if that means programming--is the wrong place to look for an analysis of the intentional beyond, that is, anything other than mere attributions or ascriptions.

There is a further difficulty with rudimentary computationalism. It seems unable to distinguish between skills requiring intelligence, on the one hand, and mental attitudes, on the other. Computers can perform complicated enough tasks of an intelligent sort. But skills and attitudes are radically different mental phenomena. A computer might "read" stereotypical print, "play" chess, "compose" music, "draw" pictures, and "prove" theorems, but would not believe, perceive, strive for, hope for, or understand a thing. So although it is true that computers do have dispositions to react to input, and although animal attitudes are dispositions, it by no means follows that computers have attitudes. Hence, identification of attitudes with abstract computational structures (as intimated above) would not, without adding something more to the suggestion, differentiate them from skills. As critics of artificial intelligence have been arguing all along, there is a strong sense in which skills are not mental at all. Skills (except for innate skills, if there are any) are not genuinely "mental", although cultivation of them demands an organizing mind. To get up to performance level a good pianist requires ambition, certain beliefs, powers of perception and interpretation, analytical powers--indeed, an almost boundless repertoire of positive attitudes towards his music, his audience, and himself. But once mastered, the technical execution of a piece is almost wholly mechanical. The pianist will have acquired a skill. Similarly, attitudinal effort goes into a computer program, which will later run automatically, but it is the computer programmer who (like the pianist) lays the groundwork.

A related point urged by both Fodor and Searle [11], but in different ways and moved by different philosophical convictions, is that a computer (or an abstract model in psychological theory) is merely a syntactical system. Computers might have beliefs in a dispositional sense (though at the moment we are questioning this), but not in a relational sense, i.e., they do not believe anything. For computers there are no objects of which their beliefs might be true or false, or so it is said. [12] Computers follow algorithms. However, the reasoning logician or scientist usually proceeds differently, by grasping interrelated meanings. He proceeds semantically. He knows a proposition is true because it is semantically entailed by others that are true. No program, however clever, generates a result even remotely like semantical illation (or so this criticism goes). Intentional attitudes are propositional; and this relational, semantical character of belief escapes computationalism. Computers are symbol manipulators but do not use symbols referentially, and hence do not have intentional attitudes except as we are willing to ascribe attitudes to them. Let me summarize the rather negative points so far generated:

(1) Functionalism cum computationalism does not account for feelings; feelings are neither functional nor logical states;

(2) It seems unable to account for intentional attitudes because:
(a) Attitudes qua functional are not states, but dispositions;
(b) Attitudes might be identified with logical structures in digital computers on analogy with organs; this seems to imply a reduction of some kind, and remains to be explored.

(c) Intentions are not programs, although we might ascribe intentions to programs;

(d) Computational systems are capable of performing skills ordinarily requiring high intelligence, but skills are not attitudes;

(e) Beliefs and other attitudes are relational. However, computer logic systems seem to be syntactical (I shall dispute this). They submit to algorithms, but they are not propositional and do not imply or relate by virtue of meanings; the symbols computers manipulate do not refer to anything.

4. A Defense of "Mechanism"

Although computationalism in this lifeless form fails as a theory of mind, I agree with many that it is currently the best framework for philosophy of mind and cognitive psychology. That the mind is a data-processing system of some kind is a very appealing hypothesis. If so, it must exhibit, in the face of such failings, some way of accounting for phenomenal mind and intentional attitudes as a total unity.

As to feelings, sensation, subjective experiences of various kinds, I can find no better path--and one that stays within strictly materialist boundaries--than the type-type identity theory, which is a departure from functionalism as such, but not entirely so. Logic structures, to be seen as instantiated in neural networks, are functional in the original sense. They function the same in whatever material embodiments, for example in humans or in robot hardware. Put in a logically more palatable form, every conscious mental occurrent is type-type identical with some event in the nervous system. This view has all the advantages as well as disadvantages of modern materialism, and I claim no more for it. [13] Let it suffice here to mention two points about this version of the identity theory.

First, if a feeling such as a pain is type-type identical to a neural event, then computers do not have feelings, inasmuch as their input, output and state realizations are not neural. This agrees with the evidence. No evidence I know of points to feeling in other than living things, although I grant that accepting the evidence as final is a piece of chauvinism.

Second, the structures--that is to say, the relations of inputs to outputs and states, and of states to states--would still be functional in the sense we have been discussing. Thus, supposing the brain to have a structure for drawing conclusions by modus ponens and a robot to have one also, then the two would be functionally identical--mentally the same in that respect--although one would have conscious experience of a thought while the other would not. The structural systems would be token-token identical to material complexes in the original functionalist sense.

Thus, the "stop-gap" theory I advocate is double-barreled: it distinguishes between components and structures--components are type-type identical to material events, structures are individuated functionally and are token-token identical to material complexes. For lack of a better term, I shall use the fine old expression 'mechanism' for this theory. Although computationalism combined with type-type identity theory is not as "unified" as primitive functionalism, which counted all mental content as functional states, it serves the holistic character of mental life better inasmuch as inner experience--which is certainly a mark of the mental--is not altogether left out.

Turning to the questions previously raised about intentional attitudes, if mechanism is to be vindicated I have to show that computers could have attitudes in their functional (causal, dispositional) and relational aspects, despite earlier pessimistic surmises. To show that attitudes are holistic, interactive structures as suggested above is to satisfy a reduction requirement that runs squarely into Quine's linguistic version of Brentano's Thesis, to the effect that the circle of intentional expressions, although perhaps interdefinable, cannot be breached by a vocabulary of physical science or mathematics alone. [14] For this reason Quine proposed elimination of intentional terms like 'belief' from the "austere" formal language of hard science. I believe the circle can be broken, and by the use of mathematical concepts expressible in that very language. To show that attitudes qua relational are explicable in strictly computational terms without importation of a special category of meaning-laden symbols or mental representations in Fodor's sense [15] calls for more of the same.

I agree with representationalists that the semantics of natural language should be founded on a theory of prelinguistic attitudes, and in particular that questions of the meaning and intension of expressions be reduced to internal representations in some sense. I radically disagree with the opposite view, voiced by David Lewis, that "only confusion comes from mixing the topics of an abstract, model-theoretic approach to semantics with psychological descriptions." [16] To the contrary, semantical theories that emulate model theory will never illuminate natural language sufficiently to aid in understanding either the special problems of natural language semantics or the role of language in cognition. I am certainly a holist in the area of language to the extent that I insist that a theory of verbal meaning be grounded in a theory of intentionality, and much of what such a theory would contain should apply even to dumb animals. Any theory that ignores the facts of mental symbolization is likely to be on the wrong track. However, I am not satisfied with introducing yet another category of irreducible entities--representations--whatever their ontological or methodological status might be.

The claim that computers (including abstract computers realized in nerve networks) are merely syntactical systems, that they somehow cannot be semantical, mystifies me. I think it simply misses the significance of Gödel. A formal system is syntactical or not depending on the point of view taken toward it, in particular on the resources of the metalanguage. If the system is interpreted in some domain, the total, including the formal language and its metalanguage, comprises a semantical system. Now self-reference is possible by way of a process of arithmetization of the metalanguage, as Gödel showed. Self-reference in this sense appears in abstract form in the recursion theorem of arithmetic, which leads in turn to the theory of self-describing Turing machines and to von Neumann's theory of self-reproducing automata. Perhaps the self-reference in these systems is not "semantical." But lacking any decent understanding of "semantics" outside model theory (and possibly situation semantics) I do not know what this means. It is true that by Tarski's Theorem one cannot get a truth definition for self-referring arithmetic in terms of that arithmetic itself. But--a closely related phenomenon--neither can one solve the "halting problem" for computers. Indeed, there is no general tracing program that will tell whether a given program will do what you intend for it to do. But programs do exist that verify other programs nonetheless, and by the same token sufficiently complex systems could in principle self-refer. By 'self-refer' I do not mean programs, like recursive subroutines, that call themselves, but programs that could compute from data-input to encoded descriptions of themselves. [17]
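The kind of self-description Nelson has in mind--a program that computes an encoded description of itself rather than merely calling itself--is guaranteed by Kleene's recursion theorem and is easy to exhibit concretely. The following two-line Python "quine" is my own standard example, not drawn from the text:

```python
# A self-describing program: executed, it prints exactly its own two
# lines of source (comments aside).  The %r conversion yields the
# quoted, escaped form of the template, and %% yields a literal %.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

Nothing here is semantical in the model-theoretic sense; the point is only that sufficiently rich symbol systems can compute and reproduce their own encoded descriptions, as the recursion theorem predicts.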

Mechanism uses self-reference to get mental representations roughly as follows. Reference to the outer world of real objects (de re) or to propositional objects (de dicto) consists of a causal relation of objects to a perceptual recognition system (more on this below), followed by an operation of taking (very much in Chisholm's sense [18]) input to be of such-and-such a type--a red ball, a horse, an odor of mint--even when that input is an orange ball, a mule or a sprig of basil. Taking is achieved by a self-describing automaton, which operates as follows.

Suppose a subject is confronted with an orange ball in a mist and she is seeking a red ball. She glimpses a curved three-dimensional surface that is reddish. This puts the subject in a state such that more favorable input would put her in a recognizing state meaning Red Ball. Since the input is vague and uncertain, the subject (tacitly) examines the coded description of herself, thereby determining what state she would go into if she were getting clear input. Thus, she takes input in such a way as to satisfy expectations; of course more input might frustrate her. (Here and in what follows, intentional terms such as 'taking' and 'expectation' are to be imagined to occur in scare quotes unless the subject of inquiry is capable of feeling, and until it has been indicated how we might realistically justify using the expressions for computational systems.)

The causal relation of input to the taking system provides a triadic theory of reference reminiscent of Peirce, at least for the case of reference in immediate experience. [19] The state meaning Red Ball is a function of the causal impact of the object, here an orange ball, and of the subject taking it to be red in order to satisfy her expectations. Then the expression 'this is a red ball', which is associated with the internal state, has a reference that derives from that of the state (the state is essentially a Peircean interpretant, and the passage to the state a tacit abductive inference, in his terminology). Of course if the subject utters 'this is a red ball' she is wrong. At times she's right, however, and this leads to questions of the epistemology of veridical perception.

In this scheme, the object that causes the stimulus event is the real object. One can show in the model that a subject might take a null object, as well as an orange or other object, to be a red ball; i.e. it could hallucinate, imagine, or dream, in which case there is no object de re. [20] The propositional (de dicto) object is the set of possible "takings" (automaton input strings instantiated in the nervous system) that would satisfy expectations. It turns out that this set is precisely a Quinean possible world [21], and is indeed a representation. Mastery of language is somehow knit into this underlying referential apparatus, which provides the semantical component of the experiential part of ordinary language.

The concepts of reference, object, and propositional object all boil down to behavioral terms expressing the causal relation of objects to stimulus events and to computer logic (see ftn. 3). At present, this reduction appears feasible only for representation in perceptual experience and in occasional beliefs, desires and actions. But, that it can be done at all lends support to the view that no new category of representations to explain the relatedness of intentional attitudes to objects is needed. Of course, this is disputable and my attempts at reduction might be seriously lacking. Thus, in support I next want to explain the methodology of my approach, but without burdening the reader with technical details.

5. Methodology

My procedure imitates Tarski's and Carnap's method of explication, although it is neither an exercise in formal language semantics, like Tarski's, nor one of partial definition or meaning postulation, like Carnap's. What I borrow from them is the idea of expressing problematic concepts, such as perception, as conditions to be met if the concept is to be realized. The explication is given in neutral mathematical and computer logic terms; and, then, a demonstration that the explication satisfies the problematic conditions is given. More perspicuously, I require:

(1) A statement of adequacy conditions to be satisfied for a term to be explicated;

(2) An explication in behavioral (input-output or S-R) and computer logic terms;

(3) An argument to the effect that the explication required by (2) satisfies the adequacy conditions noted in (1).

As an example of (2), taking, which is central to my theory of attitudes, is explicated in terms of a self-describing automaton and a causal relation of object to stimulus pattern, as in the "red ball" story. Now let us look at expectation. An adequacy condition from Chisholm [22] is as follows:

S expects x at time t if and only if S is in some winning state q such that either (i) x fulfills S in state q at t if and only if x occurs at t, or x does not occur at t but S takes x to occur; or (ii) x disrupts S in state q if and only if x does not occur at t or x occurs at t but S does not take x to occur at t.

To meet (2), I then explicate 'fulfill' and 'disrupt' (I already had 'take' and 'winning state') in computational primitive terms, plus some assumptions about 'occur' which merely fix the meaning; and from these explications derive the adequacy condition, thereby satisfying (3).

Note that this complex condition includes cases wherein a subject might satisfy expectations of x when it does not occur or even when nothing occurs as in hallucinatory experience; and likewise cases wherein the sub­ject might have her expectations frustrated even though x does occur. As Chisholm notes, this cannot be accounted for from a rigidly behavioral viewpoint. But, my analysis shows that it can be explained in principle by computationalist methods. Turning to perception, I note four adequacy conditions for perceptual acceptance, including Gestalt conditions.

(a) An explicative model must be able to assign types to tokens (extract universals); must be able to identify a cat as an instance of CAT, of red as RED, and so on; and must be able to discriminate among cats, dogs and sheep; and between a-flat and b-flat, b-flat and c-sharp, and so on.

(b) A model must be able to identify a class of inputs as instantia­tions of two or more types depending on side conditions; it must be able to perceive either a duck or a rabbit but not both in duck/rabbit gestalten, and read '0' as an oh or zero, depending on context. This phenomenon is the counterpart in the subverbal realm to differences in intension in linguistic predicates.

(c) A model must be able to identify the elements of two disjoint classes as instantiations of the same type; it must be able to correctly identify a tune played in the wrong key or on wrong instruments, or the face of Washington in colors complementary to those of Gilbert Stuart.

(d) A model must be able to identify resemblances in Wittgenstein's sense; to identify a half-rotted leaf as a leaf or a tree as a tree though there be no common tree property, but only family resemblances.

To meet condition (2), I explicate 'perceptual acceptance' as follows:

S perceptually accepts x as of a type p at t if and only if x causes a stimulus pattern y in S's receptors, and S realizes a self-describing automaton that takes y to be of type p, that is, that satisfies S's expectations.

It is easy to show (a) that S can assign types to tokens; that (b) is satisfied by showing that two different automata can respond in different ways to the same set; that (c) is met by showing the isomorphism (essentially) of two automata having disjoint input sets; and that (d) is satisfied by showing recognition of members of a family of resemblances by taking them to be of a family, which presupposes no common defining properties whatsoever. Also, it is possible to show that the model can recognize items by context even when there is a multiplicity of contextual levels.
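How a self-describing system might "take" vague input to be of a type, in the spirit of the explication above, can be caricatured in a few lines (my sketch in Python; the type names and features are hypothetical, and a real explication would be stated over automaton states rather than dictionaries):

```python
# Toy sketch (mine; type names and features are hypothetical):
# the system carries a coded description of what clear input for
# each type would look like, and "takes" a vague stimulus to be of
# the type whose expectations it most nearly fulfills.

SELF_DESCRIPTION = {   # expected clear input, per recognizing state
    "RED_BALL": {"shape": "sphere",   "hue": 0},    # hue 0 = pure red
    "BANANA":   {"shape": "crescent", "hue": 60},   # hue 60 = yellow
}

def take(stimulus):
    """Return the type the stimulus is taken to be, or None.
    Shape must match the description; hue may be vague."""
    candidates = [
        (abs(stimulus["hue"] - expected["hue"]), type_name)
        for type_name, expected in SELF_DESCRIPTION.items()
        if stimulus["shape"] == expected["shape"]
    ]
    return min(candidates)[1] if candidates else None

# An orange (hue 30) sphere glimpsed in the mist is taken as a red ball.
print(take({"shape": "sphere", "hue": 30}))  # RED_BALL
```

The consultation of `SELF_DESCRIPTION` stands in for the automaton's tacit examination of its own coded description in the "red ball" story; satisfaction of expectations is here reduced to nearest match.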

If conditions (a)-(d) do capture perception in its full holistic character and if my alleged satisfaction arguments hold, then I have shown the plausibility of a reduction of perception to computational theory. Note that by stretching things a bit one might count the explication of 'perceptually accepts' as a hypothesis about perception. I don't claim that much: all I claim for this as a philosophical argument is that perception, in principle, can be explained mechanistically, not that the model is the best hypothesis. It suggests a working hypothesis.

As we turn to a brief account of belief and some other attitudes, it is extremely important to distinguish between two very different holistic traits of intentional attitudes that are frequently confounded. One is reduction of attitudinal terms to physical or otherwise neutral terms (as I have already mentioned, a possibility denied by Quine in his version of Brentano's Thesis). The other is the implicit interdependence of attitudes on one another. An example of the first is my explication of perception; of the second, the putatively empirical fact that there is neither belief without desire and a disposition to action, nor desire without belief, and so forth. These are clearly not the same. It is possible to be a holist in the second respect and not the first. One might deny that there are primitives within the vocabulary of intentional terms whereby one could get all the others by explicit definition, and still consistently maintain that 'belief', etc. can be defined from outside that vocabulary in such a way as to prove statements that express the interdependencies.

In fact, this is the track I take: to explicate 'belief' in terms of 'take' and the other concepts I have already attempted to analyze, and then argue that the intertwining of belief and desire and the like are deductive consequences of the explication. This calls for the statement of two adequacy conditions, one for the analysis and the other for the belief, desire, act manifold. For 'belief' (and similarly for other attitudes), restricted to occurrent or occasional belief:

(a) The explicandum of 'S believes at t that a is p' must satisfy the following conditions:

(i) It must not entail either that there is an a or that there is not an a;

(ii) It must not entail that the subordinate clause 'a is p' is either true or false;

(iii) Substitutivity of identities must fail: the explicandum must not entail that 'S believes that b is p' is a deductive consequence of 'a=b' and 'S believes that a is p'.

These conditions are well-known to analytic philosophers and phenomenologists alike as characterizing the intentionality of belief.

The adequacy condition for interdependence of 'belief', 'desire', and 'act', which is generally agreed on by philosophers of psychology [23] is the conjunction of the following pair:

(b) S believes that q implies that if S desires that p, then if S acts in a way depending on q, then S expects that p,

(c) S desires that p implies that if S believes q, then if S acts in a way depending on q, then S expects p.

Conditions (a), (b) and (c) together comprise the first step (1) in the reduction of attitudinal terms to computer logic. As to (2), the explication of 'belief' (for occasional belief only) builds on that of 'perceptual acceptance' plus a clause expressing the subject's ability to utter her belief that p by 'p' (or an equivalent). This clause introduces no new concepts outside of the syntactical theory of phrase-structure grammars (a species of computer logic) plus some computer ideas like gating and calling subroutines. From this explication, all the parts of condition (a) follow readily, which satisfies requirement (3). In order to derive the interdependence condition (b), 'act' is defined in terms of 'belief' and 'expects', the concept of final recognizing state of a Turing automaton, and 'bodily movement'. An act is thus analyzed as a movement based on belief with expectations as to the outcome of the movement. Then 'desire' is defined essentially as in condition (b) above; (c) follows trivially.

6. The New Knot

It might be objected that this all-too-brief sketch attempts to explain the unexplainable. If so, then I have failed to justify my philosophical strategy (1)-(3) discussed earlier. Perhaps no possible justification is convincing enough to swerve the obdurate HOLIST. (I am tempted to say "obscurantist," but of course will not.) Assuming this is not the case and that criticism focuses on the statement of adequacy conditions, the details of the model, or the alleged proofs of adequacy (which I simply hint at here), let us confront the "new" knot.

Mechanism, including the type-type identity theory as proposed, seems to me a reasonable philosophical hypothesis. Computationalism or "strong AI" is already an established framework in cognitive science, and mechanism adds, I think, some slight theoretical support; it superimposes identity theory and so invites renewed tolerance for conscious experience, although it does sully pure functionalism. Moreover, the kind of argument I have suggested seems to support a computational account of belief (in the full relational sense) without an appeal to irreducible representations. AI continues to deliver substantial evidence that mental skills, including to some extent artistic skills, are algorithmic. Finally, linguistics and psycholinguistics, which, since Chomsky, have undergone the most spectacular developments of all the social sciences during the past thirty years, are themselves computationalist. For all of these reasons, mechanism shows promise for philosophy of mind. It falls short, however; for it is unable to provide for certain relations between mental features such as that between skills and intentions, and between intentionality and language. But these shortcomings are not matters of failure in principle.

The central problem of mind, however, is, as I see it, still untouched. We simply do not understand consciousness, feeling, and emotion-laden beliefs, desires, etc. Computers don't have them, for they have no conscious experience. As argued above, we can explain tacit attitudes along lines proposed earlier, yet phenomenal experiences--essential ingredients of the intentional--are completely outside the purview of functionalism. Any theory purporting to account for them has to explain the involvement of feelings and emotions in conscious attitudes, as well as their dispositional (causal) and relational dimensions. It is a basic phenomenological datum that desires, hopes, and beliefs are saturated with feeling and emotion. Given epiphenomenalist arguments, I admit it is questionable whether motivational attitude in conscious experience, colored as it is with sentient quality, has powers not present in the unconscious. Yet it is very difficult to convince myself that a toothache (not the decaying tooth) has nothing to do with my belief that something is wrong with the tooth or with my urgent visit to the dentist. Maybe the idea that attitudes depend to some extent on mental qualia is naive; but no one is close to an explanation of such things in all their phenomenal fullness; so according to my holistic prescriptions (Section 1), our experiences and ordinary intuitions of such matters must be given first epistemic priority.

I wish identity theory led to some sort of analytic understanding, but it doesn't; on the contrary it leads to an impenetrable holistic question. In the following discussion I shall restrict the term 'mental event' to events such as pains, feelings, and other contents of awareness which according to my version of the type-type identity theory are identical to some physical event, e.g. the firing of neurons. Now, there is no inconsistency either in conceiving a neural event as identical to a mental event or as not so, for by hypothesis the identity is contingent, not necessary. Also, any neural event not identical to a mental event causes some event, say E, or not. Suppose now that M is a mental event and B is the brain event it is contingently identical to; then define:

M is a noncausal mental event if there is some brain event E such that B causes E, and if M were not identical to B, B would still cause E,

and,

M is a causal mental event if there is some brain event E such that B causes E, and if M were not identical to B, then B would not cause E.
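Writing the counterfactual conditional as a boxed arrow, the two definitions can be schematized as follows. This is a hedged rendering, not Nelson's own formalism: the predicate Causes and the explicit quantifier over brain events are my shorthand.

```latex
% Noncausal: B's causing E is counterfactually independent of the identity M = B.
\[
\mathrm{Noncausal}(M) \;\equiv\; \exists E\,\bigl[\, \mathrm{Causes}(B,E) \;\wedge\; \bigl( M \neq B \;\Box\!\rightarrow\; \mathrm{Causes}(B,E) \bigr) \,\bigr]
\]
% Causal: without the identity, B would not cause E.
\[
\mathrm{Causal}(M) \;\equiv\; \exists E\,\bigl[\, \mathrm{Causes}(B,E) \;\wedge\; \bigl( M \neq B \;\Box\!\rightarrow\; \neg\,\mathrm{Causes}(B,E) \bigr) \,\bigr]
\]
```

The two schemata differ only in the consequent of the counterfactual, which is where the causal efficacy (or inertness) of the identity is located.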

The first definition says that B causes E independently of its identity, or lack thereof, to the mental event M. This entails a kind of quasi-epiphenomenalism. For, the contingent identity of M to B plays no causal role in B's career, although it is ontologically one with M. At first sight this might seem to be a strange doctrine, but I see nothing inconceivable in it. It is an epiphenomenalism without the ontological dangler. Shortly, I shall recommend this sort of identity in an account of sentient experience arising in the performance of routine mental skills and in perception. In the second definition the mental event (or better, the mentality of the event) is efficacious. A neural event identical to a mental event has a power the neural event not so identified (e.g. a neural event in the lower brain) does not have. Event B(=M) is a kind of a whole having causal properties that B(≠M) does not. What kind of entity is that? Whatever its nature, it is the sort of thing that arises in our subjective distinction between a decayed tooth that hurts and one that does not.

Let us now imagine a community of robots, Golem Heights. These robots are to be able to communicate in a simple language, ordering one another around, replying, disagreeing, and perhaps fighting over who does what in performing tasks. Their language would refer to task-oriented objects and to the Golemites themselves. Each robot would have tacit beliefs and desires that enter causally into his behavior, and of course his attitudes would relate semantically to objects since such attitudes are necessary, on our theory, for having spoken language. Our Golemites would be engineered computational systems, informationally up-to-date technology designed along lines of the theoretical strategy I outlined above. I see nothing inconsistent or impossible in such a proposal; indeed the logic of it (short of the fantastically difficult job of designing appropriate transducers and external sensors for perceiving and grasping objects) would be an application of 20th-century developments in logic which already form the foundations of computer science.

Inasmuch as Golemites are made of metal and wires, they would lack sentience. Still, in addition to maintaining themselves in Golem Heights, they could play chess, prove theorems, indeed, learn all the mental skills of contemporary computers and more. But being unconscious, they would lack mental states in our sense and could not be said to really understand or believe anything, except tacitly. On my view, they would satisfy all of the requisites of full intentionality--the relational and dispositional--but without feeling or awareness, which I argue is essential. If I am right there must be some kind of division between those skills and attitudes in which full understanding and belief are consciously present and those, like those of the Golemites, that are "automatic." I have no idea where to make the separation in a satisfactory way--and that is the knottiness of the knot.

We can mention cases. Golemites would not go to the robot repairman for help because a nicked part hurt, and I speculate they would not go in any case. They would be capable of the automatic, subliminal cognitive activity underlying problem solving, much of thought, and certainly linguistic performance; but not any of the painful, trial-and-error "preliminary" programming that precedes subconscious problem solving. They could be programmed to play instruments, but could not program themselves. (Self-programming in this sense has no relation whatever that I can see to self-programming in the sense of compiling machine language code from a user-oriented language like BASIC.) They would be "wired" to learn Golemite without conscious effort (Chomskian Golems); but they would lack the ability to learn English or Russian. They could not consciously deliberate.

In our God-like stance, let us now endow the Golemites with feelings in such a manner that every feeling or conscious event is a noncausal mental event contingently identical to some physical event. Since I hold that the identity is type-type, it means our Golemites must be transmuted from tin, chips and wires to living, wholesome flesh. As a further embellishment, they might even be made to look like us. More or less unfortunately they would have exactly the same mental skills, attitudes (and omissions thereof) as their insentient cousins since all their mental events would be noncausal, by hypothesis. There is one notable difference, and that is with regard to perception. To be perception, what I called "perceptual acceptance" has to be conscious. Since Freud, this may not be true in ordinary parlance about belief and desire, but we simply do not say of an unconscious person that he "perceives" anything at all if he is sleep-walking or in a trance like Huxley's Sergeant. So, although our transmogrified Golemites would have no perceptual skills beyond their aboriginal cousins, they would indeed perceive.

Off-hand, one might think there could be another difference. If all Golemites tell the truth, the untransformed ones would answer "no" to the question, "do you feel anything," while perhaps (one might think) the sentient ones would respond "yes." But if the new Golemites lack (as they do, by hypothesis) causal mental events in their intentional makeup they would perforce give the same answer as the aboriginals to the question. But this would not make liars out of them. Only beings that have conscious control of speech acts could be liars. I believe this fully agrees with ordinary views on the psychology of morality.


Finally, the arrogance of our stance notwithstanding, we are not Gods, and therefore do not know how to take the final step and create full minds, minds with conscious, even self-conscious, beliefs and desires. Apart from the metaphoric scenario, our theories simply are not up to full-blown intentionality. I do, however, think it is fair to say, at a minimum, that we are on the track of accounting for mental skills, unconscious belief and desire attitudes, perception, and if we assimilate type-type identity into our theory, even raw feelings. But, so far, we remain faced with the mystery of causal mental events (in what I hope is a relatively precise special sense), and their involvement with intentionality. This central core of "mindedness" in the human being, the "knot" as it were, is holistic, and for the time being looks to be quite unravelable.

--r.]--

Several aspects of Nelson's position seem to have a direct bearing on issues raised in connection with the previous papers. His discussion of "taking" is clearly in the spirit of Rey's analysis of rational regularities. But Nelson appears to be more sensitive than Rey to the holistic character of mind, and is unwilling to neglect the "phenomenological fact" that some mental states are conscious. Thus he stops short of drawing Rey's conclusion that all aspects of mental life lend themselves to computational replication. Indeed, he holds out for a biological component, on the assumption that mental states and neural events are type-identical. In this regard, Nelson appears to side with Smith and Searle against the reductive tendencies inherent in Rey's analysis. The aspect of Nelson's view compatible with Rey lies in his analysis of "perceptual acceptance" and the other "tacit" mental functions involving the "taking" relation. Thus, if Nelson is on the right track here, his analyses might play a role in the development of algorithms crucial to the functional success of Rey's machine.

In a similar vein, Nelson's discussion of "perceptual acceptance" is in the spirit of Moor's reflections on the absent-qualia issue. Each develops a theory of perceptual "taking" that effectively displaces the qualia issue (at least with respect to tacit mental functions). But Nelson does not share Moor's opinion that "functionally analogous behavior" is an adequate standard for determining the success or failure of a computational replication of full-blown mental processing. Functionally analogous behavior might indicate the successful replication of mental skills, he concedes, but there would remain a "division" between "skills and attitudes in which full understanding and belief are consciously present" and those that are merely "automatic." (Nelson: this volume, p. 156) Nelson stresses the difference between computational "dispositions" and full-fledged conscious attitudes. These distinctions, emphasized in his "Golem Heights" thought experiment, leave us to wrestle with the very "world knot" that Moor and Rey urged us to avoid at all cost.

In the following commentary, Professor John Bender examines several of Nelson's theoretical claims. Three important issues emerge: one deals with the role of "self-referencing" in the computational replication of rudimentary intentionality; another deals with Nelson's analysis of "taking" relative to the analogy which he draws between tacit mental states and machine states; the third raises questions about the coherence of Nelson's appeal to "type-identity" theory.

To begin, let us suppose that a "self-referencing" capacity has been instantiated in the functional hierarchy of a computational system in the manner suggested by Nelson: would this be sufficient to replicate the relational and dispositional characteristics that are at the base of even the most rudimentary intentionality? The argument which Bender advances seems to show that Nelson has yet to demonstrate that it would be sufficient. Indeed, Bender holds that Nelson's argument falls short of establishing even the possibility.

Anticipating Bender's next point, suppose we grant Nelson's analogy between a tacit mental state and its intentional object, on the one hand, and a machine state and its stimulus input, on the other. Could we not still question Nelson's contention that the machine state "refers" to an object? If the machine state is in fact caused by the input, it would be rather gratuitous to say that the machine state is "of" or "about" a red ball. But what if the machine state is merely triggered by the input? For example, suppose the stimulus triggers a probabilistic framework of expectation, as envisioned by Nelson: would this not be sufficient to establish the semiotic basis of "taking" and, in the process, render the machine state "intentional" in character? Bender's reply is in the negative.

Finally, Bender questions Nelson's defense of the claim that computational structures realized in neural networks have a causal efficacy lacking in non-biological replications. More pointedly, what explanatory gain is purchased by Nelson's appeal to type-identity theory? Bender's argument suggests that Nelson has fallen into the very pattern of "chauvinistic thinking" that functionalist arguments are designed to uproot. Bender traces this apparent weakness to Nelson's important (though perhaps faulty) distinction between the "knotty" problems he associates with the analysis of conscious intentionality and the more tractable "computational" problems he associates with the analysis of tacit intentionality.


JOHN W. BENDER

Knotty, Knotty: Comments on Nelson's "New World Knot"

There is, I think, considerable agreement among philosophers of mind and cognitive scientists that a functional or computational theory of the mind faces two deep challenges--the Scylla and Charybdis of Qualia on the one hand and Intentionality on the other. There is less agreement, of course, about whether these problems have sunk the computational model, or have forced it to seek alternative routes to its goal of a materialist account of the mind, or simply stand as the tasks to be accomplished.

R.J. Nelson has taken a somewhat new tack by suggesting that both of these threats can be met for a rather wide variety of mental states, but that there remain certain mental entities, certain centrally important, holistic aspects of the mind, namely, conscious, causally efficacious attitudes, which frustrate the hopes of mechanism and stand as an apparently unravelable knot in the threads of materialist thought. Nelson believes the computer paradigm, if augmented by a type-identity theory to handle certain qualia, can adequately explain tacit beliefs and desires, unconscious attitudes, many intellectual skills, certain "raw-feeling" states, and (surprisingly) even perception, but that "conscious, feeling-laden beliefs" and other intentional attitudes remain, perhaps permanently, as materialists' migraines. There are, then, both positive and negative philosophical claims in "The New World Knot", and I shall be looking at both. I shall first discuss Nelson's suggestions for solving the problem of intentionality, and will especially examine their application to perceptual states. Qualia and the type-identity theory will be my second concern, and my final comments will deal with the nature and the number of knots facing the materialist.

1. Intentionality, Computational Structures, and Perception

Early in his paper, Nelson rehearses the well-known major difficulties with computationalism--that version of functionalism which applies the computer model to problems of mind. These include the fact that intentional attitudes cannot, in general, be identified with Turing machine states but must be thought of as complete input-output-state systems [1], and that computer systems, being purely syntactical (i.e. systems whose states lack intrinsic intentional or semantic content) [2], seem unlike the mind.

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 159-168. © 1988 by D. Reidel Publishing Company.

Nelson suggests that intentional attitudes can be brought within the compass of a purely computational model by conceiving them as interactive computational structures which possess a sort of self-referring power. His model is supposed to avoid "introducing yet another category of irreducible entities--representations" (Nelson: this volume, p. 148). Various intentional attitudes such as belief, expectation, and perceptual acceptance are apparently thought to be definable, either directly or derivatively, in terms of a basic operation which Nelson calls taking. Here is the computational mechanism that Nelson believes "generates" intentionality:

Mechanism uses self-reference to get mental representations roughly as follows. Reference to the outer world of real objects (de re) or to propositional objects (de dicto) consists of a causal relation of objects to a perceptual recognition system, ... followed by an operation of taking (very much in Chisholm's sense) input to be of such-and-such a type--a red ball, a horse, an odor of mint--even when that input is an orange ball, a mule or a sprig of basil. Taking is achieved by a self-describing automaton ... as follows .... Suppose that a subject is confronted with an orange ball in a mist and she is seeking a red ball. She glimpses a curved three-dimensional surface that is reddish. This puts the subject in a state such that more favorable input would put her in a recognizing state meaning Red Ball. Since the input is vague and uncertain, the subject (tacitly) examines the coded description of herself thereby determining what state she would go into if it were getting clear input. In this way she takes input in such a way as to satisfy expectations.

... The state meaning Red Ball is a function of the causal impact of the object, here an orange ball, and of the subject taking it to be red in order to satisfy her expectations. Then the expression 'this is a red ball' which is associated with the internal state has a reference that derives from that of the state ... (Nelson: this volume, p. 149)

Nelson's stated goal here is to show how the intentional attitude of perceptual taking can be achieved by a self-describing automaton, without introducing semantically interpreted states or representations as primitives. Based upon vague or degraded input, the automaton projects which recognition state it would have gone into if the input had been better. This hypothesis projection is the "taking" state.

We are given no clarification of the automaton's use of the standards of goodness that it applies to input, and, since "favorableness" of input is relative to, and will vary with, different recognition states, it is difficult to grasp Nelson's picture completely, but maybe it is something like the following. Given its input, the automaton assigns different probabilities to each of its various possible recognition states. (These probabilities in effect measure the similarity between the actual input and the hypothetical favorable input for each of the machine's recognition states.) Taking input to be of type p is either the assigning of the highest probability to a certain recognition state, or perhaps is the machine's going into that recognition state as a result of its calculation.
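The reconstruction just offered can be put in rough computational form. The following minimal Python sketch is purely illustrative: the feature sets, the similarity measure, and all names are my assumptions, standing in for whatever probability assignment Nelson's automaton would actually compute.

```python
# Hedged sketch of the reconstructed "taking" operation: score each
# recognition state by the similarity of the degraded input to that state's
# ideal ("favorable") input, then take the input as the best-scoring state.

def similarity(observed, ideal):
    """Fraction of the ideal input's features present in the observed input."""
    return len(observed & ideal) / len(ideal)

def take(observed, recognition_states):
    """Return the recognition state the automaton 'takes' the input to be."""
    return max(recognition_states,
               key=lambda state: similarity(observed, recognition_states[state]))

# Illustrative recognition states and their hypothetical favorable inputs.
recognition_states = {
    "RED_BALL":    {"curved", "three-dimensional", "reddish"},
    "ORANGE_BALL": {"curved", "three-dimensional", "orangish"},
    "RED_CUBE":    {"flat-faced", "three-dimensional", "reddish"},
}

# A misty glimpse of an orange ball: curved, three-dimensional, reddish.
glimpse = {"curved", "three-dimensional", "reddish"}
print(take(glimpse, recognition_states))  # RED_BALL
```

On this reading the intentional content lives entirely in the labels of the recognition states, which is exactly Bender's complaint: the machine computes a best match, but nothing in the computation interprets the winning state.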

Self-reference of this sort is imaginable in sophisticated scanning machines. The laser scanners now used to read universal product codes (the series of lines seen on many product labels) may soon be improved, we might imagine, so that they can successfully identify items which have been very carelessly passed over the detection screen. (In fact, the present machines are already very forgiving.) Such a machine would be given certain expectations (i.e. possible recognition states corresponding to the store's inventory), and would be programmed to assign probabilities to those states based on the vague and uncertain input it receives. It "takes" itself to have scanned Green Giant peas at 59 cents if it concludes that that is the recognition state it would (with high probability) have been in if the item had been more carefully passed over the screen. But this sort of computational story does not explain intentionality, because the machine does not interpret any of its states: its taking states and its recognition states have intentional content only as they are interpreted "from the outside" by the users of the machine. It is widely acknowledged that we can heuristically describe the workings of any computer in intentional terms if we so desire. If Nelson's automaton is as I have described it, I can see nothing new in his suggestion concerning self-reference.

On the other hand, if Nelson is claiming to have explained without the help of irreducible representations how the "taking" achieved by a self-referring automaton is about a red ball in some fuller sense--how the taking state is about a red ball for the machine, as my perceptual state is about a red ball for me--then it seems he has failed, since taking is defined in terms of recognition states that are primitive elements of his story and that are individuated semantically. The automaton computes that more favorable input would have put it in the recognizing state meaning Red Ball, Nelson says. As far as I can see, Nelson has endowed these recognition states with intentionality by mere fiat, and has simply provided a mechanism by which that content can be transferred to another state (perceptual taking). It is possible that I am missing something of importance when Nelson says the state meaning Red Ball is a function of the causal impact of the object and of the subject taking it to be red (I would have thought he would say that taking was a function of the input and the recognition states), but I confess my inability to understand how causal connections between uninterpreted stimuli or input and an internal state could generate intentionality or content.

Using his notion of taking, Nelson defines the concept of perceptual acceptance in purely computational terms, and because this state is claimed to have the "full and relational" intentional content of perception itself (lacking only perception's consciousness), Nelson believes that he has provided support for the claim that perception can in principle be reduced to computational theory. Now, the definition of perceptual acceptance (p. 151) gives the reader the impression that this is an intentional state ascribable to the perceiver, i.e. that it is a personal-level intentional state which has been defined in terms of a computational state of the sub-personal level, viz. taking. Perceptual acceptance relates the subject, S, to an object, x, while taking relates the automaton which S realizes to a stimulus pattern or input, y, caused by x. However, if we recall the introduction of the concept of taking, we unfortunately find that taking is sometimes treated as itself a relation between person and object, as when Nelson says that the input of the taking operation may be "an orange ball, a mule or a sprig of basil" (p. 149), and later when he says that "a subject might take a null object as well as an orange or other object to be a red ball." (p. 149) But, since this construal of the process of taking makes it a mere synonym for perceptual acceptance, I will assume that this is an oversight by the author, and that he indeed intends taking and acceptance as distinct relations. Once it is clear that the input for the taking process is a stimulus pattern and not a distal object, the problem of the intentional content of the automaton's recognition states which I mentioned above is brought into starker relief. What justifies our claim that the machine's state refers in any full sense to the orange ball causing the stimulus, or that it means Red Ball?

These difficulties, it should be noticed, focus on the intentionality of the states which Nelson hopes provide a computational explication of perception, as well as other attitudes such as belief. We haven't yet addressed the problem for computational reduction created by the fact that intentional attitudes like perception are conscious states with a certain qualitative or phenomenological character. This is the problem of Qualia.

2. Qualia and "Mechanism"

Our desires, hopes, and beliefs are "saturated with feeling and emotion", Nelson tells us, and this fact demands that we distinguish conscious intentional states, states with "feeling or awareness", from tacit intentional attitudes, even though the latter may "satisfy all of the requisites of full intentionality." A computational model which accounts for the full intentionality of some mental skills and attitudes (including perceptual attitudes if Nelson is correct and my arguments above are wrong) is still "lifeless" without a way of accounting for phenomenal mind. Nelson suggests a way of handling at least some phenomenal states, and these include "raw feels," feelings, sensations, subjective experiences of various kinds, and perception. The answer is to accept the type-type identity thesis for such states, and to regard these cases of consciousness as epiphenomena of complex logical structures that arise when those structures are realized neurophysically.


This is an idea that has tempted philosophers, especially those who are pessimistic about the functional identity theory. But the thought that a neurally realized informational and computational structure enjoys causal powers not manifested by alternate realizations of the same structure is usually put forward as a general objection to functionalism or computationalism. Nelson's proposal, which he calls "mechanism", suggests that we add the type thesis about selected awareness states to computationalism to yield a more powerful theory. The conscious and holistic quality of these states results from the neural realization of a computational structure, even though that same structure does not give rise to qualia when realized in computer or robot hardware.

... the structures--that is to say, the relations of inputs to outputs and states, and of states to states--would still be functional in the sense we have been discussing. Thus supposing the brain to have a structure for drawing conclusions by modus ponens and a robot to have one also, then the two would be functionally identical--mentally the same in that respect--although one would have conscious experience of a thought while the other not. (p. 147)

The apparent advantage of the type-type identity thesis is that it keeps computers and robots under control: it satisfies our intuitions "that a functional system made out of transistors or tilting beer cans would not feel anything", as Nelson puts it. This salving of our intuitions has its price, however, which Nelson fails to mention. As Block has put it, this kind of physicalism is a chauvinistic theory of the mind, refusing, as it does, to ascribe mental states to entities which do not share human neurophysiology. [3] Lewis has put the point in a colorful way by arguing that any credible theory of mind needs to make a place for "Martian pain", i.e. instances of pain which feel just like ours do, but which differ greatly from our own in their physical realization. [4] This is the kind of consideration which, made by Putnam in the middle sixties, helped to move the philosophical discussion from the identity thesis to functionalism. [5]

Nelson is evidently suggesting that chauvinism is our best strategy, at least for some of our qualia-laden experiences, although, as we shall see later, he believes that the really "knotty" conscious states cannot be adequately handled in this way. But once we partially reinstate the type identity thesis, the pressing question becomes one of limits: how (and why) do we reject a wholesale chauvinism about all mental states? Why should we adopt this expedient for some qualia and not for all?

Nelson's reply seems to be that an epiphenomenal treatment of certain states of awareness is acceptable, but for others it is not plausible. In some cases, we can think of qualia as "add-ons" to intentional attitudes which are otherwise functionally realizable in non-conscious computational systems: this is what he means, I think, when he refers to beliefs that are drenched in feeling and emotion. But I think we need to resist the idea that conscious qualia are causally inefficacious by-products of our neural wetware. One who is concerned, as Nelson is, with the holistic aspects of experience should be prepared to admit that awareness of these aspects, and cognitive processes involving them, make most if not all qualia causally and functionally important and efficacious. It is puzzling that Nelson cannot convince himself that a toothache has nothing to do with his belief that something is wrong, while nonetheless being willing to claim that an automaton whose logical structure yields (tacit) perceptual acceptance states could indeed be said to perceive in the fullest sense of that term if only it were realized in living flesh. Among qualia-laden states which are the least likely candidates for epiphenomenal treatment, I would have placed perceptual experience at the very top of the list. One cannot add on consciousness in a wholesale manner to a particular state of perceptual acceptance (e.g. accepting an object as Red Ball), and expect that the result is conscious perceptual awareness of a red ball. If our advanced scanner were somehow to be realized in the appropriate protoplasm, could we conclude ipso facto that it was conscious of the Green Beans? It seems true that there are intentional but unconscious states, but from this it does not follow that the content or intentionality of a conscious mental state is a feature separable from its consciousness. Qualia and intentionality are intertwined in our awareness states, and perceptual awareness is as knotty as things get.

In addition to these troubles with applying the type-identity thesis to some qualia-states, Nelson's suggestion seems to leave the door open to "chauvinistic", anti-computational arguments regarding all intentional states, whether saturated with qualia or not. John Searle has argued that intentionality in the full and relational sense, "intrinsic" intentionality, to use his term, is a result of the unique causal powers of our nervous systems, and cannot be explicated by the computational model. [6] If I am correct about the inadequacy of Nelson's explanation of intentional attitudes, then he has not provided any philosophical defense against the Searlean position. The question, once again, is why accept type-physicalism for some mental states and not for others?

3. Knot-Counting

Mechanism is on the track of accounting for mental skills, unconscious belief and desire, attitudes, perception, and raw feels, if Nelson is correct. The conditions of full intentionality are satisfied by imaginable robots (the "Golemites"), made of silicon and wire, and their functionally equivalent but "transmogrified" flesh-and-bones cousins can even be ascribed certain feelings as well as perception. The brute fact of


Knotty, Knotty: Comments on Nelson's "New World Knot" 165

consciousness, therefore, is not the monument to the inadequacy of mechanism which some philosophers have thought. This is not always made clear by Nelson; indeed I think that he is somewhat unclear about the precise nature of the "World-Knot". In his introductory paragraphs, the problem is characterized as our inability to theoretically grasp the difference between tacit (unconscious or preconscious) beliefs and conscious feeling-laden belief. Later, he says that "[t]he central problem of mind as I see it is, however, still untouched. We simply do not understand consciousness, feeling, and emotionally laden beliefs and desires, etc." (p. 154). But the stated point, of course, of adding the type-identity theory onto the computational model was to bring some conscious attitudes into the explanatory purview of "the best framework for philosophy of mind and cognitive psychology around". So what exactly is the knot which is not within our reach, according to Nelson?

What the Golemites--even the soft and warm ones--lack are conscious mental states which are causally efficacious; they lack what Nelson calls "causal M-events". Certain neural events, qua conscious experience, have a power to cause certain effects which would not be caused if those neural events had not been contingently identical to conscious mental phenomena. (p. 155) Nelson does not really tell us about the nature of these unique effects of consciousness, but his claim seems to commit him to the conclusion that there are mental skills or attitudes which Golemites cannot have. If we assume, as seems reasonable, that such effects of conscious events alter the logical or computational structure of the organism possessing them, Nelson appears to be claiming that there are certain mental skills or attitudes for which there are no Golemite functional analogues. He says,

If I am right there must be some kind of division between those skills and attitudes in which full understanding and belief are consciously present and those, like the Golemites', that are "automatic". I have no idea where to make the separation in a satisfactory way--and that is the knottiness of the knot.

... [We end,] with the mystery of causal mental events ... and their involvement with intentionality. This central core of "mindedness" in the human being, the "knot," ... is holistic, and for the time being looks to be quite unravelable. (pp. 156-157)

But the efficacy of conscious experience, remember, has not stopped Nelson from claiming that a mechanistic (and epiphenomenal) account of perception is possible: the transmuted Golemites, recall, have true perception. The usual effects of perceptual states are beliefs, desires, verbal and non-verbal behavior, and the like, and Nelson would not deny that these do have functional equivalents among the robots. Similarly, the effects of a toothache, to use Nelson's other example, are beliefs that


something's wrong, and dentist-seeking behavior--simple enough for robot simulation. So what are the special conscious states and their effects which lie beyond the boundaries of Golem Heights? Why doesn't Nelson untie his knot by generalizing the position he proposed for perception?

The examples which he gives are these: Golemites wouldn't seek repair because a part hurt; they are not capable of painful trial and error learning; they could not learn a language or learn to play an instrument through conscious effort, although such skills could be "wired in"; they cannot consciously deliberate. Is the puzzle about these skills that they are "causal mental events", that "[have causal] powers not present in the unconscious"? (p. 154) Do these events have effects on functional structure which cannot be realized by Golemite equivalents? (If this were so, there should not have been such animated recent discussion of examples like Searle's Chinese Room, or Block's Homunculi Heads.) Exactly what part of the functional role of a toothache cannot be realized in the Golemites? Or is the problem simply that these events are conscious? It is difficult to decide whether Nelson endorses either of these alternatives or both. We have seen that it cannot be the latter alternative, if Nelson is to be consistent with his position on the type-identity thesis and on the transmogrified robots, and the former alternative appears to be false.

Perhaps Nelson's idea is this. Certain holistic features of conscious experience enter importantly into our cognitive processes, and result in beliefs, skills, and attitudes which we otherwise would not have. For example, we can come to have certain beliefs about objects from our awareness of holistic qualities in our perceptual experience: we judge from the nature of the brushstrokes in a painting that the work belongs to the artist's late style; awareness of the subtle undertones of its color leads to the belief that the wine before us is more than 15 years old. Learning of all sorts seems to go on in this way, capitalizing on the fact that we are conscious of certain qualia in our experience.

Although robots may simulate the functional effects of a conscious state, and although the fleshy Golemites even possess some consciousness, what cannot be simulated by either computers or Golemites is the actual utilization of the holistic conscious content of a mental state in the generation of other intentional attitudes (such as beliefs) as computational output. "Holistic conscious content" might well embrace both qualia and the intentional or semantic content of conscious states. Neither the phenomenal character of an experience nor the semantic interpretation or "aboutness" of a conscious mental state seems to be expressed or applied at the level of computational processing; neither is "available" to the operations of a computer. We simply do not understand qualia and intentionality, or how to realize them in computational devices. If this is the best account of the "mystery of causal mental events", then clearly Nelson must recant his claims about perception, for there is no clearer example of


efficacy than detailed perceptual experience. Nothing much would be lost by this admission, since we can now see just where the knottiness lies.

For at least two reasons, it is misleading to describe the problem of Mind in the World as a difficulty in separating unconscious skills, attitudes, raw feels, and perception either from causally efficacious mental events or from conscious feeling-laden attitudes. The first reason is that it implies that there is no unresolved problem arising from the intentionality of our mental states--but we have seen that that problem remains even after Nelson's efforts. The second is that placing certain qualia-states such as perception and "raw feels" on the unknotty side of this division implies that it is not the occurrence of qualia but only their use which is puzzling for mechanism, but I fear that few philosophers would agree.

So, as I suggested in the opening paragraph, there are two knots for the philosopher of mind and the cognitive scientist to puzzle over: Intentionality and Qualia. In my view, these are themselves deeply enough intertwined, sharing numerous threads, that perhaps the loosening of one will bring on the weakening of the other. But I am afraid that we are not as close to straightening things out as Nelson believes.

--fJ--

Bender's commentary accentuates the originality of Nelson's distinction between tacit and conscious intentionality. Traditional theories have conceived "intentionality" as a primary structure of consciousness. On these views, it is consciousness that is "tacit" or "reflexive," not intentionality. But Nelson drops this view. By analyzing intentionality in terms of the "taking" function, he is able to discriminate between "taking" operations that are tacit and those that are conscious. In this way Nelson is effectively freeing the analysis of intentionality from whatever conceptual box might be imposed by a theory of consciousness. In fact, his type-identity hypothesis is formulated as the basis for a theory of consciousness that would itself be grounded in his analysis of tacit intentionality.

Bender's critique of this position merits close inspection, but the reader should take care not to be persuaded by it without first evaluating the merit of the underlying interpretation of Nelson's argument, for there might be difficulties in his position that elicit more than one interpretation. Thus, aspects of Bender's analysis seem misleading. For instance, is his example of the scanning device really comparable to Nelson's "Red Ball" illustration of a self-referencing automaton? If so, his criticism might prove devastating to Nelson's arguments; but if the example is not comparable, then the blow should be fairly easy to deflect. Nelson would contend that the two examples differ from one another in at least one


168 Chapter Two

crucial respect: whereas the scanner is little more than a decoding device that accesses memory entries, the automaton is a Turing function which literally describes its own structure to itself and assigns probabilities to possible recognition states based on this self-referential relation.

Nor is this assignment of probabilities merely the result of a "similarity measure," as Bender has argued. Nelson's recognition models make use of "product" sets of individual automata, each of which computes a "characteristic function" that correlates with a particular sensible property. In cases where there is degraded or ambiguous input, each automaton uses its own self-description to determine the character of the "good" input which would have sufficed to move it to a recognition state. In such a model, all automata simultaneously assess the input, degraded or otherwise, and thereby arrive at their respective probabilities. The highest probability determines the state the model moves into, and thus the resulting state of the machine exhibits "tacit" intentionality, not by fiat, but by having satisfied adequacy conditions for intentionality which Nelson claims have been derived largely from phenomenological literature. Hence, the "taking" relation which Bender has criticized as either non-intentional or intentional only by fiat comes to exhibit the crucial capacity to relate sensible input to machine states by way of causal mechanisms.
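The mechanics of such a product recognition model can be sketched in a few lines. This is only a toy reconstruction, not Nelson's actual formalism: the property names, the bit-string "self-descriptions," and the Hamming-style scoring rule are all illustrative assumptions standing in for his characteristic functions.

```python
# Toy sketch of a product recognition model: a bank of simple "automata",
# one per sensible property, each scoring how well a (possibly degraded)
# input matches the ideal input that would drive it into its recognition
# state. All of the concrete details here are invented for illustration.

IDEAL_INPUTS = {            # each automaton's own "self-description":
    "red_ball":   "110011", # the "good" input sufficient for recognition
    "green_bean": "011100",
    "blue_cube":  "101010",
}

def match_probability(ideal: str, observed: str) -> float:
    """Fraction of positions where the observed input agrees with the
    input the automaton 'knows' (via its self-description) it needs."""
    agree = sum(1 for a, b in zip(ideal, observed) if a == b)
    return agree / len(ideal)

def recognize(observed: str) -> tuple[str, float]:
    """All automata assess the input simultaneously; the highest
    probability fixes the state the product machine moves into."""
    scores = {prop: match_probability(ideal, observed)
              for prop, ideal in IDEAL_INPUTS.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

state, p = recognize("110010")   # degraded input, one bit off "red_ball"
print(state, p)
```

The point of the sketch is only structural: the resulting state is fixed by a causal competition among self-describing sub-automata, not by an externally imposed "similarity measure."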

In the next commentary, Christopher Hill addresses several important methodological issues with respect to Nelson's project. Professor Hill contrasts Nelson's reductionist strategy with a "contextualist" approach in which philosophical questions about intentionality are answered by "trying to make explicit the principles that underlie our use of intentional concepts in everyday life." (Hill: this volume, p. 169) In the spirit of Bender's critique of Nelson's analysis of "perceptual acceptance," Hill attacks Nelson's treatment of "perceptual belief," arguing that it is flawed in a way analogous to a weakness that has been charged against attempts to analyze intentionality on behaviorist principles.

It is important to recognize from the start that Hill's concern differs in important ways from Nelson's. His commentary develops principles related to the acquisition and use of intentional concepts: he wants to know how humans come to have intentional concepts. Nelson's concern, on the other hand, is with the structural nature of mental states that he assumes exhibit intentionality. Consequently, we need to consider the extent to which the two approaches are compatible. It may turn out that Hill's position, far from being a criticism of Nelson's project, is best seen as a complementary project which, if successful, might provide Nelson with an important source of "adequacy conditions" which he must have in order to properly define structures he thinks are integral to the machine's design.


CHRISTOPHER S. HILL

Intentionality, Folk Psychology, and Reduction *

Very roughly speaking, intentionality is the characteristic that a mental state has if it represents or is directed on an entity (where the entity may be a proposition or a state of affairs). Philosophers have found this characteristic to be elusive and confusing, and they have been increasingly concerned to explain it. One approach consists in attempting to explain intentionality by giving reductive accounts of such intentional concepts as belief and desire. This approach can take several different forms: in the past reductionists thought that it might be possible to provide reductive definitions of intentional concepts in terms of concepts that stand for forms of behavior, but more recently they have sought to give reductive definitions in terms of the concepts of a formal discipline like computing or information theory. A second approach consists in trying to make explicit the principles that underlie our use of intentional concepts in everyday life. Advocates of this approach maintain that the content of an intentional concept is determined by the role that it plays in these principles. They also maintain that philosophical questions about belief and its fellows can be answered by enumerating the principles and explaining their logical and semantic properties.

The first approach is represented in this volume by R.J. Nelson's interesting paper "The New World Knot." I will try to balance Nelson's contribution by sketching and defending a version of the second approach. My sketch will be programmatic and my defense will be incomplete. I hope, however, that my remarks will suffice to justify the view that the second approach is worthy of further attention. In addition to sketching an alternative to Nelson's position, I will argue that his position suffers from a flaw that may be fairly serious. Specifically, I will try to show that his account of intentionality is jeopardized by a line of thought similar to one of the main arguments against logical behaviorism.

According to advocates of the second approach, the process of acquiring intentional concepts consists largely or entirely in coming to accept a set of principles which in effect constitute a common sense psychological theory (hereafter called "Folk Psychology"). Some of the principles link intentional concepts to concepts that pick out internal states of other kinds (e.g., to concepts that pick out sensations); some link them to concepts that pick out actions; some link them to concepts that pick out environmental factors; and some link them to one another.

169 H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 169-182.

© 1988 by D. Reidel Publishing Company.


170 Christopher S. Hill

The content of intentional concepts is largely or entirely determined by these principles, and it is therefore appropriate to say that the principles provide implicit definitions of the concepts. (This is hereafter called the "Implicit Definition Thesis.")

The main justification for this position is the fact that we appear to have no basis for applying intentional concepts that is independent of the laws of Folk Psychology. It is quite implausible to claim that we first acquire intentional concepts by learning theory-neutral ways of determining whether individuals have specific intentional states (e.g., specific beliefs and desires). By the same token, it is implausible to claim that we learn the laws of Folk Psychology by generalizing from concrete pieces of information about the intentional states of specific individuals. It is much more plausible to say that we learn about the intentional states of individuals by finding it possible to obtain satisfactory explanations and predictions of their behavior when hypotheses about their intentional states are combined with laws of Folk Psychology.

We can get a better sense of the content of the Implicit Definition Thesis by focussing for a moment on the concept of belief. Here are several generalizations about belief that seem to be at least rough approximations to genuine folk laws.

(1) If i believes that p, then i is disposed to act in ways that would tend to satisfy i's desires if p and the other propositions believed by i were true.

(2) If p is saliently instantiated in i's immediate environment, i is attending to information from the part of the environment in which p is instantiated, i has concepts that pick out the various individuals, properties, and relations that are involved in p, and p is compatible with i's prior beliefs, then i comes to believe that p.

(3) If i acquires extremely good evidence that p, i is aware of this fact, and p is compatible with i's prior beliefs, then i comes to believe that p.

(4) If the proposition p is implied by other propositions that i believes, i is aware of this fact, and p is compatible with i's prior beliefs, then i comes to believe that p.

According to the Implicit Definition Thesis, we could not acquire the concept of belief before becoming acquainted with (1)-(4) or, rather, with the actual laws of which (1)-(4) are approximations. We acquire the concept in the course of learning that generalizations like (1)-(4) can be fruitfully


Intentionality, Folk Psychology, and Reduction 171

applied in explaining and predicting the behavior of animate beings.

The Implicit Definition Thesis supports two important methodological principles. Thus, (i) if the thesis is correct, then when someone asks for an explanation of the nature of an intentional state, or for an explanation of the representational relations between intentional states and the entities on which they are directed, it is possible to provide an adequate answer by enumerating the laws of Folk Psychology. Moreover, (ii) when someone objects that an intentional concept may not be well defined, it is possible to arrive at an adequate assessment of the objection by investigating the logical and semantic properties that the laws possess.

II

When it is claimed that certain concepts C1, ..., Cn are implicitly defined by a set of principles P, where P also involves a set of independently meaningful concepts B1, ..., Bm, it is possible to understand the claim in two quite different ways. First, it can be taken to mean that unique satisfaction conditions accrue to C1, ..., Cn in virtue of their roles in P and the independent satisfaction conditions of B1, ..., Bm; that is to say, it may mean that P forces the satisfaction conditions associated with C1, ..., Cn to remain constant as long as the satisfaction conditions associated with B1, ..., Bm are held constant. This is roughly the sense that "implicit definition" has in presentations and discussions of Beth's Definability Theorem [1]. Second, it can be taken to mean that the definition provides a "partial interpretation" of C1, ..., Cn. On this construal, it means (i) that the definition does not assign unique satisfaction conditions to any of C1, ..., Cn, and (ii) that it nonetheless restricts the satisfaction conditions of each of C1, ..., Cn to the members of some non-empty class of satisfaction conditions.

It follows, of course, that there is an ambiguity in the special case in which C1, ..., Cn are intentional concepts, P is the class that consists of the laws of Folk Psychology, and B1, ..., Bm are the non-intentional concepts that occur in the laws. The Implicit Definition Thesis may mean that the satisfaction conditions of our intentional concepts are uniquely determined by the laws of Folk Psychology, but it may also mean that the former are partially constrained by the latter.

There is a persuasive argument for the view that the Implicit Definition Thesis is false on the first interpretation. The argument that I have in mind may be summarized as follows: As can be seen by reflecting on (1)-(4) and other examples, if a law of Folk Psychology states a sufficient condition of the applicability of an intentional concept, then the part of the law which expresses the condition contains at least one intentional concept. In other words, when P, C1, ..., Cn and B1, ..., Bm have the values that are assigned to them in the previous paragraph, it turns out that there is no member of P such that (i) it has the form of a conditional, (ii) there is at least one member of C1, ..., Cn in its consequent, and (iii) there are only members of B1, ..., Bm in its antecedent. Unfortunately, this means that it is impossible for unique satisfaction conditions to accrue to C1, ..., Cn from P and B1, ..., Bm. Since there are no conditions containing only members of B1, ..., Bm which are sufficient for the applicability of the Ci's, the satisfaction conditions of C1, ..., Cn can vary even though the satisfaction conditions of B1, ..., Bm are held constant. (The claim made in the last two sentences can be defended by a line of thought that is closely related to the proof of Beth's Theorem.) An advocate of this argument would grant that it is possible to use P to assign unique satisfaction conditions to some one Ci by using the satisfaction conditions of B1, ..., Bm and the remaining Cj's. But he or she would deny that this fact is relevant in the present context, and would in justification point out that we are here considering the question of whether it is possible to use P to assign unique satisfaction conditions to C1, ..., Cn as a class.
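The shape of this first argument can be seen in miniature with a toy model of my own (it is an illustration, not Hill's argument or Beth's theorem): if every law gives only a necessary, never a sufficient, non-intentional condition for a concept C, then fixing the extension of the non-intentional concept B still leaves the extension of C undetermined.

```python
# Toy illustration of underdetermination: the only "law" says C(x) -> B(x),
# a necessary condition with no non-intentional sufficient condition.
# Holding B fixed, many distinct extensions of C satisfy the law.
from itertools import chain, combinations

DOMAIN = ["a", "b", "c"]
B = {"a", "b"}                       # non-intentional facts, held constant

def satisfies_laws(C: set) -> bool:
    """The sole law: whatever is C must be B (no converse direction)."""
    return C <= B

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

admissible = [set(C) for C in powerset(DOMAIN) if satisfies_laws(set(C))]
print(len(admissible))   # every subset of B qualifies: 4 distinct extensions
```

With B fixed at two members, four extensions of C remain admissible, so the "laws" fail to determine unique satisfaction conditions for C.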

In addition to this argument for the claim that the satisfaction conditions of C1, ..., Cn are not uniquely determined by P and the satisfaction conditions of B1, ..., Bm, there is also an argument for the claim that the constraints imposed by P and B1, ..., Bm are extremely weak. According to this second argument, it is not only true that P contains no non-intentional sufficient condition of the applicability of an intentional concept. We must also recognize that the necessary conditions provided by P are largely anemic. Thus, suppose that L is a member of P which states a necessary condition of the applicability of some Ci; that is, suppose that it is a conditional with at least one Ci in the antecedent. It will be found that L falls into one of the following categories. First, the necessary condition that L provides may involve one or more intentional concepts (i.e., there may be one or more members of C1, ..., Cn in the consequent of L). In this case, the stringency of the constraint imposed by L will depend largely on the stringency of the constraints that are imposed on the members of C1, ..., Cn in the consequent by other members of P. If the other members of P leave a lot of slack, L will leave a lot of slack as well. Second, there may be no members of C1, ..., Cn in the consequent of L, but the consequent may be so vague or so general that it fails to narrow the range of satisfaction conditions in an interesting way. And third, there may be more than one intentional concept in the antecedent of L. In this case, even if the necessary condition provided by L is free from intentional concepts, we will not be able to use L to filter out satisfaction conditions for any Ci that occurs in the antecedent unless we have some independent way of constraining the satisfaction conditions of the other Ci's that occur in the antecedent. (It will be possible to use L to show that it is wrong to assign a certain set S1 of satisfaction conditions to one concept while also assigning a certain different set S2


to a second concept, but L will still allow us to assign S1 to the first concept provided that we make compensatory changes in S2 before assigning it to the second concept.)

The first argument supports the view that assignments of satisfaction conditions to intentional concepts are inevitably underdetermined by assignments of satisfaction conditions to non-intentional concepts, but it is compatible with the view that the degree of underdetermination is fairly low. The second argument adds fuel to the flames. If it is accepted, we must recognize an extremely high degree of underdetermination.

With the second argument in view, it's easy to sympathize with Quine's contention that intentional concepts should be demoted to second-class status. [2] After all, if the content of intentional concepts accrues to them from the laws of Folk Psychology, and such content is radically underdetermined by those laws, then truth values for ascriptions of intentional states are largely independent of the facts and the content of intentional concepts. But if truth values of such ascriptions are largely independent of the facts and the content of intentional concepts, then the assumption that the ascriptions have truth values begins to seem artificial, pointless, and even just plain wrong. In short, the second argument seems to lead ineluctably to a non-realist construal of intentional states.

III

In this section I respond to some of the questions that are posed by the arguments we have been considering. As for the first argument, it seems to me that it is as solid as granite. It establishes conclusively that the satisfaction conditions of intentional concepts are not fixed by the laws of Folk Psychology. I do not see this, however, as a result that is particularly embarrassing to fans of Folk Psychology or to advocates of the Implicit Definition Thesis. I think we should be willing to live with a degree of indeterminacy in a set of theoretical concepts provided that the following Conditions of Adequacy are satisfied: first, the concepts in the set are useful (in the sense that they enable one to explain a broad range of facts that can be fully described in terms of independently meaningful concepts), and second, the degree of indeterminacy is comparatively small (in the sense that the admissible interpretations of the concepts are heavily constrained by laws that state necessary conditions of the applicability of the concepts). It is fairly clear that intentional concepts satisfy the first of these Conditions, and I feel that there is no reason to doubt that they also satisfy the second.

We are all aware that it is possible to explain a broad range of non-intentional phenomena by combining hypotheses which ascribe intentional states to individuals with the laws of Folk Psychology. Thus, for example, we can explain a great amount of human behavior by combining hypotheses


about beliefs and desires with (5).

(5) If (a) i desires that q, (b) i believes that he or she could bring it about that q by doing A, (c) it is physically possible for i to do A, and (d) there is no proposition r such that i desires that r more than i desires that q and i believes that he or she would reduce the probability of r by doing A, then i does A.
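The way clauses (a)-(d) of (5) jointly license a behavioral prediction can be rendered as a small decision rule. This is my own reconstruction with invented data, not part of Hill's text: the agent representation, the numeric desire strengths, and the comparison of risked against secured desires are all illustrative assumptions.

```python
# A minimal sketch of law (5) as a decision rule: i does A when A is
# believed to secure something i desires, A is physically possible, and
# no more strongly desired proposition is believed to be risked by A.
# (All names and data below are invented for illustration.)

def does(agent, action):
    d = agent["desires"]                   # proposition -> strength
    secures = agent["believes_secures"]    # action -> propositions
    risks = agent["believes_risks"]        # action -> propositions
    if action not in agent["can_do"]:
        return False                       # clause (c) fails
    gained = secures.get(action, set())
    if not any(q in d for q in gained):
        return False                       # clauses (a)-(b) fail
    strongest_gain = max(d[q] for q in gained if q in d)
    # clause (d): no more-strongly-desired r believed to be put at risk
    return all(d.get(r, 0) <= strongest_gain
               for r in risks.get(action, set()))

agent = {
    "desires": {"quenched_thirst": 5, "stay_dry": 8},
    "believes_secures": {"drink": {"quenched_thirst"}},
    "believes_risks": {"drink": set()},
    "can_do": {"drink"},
}
print(does(agent, "drink"))
```

The sketch only makes vivid the structure of the law: the prediction "i does A" falls out of a conjunction of desire, means-end belief, possibility, and the absence of an overriding desire.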

Closely related laws enable us to explain sequences of behavioral phenomena in terms of hypotheses concerning long range goals and beliefs about the more or less distant future. Moreover, it is possible to explain a number of i's non-intentional psychological states by combining hypotheses about i's beliefs and desires with laws like (6) and (7).

(6) If i desires that q and i comes to believe that q will obtain, then i experiences pleasure and excitement, provided that there is no proposition r such that (a) i desires that r more than i desires that q and (b) i simultaneously comes to believe that r will never obtain.

(7) If i desires that q and i comes to believe that q will never obtain, then i experiences displeasure and frustration, provided that there is no proposition r such that (a) i desires that r more than i desires that q and (b) i simultaneously comes to believe that r will obtain.

It would of course be wrong to maintain that the explanations afforded by laws of this kind can match the explanations afforded by physics in precision and detail, but it would be no less wrong to urge that they do not count as genuine explanations.

In view of (5), (6), (7), and related laws, it is natural to think that intentional concepts satisfy the first Condition of Adequacy. But what about the second Condition of Adequacy? According to the second of the two arguments that we considered in section II, the constraints that govern interpretations of intentional concepts are extremely weak. Is this true? Do we have reason to think otherwise? My own view is that the constraints imposed by Folk Psychology are at least fairly strong. Here are two principles that support this view:

(8) If p is a proposition that can be decided by observation (in the sense that it is possible for p to be strongly confirmed and also strongly disconfirmed by sense experiences), and i believes that p, then i is in a state S such that (a) the probability that i is


in S is low when i has no sensory evidence that favors p, and (b) the probability that i is in S increases as the sensory evidence that is available to i increases.

(9) If p is a theoretical proposition that admits of empirical assessment (in the sense that it can be confirmed or disconfirmed by inductive arguments), and i comes to believe that p, then i comes to be in a state S such that (a) the probability that i is in S increases with the strength and simplicity of the inductive arguments which lead to p from propositions that i already believes, and (b) the probability that i is in S decreases with the strength and simplicity of the inductive arguments which lead to not-p from propositions that i already believes.

Given that philosophers always underestimate the complexity of things in formulating general principles, it would be silly for me to claim that (8) and (9) are among the laws that count as actual constituents of Folk Psychology. Like (1)-(7), they should be taken cum grano salis. However, I do wish to claim that there are actual constituents of Folk Psychology to which (8) and (9) are rough approximations. These latter attest to a feature of belief ascriptions that has often been noted in the literature--namely, that they carry a teleological presupposition. According to (8) and (9), beliefs are usually of considerable potential value to an organism. Thus, according to (8) and (9), if p is an empirically testable proposition, then believing that p is correlated with situations in which there is a fairly high probability that p is true. [3]

Now (8) and (9) can be used to filter out a wide range of intuitively unacceptable ascriptions of belief. To be sure, (9) does not provide much of a filter when it is taken alone, for the concept of belief figures as prominently in its consequent as in its antecedent. It can be used to obtain the conclusion that i probably does not believe that p, but only in a context in which we have reason to think that i has other beliefs which tend to support the proposition that not-p. However, suppose that we are not trying to determine whether i has a single belief but are rather concerned to assess an hypothesis H which attributes a large set of beliefs to i. Suppose also that H contains several sub-hypotheses to the effect that i has beliefs which support the proposition that not-p. In this case, given that H claims that i believes that p, law (9) will authorize us to conclude that H is probably wrong. Thus, although (9) cannot be used by itself to filter out hypotheses which attribute single beliefs to i, it is a useful tool for filtering out hypotheses that attribute sets of beliefs.

It should be mentioned, however, that an advocate of the second argument could concede everything that has been said thus far about (9). The second argument is based in part on the following line of thought: If a


176 Christopher S. Hill

law is like (9) in that it has one or more intentional concepts in both its antecedent and its consequent, then it cannot be used as a basis for a categorical refutation of an hypothesis to the effect that i is in a certain intentional state. To be sure, it can be used as a categorical refutation of an hypothesis which is like H in that it is concerned with a set of intentional states. But to say that a law counts against an hypothesis which ascribes the set {S, S1, S2, ..., Sn} to i is to say no more than that it provides a purely conditional refutation of an hypothesis that ascribes S to i. Thus, the law can only be used to establish the following conditional claim: i cannot be in S if it is true that i is in S1, S2, ..., Sn. And a claim of this sort is too weak to be of interest.

The second argument is also based on a closely related line of thought about laws having no intentional concepts in their consequents but which have two or more intentional concepts in their antecedents. This second line of thought runs as follows: It is impossible to combine such laws with non-intentional facts to obtain a categorical refutation of the hypothesis that i is in S. The most that can be shown is that either i is not in S or i is not in one or more of the states S1, S2, ..., Sn. But a proposition of this sort has too little information to be of value. We want the laws of Folk Psychology to rule out ascriptions of intentional states categorically. Disjunctive and conditional refutations are unacceptably weak.

In evaluating these lines of thought, we must remember that a set of conditionals and disjunctions can provide a categorical refutation of an hypothesis even if no one of the conditionals counts as a categorical refutation when taken in isolation from its fellows. For example, suppose that "If H1 and H2 then N" is an instantiation of a law of Folk Psychology containing two intentional concepts in its antecedent and a non-intentional concept in its consequent; that "If H1 then H2" is an instantiation of a law that contains intentional concepts in both its antecedent and its consequent; and that "It's not the case that N" reports a non-intentional fact. Together these propositions entail "It's not the case that H1." That is to say, together they constitute a non-conditional, non-disjunctive refutation of a proposition that ascribes an intentional state to i.
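The entailment can be checked line by line. Treating the three instantiations as premises:

```latex
\begin{array}{ll}
\text{(P1)} & (H_1 \wedge H_2) \rightarrow N \\
\text{(P2)} & H_1 \rightarrow H_2 \\
\text{(P3)} & \neg N \\[2pt]
\multicolumn{2}{l}{\text{Suppose } H_1.\ \text{By (P2), } H_2;\ \text{hence } H_1 \wedge H_2,\ \text{and by (P1), } N,}\\
\multicolumn{2}{l}{\text{contradicting (P3). So } \neg H_1 \text{ (reductio).}}
\end{array}
```

Note that neither (P1) with (P3), nor (P2) alone, yields more than a conditional or disjunctive verdict on $H_1$; the categorical refutation emerges only from the premises taken jointly.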

In other words, the strength of a set of constraints can be much greater than the sum of the strengths of the individual members of the set. It follows, of course, that it is impossible to establish the claim that the necessary conditions provided by Folk Psychology are weak by pointing to the logical properties of individual laws. In order to argue for this claim, it would be necessary to enumerate all of the constituent laws of Folk Psychology and to show that they are jointly satisfiable by a number of conflicting hypotheses about i's intentional states. Since we do not as yet have a full enumeration of the constituent laws, we could not possibly have a good reason to believe that the necessary conditions provided by Folk Psychology are weak.


We must acknowledge, then, that our current perspective leaves plenty of room for optimism. We have no reason to doubt that the admissible interpretations of our intentional concepts are restricted by the laws of Folk Psychology to a class that is relatively small and homogeneous.

But more. If it should turn out that the constraints imposed by Folk Psychology are strong, then there would be logical room for a rather interesting view about the sufficient conditions of the truth of ascriptions of intentional states. Thus, there would be room for the view that we implicitly accept a principle which goes far beyond all of the sufficient conditions afforded by individual laws of Folk Psychology. The principle I have in mind is based on the concepts of explanatory power and simplicity. To be more specific, where S is a set of hypotheses that ascribe intentional states to an individual i, the principle asserts that the members of S are true if S has the following properties: (a) S satisfies the "closure" conditions expressed by laws like (2), (3), and (4) above; (b) S is compatible with the set of statements that consists of the laws of Folk Psychology and of all true positive and negative statements about i's behavior, i's non-intentional psychological states, and i's behavioral and psychological dispositions; (c) when combined with the laws of Folk Psychology, S has as much explanatory power as any competing set of hypotheses about i's intentional states; and (d) S enjoys at least as much simplicity as any of the alternative sets that have properties (a)-(c). It is clear that this principle presupposes that the constraints imposed by Folk Psychology are fairly strong. Thus, if it is possible for two incompatible sets of hypotheses to have properties (a)-(d), then the principle can be used to derive the unacceptable result that two hypotheses can be true even though they are incompatible. The question of whether it is possible for incompatible sets to possess (a)-(d) depends in large measure upon the filtering power of properties (a) and (b), and the filtering power of (a) and (b) depends in large measure upon the strength of the constraints imposed by Folk Psychology. [4]
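The principle can be pictured as a filter-then-rank procedure over candidate sets of ascriptions. The sketch below is only an illustration: the fields `closed`, `compatible`, `power`, and `simplicity` are invented stand-ins for the verdicts on conditions (a)-(d), which the principle itself leaves to Folk Psychology and the non-intentional facts to supply.

```python
def select_ascriptions(candidates):
    """Filter-then-rank sketch of the selection principle: each candidate
    is a set of intentional-state ascriptions, represented here as a dict
    with precomputed verdicts and scores for conditions (a)-(d)."""
    # (a) closure under laws like (2)-(4); (b) compatibility with the laws
    # of Folk Psychology plus the non-intentional facts about i
    viable = [s for s in candidates if s["closed"] and s["compatible"]]
    if not viable:
        return None  # no admissible set of ascriptions survives the filters
    # (c) keep the sets with maximal explanatory power...
    best_power = max(s["power"] for s in viable)
    strongest = [s for s in viable if s["power"] == best_power]
    # ...(d) and among those, pick a maximally simple one
    return max(strongest, key=lambda s: s["simplicity"])
```

Conditions (a) and (b) do the filtering; as the paragraph above notes, their power depends on how strong the folk-psychological constraints turn out to be.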

The principle authorizes inferences about intentional states that are not authorized by the laws of Folk Psychology. Because all of the sufficient conditions provided by individual laws of Folk Psychology contain intentional concepts, they cannot be used to show that ascriptions of intentional states to i are true unless there is another set of hypotheses about the intentional states of i that is already known to be true. However, in part because the principle gives a sufficient condition for the truth of classes of ascriptions (where the classes are large enough to satisfy the appropriate closure principles and to have the virtue of explanatory completeness), and partly because it uses a set of concepts that is essentially richer than the set that is deployed in the individual laws of Folk Psychology (note, for example, that it makes use of the predicate "is a law of Folk Psychology"), it transcends the limitations of


other sufficient conditions. It entitles one to conclude that certain hypotheses about the intentional states of i are true even when one has no prior knowledge of the truth values of other hypotheses about i's intentional states.

IV

In the Logic of Mind and a series of papers [5], R.J. Nelson has attempted to explain some of our main intentional concepts by defining them in terms of concepts constructed from the primitives of automata theory. Although I feel that Nelson's writings contain a number of ideas of considerable value, I have reservations about the adequacy of several of his central definitions, one of which I will state in the present section.

Nelson's theory of belief focusses on beliefs about observable objects and events. His approach belongs to a family of theories which analyze perceptual belief in roughly the following way: Where A is an animate being and p is a state of affairs, A has a perceptual belief to the effect that p obtains if and only if (i) A incorporates a complex information processing system S such that S's contribution to A's perceptual states can be described without using intentional terms, and (ii) there is a type T of input to A's sensory apparatus such that (a) S is capable of distinguishing between inputs of type T and inputs of some other type, (b) inputs of type T normally attest to the truth of propositions like p, and (c) S recognizes one of A's current stimuli as an input of type T. This crude formulation does not come close to doing justice to the subtlety and power of Nelson's theory. Thus, in describing the information processing system S, he offers a number of illuminating suggestions about the nature of such operations as rectifying degraded input, compensating for gaps in input, and using contextual factors to resolve ambiguities in input. However, the details of his theory are not relevant here. My objection to the theory concerns a feature that can be captured by a rough sketch like the one just given.

The feature I have in mind is this: since Nelson tries to give an account of perceptual belief that does not contain any intentional concepts, he presupposes that it is possible to state a sufficient condition of a perceptual belief's coming into existence at a given time without using the concept of belief in stating the condition. It follows from this presupposition that one can state a necessary and sufficient condition for having some one belief that does not refer to any of an individual's prior beliefs. And it follows in turn that one can state a necessary and sufficient condition for having some one belief which differs from the sufficient condition given in law (2) above in that it does not acknowledge the possibility that an inconsistency between current input and prior beliefs will disrupt the processes that normally lead to fixation of belief.

It seems to me that this proposition is false. Animate beings are


frequently led by prior beliefs to discount information that would otherwise have caused them to form new beliefs about the environment. Thus, you may decide against adopting the belief that there is a rabbit in front of you because you know that your neighbor is addicted to playing tricks with movie props. Or you may decide against adopting the belief that it is a dagger you see before you because you have reason to think that it is a holographic image. Further, although your dog is probably incapable of having prior beliefs of this level of sophistication, he or she is capable of detecting certain basic discrepancies between past sensory information and the information available in the present, and such discrepancies may prevent him or her from adopting a new perceptual belief.

Here is a different way of putting the point. It is clear that the information one has extracted from past experience can prevent one from forming new perceptual beliefs. Every comprehensive theory of belief--whatever its goals, whatever its primitives--must do justice to this fact in one way or another. Now it follows from this fact that it is impossible to give an adequate necessary and sufficient condition of perceptual belief unless one includes a clause that allows for the influence of information extracted from past experience on belief formation. But there is no clause in Nelson's analysis of perceptual belief that refers to such influence. Hence, Nelson's analysis is inadequate. Moreover, since we don't know how to pick out the information-bearing states of an organism without relying on intentional terms, it seems Nelson would have to use an intentional term in order to fill this lacuna in his analysis. In particular, it seems that he would have to use "believes" or an equivalent thereof.

This objection to Nelson's theory of belief is of course closely related to the argument against the first interpretation of the Implicit Definition Thesis that we considered in section II. Moreover, both are similar to a number of lines of thought which can be found in the standard literature. For example, there are similar arguments in H.P. Grice's well known paper entitled "Method in Philosophical Psychology." [6] (Grice's arguments are directed primarily against attempts to give behaviorist analyses of intentional concepts, but he seems to have been tempted by the view that every attempt to break out of the net of intentionality can be defeated by a line of thought of the same general type.)

* * *

Hill's commentary indicates a method for analyzing the content of intentional concepts by appeal to specific laws of Folk Psychology. His central thesis is that the content of intentional concepts "is largely or entirely determined" by a set of principles "which in effect constitute a common sense psychological theory," and which are said to provide "implicit definitions" of these concepts (Hill: this volume, p. 169). To illustrate,


180 Chapter Two

he examines the intentional concept of belief, and offers several "rough approximations" to genuine folk laws, arguing that we "could not possibly acquire the concept of belief before becoming acquainted with [these laws]." (p. 170) In the course of learning that these "generalizations" can be "fruitfully applied in explaining and predicting the behavior of animate beings," we come to acquire our concept of belief; thus, Hill concludes, the content of the intentional concept derives from an awareness of the generalizations themselves. In other words, what we mean by our concept of "belief" must be distilled from the internal logic of the set of folk laws that establish the necessary and sufficient conditions for belief. But does this approach conflict with Nelson's reductionist program?

Consider Hill's criticism of Nelson's treatment of "perceptual belief." Nelson would quite likely propose replacing Hill's second "folk law" with one that appeals to "salient expectations" instead of "prior belief." This would seem to allow for an adequacy condition that could be satisfied by the "taking" relation (which is at the center of Nelson's constructive definition of "perceptual belief"). Of course this analysis might still fall short of satisfying the intuitions of mainstream phenomenologists and philosophical psychologists. But the fact that they remain unconvinced that their intuitions about belief have been satisfied may mean only that a further fine tuning of Nelson's structural definition is in order, and not that his approach is itself untenable or unpromising.

In response to Hill's proposals, Nelson points out that even if intentional concepts are not definitionally reducible to physicalist terms, "this is not the same as ontological reduction." That is, definitional irreducibility does not rule out an ontological reduction. For example, the Peano axioms provide an implicit definition for the concept of natural number in a way that makes it plausible to say that "there is no set of principles for 'natural number' not containing other arithmetic terms." This, in turn, is "quite analogous to asserting that there is no set of principles about belief that does not contain other intentional concepts." Even so, it remains the case that all arithmetic terms "can be defined in terms of expressions designating sets; and the Peano axioms can be proved as sentences of set theory." Thus, "number theory is reduced to set theory" in a way that "leaves arithmetic intact, but with a new underlying ontology." (Nelson: unpublished reply to Hill) If a reduction of cognitive theory is feasible, including a reductive analysis of intentionality, this would mean that relevant features of mind can be taken as computational relations over purely material things, and Hill's Implicit Definition Thesis would simply serve to remind us of the unavoidable fact that any "slack" in Folk Psychology will always be reflected as "a like indeterminacy" in computational theory.
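The arithmetic case Nelson appeals to is standard. In the von Neumann construction, the Peano primitives become defined set-theoretic notions:

```latex
0 := \varnothing, \qquad \operatorname{succ}(n) := n \cup \{n\}, \qquad
\text{so } 1 = \{\varnothing\},\ \ 2 = \{\varnothing, \{\varnothing\}\}, \ \ldots
```

Under these definitions the Peano axioms are provable as theorems of set theory; arithmetic practice is untouched, and only the underlying ontology changes--which is just the moral Nelson draws for intentional concepts.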


Commentary 181

2.3 Transaction

We have seen that Nelson's project unfolds within the context of an avowedly functionalist framework, indeed, one that turns over as much of the mental as possible to computational analysis. But Nelson's approach differs in an important way from those exhibited earlier in the essays by Rey and Moor. They sought to challenge the credibility of intentional concepts, Rey emphasizing the metaphysical problems, Moor pointing to explanatory leverage as the only justification for the use of such concepts. But in each case the result was the same--a casting aside of what Nelson refers to as the "phenomenological evidence." Nelson, on the other hand, refuses to deny the phenomenological evidence. This is shown by his willingness to countenance certain dimensions of mental experience as quite real and in need of analysis, despite the fact that they have yet to be captured in the net of an existing functionalist model, even his own. As Nelson puts the matter, near the conclusion of his essay, "We simply do not understand consciousness, feeling, and emotion-laden beliefs and desires." (Nelson: this volume, p. 154) We can explain tacit attitudes, he contends, "yet phenomenal experiences--essential ingredients of the intentional--are completely outside the purview of functionalism" (p. 154). This, in effect, is the new "world knot" with which the cognitive sciences must wrestle.

When Nelson's programmed mechanism moves into a recognition state, it moves into the state associated with the highest probability. Something has caused this "move." Was it the input? Or was it the design state of the automaton which happened to have the highest probability? In Nelson's example, the input was an orange ball; yet the input was taken to signal the presence of a red ball. From this it would appear that the design state of the mechanism plays a major role in establishing the character of a specific "taking" relation. But the "misty" character attached to the orange input plays an important role as well, for it triggers a recognition of ambiguity. This recognition, in the context of the subject's anticipations, sets in motion a process that generates compensations. These serve, in effect, to translate the orange input into the projected presence of a red ball. If this projection is associated with the highest probability, the result will be a recognition state that means "red ball." But why does the automaton compensate for a lack of redness in the input?

It is highly significant that the subject in Nelson's example is looking for a red ball at the time that it receives the orange input. The "orangish" spherical shape looks "reddish": is it the red ball? According to Nelson, the next move "puts the subject in a state such that more favorable input would put her in a recognizing state meaning 'Red Ball'" (p. 149). The subject attempts to determine the state she would be in if she were receiving clear input and, in the process, interprets the input in


a way that satisfies her expectations. As Nelson points out, additional input might very well "frustrate her expectations," but by now the move to a recognition state has already established a semiotic relation between the subject and her input. Nelson has argued that this relation "is a function of the causal impact of the object, here an orange ball, and of the subject taking it to be red in order to satisfy her expectations." (p. 149) From this standpoint, the expression "This is a red ball" turns out to have a reference that derives, not from a fact about the object (which is, after all, an orange ball), but from the reference intrinsic to a specific recognition state, one that serves on this occasion to relate the subject (as if) to a red ball (p. 149).
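The role of expectation in Nelson's example can be caricatured in a few lines. The transition table and the additive `bias` below are invented for illustration; Nelson's actual automata are far richer than this sketch:

```python
def next_recognition_state(input_color, expectation, bias=0.3):
    """Toy next-state rule: base scores come from the input, but the
    expected state gets an additive boost, so ambiguous ('misty') input
    can be taken as the expected object."""
    base = {
        "orange": {"orange ball": 0.5, "red ball": 0.4},  # ambiguous input
        "red":    {"orange ball": 0.1, "red ball": 0.9},  # clear input
    }[input_color]
    scores = dict(base)
    # expectation-driven compensation for degraded or ambiguous input
    scores[expectation] = scores.get(expectation, 0.0) + bias
    # move to the recognition state with the highest resulting score
    return max(scores, key=scores.get)
```

With ambiguous "orange" input and the expectation "red ball", the boost tips the choice to the expected recognition state; absent that expectation, the same input is taken at face value.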

A significant feature of Nelson's recognition scheme is its incorporation of the potential for correction, refinement, and other forms of upgrading in response to later input. This implicates two important factors: the role of a "temporal schema" serving to link the present "now" to a future "now" which is already "pregnant" with possibilities; and a relation to "otherness," predicated on a recognition of the potential for corrective feedback. In order to explore these factors, we shall turn our attention to the "cognitive phenomenology" of Edmund Husserl.

In the next major paper, James A. Tuedio augments Nelson's provisional analysis of tacit intentionality by applying Edmund Husserl's phenomenological method toward an understanding of "intentional transaction" as a primary structure of mind. In this, Professor Tuedio's focus is on the intentional content of mental experience, that is, on the relationship between what Nelson called "recognition states" and the things we believe ourselves to be experiencing directly. What are the structures that bind us to the world, and how do these structures condition the possibility of experiencing this world as an objective reality? Whereas earlier, McIntyre chose to focus on Husserl's analysis of mental reference in connection with the development of a representational theory of mind, Tuedio emphasizes the connection between Husserl's position and the problem of objective reference. Viewed from this perspective, Husserl's reflections on mental reference coalesce into a theory of intentional transaction.

Beginning with a summary of the pivotal aspects of Husserl's phenomenological method, Tuedio reveals the "ontological neutrality" of Husserl's approach to the problem of objective reference. He then presents key elements of Husserl's theory of intentionality and, by emphasizing observations on the nature and role of "noematic prescriptions," is able to develop Husserl's proposed resolution to this problem. Tuedio argues that this approach implies both that the mind is engaged in a form of intentional transaction with its environment, and that these transactions are structured in accordance with the ideal of "perfect correspondence" between intention and object.


JAMES A. TUEDIO

Intentional Transaction as a Primary Structure of Mind

Edmund Husserl's phenomenology addresses itself to issues that are integral to the study of mental experience. [1] In this respect, it would not be inappropriate to refer to Husserl's philosophy as cognitive phenomenology. In fact, given Husserl's emphasis on the importance of establishing the strictly scientific character of the philosophical enterprise, one can extend this claim even further: cognitive phenomenology is, if Husserl is correct, the only truly rigorous foundation for the enterprise of cognitive science. We will investigate this claim at the conclusion of this essay. But first, we need to understand Husserl's position with respect to the nature and function of minds. Since Husserl's reflections on the structure of mental experience developed in large part out of an attempt to resolve the enigma of objective reference, I will present a capsule view of his proposed resolution to this problem. In the process, I will attempt to show that Husserl viewed mental operations as transactions--specifically, intentional transactions--between the life of conscious subjectivity and all that stands over and against consciousness as an object or objective state of affairs. In the end, it may be possible to show that Husserl's theory of intentionality should be a crucial ingredient in any attempt to model or comprehend the functional nature of the human mind.

1. Husserl's Strategy for Resolving the Enigma of Objective Reference

Living amid the context of my everyday "existential" concerns, I simply take it for granted that my perceptual acts have reference to the things of the world. [2] I take up things, concern myself with states of affairs confronting or engaging me, and otherwise occupy myself with the "obvious" presence of things. But as philosophers, we are not allowed the luxury of taking for granted the possibility of the obvious. If correlation between acts, referents and the things of the world is "obvious" to the natural man engaging his existential responsibilities, this very same correlation is clearly problematic for the philosopher. Of course, the problem is not that there might prove to be no such correlation. What, after all, would be the basis for doubting the reference of perceptual acts to things in the world? Rather, the philosopher's task, according to Husserl, is to account for the conditions or structures which ground the possibility of intentional reference to things in the world. In other words, we must attempt to explain why it is that we take ourselves to be experiencing objects in the world, and what it

183 H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 183-198. © 1988 by D. Reidel Publishing Company.


is about the structures of conscious life that makes such experience possible. The only question is how and where to begin.

1.1 The Need for a Special Method

We must be especially careful, Husserl warns, not to depend upon or otherwise impose a conceptual frame that presupposes the possibility in need of articulation. To operate with a methodology rooted in the taken-for-granted structures of everyday life would presuppose at least one linkage or correlation between the life of consciousness and an independent realm of things at the same time that we are seeking to establish the possibility of this linkage in the first place. To avoid this pitfall, we must abandon the standpoint of natural, everyday reflection and adopt a standpoint that is neutral with respect to the issue of intentional reference. In Husserl's words, we must abstain from all belief in "the general thesis of the natural attitude" [Ideas I, ch. 3]. This requires a special method, one which motivates abstention from the belief-structures of the natural standpoint while simultaneously opening up access to a neutral, philosophical standpoint that will ground articulation of the structures underlying the possibility of objective reference. Husserl calls this method "phenomenological" because its sole aim is to bring to articulation the logos of the phenomenon, without invoking any metaphysical or ontological assumptions about the nature of the objects we experience. [4]

1.2 Phenomenological Method

Phenomenological method employs two distinct yet complementary operations of thought: reduction and reflection. By means of phenomenological reduction, we restrict our field of interest. We then reflect on the subject-matter of this new level of description, and give expression to the structures uncovered. Since our subject-matter is human experience, or more generally, conscious experience, phenomenological method serves to excavate to the fundamental structures of experience, and to describe these structures precisely as they show themselves to the reflecting mind. Reduction, being an excavating process that restricts our field of vision, nevertheless allows us to see more than we were capable of seeing at the outset, though never more than was implicitly available for inspection all along. In other words, by means of phenomenological reduction, I open up structures of experience which had until now been simply taken for granted, structures which had been operating "behind the scenes" of my conscious experience. I then describe these structures as precisely as I can, considering them exactly as they are given to me in reflection. [5]


It is Husserl's contention that there are four levels of description to be uncovered with respect to conscious experience:

(1) the naive standpoint (vis-à-vis the "real world" concerns of an individual);

(2) the psychological standpoint (vis-à-vis the conscious experiences of this individual);

(3) the transcendental standpoint (vis-à-vis the basic structures of this individual's conscious experience);

(4) the eidetic standpoint (vis-à-vis the basic structures of conscious experience in general).

The first level is the field of everyday experiencing. Here, our focus is directed toward the realm of existential concerns confronting us in daily life; in effect, we are preoccupied with the things of the world, which we simply take for granted as "obviously" present "out there" in the common sensible world. This level of description simply presumes the possibility of intentional reference without offering any leverage with which to understand the conditions which underlie this possibility. If we are to take up the problem of objective reference, we must set aside naive acceptance of the "obvious" correlation between our thoughts and the things we "entertain" via these thoughts--things that are simultaneously engaged out there, in a world that is clearly transcendent to our stream of consciousness.

Husserl suggests we begin with a psychological reduction. By means of such a reduction, we open up a second level of description, where our focus is no longer on the real-world concern, but on the experience that we are living through while engaging the real-world concern. In effect, we begin by reducing our focus to the psychological dimension of conscious experience. Reflecting on our phenomenological remainder, we encounter a level of experience that sustains our naive immersion in the world of things--namely, that level of experience "lived through" by the conscious subject. To describe this level of experience is to express all that seems to me to be the case. Clearly we must move beyond this level of description, since it still presupposes the possibility of objective reference (insofar as the experience described takes for granted a correlation of reference between the subjective life of the experiencer and his projected field of action). In short, this level of experience still takes for granted the very thing we are trying to understand, namely, direct reference of our thoughts to existing things in the world. If our description is to remain faithful to this level, it must present experience as a psychological event occurring within a psychophysical organism. (According to Husserl, this is the level from which Descartes was operating in his attempts to ground the possibility of objective correlation between


immanence and transcendence.) But what other levels are there? If we cannot remain with the levels of experience that are apparent to us in ordinary reflection, how are we to next proceed?

According to Husserl, we cannot hope to reach a level of description that does not presuppose objective reference without excavating to a more primordial level--to a level that generates processes by which our thoughts and perceptions bear relevance to existing things. Such a level cannot be reached without performing a "transcendental-phenomenological" reduction. By means of such a reduction, one excavates to the pure act-component of an experience, thereby removing from consideration all evidence that would normally be generated by the object-side of experience. Naturally, this level will be of no use if it does not contain evidence of those structures necessary to account for the possibility of an act's bearing reference to an objective reality. Therefore, we will need to encounter the "roots" of objective reference within the essential structures of the subject-side of experience, if we are to resolve the enigma of contact between immanence and transcendence.

Since intentional reference to an objective state of affairs does not depend on the existence of the presumed referent to which the experiencing subject is directed, Husserl's strategy is to ground the possibility of object reference in conditions that are a priori with respect to (and which therefore hold the key to a proper conception of) the experiential nature of transcendent realities. According to Husserl, this is possible only by performing a transcendental-phenomenological reduction. By means of such a reduction, we seek to isolate for phenomenological description those structures of conscious experience that give birth to the possibility of objective reference. We seek to isolate the essential structures of conscious life, structures that give a conscious act its capacity to project the conscious subject into a transcendent horizon of action.

Reflecting on the structures of the conscious act, it is our task to find within these structures the source of our intentional access to the things of the world. Once we have described the key structural ingredients of our conscious life, we move on to the fourth and final level of description, by means of a third reduction, which Husserl terms an "eidetic" reduction. [6] At this level of description, we are seeking to identify only the most general characteristics of the act-component, namely, those structures which ground the possibility of intentional reference for any possible mind. Gathering evidence at this level of description should lay the foundation for a rigorous science of consciousness. But what can such a science tell us about the nature of experience as lived through by people who are engaged by the pulse of their existential challenges and opportunities?

Husserl's strategy is really quite marvelous: if he can succeed in isolating those structures of conscious life that generate the possibility

Intentional Transaction as a Primary Structure of Mind 187

of correlation between thoughts and objects, he can then argue that these structures are the very same structures that ground the possibility of objective correlation between the mental life of the existential subject and the objective context within which he is situated. If Husserl is right about this, then anything he uncovers that contributes to a science of consciousness should have implications for discussions of the being-in-the-world of an existential subject. In this way, he can establish the crucial link between immanence and transcendence and so resolve once and for all the enigma of objective reference.

2. Some Pivotal Distinctions in Husserl's Theory of Intentionality

Let us suppose that I am looking out into the garden: what I perceive is a tree bearing fruit. What I have "in mind" is the experiential presence of a tree in the garden, a real tree, which may or may not be as I prescribe it to be. If Husserl is correct about this, then the tree there in the garden is experienced through the medium of an "orienting prescription" that determines to some degree the experiential nature of the tree standing there in the garden before me. But this orienting prescription does not represent or "stand in" for the real tree. I experience the tree itself, Husserl tells us: "that and nothing else is the real object of the perceiving intention" [Ideas, 90, p. 263]. Nevertheless, I experience the tree itself through the medium of "noematic" prescription, and it is this noematic prescription, not the tree itself, that determines what I have "in mind" when I "see the tree there in the garden." In other words, my prescription carries me out to the presence of a tree bearing fruit in much the same fashion as a hypothesis might carry a scientist out into his experimental situation. Whether or not the tree is as I prescribe it to be cannot be an issue for transcendental phenomenology.

What is at issue is how to account for the possibility of my experiencing a tree as being there in the garden before me. Whether or not there is a tree of the nature I have prescribed within the nucleus of my noematic phase of the perceptual experience (in fact, whether or not there is a tree at all) will not steal away the fact that I am being referred to the presence of a tree. This indicates to Husserl that the key to understanding how objective reference is possible must lie within the structural dynamics of the noematic element of experience. The contention here is that objective reference is "grounded" in the conscious life of the experiencing subject, that without having a sense of the presented object "in mind" (no matter how distorted this sense might be) there would be no object present to me in experience. To understand how objects can be present to us in experience, then, we must investigate the structural dynamics of the noema.

Let us begin with a very important though seldom emphasized set of distinctions that will allow us to discriminate between the "object," the "phenomenon," and the "noematic Sinn" involved in a conscious act. Though Husserl never offers an explicit characterization of the differences between these concepts, it should become apparent that his theory of intentionality would make no sense without such discriminations (assuming, of course, that Husserl's principal interest in developing a theory of intentionality was motivated by his desire to rethink and resolve the paradox of objective reference facing philosophers at the turn of the century).

2.1 Husserl's Distinctions Between "Object," "Phenomenon," and "Noematic Sinn"

To begin with, let us refer to the object as X and to the phenomenon as "X". Let us then define the noematic Sinn, at least provisionally, as 'my sense of X as "X".' With this granted, here are the distinctions I take Husserl to have in mind:

(1) The noematic Sinn is my sense of something "as being of such-and-such a sort." For instance, I might have a sense of person X as being a trustworthy person: in this example, my sense of X "as being" a trustworthy person is a crucial dimension of the noematic Sinn of the act through which I intend X as being trustworthy. (The noematic Sinn also includes, among other things, my sense of X as a person, a sense that is passively given in my experience of X as "trustworthy".) Furthermore, the Sinn determines which object is intended. Thus I intend X to be the one I am intending "as trustworthy". In this way, the noematic Sinn (or, as we shall speak of it, the "noematic prescription") includes both my sense of "this object, X" and my sense of X "as trustworthy." Therefore the Sinn, or prescription, is not merely the sense of "trustworthiness" but is the sense of X as trustworthy [cf. 11&1, pp. 125ff].

(2) The phenomenon, "X", is X as he would be, if in fact he were, among other things, the trustworthy person I experience when engaging "X" in my life-world. "X", as the life-world entity correlating with my sense of X "as being trustworthy," is a phenomenon whose mode of being takes the form "being for me," insofar as it is constituted as the prescribed correlate to my awareness of X as a trustworthy person.

(3) Finally, there is X, the real person, who may be trustworthy, or who may be setting me up for a fall: only an experiential or "intentional" transaction will determine whether or not X lives up to his "billing" as a trustworthy person. X is the object.


Husserl's interest is not in the question of how I can know--much less know "for certain"--that X is in fact trustworthy. [7] On the contrary, Husserl is explicitly placing questions of this sort on hold the moment he embraces phenomenological method as the key to resolving the problem of intentional reference. Phenomenological method removes from consideration the nature of the objects we experience. In the process, our interest shifts to an analysis of how objects are given in experience, that is, in how objects come to be prescribed by the conscious subject "as being of such-and-such a nature." We can illustrate this by distinguishing between two propositions:

(1) I believe that X is trustworthy.
(2) X is trustworthy.

The key element in the first proposition is clearly my believing that X is trustworthy. In the second proposition, the key element is X himself, the object. In the first proposition, there are actually two elements that come together to make for my experience of X as being trustworthy. These are my sense of X as being trustworthy, and "X", the phenomenon, which is the object as present to me in experience. I can never know for certain that the phenomenon is an adequate or accurate portrayal of X's true nature (although through "intentional transactions" I can build up a solid sense of X which then serves to "present" X to me as being trustworthy--as being "X"). In effect, I can "rest assured" that X is trustworthy so long as there is no disconfirming evidence (i.e., so long as there is no reason to believe that X is not trustworthy).

By focusing on the noematic phase of my experience of "intending X as trustworthy," it is possible on Husserl's account to understand what it means "to see X as trustworthy." But what do we find when we stop to reflect on the noematic phase of experience?

2.2 The Noematic Component of Experience

After the reduction to the transcendental domain of investigation, we are given a special sort of reflective leverage over consciousness: our conscious life is given to us (under the force of the reduction) as still including reference to transcendent being, but it is no longer considered in relation to the being of transcendent realities, nor is it under the force of beliefs about the being of transcendent realities. Within this domain of investigation, we discover two distinct sorts of relationships.

(1) There is the strict corre/ariOIl between my sellse of an objective reality, and the phellomenon whose being is prescribed as the life­world correlate to the act of conscious apprehension. This is the

Page 198: Perspectives on Mind

190 James A. Tuedio

intentional correlation. When there is an object, the correlation will be between the noema and the object, insofar as the phenomenon is not other than the object. But it is not essential that there be an object, only that there be a phenomenon. Therefore, intentional con'elation is only "essential" correlation insofar as we speak of a necessary correlation between noema and phenomenon. [8]

(2) There is, within consciousness itself, a strict correlation between noesis and noema: every act of noetic apprehension bears within itself a "sense" and a "manner of givenness", which together make up the noematic phase of the act. Clearly intentional correlation is dependent upon noetic-noematic correlation, insofar as the key to the being-status of the phenomenon (the key to our having the object "in mind" as being of such-and-such a sort) lies in the nucleus of the noema, not in the object. We can put this another way, by saying that the key to the "as" lies, not in the object, but in our sense of the object--in the noema--for we only have the object "in mind" as it is meant (or intended) in experience.

The point of noetic-noematic correlation can be summarized in another, perhaps more helpful, manner. The noema is an abstract entity (which means only that it can be shared by multiple conscious acts). In order to play its "mediating" role in intention, it must be "entertained" or somehow "processed" in consciousness. Furthermore, there must be a special kind of mental event in which this takes place. This special kind of mental event, which is a necessary part of every intentional act, Husserl calls the "noetic" or "grasping" phase of conscious life. Noetic-noematic correlation is thus a two-fold relationship between the "entertaining" function of conscious life and that which "announces" the presence of X. X is entertained, but only in a way that presents it as "X". The "as" which determines the experiential nature of "X", is prescribed by that phase of the act that announces the presence of X. Husserl terms this the "noematic" phase of the act. Since the presence of X can be entertained only through the medium of a noematic prescription that presents X as "X", all noematic prescriptions must be integrally correlated with a noetic or entertaining function of conscious awareness. Hence the emphasis on noetic-noematic correlation as an essential structure of conscious life.

Noetic-noematic correlation is clearly more fundamental to Husserl than intentional correlation, though one could also argue for a "co-primacy" insofar as it would be impossible to have noetic-noematic correlation without also having a correlation between noematic Sinn and phenomenon. In fact, it would appear that the noematic Sinn is the "hinge" between noetic-noematic correlation and intentional correlation--perhaps even the pivot of their relation.


Without the noematic component of the conscious act, the phenomenon would have no experiential nature and, consequently, there would be no object present in experience. For we cannot experience an object apart from the manner in which it is given to us in experience, and the object as given is determined by the "sense" that we have of the object there before us. This "sense" is integral to the noematic phase of experience, and is an essential ingredient of the intentional act, not a mere "profile" of the object toward which we are directed in experience. So long as we think of the noema as a "meaning-entity" or "sense-object", we are likely to miss the importance of this crucial distinction, without which Husserl cannot resolve the enigma of object reference. If, instead of viewing the noema as a "sense-object," we view it as an object-sense, we can appreciate more readily the significance of Husserl's distinction between "noema" and "phenomenon," and so be in a position to comprehend the referential role of noematic prescriptions.

2.3 A Closer Look at "Noematic Prescriptions"

Husserl traces the essence of conscious life to the noematic structuring of experience in accordance with prescriptive "object-senses". According to Husserl, it is only by virtue of these "object-senses" that things can be present to consciousness "as being of such-and-such a sort." The noema of the act sustains this capacity for "making present" by virtue of the fact that its nucleus, the noematic Sinn, contains two dimensions of sense that are essential to the presentation of an object:

(1) The dimension of referential sense (i.e., the "determinable X") which picks out or "fingers" an object that is taken to have a unique and identifying nature.

(2) The dimension of descriptive sense (called, collectively, the "predicate senses" by Husserl), which is said to prescribe a partial determination of this unique and identifying nature. As a result of this partial determination of properties and aspects, the noematic Sinn implicates certain other properties, some of which might have been directly given in prior experiences, and others of which are implied or foreshadowed in partially or fully determinate ways, and all of this together becomes the descriptive sense.

Taken together, the referential sense and the descriptive sense constitute the totality of our noematic prescription. [9]

Such a "prescription" is how we have the object "in mind". But this hardly puts the object in mind; merely the prescription (which is, after all, a sense--namely, our sense of the object, e.g., our sense of "this person" as being "trustworthy"). Our sense of the object, containing its referential and descriptive dimensions, essentially carries us out to the presence of a transcendent object in the world. By then entering into an intentional transaction with this transcendent reality, we put our sense of the object to the test of evidence gathering, which leads either to a stabilization or destabilization of the operative object-sense. William McKenna has captured the flavor of the intimacy between sense-constitution and intentional transaction in the following manner:

When "something" is perceived, a complex intentionality is at play whereby a number of object-senses of various levels of generality come to bear on that something to apprehend it as "what" it is. This apprehension is never epistemically adequate to the complete actuality of worldly objects, although it becomes more adequate as my life of experience progresses. [10]

Husserl never explicitly speaks of "intentional transaction," but it seems quite fundamental to the point he is striving to make. We see this more clearly if we consider what it means to "experience" an object: On the one hand, we simply "take it up", living it as intended (which includes a horizon of indeterminacy); on the other hand, we engage the object as it is. If everything remains stable and coherent within the framework of our expectations (some of which are quite determined, others only marginally determined, still other expectations merely "foreshadowed" with perhaps an "open" anticipation of a manageable dose of the unexpected)--if all of this remains stable and coherently ordered, then we will surely gain a stronger motive, a stronger resolve, for seeing X as such-and-such a sort of thing in future circumstances. I might come to trust X, for instance. However, this would not make X trustworthy; it would only make "X" trustworthy.

On the surface, it may appear that there are two distinct entities, one in the world, the other "in my mind," so to speak. X, after all, is clearly transcendent to my experiential stream of consciousness, whereas "X" (the object "as experienced") is merely the objective correlate of an intentional resolve. But let us not take the notion of phenomenon ("X") in an "ontological light." The point is not that there are really two things, an X that is not trustworthy and, in addition, an "X" that is trustworthy (and which would be the "true" objective referent of my awareness). Rather we must see that our act gives X the property, not of being trustworthy, but of being intended as trustworthy. This would indicate that X is the true objective referent of my intentional awareness, even as this X is present to me as an "X" (whose experiential nature is determined by my noematic prescription).

Suppose we are walking along a pathway at night. Suddenly there is a rustling sound up ahead. Anxiety sets in as we immediately constitute an object-sense which prescribes a burglar or mugger lurking in the shadows. This prescription for a mugger does not require the presence of an actual mugger up ahead. It requires only what Susanne Langer has termed a "symbolic transformation" of sensory input into the gestalt of a lurking mugger. [11] If we take the prescription seriously, we will surely operate in accordance with the anticipation of an encounter, or a possible encounter, with a mugger. We will act in such a way that if we were actually to experience whatever it is that initiated the input of sensory data, we would be able either to take it up as intended (in which case we would be locked in an encounter with a potential mugger), or to encounter it as other than prescribed, as when a rustling in the branches up ahead turns out to be a prowling cat rather than a potential mugger. [12]

The point here is that I act in accordance with my prescription, and thus operate as though the object were there as I am intending it. If my sense of the object meshes with the objective state of affairs within which I am immersed, then my intentional prescription is "filled" by experience; if the sense fails to mesh with objective reality, then it must be reconstituted through an act of intellectual or conceptual refinement. Oftentimes we will find ourselves confronted by surprises or novelties that require from us a new and ever-richer refinement in the determinateness of our object-sense. Indeed, in the case of any transcendent object of experience, Husserl would have us operate with the recognition that one can never exhaust--or at least that one can never know oneself to have exhausted--the determinateness of a given object of possible or actual experience. And though it is always our task to strive for perfect determination, we can never know for certain that our prescription has captured all the possible profiles [Abschattungen] of the actual object, so as to somehow "mirror" the object's properties exactly. As Husserl continually emphasizes, the conception under which a given object is intended can never be known to exhaust the possible determinations of this object. Of course, it is often the case that our prescriptions "suffice" for our pragmatic or practical interests. [13] On this theme of "adequacy," Husserl has the following to offer, from the introduction to Experience and Judgment:

Depending on my particular goals, I may have enough of what an experience has already provided me, and then "I just break off" with an "It is enough." However, I can convince myself that no determination is the last. [EJ, p. 32]

Furthermore, confirmation of the adequacy or inadequacy of my prescription will almost always be relative to the practical demands of the situation within which I project a given practical end or possibility [EJ, p. 63]. To the extent that my activities remain on target toward the projected end (however indeterminately this "end" might be prescribed), the prescription will seem adequate. When feedback becomes negative, refinement will generally set in, relative to the practical demands of the situation within which I find myself. When I have refined my prescription to the point where I again have a stable object-sense, I enter into a new intentional transaction, which will determine with respect to the new prescription what the earlier transaction had determined with respect to the prior prescription: namely, the adequacy or inadequacy of my sense of the transcendent reality within which I am operating as an existential being.

2.4 The Ideal of Perfect Correspondence

By transacting with the world in terms of our object-senses, we are able, as William McKenna puts it, to generate a "reorganization, supplementation and refinement of the intentionality containing the [operative] object-sense," so that we can at last imagine "an ideal course of experience ... wherein all such corrections have been made and final harmony is reached." At this point, McKenna concludes, "the intentionality would have come into perfect correspondence with the full actuality." [14] But this ideal of perfect correspondence is nothing more than a regulative ideal for Husserl, as I shall now attempt to show.

Let us begin by asking what it would mean to realize the ideal of perfect correspondence. On the one hand, it would mean that my object-sense is in perfect correspondence with the object itself (the one that is independent of my efforts to become conscious of it). In this event, my object-sense would be a proper "definite description". [15] But more importantly, it seems to me, a perfect correspondence between intention and object would signify a co-extensiveness of phenomenon and object. As we noted earlier, it is Husserl's contention that I perceive the tree itself: the one there in the garden. "That and nothing else is the real object of the perceiving 'intention'." But because the tree there in the garden is a transcendent reality, I can never have a fully adequate presentation of the tree's true nature. The tree itself is present to me in experience, but the presentation of the tree does not--and on Husserl's account of perception cannot--exhaust the being of the tree.

The ideal of perfect correspondence between intention and object is a regulative ideal (as opposed to a practical ideal) insofar as we are destined always, on Husserl's account, to fall short in our efforts to take the true measure of transcendent realities. The true measure is a limit-pole, an infinitely distant "rendezvous-point" toward which we can proceed in asymptotic fashion. We can move closer to such a true measure in virtue of the fact that the being of transcendent reality is there to be encountered in experience, even though we cannot, in principle, hope to exhaust the infinity of its possible appearances. If we were to exhaust this infinity, then the phenomenon (say the tree as given in experience) would be co-extensive with the object, so that our sense of the tree's being would be in perfect correspondence with the reality that is there to be encountered. Husserl's point, in speaking of perfect correspondence as a regulative ideal, seems to be this: even though we cannot hope to experience a transcendent reality for what it truly is, we can nevertheless strive for an ever-closer approximation of a given transcendent object's true nature, through the intentional process of corrective adjustment that takes place, presumably, by means of an on-going experiential blending of transaction, assessment and refinement procedures.

For instance, during the course of my intentional life, I am said to build up an ever-fluctuating stock of sedimented object-senses through which, or according to which, I engage transcendent realities. The fluctuations in this stock of senses (some of which serve as "background beliefs") take place in order to retain a harmony between past and present determinations. But the corrections and refinements can also be stimulated by my interactions with other existential subjects. I discover through intentional transactions that these other people are capable of sedimenting object-senses that vary widely from my own, senses that are built up on the basis of a differing standpoint from the one I have come to occupy. On the basis of my interactions with these other subjects, Husserl explains,

there constantly occurs an alteration of validity through reciprocal correction. In reciprocal understanding, my experiences and experiential acquisitions enter into contact with those of others, similar to the contact between individual series of experiences within my own ... experiential life; and here again, for the most part, inter-subjective harmony of validity occurs, [establishing what is] "normal" in respect to particular details, and thus an inter-subjective unity also comes about in the multiplicity of validities and of what is valid through them; here again, furthermore, inter-subjective discrepancies show themselves often enough; but then, whether it is unspoken and even unnoticed, or is expressed through discussion and criticism, a unification is brought about or at least is certain in advance as possibly attainable by everyone. [Crisis, p. 163]

Thus, we can learn from one another, and presumably move closer to a "confirmation" of the being of things that is "true once and for all". [16]

3. The Bottom Line of Husserl's Theory of Intentionality

The principal intent of this essay has been to show what Husserl has in mind when he claims that our access to things is "intentional" in nature. On the one hand, I have interpreted this to mean that we can only

196 Chapter Two

engage things through the "screen" of noematic "prescriptions" or "object-senses". In virtue of the noematic "content" of an act, things are presented to us in experience as having explicit determinations, and other semi-determinate or even indeterminate characteristics that are in some sense "foreshadowed" or anticipated within the nucleus of explicitly given aspects appearing to us in the course of experience. We can now interpret this to mean that the things we experience are given, not independently of our "sense-making" (or "constituting") capacity but through this capacity.

But simply because we must experience things as given through appearances, we should not conclude that the things we experience reduce to appearances. Things are constituted in appearing phases of experience, not out of appearing phases. How else could Husserl defend his contention that we perceive the tree there in the garden, rather than some mediating "image" or "representation" of the tree? But at the same time, Husserl emphasizes, we are never in a position to "drink in" the total showing of transcendent reality; we can never engage more than a partial showing. This appearing aspect is given its determination at the point of contact between the tree and my organizing frame of reference (which includes some a priori dimensions, but also many dimensions that have been built up out of past experiences and allowed to sediment as "background beliefs" that serve to individualize my own particular "screen" of vision). [17]

Thus, the bottom line of Husserl's theory seems to be this: our intentional access to objects structures our experience in a way that motivates us to strive for confirmation, to reach for a sense of things that will serve us in the future, or that will blend in harmonious fashion with feedback from future transactions. When the harmony breaks down, the intentional process generates a correction, which requires a refinement in the operative prescription. Future transactions will, in all likelihood, insure a never-ending process of refinement. In this way, we find that the intentional process is itself structured in accordance with the regulative ideal of ultimate convergence, even if such convergence (between object-sense and object) is, in principle, out of the question.

* * *

Husserl believes that a network of hidden biases anchors our normal stance within horizons of possibility. Though it is often hidden from view, this network of biases is "constituted" within one's own subjectivity; so, too, are the horizons of possibility within which one is situated at any given time. Thus for Husserl the enigma of objective reference is in fact an enigma of human subjectivity: how can one be the source of the very world within which he is situated as an existential being? Husserl's theory of intentionality allows us to distinguish between being the source of the world, and being the source of the meaning

Commentary 197

intrinsic to this same world. Husserl steadfastly denies that the world is a subjective construct; at the same time, however, he affirms that we are the source of the meaning that is intrinsic to our experience of the world. From this perspective, it can be said both that we are drawn into transactions with the world and that the world as we experience it is an evolving product of our subjective powers of meaning-constitution. Given this orientation to the problem of objective reference, a fundamental issue arises: what accounts for the fact that we "constitute" the world as an objective, intersubjective horizon of experience?

Tuedio's reflections on the "noematic" phase of experience address this issue directly. The noematic phase is the constituted phase of experience. What is constituted? Meaning. We do not just experience X. We experience X as exhibiting a certain character, as manifesting a specific nature. In Tuedio's terminology, we experience X as "X." That is, we enter into transactions with X; but X as we experience it is "X," an evolving product of our own subjective powers of meaning-constitution. This "X" (the phenomenon intrinsic to a particular experience) is the prescribed reality to which we orient ourselves. Consider our experience of the world as an objective, intersubjective horizon of experience: is this not a noematic prescription? We constitute the world as an objective, intersubjective horizon of experience. But why? Well, because this is part of what it means for us to act in the world as we do! That the world is experienced as an objective, intersubjective horizon is, to borrow John Searle's (1983, 1984) terminology, part of the "conditions of satisfaction" intrinsic to our beliefs and actions. That we take these conditions to be realistic is implied in our transactions with the world. We act as though X were trustworthy. In so doing, we give to X "the property, not of being trustworthy, but of being intended as trustworthy" (Tuedio: this volume, p. 192). Or perhaps we proceed as though the ball were red, thereby giving to X (the orange ball in the mist) the property of being intended as red. This doesn't make the ball red, any more than trust makes someone trustworthy. Yet these prescriptions still manage to condition our transactions in accordance with our anticipations of eventual confirmation. The fact that these anticipations are mere hypotheses is generally overlooked as we enter into transactions, even as we strive for their confirmation.

The "corrective" dimension of intentional transaction is a crucial element in this account of the mind. Thus while it is true that the experiential nature of a phenomenon is conditioned by the operative noematic prescription, there is still a crucial sense in which any given noematic prescription will be affected by the interpretation of feedback from intentional transactions. Husserl's theory of the progressive constitution of experiential phenomena allows us to account for the role played by things in the world. This role is, of course, merely indirect:

a given transaction may trigger refinements in the operative prescription; when this happens, there will be simultaneous corrections or refinements in the experiential character of the phenomenon.

But what is the goal of this "corrective" process? On Tuedio's reading, intentional transactions are conditioned by the ideal of ultimate convergence and perfect correspondence. He has emphasized Husserl's view that the adequacy or inadequacy of our prescription is generally related to the practical demands of the situation within which one projects a given practical end (pp. 193-194). This suggests that the theoretical ideal of ultimate convergence is tempered by pragmatic considerations.

The following commentary by Steve Fuller examines the bias that is intrinsic to Husserl's hypothesis regarding the ideal of ultimate convergence. Situating this bias in the "Skeptic" tradition, he challenges its merit and proposes an alternative hypothesis designed to protect the integrity of phenomenological evidence. Stressing the evidence for divergence and incommensurability, Professor Fuller develops his hypothesis in concert with a non-justificational perspective on the nature of inquiry. From this perspective, inquiry is seen as a striving process conditioned in accordance with the pragmatic ideal of "maximal coherence," where each moment in the striving process "appears as a recognition and treatment--if not necessarily an elimination--of error" (Fuller: this volume, p. 204).

Fuller identifies this with the "Sophist" tradition and emphasizes important differences between the strategies of this tradition and those of the Skeptic tradition within which he would place Husserl. These differences are accentuated by Fuller's emphasis on the role of the second-person standpoint in identifying refinements or corrections in operative prescriptions. His principal thesis is that a prescription requires refinement or correction not for reasons determined by external circumstances, but because of characteristics intrinsic to the operative prescription itself. Thus while a given situation may fall short of one's expectations, this in itself does not explain the faulty character of the noematic prescription. Instead, argues Fuller, this inadequacy should be traced to one's failure to anticipate in advance all the actual conditions under which the prescription would be tested through transactions with the world.

STEVE FULLER

Sophist vs. Skeptic: Two Paradigms of Intentional Transaction

In the course of drawing together the many strands of Husserl's thought, James Tuedio succeeds in bringing into focus an issue too often faced only obliquely by "methodological solipsists" in both cognitive science and phenomenological research. [1] "Interface" is the term of art sometimes used by cognitive scientists to capture this issue. Tuedio has rendered it in phenomenologese as "intentional transaction." In naturalistic terms, the issue is this: What distinguishes an organism from its environment, and how does the organism come to draw that distinction? Notice that I have not at the outset drawn--as Tuedio and other phenomenologists tend to do--the distinction in terms of "subject" and "object," which would imply the differentiation of two equally well-defined things, say, a human being and a tree. Rather, I am striking a contrast implying one well-defined thing and something else defined only as not being that thing. Organism-environment has this character, insofar as such a distinction suggests an island of order in a sea of relative disorder. So too do less naturalistic contrasts, such as figure/ground in Gestalt psychology, or the original conscious/unconscious in German idealism. By shifting the distinction in this way, a more "transactional" account of intentional transaction can be given. Moreover, this shift has other interesting implications for the project which Tuedio has aptly called "cognitive phenomenology."

Before developing my own account of intentional transaction, it will be useful to situate it among the various accounts now current. I shall characterize these accounts as perspectives on the differentiation of an organism from its environment. From a first-person perspective, the issue is how an organism is able to generate from its own cognitive resources a sense that there is an environment having recurrent properties. This issue motivates the strategy common to representational theories of mind in both the empiricist and the phenomenological traditions. From a third-person perspective, the issue is how an independent observer draws the boundary between organism and environment. An appropriate strategy is for the observer to treat the organism's extended perceptual system, or "skin," as a mechanism for translating ambient physical properties to data from which the organism draws meaningful conclusions. [2] This leaves the second-person perspective, which I shall develop below by way of contrast with Tuedio's version of its complement, the first-person perspective. As we shall see, the strategy of the second-person perspective is to capture the extent to which an organism's attempts at representing its environment have the unintended consequence of revealing more about the organism than about its environment. This perspective involves anthropomorphizing the environment as able to "sense" that something is in its midst (the organism) which is neither merely a part nor a passive reflection of the whole, but rather a functionally independent unit. In short, I am proposing a perspective that allows examination of the notion of "misrepresentation" outside in.

199 H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 199-208. © 1988 by D. Reidel Publishing Company.

There are two phenomenologies. There is the phenomenology that treats Kant's Critique of Pure Reason as the most brilliant achievement in the history of the Cartesian project, aiming as it does to articulate the first-person perspective. Then there is the phenomenology that treats Kant's Critique of Practical Reason as the modern origin of an entirely new project, the aim of which is to articulate the second-person perspective. [3] While phenomenology in the first sense was not fully recognized until the period from Husserl to Heidegger (the first quarter of the twentieth century), phenomenology in the second sense had already been recognized as such during the period from Fichte to Hegel (the first quarter of the nineteenth century). Though these facts are sometimes recounted in histories of phenomenology, what rarely receives mention is that the ancient sources of the two phenomenologies also diverge in interesting ways.

The first phenomenological project can be traced to the Skeptics, especially to their method of epoche, or "bracketing," whereby judgment regarding the correctness of one's representations is withheld. By contrast, the second phenomenological project can be traced to that ancient school which has found modern favor only with the rise of German idealism: the Sophists. [4] They were, of course, the recognized masters in "dialectic," a method whereby any judgment was actively met with countervailing considerations. Though their positions have often been conflated, there is a big difference between Skeptic and Sophist: whereas the Skeptic disavows all interest in determining that a judgment is true or that it is false, the Sophist avows a definite interest in determining that it is false.

This difference in attitude toward truth valuation is crucial for motivating the two phenomenological projects. For even if the Skeptic's mental representations do not represent an "external world," they at least represent the "internal world" of the Skeptic's own mind, the clarity of which--as Descartes realized--can easily be mistaken for knowledge of something beyond the mind. Indeed, the web of representations is normally so seamless that to inquire into their ultimate truth value is to raise an issue unnecessary for the phenomenology of everyday life. In contrast, the Sophist sees herself as having equally faulty access to both "internal" and "external" worlds. Indeed, her very sense that there is an external world need not be based on actual contact or correspondence between her mental representations and such a world, but rather may rest on her inability to fully determine her own mind--leading thus to contradictory mental representations subsequently misdiagnosed as caused by something outside the mind.

To get clear on what is at stake here between Skeptic and Sophist, recall where Tuedio locates the need for a concept of "intentional transaction" in Husserl's phenomenology--namely, in the fact that the intentional object is never epistemically adequate to the actual object. This requires the subject to continue to engage, or transact with, the actual object as if it were possible ultimately to construct an adequate mental representation of it. But why is there such an epistemic inadequacy in the first place? The answer may go either of two ways: toward a deficiency in the subject or a surfeit in the object. It is the Skeptic's gambit that these two ways are really two sides of the same coin, for the subject is incapable of adequately representing the object precisely because the object has a potential infinity of representable features. The Sophist, on the other hand, refuses this gambit, for it would commit her--as it does the Skeptic--to metaphysical realism. After all, the Skeptic accepts that there is an external world, questioning only whether his internal world adequately represents it. Hence, the "methodological solipsism" uniting Descartes, Husserl, and Fodor is simply an enactment of the Skeptic's worst possible metaphysical scenario--which appears to turn out to be not so bad, since a solipsist can always fall back on the clarity and distinctness of his own system of mental representations.

But the Sophist refuses to take part in the Skeptic's solace. Instead, she aims to replace all relevant ontological distinctions with epistemological ones. She would have us take all evidence for the existence of an external world--such as the epistemic inadequacy of representations--to be nothing but by-products of the faulty access the subject has into her own internal world. If the subject were to gain full control of her mental representations (which would amount to having perfect self-knowledge), evidence suggesting existence of a world independent of those representations would disappear. Thus, cases which a Skeptic would describe as failure of the subject to have the intentional object correspond with the actual object would be described by the Sophist as the subject's failure to anticipate one of her own future intentional objects, namely, the one going by the name 'actual object'. Given the lack of reflexive knowledge attributed to the subject on this view, it is clear that methodological solipsism offers small comfort to the Sophist. [5]

As a way of epitomizing the differences between the two phenomenological projects, consider how a couple of commonly recognized brute phenomenological facts are treated by the Skeptic and the Sophist. These facts are:

(1) that intentional transactions are sufficiently successful to imply that there is normally a non-arbitrary connection between intentional object and actual object;

(2) that successful transactions are occasionally punctuated by failures sufficiently noticeable to demand re-evaluation of any general correspondence between intentional object and actual object.

The Sophist observes that the Skeptic simply presumes that (1) and (2) are phenomenological facts having, so to speak, "ontological transparency." In other words, the Skeptic takes it for granted that intentional transactions that appear successful are successful, and mutatis mutandis for failed transactions. The problem for the Skeptic is that a string of past successes is not necessarily an indicator of future successes; hence Skepticism's association with the problem of induction. Nevertheless, if and when an intentional transaction fails, the Skeptic will be able to recognize it as such. But the Sophist presumes no such ontological transparency, claiming instead that (1) is simply the result of the subject uncritically interpreting transactions which are not overt failures as being successes, which then makes the rarity of failures postulated in (2) an artifact of the unreflectiveness behind (1).

This contrast between the two phenomenological projects has important implications for how the organism's cognitive enterprise is likely to appear in the long run. An example from the history of science may be useful. A standard account of the "progress" that Einsteinian mechanics made beyond Newtonian goes like this: Newtonian mechanics worked fine for objects moving at speeds considerably below the speed of light, but not for objects approaching that speed; Einstein managed to progress beyond Newton by developing a mechanics that encompassed objects moving at both speeds.

This brief account has the earmarks of Skeptic phenomenology. Failed intentional transactions--in this case between a physical theory and moving bodies--are characterized as representable features of the objects (their speeds) exceeding the subject's system of representation (Newtonian mechanics). The fact that Einsteinian mechanics works for objects moving both below and near the speed of light does not diminish the fact that Newtonian mechanics works for objects moving below that speed; hence, the ontological transparency attributed to cases that had confirmed Newtonian mechanics in the past. If the world were confined only to objects moving below the speed of light, and if the rest of the universe consisting of objects moving near that speed were bracketed, then Newtonian mechanics would be a perfectly adequate system of physical representation.

But, now, consider the Sophist's version of the story. Newtonian mechanics failed not because it was eventually shown unable to represent large portions of the physical universe, nor because it all along presupposed things that are now known to be false (such as absolute space and time). To offer either of these reasons would be to concede the Skeptic's metaphysical realism. Instead, contends the Sophist, like all systems of representation, Newtonian mechanics failed on its own terms, specifically by its inability to monitor the historical course that its intentional transactions would take. For example, the Lorentz-Fitzgerald contraction became an "anomaly" for Newtonian mechanics only because physicists were unable to anticipate (several decades back) that the theory would be made accountable for that case. And insofar as one can rationally intend only what one can, at least in principle, anticipate, it follows that proponents of Newtonian mechanics were partially blind to the intentional structure of their own theory--and hence had no idea where it would eventually lead (namely, to the anomaly in question). Characteristically invoking Ockham's Razor, the Sophist then diagnoses the anomaly, not as pointing to something outside the system of representation, but rather to a deficiency within.

It is typical of phenomenologists attuned to the Sophist's project--such as Hegel and, one might add in this case, Nietzsche, Heidegger, and Derrida [6]--that the ideals and virtues emblematic of philosophical inquiry turn out to result from converting cognitive deficits into assets by creatively re- and mis-interpreting them. However, this is often only seen in ironic counterpoint. Thus, Karl Popper thinks he has separated the scientific sheep from the metaphysical goats when he argues that theories which truly advance our knowledge are ones that are "boldly conjectured" beyond the domain of phenomena to which they were originally intended to apply. This makes them prime targets for falsification (the only sure sign, for Popper, that a theory has made contact with the external world). The irony, of course, is that Popper makes it seem as though theories normally do not transgress their intended domains. But if the Sophist's phenomenology is on the mark, then Popper has overestimated the extent to which theories, and other systems of representations, are able to prescribe their own applications. From the Sophist's standpoint, Popper's call to bold conjecture must be read (again ironically) as a request for the theorist to manipulate his own natural incapacity for fully comprehending the intentional structure of his theory so as to arrive at the desired self-contradiction (or "falsification," in Popper's own terms). [7]

We have yet to address the advantages that would be gained by the Sophist's project. Clearly, if one is already committed to what may be called an "eliminative idealism," one would want a phenomenology that dispenses with the need for an external world. [8] Such a phenomenology would be expected to account for more than just the Skeptic's sense that a system of representation ideally stands in some "mirroring" relation to something outside itself. It would also need to "represent" the sense that there is a realm which transcends or resists representability. Imperfect reflexive control over the history of one's own intentional transactions does the trick. However, if one is not an idealist armed with Ockham's Razor, the need for the trick is not so obvious. Still, a case can be made--again by contrasting the Sophist's project with that of the Skeptic. However, to do so, we must start by refocusing the issue of intentionality.

There are two distinct uses of "intention" in philosophy; they correspond to two distinct uses of "object." In Kant's German, this distinction is generally captured by two different words: Gegenstand, or "object" as something that exists independently of thought, and Objekt, or "object" as something that exists only within thought, such as purposes. [9] To "intend" an "object" in the first sense is to have a thought that "contains" the object in some way, say, by falling under a definite description in the thinker's language. This is the sense of intentionality that Husserl took to underlie the nature of theoretical reasoning. By contrast, to "intend" an "object" in the second sense is to have a plan which, if fully realized, would bring the object into actuality. This is the sense of intentionality that has been integral to moral philosophy in the English language since Bentham, and is generally seen as fundamental in practical reasoning.

Philosophers who draw a sharp distinction between these two senses of intentionality generally believe that practical intending involves theoretical intending and something more (such as a commitment to action and concern for consequences), but that theoretical intending as such need not involve any practical intending. The paradigm case of intentionality on this view is a relatively stationary subject who is constantly processing information and who is occasionally forced to make a decision. The Skeptic belongs to this camp. On the other hand, philosophers who do not recognize so sharp a distinction--and indeed who fall back on ambiguous expressions like "the object of inquiry"--tend to see theoretical intending as a constrained, artificial, or incomplete version of practical intending. The paradigm of intentionality here is a relatively mobile subject who, in order to obtain certain sorts of information that it is in her own interest to have, will deliberately regulate her thinking in the manner required by mathematics or experimental science. This camp includes the Sophists and their modern heirs, the German idealists and the American pragmatists. By "refocusing" the issue of intentionality, then, we mean to move from the Skeptic's to the Sophist's camp on precisely the above issue.

Perhaps the most important consequence of this move is that objects toward which a subject directs its attention are, as Hegel or Dewey might say, "dynamic" rather than "static" ("unstable" rather than "stable" would be a more neutral way of making the same point). But like the Skeptic's stasis, the Sophist's dynamism is self-generated. Once the Skeptic postulates a maximally coherent internal world which is supposed to correspond perfectly to an external world, he finds it impossible to tell whether the correspondence is achieved. As Tuedio rightly points out, the most a Skeptic can say is that correspondence is a "regulative ideal" whose truth value, for any given intentional transaction, is indeterminate. But in a similarly self-generative vein, once the Sophist stipulates that initially there is only an incoherent internal world striving for maximal coherence, then each moment in the striving appears as a recognition and treatment--if not necessarily an elimination--of error. Whether the striving turns out to involve gradual elimination of error depends on whether the subject gains greater reflexive control over the consequences of treating error. Does each striving increase overall coherence of the world, or does it simply maintain the level of incoherence by unintentionally displacing error elsewhere? While philosophers of a Sophistic bent (e.g. Hegel and Dewey) have supposed the former, this too may be little more than a regulative ideal (or "wishful thought," as Nietzsche might say). [10] Indeed, the latter alternative--despite its seemingly pessimistic character--offers a novel perspective on the phenomenology of intentional transactions.

Two related questions about intentional transactions that phenomenology in the Skeptical tradition poses, but doesn't adequately answer, are:

(a) If the subject can never tell whether the intentional object is epistemically adequate to the actual object, why would the subject cease his intentional transactions after a certain point?

(b) If the subject typically ends his intentional transactions after a certain point, why would he think the actual object exceeds his representational capabilities?

A key reason why the Skeptic does not deal well with these questions is that they overshoot his understanding of the limits of phenomenological inquiry. On his view, attempting to respond to (a) is much too practical an undertaking, for it would require discussion of features of the subject's material resources that prevent him from delving into the actual object indefinitely. Likewise, trying to respond to (b) is much too metaphysical, since it would call into question the conceptual framework in which objects exist independently of the subject and representation remains an inherently inadequate relation in which the subject stands to objects.

By contrast, the Sophist can appeal to one and the same phenomenon in the course of addressing the two questions. In so doing, she underscores the practical character of intentionality. To elucidate: consider the case of my intending that there is a chair in the next room largely on the basis of someone telling me so. I cannot perform the intention by recalling an image of that particular chair. Still, I am able to perform the intention with no difficulty because I simply bring to mind a typical or ideal chair (the difference between "typical" and "ideal" is of no significance here). There is every reason to think that this intentional object will not be epistemically adequate to the actual object, and indeed let us say that it is not. Yet once I finally enter the next room and see the actual chair, I do not continue refining my intentional transactions until I am able to intend the chair as an object distinguishable from every other possible chair. Instead, since the intentional object turned out to represent the actual object sufficiently well not to cause any major cognitive disruption, I spend no further energy trying to detail the extent to which the intentional object is "epistemically inadequate" to the actual one.

There are a couple of ways of regarding what I have just done. First, the conclusion that I have successfully intended the chair in the next room--despite its differences from the chair's appearance as my intentional object--may be the result of my inability to anticipate all the actual objects (or, strictly speaking, subsequent intentional objects, since there is no external world for the Sophist) that would be taken as satisfying the original intention. But if I can be so ignorant about what satisfies my intention, then I can also be equally ignorant about what does not satisfy my intention--hence, the second way of regarding what I have done: the conclusion may simply be the result of my unwitting tolerance for error in judging the intentional chair against the actual chair, which means that the differences have passed unnoticed and the error silently absorbed into my system of representation. But whichever way the Sophist regards my intentional transaction, she has highlighted the extent to which the subject is not in full cognitive control over intentional transaction.

This opens up the possibility that the criterion of correspondence itself may subtly shift over time. [11] In consequence, objects that the subject would have counted as satisfying the intention, had she been presented with them earlier in a series of intentional transactions, might not be counted as such simply because they were presented in the series when they were. This would amount to a long-term semantic drift which the subject herself might not fully realize but which, from the second-person perspective (e.g. someone trying to converse with her), would appear as the subject continually varying what she means by "chair." Again, in order for such variations to go unnoticed by the subject--noticed only by a careful second-person observer--they would have to be subtle, involving just that manner of equivocation for which the Sophists were, of course, notorious. The net result--which serves to answer question (b)--is that the subject would be left with the impression that there is always more to discover about the objects of her inquiry, when in fact all that has happened is that her words have changed their meanings with each new use.

Having elucidated the dynamic element of the Sophist's phenomenology, I shall conclude by recounting two points on which the Sophist's conception of intentional transaction contrasts with that of the Skeptic's (as presented by Tuedio): in regard to (1) the divergent (rather than convergent), and (2) the practical (rather than theoretical) nature of inquiry.

First, the Skeptic's conception stresses correspondence, or "convergence," of intentionality and actuality as a regulative ideal of inquiry. The Sophist, by contrast, stresses incommensurability, or "divergence," as an inevitable outcome--if not a regulative ideal. Thus, the Sophist's explanation of the inability of Newtonian mechanics to account for fast-moving bodies is that, in the course of two hundred years of physicists engaged in intentional transactions, the domain of actual objects to which the intentional structure of Newtonian mechanics had to conform widened considerably beyond the domain Newton himself originally had in mind. And as our brief discussion of Popper suggested, while some of this widening was deliberate, most of it probably was not--at least given the lack of cognitive self-control that the Sophist ascribes to the subject.

Second, the Skeptic conceptualizes inquiry as an asymptotic convergence of the intentional and the actual because he understands "correspondence" to be purely theoretical--and, therefore, from a practical standpoint, an impossibly demanding goal. The Skeptic presupposes that a representation is not "epistemically adequate" to the actual object unless it is completely adequate, i.e., it corresponds to every essential feature of the actual object--which could be either a few deeply hidden features or infinitely many apparent ones. From the practical standpoint of the Sophist, the Skeptic is, so to speak, an "epistemic optimizer" under severe resource constraints: that is, he is trying to arrive at the best possible representation of the actual object without having access to the best possible information base. Given that ends and means are so mismatched, argues the Sophist, it is more rational, at least in the short term, to be an "epistemic satisficer": that is, one for whom "epistemically adequate" means "sufficiently adequate that error is not noticed." [12] Ironically, while the Sophist's strategy frees up resources for her to make subsequent inquiry into the nature of other objects, those objects may turn out simply to be artifacts of the satisficing strategy itself, which, in turn, explains the ultimately divergent nature of her inquiry.

--v--

Fuller's distinction between "Skeptic" and "Sophist" is designed to accent the practical character of intentional transactions, but it also paves the way for a critique of Husserl's approach to the problem of objective reference. On Fuller's analysis, the fact that Husserl embraces the ideal of ultimate convergence places him squarely in the Skeptic's camp. But there are important aspects of the Skeptic's approach that appear to be in conflict with Husserl's strategy; furthermore, it seems that Husserl's affirmation of the pragmatic scope of intentional transactions may actually be more consistent with the Sophist's view.

Intrigued by the idea that "an organism's attempts at representing its environment have the unintended consequence of revealing more about the organism than about its environment" (p. 200), Fuller calls for the adoption of a second-person perspective from which to analyze the "differentiation" of an organism from its environment. He argues that this perspective allows for an "outside in" examination of "misrepresentation," thus giving him leverage to critique the notion that anomalies are indicators of the "epistemic inadequacy of representations" (p. 200), a view favored by the Skeptic, who tries to explain anomalies in relation to the failure of an operative "system of representations" (specifically, the failure of this system to conform to the objective character of its prescribed subject matter). By adopting the Sophist's perspective instead, Fuller is able to re-conceive anomalies as indicators of deficiencies within a given intentional schema.

This perspective seems to offer considerable advantage in dealing with questions about the semiotic character of a mind's relation to its environment. It also seems compatible with our previous reflections on the "taking" function, particularly in the context of Nelson's analysis of recognition schemes. Tuedio's formulation of this proposal stressed the "mediating" role played by noematic prescriptions. We will see, in the next commentary, that Husserl's position lends itself to yet another formulation, one that rejects the "representational" account of mind and, with this, the concept of "mediating" prescriptions. William R. McKenna's commentary pinpoints three areas where Tuedio's interpretation of Husserl's proposed resolution to the problem of objective reference seems misguided. He gives initial attention to the slant Tuedio has given to Husserl's conception of the problem itself, identifying what he takes to be the chief weakness in this formulation and suggesting an alternative reading of the problem which he feels is more in keeping with Husserl's intent. Lastly, McKenna examines Tuedio's interpretation of the role of the "phenomenon" and expresses concern that, thus conceived, this role would cut against the grain of Husserl's thought, implying both a naturalistic theory of causality and a dualistic framework for comprehending the relation between mind and world.

WILLIAM R. McKENNA

Commentary on Tuedio's "Intentional Transaction"

The distinctive aspect of James Tuedio's interpretation of Husserl is the concept of phenomenon "X" within a fourfold noesis-noema-phenomenon-object scheme. It is employed to account for how we can perceive specific objects in the world (the problem of objective reference) and how perceptual knowledge of objects can be corrected or supplemented by further experience (intentional transactions). Most interpreters of Husserl use a threefold scheme, noesis-noema-object, differing with one another about the relationship of the latter two. Some consider the noema to be, or to be part of, the object; while others claim that the noema is not to be identified with the object in any way. In my remarks below I will first explain what I believe to be at stake in these differences and then discuss Tuedio's interpretation within that context. This will require that I give my own interpretation of what Husserl thought the relationship between noema and object to be. I aim to show that what Husserl was trying to do philosophically is not captured by Tuedio's scheme, and that his scheme is problematic even as a conceptual framework for a cognitive psychology.

Husserl's theory of the intentionality of consciousness may be viewed, as Tuedio does, as an attempt to solve the problem of the objective reference of mental states. How is this problem to be understood? One way is as follows. There is a world which exists and is what it is independently of our consciousness of it. We human beings who exist within the world are able to be aware of it. How is this possible? A natural answer would be: by means of our senses. But it would be naive to think that our senses are like windows through which the world shines onto our minds. For we need to understand how the mind can register, for example, the presence of a particular object, maintain focus on it, and pick out its features. As Kant observed, we cannot suppose that objects and their features just migrate into our minds [1], or that specific ones do on a given occasion through some process of selection which the world establishes. And even if this were the case, the problem would still remain. For in the process of migration there must occur a transition where the object ceases to be just there and becomes noticed. It seems unreasonable to think that anything can enter into our consciousness without our reaching out to it with a reach that is directed from within consciousness. A successful reaching seems to presuppose the possibility of some kind of match between what directs the reaching and that which is reached.

209 H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 209-216. © 1988 by D. Reidel Publishing Company.

On the basis of this understanding of the problem of objective reference one could think that Husserl solved it by generalizing the linguistic model to perception. Just as an internal feature of our linguistic act, namely its meaning or sense, allows us to think (refer to) a specific object, so analogously there must be a sense within perceptual acts that performs the same function. [2] In recent interpretations of Husserl this analogy is taken fairly strictly and it is claimed that the perceptual sense consists of concepts. [3] A threefold scheme results: noesis-noema-object, where a perceptual noema, containing the perceptual sense, is not to be identified with the perceived object any more than is the thought object to be identified with the meaning of a linguistic act.

This is the basis of Tuedio's fourfold scheme. He adds a fourth element, the phenomenon "X", recognizing something which the threefold scheme does not account for. Noematic prescriptions (senses) do not only allow us to pick out a specific object, they also condition the way an object appears to us. This is obvious in the case of misperception. And even the indexical or demonstrative element of sense which some say is contained in the noema (and which Tuedio calls the "referential sense") can miss its mark, as in hallucination. Yet, something is perceived. Furthermore, the way objects appear to us changes as we get to know them better. A fourth element seems necessary in the scheme to deal with these cases.

Tuedio appears to understand the problem of objective reference in the way indicated above. He seems to assume that objects are "simply there." This assumption characterizes what Husserl calls the "natural attitude," which is a general orientation that pervades our normal way of experiencing the world. [4] The natural attitude can also be present in theoretical life. One can even do phenomenology in the natural attitude. One would then be doing what Husserl calls "phenomenological psychology." This is what Tuedio's essay seems to be a bit of.

One way to characterize the difference between phenomenological psychology and what Husserl calls "transcendental phenomenology" is to contrast the understanding of the problem of objective reference outlined above with another. On it, the problem is seen to be the very assumption that grounds the previous way of formulating the problem. Instead of assuming that objects are simply there and operating on that basis, the "simply-there" character of the world and all that is in it is thematized and taken to be a problem for theoretical inquiry. This move is called the "transcendental epoche and reduction." [5] The problem now is to account for how that character is a feature of the world as we experience it. Upon analysis, the character of simply being there can be seen to form the basis of both the objectness and objectivity of objects. Furthermore, it can be seen that "intentional transactions" have the sense for us that they have (and which Tuedio describes so well) because of this character. We experience ourselves "learning" about objects, having perceptions "corrected," and making "discoveries" about objects because we take what is learned and discovered as what was simply there all along quite independently of our knowledge and consciousness of them. Thus, this way of understanding the problem of objective reference can be seen as clarifying the sense of the concepts of "learning," "correction," "discovery," "intentional transaction," and even "objective reference" as that idea figured in the prior version of the problem. In doing this, phenomenology clarifies fundamental concepts of a science, one perhaps called "cognitive psychology," and is thus a phenomenological philosophy. I believe that this second way of understanding the problem of objective reference better captures Husserl's philosophical intentions.

The second way of understanding the problem of objective reference gives a twofold intentional scheme for perception: noesis-noema. The term "noema," or more accurately, "noematic sense," refers to the perceived object as viewed in phenomenological reflection. The character of "simply there" points to a structural, or better a dynamic, articulation within the noematic sense: on the one hand, objects are experienced by us to have fully determinate and complete natures (Tuedio's X conceptualizes objects considered in this way); yet the totality of what we actually attribute to them in any given experience falls short of this (Tuedio's "X" is related to this). The term "noesis" refers to ways of being conscious which are such that we can experience "objects," i.e., entities that both exist and are fully determined in themselves independently of our consciousness of them, and experience them incompletely. The prescriptions of which Tuedio writes are located within this noetic dimension and are called "apprehensions" (Auffassungen) by Husserl. [6]

A quote from one of Husserl's works will help to clarify some of the points made above. In his Analysen zur passiven Synthesis Husserl discusses a broad concept of noematic sense within which he distinguishes a "flowing sense" from an "identical sense." The context of his discussion is an analysis of perceptually exploring an object. He writes:

Here we can observe that in the sense of a harmonious and synthetically progressing perception we can always distinguish the ceaselessly changing sense from the thoroughly identical sense. Each phase of the perception has its sense insofar as it has the object both in the manner in which it has been determined through the originally presentative moments and those of the horizon. This sense is flowing, it is new in every phase. But through this flowing sense, through all the modes "object in the flow of determination," the unity of the substrate X, of the object itself, is maintained in the continual overlapping as it becomes more and more extensively determined. This substrate, this object, consists of all that the process of perception determines it to be and that all further possible perceptual processes would determine it to be. Thus an Idea, lying in infinity, belongs to every outer perception, the Idea of a completely determined object, of an object which would be completely determined and known ... [7]

The "identical sense" or "substrate X" which Husserl discusses here is the object we understand ourselves to be learning more about as we explore it. It is Tuedio's X. "Flowing sense," which corresponds to Tuedio's "X", denotes the object as given to us in virtue of what we attribute to this object or "determine" it to be in the various phases of our exploration of it, including what we "prescribe" it to be in terms of its unseen aspects (this is the meaning of Husserl's reference to the "horizon"). It should be noted that there are not two objects here, but rather one which, upon phenomenological analysis, can be seen to have two aspects which articulate, from an epistemological point of view, what presents itself to us. We believe that as the perceptual exploration proceeds (i.e. as we engage in what Tuedio calls "intentional transactions"), one and the same object progressively reveals its features. What comes to be given in actual perception (what Husserl calls the "originally presentative moments") may confirm our prescriptions, may reveal something totally new and unprescribed, or may not conform to some prescriptions and thus cause cancellation of them. In the latter two cases, the actual perceptions constitute what we experience as "discovery" and "correction" respectively. What is most important to observe is that the "object itself," the X, is part of the phenomenon of the appearing object as it appears. It is located within the noema in the form of an Idea, i.e. something posited by the mind as having its own complete nature and one which will ever be beyond what we can "know" of it. [8] By so locating the "object itself" Husserl is laying the groundwork for a transcendental philosophical account of our experience of objects.
That is, he is attempting to reveal the "conditions of possibility" of our experiencing objects and of the understanding of the achievements of these experiences that we have--and he is attempting to do this by pointing to factors intrinsic to subjectivity. Thus, for example, the identification of the intrinsic factor of positing an Idea helps clarify how we can experience objects as "simply there," because part of this character includes the notion that objects are more than we experience them to be, and it helps to clarify how we can experience and understand perceptual processes as yielding "discoveries" and "corrections." It thus gives a philosophical basis to the "transactions" Tuedio discusses.

These considerations allow us to see how Tuedio's fourfold scheme might be suitable to a cognitive psychology and they will also serve to help us understand the sense of that psychology. Cognitive psychology could be thought of as the attempt to understand various kinds of human behavior in terms of a person's cognitive capability, where "cognitive capability," in part, might refer to "representations" within the mind of the behavioral situation, including the objects in the individual's environment. Tuedio's "prescriptions" would be one way of conceiving of these representations. But how is this environment to be understood? If we look to Tuedio's "phenomenon," this environment would be one shaped by the mind's representational capacity; so that human behavior is conceived to occur in a context already meaningful to the subject. It is a response to an already "known" environment. Intentional transactions work to enrich both the individual's representations and the phenomenal environment.

Thus far conceived, the result would be a non-naturalistic cognitive psychology, i.e. one where causality at the physical and physiological level does not figure in. But Tuedio's scheme leaves an opening for such a supplementation, in that he thinks of the object as "initiating" (by which he seems to mean causing) "the input of sensory data." Should the object now be defined in physical terms, a psycho-physiology could explore the organic basis of human behavior. The resulting science would in some ways be similar to classical Gestalt Theory, at least in terms of its distinction between the "behavioral" and "geographical" environments. [9]

Husserl's transcendental phenomenology gives us insight into the sense of this cognitive psychology. "The perceived physical thing itself," Husserl writes, "is always and necessarily precisely the thing which the physicist explores and scientifically determines following the method of physics." [10] He adds that theoretical determinations of things by physics are not at all incompatible with his phenomenological concept of a thing. [11] Although the physicist determines the same object we perceive, a concept of that object is constructed which is quite different from that of ordinary perception. This "scientific" object should not be "absolutized": it should not be thought to refer to a reality behind the "subjective" being appearing in immediate experience, and to be causally related to it. [12] This is not only inconsistent with Husserl's phenomenological concept of thing, it would be in itself "countersensical." [13]

There is an ambiguity in Tuedio's account on this point which makes it unclear whether the cognitive psychology I have outlined would be subject to this criticism. Although he states that there are not two things, an object (X) and a phenomenon ("X"), he does seem to want to attribute causality to the object, and presumably as a way of explaining how the phenomenon arises. If so, this cognitive psychology would be subject to Husserl's criticisms. Ultimately the problem here is how to break out of the impasse which seems to have been created by Cartesian Dualism. If the object is thought to be causally related to the phenomenon, a "gap" will eventually appear in the causal pathway where the transition between the physical and the mental is supposed to take place. Despite the efforts of mind/brain identity theorizing, this gap has become an abyss ever since it appeared and there is no reason to think that the cognitive psychology discussed here would not fall into (and with) it.

This "gap" is the ontic counterpart of the "countersense" which Husserl refers to. His own solution to the problem, developed best in Crisis [14], is to view the scientific concept of the world as a well-founded construction, an "object" of a higher order than the world we experience in normal perception. It is a creation of our culture, and one which serves it well for certain theoretical and practical purposes. But, as discussed before, it should not be taken to be a truer reality somehow causally related to our phenomenal world. If the cognitive psychology we have been discussing heeds this advice, it could still have both a "phenomenological" and a physical/physiological aspect, but the relationships between the determinations of each of these aspects would have to be conceived in an entirely different way than they usually are. Here lies a real challenge for cognitive psychology. In its broadest significance, this would be an effort to reconcile for our culture its two concepts of "world."

* * *

Professor McKenna's summary of Husserl's project appears to challenge several aspects of Tuedio's interpretation. McKenna begins with an analysis of Husserl's understanding of the problem of objective reference, drawing a sharp contrast between Tuedio's formulation and his own. His thesis is that Husserl "bracketed" questions about the relationship between our mental states and the things we believe ourselves to be experiencing, and sought instead to understand how the world as we experience it comes to have the character of being "simply there." McKenna suggests that Tuedio's version of the problem presupposes that the world is simply there as an environment for transactions. McKenna, on the other hand, argues that Husserl's "reduction" is designed to look behind the scenes of this bias: thus, "instead of assuming that objects are simply there and operating on that basis, the 'simply-there' character of the world and all that is in it is thematized and taken to be a problem for theoretical inquiry" (McKenna: this volume, p. 210). On this reading, the task would be to comprehend how the "simply-there" character comes to be "a feature of the world as we experience it."

McKenna also seems to challenge Tuedio's interpretation of the noema as the meaning-content of a noetic apprehension. Following Aron Gurwitsch, McKenna contends that the noema of a perceptual act is in fact the perceived object, "as that object is viewed in phenomenological reflection" (p. 211). This indicates that the "phenomenological reduction" is designed not to analyze the intentional content of mental states but to pave the way for a new and better perspective on the world and the objects we experience in this world. McKenna implies that this new perspective is vital to a proper definition and resolution of the real problem of objective reference.

Since the world and its objects always "overflow" the content of our noetic apprehensions, the question arises: from where do we derive our sense of the "object itself" as something we can come closer to understanding through subsequent intentional transactions? McKenna suggests that we can clarify what we mean by "object" once we see that "the object itself" is something "located within the noema [that is, within the perceived object] in the form of an Idea," having been "posited by the mind as having its own complete nature [even if this nature is forever beyond our grasp]." Thus, one purpose of Husserl's method is to identify "factors intrinsic to subjectivity" and thus to account for our positing of the "regulative" idea.

McKenna is also critical of Tuedio's contention that the experiential character of the phenomenon is a strict intentional correlate of the meaning-content intrinsic to "noematic prescriptions." By equating "phenomenon" with "flowing sense," McKenna is in a position to argue that the phenomenon is actually intrinsic to the noema, and by equating "object" with "identical sense," he can pack another feature into the noema as well--namely, the regulative Idea in terms of which the experiencing subject relates to phenomena as objects possessing a fixed and determinable nature. This seems to imply that the experienced object is comprised (at least partially) of mental qualities. But here we have to be careful, for we should not overlook the importance of Husserl's assertion that he perceives the tree there in the garden, not a "mental representation" of the tree. Can such an assertion be reconciled with the gist of McKenna's twofold intentional scheme? Clearly we would need either to re-think our notion of what counts as "a tree in the garden" (as McKenna has proposed) or we would need to explain how one can perceive the tree there in the garden without the tree itself being the content of his perceptual apprehension (as Tuedio has proposed). In either case, it would be important to explain the extent to which mental functions underlie our capacity to enter into intentional transactions within an environment that is given to us in experience as "simply there."

What is it about our "noetic apprehensions" that would account for the "directed" character of attentive awareness? McKenna's discussion of the relationship between regulative ideas and perceptual apprehension suggests the role of a "positing" act intrinsic to the noesis. Does he not suggest that this positing act gives rise to a regulative idea (the "identical sense") which is intrinsic to the content of the noematic Sinn? Is this really so alien to Tuedio's notion of a noematic prescription? It is hard to see how the implications of the two positions could differ in any significant way on the question of how the mind comes to be related to objects experienced as having a determinable nature. For each position seems to assert that the content of the noema is dependent on noetic functions of mind, and that this content plays a crucial role in organizing our experience of a world replete with objects whose nature can, at least in principle, be approximated by sufficiently rigorous intentional transactions.

This would seem to imply that Husserl was interested in investigating the content of mental states, as Tuedio has proposed. But why argue as McKenna does that the object is intrinsic to the noema just because there is an idea within the noema which happens to refer us to an object which is experienced as having a fixed and determinable nature? Why not focus on the meaning-content intrinsic to our noetic apprehensions by means of which we apprehend certain phenomena as objects existing in a world which is perceived as being "simply there?" If we took this approach, perhaps we could take advantage of McKenna's comments to help clarify a point that is central to Husserl's theory, and also implied in Tuedio's formulation of the problem to which Husserl was addressing himself: namely, that our ability to experience an "objective" world implies the presence of a texture of meaning which has been posited by the mind in the form of a regulative idea. The task of phenomenological reflection would then be to examine the relation of this regulative idea to the life of mind in general, in an effort to account for the "objective reference" of our perceptual apprehensions.

McKenna and Tuedio seem to agree that the capacity of the mind to experience phenomena as "transcendent objects" depends in crucial ways on how the mind responds when it is affected by sensory input. How the mind comes to be affected, and why it responds to certain mental phenomena as if they were transcendent realities, may well be questions which lie beyond the parameters of phenomenological research. But questions about how intrinsic features direct the mind to a transcendent state of affairs are clearly within the scope of phenomenological research and will need to yield answers that can be reconciled with important developments in neurophysiology and cognitive science research. In the final analysis, Husserl's theory of intentionality appears to be a provisional rather than a definitive approach to the study of the inner workings of the mind. Furthermore, it will require careful resolution of important points of dispute regarding the nature and status of the noema before we can determine the extent to which Husserl's theory will assume a leading role in contemporary debates about the nature and function of mind. Nevertheless, it remains an important point of departure for the development of a more comprehensive first person ontology of mind.

Chapter Three

MIND, MEANING, AND LANGUAGE

As contemporary philosophers of mind press forward in their effort to understand mental reality, the challenge posed for naturalism is to remain open to the full range of human experience while remaining true to a general philosophy based on the concept of natural law. The former requirement implies being faithful to both the subjective and the objective dimensions of the human person; the latter means adhering to a philosophy of science which is empirically grounded, and--though dependent upon the use of logic and mathematics as methodological tools--carefully purged of any ontological commitments not ultimately endorsed by scientific observation alone.

3.1 Schemas

Generalizing on notions drawn from the fields of AI, psychology, and certain of the social sciences, Michael A. Arbib seeks to meet this challenge by developing a conception of functionalism that aims to avoid the traditional pitfalls of reductionism. In the following paper, Professor Arbib introduces a notion of "schema" designed to provide a naturalistic analysis of "human person" that he believes will accommodate basic phenomenological insights about subjectivity without sacrificing the rigor and simplicity of a functionalist account of the human organism. The liberal range over which he is willing to apply schema theory allows him to extend his speculations to difficult issues concerning the syntax and semantics of natural language.

Arbib's schema theory is designed to integrate a functionalist account of mental representations with a naturalist account of "the way in which we interact with the world." In this connection, he emphasizes both the interpretive mode of attentive awareness and the embodied character of our transactions with the world. While these themes have been emphasized for some time by members of the phenomenological tradition, Arbib's work represents a compelling attempt to deal with them from the standpoint of the cognitive science tradition. He emphasizes the broad and dynamic range of "background" knowledge that helps to shape our transactions with the world, and he is equally sensitive to the "active" character of perception. This leads him to incorporate observations on the inductive character of the organism's inferential capabilities. This aspect of his theory presents intriguing parallels to Tuedio's treatment of the corrective character of intentional transactions, and suggests the need for further work on the nature and role of transactional schemas.

Finally, Arbib notes the intriguing possibility that "networks" of schemas may function as "operative rules" possessing both "stability" and "adaptability" as they become "embodied habits." He stresses the corrective, hypothetical character of these schema-networks, and discusses the extent to which we move (especially as children) "from observed patterns of behavior to internal patterns which provide appropriate representations." (Arbib: this volume, p. 232) The dynamic of this process lies in the ability of the organism to recognize "discrepancies between what it experiences and what it needs or anticipates." (p. 233)

In this connection, Arbib appeals to the "process" notions of assimilation and accommodation as explained by the psychologist Jean Piaget, as well as to the "pragmatic criterion of ... prediction and control" (p. 223) widely discussed in the philosophy of science. Moreover, Arbib invokes a notion of "schema-assemblage" as constitutive of "world knowledge," which is currently deemed essential to our ability to understand and use language. It is important for the reader to note that although Arbib conceives schemas to be "units" in our psychological reconstruction of reality, he does not insist that they be atomic units. Thus, for Arbib, explanation becomes associated with "levels" of scientific description, so that reductionism is not a necessary condition for a scientific understanding of the world.

The theme of "world knowledge" is explored in the second major paper by Christopher Fields, who seeks to develop a theoretical account of the role played by background knowledge in natural language understanding. Another requirement for carrying a view such as Arbib's from the stage of a holistically inspired conception of mind and nature to that of a "working" philosophy that would allow the emergence of appropriate scientific theories, is some means for rigorously applying the tools of logic and mathematics. In the third of the major papers of this chapter, Herbert R. Otto advances a concrete proposal for satisfying this requirement. Thus, both papers which follow Arbib's work may be viewed as complementing his general approach.

Page 227: Perspectives on Mind

MICHAEL A. ARBIB

Schemas, Cognition and Language: Toward a Naturalist Account of Mind

Through the centuries, more and more of the properties of matter have been explained by physics and chemistry. Subtle new concepts like mass, the electromagnetic field, and the electron have first been postulated as theoretical entities, and then have become accepted as part of physical reality as they become integral parts of scientific theories which are intellectually compelling and which meet the pragmatic criterion of successful prediction and control.

But what is the situation when we turn from physical reality to the reality of the person? Can we still hold the naturalist view that there is only spatiotemporal reality, or must we countenance realities--like the mind or will or soul of the individual, or the historical sweep of social forces, or a God--which transcend space and time?

What I shall propose I call "schema" theory, a vehicle which I maintain can provide a coherent, thoroughly naturalist account of person. Of course, schema theory--indeed, cognitive science more generally--is no more complete an account of human cognition than Kepler's was of matter in motion. Thus, in what follows I shall test schema theory in its current form against issues which have arisen from recent debates over the nature of person, mind, and later on, language. But first, I should tell you what I take "cognitive science" to be.

219 H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 219-238. © 1988 by D. Reidel Publishing Company.

It is actually a loose federation of work in artificial intelligence (AI), linguistics and psychology. I use the term as an umbrella uniting three problem areas: AI, "the attempt to program computers to do things that you would swear require intelligence until you know that a computer has been programmed to do them"; cognitive psychology, the use of a language of information processing to design models for emulating the overt behavior of a human performing intelligent tasks; and, at the finest grain of detail, brain theory, the attempt not only to explain behavior patterns of animals and humans, but to incorporate into such explanations data on neural circuitry. What these areas have in common is that they all yield models of cognition and intelligence that can be run on the computer. Thus, to the extent that cognitive science succeeds, we will have theories of mind that are operative. The machine can simulate intelligence. This raises the question: can an artificial intelligence--a computer, whether programmed ad hoc, or by using principles culled from behavioral or neurological observation--actually exhibit intelligence? It is, I hasten to point out, for any given property an open question whether a simulation of it will itself exhibit the property. For example, in the case of motion: a standard computer simulation of a moving arm does not exhibit motion, but a robotic simulation does.

My answer to the question "Can machines exhibit intelligence?", and its cognate "Can human intelligence be given a naturalistic explanation, without invoking a non-spatiotemporal mind or soul?" has two facets. I accept that AI is currently limited, but I argue against the philosophical claim that AI is limited in principle. However, when we contrast intelligence in abstracto with what is specifically human about the way in which we are actually intelligent, I maintain that we must take into account the way in which we are embodied, both in having human bodies and in being participants in human society.

1. Searle's Critique

To begin, I want to distinguish formal systems (symbol manipulations and word networks) from systems in which symbols are linked to action in the world. I find this distinction useful in looking at John Searle's (1980) argument that an AI system can only emulate intelligence, not exhibit it. As his "straw-man," he took Roger Schank's theory of scripts (Schank and Abelson 1977), an account of how we can go beyond given information when answering questions. Schank has formalized an ability to bring general knowledge to bear in such situations. This he accomplishes in terms of "scripts" which incorporate knowledge of various types of situations. For example, a "restaurant script" allows one to infer that a waiter or waitress had served the food even if this was not explicitly mentioned in a particular account of dining out. Schank argues that a computer, equipped with enough scripts and control programs for processing incoming stories sufficiently to generate answers to questions, would exhibit intelligence. Searle counters this claim as follows. Imagine that, instead of a computer equipped with programs to take in stories and questions and print out answers, we have a human, who only understands English, sitting in a large box. Imagine, further, that the input and output are in Chinese, and that the man has a set of rules which do not involve translation into the English he understands, but rather tell him how to arrange the, to him meaningless, Chinese characters in order to generate an output. The idea is that these rules correspond, formally and exactly, to those Schank gives his computer. Searle argues, I think convincingly, that the man in the box would in no way understand the story. He then concludes, by analogy, that Schank's program cannot really understand stories, and thus does not actually exhibit intelligence.
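The restaurant example turns on inference from a stored, stereotyped event sequence. The following is only a toy sketch of that idea, assuming a trivially simple representation; the event names and the script itself are invented for illustration and are not Schank's actual system.

```python
# Toy sketch of script-based inference (not Schank's actual representation).
# A script is a stereotyped event sequence; a story that instantiates the
# script licenses inference of the events it leaves unstated.
RESTAURANT_SCRIPT = [
    "customer enters",
    "customer is seated",
    "customer orders",
    "server brings food",
    "customer eats",
    "customer pays",
    "customer leaves",
]

def infer_unstated(script, story):
    """Return the script events the story never mentions explicitly."""
    stated = set(story)
    return [event for event in script if event not in stated]

story = ["customer enters", "customer orders", "customer eats",
         "customer pays", "customer leaves"]
inferred = infer_unstated(RESTAURANT_SCRIPT, story)
# "server brings food" is inferred even though the story omits it.
```

The point of the sketch is only that "going beyond the given information" can be mechanized once background knowledge is encoded; Searle's challenge is whether such a mechanism understands anything.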

At first this argument is compelling. But then Searle must answer critics who force deeper consideration. Imagine that, instead of symbols being fed into a computer, a TV camera is arranged so that pictures serve as input. Imagine that, instead of symbols as output, the computer has been augmented so as to be able to move effectors--and was, in fact, a robot. Add a more and more complex data-base to the system. Replace the computer circuitry with an electronic analogue of a neural net. At each step Searle continues to insist that the system still can only simulate intelligence, not exhibit it. But, at this point, I am reminded of a classic story about Norbert Wiener, the "father" of cybernetics. Wiener believed he had resolved a famous 19th-century mathematical conjecture, the Riemann hypothesis, and mathematicians flocked from Harvard and MIT to see him present his proof. He soon filled blackboard after blackboard with Fourier series and Dirichlet integrals. But as time went by, he spent less and less time writing, and more and more time pacing up and down, puffing on a black cheroot, until finally he stopped and said, "It's no good, it's no good, I've proved too much. I've proved there are no prime numbers." And this is my reaction to Searle. It's no good, it's no good, he's proved too much. He's proved that even human beings cannot exhibit intelligence.

Note that Searle is not denying that there may be a naturalist explanation of mind; rather he wants to show that only a system with the biochemical structure of the human brain can actually exhibit intelligence. To get another perspective on this issue, recall the bald man paradox. "If a man has no hair on his head, he is certainly bald. But if a man is bald, adding 1 hair will not remove his baldness. But from 0 we can pass to any number by adding 1 sufficiently many times. Thus, no matter how many hairs a man has on his head, he must be bald." This is sometimes called the "slippery slope" argument: what are we to make of it? I think the conclusion here must be that baldness is not a predicate with a hard cut-off at some exact number of hairs. A few hairs and there is baldness, many hairs and there is no baldness, but there is no sharp transition in between. Now, it would seem that evolution gives a similar result regarding intelligence: the amoeba is not intelligent, the human is, and one can draw evolutionary trees which carry from one to the other without any single evolutionary step that converts an unintelligent species into one that exhibits intelligence. Yet, though we affirm evolution, we do not deny human intelligence. And, I think a similar claim can be made for AI systems. At most, today's systems exhibit but limited aspects of intelligence. But no successful argument has been given that blocks continuing development of such systems until they finally do exhibit intelligence in some very rich sense of the term.
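The slippery-slope point is that "bald," like "intelligent," behaves as a graded predicate rather than one with a sharp cut-off. A minimal sketch, with wholly arbitrary threshold numbers of my own choosing:

```python
def baldness_degree(hairs, clearly_bald=1_000, clearly_not=100_000):
    """Graded membership for 'bald': 1.0 is clearly bald, 0.0 clearly
    not, with a smooth ramp in between instead of a sharp cut-off.
    The two thresholds are arbitrary illustrative values."""
    if hairs <= clearly_bald:
        return 1.0
    if hairs >= clearly_not:
        return 0.0
    return (clearly_not - hairs) / (clearly_not - clearly_bald)
```

Adding a single hair changes the degree by at most 1/99,000, so no single step flips bald into not-bald; the same picture applies to the evolutionary continuum from amoeba to human.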

Nonetheless, I do want to argue that there is something distinctive in the way we, as humans, are intelligent. We have already noted the transition from pure symbol-manipulation systems to ones that interact with the world. A dictionary, for example, tells you what a book is only by giving a definition in terms of other words, and if you do not know these words, you must refer to further definitions in terms of yet other words. The dictionary never breaks out of that vast "web of words." But a person comes to understand new words because enough of the other words encountered make contact with one's own lived experience. One comes to understand the concept of book to the extent that he can look at a book and recognize it. Nor is this recognition limited to naming the object; one knows that it can be picked up, the pages turned, the writing read, the pictures examined, and information or entertainment gained from it. We go beyond the formal structures and semantic networks of artificial intelligence--our knowledge is rooted in real-time interaction with the world.

The "evolution" of linguistics provides a related insight. Modern linguistics was initiated by Chomsky's theory of syntactic structures, of formal relationships required between words if they are to constitute a "legal" sentence of the language. The apparent opposition between syntactic and semantic approaches to sentence structure has been diluted as those who built on Chomsky's insights placed increasing emphasis on the concept of a lexicon, the idea that general rules of grammatical structure are not sufficient to characterize the patterns of language without considerable information about the particular roles played by specific words, including certain semantic roles such as which words stand in the role of "instrument" or "agent" for a particular verb. Much of the AI work on knowledge representation, including Schank's scripts, can be viewed as an attempt to go from the lexicon to an encyclopedia--an almost open-ended compilation of background knowledge, knowledge that may implicitly shape our discourse. This suggests that much relevant linguistic knowledge goes well beyond the formally codified. An embodied mind is enriched by a multitude of experiences of living within a human body and as a member of human society.

2. Schema Theory

What, then, is this "schema theory" in which we are to give an account of the embodied mind, an account which is to transcend mind/body dualism by integrating mental experience with the way in which we interact with the world?

The history of schemas goes back to Immanuel Kant and beyond, but I shall start with work undertaken early in this century by the neurologist Sir Henry Head (Head and Holmes 1911). He explored the notion of body schema: A person with damage to one parietal lobe of the brain may lose all sense of the opposite side of his body, not only ignoring painful stimuli but neglecting to dress that half of the body; conversely, a person with an amputated limb but with the corresponding part of the brain intact, may experience a wide range of sensation from the "phantom limb." Even at this most basic level of personal reality--knowledge of the structure of one's own external body--the brain is responsible for constructing that reality.

Just how far scientific understanding takes us from "common sense" is shown in the work of one of Head's students, Frederick Bartlett, who in his book Remembering (1932) observed that when a person retells a story, it is not based on word-by-word recall, but rather on remembering the story via internal schemas and then finding words in which to express this recollection. This account paves the way for the views of Kenneth Craik who, in The Nature of Explanation (1943), argued that the function of the brain is to "model" the world, so that when one recognizes something, one "sees" in it things that will guide one's interaction with it. Note that there is no claim of incorrigibility, no claim that our interactions with the world will always proceed as expected. The point is that one recognizes things, not simply as a linguistic animal, merely to name them, but as an embodied animal. The "building blocks" of just such models are precisely what I use the term "schema" to name. To the extent that our expectations are false, our schemas can change, and thus we learn. Such writers as Richard Gregory, Donald MacKay and Marvin Minsky have also built upon this notion of an internal model of the world--initially in the cybernetic tradition--as a means of developing the concept of representation so central to much work in AI today.

One of the best known users of the term "schema" is Jean Piaget (see, e.g., 1971), the Swiss developmental psychologist and genetic epistemologist. He traced cognitive development in the child, starting from basic schemas which guide sensorimotor interactions with the world, through various stages of increasing abstraction leading to language and logic, and finally to abstract thought. He talks both of assimilation, the ability to make sense of a situation in terms of current stocks of schemas, and of accommodation, the way the stock of schemas may change as expectations based on assimilation to current schemas are not met. Such processes in the individual are analogous to the way in which a scientific community is guided by the pragmatic criterion of successful prediction and control. Science updates theory as it tries to extend the range of phenomena so understood. Moreover, the increasing range of successful prediction is often accompanied by revolutions in ontology--that is, in our understanding of what is real. Just such a revolution appears to be underway currently as we shift from the inherently deterministic reality of Newtonian mechanics to the inherently probabilistic reality of quantum mechanics.
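Piaget's two processes can be caricatured in a few lines of code. This is my own toy rendering, not Piaget's: schemas are predicates, assimilation is finding one that fits the observation, and accommodation is extending the stock when none does.

```python
# Toy rendering of Piaget's assimilation/accommodation (editor's sketch).
def classify(schemas, observation):
    """Assimilate if an existing schema fits the observation; otherwise
    accommodate by adding a new schema keyed to this observation."""
    for name, fits in schemas.items():
        if fits(observation):
            return "assimilated", name
    new_name = f"schema_{len(schemas)}"
    # Bind the current observation as a default argument so the new
    # schema recognizes future observations like this one.
    schemas[new_name] = lambda obs, seen=observation: obs == seen
    return "accommodated", new_name

schemas = {"flier": lambda o: o.get("flies", False)}
first = classify(schemas, {"flies": True})                   # fits "flier"
second = classify(schemas, {"flies": False, "swims": True})  # no fit: adapt
```

A real model would of course generalize the new schema rather than memorize one observation; the sketch only marks the structural difference between the two processes.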

There are a number of studies in the literature of cybernetics and artificial intelligence which use terms such as 'frame', 'schema', and 'script' to describe processes through which knowledge is represented. Schemas have been used for purposes as diverse as providing functional analyses of animal sensorimotor coordination to that of providing intermediate-level programs for mediating vision and touch with control of the movements of a robot (for a review, with peer commentary, see Arbib 1987), and even to providing formal models of language acquisition and production (Arbib, Conklin and Hill 1987), There is no agreed upon

Page 232: Perspectives on Mind

224 Michael A. Arbib

definition of a schema, and so it may seem somewhat presumptuous to talk as if there is a specific body of science answering to the name schema theory. However, we see examples of schemas which suggest a richness that grows as more and more scientific contributions are made.

This bit of history suffices to show how the development of schema theory is providing cognitive science with means to address issues in human perception and movement, in language and learning. For the moment, I want to briefly discuss perception and movement.

3. Schemas in Action

A key notion is that of the action/perception cycle, in terms of which humans are viewed, not as stimulus-response creatures who wait passively until, hit with a stimulus, they emit some corresponding reflex response, but rather as built from a network of schemas which represent "world knowledge," in which there is always a "schema-assemblage" of currently active schemas constituting its representation of current goals and situations. This schema-assemblage guides action, but is in turn updated as particular actions lead to new perceptions; this interaction of events and expectations drives the process of accommodation whereby the total network of schemas is updated and expanded.
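The cycle just described can be sketched as a loop in which the active schema supplies expectations, acting yields perceptions, and mismatches feed accommodation. The dictionary representation and the example world below are my own simplifications, not a claim about how schemas are actually encoded.

```python
# Minimal sketch of the action/perception cycle (editor's simplification).
def run_cycle(schema, world, probes):
    """For each probe, 'act' by inspecting the world, compare the percept
    with the schema's expectation, and accommodate on mismatch."""
    surprises = []
    for probe in probes:
        expected = schema.get(probe)
        perceived = world[probe]          # acting produces a new perception
        if perceived != expected:
            surprises.append(probe)
            schema[probe] = perceived     # accommodation: revise expectation
    return surprises

schema = {"door": "unlocked", "light": "on"}
world = {"door": "locked", "light": "on"}
surprises = run_cycle(schema, world, ["door", "light"])
```

The essential feature is that perception is driven by expectation, and expectation is revised by perception: neither term of the cycle is primary.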

To make this more concrete, I shall consider work on the machine vision project led by my colleagues Ed Riseman and Al Hanson. Input to the computer encodes a colour photograph of an outdoor scene consisting of trees, houses, grass, sky, and so on. How is the computer made to recognize these scenic elements and correctly label the parts of the image to which they correspond? The first step is called segmentation: breaking the scene into segments or regions which may serve as candidates for meaningful parts of the picture. Segmentation cues might include discontinuities of colour or texture or (with stereo pairs) depth; while commonalities of colour or texture might suggest picture elements to be aggregated into a single region.

Unfortunately, even the best of such "low-level" vision procedures cannot yield perfect segmentation: perhaps a shadow on a shutter will cause an image to be mis-segmented into several regions; maybe the highlighting on a roof of bluish-grey slate will lead the roof to be improperly fused with the sky. Thus, the computer must use "high-level" vision programs capable of invoking perceptual schemas embodying general knowledge about objects likely to occur in a scene. The bluish region at the top of the picture may be sky, or it may be a slate roof; a few contiguous regions which can be aggregated into a rectangle may be a good bet for a window or shutter or door; if we find a portion of a parallelogram that might be the boundary of the roof, and if there is a good bet for sky just above it, or a good bet for shutters just below it, then the level of confidence in the roof-hypothesis can be raised.

Machine-vision programming is not an "all-or-none" process. Some regions may be provisionally joined, some may be provisionally split. Different interpretations will have different confidence levels. There will be a process of cooperative computation between multiple hypotheses, strengthening some and weakening others, bringing about convergence upon a coherent interpretation of the whole image. My claim is that similar processes occur in our brains as multiple brain regions, each composed of millions of concurrently active neurons, interact to commit us to a single overall perceptual interpretation or course of action.
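The cooperative-computation idea can be given a schematic form: hypotheses hold confidence values and repeatedly pass support along weighted links until the interpretation settles. The update rule, weights, and region names below are invented for illustration; this is not the Riseman-Hanson system.

```python
# Illustrative relaxation between competing hypotheses; not the actual
# Riseman/Hanson vision system. Weights and regions are invented.
def cooperate(confidence, support, rounds=10, rate=0.2):
    """confidence: {hypothesis: value in [0, 1]};
    support: {(source, target): weight}, the source's confidence
    boosting the target's confidence on each round."""
    for _ in range(rounds):
        updated = dict(confidence)
        for (source, target), weight in support.items():
            boost = rate * weight * confidence[source]
            updated[target] = min(1.0, updated[target] + boost)
        confidence = updated
    return confidence

initial = {"sky": 0.8, "roof": 0.4, "shutter": 0.3}
links = {("sky", "roof"): 1.0, ("shutter", "roof"): 0.5}
final = cooperate(initial, links)
# Support from the adjacent sky and shutter hypotheses raises
# confidence in the roof hypothesis.
```

A fuller relaxation scheme would also let hypotheses inhibit rivals (sky versus roof for the same region); the sketch shows only the excitatory half of the story.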

To put some numbers on these speculations, it seems plausible that to make sense of any given situation we call upon hundreds of schemas in our current schema-assemblage; and that our lifetime of experience, skills, general knowledge, and recollection of specific episodes might be encoded in a personal "encyclopedia" of hundreds of thousands of schemas, all enriched by the verbal domain so as to incorporate representations of action and perception, of motive and emotion, of personal and social interaction, of the embodied self. It is in terms of these hundreds of thousands of schemas that I would offer a naturalistic account of self as a reality in space and time.

This must, however, raise for many the question: Could these hundreds of thousands of schemas, billions of neurons, really cohere to constitute a personality, a self, a personal consciousness? My approach to this question is through a critique of reductionism--the attempt to reduce the laws of one science to those of another. I shall start by looking at one-way reductionism, the claim that all laws of the mental are deducible from the laws of the physical brain; that there is a basic science, say, neurophysiology, from which all of psychology can be deduced if only sufficient analysis is carried out. To assess this, let us ask whether such one-way reduction is possible even within physics.

Consider statistical mechanics. Here, the aim is to explain bulk properties of matter by appropriate averaging over interactions of a myriad constituent particles. To do so, one must be guided by phenomena observed at the macro-level. Some properties, found for example in the history of the photo-electric effect, defy explanation in such micro-physical terms, and so provoke redefinition of underlying theory. This may yield paradoxical results, as when averaging leads from reversible Newtonian mechanics to irreversible thermodynamics. What seems at first a purely deductive process from micro to macro eventually forces a change in ontology. The practical result of such findings--of both new underlying assumptions and new methods of deduction and approximation which "propagate upward"--is the construction of new realities in the macro-world, such as computers and atom bombs. In sum, we see that no set of laws at one level figures as the ultimate arbiter of reality at other levels of observation and control. This suggests, as a more appropriate philosophy of science, a two-way reductionism.

Two-way reductionism is the way in which I would regard cognitive science. We assert neither that we have a complete theory of neurons, nor of schemas, as though if only we could compute enough we could deduce all that is to be known about human cognition and personal reality from one or the other. Rather, schema theory and cognitive science are evolving in response to critiques of the limitations of current understanding of mind and person. Let me, then, make several points which will summarize my understanding of schema theory as "open-ended"--responding to, but also changing, our concept of the reality of our personal and social worlds.

(1) There is an everyday reality of persons and things. If you cut a person, they bleed. If you drop a kettle, boiling water may scald you. Love can turn to jealousy. How can we come to know this reality?

(2) Schema theory replies: minds comprise a rich network of schemas. Assemblages of these represent current situations; planning then yields coordinated control programs consisting of motor schemas which, in turn, direct action. As we act, we perceive; as we perceive, so we act.

(3) Perception is not passive. Rather, it provides feedback as current schemas determine what is taken from the environment. If we perceive someone as a friend, what we take from them as, say, humor, we may well take from someone we dislike as an insult.

(4) A schema, as a unit of interaction with, or representation of, the world, is partial and approximate. It provides not only recognition and guides to action, but also expectations about what will happen. These may be wrong. Schemas, and their ties within the schema network, change. Piaget provides insight into the process of schema-change in his discussion of assimilation and accommodation.

(5) There is no single set of schemas imposed upon all persons. Even the young have distinct personalities. Each of us has very different life-experiences on the basis of which our schemas change over time. Each of us thus has our knowledge embodied within a unique schema-network.

Because individuals construct different "world-views," they will have differing realities. Let us, therefore, briefly examine the diverse roles that schemas will have to play if we are to build a truly satisfactory theory of how persons come to know their reality.

4. The Role of Schemas

I think of a schema as a unit for the construction of representations of reality, but not necessarily as an atomic unit. In the same way, a computer program may be used as a subroutine, as a unit in building larger programs, and may itself have been built up from other programs in its turn. What must be added to the notion of a program for a serial computer is (1) the idea of concurrency, i.e., of many schemas being active at the same time; (2) the notion of the embodied subject requiring schemas for action and perception; and (3) the idea that schemas constitute a network which brings together our notions of reality at various levels. Thus, we have the neurophysiological level, where the brain theorist seeks to instantiate schemas in terms of neural networks; above that, the cognitive psychologist's analysis, in information-processing terms, of schemas for basic pattern-recognition or memory tasks; then, the level of the coherence and conflicts within a schema network that constitute a personality, with all its contradictions (e.g., consider Freud's concept of identification as providing person-schemas); and, finally, the level of holistic nets of social reality, custom, language and religion.

Quite independently, Marvin Minsky (1975) chose the word 'frame' for his AI account of such social situations as a birthday party; while Erving Goffman (1974) chose the title 'frame analysis' for his sociological analysis of the way in which people's behavior depends on the frame, or social context, in which they find themselves. One example: a patient enters the doctor's office, and the doctor asks "How are you?" and the patient replies "Fine, thanks." After they are seated, the doctor again asks "How are you?", and now the patient replies "Doctor, I have this terrible pain ..." What changed? The action moved from the "greetings frame" to the "doctor-patient frame".

This hierarchical view of schemas is close to the views of C.S. Peirce, who discussed what he called "habits." For Peirce, a habit was any set of operative rules embodied in a system. He emphasized (and this anticipates Piaget) that they possess both stability and adaptability. He had in mind an evolutionary metaphor: species form a stable unit for analyzing the present state of the animal world, yet we know that these units are subject to evolutionary change. Peirce's habit, like our schema, can serve as a building block in a hierarchy of personality, society, science and evolution, and yet may itself change over time.

Thus, we can understand the diversity of possible schemas. We see that they may rest on individual style, yet they can be shaped by the social milieu. A network of schemas--be it an individual personality, a scientific paradigm, an ideology, or a religious symbol system--can itself enter into the constitution of a schema at a higher level. Such a "great" schema can certainly be analyzed in terms of its constituent schemas, but--and this is the crucial point--once we have the overall network, these constituents can find their full meaning only in terms of the network of which they are a part.

5. Schemas and Language

When we take up the subject of language, we find, at the core of our concern, the ineradicable tension between language and the richness it seeks to express. I have suggested that we need to construct a schema theory in which various "schemas" embedded in complex networks integrate the manifold experiences of the embodied self--a schema theory which is to somehow provide a scientific approach to the richness that words fail to express. Of course, any formalism, and that includes schema theory, that separates part of the network from its connections inside and outside must itself do damage to the plenitude of lived experience. Will this frustrate efforts at building a science of the mind? I think not, but there is admittedly a tension within the science of schema theory itself between current formal models of limited phenomena--whether expressed in computer terms, mathematical equations, or neural networks--and the richer description of human experience given in everyday language.

Here, our job is two-fold: to provide explicit accounts where we can, and, importantly, to understand the limitations of those accounts. Thus, we always operate in that tension, that dialogue, between the known and the unknown. It might seem to some observers that current limitations will in due time be removed by further research; while, to others, looking at the same tension, it may seem that such limitations are not temporary but irremovable in principle. I want to look briefly at this issue, and at an application some people have made in this context of Gödel's incompleteness theorem as an argument for the shortcomings of so-called "artificial intelligence" models in general, and then go on to examine a similar tension surrounding a model of language production, in particular.

6. Gödel's Theorem and the Role of Logic

In the early part of this century, philosophers and mathematicians pondered the question whether there existed a formal logical system in which, starting from axioms and using certain rules of inference, one could deduce as a theorem each and every true statement (and no false ones) about arithmetic. Gödel studied logical systems which were adequate (in the sense that they had enough expressive power to state the truths of arithmetic whether or not they could be proved) and consistent (in the sense that it was never possible to prove both a statement and its negation). In 1931, Gödel shattered the expectations of many eminent men from Hilbert to Russell by proving that any arithmetical logic that was both adequate and consistent was necessarily incomplete in that there were true statements that could be expressed but not proved (i.e., neither the statement nor its negation could be proved) within the system.

This result is of great importance in philosophy of mathematics, and some philosophers have tried to apply it in philosophy of mind as well, claiming that it shows that no machine can think. This appears to me a bizarre conclusion, for it rests on the assumption that a machine model of mind must have all its "knowledge" carefully coded in logical form and must be capable of no "mental operation" other than strict deduction from information encoded in it "at birth," as it were. Yet, no cognitive scientist would deny that an adequate model would have to represent mind as open to experience. Piaget has argued convincingly that the mind is not merely a system that starts with a set of consistent axioms and then stays strictly and consistently within the realm of those axioms. That is, one of the basic tasks in modeling the mental is to explain our ability to learn from mistakes.

However, if we make mistakes, that means that we have transgressed the limits of consistency. Yet machines have been built which incorporate learning algorithms, and Gödel's theorem would seem to have nothing to say about their limitations. His theorem would seem to imply that if one's actions do embody a complete logic, then one's actions are inconsistent. So, the import of Gödel's theorem for formal models of mind would seem to be that if one must make decisions, one will of necessity make mistakes.

It may seem that I have been somewhat abrupt about a purely logical model of mind, one in which mental life is restricted to formal inference. But this is not to be seen as a total rejection of logic. I see logic, not as the sine qua non of human thought, but as an important limiting case. On the whole, humans do not live by rigorous argument, but rather they make numerous decisions that merely seem "plausible" to them.

I shall leave this brief discussion of Gödel's theorem with the conclusion that schema theory, as I conceive it, asserts that logical deduction, although an important case of our decision-making, is in some sense exceptional. Real human decision-making is embedded within a network of schemas, so that an analogical schema-based process comes a lot closer to characterizing mental behavior than does logic.

7. A Model of Language Production

As a second example of the tension between formal description and a richer but informal description of cognitive phenomena, let us examine a model of a fragment of language production--one developed by Jeff Conklin (1983). The problem he tackled was to provide a computer with the ability to describe a scene in much the way a human would. The latter requirement ruled out blind scanning from left to right and top to bottom, describing every possible object and relation encountered. Actual humans seem to describe what is salient, and the most salient things get described first. However, to have the machine simply state "there is a person", "there is a house", "there is a fence", and so on, in order of salience, is still not a very good model of actual human language production. For instance, suppose a red door and a red car both occur in the scene. Then, even though the car might be several items before the door in terms of salience ordering, a human would very likely recognize the commonality and so might produce a sentence combining the two items: "There's a red door, and just down the road a red car as well." Conklin's description generator took as input a network expressing the names of the main objects of a scene and their relationships; each node or arc was labeled with a number expressing the salience of the corresponding object or relationship. The system then used both the salience ordering and rhetorical rules to extract salient objects and relations from the network. A system called MUMBLE (McDonald 1983) then packaged these into grammatically well-structured sentences, including proper use of anaphora.
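A stripped-down version of the salience-plus-commonality behavior might look as follows. This is not Conklin's generator or MUMBLE; the tuple representation, the grouping rule, and the sentence templates are my own stand-ins for his network, salience labels, and rhetorical rules.

```python
# Toy description generator: salience ordering plus grouping of objects
# that share an attribute. Not Conklin's system or MUMBLE.
def describe(objects):
    """objects: list of (name, colour, salience). Most salient first;
    objects sharing a colour are folded into one sentence."""
    ordered = sorted(objects, key=lambda o: -o[2])
    mentioned, sentences = set(), []
    for name, colour, _ in ordered:
        if name in mentioned:
            continue
        group = [n for n, c, _ in ordered
                 if c == colour and n not in mentioned]
        mentioned.update(group)
        if len(group) > 1:
            rest = " and a ".join(f"{colour} {n}" for n in group[1:])
            sentences.append(f"There's a {colour} {group[0]}, "
                             f"and a {rest} as well.")
        else:
            sentences.append(f"There's a {colour} {name}.")
    return sentences

scene = [("door", "red", 6), ("house", "white", 9), ("car", "red", 4)]
output = describe(scene)
# The red door and red car are combined even though the more salient
# white house separates them in the salience ordering.
```

Even this caricature shows the division of labor in Conklin's design: a content-selection stage driven by salience, and a packaging stage that exploits commonalities to produce natural-sounding sentences.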

Here we see certain aspects of human experience captured in a way that is rather formal and yet one that begins to show some of the richness of relationships between what we try to express and the way in which we say it. Simultaneously, we can quickly formulate a critique of how far such a model falls short of the richness of our language behavior. We know very well that the state of brain and body can modify the process of language production in subtle ways, not only in deciding what indeed the salience of something is, but also in producing the tone of voice that often says more than words. When we describe something, we do not simply give a neutral description of it. There is a communicative "intent" involved. When we speak, it is not only to express the most salient aspects of some semantic network, but to reflect awareness of a social setting, perhaps even to seek to communicate different things to different people. Or, again, an utterance may be designed to be a lie. Conversation can literally be stymied by misunderstanding of that deep sort growing out of participants having radically different stocks of schemas. Moreover, language is not only for communication. It is part of the way in which an individual makes thoughts available internally in order to work on them, play with them, and create new thoughts from them. Paradoxically, though, language also has the role of providing firm anchor points, ways of providing repeatable structures which let us develop and reshape an argument, either alone or in conversation.

Page 239: Perspectives on Mind

Schemas, Cognition and Language 231

To attempt a full account of human behavior via a logical model would be unduly limiting, though there are situations in which there is no doubt that we proceed by logical analysis of alternatives. In one sense language is impoverished, instituting fixed patterns that ride roughshod over nuance and subtlety of feeling; while in certain other situations it allows us to penetrate into realms of thought otherwise denied us. In all this, we see the tension between language and that experiential richness of which even schema theory is but a partial representation.

8. The Individual and the Social in Relationship

Up to this point, we have stressed schemas at the individual level--language as something that an individual uses to communicate with other individuals, however inadequately. It is time now for a key transition: from schema theory as a description of the mind of the individual to schema theory addressing the apparent reality of social forces and institutions. Here we confront the paradox of the individual actor who discovers that much of what he took to be his individuality appears to be the "playing out" of social schemas.

How can schema theory be developed so that it will serve as a tool for resolving this paradox? The problem can, perhaps, be expressed in a different way: "How does one become a member of a particular community?", whether it be a language community, a religious community, or a social community in general. "How does the individual acquire the schemas which constitute, or construct, his social reality?"

In order to proceed we must distinguish several senses of the word "schema." The first is between a schema about society which the individual "holds" in his own head, a schema which embodies his knowledge of relations with and within society, on the one hand; and what I might call, on the other hand, a social schema, a schema which is held by society en masse, and which is in some sense an external reality for the individual. I shall try to explain what it means for a schema to be a social schema, something not within an individual person's head, and yet how such a schema can affect what an individual does. But as background we need another distinction, still at the level of the individual, namely that between a schema as an internal structure or process (whether a computer program, a neural network, or a set of information-processing relationships in the head of an animal, robot or human) and the schema as an external pattern of overt behavior that we can see when we look at someone "from the outside." In Piaget's work, one finds a recurrent ambiguity as he switches without warning from talking about a schema as if it were a structure inside the head to talking of a schema as something directly observable.

232 Michael A. Arbib

This distinction between a schema as internal and as external can be better grasped by looking at the theory of finite automata. Very simply, a finite automaton is a machine which receives, one at a time, symbols from some fixed set of inputs and produces, one at a time, symbols from some fixed set of outputs. What makes it interesting is that these are not just stimulus-response mechanisms. The current input symbol does not alone determine what the current output will be--that is, a particular stimulus does not always elicit a fixed corresponding response. Rather, the automaton exhibits sequences of behavior, sequences of inputs corresponding to sequences of outputs. The external description of such an automaton characterizes how it behaves by specifying for each sequence of inputs the corresponding sequence of outputs, given that it starts in some standard state. This corresponds to the psychologist looking at a child and deciding that its next sequence of activity constitutes an example of a particular schema, e.g. a grasping schema or a suckling schema or the schema for object permanence. An internal description of a finite automaton explains its behavior in terms of a finite set of states, together with a specification of what happens when each input arrives, namely how the automaton changes state and, in so changing, which output it emits. The reason that we get different immediate responses to a given input is that different histories (sequences) of input/output can drive the automaton to drastically different states.
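The internal/external contrast can be made concrete with a short sketch. Below is a minimal finite automaton (a Mealy-style machine) in Python; the particular states and symbols are invented for illustration. The transition table is the internal description; the `run` method, which maps input sequences to output sequences from a standard start state, is all an external observer could tabulate.

```python
# Minimal finite automaton sketch; states and alphabet are illustrative only.
class FiniteAutomaton:
    def __init__(self, transitions, start):
        # Internal description: (state, input) -> (next_state, output).
        self.transitions = transitions
        self.start = start

    def run(self, inputs):
        # External description: input sequence -> output sequence,
        # always beginning from the standard start state.
        state, outputs = self.start, []
        for symbol in inputs:
            state, out = self.transitions[(state, symbol)]
            outputs.append(out)
        return outputs

# The same stimulus "touch" elicits different responses depending on the
# state that prior inputs have driven the machine into.
m = FiniteAutomaton({("rest", "touch"): ("alert", "orient"),
                     ("alert", "touch"): ("rest", "grasp")},
                    start="rest")
print(m.run(["touch", "touch", "touch"]))  # ['orient', 'grasp', 'orient']
```

Note how the machine is not a stimulus-response device: the repeated input "touch" yields alternating outputs because the history of inputs has changed the internal state.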

Our task, then, with schemas at the level of the individual is to infer, from externally observed regularities, internal structures that can provide an internal state explanation of them. This, of course, raises problems. Within automata theory itself, there is the problem of knowing that particular sequences of behavior are each initiated when the machine is in some given standard state. If you are observing someone from the outside, how do you decide that they are in comparable states when you look at their behavior on different occasions? Additionally, there is the problem of deciding which pieces of behavior correspond to one schema and which correspond to another. Our task here further requires that we note a problem not inherent in finite automata theory, namely that of specifying a way of deciding what the input and output alphabets are, i.e. what the units are into which behavior is to be decomposed.

Such problems face not only cognitive scientists; they also confront the child (largely at an unconscious level) when it interacts with its world. The child is trying to find out how to behave in such a way that it can achieve what it wants, avoid punishment, and gain some pleasure from its interactions with the world. This suggests that the child is trying to go from observed patterns of behavior to internal structures which will provide appropriate representations. Freud's notion of identification provides an interesting case in point: the child engages in a process of extracting from parental behavior schemas for behaving like the parent. To some extent this contributes to the child's growth, and to some extent it causes problems as the child attempts to incorporate behaviors which the parents arrogate for themselves, punishing the child for exhibiting them.

This leads inevitably to certain tensions in schema acquisition--that is to say, in mental development.

Piaget theorizes that the child has certain basic schemas and ways of assimilating knowledge to schemas, and that the child will find at times some discrepancy between what it experiences and what it needs or anticipates. In such instances the child's schemas will change as what Piaget calls "accommodation" takes place. It is an active research question as to what constitutes the initial stock of schemas. Much of Piaget's writing emphasizes the initial primacy of sensorimotor schemas, whereas others, like Colwyn Trevarthen, have focused on interactions between mother and child, thereby stressing social and interpersonal schemas as fundamental to the basic repertoire on which the child builds. Either way, the child has from the very beginning schemas--whether in terms of personal relationships or in terms of hunger and trying to become comfortable--in which knowledge, perhaps not yet conscious, of how to do something is inextricably intertwined with the knowledge of what to do. To the extent that we acquire skills to do something, we acquire implicit value judgements that this something is worth doing.

Another important concept in Piaget's work is that of reflective abstraction (cf. his discussion in Beth and Piaget 1966). Piaget suggests that we do not respond to unanalyzed patterns of stimulation from the world. Rather, these are analyzed in terms of our current stock of schemas. It is the interaction between stimulation--which gives variety and the unexpected--and schemas already in place that provides patterns from which we can then begin to extract new operational relationships. These relationships can now be reflected into new schemas which form, as it were, a new plane of thought. Then--and this is the crucial point--since schemas form a network, these new operations not only abstract from what has gone before, but now provide an environment in which old schemas become restructured. To the extent that one can form a general concept, one's earlier knowledge of, say, a dog or a ball becomes enriched. Piaget may, however, be too sanguine in his view of the child progressing through stage after stage of increasingly coherent abstraction, thereby smoothly making sense of a richer and richer body of experience. As Freud observed, at least in the process of identification, new schemas need not always just generalize or enrich old schemas. Again, we are reminded of our discussion of Gödel's theorem: the mental network is not a collection of consistent statements. Even though schema acquisition may, in part, reduce inconsistencies and press toward greater generality, inconsistencies nonetheless may (perhaps must) remain.

We conclude, then, that shared behaviors within a community may provide patterns that present regularities allowing a child to build schemas which will internalize for that child the social schemas of the community. It is important to stress the contrast between schemas outside--observable patterns of behavior--and those inside, i.e. schemas as processes within the individual's head. As an individual comes to assimilate communal patterns, his behavior will provide part of the context for others later.

9. Acquiring Language

To be competent in English is not to interiorize some formal grammar, but to have the ability to communicate with others who speak English. The concept of English is an example of Wittgenstein's notion of "family resemblance"--a speaker of English is someone who speaks a language intelligible to enough people who agree that they speak English. So, our question is "How does the child extract from utterances of his speech community those patterns of speech behavior sufficient to form schemas which ultimately interiorize as his own version (idiolect) of the language?"

A Chomskian approach to language acquisition would suggest that innate schemas already incorporate most of the constraints of a transformational grammar, including such general categories as noun and verb, along with certain constraints on the relationships between these categories. I favor an alternative approach--one exemplified by Jane Hill's study of language acquisition in the young child. Her model starts, not from innate syntactic constraints, but from the child's need to communicate: it wants to communicate, it likes to repeat pieces of verbal behavior gleaned from the environment. However, and this accords well with our schema-theoretic approach, when the child "repeats" an utterance, it does not do so word-for-word. Nor does the child omit words at random. Rather, the child's behavior is compatible with the suggestion that it already has some schemas in its head, and that an active schema-based process is involved in assimilating input utterances and generating simplified repetitions. To focus her study, Hill looked at a two-year-old responding to adult sentences with either a simple paraphrase or a simple physical response. The child was studied once a week for 9 weeks to provide a data-base with which to balance general findings reported in the literature.

Intriguingly, the child changed every week. There was no such thing as "two-year-old language" to be lumped into a single model. Hence, the model had to be one of micro-changes, in the sense that every new utterance might possibly work a significant change in the child's internal structures. Hill's model thus began with a certain set of innate mechanisms capable of driving the child's acquisition of a certain portion of language. These mechanisms, however, do not explain how it is that language eventually becomes nested or recursive in the sense that certain language structures can repeatedly incorporate simpler forms of that structure in increasingly elaborate constructions. Hill has suggested what these latter mechanisms might be, but has not yet studied them in depth. It does not appear, however, that such elaboration of the model will require her to include structures like those Chomsky would claim to be innate.

At birth a child already has many complex neural networks (fundamental schemas) in place. Thus, for example, it is able to suckle, grasp, breathe, excrete, and to feel pain and discomfort sufficiently to allow it to learn that to continue certain actions or to discontinue others is pleasurable (or painful). Note that to say that a schema is innate is not to imply that the adult necessarily has that schema. Once one begins to acquire new schemas they alter the information environment of old schemas so that the latter may well change.

We know that certain portions of the brain have to be intact initially for a person to have language, and we know that language is degraded in specific ways by removing certain portions of the brain. What is at issue is what exactly it is that the initial structure of the brain provides to the child. Does it give the concept of noun and verb, does it give certain universals concerning transformational rules, or does it give, rather, the ability to abstract sound patterns, to associate sound patterns with other types of stimulation or patterns of action? Hill's model suggests that, at least for certain limited portions of a child's linguistic development, innate processes of schema-change yield an increasingly rich language without "building in" linguistic universals.

In any case, let me briefly survey the "internal schemas" associated with Hill's findings: there were basic schemas for words, basic schemas for concepts, and basic "templates" which yielded a grammar marked by a richness of simple patterns which the child had already "broken out" of experience, rather than the grand general rules we find in the grammarian's description of adult language. And what was "built in" was not a set of grammatical rules but rather processes whereby the child could form classes, try to match incoming words to existing templates, and use those templates to generate responses. Let me give one brief example of the way studies of this kind can provide insight. For a while, the child produces only two-word utterances. A simple way of accounting for this would be to posit a notion of limited complexity which increases as the child matures. On such a model, one might next expect three-word utterances; but, in fact, what comes next, though only for a week, is a predominance of four-word utterances which seem to be concatenations of two-word utterances such as "second ball, green ball." By the following week, instead of saying something like "second ball, green ball", the child in effect says "second green ball." Hill's model explains this by invoking a process which collapses four-word utterances to three-word utterances by deleting the first occurrence of a repeated word. This hypothesis also explains the earlier findings of Ed Matthei who had been looking at the semantics of young children.

In order to probe the child's understanding, Matthei placed in front of a child a row of red and green balls, and asked it to pick out the second green ball. The child looked to see if the second ball was green, and was frustrated if it wasn't! Occasionally, it would even rearrange the balls so that the second ball was green. It would pick up the second ball if it was also green. This behavior seems to accord with Hill's hypothesis that for a young child the semantics of 'second green ball' is really given by the simple concatenation of "second ball" and "green ball", rather than by the "hierarchical" qualification of 'green ball' by 'second'.
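The collapse process attributed to Hill's model--deleting the first occurrence of a repeated word when two two-word utterances are concatenated--is simple enough to state as code. The following is a hedged sketch of the rule as summarized above, not Hill's actual implementation.

```python
# Sketch of the collapse rule: delete the first occurrence of a repeated
# word in a concatenated utterance. Illustrative, not Hill's actual model.
def collapse(words):
    for i, word in enumerate(words):
        if word in words[i + 1:]:              # the word recurs later
            return words[:i] + words[i + 1:]   # drop its first occurrence
    return words                               # nothing repeated: unchanged

print(collapse(["second", "ball", "green", "ball"]))
# ['second', 'green', 'ball']
```

Applied to the concatenation "second ball, green ball", the rule yields exactly the observed "second green ball", while leaving utterances without repeated words untouched.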

One could continue to catalogue such partial models of linguistic development, contrasting them with other phenomena which we observe in our own use of language, phenomena not yet captured within the net of such "early language" models. But, at this point, I want to take a look at the way in which schema theory can provide fresh understanding of a more advanced piece of language behavior, namely metaphor.

10. Language as Metaphor

Language can be viewed as providing partial expression of a richer network of schemas which, in turn, is the partial crystallization of the true richness of lived experience. This leads to the view that there is a sense in which all language is metaphorical. Some approaches to language take the "literal" and the "metaphorical" as forming a strict dichotomy. Literal meaning is somehow "real" meaning; each word is viewed as having its own meaning, or perhaps a finite set of different meanings. To find out what a sentence means, one composes individual literal meanings. Metaphor is seen as aberrant, an exception to be found in poetic language. Somehow, metaphor is a distortion of literal meaning, countenanced only when the literal rendering of a sentence fails.

By contrast, from our schema-theoretic point of view, meaning is extracted by a dynamic process which is virtually endless. A sequence of words is always seen as an impoverished representation of some schema-assemblage. In conversation we engage in a process of interaction which may tease out more and more of the meaning rooted in this assemblage, and it may even change the meaning as the conversation proceeds (whether that conversation is actual dialogue, or an internal "conversation" in one's exploration into the network of one's own internal meaning). Thus, a sentence may have a "literal" meaning in terms of skimming off the most common set of associations from related schemas. But there is no dividing line setting off the literal from the metaphorical. In all cases, discourse provides an entry into a schema network. Thus, "literalness" is perhaps simply a measure of the speed with which we break off exploration. When we interpret a poem, we see ourselves going indefinitely deeper into this network, both by progressively articulating the knowledge implicit in the schemas providing the core of the interpretation, and also by exploring the network of associations that take us further and further out into the richness of experience.

Consider the sentence, 'The sky is crying.' There are no fixed dictionary entries that include literal meanings of 'sky' or 'crying' which yield a coherent interpretation of this sentence. Rather, each word has a network of meanings, and these networks can interact until a coherent reading of the sentence is found--a reading which can be accepted as is, or elaborated even further. If we start from the "literal" meaning of 'sky' then, in making sense of 'crying' we must strip away those meanings that require eyes to be present, and perhaps come to focus on the falling of moisture. We might then come to the notion of rain as teardrops falling from the sky. But, because of other associations with crying, we can go further into the network, seeing 'the sky is crying' not simply as a funny way of saying 'it's raining' but as enriching this bare statement with a mood of sadness. If the sentence were within a context, there would be even further richness that could be pursued.

On the other hand, suppose it is the "literal" sense of crying that is anchored. If we think of the sky as normally blue, we might interpret the sentence as asserting that the person who is crying has blue eyes. However, if we let the two metaphors interact so that the literal sense of 'crying' is combined with the metaphorical sense of 'raining'--if, that is, it is the sky we take to be crying, then the crying would be rain, and thus the color of the sky would be gray--then our metaphorical interpretation tells us that the person in question has eyes that are gray, the melancholy gray of a leaden sky, not the bright blue we normally associate with sky.

In this way, a schema-theoretic explanation of meaning leads to the view that metaphor is in a real sense quite normal (Hesse 1980, Arbib and Hesse 1986). This is because we view language as embedded within a changing and holistic schema network. Literal and metaphorical are understood as ends of a continuum, rather than as posing a dichotomy. Understanding this continuity allows an extension of the field of science itself as we enrich our vocabulary for talking about that to which science has yet to do justice.

* * *

One of the fundamental points in Arbib's presentation is that no schema network is beyond replication in the programmed states of a sufficiently powerful computational device. His only caveat is that the function of the device would need to replicate the biochemical structures of a human brain as well. It is not clear whether Arbib is implying that the device would have to possess the causal properties of the brain, or whether it would need to replicate its actual biochemical makeup. But he does seem to imply that these properties lend themselves to computational replication and, furthermore, that the schema networks themselves are equally capable of being replicated as formal operations on symbolic representations. This suggests that Arbib might side with Husserl against Heidegger's contention that most of our "background" know-how is non-representational (in practice and in principle). But whether or not Arbib would disagree with Heidegger and Dreyfus, it is clear that he allows for the possibility that schema networks function as skill-based dispositions (or, as Dreyfus proposed, as forms of "know-how" rather than kinds of knowledge). What is not clear from Arbib's presentation is the extent to which he believes "skill-based dispositions" can be understood well enough to allow for accurate replication. If Dreyfus is right, it won't help to ask the experts. But, then, with what other means might we inventory and analyze these schema networks?

In the following commentary, Harrison Hall argues that Arbib's notion of schema is so broad as to include "everything from Kantian rules and standard AI programs to the sort of habitual skills and competence which seem to be proving fatal to [the] cognitive science [movement]." (Hall: this volume, p. 239) Professor Hall concludes that, consequently, Arbib has sidestepped more important philosophical issues concerning the viability of an information processing approach to the study of mind. For example, he thinks that Arbib has missed the point of Searle's "Chinese Room" argument, and that this reveals one way in which Arbib has misconceived the problem of human cognition. In order to indicate another way in which Arbib's schema theory may fail as an analysis of cognition, Hall invokes a brief analysis of Dreyfus' five-stage model of skill acquisition (Dreyfus: this volume). An adequate schema theory, Hall insists, would have to abandon the hypothesis "that cognitive processes consist of computational operations on formally defined symbolic representations." (Hall: p. 246) Furthermore, such a theory could serve as a viable model for mental replication only if it described "an artificial entity which duplicates the causal properties and functioning of the human brain." (p. 246) Here, he echoes Dreyfus by suggesting that the development of "massively parallel computing machines" may "facilitate" the sort of computational processing that would be required for true mental replication, but concludes that Arbib's schema theory would not be able to benefit from such a breakthrough unless Arbib were to divorce himself completely from the basic assumptions of the information processing model of mind.

HARRISON HALL

Naturalism, Schemas, and the Real Philosophical Issues in Contemporary Cognitive Science

As is probably true of most of Arbib's philosophical opponents, I am in full agreement with his desire to give a "thoroughly naturalist account" of persons--especially if all that is required is to refuse to "countenance realities ... which transcend space and time." But even at this level of generality, Arbib's claims appear to be sufficiently vague as to beg some interesting philosophical questions. In current debates in philosophy of mind, naturalism is typically opposed to mentalism. The "thoroughly modern mentalist" (Fodor et al.), however, does not countenance non-physical realities. [1] What makes him a mentalist or non-naturalist is his belief that an adequate account of intelligent behavior cannot be given at the level of physical science, that is, his belief that mental terms are an irreducible part of any adequate science of cognition. The modern mentalist (cognitive scientist, AI theorist) holds a functionalist view of mental states and processes, identifying them with their functional role in the ongoing activity of an organism (or machine) rather than in terms of the stuff in which they are realized. Although such a view is in principle compatible with accounts of persons which countenance immaterial realities, in practice current functionalists believe, not surprisingly, that functionally identified mental states and processes are realized in the strictly material stuff of the brain (or computer).

The controversy about past accomplishments and future prospects of cognitive science in general, and AI in particular, has virtually nothing to do with the debate over the existence of immaterial parts of persons and the relation such immaterial reality may have to the rest of the (material) universe. It has everything to do, however, with whether or not an adequate account of the mental is possible within the theoretical constraints of contemporary cognitive science. Those constraints require that mental processes consist essentially of computational (discrete, rule-governed) operations on formally defined elements (symbols). To the understanding and resolution of this central question it seems to me that Arbib has little to say. From an extended criticism of Searle's famous Chinese-box argument to introduction of a notion of "schema" broad enough to include everything from Kantian rules and standard AI programs to the sort of habitual skills and competence which seem to be proving fatal to cognitive science (AI, in particular), Arbib begs most of the important questions raised by the information processing approach to mind.

239 H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 239-248. © 1988 by D. Reidel Publishing Company.

What made AI so attractive to those engaged in the scientific and philosophical study of mentality was that it promised to free that study from complete dependence on neurophysiological science. If the essence of mentality were the computational processing of symbols, then understanding of mind would not require prior understanding of the physiology of the brain. Psychology would not be forced to wait for a completed science of the brain in order to define the states and describe the processes which make up human mental life. Mentality could be conceived as a unitary set of phenomena, requiring only a sufficiently large and detailed set of symbols. Mentality would be mentality, the formal processing of symbols, and from the point of view of cognitive science it would be irrelevant whether those symbols were encoded and those processes realized in immaterial (only accessible through introspection) stuff, human tissue, alien tissue or computer circuitry. Suddenly it seemed possible that computer science held the key that could unlock the secrets of the mind. If a computer could perform cognitive functions, it would ipso facto possess mentality and its program would literally be a psychological theory of those functions, applicable to human beings and any other cognitive entity. At this point, much of that excitement has disappeared and the prospects for the future have dimmed somewhat. The computer model of mentality has come under two very distinct forms of philosophical attack, and a number of well-documented AI failures have lent some support to one of them.

Searle's attack on AI [2] is entirely at the conceptual or non-empirical level. It consists of a thought experiment designed to show that formal processing of symbols is in principle insufficient to produce mental states or processes. For the sake of argument, Searle grants that such processing might be capable of the same input/output correlations as, say, the actual understanding of a story by a human being. He then supposes a situation in which a monolingual English-speaking person has exactly what a computer would have in such a situation: a story in Chinese symbols (to the person in question, meaningless), a set of questions in Chinese (equally meaningless to our subject), and a set of instructions (the program) in English for matching Chinese symbol outputs (the answers) with Chinese symbol inputs (the questions) in accordance with nothing but formal properties of the symbols. Since the person in this thought experiment only simulates understanding of the story without having any real understanding of it, the conclusion we are to draw is that such simulated understanding is all that AI can hope to have using programmed computers.

Arbib seems to grant Searle's argument, but then appears to miss the point of it by "forcing deeper considerations" which are largely irrelevant to Searle's contentions. Thus, Arbib suggests that if the computer were turned into a robot so that input came from a camera and output moved mechanical limbs, it would make a difference. Or, again, if its data base were enlarged sufficiently in size and complexity, we would have real understanding. Searle's own response to the "robot reply" suffices for the first suggestion. Such elements can be added to the initial thought experiment without change. The person's formal processing of the (to him, meaningless) Chinese symbols gives him no more understanding when the medium of input is a camera and their effect the movement of mechanical limbs than when they are simply sent in and sent out via keyboard and printer.

The second suggestion--that increased size or complexity of the data base could be decisive--misses the point entirely. If anything has been learned from more than two decades of AI research into natural language production and comprehension, it is that the data base and programming required to approach even the simulation of human language skills is unbelievably massive and complex. Since Searle has granted that there may be a story-understanding program for which the input/output correlation might be indistinguishable from that produced by real understanding, he has, in effect, granted any imaginable data base. The real point is that none of these considerations has any bearing whatsoever on the outcome of the thought experiment. That experiment proves, if Searle's rendering of it is correct, not that computer simulations of understanding have "only limited aspects" of human cognition, but that they have none. Computer understanding isn't partial, it is, to quote Searle, "zero." The problem is not a matter of degree and cannot be solved by more of the same.

What the computer lacks, according to Searle, isn't sophistication or power, but rather the most basic component of human mental states and processes, namely, intentionality. What makes our understanding intentional--that is, what makes the human understanding of a story about restaurants and hamburgers an understanding of restaurants and hamburgers--are the causal properties of the human brain which give its states and processes a non-arbitrary, non-formal connection to the world represented in those states and processes. Contrary to what Arbib writes, the claim is not that "only a system with the biochemical structure of the human brain" is capable of mentality, but that only a system with the causal properties of the brain is able to produce the required intentionality. And it is an open empirical question whether various biochemical or non-biochemical structures possess the necessary causal properties. But, for Searle, it is a closed question whether the right formal properties and processing, however produced, are sufficient to constitute mentality--and, moreover, it is a question clearly to be answered in the negative inasmuch as Searle's thought experiment shows that no formal processing of symbolic representations can be, alone, sufficient to produce any intentionality whatsoever.

The second line of philosophical attack on AI has, unlike Searle's thoroughly non-empirical sortie, definite empirical consequences, although it issues from contemporary phenomenological philosophy. In its strongest form, it claims that AI will not even be able to simulate any interesting range of intelligent human behavior. The best known critics from this perspective are Hubert and Stuart Dreyfus [3]. Their criticism can be divided into two stages, corresponding respectively to the philosophical positions of Heidegger and Merleau-Ponty, reinforced by AI's inability to solve the so-called "common sense" knowledge problem and by the failure of expert systems research to achieve its objectives.

Heidegger's insight is that our understanding of things is rooted in our practical activity of coping with them in the everyday world, and that this everyday world is essentially a context of socially organized purposes and human roles which cannot be represented in terms of facts, features and rules. This non-representable background against which common sense understanding occurs is not something we know but, as the result of socialization, is more like a part of what we are.

Ignoring the problem of common sense knowledge altogether, AI attempts in the 60's to program natural language comprehension, pattern recognition, and general problem solving met with failure after failure (though frequently these were portrayed in the literature as "partial" successes). In the early 70's a technique for dealing with everyday understanding in special domains called "micro-worlds" was introduced. There were a number of impressive results (e.g. Winograd's SHRDLU, Waltz's Scene Analysis Program, Evans' Analogy Problem Program, and Winston's program for learning concepts from examples). Micro-worlds were constrained in such a way that the problems of context-dependent relevance and context-determined meaning seemed manageable. The hope was that micro-world techniques could be extended to more general domains. Micro-worlds made increasingly more realistic could then be combined so as to ultimately produce the everyday world itself. The computer's capacity to cope with micro-worlds would finally be transformed into genuine artificial intelligence.

Dreyfus claims that the failure of this strategy results from running head-on into Heidegger's "non-representable background" of norms and practices which determine the significance of things in the everyday world. Although impressive in the contrived domains for which they were invented, no program of such a design could possibly yield anything like human understanding outside the limits of its artificial constraints. The reason for this is that situations in the everyday world simply are not like micro-worlds in any important respect. This insight emerged quite sharply in the attempt to program the understanding of children's stories. It was quickly discovered that the "world" of even a single children's story, unlike a micro-world, is not a self-contained domain and cannot be understood independently of the larger everyday world onto which it opens. This everyday world is presupposed in every real domain regardless of size, and seems somehow to be a whole present in each of its parts.

The cognitivist conviction that the everyday world and our command of it must somehow consist of an elaborate collection of facts and theories--a very complicated system of implicit beliefs differing from explicit ones only by being more easily overlooked by the cognitive scientist and more difficult for the ordinary person to recall--has become less and less plausible to maintain. Educated guesses at the number of beliefs which would be needed in such a system have grown from Minsky's 1968 estimate:

... a machine will quite critically need to acquire the order of a hundred thousand elements of knowledge in order to behave with reasonable sensibility in ordinary situations. A million, if properly organized, should be enough for a very great intelligence. [4]

To Dennett's suggestion in 1984 that "we know trillions of things" [5], Arbib, too, seems willing to add his own estimate when he writes:

our lifetime of experience, skills, general knowledge, and recollection of specific episodes, might be encoded in a personal "encyclopedia" of hundreds of thousands of schemas ... (Arbib: this volume, p. 227)

When to this we add the problem of updating everyday knowledge in order to take account of changes as time passes and actions are performed (the "frame problem"), the already unmanageable task of programming a computer to display common sense begins to look absolutely impossible and gives rise to comments like the following by Fodor:

If someone--a Dreyfus, for example--were to ask us why we should even suppose that the digital computer is a plausible mechanism for the simulation of global cognitive processes, the answering silence would be deafening. [6]

What this ought to suggest, according to critics like Dreyfus, is that common sense or everyday understanding doesn't work this way at all and needs to be conceived other than as a massive collection of representations (beliefs, schemas, or whatever). Just such a conception is provided by looking at common sense, and our command of the everyday world, in terms of skill rather than in terms of knowledge acquisition. A philosophical theory supporting such an approach is provided by Merleau-Ponty, and an important opportunity for bringing this theory to bear on cognitive science is afforded by critical appraisals of recent work in that branch of AI dealing with so-called "expert systems."

Merleau-Ponty held that all human behavior, including cognition, could be best understood in terms of the development and employment of habitual skills, all of which--from the motor or sensorimotor to the "purely" intellectual--had the same basic structure. On this view, the everyday background of common sense is to be understood as the ensemble of skills possessed by a human being, those skills ranging from the most basic perceptual and motor capabilities to the most sophisticated social and intellectual ones. The world, organized in terms of such skills, cannot be captured in terms of representations, nor is the ensemble of skills equivalent to a belief system of any sort.

Expert systems research aims to endow computers with human expertise in very specific domains (e.g. medical diagnosis, spectrographic analysis, various areas of management, game playing, and so on) by ascertaining the rules or principles which human experts themselves employ. These are then programmed into the computer, along with relevant facts. Human experts and computers thus work from the same facts using the same inference rules. But, since the computer cannot forget or overlook facts, cannot make faulty inferences, and can make its inferences much more quickly than a human expert, the expertise of the computer should be superior. Yet in study after study, the computer proves to be inferior to human experts. These results are not so surprising if careful attention is paid to actual human skill acquisition and employment, rather than forcing expertise into the information-processing mold. A study by Dreyfus has revealed a five-stage progression in the acquisition of skills, proceeding from novice to expert. Moreover, his account has been found to fit almost perfectly considerable data, independently gathered, dealing with the acquisition of clinical nursing skills. [7] There is not space here to reproduce anything but the central conclusions of that analysis. [8]

A novice recognizes specific facts and features for which he has been trained to look. He employs rules which deal with such elements treated as context-free. He lacks global command of the situation and is concerned only to see that his performance conforms with the rules. The expert, on the other hand, proceeds not by identifying specific facts and features, but by an immediate, intuitive sense of the overall situation. He does not follow rules or devise plans. His skill has become so much a part of him that he is no more aware of it than he is of his own body in ordinary motor activity. Tools or instruments, in fact, become extensions of the expert's body. Chess masters, when engrossed, can lose entirely the awareness that they are manipulating pieces on a board and come to see themselves as participants in a world of opportunities, threats, strengths, weaknesses, hopes and fears. In general, experts neither solve problems nor make decisions; they simply do what works. They recognize whole situations on the basis of past concrete experience without feature-by-feature comparisons. It has been estimated that a chess grand-master can distinguish more than 50,000 positions in this global and intuitive manner. When a situation is thus recognized, the appropriate course of action simultaneously presents itself. No conscious thought or analysis is evident.

What emerges in the transition from novice to expert is a progression from the analytic rule-governed behavior of a detached subject consciously breaking down his environment into recognizable elements, to the skilled behavior of an involved subject acting from an accumulation of concrete experiences and immediate unconscious recognition of new situations as similar to remembered ones. This innate human ability to recognize, holistically, the similarity of current and past situations facilitates the acquisition of high-level skills and seems thus to separate us dramatically from digital computers endowed with fact and feature recognition devices and inference-making power.

Expert systems fail to duplicate human expertise because of the stage of skill acquisition to which they are limited. When rules and principles are elicited from a human expert, he is forced, in effect, to revert to a much lower skill level--a level at which rules actually were operative in determining his actions and decisions. This is why experts frequently have a great deal of trouble recalling the "rules" they use and seem more naturally to think of their field of expertise in terms of a huge set of special cases. It is not surprising that systems based on elicited principles of this sort do not capture the expert's expertise. In terms of skill level, the computer is stuck somewhere between the novice and the next stage of the five-stage progression that culminates in expertise. What obscures this is the tremendous number of facts and features which can be stored in an advanced computer, as well as the number of rules or principles which--with superhuman speed and accuracy--it can utilize. Although its skill is not much different in kind from that of the novice, its raw computing power makes its performance seem vastly superior to that of a human operating at the same skill level. But computing power alone is not sufficient to duplicate the intuitive ability of the human expert.

This account of skilled behavior not only explains the limitations of expert systems, it also helps to explain the more general failure of AI to produce everyday, common sense knowledge. Such knowledge is not merely a matter of knowing that, of explicit beliefs or relations between the mind and propositions, but is much more a matter of knowing how, of being able to cope with a world of implicit social norms, human purposes, and instrumental objects. "Know-how" and "skill" are virtually synonymous.

In the central areas of everyday cognitive life we are, for the most part, experts. We are expert perceivers, speakers, listeners and readers of our native language, and expert problem solvers for a wide range of everyday problems. This expertise does not, of course, insure freedom from error, but it does make our performance different in kind, not just in degree, from that of a programmed digital computer. In each of these areas of everyday life the computer is, at best, a very powerful and sophisticated beginner, competent in artificial micro-worlds where situational understanding and intuition have no part to play, but incompetent in the real world of ordinary human expertise.

Given the way Arbib uses the notion of schema, it is difficult to be sure that the criticisms sketched above apply to him. For the most part, he seems to construe schemas in terms of standard AI programs and strategies--Minsky's frames, Schank's scripts, typical scene analysis heuristics, etc.--all of which are subject both to Searle's and to Dreyfus' lines of argument which I have adumbrated above. At times, however, he speaks of neural nets and of body images and internalized social norms, but always with insufficient precision to make it clear where he stands in the actual debate about the status and future of AI.

If what Arbib intends is to abandon the constraints or working assumptions of AI--namely, that cognitive processes consist of computational operations on formally defined symbolic representations--and instead describe an artificial entity which would actually duplicate the causal properties and functioning of the human brain, he can avoid all of these philosophical objections. But, then, such an entity would have to be capable of the intentionality so patently characteristic of human mental states and processes. Whether Arbib's notion of schema extends this far is simply unclear. Whether science will ever produce a machine incorporating such entities is an empirical question, and one to which none of the critical considerations above can speak. Massively parallel computing machines currently being developed may well facilitate the kind of processing that seems required. But none of this will be AI as originally conceived; and thus unless Arbib, in clarifying his notion of schema, is prepared to break radically from the computationalist conception of AI, he cannot provide us with a valid theory of the human mind.

--fJ--

As Professor Hall points out, the attraction of the information processing model of mind lies in its presumption that cognitive functions are not dependent on brain functions. If this were true, one could presumably "define the states and describe the processes which make up human mental life" without having to tie into a science of the biochemical structure of the human brain. But Arbib has defined his own position to include brain theory, even as he posits as his goal the development of a computer that exhibits intelligence. This, together with his insistence on taking into account "the way in which we are embodied, both in having human bodies and in being participants in human society" (Arbib: this volume, p. 220), suggests to Hall that Arbib has strayed from the computationalist's camp.

Hall contends that since Arbib continues to insist on a computationally based replication of human cognition, his problems do not diminish and his theory remains vulnerable to at least two arguments commonly raised against the information processing model of mind. But to what extent is Arbib's prerequisite for mentality--dependency on "a system with the biochemical structure of the human brain" (p. 221)--any more constraining than Hall's prerequisite for intentionality--dependency on "a system with the causal properties of the human brain"? (Hall: this volume, p. 241) Is there not a sense in which biochemical structures sustain and make possible causal brain processes? And, is it not possible that these structures might lend themselves to computational analysis? If so, then perhaps Arbib has refined the computational strategy in a way that frees him from the force of philosophically inspired attacks on AI research.

In working out the computational foundation of schema networks, it is possible that Arbib might be helping those in Husserl's camp who contend, against Dreyfus, that expert "know-how" takes advantage of an "iceberg" of tacit understanding systematically pushed beneath the surface of attentive awareness. Follesdal (1982) has argued that this "iceberg" is comprised of a network of hypotheses that remain un-thematized even as they guide anticipations and govern corrective responses to environmental feedback. He also holds that hypotheses in this network are tested collectively, much as networks of beliefs are open to confirmation or disconfirmation by appeal to perceptual evidence. Like Arbib, Follesdal suggests that these networks of hypotheses become "sedimented" as natural propensities and dispositions which function to give us a holistic grasp of alternatives, and of the consequences of alternatives, that appear to be open to us.

On the other hand, if the non-representational character of expert "know-how" accords with that given by Dreyfus and Hall, then one wonders why, at this level of skill acquisition, one's expertise is so ingrained, so intuitive, so invisible to our attentive modes of awareness. An obvious response might be that this know-how has a computational structure that has become "hard-wired" into the biochemistry of the brain, encoded in neurological "object code," and now accessible by conscious executive control only in the most extreme of cognizable situations. Granting this conjecture an initial plausibility, it would seem that Arbib's interest in "biological structures" becomes considerably more relevant to the skill-acquisition issue, since the structures to which he alludes would become necessary for the embodiment of such "consolidated" programs. Consequently, the transition from "novice" to "expert" might well turn out to be a process of internalization culminating in the encoding of cognitive skills at sub-cortical levels of brain function. Yet despite the amazingly powerful hardware available to researchers in artificial intelligence, including memory capacities and processing speeds far exceeding anything in the human brain, the organization of the brain and the "software" it supports are far subtler than anything AI researchers have yet been able to approximate. Perhaps Arbib's discussion of genres and networks of interacting schemas reflects an effort to work toward a more imaginative treatment of the functional aspects of the human cognitive processing system.

Nevertheless, if the only non-distorting application of the computational model of mind is to micro-world research, as suggested by Dreyfus and Hall, and if it is correct to conclude that nonattentive forms of awareness lack the sort of intentional content that lends itself to formal representation, then the highest levels of human understanding would not be dependent on "explicit, determinate representations of facts and features," nor on "systems of rules" for processing these representations (Hall: p. 244). Lurie's thesis that intricate relations hold between psychological phenomena and their conceptual and empirical background would seem to be immune to this criticism. But Nelson's claim that non-conscious intentional states are more easily replicated than conscious intentional states probably would be turned upside down. Furthermore, it seems that a proper application of Tuedio's analysis of intentional transaction (along with McIntyre's analysis of intentional content) would be limited to reflections on attentive modes of awareness. However, if Arbib's analysis of the multi-leveled relation between schema networks were to include a link between neurobiological structures and attentive forms of awareness, so as to make evident the basis and role of non-attentive awareness, this might circumvent the negative force of the Dreyfus/Hall conclusions.

Arbib's naturalist commitment obviously tilts him toward functionalism, yet it cannot be the narrowly constrained "computationalist" variety. Indeed, his position must be even more "liberal" in its attitude toward not only the range of mental phenomena to be accounted for, but also the types of conceptual and empirical resources available for theory formulation and testing. This appears quite evident from Arbib's own remark,

that to make sense of any given situation, we call upon hundreds of schemas in our current schema-assemblage; and that our lifetime of experience, skills, general knowledge, and recollection of specific episodes, might be encoded in a personal "encyclopedia" of hundreds of thousands of schemas, all enriched by the verbal domain so as to incorporate representations of action and perception, of motive and emotion, of personal and social interaction, of the embodied self. It is in terms of these ... that I would offer a naturalistic account of self as a reality in space and time. (Arbib: this volume, p. 225)

It is not merely the number and variety of schemas which Arbib sees as necessary for giving a naturalistic account of self that prompts Hall's claim about the liberality of Arbib's views, but it is also the assumptions implicit in Arbib's position that force such an appraisal. When, for example, he says that in any given situation "we call upon hundreds of thousands of schemas ... etc.," one cannot help but notice the concession to subjectivity which he is ready to make--quite in contrast to others who also call for naturalistic explanations, such as Rey, Moor, and Nelson. In the following commentary, Jan Garrett pursues this point about subjectivity, ultimately suggesting that even Arbib's liberal model of naturalistic functionalism falls seriously short of what would be required to explain human mental experience and the phenomenon of personhood.


JAN EDWARD GARRETT

Schemas, Persons, and Reality--A Rejoinder

Professor Michael Arbib defends the metaphysical position of naturalism on the basis of recent and projected work in cognitive science. Naturalism, he says, is the view that there is only spatiotemporal reality. He means to counterpose this position to the claim that phenomena of personhood, such as mind, will or soul, transcend space-time. Arbib proposes schema theory and the work of cognitive science as ways of showing how naturalism can accommodate such phenomena.

My response will attempt three things: first, to clarify the meaning of "naturalism" and the senses of "schema" which I find to be at work in Arbib's argument; secondly, to review phenomenological research into the schemas which orient human being-in-the-world and to show how this research undermines Arbib's main claim; and, finally, to sketch two nonnaturalist strategies for saving the phenomena of cognitive science and satisfying the desire, which Arbib and I share, for an integral yet multi-tiered view of reality.

When Arbib asserts that only spatiotemporal reality exists, he does not mean that there exist only beings which experience objects in a spatiotemporal framework, for such beings might not themselves have spatiotemporal dimensions of the sort studied by physics. On such a position, the space-time matrix might be a construct, with but relative reality. Hence Arbib's view must be distinguished from Berkeleyan and Leibnizian phenomenalism. It must also be distinguished from the position of existential phenomenology, which holds that objective space-time is grounded in the structures of being-in-the-world. Arbib's naturalism regards objective space-time itself as foundational.

Arbib presumably also does not mean that we cannot speak of certain aspects of reality apart from particular claims about space-time. After all, the interesting aspects of computer programs and the schemas representable by them are their functional relations to one another, not the location they may happen to occupy in grey matter or computer hardware. This point separates Arbib's naturalism from two older varieties of materialism: the materialism of mind-brain identity theory which equates mental and physical events, and the logical behaviorism which equates mentality with dispositions to ranges of overt behavior. Arbib appears to draw his naturalism from the most recent kind of materialism, sometimes called "functionalism." This naturalism holds that the mind operates in the body or nervous system as a network of computer programs operates in the computer hardware housing it. Functionalism shares with older varieties of materialism the requirement of a material underpinning for mind, but it does not exclude the description of functional roles of particular mental structures independently of accounts of the neurological material in which they occur. This view has an advantage over identity theory in making some sense of the difference between language descriptive of mental activity and language descriptive of physical activity, and over logical behaviorism in permitting its defenders to conceive thinking as an actual internal occurrence. [1]

Functionalist naturalism lets Arbib jettison two connected pieces of baggage which made it difficult to keep the materialist ship afloat in the past: reductionism and one-dimensional physicalism. He recognizes that "no set of laws has arisen at one level which serves as the ultimate arbiter of what constitutes reality at some other level of observation and control." What he recommends as a substitute for traditional reductionism is "two-way reductionism." The phrase is misleading and the concept is confused. The phrase is misleading because he rejects reductionism as usually conceived, i.e., the view that it is possible and desirable to exhaustively explain or redescribe phenomena at one level in terms of phenomena at another level. The concept is confused because he seems to subsume under its "two-way" aspect both the relation between statistical and classical scientific inquiries and the relation between two inquiries that study phenomena on adjacent but different levels. He appears to think, for example, that the relation between statistical mechanics and classical thermodynamics is like the relation between neurophysiology, a biological science, and schema theory, a "cognitive science." But statistical sciences obtain their concepts from classical sciences and are distinguished from the latter by their striving for insight into the "ideal limits from which the nonsystematic cannot diverge systematically," [2] while sciences operating on different levels necessarily employ different concepts and may or may not study nonsystematic subject-matter.

My criticisms on this point are relatively minor, however. Let us now turn to schema theory itself. What are schemas? Arbib says that no precise definition can be given, but it is clear that he regards them as distinguishing marks of mentality and personhood. I believe we can discover at least three senses of the term "schema" relevant to his argument. In sense I, "schema" refers to a pattern of expectation or orientation guiding perceptions, interpretations and actions of beings agreed by everyone to be persons. This sense appears to be Arbib's primary sense. He wants to conceive of a schema as "a unit for the construction of our representations of reality." He also remarks that "each of us has very different life-experiences on the basis of which our schemas change over time .... Each of us has constructed a different world-view."


In sense II, "schema" denotes an inscription, perhaps in flow-chart form, which operates as a heuristic device for the construction of Artificial Intelligence programs. Arbib describes his own group's use of schemas in this sense to analyze what occurs in animal brains during sensorimotor coordination and to provide programs to mediate between "vision" and "touch" and the movements of a robot.

In sense III, "schema" denotes a potentially intelligible "program" embodied in a brain or nervous system, on the one hand, or in the hardware of a computer-driven robot, on the other hand. I group such schemas together because our access to them is similar in principle: if we could somehow discover the "language" that controls the relevant brain functions, we could conceivably "dump" the sense III schemas contained in grey matter, that is, copy the code in which they are "inscribed" onto an output device and decode it into actually intelligible programs, just as we can "dump" the programs contained in a robot's operational storage.

It might be thought that there is yet another meaningful sense of "schema," sense IV, which corresponds to what is structurally common to a set of schemas I, the set of schemas II which tries to represent them, and the set of schemas III constructed through the mediation of schemas II. Yet this new sense may be fairly empty, since the transitions from schemas I to schemas III do not necessarily preserve details. Typically, what the AI researcher tries to do is to duplicate the behavioral successes of intelligence, not to simulate the operations that lie behind them. [3] But Arbib seems to assume that there is a non-empty sense IV, for he claims, in a discussion of a project from which he thinks we can generalize, that "processes ... in our brains" are "similar" to the execution of AI programs invoking perceptual schemas which embody "knowledge" about a robot's environment.

In any case, at least three senses of "schema" may be meaningfully distinguished. The reason is that our modes of access to them are different. Schemas I are typically encountered in everyday social contexts or in personal reflection, when we are engaged with other persons or our own past personal selves. Schemas II are external artifacts, available to the contemplative gaze of individuals or research groups. Schemas III are experimental objects, revealed operationally through the equipment and theories of the neurophysiologist or AI researcher. Schemas I are tacitly omnipresent in the world of normal interaction, where we meet the other as a "Thou." By contrast, schemas III appear in the objective or "It" world of the relatively disengaged researcher. To be sure, this "It" world is a socially shared one, but it exists only insofar as the personality of those involved is intentionally minimized.

If schemas IV exist, they guarantee in principle the possibility of constructing schemas III duplicating the patterns of schemas I, through the mediation of schemas II. But their existence would not guarantee that all the necessary conditions of minds informed by schemas I are reproduced by the corresponding schemas III. Yet if naturalism as a species of materialism is true, and if schema theory can allow us to prove that it is true, then schemas III must provide necessary and sufficient conditions (given sufficient physical or "hardware" conditions) for personhood, not merely necessary ones. Phenomenological research into schemas I confirms the suspicion that schemas do not provide sufficient conditions.

II

Something like schemas in the primary sense has been noted since virtually the beginning of philosophy. [4] But the fullest exploration of how schemas orient our consciousness and our conduct towards things and persons has been achieved by the phenomenological movement of the twentieth century. Its founder, Edmund Husserl, studied the ways in which our experience is pre-theoretically constituted, how our "intentional" relations to the world "make" it appear to us in certain ways. Husserl introduced the sophisticated concept of "horizon," which corresponds roughly to a network of schemas I, and carried out important work on the horizonality of temporal and other dimensions of human existence. [5] But it fell to Heidegger to point up the fuller importance of the concept in Being and Time, where it appears as "fore-structure" and "totality of involvements" as well as particular instantiations of "project," "attunement," "understanding" and even the "Being" of Dasein itself. [6] Merleau-Ponty's investigations of the lived body may be seen as an extension of this research into aspects of perceptual awareness. More recently, Hans-Georg Gadamer developed Heideggerian themes in his studies of events of understanding and interpretation in the human sciences. In his work, the primary notion of schemas appears as "horizon" and "prejudgment." [7]

Such reflections permit us to see the many ways in which our horizons structure our personhood. For example, horizons reveal so-called material objects in various modes of Being, or coming to presence. An object of contemplation, such as a piece of chalk stared at by students undergoing an elementary lesson in the metaphysics of substance and accident, has one mode of Being. The "same" piece of chalk in use has another. [8] The chalk as scientific object, interpreted as composed of molecules and atoms, possesses yet another. Our horizons also reveal our existential temporality and spatiality, i.e., time and space as actually lived prior to the leveling of perspectives which is essential to the constitution of (our notions of) objective time and space. [9] Phenomenological psychologists, moreover, have described the nestedness of spatiotemporal horizons, a nestedness which stretches backward into the past as remembered and forward into the future as anticipated, feared, hoped-for, etc. [10]


Schemas, Persons, and Reality--A Rejoinder 253

Researchers in this tradition have not missed the abrupt and gradual shifts of schema-networks or horizons. Abrupt shifts occur, for instance, when one's initial productive engagement (say, in wood-chopping) is interrupted by an equipment failure (e.g., by a crack in the axe handle), and when a doctor moves from direct looking (say, at the patient's outward symptoms) to technically assisted seeing (e.g., computer-assisted blood analysis) and from such specialized seeing to dialogical inquiry (e.g., asking the patient how he feels), which gains access to the lived body of the Other. [11] Abrupt shifts may also occur between future-directed productive activity, characteristic of everyday life, and the receptive stance which lets an artwork disclose a new world. [12] Horizons shift gradually when some but not many prior expectations fail to find confirmation or fulfillment, as when the incoherence of a particular reading of a passage in a text forces the reader to consider that he is misconstruing the tone in which the passage should be read. (Perhaps it was meant ironically but he took it seriously.) Similar gradual changes occur in conversations with other persons who challenge our preconceptions or prejudgments and offer grounds for so doing. [13]

Phenomenological studies of how horizons change report the following observations: when a particular schema fails repeatedly to make what is experienced coherent or meaningful, the schema will cease to be operative, that is, to guide future encounters with what is experienced. But it does not simply cease to function. Rather, it is first "detached" from the operative set of prejudgments and "moved" from the relative obscurity in which it has been actively guiding or misguiding our lives to the center of attention, where it is probed and tested. Perhaps more entrenched assumptions, other schemas making it seem probable or necessary, lying behind it in even greater obscurity, will have to be challenged as well. The likelihood of such schemas' singly or collectively being true is reconsidered and their relative strength with relation to their competitors is evaluated. If they now appear comparatively inadequate, one or more of them may be disqualified as operative schemas and radically excluded from candidacy to guide our future experience. [14]

How can the computer model of mind help us here? Arbib provides us with a sample of his approach when he describes the machine vision project led by Hanson and Riseman. Success in such an undertaking requires a computer analogue to the hermeneutical circle, i.e., a nuanced shifting between regional and holistic search for the most probable hypotheses. If the initially most probable hypotheses at the regional level are incompatible with each other or with the most probable hypotheses at the level of the whole, then it may be necessary to revise upwards the probabilities of initially less probable hypotheses until compatibility is reached. The resulting revision of hypothesis schemas regarded earlier in the orientation process as highly probable corresponds formally to the prejudgment revision or horizon change described by the phenomenologist. Still, something in the latter description eludes the corresponding AI description: phenomenology reveals that an operative prejudgment, when challenged, moves from operative hiddenness to the center of attention. The meaning of this statement is not adequately revealed by descriptions of the process of schema revision that could occur indifferently in minds or computing machines. Something essential is omitted.
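The formal side of this process, the part that on Garrett's account could occur indifferently in minds or machines, can be sketched in code. The following is a hypothetical illustration, not the actual algorithm of Hanson and Riseman's vision project; the regions, labels, probabilities, and compatibility rule are all invented for the example.

```python
from itertools import product

# Competing hypothesis schemas for each image region, with prior
# probabilities (all values invented for illustration).
regional = {
    "upper": {"ceiling": 0.6, "sky": 0.4},
    "lower": {"grass": 0.7, "carpet": 0.3},
}

OUTDOOR = {"sky", "grass"}

def compatible(labels):
    # Holistic constraint: the scene must be wholly indoor or wholly outdoor.
    return (labels["upper"] in OUTDOOR) == (labels["lower"] in OUTDOOR)

def joint_probability(labels):
    p = 1.0
    for region, label in labels.items():
        p *= regional[region][label]
    return p

def best_interpretation():
    # Rank whole-scene interpretations by joint probability, then return
    # the most probable *compatible* one.  A regionally weaker hypothesis
    # ("sky" here) is thereby "revised upwards" when the regionally
    # strongest combination fails the holistic constraint.
    combos = [dict(zip(regional, labels))
              for labels in product(*(h.keys() for h in regional.values()))]
    combos.sort(key=joint_probability, reverse=True)
    for combo in combos:
        if compatible(combo):
            return combo
    return None

print(best_interpretation())  # {'upper': 'sky', 'lower': 'grass'}
```

Here "ceiling" over "grass" is regionally most probable (0.6 x 0.7) but holistically incompatible, so the initially less probable "sky" hypothesis wins out, a crude analogue of the regional/holistic shifting described above.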

This omission is suggested in the terminological difference between Arbib and the phenomenologists. When we compare the term "schema" with the phenomenological near-synonym, "horizon," we notice immediately that the latter resonates with connotations of "field" and "open space," while the former does not. The difference is not accidental: it is precisely such an "open space" that every genuine form of materialism wishes to deny. What is this "open space"? First, it is not the objective space-time matrix which naturalism posits as the ground of reality. Rather, it is the condition for the possibility of every human reality-interpretation, including the naturalist one which holds space-time to be foundational. Secondly, it is a phenomenon most easily approached by indirection. Since antiquity philosophers have evoked it by analogies based on the vocabulary of sense perception. For Plato the open space of knowledge is like the upper world illuminated by the Sun, in comparison with which ordinary sense perception is constricted in the closed world of the Cave, where we see primarily shadows thrown by firelight. Western religious traditions speak of "hearing a call," the intelligibility of which phrase presupposes an opening to the call. Heidegger exploits sensory metaphors when he discusses the "Da," literally the "There," of Dasein. He calls it "die Lichtung" or "the clearing." [15] (Etymologically, "Lichtung" means "lighting.") A clearing, of course, is a place into which the sun shines, an open space in the forest.

Philosophers have anguished over the inability to give a "scientific" description of this field or open space. Scientific descriptions are invariably formal, and we can normally say of any intelligible form that it belongs to a class which has at least one other member from which the form may be distinguished. But the open space is no such formal thing. When we try to describe the space, we invariably produce a formal description of the limits and orientation of the field under some specific circumstance or other, not the "fieldness" of the field itself. Let us grant for the sake of argument that a circumstantially limited field can be correlated with schemas which, in principle, may be embodied in something like hardware. Let us also grant that the form of brain activity and the external behavior of human persons can in principle be reproduced on computers or computer-driven robots. Still missing is any ground for believing that such hardware-software composites would possess an "open space." And if they do not, then few of those aware of what phenomenology has tried to show us would accept the claim that these composites are persons. The modeling would be a mimicry of persons, not a creation of them.

It will not do to equate the "open space" with reflectivity and to argue that since software can treat other software as data, computers can in principle "turn back" upon their own mental states and thus can reflect, just as human beings can. Jean-Paul Sartre was correct to distinguish reflective consciousness from what he called "nonpositional self-consciousness," the open space presupposed by our ability to reflect. [16] Such self-consciousness is the awareness we have of ourselves as related to the world, prior to, and overlaying, any explicit reflection upon the world or our own selves as distinct from it. This pre-reflective awareness is the guarantee that reflection does not always distort that upon which it reflects. It is also what grounds our inability, noted by Descartes, to doubt that we exist.

Pragmatically oriented philosophers, who regard prediction and control as decisive criteria for the viability of a theory, may be as unmoved by these observations as an earlier generation of "tough-minded" thinkers were unmoved by Heidegger's account of those rare but significant moments when the dread-filled realization of our mortality yanks us from our usual concern with beings (in the plural) and brings us "face to face" with Being. But there is a more profound pragmatism which appeals to the criterion of the coherence of truths revealed in all modes of inquiry which serve our desire to know. Such a pragmatism will respect phenomenological research and will judge any materialism to be insufficiently empirical.

III

Arbib defends naturalism because he desires a unified but multi-tiered cosmology compatible with evolutionary theory and the general direction of cognitive science. This desire is reflected in claims that evolutionary trees can be drawn from unintelligent unicellular life to intelligent human beings "without there being any single evolutionary step" that produces intelligence, and that "we can make a similar claim for AI systems."

Yet there is a way other than naturalism to meet these concerns. Panpsychist process philosophy, associated with Alfred North Whitehead and Charles Hartshorne, teaches that mentality, corresponding to what I have called the "open space," exists at every level, from subatomic entities, through the amoeba and human being, to the universe as a whole. At the lowest levels this mentality is a dim, fleeting, disconnected feeling, but at the higher ("mind") levels, where the feelings causally transmitted from the lower ("body") levels are combined, intensified, contrasted and clarified, thought as we know it emerges. [17] Process philosophy lets us put forth a unified worldview consistent with evolutionary gradualism and the evidence for the open space, albeit at the cost of admitting traces of the opening where common sense would not expect to find it.

Yet another response to the quest for an integral yet multi-level cosmology is provided by neo-Aristotelianism. Unlike traditional materialism but like Arbib's naturalism, this view argues for the existence of multiple levels of reality and explanation. In Bernard Lonergan's formulation, it permits a precise definition of material and spiritual reality: reality is material insofar as it is ultimately intelligible though unintelligent; it is spiritual insofar as it is ultimately intelligible and intelligent. Clearly, then, our minds are spiritual and any materialism that denies the existence of spiritual phenomena must be wrong. [18]

A complicated reasoning process and an ancient tradition lie behind this simple point. For Lonergan, as for Aristotle, perceptivity is not in itself intelligent. Intelligence, or intellect, emerges "one level up" from perceptivity. Intellect emerges only with insight into the natures of things and verification of their existence, resulting in judgment. Both insight and judgment require percepts (for, as Aristotle says, intellect never thinks forms apart from images), but the mind is not the images. For Aristotle, it is the intellectually illumined forms themselves. [19]

Aristotle claims neither that the individual human intellect is immortal nor that the intellect can think without a link to the body, whose sense organs, after all, provide the images in which the intellect thinks the forms. Yet he argues that the intellect is "separable" from the body. [20] What sense can we make of this remark?

Since Socrates, rationalists have understood intellect as potentially master within the person. But if intellect is to body as master to servant, how can it be in the body? [21] Aristotle and others noted that "art," or knowledge relating to making, rules its subject-matter, the material to be transformed. It could do so because it stands outside the material, in the artisan working on the material. Once art, as form or idea of the thing to be made, has entered the material, it no longer dominates but is accommodated to the now restructured matter. Similarly, intellect cannot be in the body in the same intimate way as perceptivity, which is mostly accommodated to body. So, if intellect is not to be enslaved to sensitivity and the body, as Hume and Nietzsche later insisted, [22] it must in some sense be "separable," even if not dissociated, from the body.

Perhaps Aristotle's claim about the intellect's separability also reflects his attempt to capture the nonreflective presence of mind to itself. An intelligent being is aware that he thinks when he is actually thinking what he knows. [23] Would he possess such a luminous nonreflective self-presence if his thoughts were fully immersed in the givenness of the images?

Many questions will remain, of course, regarding the ultimate validity of panpsychism and neo-Aristotelianism as metaphysical positions. But such positions, which try to make room for the phenomenon of self-presence or the "open space," are more likely to lead to a defensible cosmology than one which denies them, as a consistent materialism must.

--9--

Professor Garrett has examined Arbib's position from a broadly historical vantage point, and in so doing is convinced that Arbib's naturalism with regard to philosophy of mind is not to be equated with either the older physicalist identity theory or modern behaviorism; he claims instead that it should be classed as a species of functionalism. To that extent, it would be a reductionism with all the shortcomings such a view possesses as an account of the richness of human experience.

Arbib has argued with unabashed optimism that a thoroughly adequate naturalist account of mind and personhood is possible, and that therefore the goal of AI--to build machines that exhibit intelligence--is realistic, i.e., AI is not limited in principle to mere simulation. Garrett tries to show that Arbib's optimism in this regard is unfounded. He begins by noting that Arbib uses 'schema' in three, possibly four, senses: as

(1) An expectation pattern (primary sense),
(2) A flow chart,
(3) An embodied program,
(4) Whatever is common to (1)-(3).

He then argues that even if we allow the fourth sense of the term 'schema' to be non-vacuous (and thereby assure that type I schemas would become manifest given an appropriate medium for schemas of type III), it could not be established that naturalism in Arbib's sense is true; for to establish that, we would have to show not only that type III schemas are a necessary condition for type I schemas, but that the former are a sufficient condition as well. That is, we would have to show that if certain type I schemas were manifest in a context of persons then those persons could be shown to have a designated set of type III schemas structurally implemented in their being. But Garrett claims that this is precisely what recent phenomenological research has shown to be impossible.

Garrett points out that type I schemas have been noted since the beginning of philosophy, but that the fullest exploration of them was initiated by Husserl, who showed how our experience is "pre-theoretically constituted" and how our "intentional" relations to the world "make" it appear to us in certain ways. In so doing, he employed the notion of "horizon" which, according to Garrett, anticipated the idea of a network of type I schemas. Heidegger and Merleau-Ponty carried this investigation even further; and most recently Gadamer uses "horizon" and "prejudgment" as primary schemas in approximately this same sense. Garrett maintains that these networks "structure" our personhood. Thus the "modes of Being" of an object are determined by our horizons. For example, depending upon individual purposes and predispositions a thing will "show itself" or "be taken" in correspondingly different modes of Being.

In his paper Arbib was trying, with his term 'schema', to make such notions more precise. Garrett interprets Arbib to be saying, for example, that "when a particular schema fails repeatedly ... [it] will cease to be operative ... " (Garrett: this volume, p. 253). But, what exactly is a "schema" in this context? Why not use the term 'hypothesis' here? Well, in some situations it would be an hypothesis (which is propositional), but in others it might be some other "schema," e.g., it might be an image or a template, items that are not propositional but pictorial. Thus, the term 'schema' is intended to encompass all sorts of perceptual, conceptual, and possibly even emotional, "instruments" of conscious processing. On this basis, the term 'horizon' is perhaps best understood as a network of schemas--that is, as a logical, or analogical, or statistical system of hypotheses, templates, feelings, and the like.
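The idea that one network can mix propositional and pictorial "instruments" can be given a toy rendering in code. This is a speculative sketch, not Arbib's formalism: the class names, the similarity threshold, and the sample scene are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    # A propositional schema: a statement plus a test of whether it holds.
    statement: str
    test: callable
    def matches(self, scene):
        return self.test(scene)

@dataclass
class Template:
    # A pictorial schema: matched by similarity to a pattern, not by truth.
    pattern: list
    threshold: float = 0.75
    def matches(self, scene):
        pixels = scene["pixels"]
        same = sum(a == b for a, b in zip(self.pattern, pixels))
        return same / len(self.pattern) >= self.threshold

@dataclass
class Horizon:
    # A "horizon" as a network of schemas of either kind.
    schemas: list = field(default_factory=list)
    def coherent_with(self, scene):
        return [s for s in self.schemas if s.matches(scene)]

horizon = Horizon([
    Hypothesis("the object is graspable", lambda s: s["size_cm"] < 10),
    Template(pattern=[1, 1, 0, 0, 1]),
])
scene = {"size_cm": 8, "pixels": [1, 1, 0, 1, 1]}
print([type(s).__name__ for s in horizon.coherent_with(scene)])
```

The point of the sketch is only structural: hypotheses answer true or false, templates answer by degree of fit, and both can sit in one network that is consulted against a scene.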

Garrett's complaint seems to be that even if Arbib's project were successful--even if we were successful in precising our terminology by means of naturalistic explications--the essential aspects of conscious human intelligence would not be understood. There is an "hermeneutical circle" intrinsic to human nature, a circle which escapes systematic analysis. Any artificial intelligence would have to exhibit exactly the same phenomenon in order to approach the level of human personhood. Something still eludes the AI description: Garrett argues further that,

phenomenology reveals that an operative prejudgment [in the human], when challenged, moves from operative hiddenness to the center of attention. The meaning of this ... is not adequately revealed by descriptions of the process of schema revision that could occur indifferently in minds or computing machines. Something essential is omitted. (p. 254)

This omission is at the heart of Garrett's critique. Notwithstanding the sympathy with which Arbib approaches the task of accounting for human conscious experience as we find it, an important terminological difference between 'horizon' and 'schema' belies a critical deficiency in his theoretical stance. Garrett observes, for example, that the phenomenological term 'horizon'

Page 267: Perspectives on Mind

Commentary 259

resonates with connotations of "field" and "open space" while the former ['schema'] does not. The difference is not accidental: It is precisely such an "open space" that every genuine form of materialism wishes to deny. (p. 254)

Nor does 'network of schemas' suffice: a network, he suggests, would just as certainly lack this "open space" as does an individual schema. But what is this open space? Garrett responds initially that it is not "the objective space-time matrix" posited by naturalism.

Rather, it is the condition for the possibility of every human reality-interpretation, including the naturalist one .... Secondly, it is a phenomenon most easily approached by indirection. (p. 254)

Garrett's resistance to naturalism appears to be based on both ontological and methodological grounds. If reality is understood to be the "basis" from which experience is constituted, then the first thing Garrett is unwilling to accept is that an objective space-time matrix is an appropriate ontology for explicating individual subjective consciousness. But that's not all: Garrett goes on to recommend methodological "indirection," and this he does in a manner suggesting that no direct approach can hope to succeed. He reminds us that:

Philosophers have anguished over the inability to give a "scientific" description of this field or open space. Scientific descriptions are invariably formal ... But the open space is no such formal thing. When we try to describe the space, we invariably produce a formal description of the limits and orientation of the field under some specific circumstance or other, not the "fieldness" of the field itself. (p. 254)

The ontological point is well-taken. From the individual's point of view, the particular set of things taken as constitutive of reality, as well as the particular constituting processes one presupposes, may or may not correspond to the "scientific paradigm." Whether it does or not, its functioning is almost certainly massively implicit as compared with the case for scientific rationality which is consciously as explicit as possible. Yet, neither of these points militates against a fruitful employment of methodological formalism in philosophy of mind. We can concede Garrett's claim that a direct, or formal, approach to ontology is inadequate, while still holding out that a formal (direct) methodology is not only adequate, but preferable for phenomenological analysis.

Page 268: Perspectives on Mind

260 Chapter Three

3.2 Background

On the one hand, a naturalism of some sort is philosophically attractive, for it falls in line with the desire "not to multiply entities" endorsed by the principle of Occam's Razor. Nevertheless, when we seek to understand, we cannot arbitrarily exclude significant regions of our experience--we must "save the phenomena." Tuedio, Arbib, and Garrett have each, in their own ways, recognized the "open" and "indefinitely extendable" character of our interaction with the world. This theme is highlighted as we turn our attention from mentality in general to the specific phenomenon of natural language understanding.

Arbib has expressed his suspicion that any adequate model of language use and understanding would involve a very large stock of interrelated psychological schemas which remain continuously open to the impact of external "social" schemas, and thus to indefinite change. Garrett has further insisted that in all the major areas of human cognitive activity there must be recognized an open-endedness which not only allows for, but embellishes, the dynamic role of "horizon" and "pre-theoretic" judgment in the phenomenology of individual minds and their interaction with what they take to be the objective world.

In the next paper, Christopher A. Fields undertakes an exploration of the possible means for implementing this insight in relation to the specific problem of determining the resources necessary for understanding natural language. Meeting squarely the criticism raised by a range of analytic and phenomenological writers to the effect that no artificial system relying on narrowly-construed grammars, with or without limited prescribed "world knowledge" data bases, could successfully implement natural language understanding, Fields presents a strategy which he argues has promise. Specifically, he proposes that we interpret perception as a search routine that accesses, as its data base, the actual world of our immediate experience.

Fields tries to show that such an approach is the only seriously credible one for providing the necessary pragmatic and semantic information needed for genuine natural language understanding. He also suggests that this approach is not only within reach, but is consistent with the constraints on understanding already argued by numerous writers as necessary. Chief among these is the "formality condition" often referred to as "methodological solipsism."

Page 269: Perspectives on Mind

CHRISTOPHER A. FIELDS

Background Knowledge and Natural Language Understanding

It could be argued that cognitive science was founded by Kant, who sought the necessary conditions that would have to be satisfied by any entity that perceived the world as he did (Flanagan, 1984). Kant rejected the view, championed by 20th-century behaviorism, that perception could be understood simply in terms of the impact of environment, of light, sound, pressure, etc., on the perceiver. He argued instead that perception involved active selection and structuring of such stimuli by the cognitive apparatus of the perceiver. He then set out to discover how perceivers selected and structured these environmental impacts [1]. The components of Kant's model are familiar. He required the mind to impose a Euclidean space-time structure, as well as the concept of object and the "categories" of quantity, quality, modality and relation on sensation. Apperception (seeing-that) resulted from this structuring of sensation by the perceiver.

Humans do not, however, merely see a world made up of related things. We see, at least most of the time, meaningful things related in meaningful ways, and embedded in a meaningful world. Things, relations, and embeddings are meaningful, on most analyses, because they are represented as meaningful by cognitive agents [2]. Husserl attempted to develop a theory to account for this, i.e., to explain how the mind could represent things as meaningful in propositional attitudes such as beliefs and desires. He argued that the mind could represent only if it stored information analogous to Fregean "senses" that would allow it to pick out the individuals to be represented and their properties. Husserl located this stored information in representation-schemes or noemata. The job of the noema corresponding to a type of object, e.g. a house, was to specify, by a set of hierarchically-organized rules, the characteristics that houses had in general, and the characteristics that could be used in particular circumstances to pick out particular houses. Thus, the noema supplied the meaning to 'house' and to perceptions of things recognized as houses, although not to houses perceived as something else, or not perceived as anything, or to non- or mis-understood tokens of 'house'. [3]

Husserl assumed that the vast information base which apparently informs human representing resides in the noemata in propositional, or sentential form. A similar assumption that the information necessary for human perceptual representation, whether in vision or in language understanding, resides in discrete, explicit sentential data bases with contents similar to those of propositional attitudes has guided much of contemporary cognitive science and artificial intelligence research (e.g., Fodor, 1980).

261 H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 261-274. © 1988 by D. Reidel Publishing Company.


262 Christopher A. Fields

Philosophers such as Heidegger (1962), and more recently Haugeland (1979), Dreyfus (1979), and Searle (1978, 1983), have argued that while some of the information required to construct representations may be stored in explicit, sentential form, it is impossible, even in principle, for all of it to be. They argue that representation is only possible against a holistic, non-represented background of conventions, practices, tools, etc. The purpose of the present paper is to examine this claim, and the prospects of cognitive science for dealing with it.

1. The Problem of Background Knowledge

To understand the ability of humans to use natural language, and to reproduce at least some of that ability with computer programs, are major goals of cognitive science. Natural language brings Husserl's problem immediately to the fore: what information must a reader or hearer use to understand a sentence of a language such as English, even a relatively simple sentence such as "the cat is on the mat"? The computer provides an ideal tool for testing proposed theories about this. If a computer cannot understand, by some reasonable criterion, a sentence such as the above example when it is provided with the information and methods proposed by a particular theory, then that theory is inadequate, or just false.

I will not say anything about what understanding amounts to in general, or about what conditions a system, natural or artificial, that appeared to understand would have to meet in order to really understand. This has been a focus of vigorous debate (see Winograd, 1972; Fodor, 1975; Schank and Abelson, 1977; Haugeland, 1979; Searle, 1980; and Cummins, 1983). I will assume that artificial systems could be "understanders" if they had the right knowledge and were placed in the right context. Denying this assumption amounts to denying that understanding can be given a supra-neuronal but sub-cultural explanation, and perhaps even denying that it can be explained at all. Such a conclusion is much too pessimistic until the alternative has been given fair chance. I will therefore address only the question of what information an artificial system would need, or what information we presumably have, that makes understanding possible.

The assumption that natural language understanding can be instantiated by computers places a strong constraint, the formality condition, on cognitive science (Fodor, 1980). The formality condition is the assumption that mental processes have access to (and, hence, can use) only formal, or syntactic properties of mental representations. Dreyfus' (1982) analysis suggests that Husserl may have accepted a similar formality constraint. The assumption of the formality condition is not implausible in the case of human beings. If mental representations are instantiated as neural states, and mental processes are instantiated as neural processes, then the formality condition requires that neural processes take as "input" only the shapes, i.e., the space-time configurations, of neural states. Given that neural processes are physio-chemical processes, it is completely implausible that they should depend on anything else. The formality condition has nonetheless been attacked (e.g., by Searle, 1980).

One approach to the problem of specifying the information necessary to understand a language is to provide a formalized semantics for the language (see, e.g., Davidson, 1967). A formal semantics would presumably assign a referent to each of the referring expressions of the language, and a logical role such as a connector or quantifier to everything else. It might use something like speech-act theory to deal with non-declarative sentence forms (Searle, 1969), and a possible-world approach to deal with modalities (Montague, 1974). Meeting the formality condition on this approach would require that such a semantics relate each term not to a real-world referent, but rather to a formally specifiable representation, such as a formal specification of the satisfaction condition for the term. Such representations are intentional in the same way that the contents of a propositional attitude are intentional. One way to represent a satisfaction condition is by a set of rules that allow the referent of the term in question to be picked out, e.g., from a visual image. Such rules were apparently, for Husserl, what constituted the noema for the term.

The only semantics consistent with the formality condition are, therefore, semantics that assume a finite vocabulary, and supply a finite satisfaction condition, or other representation, for each term [4]. Given such a proposed semantics, one can ask whether the information contained in the satisfaction conditions, or their equivalents, is sufficient for understanding. As might be expected, whether a semantics is sufficient even for a reasonable simulation of understanding is strongly dependent on the domain of discourse. Consider the term 'house'. A large fraction of English discourse involving 'house' can be understood (e.g., by a small child) if the satisfaction condition for 'house' is something like 'a building where people live'. If, however, the domain of discourse is expanded to include, for example, Hollywood house-facades or dollhouses, this satisfaction condition is insufficient for understanding, or even for imitating understanding. It also fails if certain distinctions, such as those between houses and apartments, are made in the domain of discourse.
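Fields's point about 'house' can be made concrete with a toy example. The predicate and the feature dictionaries below are invented for illustration; the sketch shows only how a fixed satisfaction condition that suffices in a narrow domain misclassifies once facades and apartments enter the domain of discourse.

```python
def satisfies_house(x):
    # The child's satisfaction condition: 'a building where people live'.
    return bool(x.get("building") and x.get("people_live_in"))

narrow_domain = [
    {"name": "suburban house", "building": True, "people_live_in": True},
    {"name": "office tower", "building": True, "people_live_in": False},
]
# In the narrow domain the condition works: it picks out the house
# and excludes the office tower.
assert [x["name"] for x in narrow_domain if satisfies_house(x)] == ["suburban house"]

expanded_domain = narrow_domain + [
    # A Hollywood facade is not a building people live in, yet discourse
    # about it uses 'house'; the condition wrongly excludes it.
    {"name": "Hollywood house-facade", "building": False, "people_live_in": False},
    # An apartment satisfies the condition, yet speakers distinguish
    # houses from apartments; the condition wrongly includes it.
    {"name": "apartment", "building": True, "people_live_in": True},
]
picked = [x["name"] for x in expanded_domain if satisfies_house(x)]
print(picked)  # ['suburban house', 'apartment']
```

No amount of tinkering with a single finite condition removes this pattern; each expansion of the domain can generate new failure cases, which is the force of the insufficiency claim above.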

Even if the satisfaction conditions that the system "knows" are extended to cover as many cases as the system is likely to encounter, knowledge of satisfaction conditions is insufficient for what is generally thought of as understanding. In order to understand a sentence, a system must be able to use the information conveyed by the sentence. Systems are typically tested for understanding by being required to make correct inferences based on linguistic input; the "reading comprehension" sections of standardized achievement tests provide an example. Such inferences are not, at least normally, about the semantics of the input; they are rather

Page 272: Perspectives on Mind

264 Christopher A. Fields

about the situation described by the input. Making the required inferences therefore requires a data base of non-semantic knowledge about the domain of discourse. If the domain of discourse is unlimited, then the understander must have an unlimited information base in order to make correct inferences from sentences describing the domain.

Thus, a common response to the problem of specifying the information base needed for language understanding is to limit the domain of discourse. This approach has been used by AI researchers, with varying results (see, e.g., Schank and Abelson, 1977). Such studies typically start with specification of a "microworld," i.e., a limited context in which a fragment of English can be investigated. As much information about the microworld as is feasible and theoretically relevant is then placed in a data base, or set of data bases, that serve as input to the language understanding program. Some of this data is semantic, i.e., it encodes satisfaction conditions for terms in the input. Most of the data is, however, more general knowledge, encoded in structures such as "scripts" or "frames", that allow inferences to be made about the situation described in the linguistic fragment being processed. A script for understanding conversations about houses, for example, might contain such information as that houses are larger than people, that they are subdivided into rooms, each of which is larger than a person, that they typically remain in one place, and so on. The program uses these data in an attempt to understand, i.e., to extract useful information from, sentences it is given as input. Schank and Abelson (1977) present examples of programs that use fairly simple semantics together with scripts to answer questions about everyday situations given as linguistic input. The ability to answer such questions is taken as evidence of rudimentary understanding of the narratives in question. More recent research along these lines is reviewed by Graesser and Clark (1984).
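The script idea can be sketched minimally as follows. The facts, structures, and function names are hypothetical illustrations of how script knowledge, stored separately from the semantics proper, licenses answers that the input narrative alone does not:

```python
# Sketch of a "script": non-semantic general knowledge about a microworld,
# stored as simple facts and consulted to answer questions about a narrative.
# All structures and names here are hypothetical illustrations.

house_script = {
    "houses are larger than people": True,
    "houses are subdivided into rooms": True,
    "houses typically remain in one place": True,
}

def answer(question, narrative, script):
    """Answer a yes/no question using the narrative plus script knowledge."""
    if question in script:        # inference licensed by background knowledge
        return "yes" if script[question] else "no"
    if question in narrative:     # fact stated explicitly in the input
        return "yes"
    return "unknown"

narrative = {"John entered the house"}
print(answer("houses are subdivided into rooms", narrative, house_script))  # yes
print(answer("John entered the house", narrative, house_script))            # yes
print(answer("the house was red", narrative, house_script))                 # unknown
```

The "unknown" case illustrates the limitation discussed above: any question falling outside both the narrative and the finite script is simply unanswerable for the system.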

The most successful systems of this type are so-called "expert systems." These use data bases containing knowledge used by human experts to solve problems in relatively small and well-circumscribed domains such as medical diagnosis. In order to be useful to human problem solvers, such systems communicate with their users (at least ideally) in fragments of natural language that describe the problem domain of interest. Such domains are typically described not only by relatively small, but also by fairly technical vocabularies. Technical terms have relatively precise definitions, and are much less open to variable interpretation than most of the terms of ordinary non-technical English. The knowledge required to understand technical terms is also, generally, confined to only one area of discourse. As the language encountered by an expert system becomes more technical and the domain of discourse becomes smaller, the performance of the system improves (see Barr and Feigenbaum, 1981, vol. 2).

Is it possible, however, to design an expert system for English, i.e.,

Background Knowledge and Natural Language Understanding 265

a system that uses whatever rules and knowledge an average speaker of English uses to understand sentences in a relatively unrestricted domain of non-technical discourse? To see some of the difficulty confronting such a project, consider the sentence, 'she took my tip.' What does this sentence mean? Assuming the sentence is non-metaphorical (see Searle, 1979a), it still has at least two meanings, depending on whether 'tip' refers to a payment for services or a piece of information. Assume the sentence is spoken by a customer in a restaurant to a companion. This would ordinarily disambiguate the sentence; a typical English speaker would take 'tip' to be referring to a payment to a waitress ('she'). This interpretation might change, however, if the reader knew that the pair in the restaurant had been talking about a female detective whose attention they were trying to distract. Would it change again if the reader knew that the speaker indicated a waitress by a nod of the head just before he spoke? Did the nod fix a reference, or was it merely a ploy to confuse an unwanted listener, or to cause his companion to turn his head and better hear a whisper? Clearly, more information about the situation is needed if any of these latter questions is to be answered.

This example is not unusual. As Searle (1978, 1983) points out, more information seems always to be required to completely disambiguate sentences of ordinary English. Competent natural-language users seem to decide on interpretations not when they have exhausted the possibilities, but rather when they decide to stop investigating further possibilities. If an interpretation is challenged by further linguistic, or other, information, it is normally open to revision. There seem to be no a priori limits on the amount, or kinds, of information that may be used by a language user in some circumstances to understand a sentence.

Is all of the knowledge used in understanding such sentences as 'she took my tip' stored by English speakers in the representations, whatever they are, associated with 'she took my tip' by the semantics that they instantiate? Most of this information is not what is ordinarily thought of as semantic. Surely every token of 'she took my tip' does not call up all of the information used to disambiguate such a sentence, let alone all of the information that could possibly be relevant to its interpretation in any circumstance. Some of the information necessary, even for the semantic task of reference fixing, must therefore be non-semantic.

Is the non-semantic general knowledge employed in understanding 'she took my tip' stored in explicit, sentential data bases before it is consciously "re-presented" in the attempt to disambiguate the sentence? If so, how is it accessed? Again, every token of 'she took my tip' does not bring to consciousness, even piecemeal, all of the information that could possibly be relevant to its understanding in any circumstance. Whatever information is retrieved must be guided in some way by the understander's representation of the discourse situation. What sort of information is
used to construct this representation, and how is it accessed? Language understanding models that limit the difficulty of the project by limiting the range of possible circumstances avoid these problems. Models of language understanding in general cannot.

Heidegger and others argue, using examples like those considered above, that not all of the information employed by average natural language users to understand simple, everyday sentences can be stored in explicit, sentential data bases of any sort. They argue that understanding virtually any sentence in an open, i.e., not artificially constrained, domain of discourse can, at least in principle, depend on an indefinite amount of background knowledge. Moreover, construction of context representations that determine what knowledge is relevant in given circumstances also appears to depend on an indefinite background. In the story about the tip, for example, is it relevant that the two men are wearing white ties? Why (not)? The answer clearly depends in part on context construction, and contributes to further refinement of the context description. Heidegger (1962) and Searle (1983) emphasize that much of this background is unrepresented "knowledge-how" of social practices, bodily abilities, use of tools, etc.

One response to this argument is that it applies not to language understanding proper, i.e., to understanding sentences, but rather only to the pragmatic understanding of speech acts. While this may be true of many formulations of the argument, that of Searle (1978), at least, is meant to apply not only to understanding sentences, but also to understanding individual words. Even ignoring this, the "it's only pragmatics" response will not help AI. Any AI language understander worth its salt will have to be able to understand speech acts; speech acts, not "bare" sentences, are the components of communicative English. AI must therefore deal with the arguments of Heidegger and others who call attention to the critical role of background knowledge in natural language understanding.

Call any knowledge that is not represented in explicit form, either in explicit satisfaction conditions or in a data base such as a frame, inexplicit knowledge. The arguments from examples noted by Heidegger and others then purport to show that human linguistic abilities cannot be understood without invoking inexplicit background knowledge. This argument at least indirectly attacks the cognitivist framework in general, and the project of constructing AI systems that understand natural languages in particular.

2. Prospects for Dealing with the Inexplicit Background

There are two tactics that a cognitive theory of natural language understanding can take when faced with the problem of the putatively inexplicit background. The first is to refute the arguments of Heidegger, et al., by showing that humans, and probably computers as well, can store all of the knowledge they need in explicit form, and can access it with
sufficient speed and skill to exhibit the performance that they in fact display. The second tactic is to accept the arguments of these critics as pointing out an inadequacy in the standard view, but to show that natural language understanding can be explicated in terms of represented contents anyway. There would be, of course, a third tactic, viz. to accept the critical arguments at face value and conclude that language understanding cannot be understood except in terms of vague notions of background "knowledge" and praxis. This latter, however, amounts to an admission of defeat for cognitivism, and I shall not consider it further here.

3. The First Tactic: Explicit Storage

If a device D explicitly represents a sentence S, then S must be literally written out, i.e., tokened by the physical structure of D. If humans, for example, explicitly represent sentences of English, then those sentences must be tokened by the brain, presumably by neuron firing patterns. Explaining the behavior of D by appeal to its explicit representations requires, at least in the long run, an analysis of D's causal structure that will map each represented sentence to a corresponding causal state. Dennett (1975) has pointed out that not all of the information that we would need to explain the linguistic performance of any system can be explicitly represented in this sense. Even the simplest computer programs cannot be described as mere collections of data; even if all of their representations are procedural (see McDermott, 1981), programs need a logical structure, or architecture, to hold them together. If a VAX is running a program, even if the program is written in machine code, some of the information needed to explain the behavior of the VAX must be obtained not from the content or logical structure of the program, but from the structure of the VAX itself. [5] If the brain is running a language understanding program, some of the information needed to explain the brain's, or the agent's, ensuing behavior must be obtained not from the content of the program, but from the structure of the brain.

Explanations of behavior in terms of programs are, therefore, only explanations relative to certain antecedently known, or presupposed, facts about the devices that instantiate the programs. Moreover, what one takes to be the fixed, architectural facts with respect to which algorithmic explanations are proposed itself depends on the explanatory task at hand (Cummins, 1977, 1983; Pylyshyn, 1984). These architectural facts are true of the device instantiating the program, but are generally not represented (i.e., tokened), and need not be represented, by the device (Cummins, 1982).

If these architectural facts are what Heidegger, et al., mean by the "unrepresented background", then their quarrel is not with current cognitive science as such, but with a naive, albeit influential, version of the representational theory of mind. These critics, however, apparently mean
that not only architectural facts, but also facts about the external world, are among those not explicitly represented. Architectural considerations can account for the range of possible bodily movements, and the range of possible speech-encoding sounds, and perhaps even the range of possible grammars in humans, but cannot account for such things as social conventions or the availability of various tools. The debate therefore concerns not whether some information or other is not represented, but what specific information is not represented.

The proponent of the first tactic can simply give the Heideggerian his point that some information, i.e., architectural information, is not represented. In response to scepticism about the rest of the information that appears to be necessary for language understanding, one can argue that, as computer design is continually improving, all one has to do is wait until machines with memories large enough and fast enough are developed. This response is not simply wishful thinking. The amount of information any actual human being uses in the premises of inferences made in the service of language understanding in a finite lifetime is finite; therefore, a big enough computer could, at least in principle, use the same data base as an actual human. This tactic effectively shifts the issue from explaining, and reproducing, a presumed human competence to reproducing actual human performance. The problem of achieving human performance then becomes the problem of designing a search procedure, with an attendant large, but finite, store of heuristics for guiding searches, that is fast enough and smart enough to find the right information at the right time. The theory of search procedures is an evolving discipline (see Barr and Feigenbaum, 1981, vol. 1), so it is at least plausible to assume that progress in this direction will occur.

But, the wait-and-see response at best shifts the burden of proof. It does not actually counter the arguments of Heidegger, et al., which are explicitly concerned with language-understanding competence. A second, less presumptuous tactic is to concentrate on a given linguistic episode, and argue that the amount of information a human actually uses in such an episode is not only finite, but in fact fairly small. Even as expert as we are, most of us occasionally "miss one," i.e., we do not understand some passage or remark. Why? The "background" that would have allowed understanding of the remark in question is presumably global, i.e., it is the same background that allows understanding of any, or at least most, other linguistic tokens. Heideggerians suppose that it is global in this sense. Competent speakers therefore have, in some form, the background that would have allowed them to understand remarks that they miss. Our non- or misunderstandings must, therefore, be due to an accessing problem: we fail to properly represent the facts in our own background that would have allowed understanding. These facts are thus, at that moment, not available for use as premises in inferences. This conjecture is made plausible by the fact
that the addition of a few words, or a gesture, to the input in the situation often suffices to produce understanding, presumably by giving us either a representation of the relevant information or a clue that allows us to find the relevant information.

If this is correct, then appeal to the background, even if it exists, can at best explain (or maybe only label) where the information we use to understand language comes from. It cannot explain the understanding itself, since that requires finding and representing the information in question. Hence, the problem of representing the required information still remains. It suggests, moreover, that in each linguistic episode we use a fairly small number of facts to generate and support a hypothesis about what the input means. If the hypothesis is not shown to be wrong in the ensuing conversation, we use it as part of our basis for understanding the remainder of the conversation. The frame-script methodology discussed by Schank and Abelson (1977) is at least a plausible approach to the problem of representing the information actually available to a typical language user in a typical circumstance. What information people in fact use when understanding particular sentences in particular circumstances, and where they obtain it, are difficult empirical questions. Presumably, when we have answered these questions by performing the necessary experiments, AI systems will be built that match human performance.

However, two kinds of doubts plague this answer. First, it may turn out that we use (as opposed to merely have in some background) so much information on a continuous basis that the relevant regions of the brain do not have enough degrees of freedom to hold it all explicitly, or that, for some other reason, the brain does not represent all of the required data explicitly. If this is the case, then eventual computer performance based on the assumption that all data is explicitly represented, even if up to human standards, will not explain human performance, i.e., the computer model will not explain how human cognitive architecture, using inexplicit knowledge, manages to perform the task of understanding language (Pylyshyn, 1984). The second doubt is that explicit representation may turn out to be a straightforward, but very inefficient, way for either humans or computers to cope with the data bases needed for language understanding. Both doubts suggest that, even if the first tactic is possible in principle, alternatives ought to be investigated.

4. Digression: Perception

Humans neither learn nor use languages in a perceptual vacuum. The other perceptual modalities continuously inform speakers and hearers in most linguistic situations. Such non-linguistic perceptual input is clearly required for reference-fixing in much of human language use; it also functions in many cases to fix, or at least constrain, the context in
which language use takes place. Without these functions of non-linguistic perceptual input, terms could not carry the affective information content that they do, and language users would not be able to communicate nearly as efficiently or as effectively as they do. Given the practical importance of non-linguistic perceptual input to human language understanding, it is at least plausible to suppose that such input is essential in some, or even most cases. If this is true, non-linguistic input will have to be modelled by AI systems if such systems are to understand natural language.

This possibility raises a serious problem. If non-linguistic perceptual input serves to provide the perceiver with explicitly represented beliefs, it cannot matter how those beliefs were produced. This follows straightforwardly from the formality condition. Perceptual beliefs could simply be encoded in a "perceptions" data base accessed by the language understander in the same way a script or frame was, and they would serve the same purposes as they would have had they been produced by eyes, ears, etc. Hence, there is little need to single them out as perceptions, as opposed to any other type of data.

This analysis breaks down, however, if the language user must deal with several different perceptual situations. Perceptual beliefs could still be stored in a data base constructed by a programmer, but the way in which the data base was accessed would change. The data base would have to be accessed so as to provide the language understander with different "perceptions" at different times. The time at which a perceptual belief was produced or sent to the language understander, relative to the arrival time of a linguistic input, such as a token of 'that', would be instrumental in determining if and how the linguistic token was understood. 'That' with no accompanying perception, if understood at all, would likely refer back to some subject discussed in the previous situation. 'That' accompanying a perception would be treated quite differently. Perceptions thus constitute a data base with either time-varying content or time-varying access. The time a linguistic input is processed determines what perceptual input will be present to affect its processing, and therefore determines, in part, how the linguistic input will be processed. Time-varying perceptions thus form a data base radically unlike "general knowledge".
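A minimal sketch of such a time-varying data base (all names hypothetical) shows how the presence or absence of an accompanying percept changes how a token of 'that' is resolved:

```python
# Sketch: a percept store with time-varying access. Whether a token of 'that'
# is read demonstratively depends on what percept, if any, accompanies it.
# All names here are hypothetical illustrations.

percepts = {}                          # time -> perceived object

def perceive(t, obj):
    """Record the object perceived at time t."""
    percepts[t] = obj

def interpret_that(t, prior_topic):
    """Resolve a token of 'that' heard at time t."""
    if t in percepts:                  # an accompanying percept fixes reference
        return percepts[t]
    return prior_topic                 # otherwise, refer back to the discourse

perceive(2, "the waitress")
print(interpret_that(2, "the detective"))  # 'the waitress' (demonstrative use)
print(interpret_that(5, "the detective"))  # 'the detective' (anaphoric use)
```

The same linguistic token is processed differently depending solely on when it arrives relative to the percept store's contents, which is the time-varying access property described above.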

Perception can alternatively be thought of as a time-varying search procedure. The role of perception in language understanding is to provide information relevant to the processing of linguistic input in real time. This information could come from an arbitrary data base; it could, for example, come from the general knowledge base of the system. Suppose information is encoded in the general knowledge base in such a way that all information has an equal a priori probability of being accessed. This would be a very cumbersome data base for a language understander, since it would have to sift through the entire thing whenever it needed information. But, then, perception could function in such a system by increasing the
access probability of particular items in the data base (i.e. those "perceived") relative to the rest. These items would then preferentially inform the processing of linguistic input. The problem of searching the entire data base for information relevant to the processing of a particular linguistic input would be reduced to the problem of searching a much smaller data base, namely, the "percept", for its relevance to the linguistic input.
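This weighting mechanism can be sketched as follows, with perception boosting the access weights of items in an otherwise uniform general-knowledge base. All names, facts, and the boost factor are hypothetical illustrations:

```python
# Sketch: perception as a procedure that raises the access probability of
# "perceived" items in an otherwise uniform general-knowledge base, so that
# search for relevant facts concentrates on the current percept.
# All names here are hypothetical illustrations.

knowledge_base = {                      # item -> access weight
    "waitresses accept tips": 1.0,
    "detectives gather tips": 1.0,
    "houses have rooms": 1.0,
}

def perceive(items, boost=10.0):
    """Raise the access weight of items corresponding to the current percept."""
    for item in items:
        if item in knowledge_base:
            knowledge_base[item] *= boost

def retrieve(keyword):
    """Return matching items, most accessible first."""
    matches = [k for k in knowledge_base if keyword in k]
    return sorted(matches, key=lambda k: knowledge_base[k], reverse=True)

perceive(["waitresses accept tips"])    # the speaker sees a waitress
print(retrieve("tips")[0])              # 'waitresses accept tips' ranks first
```

Search over the whole base is thereby reduced to a search ordered by current perceptual salience, which is the reduction described in the text.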

Allowing a cognitive system to perceive, in real time, what much of its linguistic input is about thus limits the problem of locating the information relevant to understanding particular linguistic tokens. It does so, moreover, without violating the formality condition. For the purposes of the argument at hand, it is important only that the search problem is somehow limited by some represented information or other; it does not matter whether the information is correct, or even whether it had an environmental origin.

Limiting search requires information. An occurring representation of a scene contains only the information that it encodes about the environment, real or fictitious, of the device in which it occurs. It carries to the language understander, however, not only this explicit information, but also the unrepresented information that it is the occurring scene. This unrepresented information is crucial; it, not the represented information, tells the language understander where to look for information about references, causal connections, etc. Since this information is not explicitly represented by the device, it will be called "implicit" information.

5. The Second Tactic: Implicit Information

It has been suggested that real-time perception might provide a solution to some of the problems faced by designers of artificial language understanding systems (see, e.g., Searle's discussion of the "robot reply", 1980, and Cummins' critique, 1983). It is clear from the above discussion that including real-time perception would provide information that is relevant to understanding linguistic input. It is also clear that some of the most significant information that perception provides is implicit. Developing a full theory of how perceptual input contributes to language understanding is well beyond the scope of this paper. A preliminary outline can be sketched, however. If perception is viewed as a search process that locates information in a data base, then the data base that it searches is at least ordinarily the perceptible world. Perception extracts information from this data base, and represents it so that it may be used in inferences, e.g., in those inferences employed in understanding linguistic input.

The perceptible world is an indefinitely large data base. However,
only a very small fraction of it can be searched, i.e., perceived, in any given discourse situation. Understanding the role of perception in language understanding therefore requires investigating linguistic exchanges that are about the immediately perceived environment as the primary examples of language use. A similar suggestion, that such "indexical" sentences are the canonical cases from which the semantics of "eternal" sentences must be derived, has been made by Barwise and Perry (1983). Taken together, the facts that the perceptible world is an indefinitely large data base of information relevant to language understanding, and that perception is a selective search procedure capable of locating the information in that data base which is most probably relevant to a given discourse situation, suggest that the perceptible world simply be identified with the "background" discussed by Heidegger and others. Information in this background is, as they suppose, inexplicit until explicitly represented by the perceptual process.

This suggestion incorporates an unrepresented background into the process of language understanding in a way that is completely consistent with the methodological presuppositions of cognitive science. It does not violate the formality condition, since the cognitive apparatus of the perceiver gains its access to the perceptible world only by means of formally specifiable representations. Moreover, it does not violate methodological solipsism, since the role of the representations produced by the perceptual apparatus is independent, on a case by case basis, of how they were produced. [6] Lastly, it does not violate the assumption that language understanding is to be explained algorithmically, since it merely postulates that the algorithms in question access yet another data base.

6. Conclusion

The arguments of Heidegger, Haugeland, Dreyfus, and Searle for a non-represented background that informs language understanding have appeared to pose a major stumbling block to cognitive science. I have proposed two responses to these arguments. The first suggests, in effect, that Heidegger, et al., have set their sights too high. Cognitive science need only concern itself with explaining the behavior of average language understanders in particular circumstances. The background which they propose has no explanatory force in particular circumstances; it only appears to have explanatory force when the linguistic situation is allowed to expand without bound. Explaining particular instances of language understanding requires appeal not to the background in general, but rather to the particular information, whatever its source, represented and used by the language users in the situation in question.

The second response proposes that this background be identified with the perceptible world. and that perception be regarded as a search routine
that finds and represents the facts used by agents to understand linguistic input. This response is a special case of the first. The perceptible world cannot itself explain language understanding, but its selective representation by perceivers perhaps can. The extent to which the second response can account for the phenomenon pointed out by the critics is largely an experimental question. Answering it requires building AI systems that integrate language understanding and occurrent perception. Winograd's SHRDLU (1972) is a very primitive example of such a system; there is every reason to believe that much better, more nearly human systems will be forthcoming.

* * *

There are perplexing philosophical questions facing those who attempt to "account for" the subjective character of experience or the intentional character of cognitive states. Professor Fields has put some of the key issues in abeyance in his pursuit of an answer to a related question about our ability to understand natural language. He simply assumes computational functionalism is essentially correct. He states at the outset,

I will not say anything about what understanding amounts to in general, or about what conditions a system, natural or artificial, that appeared to understand would have to meet in order to really understand ... I will assume that artificial systems could be "understanders" if they had the right knowledge and were placed in the right context. (Fields: this volume, p. 262; editor's emphasis)

Given this starting point, the task to which he addresses himself is that of specifying the kind, and source, of information needed to effect an understanding of natural language.

The common approach is to delineate a special situational context or "domain of discourse" together with "scripts" or "frames" in order to facilitate inference. Understanding sentences is then taken to be the ability to make those identifications and inferences required to answer questions relevant to a given domain. "Expert systems" are the more advanced programs of this sort. But one must ask: are such systems fruitful models of human language understanding? Noting criticism from analytic and Continental philosophers alike, Fields is persuaded that such models of natural language understanding are inadequate. Our understanding of language, he observes, seems to be grounded in a knowledge source that is indefinitely large, and largely unexplicated. There is in this a recognition of the "horizonal" depth of human understanding.

Fields notes that this issue could be treated as a memory problem: we simply wait for existing memory systems to be enlarged, and for appropriate


274 Chapter Three

accessing algorithms to be developed. On this approach, the "inexplicit knowledge base" upon which we supposedly draw will eventually be rendered explicit and stored in our language understanding system. But Fields elects a second approach: he suggests that the background "knowledge-base" necessary and sufficient for general natural language understanding is precisely the world of our actual experience. He further suggests that perception itself be regarded as a process for accessing this data base in a manner which allows for selective representations specifically useful for comprehending sentences uttered in the context of that same perceptual experience. He refers to this as "non-linguistic perceptual input" and claims that this sort of thing is precisely what "will have to be modelled by AI systems if [they] are to understand natural language." (p. 270)

Interestingly, such an approach does not involve processes which violate the abstract requirements associated with functionalism, namely, the formality condition, methodological solipsism, or the assumption that language understanding is to be explained algorithmically. This data base and its accessing process are not just another (albeit indefinitely large) data acquisition system; for, as Fields points out, this is a system with a data base that has the entirely novel property of possessing either "time-varying content or time-varying access ..." (p. 270) Time-varying perceptions form a data base radically unlike "general knowledge" as usually understood in AI research. Aside from the technical problems associated with trying to implement such a system, Fields' proposal seems to move significantly in the direction of meeting arguments raised by Dreyfus and others regarding the importance of "a non-represented background that informs language understanding" and hence moves toward overcoming "a major stumbling block to cognitive science." (p. 272)

But there are some residual philosophical concerns here. First, one wonders just how Fields anticipates solving the problem of perception itself: it is quite plausible to think that perception requires a knowledge-base of inexplicit and non-computational form or access. If we find that we are unable to explain how perceptual states manage to represent objectively real, or even apparently real states of affairs--if we are unable to explain the intentional character of our perceptual experience, not to mention the "directedness" of those processes by which we cull the ground of our experience for those items that have "importance" for us--then Fields' suggestion would seem to be unrealizable. This difficulty leads right back to the original philosophical issue Fields had hoped to sidestep, namely whether functionalism in any of its forms is sufficiently rich to resolve the open questions regarding the intentionality of mental states. In the following commentary, Norton Nelkin re-examines those considerations set aside by Fields' original assumption in an attempt to show why the issues surrounding authentic understanding cannot be ignored, even for the limited purpose Fields has in mind.


NORTON NELKIN

Internality, Externality, and Intentionality

Jerry Fodor in a recent review of work on intentionality sorts out positions into several categories. [1] My own belief is that there is one categorization of such views that counts far more than any of the others. It is a bifurcation of theories of intentionality into two basic categories. On the one hand, there are those I would call "Internalists." The Internalists are those who believe meaning (in the sense of the German Bedeutung, not Sinn) can be understood by an analysis of some present state of the organism. For instance, what makes some use of the sentence, "The book is on the table" mean some putative state of affairs for some speaker is an internal state of that speaker. Internalists can come in all shades: some might claim that the internal state is a neural state; others, that it is a computational state or set of computational states; still others, that it is a phenomenal state, involving what might be thought of as a mental image. Each of the last two holds the view that the internal state in some way or other represents the world it means, and it is this representation that is its meaning the world. Externalists, on the other hand, hold that there is no internal state of the organism that is its meaning anything. To borrow from Kripke, the Externalists hold that even if God could perceive all the internal states of the organism, God could not know thereby what it was the organism meant or that it meant anything at all. [2] In a strict sense, according to the Externalists, there is no internal state of the organism that is its intentional state. Thinking about intentional states as mental states, in the way having a pain can be thought of as a mental state, is believed by many Externalists to be at the root of the problem. [3] Unless we know the social context, history, and social practices in which these internal states are embedded, we cannot ascribe meaning to the organism's utterance or even regard it as making an utterance.
This limning has a third-person air about it, but Externalists want to hold that not even the organism itself could make such ascriptions to itself solely on the basis of what was occurrently going on in it.

Christopher Fields, in his interesting paper, "The Role of Background Knowledge in Natural Language Understanding," seems to want to defend a sort of Internalist position, the kind that involves intentionality's arising from data structures, computational states, and stored memories. I will call this view "Computational Internalism" (CI). While I have a lot of sympathy with Fields's goal, I feel he has not reached it. His own defense fails because he does not take into account the deeper arguments of the Externalists. Sometimes Fields talks as if he won't consider these arguments at all because they have such bleak results. [4] Yet when it

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 275-282. © 1988 by D. Reidel Publishing Company.


gets right down to it, Fields does seem to want to take on the Externalists and show that their arguments can be answered by CI in one of two ways, neither reply supposedly having anything obviously a priori mistaken about it. But Fields's argument proceeds in a peculiar fashion. It sometimes seems to run thus: if CI is right about the intentional states of human beings, then we could build a computer which had intentional states. There would be no a priori difficulty in doing so. That claim sounds pretty plausible, and I know few who would quarrel with it--not even people like Searle and Heidegger whom he takes himself to be arguing against. The Externalists' arguments are not aimed against this conditional claim but rather against the truth of its antecedent. I want to focus on some of the Externalists' concerns. The reader may then look to Fields's paper to see if he has successfully dealt with them. At the end, I'll state, without argument, what sorts of replies I think the Internalist can--and probably should--make. Since Fields defends CI, I'll try to keep my remarks germane to that form of Internalism.

Both Computational Internalists (CIs) and Externalists would agree that any attempt to reduce mental states, especially the cognitive states, to types of brain states, when such states are considered only in terms of their physics or biology, is bound to fail. For instance, there are creatures whose perceptual states seem to mean the color red but whose visual system is very different from our own. [5] Their neural processes are apparently quite different from ours, not at all of the same type. Even with human beings such reductionism seems highly unlikely. Most right handers have as their language "centers" a set of neurons localized narrowly in two areas (Broca's and Wernicke's) on the left side of their brains. Left handers, however, often have a quite different brain organization, many of them with their language "centers" apparently not localized at all. [6] Thus, the idea that cognitive states are type identical to particular types of brain processes seems farfetched. Even if we move to neural structure, viewing such structures only under their physical descriptions, the reduction seems hopeless, since the neural structures of the fish's visual system seem pretty clearly different from our own, and the left hander's neural-linguistic system seems pretty clearly to be structurally different from the average right-hander's. Of course, "structure" is a fairly slippery term, and perhaps some notion of physical structure will one day be found such that all appropriate neural processes and only such processes have it in common; but given the very great differences in neural architecture that can be noted, it is hard to think what such a common structure might be. And if we allow for the possibility that noncarboniferous beings might, in principle, share our perceptions and beliefs, the idea of a common structure or process, definable in terms of physical or biological laws alone, becomes extremely unlikely.
One can't prove that there are no such common structures, but


it's pretty hard to prove that something, so unclearly conceptualized, doesn't exist. Since the best bet is that very different types of neural processes and structures are involved in the same cognitive state--processes and structures which on the physical level seem to have nothing in common which would mark them off as tied to this and only this kind of mental state (whichever one we pick)--such structures provide no explanation for the states unless each type individually can account for the existence of the intentional state. Thus, we might have one neural-structure explanation for why a human perceptual state means red and another for why the fish's does. We might have one neural-structure explanation of a right-hander's meaning the book is on the table and another for the left-hander's, etc. So while no one type of neural structure is necessary for a particular occurrence of an intentional state, some one member of a given set (whose membership boundaries may not be known) will be necessary and the occurrence of any member of the set will be sufficient for the occurrence of that particular intentional state.

But several questions arise at this juncture. For starters, if it is admitted that the members of the set have nothing physically or biologically in common although one of them occurs whenever a particular and appropriate intentional state occurs, it is hard to see how any (let alone each) of these types of neural states can account for, i.e., explain, the occurrence of that intentional state. In a certain minimal sense, perhaps, we could get an explanation in that we could get the generalization, "Whenever one of N1 or N2 or N3 or ... occurs, I1 occurs." But even at this level we can't add "and only whenever" unless we would know the rule of the set. But it has already been admitted that we don't know the rule and that it's unlikely there is one. Things get even worse if we ask how we can know even that this generalization is true. How do we know that N1, say, can't occur while some other intentional state, or even nonintentional state, of the organism is occurring? One might reply that that will be an empirical matter. But this reply illustrates that there is some deeper sense of "explain," other than simply the claim, "Whenever N1, then I1," that none of these neural-structure types can achieve. We are left totally in the dark as to how this neural state is connected to the intentional state. If the neural state is a causal condition, we don't understand how such a causal mechanism could possibly work to do what it does. And it's not merely an empirical doubt here, but a conceptual sort of doubt. It is the same sort of doubt we have about the stronger claim that the neural state is not the cause of, but is identical to, the intentional state. In both cases what puzzles us is how the neural state can in any way account for the content of the intentional states. And whatever else is true of intentional states, they have to have content. That is what makes them intentional states.
If one can't show that N1 can't occur without I1's occurring, except by induction, it's because one


is unable to "read off" from the neural state to the intentional one in a way that would be explanatory of the intentional state's intentionality. [7] This claim is not meant to deny that neural states play an important causal role in cognitive states. It is, instead, to claim that they have no appropriate lawlike connection to these mental states or to each other and so cannot explain the mentality of these mental states.

I've spent a good amount of time talking about an Internalist position I said I wasn't going to focus on, but the criticisms of this position are instructive. The CIs agree with the Externalists about these criticisms; yet, it is these same kinds of criticisms that the Externalists want to bring against CI. But before examining these, let's consider how CI itself arises from these criticisms. If different neural processes accompany the same mental state (omitting for the moment the possibility that the same neural state can accompany different mental states), then we require some explanation of why just these neural processes accompany (play a causal role in establishing) just this mental state. In order for these neural states to be explanatory, there would seem to be required some property they all share in common or at least some lawlike connection among them all. The claim, however, has been that at the physical level there is no such commonality, no such connection. But if we abstract one plane up to a functional level, we can find what they all share in common: each of these structures serves to represent the putative states of affairs relevant to that particular mental state; and, thus, these neural states, which at the physical level find no connection which makes them explanatory, find their connection at this more highly abstract level of the representational state, a kind of functional state.

For instance, the use of timepieces helps explain some instances of human behavior [8]; but timepiece as a concept is not understandable at the level of physics. It is understandable only at the functional level. At such, and only at such, a level do timepieces become at all explanatory of human behavior. But if timepieces are to be useful as explanations, then there must be some way of specifying their function that doesn't beg any questions or empty them of explanatory power, making "timepiece" to be like Moliere's "soporific power." For instance, if what we want explained is why people sometimes look at hard objects worn on their wrists, the specification has got to be more involved than just saying it's whatever causes human beings to look sometimes at hard objects worn on their wrists. If that type of individuation were the best we could do, we might as well stay at the physical level and just admit there are many causes of the same behavior and we have no idea why or how these work. But we can do better than that in the case of timepieces. We can define their function in terms of their design, and their design is made understandable in terms of human conceptions of time, of how human beings divide time into segments, what sorts of segments they find important, etc.--i.e., in terms of human intentions, goals, purposes, beliefs, and so forth. At that level, not at the physical level, we find an explanation of the behavior in question. When the object is thought of as a "timepiece," and not in its mere physical terms, i.e., when it is thought of as a representation of time division, then the behavior becomes understandable in the requisite way.

But now the difficulty for CI should be apparent. If CI is to be an account of intentionality itself (and of all these intentional mental states just mentioned), then there must be some way of specifying the functional state of the neurons other than simply saying it's whatever causes intentional states. Again, if that were the best we could do, we might as well stop at the physical level of many-one, non-lawlike, nonexplanatory causation. The attempted way out of this difficulty is to say that the functional state accounts for intentionality because in these cases the functional states are representational states. But we saw in the case of the timepieces that what made them function as representations had to do with understanding them in terms of their design according to human intentions, goals, purposes, social practices, and so forth. But we can no longer use "design," so explicated, to specify what all the neural-functional states have in common, or to specify their lawlike connection, without begging the question, that is, without explaining intentionality by resorting to intentionality in the explanans. But how, then, are we going to specify the nature of the functional states as representational states without resorting to intentions, purposes, social practices, etc., i.e., without already presupposing intentionality? If we can't, they just seem to add an extra layer of thickness in an already nonexplanatory structure. It seems as if we have to move outside the structures themselves to understand in some independent way what these structures are. They are explainable either in terms of the internal states of the organism (or quasi-organism), or in terms of some combination of such states with external states of affairs, or in terms of external states of affairs only.
If the last be true, the external conditions are carrying all the work of specification and therefore all the explanatory weight for saying what the mental state is and why it is; and we may as well view the rest, the neuronal states and functional states, as a kind of black box. But that is just what the Externalists are claiming. The second position also seems to give the Externalists a lot of room and probably all the concessions they need. So it looks as if the CIs need to defend the first position. Some try to make the specification only by bringing in other parts of the organism's operations. One often hears talk of connecting this state of the organism to its other states of "belief," etc., especially through its "memory." Fields himself does this. But the problem is with the shudder quotes. It seems it is only if we read "belief" and "memory" without the shudder quotes that we can generate the intentionality of this particular state. But, then, how did intentionality get specified in the system in


the first place? That is the question. Sneaking in intentionality somewhere else in the system isn't going to help, unless, as Dennett points out, we can ultimately get rid of the homunculi. But there's no way that it seems possible for the CIs to get rid of the homunculi. [9]

One move that might be made, and Fields also makes it, is to point at programs, data-structures, computational states (different people say different things) of computers and claim that computers can be thought of as having representations. When these programs, or whatever, are considered as among their internal states, we don't have to go outside the system to discover their representational powers. Again, the actual electronics of the machine cannot explain what the machine does, when what it does is the job it is designed by the program to do. However, when its states are thought of as so designed, when they are thought of as representational states, then the computer's doings become understandable. But questions similar to those that reared themselves at the neural level once more arise. For instance, do all such programs have anything in common? In the timepieces case, it is very easy to think of the spring watch, the sundial, and the digital clock as each employing different programs. The same for computers. So if P1, P2, P3, and so forth, can each be thought of as leading to the appropriate representation, then the representation cannot be reducible to any particular type of program. So can we do better than "Whenever P1 or P2 or P3 or ... then I1"? Again, we will get "and only whenever" only if we know the rule of the set. But what is it? What structure, programmed in any way, couldn't be "read off" as a representation of the appropriate sort? What are the rules of representation? Even more, unless all these programs have something in common, then, in that deeper sense of "explain" noted before, how do the members of the set explain I1? However, as in the case of timepieces, maybe each member of the set explains how I1 occurs when it occurs, while no single explanation will account for how all I1's work at once. But how is a single program (or data-structure or computational state) to account for the intentionality of the intentional state?
Programs are purely formal or syntactical states of the machine, and the question is how such a state can account for the content of intentionality. In the case of computers the answer seems readily apparent. Their formal states, like the timepieces', can be understood as having intentionality because they are used by us, we who have intentional states, for our purposes; and their states are definable in terms of our purposes and practices. But that is to understand their intentionality in terms of our intentionality. The question remains how our intentionality is to be understood. It is no more understandable how syntax-formalism can give rise to content, to intentionality, than it is how physics or biology can. The Leibniz-Nagel questions seem as apt here, and surely that is the point of Searle's "Chinese Box" example. [10]

The problem is compounded when one argues that the organism could have


the same formal state but be in different mental states. Computers can be in the very same computational state but be used for different purposes, serve different functions (functions themselves defined in terms of the previously given intentional states of their users). The organism, too, seems as if it could be in the same "computational" state but in different mental states. [11] So it's hard to see how, even if we allow there to be computational states of the organism, these states can by themselves account for the required functional state or be explanatory or be considered as representations, short of bringing in notions like "design," "purpose," "belief," social practices, etc.--i.e., short of begging all the questions. If a set of neurons' being in the right functional state is their being in whatever state accompanies the existence of intentional states (specified externally), then we're right back to the physical level and its inadequacy for explaining intentional states. This criticism runs deep and wide, and it's not at all obvious how an Internalist is to reply to it. I have focused on criticisms that cut mostly against the CIs, but there are others, of a similar nature, that bear on any form of Internalism. Some of these go beyond anything I have said and also require heeding by the Internalist. [12] One cannot just toss them aside.

But, then, why do so many Internalists just seem to do so, much to the frustration of Externalists? I think many Internalists must view the situation concerning intentionality in much the same way as many post-Berkeleyan defenders of a Lockean representational theory viewed Berkeley's criticisms of their view. On the face of it, the Berkeleyan criticisms were devastating, and one would have thought that any self-respecting philosopher would have surrendered the representational view. But most did not. Why not? Because Berkeley's own positive theory seemed itself unable to explain coherently the nature of perception or of the physical world. Since the positive theory built on the criticisms was so unlikely, many must have believed that the criticisms themselves were somehow, despite their apparent plausibility, mistaken and that there was a way to save representationalism after all. In a similar way Internalists would say that attempts at an Externalist theory of intentional states have been dismal failures; and the Externalists still owe us a believable theory, one with explanatory powers, before the Internalist theory is to be abandoned. That's not to say that an Externalist theory won't be forthcoming. But one has, at the moment, as much right to believe Internalism can overcome its own difficulties as to believe that the Externalist can provide a replacement theory. So why not continue working on an Internalist theory, one which can respond to the Externalists' criticisms? I find nothing wrong with this attitude as long as the Internalist takes the Externalists' criticisms very seriously, seriously enough to refute them or deflate them. I think Fields intends to undertake a piece of this task, but I also think he misses the mark at which the


Externalists are aiming their criticisms. To be refuted or deflated, the apparent depth of these criticisms must first be appreciated.

--//--

Nelkin's initial characterization of Fields' work as internalist is somewhat misleading--the most pointed evidence of this is Fields' proposal to take the world itself (or at least one's local part of it) as the data base essential to understanding natural language. Given the general tone of his account, it is plain he intended this unique "data base" to be just the commonsense, external world. But, this does not make Fields an externalist, either. Indeed, his earlier confession that he is assuming "that artificial systems could be understanders if they had the right knowledge and were placed in the right context" (Fields: this volume, p. 262) makes it clear that he sees himself taking a middle ground between internalism and externalism. Nevertheless, it is true that Fields leans toward internalism as the "limit" to which theory ought to tend. This implies that one should work to move the account of meaning "in the direction of" the internalist conception. In the end, Nelkin concedes that such an agenda is not without promise: he says, "one has, at the moment, as much right to believe Internalism can overcome its own difficulties as to believe that the Externalist can provide a replacement theory. So why not continue working on an Internalist theory, one which can respond to the Externalists' criticisms?" (Nelkin: this volume, p. 281) The important point here is Nelkin's observation that one who, like Fields, is working toward an internalist account of meaning must continue to take seriously criticism from the externalist perspective.

In the next commentary, Robert Richardson carries the criticism further; he begins his assessment of Fields' proposal by situating it in the context of the empiricist tradition of "associationist" psychology. He sees this move as critical because he thinks that "computationalism is associationism in modern mechanical garb," and that what Fields is proposing to do is "to save the new associationism from the [charge that it is unable] to reconstitute the object." (pp. 287-288) Richardson's point is that to make such a "save" Fields would have to explain how the machine model overcomes problems that parallel those that challenged the associationists. In his review of associationism, Richardson identifies as central its failure to show how the real content of experience can be reconstructed from sensory impressions using merely associationist primitives such as "similarity" and "contiguity." The parallel problems, then, that have to be solved are those of duplicating behavior to the level of "decision and search procedures ... similar to those employed in human problem-solving," and doing so within a relatively unrestricted domain.


ROBERT C. RICHARDSON

Objects and Fields

" ... man is in the world, and only in the world does he know himself." (M. Merleau-Ponty, 1945, p. xi)

"A psychology," Merleau-Ponty tells us, "is always brought face to face with the problem of the constitution of the world" (1945, p. 60). Perception was classically viewed as "a window on to things"--as revealing the "truth in itself"--and, correspondingly, the subject became transformed into a "psycho-physiological mechanism" (1945, p. 54). The function of the mechanism is gathering information. Merleau-Ponty explains:

Sense experience, thus detached from the effective and motor functions, became the mere reception of a quality, and physiologists thought they could follow, from the point of reception to the nervous centres, the projection of the external world in the living body. The latter, thus transformed, ceased to be my body, the visible expression of a concrete Ego, and became the object among all others. Conversely, the body of another person could not appear to me as encasing another Ego. It was merely a machine ... (1945, p. 55).

Whether inspired by empiricism or its "intellectualist antithesis" in the Cartesian tradition, the classical problem of perception is one of understanding the laws of projections which map the external world into the subject, and understanding the way in which the information given in sense experience is decoded. We are to begin with the "elementary perceptions," the impressions which are the subjective record of environmental stimuli, and analyze them in such a way as to reproduce in us the "original text" which is the world (Merleau-Ponty, 1945, p. 7). This reproduction is mediated in the empiricist tradition by association, and strengthened by the effects of learning; in the rationalist tradition, it is mediated by the synthesis of judgment and explained in terms of a harmony somehow maintained between cognition and the world (cf. Merleau-Ponty, 1945, p. 32). [1]

This classical picture is both persuasive and pervasive. In the empiricist psychology and physiology of the Nineteenth Century, the connection between associationism and mechanism is especially clear, from the work of Alexander Bain and Herbert Spencer on the psychological side to that of Johannes Mueller and Carl Wernicke on the physiological (see Young 1970). In On the Study of Character (1861), Bain wrote:

The SCIENCE OF MIND, properly so called, unfolds the mechanism of our common mental constitutions .... We pay special attention to the

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 283-292. © 1988 by D. Reidel Publishing Company.


distinction between the primitive and the acquired powers, and study with minuteness and care the processes of education and acquisition. We look at the laws whereby sensations are transformed into ideas, and thoughts give rise to other thoughts (pp. 29-30).

Among "primary attributes of intellect" Bain acknowledged a tendency to respond to the similarity between experiences: "In reviving ideas or experiences formerly possessed by us, one link, or medium of restoration, is a likeness of those past states to some one now actually present" (1861, p. 326). The second was a tendency to respond to contiguity in space and time, even the dissimilar. Each of these is taken as a primitive psychological operation, explicable, if at all, only in terms of physiological mechanisms. The physiological counterpart of Bain's associationism may be found in Wernicke's discussion of the mechanisms of learning. He, too, postulates primitive, or primary, psychological operations establishing associations between ideas, and says:

... primary functions alone, can be referred to specific cortical areas. ... The cerebral surface is a mosaic of such primary elements whose properties are determined by their anatomical connections to the body-periphery. All processes which exceed these primary functions (such as the synthesis of various perceptions into single concepts and the more complex functions such as thought and consciousness) are dependent upon the fiber bundles connecting different areas of the cortex (1874, p. 92).

Wernicke dubbed the central effects of sensory stimulation "memory images" and the central effects of bodily movements "motor images." He held that the frontal lobes are specialized for motor functions, and the temporal and occipital lobes for sensory representations. Learning and memory are mediated by the association pathways; that is, pathways forming connections between the specialized loci. The possession of a concept is identified with the establishment of cross-modal associations; that is, connections between sensory images and other sensory images, or between sensory images and motor images. The capacity for learning--for establishing associative links--Wernicke regards as "a unique, inherent feature of the central nervous system" which is grounded in "the decreased resistance observed in frequently-used neural tracts" (Wernicke 1874, pp. 92-93).

Objects and Fields 285

The picture has its roots in Descartes' mechanism. In discussing the functions of the soul, he says "the various perceptions or modes of knowledge present in us may be called its passions, in a general sense, for it is often not our soul which makes them such as they are, and the soul always receives them from the things that are represented by them" (Descartes 1649, Article 17). Descartes includes among the passions some functions integral to memory, and explains some crucial elements of its functioning in terms of simple physiological activity. He writes:

... when the soul wants to remember something, this volition makes the [pineal] gland lean first to one side and then to another, thus driving the spirits towards different regions of the brain until they come upon the one containing traces left by the object we want to remember. These traces consist simply in the fact that the pores of the brain through which the spirits previously made their way owing to the presence of this object have thereby become more apt than the others to be opened in the same way when the spirits again flow towards them. (Descartes 1649, Article 42).

The similarity between the doctrines of Descartes, Wernicke, and Bain is transparent enough, given the emphasis of both the Nineteenth Century associationists and Descartes on the physiological encoding and the view that learning is a result of the increased facilitation of channels or tracts with use. However, it is clear as well that Descartes does not take memory to be an attribute of the body alone, even though recollection is mediated by the body. There is indeed a kind of trace laid down by the "animal spirits" which facilitates recollection, but the recollection itself is guided by the desire to recollect a certain object, and the ultimate product is a thought of the object which one desires to recollect. The flux of the animal spirits leads to an enlarging of the pores of the brain, and "... the spirits enter into these pores more easily when they come upon them, thereby producing in the gland that special movement which represents the same object to the soul, and makes it recognize the object as the one it wanted to remember" (loc. cit.). Just as Wernicke was willing to let the capacity to establish associative links be a "unique inherent feature of the central nervous system", and as Bain was willing to let an awareness of similarity (or resemblance) serve as a primitive associative mechanism, so Descartes assumes without explanation that the flow of "animal spirits" will lead to increased permeability. Yet where Wernicke was content with the facilitation as sufficient for explaining memory, and where Bain allowed no faculties beyond primitive associative mechanisms, Descartes needed to introduce a "desire" to initiate the search. This desire gives direction to the search. [2]

This is the function Merleau-Ponty ascribes to attention within the "intellectualist" explanation of perception. Merleau-Ponty explains: "since I am conscious that through attention I shall come by the truth of the object, the succession of pictures called up by attention is not a haphazard one" (1945, p. 27). The unreduced and unexplained role of attention or desire compromises the mechanism no less than the materialism, but that should not lead us to ignore the common commitments of the two traditions. Even though the attention or desire initiates the search for the memory trace, the conscious act does not modify the stored image of the object. It serves only to draw it out. Just as association depends upon some prior account of resemblance, so too the Cartesian mechanism for recollection is a matter of recovering rather than reconstructing a memory. As Merleau-Ponty puts it in discussing perception: "Since in attention I experience an elucidation of the object, the perceived object must already contain the intelligible structure which it reveals" (loc. cit.).

This doctrine of the independence of the object has been no less attractive to philosophers in the Twentieth Century than it was in the Nineteenth or Seventeenth. In the Logische Untersuchungen (1913), Edmund Husserl defended the view that the intentional content of a judgment or perception provided a set of rules, or what he called "anticipations of experience", which specify the possibilities inherent in an experience. To perceive an object is not merely to be presented with an object, but to grasp the potential for future experiences. These possibilities are given in the concept. [3] In recent discussions, Fred Dretske's construal of a concept as "a type of internal structure" (1981, p. 214) has a similar import. Dretske takes a concept to provide coordination between input and output, controlling behavioral output in light of informational input. Dretske's use of information is not always consistent or clear, but it is at least clear that the information encoded in perception must be available in the environment to be received and encoded.

The mechanistic commitment uniting the empiricist and intellectualist traditions is one to treating memory and perception as simple records of the world, faithfully recorded piece by piece in discrete chunks; correspondingly, recollection becomes the simple recovery of what is latently represented. It is this commitment to the independence of the object in memory and perception that Merleau-Ponty proposes to undermine. To this end, he urges that the resources available in reconstructing the object are insufficient for the task.

In attacking mechanistic psychology, Merleau-Ponty presses that it is unable to explain what features we take to be significant. He points out, e.g., that in the case of phantom limbs, there are a number of attendant phenomena that cry out for explanation, and which defy the mechanistic paradigm. Anesthesia is impotent in relieving the suffering attached to phantom limbs; the vestigial limb is often perceived as being in the position it had at the time of injury; and emotional experiences can incite the experience of a phantom limb in patients who had hitherto lacked it. Just as the phenomenon of hysterical paralysis drove Freud to accept the reality of psychic causes, so Merleau-Ponty sees such phenomena as the phantom limb as requiring a non-mechanistic explanation. He wrote, in applying the moral to the empiricist model: "If we confine ourselves to phenomena, the unity of the thing in perception is not arrived at by association, and as such precedes the delimitations which establish and verify it, and indeed precedes itself" (Merleau-Ponty 1945, p. 17). The "unity of the object" is not derived from associations; the "impressions" are, rather, associated by the "foreshadowing of an immanent order" (loc. cit.).

The basis for the charge is best seen in relation to the associationist tradition. In the face of connections of ideas that are merely extrinsic--as, for example, resemblance and contiguity are--the fundamental challenge raised by Merleau-Ponty is to show that there is an independent notion of similarity or resemblance available to the empiricist on which associations can be founded. What he says is this:

the sensations and images which should be the beginning and end of all knowledge never make their appearance anywhere other than within a horizon of meaning, and the significance of the percept, far from resulting from an association, is in fact presupposed in all association (1945, p. 15).

The problem is not that relations such as contiguity and resemblance do not obtain; the point is, rather, that they depend on a prior awareness of the world and cannot be used in constructing it. The point was recognized (belatedly) by Hume in his attempt to describe our knowledge of the world beginning with impressions and relations of ideas: the ideas of causality and the self were not there to be discovered, and had to be imposed as the effect of mental activity. In Kant, this became the Archimedean point for the "Copernican Revolution." The very idea of an object or of an objective order could not be derived from impressions alone, since even the order and relations of the impressions are dependent upon the awareness of an external object. Fields is right to find the beginnings of Cognitive Science in Kant, and in the central recognition that perception cannot be the product simply of the "impact of the environment" on our sensory organs. Recent work in cognitive psychology has a similar story to tell. There is substantial reason to hold that memory and perception are not simply matters of recollection and reception, but are active processes involving considerable reconstruction and synthesis.

The use of machine models in grappling with the problems of human understanding appears subject to essentially the same problems that confronted the associationist tradition. Within a computationalist theory, mental states are understood as operations applied to internally represented formulae. The latter provide the content of the mental state. This content is not intrinsic, however, but is conferred by the place of a representation in a network. Thus, Reagan can be represented as a node, linked to other nodes, including President, Republican, Actor, Conservative, and others. Computationalism is associationism in modern mechanical garb. [4] Fields proposes to save the new associationists from the challenge posed to the old by the inability to reconstitute the object.

Fields accepts the view that the computer can serve as a testing ground for models of understanding; that is, he accepts a restriction to mechanical models of cognition. When computational models are deployed as models of human cognition (much of the most interesting work in AI has no such pretensions), there are at least two further limitations which can be imposed. First, the model must provide a simulation of human performance. The decision and search procedures, in other words, should be similar to those employed in human problem-solving in the same domain. In practice, psychologists require that the time demands and patterns of error in a simulation parallel those in human performance. Second, the model must provide an adequate simulation within a relatively unrestricted domain. The model, at least, should be capable of being extended to generate solutions to problems in a range comparable to that confronting their human counterparts. A machine model that could only understand restaurant scripts would be of no real interest. In the development of both expert systems and model systems, AI researchers have methodically limited the domain. This has the effect of simplifying the problem to be solved by limiting the search space, and consequently allows for the utilization of more powerful search procedures within the space of possibilities. The motivation for this is not uniform. In developing expert systems, the aim is to produce a system that is as reliable as possible, without regard to the requirement that it simulate human performance. There are, e.g., diagnostic systems applied to limited medical problems which do better within the relevant domain than do human practitioners.
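The network conception of content described above--a concept's content conferred by its place in a network of nodes--can be made concrete with a toy adjacency list. The node names follow the Reagan example in the text; the dictionary representation and the lookup function are illustrative assumptions only, not a claim about any particular computational model:

```python
# On the computationalist picture sketched in the text, a node's
# "content" is exhausted by its links to other nodes.
network = {
    "Reagan": ["President", "Republican", "Actor", "Conservative"],
    "President": ["Reagan"],
    "Republican": ["Reagan", "Conservative"],
}

def associates(node):
    """Return the nodes linked to `node`; this web of links is all
    the content the node has on the network view."""
    return network.get(node, [])

print(associates("Reagan"))  # ['President', 'Republican', 'Actor', 'Conservative']
```

Richardson's charge is that such a structure merely relocates the old associationist problem: the links presuppose, rather than explain, the significance of the nodes they connect.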

In developing model systems, by contrast, the goal is to produce a mechanism that can solve problems within a limited domain in a way that will allow it to be generalized as an adequate simulation within an unrestricted domain. Insofar as we are concerned with the use of mechanical models to explain human cognition, it is to the latter paradigm that we must move. The corresponding challenge to AI is whether its models have promise of generalization, or whether their successes result from artificial limitations. Even if we temporarily restrict our attention to limited domains, it is not enough to limit our view to what lies within the horizon. We must somehow bring what is over the horizon into our purview.

Fields claims that the dependency on context--the fact that sensations and images gain significance only within what Merleau-Ponty calls a "horizon of meaning"--does not present an insuperable obstacle to machine models of cognition, and proposes two strategies: one is to render the context explicit; the other is to mechanize implicit knowledge of the context.

The first tactic rests on the observation that we are finite. "One can argue," Fields says, "that, as computer design is continually improving, all one has to do is wait until machines with memories that are large enough and fast enough are developed" (Fields: this volume, p. 268). Husserl, likewise, thought that the background, or Lebenswelt, must be known and given as a set of beliefs, in terms of which the significance is interpreted. The difficulty with this tactic, as both Heidegger and Merleau-Ponty urge, is that the dependency of meaning on context is open-ended. In any given case, we may utilize only a limited amount of information, but what is potentially relevant has no clear limitations. To implement this information mechanically is certainly no trivial task. We might well be able to make the information concerning context that is required for any given assessment of significance explicit. But this will not ensure that the model can be generalized. [5] It may even be that a "big enough" computer could incorporate explicit specifications of all the potential information. What then seems doubtful is that such a model would be a simulation of human performance at all. Thus it may not even be possible to render the knowledge of context explicit; and even if it were possible, the implementation of such a design is not likely to be an adequate simulation of human performance. [6]

The second tactic discussed by Fields would leave some information "inexplicit," to be accessed when necessary. Given that "perception is a selective search procedure," Fields suggests that "the perceptible world simply be identified with the background" and that this "inexplicit [information remain] inexplicit until explicitly represented by the perceptual process." (pp. 271-272) It is not obvious that all the potentially relevant contextual information could be gleaned from perception. Even if it could, though, the view that perception should be taken as a resource for accessing information in the world presupposes that that information is already available in "the perceptible world" to be read off. This presupposition is, of course, the guiding precept of mechanistic models. In its most general form, the objection to mechanism which is posed by Merleau-Ponty, Heidegger, and their more recent followers, is a challenge to just this presupposition. It is intended to enforce the view that the "information" is not antecedently available "in the world" to be "read off." Meaning is not received, but conferred. The challenge to computational models of mind posed by this anti-mechanist perspective cannot be answered by presupposing what is at issue.

It is hardly clear that the objection to mechanistic psychology posed by Merleau-Ponty and Heidegger is conclusive. It is even less clear that we should follow Dreyfus and Haugeland in embracing their alternative. As Heidegger explains it in Sein und Zeit (1927), the moral to be drawn from the open-ended dependency of meaning on context is finally that it is socially constituted: as he explains it, the significance (Bedeutsamkeit) provides the set of socially determined factors which determine, or at least condition, meaning (Bedeutung). The significance, as opposed to the meaning, is not something we know but is, rather, constitutive of our selves. Similarly, Merleau-Ponty presses his anti-mechanistic moral against psychological no less than physiological approaches. He says, following his discussion of phantom limbs,

This phenomenon, distorted equally by physiological and psychological explanations, is, however, understood in the perspective of being-in-the-world. What it is in us which refuses mutilation and disablement is an I committed to a certain physical and inter-human world, who continues to tend towards his world despite handicaps and amputations and who, to this extent, does not recognize them de jure. The refusal of the deficiency is only the obverse of our inherence in the world .... The body is the vehicle of being in the world, and having a body is, for a living creature, to be intervolved in a definite environment (1945, pp. 81-82).

This "inherence in the world" is an involvement in a social world, an "inter-human world." This social constitution of meaning is tantamount to a commitment to Methodological Nihilism: no science of psychology is possible if psychology is a science of Intention at the level of the individual. We must choose between psychology and sociology. The former is limited to mechanical causes. The latter alone can provide a science of meaning, and is unconstrained by the former. This corollary is not one we should accept lightly. The alternative to both mechanism and methodological nihilism is an irreducibly Intentionalist psychology, which embraces the view that meaning is conferred rather than received, and accepts Intentionality as an intrinsic dimension of mental life. [7] Whether such an alternative will prove viable in the long run is not clear. What is clear is that it is just such a prospect that unites theorists as diverse as Kant and Brentano, or James and Freud. Even if we ultimately accept the critique of mechanism rooted in Merleau-Ponty and Heidegger, we need not accept their counsel of despair.

* * *

Richardson claims that the proposal which Fields makes is essentially a "neo-associationism," and that consequently its validity depends upon its capacity to yield solutions for a set of problems paralleling those faced by classical associationism. The first of these is to be able to rival, computationally, the level of decision and search processing involved in human problem-solving; and the second is to do so in an essentially unrestricted domain. The latter, Richardson has maintained, is especially challenging since it seems that any system with such a capability would have to be sensitive to contextual meaning beyond the perceptual horizon.


Commentary 291

Recent critics have argued that such "context-dependency" cannot be adequately dealt with on a computational basis. Nevertheless, Fields intrepidly denies that such dependency presents "an insuperable obstacle to machine models of cognition." Two strategies suggest themselves to Fields: either make the context explicit, or find a way to mechanistically access it in implicit form. The former is not seriously considered; the latter, of course, is precisely the germ of his idea to allow the world itself, accessed by perception as "a selective search procedure," to be a key database feeding into the computational structure of natural language understanding. Richardson, however, objects to this suggestion:

It is not obvious that all the potentially relevant contextual information could be gleaned from perception. Even if it could ... the view that perception should be taken as a resource for accessing information in the world presupposes that that information is already available in "the perceptible world" to be read off. (Richardson: this volume, p. 289)

The first point is hardly decisive since, if it means that the machine might overlook something, the reply is that so might humans; whereas if it means that the context is not exhausted even by the information from perception, then we must be reminded that Fields does not rule that out. He is not prescribing the "world data base" as the only requisite for understanding language. The second point is more serious, and as Richardson notes, there seems to be an impressive array of critics who would advance it. They say, with Immanuel Kant, that meaning is not received so much as it is conferred, and that therefore the appeal to perception is actually question-begging. It may be, however, that the critics (Richardson included) fail to see the significance of the distinction between sensory information and actual meaning. The basis of the former would appear to be there in the world; only of the latter might it be said that it is "conferred" rather than received. Moreover, the development of meaning as that process is envisaged by computationalists is incremental, recursive, and compositional. Consequently, it is not at all clear that Fields begs the question.

Richardson notes that Heidegger, Merleau-Ponty and others have concluded that context-dependency is altogether an "open-ended" function of the social matrix, and that this establishes the futility of machine models of mind. He suggests we admit the criticism and accept the idea of "an irreducibly Intentionalist psychology, which embraces the view that meaning is conferred rather than received, and accepts Intentionality as an intrinsic dimension of mental life." (p. 290) In other words, to deal with the extreme "externalism" of such critics, we are to adopt a thoroughly "internalist" stance. This may be premature. In the next section, a different proposal is advanced for dealing with the problem of meaning--one which favors avoiding the extremes of the internalist-externalist dichotomy.

3.3 Translation

There are two issues complicating philosophy of mind. One is intentionality--the problem of the "directedness" of mental phenomena. It has intruded upon current efforts to account for human mentality in terms of the so-called "computer metaphor" while also confounding philosophical speculation as to whether genuine intelligence might ever be synthesized. The second issue, adding its own quanta of confusion to the situation, is the absence of any sufficiently rigorous means for communication between the opposing traditions which are now immersed in the problem of mind.

In the next paper, Herbert R. Otto offers a proposal which could serve to address both of these concerns. He argues that, contrary to common opinion, systematic translation from natural discourse to a logical notation is possible, and that it does not require that a theory of meaning specify ultimate meaning elements, but only that a means be provided for keeping track of empirically agreed-upon meaning equivalences. Also, the process seems generalizable, so that an analogous sort of "translation" may be involved in a variety of mental phenomena, to the end that intentionality as a characteristic feature of human cognition may be a property, not so much of mental "structures" or "states," but of internally based networks of algorithmic and heuristic translation systems. The idea of translation developed in the following paper is based on linguistic principles not unlike those employed in transformational grammars, and thus similarly subject to empirical validation against usage. In this way Otto attempts to avoid metaphysical entanglement.


HERBERT R. OTTO

Meaning Making: Some Functional Aspects

When it comes to the phenomenon of meaning--which most would agree figures centrally in consciousness--there seem to be two, no doubt related, aspects that demand attention. These may be presented as a pair of questions. First, what features must something have if it is to be meaningful? And, second, what capabilities must a mind have if it is to be aware of meaning? The distinction between the locutionary, illocutionary, and perlocutionary in the "speech act" approach to meaning reflects much the same concerns. It seems clear enough that meaning is not simply a matter of reference, and as J.R. Searle points out [1], there are good philosophical as well as practical reasons for not glossing over that fact. There is much to be learned from such broader perspectives on meaning. In this paper I want to explore some of what might be learned.

I. Meaning. Mind. and Translation: Two Traditions

The model which I see as unifying diverse views on meaning is the following. By ways largely objective, the rudiments of meaning arise in experience. We modify and extrapolate these, often unconsciously, in ways that accord with the systems of our perception and understanding. Various subsystems of these may or may not be intentional in nature. From this origin arise enormous stocks of newly-minted meanings, many of which are "re-invested," as it were. As Austin noted, we "do things with words." [2]

Experience, I suspect, is itself "multi-layered," ranging in complex intertwined "strata" from the sensory to the social. One's sensory experience of the momentary, physical event-environment is not necessarily to be accounted for in terms appropriate to one's "life" experience of, say, going to the university, or working in a political campaign. Consequently, what we come to take as "rudiments" of meaning could vary widely; certainly it would be at least premature to claim that there exist some ultimate "atomic" rudiments of meaning from which all other meaning must come.

Still, it may not be apparent why we should be concerned about problems of meaning outside those centering on referring, asserting, denying, and implying--for isn't it only the informative function of language that is germane to building and communicating knowledge? Why, in other words, should philosophic effort be put into an examination of such functions as those catalogued by Austin and others, e.g. functions like those listed in Searle's article [3]: warning, commanding, requesting, apologizing, censuring, approving, welcoming, promising, and so on? However much linguistic interest there might be in this, it is not clear what philosophic relevance such inquiry has.

293 H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 293-314. © 1988 by D. Reidel Publishing Company.

Perhaps theory of meaning is simply a scientific matter, having no philosophical ramifications. Yet, if we recall our pragmatic heritage, we are reminded that it is no less the case in philosophy than in engineering that idle notions seldom hit upon, much less hold fast to, truth. But what is it that needs doing in philosophy that a theory of meaning might do? One perennial suggestion is that it is to be clear: to be able to mobilize our communication, indeed, our very thinking, so that we may get the best possible grasp of whatever subject matter our attention falls upon. Theory of meaning is a meta-inquiry in an essential and very important sense, and this suffices to nominate it for philosophical attention.

This view of the matter belongs for the most part to the so-called "analytic" current of western philosophy. A prominent attempt in this century to mobilize in this way has been logical positivism. Beginning in earnest with Carnap, the scheme to regularize philosophical (and scientific) language by rigorously controlled translation from natural discourse into a systematic discourse has had times of great vogue, as well as massive criticism--the latter especially after Wittgenstein's later work. But "hope springs eternal" and the essentially Leibnizian vision ("Let us calculate") remains a deeply held, perhaps forever undislodgeable, philosophical goal. Even the weighty "skepticism" of a thinker like Quine is more truly a reflection of frustration at a tactical level than it is an abandonment of the project. It may be a far more formidable task than had been imagined, but it remains nevertheless a generally reasonable objective--especially when seen as an attempt to get ever better (and more useful) functional approximations.

But there is another stream of thought centering on the question of meaning. Commonly referred to as the "Continental" tradition, its concern is to understand the "subjectivity" of meaning, to understand how it is that experience comes to have the meaning for the individual that it does. On this view, language is "on the end of the line," so to speak--it is but the expression we give to what is already, in largely non-linguistic form, meaningful. Here, the key issue appears to be intentionality--the nature of its role in human consciousness, the mental structures it presupposes, and the possibility of its "reduction" to physicalist and extensionalist objectivity. It may not be so clear just where a concern for translation, of whatever kind, would come into play, but one thing is clear: if language and translation do have a role in the Continental mode of philosophy, it will not be limited to the merely informative.

If we generalize the notion of translation--view it momentarily as a principled movement from one form of meaning to another--then perhaps we shall more easily be able to discern how translation might be relevant to Continental concerns. Thus, translation need not be understood exclusively as trading ordinary language for formal expression, but may be viewed as including processes whereby experience becomes meaning--meaning only later expressed linguistically. In what follows, I shall concentrate on translation into a standard first-order logic; however, the general approach employed, as well as numerous of the tactical principles involved, are, I contend, "transportable" to such other translation projects as might be necessary to carry through the investigation invited by the above speculations regarding the chief issue currently drawing together the currents of modern philosophy.

2. The Concept of Translation Derivation

Few who have taught modern logic, or who have tried to use symbolic logic to appraise reasoning, will deny that translation is a notably nasty problem. How, exactly, is one to prove to a student, or to oneself for that matter, that a certain formula does indeed correctly translate a given sentence? The usual appeal to "one's intuition as a native speaker" won't do if what we want is an appraisal which at least approaches the rigor and objectivity of the methods employed within the bounds of symbolic logic itself. Indeed, there is something unquestionably gratuitous in saying that because a certain symbolic sequence is valid (or invalid), a certain English argument must therefore be valid (or invalid), when no objective account of the translation move between the two has been given.

In logic, as in mathematics, we provide for showing the validity of complicated, unsure instances of reasoning by systematically analyzing such cases into sequences of inferences, the validity of each of which lies squarely within a minimal and explicit body of uncomplicated, sure instances of reasoning. Cannot the same be done for translation? I think it can. Elsewhere [4], I have taken up this task in detail. Here, I would like simply to explain the central concept, that of the translation derivation, and give some idea of its systematic base along with a few illustrations of its use, and then show the relevance of this for theory of meaning (i.e. philosophical semantics).

The notion of formal proof is both venerable and well known. Usually, we define a formal deductive proof that a given argument, A, is valid as a sequence of statements each of which is a premise of A, or follows from some preceding statement(s) in the sequence by way of an elementary valid argument form or logical equivalence, such that the last statement in the sequence is the conclusion of A. Thus, the validity of A becomes a question of the elementary validity of each of the pieces of reasoning involved in the proof sequence. And, in a system of logic, the set of elementary forms is made fully explicit, maximally sure, and minimal in number.

In strict analogy with the foregoing, I define a translation derivation showing a logical formula, F, to be a correct translation of a given English sentence, E, as a sequence of strings each of which is either


296 Herbert R. Otto

the given English sentence, or derives from some preceding string(s) in the sequence by way of an elementary pattern (or rule) of transcription, paraphrase, or interpretation, such that the last string in the sequence is the formula F. And, as in logic, the set of elementary patterns is to be made fully explicit, maximally sure, and minimal in number. Hence, the question of the correctness of a translation can, analogously with the question of validity in logic, be reduced to the acceptability of a set of objectively stated principles which, at all times, remains open to critique.
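The analogy can be made mechanical. Here is a minimal sketch (my own encoding, not a formalism from the text): a derivation is a list of steps, each citing earlier steps and an elementary rule, and a checker confirms that every step really is licensed by the rule it cites. The two rules shown ('CI', 'CN') are toy string versions of connective patterns, with '>' standing in for the horseshoe.

```python
def check_derivation(steps, rules):
    """steps: list of (string, rule_name, cited_indices); a step with
    rule None is a given and needs no justification."""
    for text, rule, cites in steps:
        if rule is None:
            continue
        inputs = [steps[j][0] for j in cites]
        if rules[rule](*inputs) != text:
            return False          # step not licensed by the cited rule
    return True

rules = {
    # CI (toy): 'if L then M'  ->  '(L > M)'
    "CI": lambda s: "(" + s[3:].replace(" then ", " > ") + ")"
                    if s.startswith("if ") else s,
    # CN (toy): 'it is not the case that L'  ->  '(-L)'
    "CN": lambda s: "(-" + s[len("it is not the case that "):] + ")"
                    if s.startswith("it is not the case that ") else s,
}

deriv = [
    ("if it rains then it pours", None, []),   # the given English sentence
    ("(it rains > it pours)",     "CI", [0]),  # justified by rule CI
]
assert check_derivation(deriv, rules)
```

Exactly as with formal proofs, the question of a translation's correctness then reduces to the acceptability of the individual rules.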

Before turning to the elementary patterns (rules) themselves, it is worth noting several general features of systematic translation. Very importantly, it is to be observed that translation is an interlocked complex of three sub-procedures: interpretation, paraphrase, and transcription. Seldom have logicians or others made any such distinction. The procedure of transcription is mostly ignored, while paraphrase and interpretation are usually understood as the same thing, and both as synonymous with translation itself. Other terms not clearly distinguished are 'symbolizing', 'rendering', 'representing' and 'analyzing'. The key distinctions needed are reflected in Figure 1 below, which displays the typical flow of "phases" involved in constructing a translation derivation.

Figure 1. The typical flow of "phases" in constructing a translation derivation — interpretation, paraphrase, and transcription — with feedback loops among them; brackets labeled TRANSLATION and ANALYSIS span the sequence. [Diagram not reproduced.]

But how, one may ask, does this technically specialized procedure relate to theory of meaning generally? The answer I give, and which I have striven to substantiate with detailed linguistic investigation, is just this: when one communicates--engages in the "speech act," as it were--one makes meaning by utilizing elements provided by the language one speaks. Indeed, it is precisely that Chomskian "linguistic creativity" that comes into play here. Meaning is made, and made anew, with each new speech act, in each new context. Translation, then, as I have formulated it, is just


Making Meaning: Some Functional Aspects 297

the reverse order of that act. Translation aims to take our ordinary linguistic constructions and render them in an explicit and exact form. If the target language is a logic, as it is here, one's position regarding meaning theory is essentially logicist. Targets, however, are open to choice--which is why systematic translation is not tied to any particular ontology, nor confined to any particular subject matter.

When we do "logic" translation systematically we must somehow go through these stages in getting from ordinary linguistic expressions to justifiably equivalent ones within the target technical language, which latter serves as the final step of the philosophical or scientific "regimentation" of expression. Each of those steps, or stages, contributes to the final meaning clarification in a way that can be as objective (i.e. open to critique) as a mathematical proof. Functionally, each stage performs part of the processing that must take place for the result to be acceptable. Interpretation, the first or "highest" level, is concerned to decide what something shall be taken to mean (and to record the assumptions attending that decision), so as to accord on the one hand with the "conventions" of the speech community, and on the other with the best evidence available as to the intentions involved. At the second stage, paraphrase centers on manipulating language to achieve a certain standard (or "normal") form of expression equivalent in meaning to the given one. And, third, at the transcription level the paraphrase given by stage 2 is rendered in the terms of the target language.
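The three-stage flow can be sketched as a pipeline (the stage names follow the text; the example rules and the log format are my own simplifications): each stage is a function from text to text, and the pipeline threads a log of assumptions so the final formula remains open to critique.

```python
def interpret(sentence, log):
    # Stage 1: fix the reading; e.g., record 'nobody' = 'no person' (IS).
    log.append("IS: nobody = no person")
    return sentence.replace("nobody", "no person")

def paraphrase(sentence, log):
    # Stage 2: move to a standard form; e.g., a QSn-style move turns
    # 'no NP VP' into 'every NP does not VP'.
    log.append("QSn applied")
    return sentence.replace("no person wins", "every person does not win")

def transcribe(sentence, log):
    # Stage 3: render the normal form in the target language
    # ('>' stands in for the horseshoe in this toy).
    log.append("AP: Px = x is a person; Wx = x wins")
    return "(x)(Px > -Wx)" if sentence == "every person does not win" else sentence

log = []
s = transcribe(paraphrase(interpret("nobody wins", log), log), log)
# s is the regimented formula; log records every assumption made on the way
```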

We can see, then, how studies like Searle's which deal with meaning at pragmatic and semantic levels, and which focus on species of meaning not strictly informative, are indeed useful. They get at principles and reasons germane to the interpretation process. As such they provide the hidden assumptions that a full analysis is entitled to have. For example, if we are analyzing a text in which promising plays a role, we can turn to the theoretical results of studies of the promising function for such assumptions. These will help in reconstructing enthymematic contexts, and, hence, are very much like "meaning postulates." Thus, if in the context of an argument we have the proposition,

John promised Mary he would take her to dinner,

as a premise and the argument is enthymematic, we could legitimately assume, if need be, the following proposition as an additional premise:

If John made the promise and he is sincere, then John intends to take Mary to dinner.
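That restoration move can be sketched in code (my own toy encoding; the postulate wording and the helper names are illustrative, not the author's): a result from speech-act theory is stored as a "meaning postulate" which licenses adding the suppressed premise.

```python
# A catalogue of speech-act results usable as meaning postulates.
# The 'promise' entry is a generic form of the sincerity condition above.
MEANING_POSTULATES = {
    "promise": "If {s} made the promise and {s} is sincere, "
               "then {s} intends to do what was promised.",
}

def restore_premise(premises, keyword, speaker):
    """EP-style move, sketched: add the explicit statement of a suppressed
    premise when the argument's context involves the named speech act."""
    if any(keyword in p for p in premises):
        premises = premises + [MEANING_POSTULATES[keyword].format(s=speaker)]
    return premises

ps = restore_premise(["John promised Mary he would take her to dinner"],
                     "promise", "John")
# ps now contains the sincerity postulate as an additional premise
```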

But, now, I would like to turn to a deeper consideration of the philosophical ramifications of this approach.


3. Current Semantics: A Critique

In an article entitled "Methodological Reflections on Current Linguistic Theory", Professor Quine has stated a view from which I believe he has not deviated. There, he says,

... the grammarian's deep structure is similar in a way to logical structure. Both are paraphrases of sentences of ordinary language; both are paraphrases that we resort to for certain purposes of technical convenience. But the purposes are not the same. The grammarian's purpose is to put the sentence into a form that can be generated by a grammatical tree in the most efficient way. The logician's purpose is to put the sentence into a form that admits most efficiently of logical calculation, or shows its implications and conceptual affinities most perspicuously, obviating fallacy and paradox. [5]

The pragmatic thrust of this observation is something with which I certainly have no quarrel. Translation into formal language is something we do because it is useful to do. It is not revelation. But for that very reason, I find myself at odds with Professor Quine elsewhere, and all the more so with others doing semantics these days. Quine [6] argues that "stimulus meaning" lies at the bottom of linguistic utterance, and that this is what our regimentation of language in philosophy must finally show. Perhaps "stimulus meaning" might appropriately denominate ultimate psycholinguistic objects within the individual's perceptual system (itself a special case of the general notion of translation), but it does not seem correct to make such objects a necessary ingredient in translation per se.

The main trouble with Professor Quine's claim that stimulus meaning stands at the root of our utterances is the same as in most other sorts of semantic theorizing from Montague to Kripke. The error is subtle. It appears that in every case, entities of one sort or another--atoms of meaning--are postulated as ultimate, and as such they are incorporated into the theory to serve as the single, unique, "rock bottom" ground down to which translation is supposed to carry the complexes of meaning encountered in ordinary discourse. Given the usual status of postulates, it strikes me that for philosophy such a procedure is simply backward.

At this point there may be some puzzlement, for I did earlier say that I view communication as "meaning making" from "elements." This I still affirm, but it must be stressed that for translation as I construe it the elements are not part of the system of translation. Such elements are not prescribed by the translation procedure. Their character is in no way fixed by the rules of translation proper. Rather, the specification of what counts as an elemental object is a decision belonging to the subject matter of the discourse at hand. Thus, "stimulus meaning" could be taken as fundamental in a particular analysis, but need not be so. The common temptation among semanticists appears to be to preempt the ontological decision in meaning-making situations. They seem to think that unless they specify what the elements of meaning are, no meaning-making procedures can be spelled out. This, I claim, is simply false. It's as if one were to maintain that the principles of radio transmission could not be given without also specifying what was to be broadcast.

Whereas my first complaint about most semantic theories is that they assume what, as philosophers, we properly seek to find, my second complaint is about the glaring absence of systematic treatment of the linguistic side of the problem of meaning. Note that I am not denying the existence of systematic treatment of the ontological aspect of meaning--quite the contrary, on that score I have already suggested that it is too much. What has in most semantic theories not been adequately carried through is systematic formulation of natural language vis-a-vis the problem of meaning. Chomsky has alluded to such a program but has not pursued it; some neo-Chomskians have tried, but have been side-tracked into taxonomic treatments which finally are in their own way also question-begging. [7]

Briefly, then, my view of current work in philosophy of meaning is that it is too much ontology, not enough linguistics. The trend has been strangely anti-empirical. There are, of course, countertendencies: work in linguistics using syntactic theory as a model, work in AI aimed at simulating language comprehension, certain aspects of Montague grammar, and recent investigations by Professor Hintikka involving a Wittgensteinian "game-theoretic" approach to meaning. I want to argue, however, that even these do not go as far as they could toward serving the philosophical requirement of clarity. In Montague, for example, linguistic facts are countenanced--in places fruitfully--yet, apparently, their theoretical role is discounted as unessential to an explication of meaning in general, and of the meaning of this or that sentence in particular. In commenting on the process of translating from English into Montague's system IL (Intensional Logic), Dowty, Wall and Peters assert,

For Montague, the translation is in fact a completely dispensable part of the theory. The "real" semantic interpretation of an English sentence is simply its model-theoretic interpretation, however arrived at, and nothing else. [8]

Now, while there is a sense in which this may be an acceptable view, it is methodologically the same serious error I have alleged to be infecting the views on semantics held by Quine and many others. Even the so-called "situation semantics" of Barwise and Perry [9] turns out the same: they all minimize the role of natural language in favor of postulated "units" of

meaning having nice, neat mathematical properties.

The approach to semantic theory being taken by Jaakko Hintikka appears to be significantly different. Originally impressed with modal logics and with possible-worlds modelling as a way of getting at meaning, he has pushed on into new conceptual territory as a way of dealing with various difficulties that have arisen for those other stances. In particular, and apparently as a result of new insights into the work of Wittgenstein, Professor Hintikka has developed what he has termed a "game-theoretic" semantics. [10] Here "game theory" is not to be taken in the sense of mathematical game theory, but rather in the sense of Wittgenstein's notion of a language game.

At the outset, Professor Hintikka poses squarely the philosophically central question, "Why formal semantics anyway?" What do we mean to accomplish by such devices as possible worlds, procedural systems, or situational reconstructions? The common answer, he observes, appears to be to get at the meaning of assertions, where generally philosophers take this to mean getting at the "truth conditions" involved. "Truth conditions" are here understood in Tarskian (or near-Tarskian) terms. The equally common strategy for accomplishing this, Hintikka says, is to move from natural language expression by way of some translation to a formal language, and thence via a phase of "semantic representation" to the meaning.

But, since, according to Hintikka, systematic translation is largely intractable, he advocates "bypassing" it altogether and moving directly to meaning in terms of a set of rule-governed transactions of a pair of players in a "truth-value" game. The players vis-a-vis the expression involved are called "verifier" and "falsifier" (or "self" and "nature"). One understands the meaning of an expression just in case he is able to play the game; and the meaning is the playing of the game, and nothing more. There are two aspects to such meaning: an abstract sense dictated by grammatical structures, and a "strategic meaning" conditioned by choices to be made from a domain of discourse. It turns out, therefore, that a great deal of attention must be paid to the functional role played by various grammatical constructions (thus, for example, Hintikka has suggested informally that strategic meaning itself relies upon a large variety of "linguistic mechanisms" yet to be dealt with).
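For concreteness, here is a toy rendering of such a truth-value game (my own encoding, not Hintikka's formulation): verifier makes the choices for disjunctions and existentials, falsifier for conjunctions and universals, and a sentence counts as true just in case verifier has a winning strategy.

```python
def wins(formula, domain):
    """True iff verifier has a winning strategy for this formula."""
    op = formula[0]
    if op == "atom":
        return formula[1]                       # truth value supplied directly
    if op == "and":                             # falsifier chooses a conjunct
        return all(wins(f, domain) for f in formula[1:])
    if op == "or":                              # verifier chooses a disjunct
        return any(wins(f, domain) for f in formula[1:])
    if op == "all":                             # falsifier picks the individual
        return all(wins(formula[1](d), domain) for d in domain)
    if op == "some":                            # verifier picks the individual
        return any(wins(formula[1](d), domain) for d in domain)

# 'every number in the domain has a larger one in the domain' -- false here,
# since falsifier can pick 3 and verifier then has no winning reply.
domain = [1, 2, 3]
f = ("all", lambda x: ("some", lambda y: ("atom", y > x)))
print(wins(f, domain))   # False
```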

It is for this latter reason that I feel that, as interesting as his approach is, particularly in its ramifications for possibly unifying our understanding of diverse "logics", it leaves important matters regarding the exact semantic role of linguistic structures largely unexplicated. To the extent that it deals with some of these "mechanisms," it does not do so in a way sufficiently empirical to avoid the charge of apriorism lodged against those patently "ontologizing" approaches criticized earlier. Verifier and falsifier must access a world, and though--to Professor Hintikka's credit--the objects of that world aren't prescribed by a set of ontological postulates, it is nevertheless the case that the manner of their access tends to be spelled out not by rules arrived at through empirical investigation of actual linguistic structures, but rather by a set of ad hoc rules summarizing the outcome of linguistic principles which verifier and falsifier must somehow intuitively grasp. Thus, it is my contention that, contrary to Professor Hintikka's original aim, translation has not really been bypassed, but rather has been rendered unacceptably implicit.

In my view, then, there are several critical points to be made with regard to the project to understand and use language, in whatever revised or refined form, as a medium for giving a philosophically adequate account of things. These points are:

(1) No adequate semantic theory can bypass translation.
(2) Translation is not intractable.
(3) Translation can be systematic.
(4) A translation system can (and should) be empirically grounded.
(5) Ontology in translation can (and should) be minimal, circumscribed, and always tentative.

Only by satisfying these points can we achieve a philosophically unbiased employment of language. And in doing so we shall, I think, run truest to the various insights that have motivated previous critical and constructive efforts in semantics. We shall, for example, appreciate more subtly Quine's "regimentation of language" thesis while not being forced to read it as a perplexingly arbitrary measure; we shall be able to come close to the objective and systematic quality to be found in postulate-based mathematical semantics, yet not presume upon that which is a matter for empirical investigation; and finally, we shall come to understand the manner in which, as Professor Hintikka shows, meaning is enfolded within language-games which are the inescapable woof and warp of communication. But now I should like to turn to my examples.

4. Some Illustrations

The following set of elementary patterns (rules) is presented for the purpose of illustration. No claim is made for completeness of the set, nor for the grammatical universality of individual rules. Elsewhere [11], I have sought such features in more rigorous fashion. But since the patterns in such a set are ultimately empirical generalizations, and since they are therefore always open to revision, there is no bar to using the following simpler set of patterns to illustrate the concept of systematic translation derivation.

Rules for Translation from English into Logic

INTERPRETATION

Identifying (I)
   IG  Glossary items:  enter 'L = ...' into the glossary, where '...' is an elemental English sentence and 'L' is an appropriate sentence letter.
   IS  Synonyms:  enter '... = ---' into the glossary, where '...' is an English word or phrase and '---' is an appropriate equivalent.

Enthymeme Restoring (E)
   EP  Missing premise:  P1, (...), Pn /∴ C  ⇒  P1, P', Pn /∴ C, where '(...)' is a suppressed premise and 'P'' is an explicit statement of it.
   EC  Missing conclusion:  P1, Pn /∴ (...)  ⇒  P1, Pn /∴ C, where '(...)' is a suppressed conclusion and 'C' is an explicit statement of it.

PARAPHRASE

Eliminating (L)
   LC  Contraction:  -- n't  ⇒  -- not, where the same move applies to any such construction in the set [n't, 've, 'd, etc.]. (Note: sets are designated by their first member.)
   LE  Emphasis:  ... does V ...  ⇒  ... Vs ..., eliminating the emphatic auxiliary.

Extracting (X)
   XN  Negation:  ... not ...  ⇒  it is not the case that ... ..., provided 'not' does not pass over any quantifier or modal word.
   XM  Modality:  ... must ...  ⇒  it must be the case that ... ..., with the parallel restriction; also applies to the others in [must].
   XQ  Quantifier:  Q NP VP  ⇒  Q thing which is a NP VP, where Q = [every, at least one] and 'NP', 'VP' are noun/verb phrases.
   XQn Quantifier noun phrase:  ... Q NP wh+ ---  ⇒  Q NP wh+ --- is such that ... it, with the parallel restriction, and where 'wh+' is a relative pronoun.

Retrieving (R)
   RS  Subject:  S1 and S2 P  ⇒  S1 P and S2 P, where 'S' and 'P' denote subject and predicate respectively.
   RP  Predicate:  S P1 and P2  ⇒  S P1 and S P2.
   RV  Verb:  ... V PP1 PP2  ⇒  ... V PP1 and V PP2, where 'V' is a verb phrase and 'PP' denotes a prepositional phrase.
   RC  Connective:  if ... ---  ⇒  if ... then ---, where '...' and '---' are antecedent and consequent clauses respectively.
   RA  Antecedent:  ... N ... n ...  ⇒  ... N ... N ..., where 'n' is a pronoun and 'N' is its antecedent noun.
   REu Expletive universal:  every thing wh+ VP1 VP2  ⇒  for every thing if it VP1, it VP2.
   REe Expletive existential:  at least one thing wh+ VP1 VP2  ⇒  there is at least one thing such that it VP1 and it VP2.

De-tensing (D)
   DF  Future:  will V  ⇒  Vs, where 'V' is a verb and 'Vs' is its present-tense form.
   DP  Past:  was Ving  ⇒  Vs.
   DS  Subjunctive:  were Ving  ⇒  Vs, where 'Ving' is the participial form.

Active-Passive (PA)
   PA  S V O  ⇒  O was Ved by S, where 'O' denotes the direct object and other forms are in [Ved].

Quantifier Synonyms (QS)
   QSn Negative:  no NP* B VP*  ⇒  every NP B not VP, where '*' denotes singular or plural form and B = [does].
   QSu Universal:  all NP* VP*  ⇒  every NP VP.
   QSe Existential:  some NP* VP*  ⇒  at least one NP VP, where Q = [some].

TRANSCRIPTION

Abbreviation (A)
   AS  Sentence:  ...  ⇒  L, where 'L' is an appropriate propositional letter, '...' is an elemental English sentence, and 'L = ...' is in the glossary.
   AP  Predicate:  x ...  ⇒  Fx, where 'F' is an appropriate predicate letter, 'x ...' is an elemental English open sentence, and 'Fx = x ...' is in the glossary.
   AC  Constant:  *  ⇒  a, where 'a' is an appropriate constant letter, '*' is a name or description, and 'a = *' is in the glossary.

Introduction (T)
   TV  Variable:  ... thing ... it ... it ...  ⇒  ... x ... x ... x ..., where 'x' is an appropriate variable and 'it' has 'thing' as antecedent.
   TS  Synonyms:  ... r ...  ⇒  ... s ..., where 'r' and 's' are agreed upon as synonyms and 'r = s' is entered into the glossary via the IS rule.

Connectives (C)
   CN  Negation:  it is not the case that L  ⇒  (-L)
   CC  Conjunction:  it is both the case that L and M  ⇒  (L . M)
   CD  Disjunction:  it is either the case that L or M  ⇒  (L v M)
   CI  Implication:  if L then M  ⇒  (L ⊃ M)
   CE  Equivalence:  L if and only if M  ⇒  (L ≡ M)
   CQu Quantifier universal:  for every x ...  ⇒  (x)(...)
   CQe Quantifier existential:  there is at least one x such that ...  ⇒  (Ex)(...)
   (Drop parentheses where not needed.)
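To show that such patterns really can operate mechanically, here is a toy re-implementation of a handful of the listed rules as regular-expression rewrites (my own encoding, not the author's formulation, with '>' standing in for the horseshoe). Applied in order, they carry 'every student walks' through the very sequence of normal forms used in the first worked example below.

```python
import re

RULES = {
    # XQ:  'every NP VP'  =>  'every thing which is a NP VP'
    "XQ":  (r"^every (\w+) (.*)$", r"every thing which is a \1 \2"),
    # REu: 'every thing which is a NP VP' => 'for every thing if it is a NP, it VP'
    "REu": (r"^every thing which is a (\w+) (.*)$",
            r"for every thing if it is a \1, it \2"),
    # RC:  restore the suppressed 'then'
    "RC":  (r"^for every thing if (.*), it (.*)$",
            r"for every thing if \1 then it \2"),
    # TV:  introduce a variable for 'thing' and its pronouns
    "TV":  (r"^for every thing if it is (.*) then it (.*)$",
            r"for every x if x is \1 then x \2"),
    # CQu: 'for every x ...' => '(x)(...)'
    "CQu": (r"^for every x (.*)$", r"(x)(\1)"),
    # CI:  'if L then M' => '(L > M)'
    "CI":  (r"\(if (.*) then (.*)\)", r"(\1 > \2)"),
}

def apply_rule(name, s):
    pattern, replacement = RULES[name]
    return re.sub(pattern, replacement, s)

s = "every student walks"
for step in ["XQ", "REu", "RC", "TV", "CQu", "CI"]:
    s = apply_rule(step, s)
print(s)   # (x)(x is a student > x walks)
```

Each intermediate string matches the corresponding line of the derivation that follows, which is the point: the rules, not intuition, carry the sentence to its regimented form.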

The operation of the rule set is best seen in terms of concrete examples. The first is simple and straightforward, yet I think quite revealing. It is one of the first examples which Dowty, Wall and Peters present in order to illustrate the Montague system, and because Montague employs the lambda calculus as a "model," it is instructive to compare my derivation with his. They remark of the example sentence ('every student walks') as follows: "In the English nothing corresponding to a variable is evident." [12] The point they intend is that such explication "uncovers" a significant underlying logical structure. I do not deny the significance of the result, though I would favor the Quinian version of what is happening, namely that a "regimentation" of language is underway, rather than an "uncovering." However, I would insist on stressing that this regimentation is clearly rule-governed and not at all arbitrary. Thus, I claim that the logical formula that results is justifiably a correct regimentation, and one that has at root empirical support in linguistic usage. Quine does not and cannot make either claim; Montague makes the former, but cannot make the latter since he merely assumes the lambda calculus as a model of English. My derivation is as follows:

Given 1. every student walks

1 XQ 2. every thing which is a student walks

2 REu 3. for every thing if it is a student, it walks

3 RC 4. for every thing if it is a student then it walks

4 TV 5. for every x if x is a student then x walks

5 CQu 6. (x)(if x is a student then x walks)

6 CI 7. (x)(x is a student ⊃ x walks)

IG 8. Sx = x is a student; Wx = x walks

8,7 AP 9. (x)(Sx ⊃ Wx)

To further illustrate the power and incisiveness of systematic translation as a way of getting at what a particular locution may or may not have meant, I shall consider the following ambiguous sentence.

Given 1. nobody thinks he should win

IS 2. w = he

2,1 TS 3. nobody thinks w should win

IS 4. nobody = no person


4,3 TS 5. no person thinks w should win

5 QSn 6. every person does not think w should win

6 XQ 7. every thing which is a person does not think w should win

7 REu 8. for every thing if it is a person, it does not think w should win

8 TV 9. for every x if x is a person, x does not think w should win

9 RC 10. for every x if x is a person then x does not think w should win

10 CQu 11. (x)(if x is a person then x does not think w should win)

11 CI 12. (x)(x is a person ⊃ x does not think w should win)

12 XN 13. (x)(x is a person ⊃ it is not the case that x does think w should win)

13 CN 14. (x)[x is a person ⊃ -(x does think w should win)]

14 LE 15. (x)[x is a person ⊃ -(x thinks w should win)]

IG 16. Px = x is a person; Txy = x thinks y should win

16,15 AP 17. (x)[Px ⊃ -(Txw)]

Here line 2 shows explicitly a crucial assumption, which typically would require support from the larger argumentative context. In the absence of such support, the natural assumption would be to take "nobody" as the syntactical antecedent (not the ontological referent, however, since that is not what matters here) of the pronoun 'he'. This means simply that steps 2 and 3 in the above derivation will be omitted, yielding:

Given 1. nobody thinks he should win

IS 2. nobody = no person

2,1 TS 3. no person thinks he should win

3 QSn 4. every person does not think that he should win


4 XQ 5. every thing which is a person does not think it should win

5 REu 6. for every thing if it is a person, it does not think it should win

6 RC 7. for every thing if it is a person then it does not think it should win

7 TV 8. for every x if x is a person then x does not think x should win

. . . (steps 9 through 14 parallel the preceding derivation, with 'x' in place of 'w')

14,13 AP 15. (x)[Px ⊃ -(Txx)]

Either way the translation goes, the interpretive move is entirely explicit and open to review. Moreover, the exact source-point of the ambiguity is isolated, and the relevant translational options clear.
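The point about localizing the ambiguity can be sketched as follows (a toy encoding of my own): the two readings differ only in one logged interpretive decision about the pronoun, and everything downstream is rule-governed.

```python
def resolve_pronoun(sentence, pronoun, referent):
    """Record one interpretive decision (an IS-style glossary entry) and
    apply it. Naive string replace; a real rule would respect word
    boundaries."""
    glossary = {pronoun: referent}
    return sentence.replace(pronoun, referent), glossary

# Reading 1: 'he' names a contextually given individual w.
s1, g1 = resolve_pronoun("nobody thinks he should win", "he", "w")
# Reading 2: 'he' is bound by the quantified subject itself.
s2, g2 = resolve_pronoun("nobody thinks he should win", "he", "x")

# Downstream transcription then yields (x)[Px > -(Txw)] for reading 1
# and (x)[Px > -(Txx)] for reading 2 ('>' for the horseshoe).
print(s1)   # nobody thinks w should win
```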

This example can be pressed further to show how a more in-depth translation could be given, supposing a kind of analysis that would require it. Picking up at the appropriate line in the first of the previous two derivations (it could have been either) we have:

From above 10. for every x if x is a person then x does not think w should win

10 XN 11. ...... it is not the case that x does think w should win

11 LE 12. ...... it is not the case that x thinks w should win

12 CN 13. ...... -(x thinks w should win)

13 XM 14. ...... -(x thinks it should be the case that w wins)

IS 15. φ = it should be the case that

15,14 TS 16. ...... -(x thinks φ(w wins))

IG 17. Wx = x wins

17,16 AP 18. ...... -(x thinks φ(Ww))

8,18 AP 19. ...... -(Tx, φ(Ww))


19,9 Rewrite 20. (x)[Px ⊃ -(Tx, φ(Ww))]


Here the extended analysis introduces a relational predicate, as well as a deontic operator (should). For some purpose of analysis, e.g. for providing a fairer test of some argument containing this sentence, this more detailed translation would be available.

5. Some Philosophical Considerations

At this point, it is worth raising a few questions germane to our earlier discussion of semantic theory done from the point of view of the philosophy of meaning and the broader program of seeking a maximally coherent, ontologically conservative worldview, or if you prefer, science. Where Hintikka had asked, "Why formal semantics, anyway?" it now seems in order to ask "Is formal semantics necessary if we do translation properly?" or, more pointedly, "Where, exactly, does 'meaning' enter the translation process?" There are several things that need to be said. (1) Because there are different "logics," some semantic presuppositions are tucked away in the resulting formulas themselves, not to mention some that are built into the operation of the rules of a typical logic. But this is unavoidable, and since logicians make it part of their business to investigate such complications, we are not likely to be misled on this account. (2) Because translation claims only to be meaning preserving, not meaning revealing or meaning specifying, no ultimate metaphysical claim need be proffered, not even putatively empirical claims about stimulus meanings. The claim that the derived formula has the same meaning (for logical purposes) as the given sentence has now been cashed in for a minimal set of such claims, all granted within the frame of a closely scrutinized system. Where assertions of synonymy are made, they are subject to agreement relative to a context, and perpetually revisable. Agreement may be based on stimulus meaning, or on any other basis deemed appropriate at the time. (3) Meaning assignments in the glossary are likewise open to critical review and revision. Should, for some defensible reason, an extended analysis be proposed for a given discourse, appropriate glossary changes can be made. And since the point of all such analysis is appraisal of the validity of argumentation (and not the contingent truth of propositions), glossary entries need only agreement, not an absolute meaning or fixed array of Tarskian truth conditions. Finally, (4) the use of domains of discourse--abstract possible worlds or otherwise--is not equivalent to an ontology, provided they are not employed as the ground for asserting the truth of anything. In their proper application, "models" drawn from such universes are part of the search for counterexamples, and only that. As such, they merely show that a given form could conceivably have an interpretation that would invalidate it.


But since a counterexample is never conclusive against an argument for which the form has not been established, possible worlds and the like are, properly speaking, not meanings at all. They are at best "trial meanings," and interestingly, it is in this role that the search for them appears to be analogous to the language-game process described by Professor Hintikka.

6. Resolution of a Pseudo-Problem

One final example will suffice to illustrate the importance of translation derivation as a systematic part of philosophical analysis generally. In her book, Philosophy of Logics [13], Susan Haack recounts a problem seemingly defying adequate translation using just ordinary predicate logic. She claims that because the argument,

The President signed the treaty with a red pen.

Therefore, the President signed the treaty.

could be no better translated than Fa /∴ Ga, where:

a = the President
Fx = x signed the treaty with a red pen
Gx = x signed the treaty

its obvious validity could not be demonstrated, and that this puzzle has led a number of philosophers to a more liberal policy in the matter of paraphrasing. Thus, says Professor Haack, "Davidson, for example, proposes a representation along the lines of:

(Ex)(x was a signing of the treaty by the President and x was with a red pen)

∴ (Ex)(x was a signing of the treaty by the President)

which," she goes on to point out, "like the original is valid. [It] supplies the original argument with a representation within the standard predicate calculus by quantifying over events and treating adverbs as predicates of events." (Emphasis mine.) She then adds, "another possibility would be to extend the standard formalism, e.g. by the addition of predicate operators to represent adverbs ..." as if ad hoc quantification over events wasn't bad enough!

Besides such flirtation with metaphysics--which seems to have become once again fashionable--there are other objections to such treatment, or better, mistreatment, of translation. To note a few: (1) On what principle does Professor Davidson select the particular "representation" that he does--other, that is, than that it appears to give the desired result? In the absence of principles for deciding how to change events into quantifiable objects, and, in general, how to tell just what has to be so changed into objects, the suggestion is patently ad hoc. (2) Why are we set back to talking about "representations"? The original aim was to translate arguments, not represent them--a much more arbitrary activity, to say the least. If it's representation that suffices, then how about: (P . Q) /∴ P, where simply,

P = the President signed the treaty
Q = it was done with a red pen

and we needn't quantify over anything! (3) Even if we accept Professor Davidson's "paraphrase," it would still leave us with a problem if what we had sought was the alternative conclusion, "the treaty was signed with a red pen," for from his premise the best one could get would be,

(Ex)(x was with a red pen)

or, even if we allowed a "fudged" substitution, "the signing of the treaty was with a red pen," which, though it may seem the same thing as "the treaty was signed with a red pen," is only so intuitively, not explicitly. What's lacking, not only in the facetious rendering in (2) above, but also in the Davidson rendering, is an explicit linguistic basis for the moves.

I want to claim that close attention to grammatical principles--transformational patterns capable of being formulated just as objectively as the rules of logic, and having an empirical footing that is at least as ample as that behind a number of the logical rules (e.g. from p infer p v q)--will give us the justifications required by truly rigorous translation. And such translation will, in turn, show quite clearly how Davidson has allowed a subtle language error to trick him into an unnecessary ontological commitment--Wittgenstein's warnings notwithstanding.

The following translation derivation is not only explicit at every step, but also more soundly motivated on philosophical grounds than Davidson's. It results straightforwardly in a formula of the standard predicate logic, requiring quantification over nothing more exotic than physical objects of the most mundane sort.

Given 1. the president signed the treaty with a red pen

1 PA 2. the treaty was signed by the president with a red pen

IS 3. the treaty = something which is the treaty


3 TS 4. something which is the treaty was signed by the president with a red pen

4 QSe 5. at least one thing which is the treaty was signed by the president with a red pen

5 REe 6. there is at least one thing such that it is the treaty and it was signed by the president with a red pen

6 TV 7. there is at least one x such that x is the treaty and x was signed by the president with a red pen

7 CQe 8. (Ex)(x is the treaty and x was signed by the president with a red pen)

8 CC 9. (Ex)(x is the treaty . x was signed by the president with a red pen)

9 RV 10. (Ex)(x is the treaty . x was signed by the president and was signed with a red pen)

10 RS 11. (Ex)(x is the treaty . x was signed by the president and x was signed with a red pen)

11 CC 12. (Ex)(x is the treaty . x was signed by the president . x was signed with a red pen)

12 PA 13. (Ex)(x is the treaty . the president signed x . x was signed with a red pen)

IG 14. p = the president

14,13 AC 15. (Ex)(x is the treaty . p signed x . x was signed with a red pen)

IG 16. Tx = x is the treaty
Rx = x was signed with a red pen
Sxy = x signed y

16,15 AP 17. (Ex)(Tx . Spx . Rx)

Translation of the conclusion of the argument proceeds along similar lines, and results in the following:


(Ex)(Tx . Spx . Rx)

:. (Ex)(Tx . Spx)

which, in line with the original desideratum, is clearly seen as valid. And, besides not requiring that we quantify over events, this translation has the additional advantage that the other possible conclusion--namely,

(Ex)(Tx . Rx)

which did not follow without noticeable fudging from Davidson's version of the premise--follows just as easily from our premise.
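
The validity claim can also be illustrated mechanically. The following is a minimal sketch, not part of Otto's system: it exhaustively searches all interpretations over a two-element domain for a countermodel to the inference from (Ex)(Tx . Spx . Rx) to (Ex)(Tx . Spx). The finite domain and the encoding of predicates as Python sets are illustrative assumptions.

```python
from itertools import combinations, product

def powerset(xs):
    """All subsets of xs, as sets -- candidate extensions for a predicate."""
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# Two-element domain; S ranges over sets of ordered pairs, p over individuals.
domain = [0, 1]
pairs = list(product(domain, repeat=2))

counterexamples = 0
for T in powerset(domain):
    for R in powerset(domain):
        for S in powerset(pairs):
            for p in domain:
                # (Ex)(Tx . Spx . Rx)
                premise = any(x in T and (p, x) in S and x in R for x in domain)
                # (Ex)(Tx . Spx)
                conclusion = any(x in T and (p, x) in S for x in domain)
                if premise and not conclusion:
                    counterexamples += 1
print(counterexamples)  # 0
```

No countermodel exists at any cardinality, of course, since any witness for the premise is ipso facto a witness for the conclusion; the finite search merely illustrates the point.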

Thus, I conclude that systematic translation is both feasible and philosophically significant. There is reason to suspect that numerous other problems of philosophy--including intentionality--will "come apart" upon analysis using proper and systematic translation; moreover, employing the concept in its most general form, ontologically sound insight into the actual structures of mind should emerge. Considerable empirical work remains, however.

--fJ--

The argument which Otto gives is that translation can be systematic, and that as such, one need not presuppose any particular "elements" of meaning. That is, translation does not require the principle of "compositionality" in the way generally assumed by analytic semanticists. Rather, what Otto's system requires is an objective, organized means of keeping track of empirically given (conventional) equivalences of expression and the complexes that may result from them. Thus, mental processes such as perception, insofar as they may be regarded as "translation" structures, likewise need not require anything in particular to be their ultimate "contents," nor need the "directedness" of intentionality be an "object" of any particular sort. This would give the phenomenologist concrete access to the tools of logic in a way quite compatible with the motivation behind the Husserlian technique of "bracketing." For, in effect, Otto is suggesting that the operative form of phenomenological bracketing is essentially the equivalent of "glossary formation" within the process of translation. In the case of linguistic translation (which, for Otto, is the paradigm) the glossary provides for an explicit accounting of what is to be meaningful within the context of one's reasonings; in the case of perception, an "internal" glossary provides one's perceptual system with an implicit set of "pre-reflective contents" necessary for making meaningful the situation in which one finds oneself. Thus in both cases, ontological presupposition is subject to inquiry and revision.


As an illustration we might consider the process of "pre-theoretic constitution" which Garrett discusses in his critique of Arbib. There, the "horizon" of which phenomenologists often speak is reflected in the possibility of perceptual transformation and re-adjustment as an individual's focus of attention in a given situation narrows. As Garrett describes it, there occurs,

abrupt and gradual shifts of schema-networks or horizons ... for instance, when one's initial productive engagement (say, in wood-chopping) is interrupted by an equipment failure (a crack in the axe handle), or when a doctor moves from direct looking (say, at the patient's outward symptoms) to technically assisted seeing (computer-assisted blood analysis) and from [there] to dialogical inquiry (asking the patient how he feels) which gains access to the lived body of the Other. (Garrett: this volume, p. 253)

What Otto is saying about such an occurrence is that the "semantic representation" involved is formulated (in this case implicitly) as a glossary the elements of which the perceptual system manipulates in accordance with (again, implicit) rules of a syntactical or morphological nature. As the shift of attention takes place, this glossary may be extended, resolved into sub-glossaries, or exchanged altogether for another. The important point is that the elements of such a glossary are precisely the specific "contents" of our momentary "takings" and, as such, they are not systemically fixed, although the rules governing their manipulation are. Hence, philosophically, we must persist in asking what is meant by 'horizon', 'mode of Being' and so on, for otherwise we are systematically confused.

It does seem that Otto's example showing the manner in which Professor Davidson was misled into postulating events as entities is a case in point. There, we saw a concrete instance of the way in which linguistic subtleties give rise to philosophical presuppositions, the likes of which the later Wittgenstein cautioned us against. Otto, it appears, has undertaken to devise a systematic means for cashing in on Wittgenstein's insight. His stance is a radical departure from contemporary work in analytic semantics, but perhaps a departure that will find use in the project of phenomenological description which now appears essential to achieving an adequate philosophy of mind. Consequently, close critique from both the phenomenological perspective and the traditional analytic point of view is in order. In the next paper, Herbert E. Hendry provides us with the latter.


HERBERT E. HENDRY

Comments on Otto on Translation

Translation into logical notation is an art. According to Herbert Otto it can and should be a science. That is a major theme of his essay. It is not the principal one, but it is the one that I should like to focus my attention upon. Moreover, I want to restrict my attention to that part of the translation procedure that consists of what Otto refers to as paraphrase and transcription.

I agree with Otto when he writes that "... there is something unquestionably gratuitous in saying that because a certain symbolic sequence is valid (or invalid), a certain English argument must therefore be valid (or invalid), when in fact no objective account of the translation move between the two has been given." (This volume, p. 295) For this reason I find myself to be sympathetic with the program he outlines and appreciative of the efforts that he has made, both in this essay and in his book The Linguistic Basis of Logic Translation [1], to bring that program into fruition. Nonetheless, I have a number of reservations and worries.

The first concerns the notion of a translation derivation, and the claim that it is analogous to the notion of formal proof. I think that there is an important point of disanalogy between these two notions. Normally, it is required that the rules in accordance with which one moves from earlier to later lines in a formal proof be effective. Otto does require that the rules that define a translation derivation be "fully explicit, maximally sure, and minimal in number". (p. 295) But, it is clear that he does not impose the requirement of effectiveness, for his own rules make explicit reference to the notions of equivalence and synonymy, notions that are clearly non-effective. (The rules also make reference to other, less conspicuous non-effective notions, e.g., those of being a noun phrase or a verb phrase.) Thus, at least as things presently stand and even assuming the correctness of his rules, the notion of a translation derivation cannot claim to fully share the "objectivity" of the notion of a formal proof. Could one reasonably impose the requirement of effectiveness on translation rules? Only if one has as precise a theory of the syntax and semantics of the home language as we do for the target language. And it is clear that no such theory is at this time available for English.

This does not mean that the system as it now stands is non-objective. I see no compelling reason to identify objectivity with effectiveness. But, by the same token, I do not see that the absence of a rule-governed translational procedure entails that the question of whether a given translation is correct is non-objective. There are two questions that appear to me to be sometimes conflated. One is the question of how a given

315 H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 315-324. © 1988 by D. Reidel Publishing Company.


translation has been arrived at. Was it arrived at by systematically applying some more or less well defined set of rules, or was it arrived at in some less systematic, more intuitive manner? A second question is, given something that purports to be a translation, assuming it to be "correct," and forgetting the question of how it was arrived at, can there be any objective justification that it is a correct translation? I think that we must agree with Otto that translation into logical notation is typically by hook or by crook. But, it does not follow that such translation is typically without justification. Let me illustrate by means of a simple example. Suppose that 'Es . Gsp' is given as a translation of:

(0) The number of stars is even and greater than the number of planets.

Is the proposed translation correct? The question has no answer except relative to some interpretation of the language in which 'Es . Gsp' occurs. Apart from such an interpretation, 'Es . Gsp' is a meaningless sequence of symbols. An interpretation of logical notation is normally taken to consist of two things: a non-empty set, sometimes referred to as the domain of the interpretation, and an assignment function, a function that makes assignments satisfying certain constraints to predicates and names and perhaps other symbols. Suppose the above question is raised relative to an interpretation, I, that has as a domain the set of natural numbers and an assignment function, f, that has the following four properties:

(1) f('s') = the number of stars
(2) f('p') = the number of planets
(3) f('E') = the set of even numbers
(4) f('G') = the set of ordered couples <m,n> of natural numbers such that m is greater than n.

Is the translation correct relative to I? Once again the question has no answer. What justification is there for reading 'Es' as 'The number of stars is even' rather than as 'The number of stars is not even'? What justification do we have for reading 'Gsp' as 'The number of stars is greater than the number of planets' rather than as 'The number of planets is greater than the number of stars'? And, what justification do we have for reading '.' as 'and' rather than 'or'? My answer is that there is no justification except relative to an account of what it means for a sentence to be true under an interpretation. Assuming a standard account, we have:

(5) 'Es' is true under I iff f('s') is a member of f('E')
(6) 'Gsp' is true under I iff <f('s'),f('p')> is a member of f('G')
(7) 'Es . Gsp' is true under I iff 'Es' and 'Gsp' are both true under I.


(1)-(7) together with certain elementary principles of set theory entail that 'Es . Gsp' is true under I if and only if the number of stars is even and the number of stars is greater than the number of planets. Thus, the sentence 'Es . Gsp' is true under I just in case the number of stars is even and greater than the number of planets. The last claim is clearly well justified, and I take it to provide good justification for regarding the translation as correct relative to the interpretation I (together with the assumed truth definition). Similar but more complicated justifications can be given for our usual translations of quantifications.
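
The truth definition sketched in (1)-(7) can be mirrored in a short program. This is only an illustrative sketch, not part of Hendry's apparatus: the numeric values assigned to 's' and 'p' are stand-ins (the actual number of stars is, of course, not known), and sets are rendered as membership tests.

```python
# The assignment function f of (1)-(4), with stand-in values for 's' and 'p'.
f = {
    's': 10,                      # stand-in for "the number of stars"
    'p': 8,                       # stand-in for "the number of planets"
    'E': lambda n: n % 2 == 0,    # membership in the set of even numbers
    'G': lambda m, n: m > n,      # membership in {<m,n> : m is greater than n}
}

def true_under(sentence, f):
    """Truth clauses (5)-(7): two atomic cases, then conjunction."""
    if sentence == 'Es':
        return f['E'](f['s'])
    if sentence == 'Gsp':
        return f['G'](f['s'], f['p'])
    if sentence == 'Es . Gsp':
        return true_under('Es', f) and true_under('Gsp', f)
    raise ValueError('unknown sentence: ' + sentence)

# With these stand-in values, 10 is even and 10 > 8:
print(true_under('Es . Gsp', f))  # True
```

The point of the sketch is only that, once (1)-(7) are fixed, the truth value of 'Es . Gsp' under I is determined mechanically; the justification of the translation then rests on the claim recorded in (12).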

Let us look at the justification somewhat more closely, keeping an eye out for any shortcomings. (5) together with (1) and (3) entail by the substitutivity of identity that,

(8) 'Es' is true under I iff the number of stars is a member of the set of even numbers.

Similarly, (1), (2), and (4) together with (6) entail that,

(9) 'Gsp' is true under I iff <the number of stars, the number of planets> is a member of the set of ordered couples <m,n> such that m is greater than n.

And, (7)-(9) entail by a principle of replacement (or interchange) that,

(10) 'Es . Gsp' is true under I iff the number of stars is a member of the set of even numbers and <the number of stars, the number of planets> is a member of the set of ordered couples <m,n> such that m is greater than n.

By a set-theoretical principle often called concretion, (10) entails that,

(11) 'Es . Gsp' is true under I iff the number of stars is even and the number of stars is greater than the number of planets.

Finally, from (11) we may conclude, by a process I shall refer to as colloquialization, that,

(12) 'Es . Gsp' is true under I iff the number of stars is even and greater than the number of planets.

It is (12), I contend, that provides the ultimate justification of the translation in question.

There are several points at which the proposed justification is open to criticism. (I want to be as severe as is reasonably possible.) The first is its move from (1)-(6) to (8) and (9). The inference was based on the principle of substitutivity of identity, a principle notoriously problematic. The problems, however, appear to arise only when substitution is made in intensional contexts; indeed one might reasonably take failure of substitutivity to be definitive of intensionality. Is the present context intensional? One's answer to this question will depend on whether he or she construes 'iff' truth functionally. Construed truth functionally, there is no problem. The context is extensional, and the inference is unimpeachable. But suppose that 'iff' is construed intensionally. Even in this worst case, I think that something can be said on behalf of the inference in question. Notice that the identities (1)-(4) are not merely contingent truths. They were taken rather as definitive of the interpretation I. They are, if you will, analytic truths. And substitutivity of analytic identities in intensional contexts which, like the present one, do not involve propositional attitudes is unproblematic.

It might be argued that (1)-(4) are not really definitive of the interpretation I. For consider the interpretation I' that is just like I except that its assignment function f' satisfies the condition that,

(1') f'('s') = the largest perfect number.

Suppose that there are only finitely many perfect numbers so that I' really is an interpretation. Then, just as I argued for (12) relative to I, I should argue for

(12') 'Es . Gsp' is true under I' iff the largest perfect number is even and greater than the number of planets.

relative to I'. Now, suppose that as a matter of contingent fact, the number of stars turns out to be identical with the largest perfect number. Then I and I' turn out to be one and the same interpretation. Surely,

(0') The largest perfect number is even and greater than the number of planets.

and (0) cannot both be equally acceptable translations of 'Es . Gsp', and therefore 'Es . Gsp' cannot be an equally acceptable translation of both (0) and (0'). But by my own account it seems to be. Earlier, I insisted that the question of correctness of translation into logical notation makes no sense except relative to an interpretation. I should have insisted on something stronger. The question of correctness makes no sense except relative to a specification of an interpretation. I specified the interpretation I (= I') in two different ways. (0) is a correct translation relative to the first specification. (0') is a correct translation relative to the second. Neither is correct relative to both.

What is to be said about the inference from (7)-(9) to (10)? It is unproblematic unless the 'iff' of (7) is construed intensionally while the 'iff' of (8) or (9) is construed truth functionally. But suppose the latter to be true. Then the inference is subject to all of the pitfalls of substituting identicals in intensional contexts. Once again I think that the criticism can be rebutted. I have urged that (1)-(4) can be construed as analytic truths. Similarly, I will urge that (5)-(7) are analytic. They are analytic because they follow from syntactical rules that define the system of logical notation that is being assumed together with the assumed definition of truth. I assume that the syntactical rules for the system in question entail, definitionally, that 'G' is a two-place predicate, etc., and that the truth definition entails, once again definitionally, that a conjunction is true under an interpretation I if and only if both of its conjuncts are true under I, etc. (8) and (9) are entailed by analytic truths and therefore are themselves analytic. But the inference from (7)-(9) to (10) is problematic only if (8) and (9) are contingent.

The inference from (10) to (11), appealing as it does to set theoretical principles, is also open to criticism. One may have reservations about set theory, but given the role that set theory has played in the development of logic, they would seem to be out of place at this point. The last inference, the inference from (11) to (12), appealing to colloquialization, is admittedly "unprincipled" in the sense that it is not based on any system of well-articulated rules. It is nonetheless beyond reproach. Surely, by any reasonable standard of synonymy, 'The number of stars is even and the number of stars is greater than the number of planets' is synonymous with (0).

Earlier I claimed that it is (12) that provides the ultimate justification for taking 'Es . Gsp' to be a correct translation of (0). It is at this point, I suspect, that Otto would have his most serious misgivings concerning the proposed justification. I have been assuming a so-called truth conditional approach to meaning throughout, an approach that he apparently rejects. An impasse has been reached, and further discussion would have to be conducted at a different level.

Matters are not as straightforward as I have suggested, even for those of us who advocate truth conditional semantics. For example, problems arise in connection with our translations of conditionals. The chain of reasoning that led to (11) leads to:

(13) 'Es ⊃ Gsp' is true under I iff either the number of stars is not even or the number of stars is greater than the number of planets.

Whether the reasoning can legitimately be carried one step further to:


(14) 'Es ⊃ Gsp' is true under I iff if the number of stars is even then the number of stars is greater than the number of planets.

is, to say the very least, arguable. The problem, of course, is that in standard systems '⊃' is interpreted as yielding conditionals that count as true under an interpretation if and only if either they have a false antecedent or a true consequent. Whether English indicative conditionals have the same truth conditions is a topic of considerable controversy. Frank Jackson [2] has aptly dubbed the thesis that the English indicative conditional is a material conditional, counting as true if its antecedent is false or its consequent is true, the equivalence thesis. Those like myself who accept the equivalence thesis will have no problem moving from (13) to (14). Those who would reject the thesis will have reservations. But I really don't see any threat to my proposed scheme of justification from these quarters. On the contrary, I take it to be a virtue of the scheme. It provides persuasive justification of translation only when one feels intuitively comfortable with the translation. That is as it should be.
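
The material conditional reading at issue in (13) and (14) can be tabulated directly. The following is a minimal sketch, with '>' standing in for '⊃'; it asserts nothing about English conditionals, only about the connective of the standard systems:

```python
# 'A > B' is true iff A is false or B is true -- the truth conditions that
# Jackson's "equivalence thesis" attributes to the English indicative conditional.
def material_conditional(a, b):
    return (not a) or b

for a in (True, False):
    for b in (True, False):
        print(f"A={a!s:5} B={b!s:5}  A > B = {material_conditional(a, b)}")
```

The single row on which the table could diverge from intuition is the one with a true antecedent and false consequent; the controversy over (13)-(14) concerns the remaining rows, where the material conditional comes out true automatically.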

I have tried to argue that where we feel intuitively comfortable with our translations, those of us who accept truth conditional semantics are in a good position to give an objective, theoretically sound justification for those translations even in the absence of a system of translation rules of the sort proposed by Otto. Such a system may be a sufficient condition for objectivity, but it is by no means a necessary condition. Perhaps matters are not as bad as they first appeared to be. I do not mean to suggest that this observation in any way diminishes the importance of his program. Indeed, the process I referred to as "colloquialization" appears to be the inverse of the process he refers to as "paraphrase". I have already conceded that it is not governed by an articulated set of principles. Thus whatever progress is made in the direction of codifying paraphrase would be welcome progress in the direction of codifying colloquialization.

Another worry concerns the question of what is to be preserved in translation from natural language into the notation of logic. At one point in Otto's essay it is suggested that translation be "meaning preserving." At another it is suggested that the translation should have "the same meaning (for logical purposes)" as the sentence it translates. At still a third it is claimed that the translation be a "correct regimentation" of the translated sentence. Sameness of meaning, sameness of meaning for logical purposes, and being a correct regimentation are not clear notions, and they are clearly not the same notion. (I do not mean to suggest that my own answer to this question, namely, truth conditions relative to a specification of an interpretation, is without its problems.) Otto claims that his rules are open to empirical scrutiny. But surely those rules cannot be evaluated for empirical adequacy apart from some clarification of this issue. Consider, for example, the rule CI, the rule that allows passage from something of the type 'if L then M' to 'L ⊃ M'. To my way of thinking it makes the translation derivation of 'Es ⊃ Gsp' from 'If the number of stars is even then the number of stars is greater than the number of planets' altogether too easy. I here assume that 'L ⊃ M' represents a material conditional. In his book, Otto writes that he is there concerned only with standard first-order logic with identity, but there are a number of things in the present essay that suggest he has his sights set on more ambitious systems. If it is a material conditional, then those who reject the equivalence thesis should be made privy to the supporting empirical evidence. If it is not, some account should be given of the underlying semantics for '⊃'.

The reader will by now have discerned that Otto and I see the question of translatability into logical notation from radically divergent perspectives. My thoughts focused heavily on the semantics of the system of logical notation. His question "Is semantics necessary if we do translation properly?" suggests that he is more concerned with syntax. And his passing reference to different logics together with the observation that they incorporate different semantic presuppositions suggests to me that he sees translation as having more to do with structure than with meaning. All we need to know about '⊃' is that it is a connective used to form conditionals. Whether it be a material conditional, or the conditional of an intuitionist logic, or a conditional of one of the growing number of systems designed to codify the logic of indicative conditionals is beside the point.

Finally, I am somewhat puzzled by Otto's plea for ontological and metaphysical neutrality. Whether buying or selling a theory of meaning or translation, or for that matter a theory of anything, we have our beliefs concerning what there is and how it works. Intellectual integrity demands that those beliefs, no less than any other beliefs, be respected. Any theory worthy of the name will have ontological commitments. And those commitments must be weighed when evaluating the theory.

--'V--

Agreeing that objectivity need not be identified with (the mathematical notion of) effectiveness, Professor Hendry nonetheless wonders if "the absence of a rule governed translational procedure [really] entails that the question of whether a given translation is correct is non-objective." (Hendry: this volume, p. 315) He then goes on to give an illustration (involving the number of stars) in which he claims to present "good justification for regarding [a putative] translation as correct ..." yet as "arrived at in some less systematic, more intuitive manner." (p. 316) But inasmuch as his handling of the example in question is by way of a set-theoretic interpretation, it is difficult to see how that amounts to an absence of rule-governed procedure. What rather seems to be the case is that the rules are of a different kind, and their use merely implicit.

But this raises another, more fundamental, issue regarding translation: does it really require a truth-based semantics in order to be properly carried through? Hendry, like many others in the "mathematical" tradition, thinks so. In this vein, he argues that the "ultimate justification" of the translation 'Es . Gsp' for 'The number of stars is even and greater than the number of planets' is the result he obtains as line 12 in a derivation of his own (p. 318), namely:

'Es . Gsp' is true under I iff the number of stars is even and greater than the number of planets.

This, of course, follows Tarski. It is a tack that nearly all current semanticists follow. Their idea, briefly, is that a sentence is true just in case certain "truth conditions" are satisfied, and two sentences translate one another if they have the same truth conditions. Consequently, to "justify" a translation of 'X' to 'Y' is to exhibit an "interpretation" (an abstract model, as it were) under which the two always have the same truth value. This "interpretation" is proposed as the "meaning" of each, and since the two obviously have the same meaning, they must be "correct translations" of one another.

One difficulty with this approach, however, is that it conflates the notions of translation and logical equivalence. For example, treating the matter as Hendry (and others) do, '-(Es ⊃ -Gsp)' would be just as correct a translation of the sentence in question as 'Es . Gsp' and, indeed, so would infinitely many other equivalent possibilities be. This seems a terribly counterintuitive notion of translation, to say the least.
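
The conflation can be exhibited truth-functionally: '-(Es ⊃ -Gsp)' and 'Es . Gsp' agree under every assignment of truth values, so a purely truth-conditional criterion cannot prefer one to the other as "the" translation. A minimal sketch (the Python functions merely stand in for the truth tables; nothing here belongs to Otto's or Hendry's apparatus):

```python
from itertools import product

def conj(e, g):
    """'Es . Gsp' as a truth function of its atomic components."""
    return e and g

def neg_cond(e, g):
    """'-(Es > -Gsp)': negation of a material conditional with negated consequent."""
    return not ((not e) or (not g))

# The two formulas receive the same truth value under all four assignments.
assert all(conj(e, g) == neg_cond(e, g)
           for e, g in product([True, False], repeat=2))
print("equivalent under all assignments")
```

Since any formula in the same equivalence class passes the test equally well, equivalence alone cannot single out a translation.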

A more subtle and even more challenging difficulty for traditional set-theoretic semantics is to explain how one is to properly associate, for example, the number of stars with 'the number of stars', a step that has to be carried through if the Tarski approach is to succeed. Montague (1974) tried to do this, as have others, by attempting to reconstruct English in a manner that associates a basic vocabulary with hypothesized sets and then goes on to derive (by a "principle of compositionality") the meanings of phrases and sentences. The trouble here is that the rules that are then introduced (and they are considerable) are virtually ad hoc, since little attention is paid to the details of actual English grammar. What this approach does is imagine a world neatly packaged in sets and then, arbitrarily, reform the language to fit that structure. In other words, an ontology is fixed and grammar then fitted to it--a strategy uncomfortably reminiscent of dogmatic metaphysics. Theories do have ontological commitments, as Hendry rightly observes. But Otto would seem to be asking whether those commitments should be insinuated into our theories simply because of the grammatical moves necessary to translate our descriptions of the facts into the terms of our theories. His view seems to be that grammar and the apparatus of translation are the very last things that should contrive for us any sort of ontological commitment.

The approach which Otto proposes is grounded in the linguistics of usage. Interpretation, for example, is not set-theoretic modeling but rather a process of being explicit about what, relative to a context, is to count as given so far as unexplicated predications, synonymous expressions, and the like are concerned. Each new context, each new analysis, may result in different interpretive decisions. Thus, if one wishes a concrete extensional understanding, an appropriate ontological commitment can be specified. But there is no requirement that any such commitment be specified. Translation, and the testing of inference, can proceed--and proceed systematically--without knowing "ultimate meaning." It suffices to say what items are to be "taken" as linguistically equivalent (i.e. assumed the same in meaning). It is interesting to note that the "taking" described here can be viewed as a formal counterpart of Husserl's "bracketing" in his discussion of the nature of perceptual understanding.

The same principle holds for paraphrase and transcription, though with some difference in detail. The difference in the former case is that paraphrase is syntactical, practically invariant for a given speech community, and therefore considerably more "rule-like" in character than interpretation, which, to a considerable extent, is bound to situation and purpose. In the case of transcription, the rules amount to precising definitions of what we take to be essentially logical notions (e.g. connectives and quantifiers). Transcription might be thought of as a "master" interpretation which ties certain key (syncategorematic) terms of ordinary discourse to a particular formal method for assessing inference. But it is important to note that even this "master" interpretation (which grounds the rules of transcription, as it were) does not require extensional explication, nor is it immune to change.

It is illuminating in this connection to examine Hendry's criticism that Otto's rule CI (the transcription rule which specifies that 'if L then M' is to be rewritten as 'L ⊃ M')

... makes the translation derivation of 'Es ⊃ Gsp' from 'if the number of stars is even then the number of stars is greater than the number of planets' altogether too easy. (p. 321)

Since Otto's concern is to show how translation into the notation of a logical formalism can be undertaken systematically, it is somewhat beside the point to suggest that the choice of the target language is "too easy." Nothing in Otto's work speaks to the point of whether specifying the target language will be easy or difficult. He does, however, say that choosing a target language is a theoretical concern and should not be confused with the concerns of translation per se; and, hence, we might fairly conclude, it is not an easy matter at all. If it seems desirable for theoretical reasons to translate into set-theoretic notation, appropriate adjustment of the transcription rules would initiate an alteration of the system of translation, but it wouldn't change the linguistic basis of translation derivation one bit. Nor does it affect the task of assessing inference, and this for two reasons: (1) to answer the question about validity, one does not need to know the truth value of premises; and (2) if a form of an argument is valid, the argument in question is valid, although not conversely. Just as soundness of an argument is not to be confused with the validity of its form, so also the choice of a formal language for theoretical purposes is not to be confused with the apparatus of translation.

But now it becomes clearer just what the connection is between Otto's thesis about translation and theory of mind. If he is right in his claim that, for example, perception is essentially "translation" from sensory processes into the content of experience, then when we perceive we do so of necessity without ontological assurances. Moreover, the "intentional structures" that give our experience its content and that "drive" us toward ontological commitment are essentially translation structures--that is, deeply internalized rule-like functions. At this point, it is appropriate that we take a look at the problem of translation as viewed from the perspective so widely assumed to be opposed to the analytic tradition, namely the phenomenological. In the following commentary, Steve Fuller reacts to Otto's paper by developing a general critique of the standard analytic conception of translation, particularly as that has been advanced by latter-day analytic philosophers such as Quine and Davidson.

Fuller does not limit his discussion to the technically narrow scope of translation from ordinary discourse into logical notation, but rather approaches the problem in its most general linguistic terms as the task of translation from one natural language (the alien language) into another (that of the translator). In so doing, Fuller aims to emphasize the philosophical difficulties involved in attempting to do this given the intricate way in which meaning--particularly the intentional--appears to be bound up with context, the substance of which is so often hidden in the background of a culture, and largely unexpressed. This latter, an "invisible" corpus, as it were, engenders what Fuller calls the "inscrutability of silence," a condition having weighty consequences for the viability of the analytic project of translation as that has been generally understood.


STEVE FULLER

Blindness to Silence: Some Dysfunctional Aspects of Meaning Making

One of Herbert Otto's main goals in "Meaning Making: Some Functional Aspects" appears to be to stage a sort of crucial experiment between contemporary analytic and Continental approaches to meaning. Otto construes the point of contention very much in the manner of an analytic philosopher. He assumes that whatever other things words do (and we are asked to think of J. L. Austin here), they aim to inform. It follows that adequate translation must, at least, reproduce in the target language (TL) information conveyed originally in the source language (SL). [1] Moreover, Otto understands this information to be something objectively available to both languages, and capable of analysis in an extensional semantics.

The question to be put to him is whether information, in this strict sense, is all that is conveyed. He seems to think the answer is affirmative. Continental philosophers have suggested that there is more. They point to "expressive" or irreducibly "intentional" elements of language intimately tied to context and the subjectivity of the utterer, elements quite apart from the information conveyed by an utterance but no less necessary in making sense of it. Otto, however, seems to believe that the process of expressing an intention is, in principle, no different from any other that extensional semantics is designed to handle--only, in this case, the SL is "the language of thought." In short, Otto presumes that adequate translation must at least convey objectively specifiable content, or "information." Moreover, after examining seemingly exceptional cases raised by Continental philosophers, he concludes that even those cases can be analyzed as instances of conveying information. This would seem to make the conveyance of objectively specifiable content both necessary and sufficient for adequate translation. It is precisely this conclusion that I want to challenge. I shall proceed by showing that the leading analytic accounts of translation--Quine's and Davidson's--fall short of appreciating the full complexity of the task faced by professional translators and other practitioners of the human sciences, an appreciation that Continental approaches have tended to have.

1. The Inscrutability of Silence and the Problem of Knowledge in the Human Sciences.

Perhaps the fundamental epistemological problem besetting anyone who seeks a systematic understanding of human beings, and who wishes to rely on utterances as evidence, concerns the criteriological status of silence.

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 325-338.

© 1988 by D. Reidel Publishing Company.


More explicitly: Is silence a mark of the familiar or of the alien? Still more explicitly: If a "concept" (a belief or other intentional state using the concept) familiar to the humanist is missing from "the record" of a culture, should he conclude that the culture found the concept so familiar as not to require mention, or should he conclude that it simply lacked the concept? As we deal with this problem--which we may call, with all due respect to Quine, the inscrutability of silence--it will become clear that the final verdict has yet to be delivered on what Kuhn and Feyerabend have canonized as "the incommensurability thesis." Indeed, we shall see that far from refuting this thesis, Davidson's transcendental argument for the translatability of alien discourse is quite compatible with it. [2]

To take a first look at silence, let us consider a case adapted from Paul Feyerabend (1975, ch. 17). The Homeric epics mention various parts of the human body without ever mentioning the body as a whole. Does this mean that the archaic Greeks had no concept of the body qua unit, or, as we might normally think, that they intended the concept as implicitly understood? Such a question is fundamental to the epistemology of the human sciences because it forces us to justify a maxim without which no systematic understanding of human beings would seem to be possible: namely, for speakers A and B, if A says something that B understands as p, then, unless B has reason to think otherwise, A may be taken as intending all that is normally presupposed by p. Of course, "normally" needs specification, but, relative to our example, if Homer appears to be speaking of limbs and organs, then it is clearly "normal" for the classicist to understand Homer as presupposing at least a whole body of which those limbs and organs are parts. Moreover, since it is difficult to imagine an interpretation of Homer where the presupposition would be mis-attributed, the classicist would want first to find evidence of mis-interpretation, such as anomalous utterances, before suggesting an alternative interpretation. All this seems to be sound practice--that is, until we try to justify it.

As Paul Grice (1957, 1975) and others have pointed out, B is justified in attributing certain "implicitly understood" presuppositions to A, only if A is understood as addressing B. It is important to see why this is so, since the classicist may persist in believing that it makes sense to attribute to Homer the concept of body qua unit, even though fully aware that Homer is not specifically addressing him. What the classicist fails to see is that though it may seem reasonable to make such attribution, that does not explain why Homer never articulates the notion. An intuition, though primordial, need not go without mention; indeed, in the psychology of perception, our own concept of bodily wholeness is a major topic, namely, "proprioception."

It might seem that a Gricean account could explain how Homer possessed, yet never mentioned, the concept of bodily wholeness: Homer was addressing an audience for whom mention of the concept would have been gratuitous; thus, he was obeying the Quantity Maxim of conversational implicature--that speakers should say no more and no less than is needed to be perfectly understood by their intended audiences. Such an account would be true were Homer addressing us--but clearly he is not. Indeed, as Matthew Arnold famously pointed out, we know less about Homer's intended audience than about Homer himself (Newmark 1981, p. 11). And even if the classicist did know the identity of Homer's audience, since Homer himself would not recognize the classicist as one of its members, the classicist ends up engaging the Homeric text in the epistemic role of a spectator to an exchange--between Homer and his audience--in which none of the utterances are intended for him. [3] The ultimate private conversation!

Thus, the classicist cannot justify his taking silence as a mark of the familiar by appealing to the philosophical account of communication represented by Grice. This is not to say that the classicist must therefore conclude that Homer did not have a concept of bodily wholeness. Rather, the classicist's epistemic stance simply does not permit him to decide between the two interpretations. While the Homer example is extreme, the same problem can be refashioned for all cases in which an author did not intend the interpreter as part of his audience.

Besides highlighting the inscrutability of silence, the above example functions as a kind of "duck-rabbit" Gestalt for the problems of interpretation that have recently vexed philosophers of language and science: Does the inscrutability of silence illustrate the indeterminacy of translation, or does it illustrate the incommensurability thesis? Ian Hacking (1975, 1982) has suggested that these two theses offer contrary diagnoses of what can go wrong during interpretation. On the one hand, the scientist may arrive at several incompatible, but equally adequate, TL interpretations of an SL; on the other hand, he may be faced with not even one adequate interpretation. Interestingly, while most philosophers find the former thesis--indeterminacy--the more compelling, most practitioners of the human sciences (especially literary critics and anthropologists) are pulled toward the latter--incommensurability. On the surface, these two views appeal to quite divergent intuitions about the nature of interpretation. However, I shall argue that, like the "duck" and "rabbit" faces of the famous Gestalt, indeterminacy and incommensurability are themselves just complementary ways of interpreting the inscrutability of silence.

First, notice the difference in the kinds of arguments used to justify the two theses. From Davidson's (1983; see also Rorty 1972) articles on interpretation, it is clear that the indeterminacy thesis is a consequence (intended or not) of a transcendental argument to the effect that every language is translatable into our own. Davidson asks us to conceive of a situation in which we could identify a collection of signs as a language without at least having implicitly interpreted them. Since he believes that such a situation is inconceivable, Davidson concludes that translatability is a necessary condition for recognizing signs as linguistic. Beyond that, however, Davidson is not much concerned with which interpretation we confer on those signs. Given his concept of translatability, this makes sense since he does not offer the interpreter any real choice: translatability is defined as showing most of the sentences in a TL to be true and the rest understandable errors. And so, even when an alien sounds his strangest, the scientist must opt to interpret him either as having false beliefs forged from familiar concepts used familiarly or as having true beliefs forged from those concepts used idiosyncratically.

But even if Davidson were interested in resolving this indeterminacy, his use of transcendental argumentation would not help. The reason, simply put, is that a transcendental argument typically establishes that X must be the case without (and perhaps instead of) establishing how one would identify instances of X being the case. Thus, why does a Humean remain unimpressed after hearing a Kantian argue that our experience of the physical world would be inconceivable if every event did not have a cause? The answer, of course, is that such an argument, even if valid, does not help determine particular causes of particular events--which, of course, is the Humean's central interest. Hence, it is clear that an affirmative transcendental argument about the general case (cause per se) is quite compatible with an empiricist argument skeptical about particular (actual) causes. Not surprisingly, we find the incommensurability thesis typically appearing as the skeptical outcome of empiricist arguments grounded in particular cases of failed or impeded translation from native cultures.

As an illustration of the incommensurabilist's role as "Humean hermeneutician," consider how Peter Winch reconciles his view that native cultures can be understood only from the inside (that is, complete translation into one's own discourse is impossible) with his view (which he shares with Davidson) that there are cross-cultural principles of interpretation and rationality:

I never of course denied that Zande witchcraft practices involve behavior we can identify as "the recognition of a contradiction." What I was arguing, though, was that we should be cautious in how we identify the contradiction, which may not be what it would be if we approach it with "scientific" preconceptions (Winch 1970, p. 254).

The key words here are "recognition" and "identify." The Zande and the anthropologist may assent to exactly the same rules of deductive inference, yet make different judgments on whether particular natural language arguments are valid by those rules. Failure to see this stems from a failure to appreciate the inscrutability of silence. Thus, an anthropologist may identify an argument uttered by a Zande speaker (assuming, probably contrary to fact, that "arguing" is a legitimate Zande speech act) as invalid simply because he fails to supply suppressed premises readily supplied by the speaker's intended audience. Or, again, an anthropologist, much to the consternation of the Zande audience, might judge an argument uttered by a Zande speaker valid because he, the anthropologist, read into the argument more than what was warranted by the speaker's actual utterance. In either case, whether out of parsimony or charity, the anthropologist has failed to appreciate how the Zande language (with its particular syntax and semantics, together with those universal principles of rationality that equip it to convey truths) is converted into timely and efficient pieces of discourse by its speakers. In short, the anthropologist has failed at Zande pragmatics.

To recapitulate: I have claimed that the indeterminacy and incommensurability theses are just complementary ways of interpreting the inscrutability of silence. We then saw that Davidson believes, on the basis of a transcendental argument, that the problem of interpretation is solved once it is shown that at least one interpretation is possible for any given text. However, we also saw that the incommensurabilists, echoing Hume, believe the problem of interpretation is just beginning once it seems we can go no further than to provide a transcendental argument. In effect, they emphasize Davidson's failure to show that a text can have exactly one interpretation. They thus intimate that incommensurability is a very subtle, if not impossible, problem to solve. In any case, it requires that the communicative context of the text we aim to interpret be empirically specified. Indeed, a key reason why Davidson and his partisans do not explicitly derive the incommensurability thesis from their own failure to overcome indeterminacy is that they regard the sentences of a text as the sole objects of interpretation, thereby neglecting the silences that allowed those sentences to function as economical expressions of thought when originally uttered. In short, the Davidsonians commonsensically, but fallaciously, equate the unsaid with the unspecified.

2. The Blindness to Silence: Deconstructing the Analytic Approach to Translation.

If the inscrutability of silence is indeed the fundamental epistemological problem of the human sciences, why then has it gone relatively unnoticed? In particular, how could the error just attributed to the Davidsonians have arisen in the first place? One obvious source is the way the problem of interpretation was originally posed in analytic philosophy; that is, through Quine's (1960, ch. 2) radical translation episode (RTE). Because Quine stipulated that the anthropologist had to translate discourse from scratch, the RTE was not presented as an especially communicative one--notwithstanding that the native speaker had to at least recognize the anthropologist as his intended audience when answering "yes" or "no" to various analytic queries. This is a subtle but significant point, since, as we have seen, the incommensurabilist reading of inscrutability denies the possibility of interpretation unless the communicative context of an utterance be recovered.

Since there are no commonly accepted means for determining that we have understood what someone has said, and since communication is essential to any sustained human endeavor, we are forced to presume that we have understood our interlocutor until a misunderstanding has been brought to our attention. This seems a reasonable strategy, one that has supported many years of successful human interaction. However, it assumes that misunderstandings are sooner or later detected and, furthermore, that they are detected as such. But we can imagine misunderstandings persisting for long periods because the parties are using much the same language, yet are using it to mean systematically different things. The longer such discourse proceeds unchecked, the more the misunderstandings accumulate, until finally a "crisis" arises causing a breakdown in communication. To make matters worse, it may be that the crisis is not then diagnosed as the result of accumulated prior misunderstandings, but rather is taken to issue from some deep conceptual problems that none of the current interlocutors are able to resolve to everyone's satisfaction. It would not be surprising were these "problems" to manifest either as an inability to apply a concept to an anomalous case, or as a paradox whose solution required a more finely grained lexicon than then available. In the end, the discourse community would fragment into schools, paradigms, and disciplines--quite in keeping with the Biblical tale of the Tower of Babel (for which reason it might be called the "Babel Thesis"). Notice that for this entire scenario to be true, nothing in the historical record would have to be different.

Analytic philosophers of language, such as Quine and Davidson, make a point of arguing that the Babel Thesis must be false if translation is to be possible at all. In order to make such a strong claim, they must have ways of talking about the transmission of knowledge that systematically prevent the Babel Thesis from being expressed as an intelligent alternative. With this thought in mind, I shall examine four features of Quine's RTE that indirectly serve to make the Babel Thesis less plausible: first, the way in which the idea of translation is construed; second, the way in which the idea of linguistic rule is construed; third, the role given speech as the paradigm of language use; and fourth, the implicit aims for constructing translations.

First, the theory of translation implicit in the radical translation episode (RTE) is quite unlike the one in the actual practice of natural language translators. Indeed, this theory reflects Quine's training as a logician in the heyday of logical positivism. To see what is being suggested here, consider two general strategies for translation.


(T1) The translator renders an alien text in sentences nearest in meaning to ones speakers in his own language would normally use, even if it means losing some of the ambiguity or nuance in the alien text.

(T2) The translator renders an alien text in sentences nearest in meaning to ones that, though grammatically possible in his own language, require a suspension of normal usage, perhaps including the introduction of neologistic terms and distinctions intended to capture semantic subtleties in the alien text.

In short, translation proceeds in (T1) by the translator adjusting the alien language to fit his own, while in (T2) it proceeds by the translator adjusting his own language to fit the alien one. Quine's RTE complies with (T1), the native speaker simply responding to test cases selected by the translator specifically to reflect semantic distinctions drawn in the translator's own language.

As a general account of translation, (T1) implies that a set of noises or marks does not constitute a meaningful utterance unless it can be translated into one's own language. The model of this position is Wittgenstein's Tractatus, which argues that the limits of translatability cannot be recognized as such: either one is able to give a complete rendering of the propositional content of an alien text in one's own language or one is forced into silence. The historical sources of (T1) are Russell and Carnap, who gave the name "translation" to the task of isolating the propositional content of natural language sentences and reproducing that content in a formal language. And even though this project of translation--championed by logical positivism--was abandoned thirty years ago, Quine (as well as Davidson) continues to privilege the translator's language as the non-negotiable basis for making sense of the native's utterances. He manages this rather subtly by claiming that all languages are implicit theories of physical reality, with the anthropologist's language differing from the native's only in terms of its richness. This move commits the natives to, among other things, having the same interests that the anthropologist's culture has in using language (specifically, interests associated with representing reality). Thus, the fact that the anthropologist needs to force the native to respond to specially designed situations can be interpreted as showing the native's language less adequate to its own goals than the anthropologist's language.

Moreover, the anthropologist has at his disposal a repertoire of linguistic distinctions that conceal this tacit evaluation without causing him to worry that he might be misreading what the native has said. The two most frequently used distinctions of this kind are probably cognitive vs. emotive and propositional vs. performative. Any aspect of the native's utterance that the anthropologist cannot readily check against the semantic categories of his own language becomes a candidate for the second half of the dichotomy. However, these distinctions start to look suspiciously ethnocentric, once it appears that most of what the native says turns out to be emotive or performative (Sperber 1982). Indeed, the principle of charity itself may be read as a covert statement of ethnocentrism, since it instructs the anthropologist to interpret the native either as saying something that the anthropologist already knows (and perhaps can articulate better) or as erring because the native lacks some background knowledge that the anthropologist has. In other words, charity does not allow the possibility that the anthropologist and the native may have a legitimate, cognitively based disagreement.

By contrast, the professional translator's implicit theory of translation, (T2), affords him the opportunity to strike a critical stance toward his own culture. This opportunity arises whenever the translator confronts fluent native expressions that can be rendered in his own language only with great difficulty, as witnessed by the number of neologisms he must construct. Moreover, the awkwardness of these neologisms is readily noticed, giving the translation a distinctly alien quality, quite unlike the way the original expression must have seemed to the native. For the philosophically minded reader, a most vivid example is Heidegger's (1962) attempt to recapture in German the metaphysical distinctions drawn by the ancient Greeks, who were, of course, noteworthy for engaging in discourse much more publicly accessible than Heidegger's "faithful" rendition. In this case, we see that unlike the (T1) translator, the (T2) translator recognizes the limits of translatability in the very act of translation; for the more attentive he is to the semantic distinctions drawn in the original (as Heidegger was), the more he also comes to emphasize the "otherness" of the native language and, hence, the inability of the native language to serve a particular function outside its original context. To put it bluntly, if a translation attempts to be too "close" to the original, it ends up defeating the overall purpose of translation, which is to render the foreign familiar.

Depending on specific goals, translators resolve this tension in a variety of ways, but it should be noted that in each case some information contained in the original is lost. And while some of this lost information can be recovered by returning to the original text and setting new translation goals, for reasons of economy attempts at recovery are rare, and never systematically executed. This means that to a large extent, the knowledge our culture has gathered and transmitted over the centuries has been captive to the changing aims of translators--which, in turn, is the basis for whatever truth there is in the Babel Thesis.

The second way in which RTE conspires against the Babel Thesis may be captured by the following question: How are linguistic rules to be characterized, independently of whether they appear as a generative grammar designed by a computational linguist in the office or as a translation manual constructed by an anthropologist in the field? In either setting, the rules are normally conceived as positive directives for arriving at syntactically and semantically correct utterances. However, in the RTE, the only evidence the anthropologist has that native discourse is governed by some rule or other is the native's negative response to the anthropologist's incorrect utterances. Quine is fully aware of this; indeed, it is the essence of the indeterminacy of translation--which, in effect, claims that no amount of negative feedback from the native will ever be enough for the anthropologist to determine what positive rules he has been breaking. Thus understood, Quine's thesis locates the "indeterminacy" of translation precisely in the epistemic gap between the native's direct positive grasp of the rules and the anthropologist's indirect, negative reach for them.

But suppose linguistic rules were themselves inherently indirect or, as Wittgensteinians like to put it, "open-textured." Then, the fact that the anthropologist never seems to get enough evidence for inferring the native grammar would be the result of the rules themselves being nothing more than negative directives defining what cannot be meant by a certain expression in a certain context, but otherwise leaving open what can be meant. In that event, indeterminacy would not be merely a consequence of the anthropologist not being a native speaker; but rather, would be a feature built into the very structure of language itself, whose constraints on its users would be more ill-defined than normally supposed. We would not be surprised then at the native himself not being able to articulate the rules governing his own discourse, or, at least, not being able to articulate rules that are consistent with the judgment calls he spontaneously makes about what counts as "correct usage." Linguists in fact constantly run up against such discrepancies when testing the psychological validity of a grammar (Greene 1972). Not only would the idea of linguistic rules as negative directives illuminate such discrepancies, but it would also explain the pragmatic source of terms imperceptibly shifting their meanings and referents over time, a phenomenon long evidenced in etymologies, and which has been a cornerstone of the incommensurability thesis.

Third, in the RTE, since the native is in the presence of the anthropologist, mistranslations can be corrected immediately after they occur. This is one feature of speech as a linguistic medium that distinguishes it from writing. There is, for example, little actual face-to-face contact among either the people who make, or who record, the history of science. Admittedly, there is a fair amount of such contact among members of a school of thought or a scientific community confined to, say, one academic institution. Indeed, this constant, and largely speech-based, interaction ensures the formation of strong normative bounds on what can be said and done. But such bounds do not normally extend to other institutions, the work of whose members is encountered almost exclusively through the written media of journals and books.

In the case of written communication, members of one community regularly take their ability to incorporate the work of another community into their own research as evidence of having understood the nature of the other's activities--hardly a foolproof strategy for the kind of translation Quine's anthropologist wants. Indeed, the curious historical trajectories often taken by disciplines may be explained in part by this failure to distinguish clearly between understanding and using someone else's work. As long as this distinction is not made by a community of researchers, incommensurability remains a significant possibility.

The fourth and final feature of the RTE that may seem to cast doubt on the incommensurability thesis concerns the goals of translation--more specifically, whether all attempts at translation have at least one goal in common. Quine assumes there is some intuitive sense in which understanding someone else's discourse can be pursued as an end in its own right, namely, as the project of semantics. This explains why Quine's reader is never told what the anthropologist's aim is in translating the tribal language in the first place, aside from preserving the content (i.e. the reference, not the sense) of tribal utterances. On the other hand, an adequate understanding of another's discourse is often a means for one's own cognitive ends. And depending on the nature of those ends and the constraints on how they may be achieved, various translations may pass as adequate understanding. The point here is analogous to the one made by Bas van Fraassen (1980) about the nature of explanation in The Scientific Image: just as there is no privileged "scientific" explanation that is the best answer to all requests for an explanation, so too there is no privileged "semantic" translation that is the best answer to all requests for a translation.

But let us turn briefly to the Roman orator Cicero, who is credited with originating what analytic philosophers generally take to be the only theory of translation: namely, that the sense of the translated language should be preserved in the translating language (Bassnett-McGuire 1980). For we shall find that Cicero's motivations were not quite as they seem to modern eyes. Cicero did not advance the sense-preserving view of translation in order to curb the tendencies of readers solely interested in the use-value of texts. On the contrary, he held that sense-preserving translation offered the most effective means for preserving and transmitting the accumulated wisdom of the Greeks. In other words, Cicero took maximum understanding as necessary for making the most use of another's discourse. No one today would hold this up as particularly rational once the amount of time needed to fully understand what a precursor meant is weighed against the likely payoff of this understanding for our own research. But of course, Cicero presumed the view, strong even during the Scientific Revolution, that intellectual progress consists in showing how one's current research, whether speculative or empirical, illuminates some ultimate source of knowledge, usually some obscure Greek, Hebrew or early Christian text. Indeed, a key wedge dividing what we now call "the sciences" from "the humanities" occurred in the eighteenth and nineteenth centuries, when the sciences lost that Ciceronian sensibility--a fact likely responsible for our current inability to see any problems in using someone's ideas (that is, paraphrases of his text) even if we cannot fully see what he had intended when he first articulated them.

A common literary construal of the sense-preserving thesis that has escaped the notice of analytic philosophers is the genre-preserving translation, which requires that the translator capture not only the "content" of the original but also some sense of how its syntax indicated the kind of work it was to its original audience. To take some simple examples, works originally composed as poems should look like poems in translation, histories should look like histories, science like science. But, since we are far from a general theory of stylistics capable of distinguishing histories, sciences, and other so-called cognitive discourses from one another, it is not clear what changes would need to be made in actual translation practices.

The reductio of the stylistic strategy is captured perhaps in the idea that not only should the translator represent the syntactic features that made a text accessible to its original audience but also those features that make it inaccessible, or at least alien, to its current audience. In formulating the hermeneutical enterprise, Friedrich Schleiermacher argued that the only way in which the reader is encouraged to seek out the tacit presuppositions (and hence underlying meaning) of a previous discourse is through a translation whose obscurity forces the reader to question even the most elementary thought processes of the author. The maxim assumed here, that difficult expression provokes deep thinking, may be repugnant to the instincts of analytic philosophers; nevertheless, we should not forget the fact that Schleiermacher's counsel of obscurity was followed not only in Germany, but also as the major criterion of adequacy for translation during the Victorian period, which led translators and other conveyors of distant cultures (including Carlyle, Browning, Pater, and Fitzgerald) to render that "distance" stylistically in an archaic, stilted English prose.

Notice that the hermeneutical strategy turns Quine's principle of charity on its head. For rather than minimize the number of sentences in the translated language that turn out false or strange, Schleiermacher proposed to maximize their number. Quine would, no doubt, respond by pointing out that the hermeneutical strategy actually removes the most crucial check on the adequacy of translation, namely, that it renders the author an independent rational agent. However, Schleiermacher would probably reply that the "rationality" of human beings lies not in their recurrent--perhaps even universal--patterns of conduct, but rather in their ability to render meaningful unrepeatable--perhaps even unique--situations. Moreover, this reply would not be due merely to the influence of Romanticism on Schleiermacher, but it would reflect a major alternative tradition in the history of rationality, beginning with Aristotle's discussion of judicial discretion in which the paradigm of reason is located in the practical rather than in the theoretical (Brown 1977).

The point is that even if philosophers such as Quine and Davidson are correct in regarding a theory of translation as a covert theory of rationality, that at best gives us only a functional, not a substantive, definition of "rationality." In other words, the analytic philosophers should be taken, not as having argued for any particular theory of translation or theory of rationality, but only for a necessary connection between those two sorts of theories, regardless of their content. But, in that case, incommensurability again looks plausible if only because the very idea of sense-preserving translation has itself been subject to changes in sense and the very idea of rationality has itself over the years been exemplified by individuals who would not consider each other rational.

But even if we granted the correctness of Quine's belief that optimal translation strategy should take the principle of charity as a regulative ideal, it still would not follow that misunderstandings will tend to be minimized and incommensurability eliminated. As we have seen, outside the artificial setting of Quinean concerns the attempt to minimize the number of sentences in the translated language that turn out false or strange can be quite easily seen as a strategy for co-opting the author's beliefs into the translator's set of beliefs and smoothing over whatever real differences remain. In other words, the principle of charity might be designed to promote a form of Whig History, where the historical figures have only the options of either giving inchoate expression to our current beliefs or simply being deemed irrational. The unpalatability of these alternatives has moved Michel Foucault (1975) to devise an historiography of science that does away with the principle of charity and presumes incommensurability as a regulative ideal of historical inquiry (Hacking 1979). Foucault's "archaeological" strategy has been, roughly, to take the apparent strangeness of past discourse to indicate a genuine break with our own discourse. In this way, the sovereignty we ultimately exercise over interpreting the past can be methodologically curbed. [4]

3. Conclusion

In the foregoing discussion, we have seen that in spite of Quine and Davidson's noblest intentions to render the natives charitably, the consequences of simply presuming (until shown otherwise) the accuracy of such translations are likely to foster an ethnocentric account of the natives that is incommensurable with their own. Moreover, since most translation tasks are text--rather than speaker--based, the natives are not normally available to offer the translator the sort of regular feedback that would point up errors. Without any natives (or their surrogates), the incommensurability passes unnoticed, and hence, the inscrutability of silence. In arguing for this conclusion, I have raised several aspects of actual translation practice, which taken together cast, I think, serious doubts on the foundation of Otto's project of rendering the problem of translation in terms of "meaning making." Whereas Otto sees the expressive function of SL texts as something perhaps reducible to, or at least additional to, the text's informative function, we have seen that professional translators find that these two functions normally pull in opposite directions in any given translation task. The result is a "negotiated settlement" in TL, determined by the translator's interest in having the SL text translated in the first place. Consequently, in the history of translation practices, there have been different conceptions of "preserving the sense" or "conveying the information" of SL texts in the TL, which themselves have changed as the typical reasons for wanting a translation have changed. This variance in the idea of invariant content, the bane of all analytic accounts of translation, is precisely the ironic insight offered by current deconstructionist approaches to language.

* * *

As we saw earlier, the criticism which Hendry directed against Otto's paper was initiated from a stance quite in line with accepted analytic policy. Like Quine and Davidson, Hendry takes it for granted that a correct approach to translation implies a single, ultimate, ontological commitment. We have just seen that Fuller, by contrast, rejects this "neo-positivistic" conception of translation, and in so doing advances a critique of Quine and Davidson which he takes to be decisive against Otto's view of translation as well. He may be missing important differences between Otto and those proponents of classical analytic philosophy. For he seems to overlook the fact that Otto's approach opens up translation in a way that no longer requires a fixed ontology--a way that provides for the very suspension of ontological commitment called for by the phenomenological "bracketing" Fuller himself views as crucial. Yet at the same time that Otto's formulation of systematic translation eliminates the requirement of a fixed ontology, it nevertheless continues to adhere to the traditional analytic goals of clarity and rigor. The fact that Fuller directs attention to the problem of translation between natural languages, while Otto gives his attention to translation from ordinary discourse into the artificial language of logic does not appear to be essential to the key issues, since Otto has said he views his notion of translation as extendable to the general case--indeed possibly even to processes that may not initially seem to involve anything like translation (e.g. an important portion of the human perceptual system).

In his concern for preserving the integrity of human understanding as it occurs in diverse cultural settings and is expressed in different natural languages, Fuller argues that the Quine/Davidson approach to translation is but one of two possible strategies available to the anthropologist. Briefly put, these are (cf. Fuller: this volume, p. 231):

(1) The translator is to render a text by adjusting the linguistic elements of the alien language to fit those of his own language,

or,

(2) The translator is to render a text by adjusting the linguistic elements of his own language to fit those of the alien language.

Fuller points out that Quine and Davidson employ the first of these and consciously eschew the second, while Fuller himself would seem to opt for just the reverse. Otto, by contrast, seems to think that to the extent that the distinction makes any sense at all, his way of doing translation accommodates either strategy. The point is that the strategy that one adopts depends upon what Fuller himself has earlier noted, namely, the purpose for which the translator is doing the translation in the first place. If, for example, one happens to think--as Fuller says Quine does--that all languages are theories of reality (and one happens to be interested in theories of reality), and if one further believes--as Quine might--that our language is superior to every other for that purpose, then one might well want to, and have some justification for, following strategy #1. If, on the other hand, one is trying to grasp how the alien perceives things or feels about things, and if one believes that the subjective consciousness associated with the alien's words has no adequate counterpart in our own language as it stands, then it makes sense--and has some justification--to follow strategy #2. Indeed, this possibility provides an important insight into why languages are constantly undergoing change and elaboration.

But the issues are complicated, and there are surely other arguments to examine when contemplating a philosophical reconstruction of analytic practice. We shall look at some of these in the next chapter.


Chapter Four

PROSPECTS FOR DIALOGUE AND SYNTHESIS

The popularity of the computer "model" of mind has brought about a near-revolution in the psychological and related sciences, as well as a tremendous resurgence of speculative and critical activity in the philosophy germane to these areas. Talk of a Kuhnian "paradigm shift" has become commonplace. One can still feel the excitement of Aaron Sloman's The Computer Revolution in Philosophy (1978) reverberating across the disciplines, even as critics continue to echo the challenges against this "new philosophy" sounded quite early in the discussion by Hubert Dreyfus in What Computers Can't Do (1972).

4.1 Convergence

Recognizing that the various papers in this volume are individual efforts approaching the problem of mind from different directions, we feel it necessary to emphasize that we are not claiming this anthology to have achieved a synthesis of the presently diverse methods of analytic and Continental philosophy. Rather, what we are claiming is that by bringing together work from the two traditions we have allowed our audience to see in a single volume the concerns and styles of writers from both sides of the "divide", and that by way of our organization of the material and by the commentaries included, we have sought to achieve a serendipitous "environment" of investigation and dialogue in which new ideas and views may have been suggested, useful suggestions and insights exchanged, and varied concerns and puzzles shared. We are not trying to force any sort of "amalgamation" of methods--though we are open to discussion of that possibility. Nor are we, at this juncture, trying in any way to act as judge in current debate regarding which of the two current "methodologies" is the "better." Our aim has been to be constructive, not contentious.

Indeed, we see nothing remiss in the thought that perhaps a very different approach than either the analytic or the Continental is required, one that might ultimately become the "privileged" perspective in the study of the mind. But, even in that event, an environment of open discussion and diversity of views is more likely to engender such a result than is an environment of conformity. Even "privileged" methodologies stand to gain by competition.

Regarding the suggestion that terminology presents a problem for the broad audience we envisage, we readily agree. It has been our intention, at this stage, to deal with this difficulty by two means. First, by bringing into a common arena varying modes of expression, we have encouraged sympathetic appreciation of other ways of seeing and articulating aspects of human experience that might ultimately prove to be helpful in formulating a common language for understanding the phenomenon of mind; and second, we have brought forward several fairly concrete proposals regarding the possibility of natural commonalities underlying both human perception and human communication. Here, the notions of "intentional transaction" and "systematic translation" were central.

In this quickening of inquiry and debate, an old estrangement in philosophy has come to be seen in a new light. The analytic and the Continental traditions of philosophy suddenly find themselves facing a growing set of common philosophical and technical issues. The foregoing papers and commentaries have exemplified the increasing extent to which lines of communication are being opened between these two major attitudes. Thus our question at this point must be: is any of this effort helping to initiate an authentic "convergence" of understanding or method between these two seemingly disparate traditions?

The following essay by Joseph Margolis is an investigation of this pressing question. In it, Professor Margolis argues that there is indeed a "convergence" in the making, and its characteristics can be grasped--as well as its existence indicated--by way of seven "themes". These themes, he argues, can be adduced from close inspection of contemporary philosophical and scientific discussion. The philosophical "conduits" through which this new spirit of mutual concern and cooperation appears to be running are, according to Margolis, pragmatism, on the one side, and phenomenology on the other.


JOSEPH MARGOLIS

Pragmatism, Phenomenology, and the Psychological Sciences

What are the prospects of reconciling analytic and Continental theories of the psychological sciences?

1. Convergence and the Current Situation

There is a touch of absurdity in the work of the conceptual peacemaker who attempts to find a measure of agreement or near-agreement among seemingly warring factions. Of course, contention itself is a form of "neighborly" relation. At the present time, there is so much exchange and mutual examination going on between theorists said to belong to the Anglo-American tradition of philosophy, on the one hand, and the Continental tradition on the other, that the peacemaker's function is being eased by the anticipation that there must be a good deal of convergence of ideas where, before, there was only contemptuous ignorance. It is hardly merely the guilt of self-recognition that has encouraged such expectations: there's a good deal of truth in them. If we permit ourselves a very generous use of labels--which stricter partisans will undoubtedly resent--we might dare to say that analytic or Anglo-American philosophy (hardly coextensive notions) is converging toward pragmatism and that Continental philosophy (hardly homogeneous on anyone's view) is converging toward phenomenology, and that pragmatism and phenomenology are fast becoming analogues of one another.

Perhaps there can be no adequate apology for such deliberately outrageous dicta, once offense is given. But there is no offense intended, only the convenience of an admittedly cartoon generalization that, once offered, invites more fruitful and more detailed comparison. In the liberal spirit here adopted, we could say that the center of gravity of what we are calling "pragmatism" lies somewhere between the overlapping commitments of John Dewey and W. V. Quine; and that the center of gravity of what we are calling "phenomenology" lies somewhere between the late views of Edmund Husserl and the early views of Martin Heidegger. Putting the matter this way is meant to offset the appearance of single favorites among contemporary theorists. [1]

Once we are past these initial niceties, certain crisp themes of convergence prove to be close at hand. They certainly include at least the following: (1) the conceptual symbiosis of realist and idealist aspects of cognition; (2) the rejection of any and all forms of the cognitive transparency of the real world; (3) the legitimation of cognitive claims only holistically or at least initially on non-epistemic grounds; (4) the priority, regarding matters of legitimation, of praxical functions over cognitive ones; (5) the rejection of all forms of "totalizing," and an insistence on the underdetermination of universal claims with regard to available evidence; (6) the historicizing of praxical activity as well as of cognitive inquiry; (7) an increasing tolerance of relativism, both with regard to first-order and second-order cognitive claims. [2]

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 341-354. © 1988 by D. Reidel Publishing Company.

It may appear remarkable that a set of themes of such scope and detail can be given. However that may be, if they are correctly ascribed to the principal currents of contemporary Western philosophy, then surely they signal a trend toward convergence of a most profound sort. Perhaps few actually subscribe to (1)-(7); indeed, there are even some clearly among the "invented" company we are trying to identify who would surely oppose this theme or that. Relativism, for instance, is not a doctrine that many embrace comfortably. No matter. The trends are nonetheless evident in the recent literature. A fair impression of that fact may be got quite straightforwardly by comparing such seemingly incomparable documents as Quine's famous "Two Dogmas" paper and the Introduction to Heidegger's Being and Time. [3]

One of the most instructive features of the accounts given by Quine and Heidegger, respectively--a feature which is, in a way, "prophetic" of the inseparability of the traditions of pragmatically and phenomenologically oriented inquiry, as well as indicative of what could be discovered if the two traditions were integrated--is that, in each case, their distinctive power is also the source of their most distinctive weakness. This may be shown very easily.

Quine, for example, demonstrates (largely against Rudolf Carnap) that there is no principled demarcation between the analytic and the synthetic (between questions of meaning and questions of belief) and that, accordingly, there is no unique ordering or selection of distributed cognitive claims that can be confirmed or infirmed (taken singly) in "confirming" that the sciences (en bloc) may be fairly supposed to inform us about the actual world. But even if this be conceded--as the entire analytic tradition is inclined to do--we need not deny that, within a holistic science, workable divisions of analytic and synthetic statements will be found that are advantageous in supporting distributed claims. The point is that Quine's own argument precludes advancing any grounds for favoring the physicalist and extensionalist programs he is known to favor; and that his own bias in this regard very strongly suggests that there are weighted grounds for favoring his choice (contrary to the force of the "Two Dogmas" argument). This means that Quine's theory does not provide for comparative assessments of alternative ways of handling distributed cognitive claims (though Quine has his own convictions about how this should be done). It does not do so because it cannot do so; to have attempted to legitimate any pertinent rule or criterion would have been to defeat the point of exposing the very dogmas Quine attacks, and to have subverted the non-epistemic (pragmatist) legitimation of science en bloc. But what is extraordinary is that Quine himself, as well as a preponderant number of analytic philosophers who accept the "Two Dogmas" argument, has blithely continued to press for physicalism and the extensional elimination of intensional complexities within natural languages without the least defense or pretense of a defense, and without supplying any grounds for justifying decisions about what aspects of the apparent experience and discourse of actual communities can be sacrificed in the name of the paraphrastic program favored.

Heidegger, on the other hand, historicizes the sense in which, on his view, Dasein mediates the disclosure of alternatively compelling (and stable) metaphysical and scientific systems that, in principle, cannot be taken to represent in any sense at all the actual structures of an order of reality independent of inquiry. This is just the point of Heidegger's radical interpretation of the "subjectivity" of phenomenological discourse--that is, that a world of many things of given kinds depends, in an "originary" way, on what (metaphorically) is regarded as a relationship between Dasein and Sein, neither of which may be construed as "things" of some or any ("ontic") kind. One may fairly claim that the point of the fable about Dasein is just to deny that there is a transparently accessible world, or that we are endowed with the cognitive power to discern such a world. The upshot is that the Heideggerean picture is also incapable of legitimating any comparative assessment of the fruits of systematic science and philosophy; that, though it is not opposed to such assessment, it is much more interested in warning humanity of the inherent dangers of investing too fixedly in the peculiarly "contingent" order of whatever it is that science and philosophy may claim to have discerned.

Surprisingly perhaps, Quine and Heidegger "converge" here, not only in a broad way with regard to our seven themes, but also in their respective (and, it is important to note, principled) methodological incapacity. Thus, Quine advances his famous indeterminacy of translation thesis, in spite of the fact that demonstrating the actual extensional equivalence, holistically construed, of alternative parsings of a given body of science would require epistemic resources greater than those Quine admits; for if we had such resources, we should hardly be restricted to assessing the success of science en bloc and on non-epistemic grounds. [4] In a similar manner, when Heidegger contrasts Aristotelian and Newtonian physics, he merely "elucidates" their difference in conceptual orientation (in that they take opposed views on linking the explanation of change to the nature of the particular kinds of entities changed, and in that the Newtonian view shows a characteristically greater "ontological" danger than does the Aristotelian); but Heidegger never turns to assess the relative explanatory power of the two in terms of any account of scientific realism. [5] And, if he had, he would have been obliged to construe existential and ontological concerns as nothing more than a special subdivision of the "ontic" concerns of standard metaphysics. [6]

The upshot of these remarks is that, to the extent Western philosophy is converging along pragmatist and phenomenological lines, more or less in accord with the seven themes noted above, and more or less sympathetically with the sense of direction provided by Quine and Heidegger, we can discover no principled or legitimating grounds for an exclusive or near-exclusive or even very strongly weighted preference for an extensionalist treatment of the phenomena of the psychological sciences. Once phenomenology comes to be treated existentially in Heidegger's manner, and not (as with Husserl) in the manner of a transcendentally apodictic inquiry (which Heidegger effectively undermines with the fable of Dasein) [7], and once the realism of science comes to be treated in the pragmatist manner (holistically and non-epistemically), we are left with a very definite lacuna of a systematic sort: all our epistemic, methodological, and metaphysical projects become conceptually arbitrary in both pragmatist and phenomenological terms.

This is not to say that they are actually arbitrary, only that there is no way as yet supplied, within the terms of either Quine's or Heidegger's philosophical directives, to show that they are not arbitrary. In fact, it is a marvelous irony of recent efforts to bridge the gap between pragmatist and phenomenological themes that one of the most fashionable conclusions drawn from them is to repudiate altogether the pretensions of traditional philosophy itself: this of course is Richard Rorty's thesis, so much debated at the moment. [8] Nevertheless, it is quite clear that neither Quine nor Heidegger regard the work of the sciences as conceptually arbitrary; and neither does Rorty. [9] But, unfortunately, there is no sense in which we can acknowledge a competent first-order science--competent, that is, in assembling rigorously and self-correctively a body of distributed cognitive truths--without acknowledging the substantive relevance and ineluctable influence, on the direction of such a science, of second-order speculations about the procedures that should be rationally favored in such a science; and there can be no pertinent second-order speculations that are not addressed to the practices of an actual first-order science. Furthermore, the distinction between "first-order" and "second-order" questions of these sorts is itself the provisional recommendation of second-order reflections; and all such distinctions are entertained (in accord with something like our seven theses) in a way that is internal to the comprehensive cognitive life of a society--and not in any way "ordained" by the putative powers of a hierarchy of merely formal systems of discourse.



2. The Role of the Inquiring Subject

Once we grasp the "direction" of the conceptual symbiosis of empirical science and epistemology, we cannot fail to see that our converging themes place the inquiring subject (both individual and societal) in a peculiarly strategic and ineliminable role, and do so just at the point of--and, curiously, in the theories of Quine and Heidegger, just because of--disallowing any systematic, legitimated access to the distributed claims of a realist science. The irony is that contemporary programs, say, of extensionalist cognitive science, psychology, history, sociology and the like, that seek to eliminate or to reduce human persons or human selves to mere objects of inquiry, fly in the face of the underlying conceptual orientation in terms of which their own inquiries must have been originally encouraged and launched. For, the converging themes of both pragmatism and phenomenology--precisely in denying a global cognitive transparency (in which the intentional complexities of cognition itself would have been safely ignored as not distorting whatever of reality it disclosed [10])--oblige us to concede, for the entire range of cognitive claims, the ineliminability of cognizing subjects. This is what phenomenology insists on in an absolutely central way; and this is what pragmatizing science and philosophy effectively come to. In however skewed a sense, this is also what is intended by Popper's rejection of "the theory of ultimate explanation" (reductionism) and of scientific determinism (this, at least partly, on the grounds that "we cannot predict, scientifically, results which we shall obtain in the course of the growth of our own knowledge" [11]).

Broadly speaking, to admit a science as an effective inquiry commits us (consistently with our seven themes of convergence) both to the pertinence of legitimating science en bloc and of making comparative appraisals of would-be rational strategies regarding distributed claims within science, and to the ineliminability of what is meant by "subjectivity" in the phenomenological idiom. It follows rather simply, and without any need to pursue detailed disputes in particular disciplines, that inasmuch as (in the human sciences) human beings are at once both the subjects and objects of inquiry, and cannot be objects except insofar as they are, reflexively, inquiring subjects, the reduction or elimination of persons or selves requires the reduction or elimination of science and philosophy as well. [12] This is surely the point of Popper's claim. It is in a sense already adumbrated in Neurath's objection to Carnap's theory of the epistemic privilege of protocol sentences, and in Carnap's failure to provide a promised demonstration that discourse about the Erlebnisse of protocol sentences may be translated without remainder by means of a physicalist idiom. [13]

Page 354: Perspectives on Mind

346 Joseph Margolis

It also provides a reductio of Wilfrid Sellars's well-known version of scientific realism, regarding which Sellars declares:

According to the view I am proposing, correspondence rules would appear in the material mode as statements to the effect that the objects of the observational framework do not really exist--there really are no such things. They envisage the abandonment of a sense and its denotation. [14]

For, Sellars does not actually address (and, on the evidence, could not carry out any more successfully than Carnap) the elimination of persons, that is, of ourselves as cognizing subjects of the scientific theory by which they (we) are to be eliminated. [15] The same over-sanguine confidence lurks in Daniel Dennett's assurance that:

The personal story ... has a relatively vulnerable and impermanent place in our conceptual scheme, and could in principle be rendered "obsolete" if some day we ceased to treat anything (any mobile body or system or device) as an intentional system--by reasoning with it, communicating with it, etc. [16]

The same unexamined and undefended confidence is entailed in Donald Davidson's anomalous monism, so influential at the moment, according to which the holistic, profoundly intensionalized idiom of the rationality of selves proves to be a mere façon de parler for the convenient management of a physicalism expressed in terms of a token, but not type, identity between mental and physical events. [17] Apart from the demonstrable incoherence of his thesis [18], Davidson makes no attempt at all to defend his reductive monism--and apparently believes he need not, on the strength of the claim that the adequacy of an extensionalist idiom for science (effectively, the formal provision of a physicalist idiom) can be confirmed without attention at all to local semantic or interpretive uses of such an idiom within the operations of any particular science. But this is simply to confuse the philosophical neutrality of Tarski's conception of truth with the hardly neutral import for natural languages of Tarski's application of his own concept to portions of completely extensional languages. [19] Here, one sees directly the peculiarly subversive effect of post hoc application, along lines already sketched, of Quine's "Two Dogmas" argument.

3. Counterconvergence: Recent Attempts to Save Reductionism

What is peculiarly dampened in the analytic literature, and perhaps too floridly emphasized in the phenomenological, is that the notion of the human subject plays a double role in the human sciences and, at least implicitly, in the physical and formal sciences as well. One sees this at once if one concedes, for instance, that physics is as much concerned with human perceptions of the world, and human efforts to explain what is thus perceived, as it is with the putative structures and properties of physical nature independent of human cognition. This is the relatively unnoticed implication of Neurath's challenge to Carnap which, in effect, has been systematically muted (if not ignored) in the tradition that moves inexorably from Carnap to Quine to Davidson. [20] There can be no reduction or elimination of persons or selves as "objects" of psychology and sociology if there is no reduction or elimination of the very cognizing scientists who make the effort; and the undertaking of the first is conceptually inseparable from the second. The point was already adumbrated in the pre-phenomenological discoveries, by Brentano, of the double complexity involved in isolating the intentionality of mental states--and in the criticism of the inadequacies of Hume's empiricism and Kant's transcendentalism. [21]

But once we concede this point, we must--consistently with the themes of convergent pragmatism and phenomenology--see that there is no promising general strategy for eliminating or reducing persons or selves in physicalist or extensionalist terms. Here, too, the problem is a double one. For one thing, the effort to eliminate or regiment, in extensionalist terms, the strongly intensionalized idiom of intentional discourse cannot be managed in purely formal terms (as Quine and Davidson seem to think--the latter, apparently convinced he is following Tarski's lead, which seems not to be the case) but only in epistemically operative terms. And, second, the reductive project must, as we have already seen, be applied to the double role of selves or persons--where, by the double role, is meant whatever may be contingently ascribed to selves under scrutiny, and whatever may be drawn from such scrutinizing relative to such objects. On a strong reading of our themes of convergence, as well as on the strength of the poor record of attempted eliminations of intensional complexities, we have absolutely no basis at the present time--and no reasoned basis for confidence in the future--that a thoroughly extensionalist or reductive treatment of selves is possible. Needless to say, an extensional simulation or first-phase (input-output) mapping of finite segments of human behavior neither provides grounds for speculation about second-phase (dynamic, or processing, or real) similarities, nor provides grounds for any speculation about the simulation of open-ended capacities manifested in such segments. [22] The general weakness of reductionist arguments of the sort in question was already clearly shown by Hilary Putnam, at a time when he himself was still strongly attracted to a reductive treatment of the mind/body problem. [23]

There have been many strategies designed to obviate the finding we seem inexorably drawn to. One may even anticipate that, as the convergence we have sketched grows stronger, and as the peculiarly central theme of "subjectivity" is correspondingly featured, we may expect the partisans of strong reductionism (extensionalism) to launch more and more radical programs for warding off what seems an inevitable stalemate or defeat. A few specimen views will afford a fair sense of what such alternatives may be like. One possibility is to make the complexities of cognitive life--intensional complexities, in particular--parasitic on some more fundamental sub-cognitive stratum of the real world, one suited to a realist reading of cognition or perhaps to a restricted selection of (transparently construed) properties of that deeper stratum. If such a stratum could be viewed as behaving congruently with extensionalist requirements, then the problems of the psychological sciences could be resolved in the canonically favored way. This, of course, is precisely the Leibnizian-like strategy adopted recently by Fred Dretske; its import and strategic force are reasonably clear from the following opening remarks:

In the beginning there was information. The word came later. The transition was achieved by the development of organisms with the capacity for selectively exploiting this information in order to survive and perpetuate their kind ... [Let us] think about information (though not meaning) as an objective commodity, something whose generation, transmission, and reception do not require or in any way presuppose interpretive processes. [24]

Needless to say, the "information" postulated by Dretske is directly amenable to extensional treatment. Nevertheless, he does not explain how, in epistemically relevant terms, to specify such information independently of the complexities of cognition itself--which returns us to the themes we have seen as characterizing the convergence of pragmatism and phenomenology. [25] Dretske's theory (not unlike its predecessors) is a purely formal schema of the conceivability of an extensional model for the human sciences, not an epistemically motivated argument for its adoption. But no one denies that extensional or reductive models of selves (or restrictive, but not otherwise intensionalized, accounts of how cognizing agents use materials that are not themselves intensionally complex) are at least capable of being coherently formulated. [26] The issue is rarely the formal one; it is rather that of how, within the real-time constraints of inquiry, the candidate findings of particular disciplines and the consensually acknowledged cognitive capacities of human societies may be reconciled with such high second-order speculations as Dretske offers.

When we consider that the motivation for such theories is very likely one of bringing an extensionalized psychology into accord with the general drift of such themes as the seven we originally proposed, we can hardly fail to conclude that a strategy like Dretske's is, no matter how skillfully or modishly fashioned, little more than an adjustment of the Quinean or Davidsonian dogma already noted. Like theirs, it conflates the descriptive and the phenomenological, or dismisses the phenomenological altogether. Thus, precisely because of its insistent extensionalism, Dretske's proposal effectively threatens to reinstate (illicitly) some version of the "cognitive transparency of nature" thesis.

There is hardly any point any longer to merely improvising abstract models of cognition--that is, without bringing them closely into line with actual first- and second-order cognitive constraints. Why, for instance, should one, if the intensional complexities of actual cognitive efforts are patently intransigent to extensional reduction, simply ignore the general problem of showing how the handling of particular claims can be reasonably brought into line with something like Dretske's model? We can always assume it to be a foregone conclusion that intensional contexts can be regimented extensionally; but we should not fail to notice that this assumption itself tends to be increasingly characterized as, somehow, more likely true than false (or even well-nigh certain) when the truth of the matter is that it is (at best) a most important dogma, the tendentious status of which simply cannot be ignored.

Another strategy, Stephen Stich's, offers us the "syntactic theory of the mind" (STM), which, as Stich frankly concedes, "is not itself a cognitive theory [but a theory] about what cognitive theories are or ought to be." It differs from its nearest rival--the "strong representational theory of the mind" (Strong RTM), favored notably by Fodor [27]--in that "STM is not sanguine about the use of folk psychological notions in cognitive science. It does not advocate cognitive theories whose generalizations appeal to the notion of content." [28] In spite of this, Stich actually maintains that:

The basic idea of the STM is that the cognitive states whose interaction is (in part) responsible for behavior can be systematically mapped to abstract syntactic objects in such a way that causal interactions among cognitive states, as well as causal links with stimuli and behavioral events, can be described in terms of the syntactic properties and relations of the abstract objects to which the cognitive states are mapped. More briefly, the idea is that causal relations among cognitive states mirror formal relations among syntactic objects. If this is right, then it will be natural to view cognitive state tokens as tokens of abstract syntactic objects. [29]

What Stich attempts to do is to show that "the folk psychological concept of belief ... ought not to play any significant role in a science aimed at explaining human cognition and behavior"; that it "does not play a role in the best and most sophisticated theories put forward by contemporary cognitive scientists"; and that the argument may be applied "to the whole range of 'intentional' folk notions or 'propositional attitudes'--to any folk psychological state characteristically attributed by invoking a sentence with an embedded 'content sentence.'" [30]

Nevertheless, Stich does not directly support his thesis. What he does do is sketch, more narrowly, reasons for thinking that if a strong RTM can provide an explanation of folk psychological phenomena (by treating causal relations between mental states in terms of nomic connections regarding their "content"), then the STM can do as well by rejecting representationalism and by treating causal linkages solely in terms of the syntactic properties of the "abstract objects" it postulates. Stich concedes that, insofar as he is prepared to venture a view about the states that do enter into causal relations (that are "specified indirectly via the formal relations among the syntactic objects" to which those states are to be mapped), he favors a system of neurological or physical or brain states. [31] So it is quite clear that Stich is prepared to commit himself to the adequacy of a certain model of syntactic objects for empirical psychology or cognitive science, without recognizing a need first to adjust such a model to the (perhaps provisional) empirical findings of the range of current work--possibly including the findings (such as they are) of "folk psychology."

Of course, to accommodate the latter would effectively call into doubt the fittingness of any such theory, both because we have grave doubts about the independence of the syntactic and semantic features of any relevant system, and because the admission of folk psychology may specifically require accommodating intensional complexities that an inflexibly extensional syntax suited to physicalist descriptions could not manage. There is, then, something very odd about the presumably "empirical" relevance of positing such a syntactic theory, without first providing a reasonably convincing sketch of psychological causation.

The truth is that Stich believes that the explanation of psychological phenomena can be scientifically managed in principle, if: (a) the phenomena themselves are characterized solely in terms of behavior without reference to content (in effect, in strong behaviorist terms) or in terms of neurological or other physical brain processes and their stimuli (without reference to content); and (b) the explanatory model invokes nomological connections exclusively in physicalist terms. The model is certainly coherent, but Stich seems not to appreciate the conceptual dependence of what he calls a "syntactic" theory on prior empirical psychology, or even on the sort of psychology afforded by the rival Strong RTM. [32]

But even so, there is no point in claiming (as Stich does) that the STM is better than the Strong RTM if all it can do is economize its operational criteria with respect to whatever empirical generalizations the Strong RTM first establishes, or eschew reference to folk psychological "content" only if a physicalism of the sort sketched above were first suitably confirmed. And, that cannot be done without at least addressing the question of the physicalist reduction of phenomenological (as distinct from descriptive) subjectivity. Not only does Stich not lead us beyond these formal limitations, he positively insists that the states postulated by his theory are "non-observational" and that, "for all the foreseeable future," an STM theory will be obliged to make "ad hoc assumptions about causal links between B- and D-states on the one hand [that is, states that are physicalist replacements for folk-psychological, content-specified belief and desire states], and stimuli and behavior on the other." [33] Fair enough. But what then could Stich be proposing, except a straightforward program of empirical psychology of the behaviorist and physicalist sort? He is of course doing more. He is insinuating that there must be second-order considerations of some very strong sort that should convince us that the intentional psychology of our own day is moribund, that a better can be readily had, and that the improved theory can meet both descriptive and phenomenological requirements. In this, he is surely reiterating Quine's own pleasant dogma, though now with considerable refinement at the descriptive level.

The fact is that Stich does not directly consider the psychological features of making psychological inquiry: it is certainly not clear, for instance, whether there is the least prospect of formulating the work of the psychologist in the physicalist terms Stich favors. It also seems clear that Stich has not directly considered the convergence of contemporary Western philosophy on what (if we may use the term in as neutral a way as possible) the phenomenologists have identified as "subjectivity." To have done so might well have suggested the (possibly even principled) persistence of intentional and content-indexed (hence, intensionally complex) discourse. [34] For example, if Popper's thesis is correct and if we cannot predict or explain the results of genuine growth in knowledge, and if we admit (for the sake of a fair argument) that behavioral and physical regularities cannot be nomologically correlated with intentionally indexed phenomena [35], then Stich could hardly count on detecting all psychologically pertinent events--particularly those associated with the folk psychology he is bent on dismissing.

4. Intentionality and Social/Historical Features of Cognitive Life

What we must finally consider are those features of cognitively pertinent life that are systematically ignored or somehow unnoticed by the reductive strategies we have been canvassing and that contribute to the sense of convergence of the pragmatist and phenomenological currents. There are at least two such features that are particularly important: the social or consensual and the historical or emergent. If we take linguistic capacity as the paradigm of the distinctive powers of man and as the essential condition of the whole of cultural life [36], then both the consensual and the improvisational will seem entirely familiar to the practices of a natural society. For one thing, language and similar orderly practices will be ascribed to entire societies only. They are cooperatively shared, but neither completely nor equally internalized by all within a given society; hence, they are dependent for their smooth functioning on consensual tolerance or the interpretation of variations and innovations of orthodox practices. For another, the functioning of such practices can hardly obtain independently of a central body of changing but socially shared experiences and habits of life of a largely nonlinguistic sort--a fact which effectively insures the pertinence of the intentional content of the beliefs and institutions of each particular society.

These are factors that are noticeably not featured, or are inadequately accommodated, by the theories of, say, Quine and Husserl; whereas they are more adequately managed by Dewey and Heidegger. It is an irony that the most distinctive recent contributions to Anglo-American philosophies of the psychological and social sciences have clearly ignored (or opposed) the consensual (hence the interpretive) and the historically emergent (hence the open-ended and improvisational). On the other hand, the most recent Continental contributions are entirely committed to these two themes.

Certainly, on the analytic side, the most extreme rejection of the genuinely social and the genuinely novel with respect to language and concepts is marked in the radical nativism of Noam Chomsky and Jerry Fodor. [37] What is even more telling is that what doubts there are about the adequacy of the nativist thesis--for example, those along the lines of Dennett's objections, or of Stich's--have no more to do with either consensual or historical complications than do the original nativist claims themselves. What is clearly decisive, however, is this: if these and related features of linguistic behavior are given their just due, then it appears well-nigh impossible to eliminate intentional (and intensional) complexities from the human sciences.

On the other hand, it is only grudgingly and with a strong sense of jeopardizing the apodictic status of a phenomenology that does treat the intentional as the indelible mark of "subjectivity," that Husserl considers the bearing of the social and historical nature of transcendental reflection upon phenomenological functions. The social and the historical always struck Husserl as threatening unrecoverable concessions to contingency--which of course is precisely what Heidegger was willing to embrace in salvaging intentionality from the apodictic and in strengthening the sense in which the psychological could no longer be disjoined (at the human level) from the societal and the historical. We see the same impulse in the inquiries of Hans-Georg Gadamer and Paul Ricoeur, though the second was more closely drawn to Husserl. [38]

The final irony is this. The blending of the pragmatist and the phenomenological--along essentially all lines of our convergence themes (except for a muting of the sense of the historical)--appears quite early in the work of the Wittgenstein of the Investigations. [39] It must seem particularly odd that the analytic theorists of the psychological sciences quite regularly neglected Wittgenstein unless, that is, we concede that in spite of a general adherence to our convergent themes, those theorists were even more devoted (like Quine) to various physicalist and extensionalist ideals, and hence to ignoring or dismissing the intentional at both the merely descriptive and the phenomenological levels of psychological inquiry. On the Continental side, the matter is much less significant; for what is most memorable about Husserl's work is his recovery of intentionality and the theme of phenomenological subjectivity. For most contemporary Continental theorists, the search for the apodictic was, and remains, an aberration; and the denial of pride of place to the social and the historical was a consequence of Husserl's sufficiently suppressed alarm regarding the inescapability of those themes. The apodictic gives way almost at once in Heidegger and Merleau-Ponty. [40] Thus, Wittgenstein is much less salient in Continental than in analytic thought, particularly since he does not directly feature the analysis of history or intentionality--although neither is really very far from his central concerns. [41]

In effect, we are predicting the next phase of the philosophy of psychology, as well as an increasing radicalization of the philosophy of all the human and cognitive sciences. To deny the bifurcation of the psychological and the social at the human level; to deny the independence of linguistic behavior from largely nonlinguistic socially shared experience; to historicize the processes of such sharing; to construe the smooth functioning of society as involving considerable division of labor, the impossibility of a total internalization of all or most social institutions by the members of that society, and the compensating work of consensual interpretation; to provide for irregular, partial, strongly varying, and improvisational idealizations of the intentional import of society's practices; to insist on the conceptual complications of phenomenological subjectivity beyond any merely descriptive concern with explaining behavior: to concede all these and similar tendencies is to incline toward an increasingly open-ended, unsystematizable conception of distinctly human existence--one in which history itself is increasingly radicalized, intentionally uncertain, and open to strongly relativistic constructions.

What has muted these tendencies in the analytic tradition has been the persistence of a distinctly alien commitment to extensionalism: a commitment originally incongruent with pragmatist themes in Quine's implied challenge to his own adherence to that dogma. Now, with the infusion of a strong historicism and a strong emphasis on intentionality, the developing convergence of pragmatist and phenomenological currents cannot but quicken. It is unlikely that the almost solipsistic, systematically rigid, physicalist, functionalist, non-social, and non-historical models of cognitive science will continue for long their noticeable dominance.

But the cross-breeding of the two traditions is only in its infancy.

* * *

With little hesitation, Professor Margolis has endorsed the idea of "a trend toward convergence of a most profound sort." But this boldness presupposes specific characterizations of analytic and Continental philosophy. The former is explained by Margolis as a form of pragmatism falling "somewhere between ... John Dewey ... and W. V. Quine," while the latter--Continental philosophy--he identifies as a species of phenomenology falling "somewhere between the late views of Edmund Husserl and the early views of Martin Heidegger." It is from his characterization of these, as "conduits" of Western philosophical development, that Margolis derives his seven themes.

It may seem on the surface that Margolis has "tailored" these themes to support the convergence hypothesis, thereby leaving us to wonder whether he has begged the question. On the other hand, to the extent that these two philosophical stances--the pragmatic and the phenomenological--actually are central in contemporary Western philosophy, we do gain significant insight from Margolis' examination of the issue. However, there is an even deeper issue to be confronted, one whose exploration has additional important implications. It is an issue about reductionism, and in the following commentary, Ralph Sleeper, after carefully tracing its threads through the Margolis themes, concludes that to the extent that it remains an issue improperly resolved, the vision of convergence will fade, while those who persist in reductionist dogma become justly subject to a philosophical "impeachment" of sorts.


R. W. SLEEPER

The Soft Impeachment: Responding to Margolis

Margolis puts to himself the question: "What are the prospects of reconciling analytic and Continental theories of the psychological sciences?" Surveying these prospects optimistically, Margolis is nevertheless struck by the fact that not all participants in the analytic tradition are as well prepared for negotiating their differences with members of the Continental tradition as others are. It is evident to Margolis that only analysts who have foresworn "reductionism" are eligible, and that these "born again" analysts are those who have taken the message of pragmatism to heart. The trouble is that not all those who accept the new dispensation do so with the purity of heart that Margolis would require of them, for they maintain still--Margolis argues at length--a hidden allegiance to the old dogmas. Even Willard Quine, whose attack on the old dogmas has long been regarded as a paradigm of the new confession, fails to escape the soft impeachment. There is something "subversive," Margolis tells us, about Quine's teaching that corrupts his pupils, and he offers the work of Donald Davidson as a case in point.

Only when Margolis sets about listing the seven themes upon which the convergence between pragmatic and phenomenological approaches to the psychological sciences can be expected, do we begin to see why Margolis thinks it important to engage in this rear-guard action against reductionism. Translation of his seven points--they are so crisply given as to deter digestion without some preliminary chewing over--yields some needed background. I give the results of this more leisurely rumination by appending my translations to Margolis' originals, seriatim:

(i)

Margolis: "the conceptual symbiosis of realist and idealist aspects of cognition."

Translation: The old contentions dividing realists and idealists are mostly behind us; it is not easy in today's terms even to understand them. We worry today about "antifoundationalism" and "deconstruction," or "extensionalism" and "hermeneutics." We have seen Berkeley's idealism absorbed into Machist epistemology and, subsequently, re-absorbed into logical empiricism, where it was transformed by the Vienna Circle into a repudiation of metaphysics altogether. Expatriated to America it became the Unified Science movement and died a lingering but natural death; its successor, apparently, being but a vague and undifferentiated naturalism with a "linguistic turn" manifest in the nineteen-fifties and beyond.

355 H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 355-364. © 1988 by D. Reidel Publishing Company.


Hegel's idealism was absorbed by both Dewey and Heidegger and traveled different paths according to the transformations that each philosopher worked out. From Dewey's early and Hegelian rejection of the logical dualism of the analytic-synthetic, to his mature attack on all such dualisms, there emerged the central themes of Quine's attack in "Two Dogmas." Like Dewey, Quine was attacking the a priori ontology imported into logical empiricism by Russell from Frege. Carnap's "logical objects" were, as he put it in Word and Object, "entia non grata." The "foundations" of Der Logische Aufbau der Welt, already shaken by Otto Neurath's weighty objections, as Margolis correctly points out, collapsed. From Heidegger's early Hegelianism emerged an equally strong protest against Frege's domination of logic, a dissent that took the form of an urgent concern with the human knower as contributing, by means of his own interaction with the object of knowledge, to what can be eventually known. It is a strain in which the phenomenology of the "intentional object" is put over against the "scientific object" at first, but which becomes, in Hans-Georg Gadamer's genial reformulation, the view that: " ... science is no less science where it is aware of the humaniora as its integrative function ... " If this be "symbiosis," so be it. [1]

(ii)

Margolis: "the rejection of any and all forms of the cognitive transparency of the real world."

Translation: This is implicit in the first point, as I have translated it, for it is surely a consequence of the absorption of idealism into the transformational processes of knowledge in both pragmatism and phenomenology that the mind is no longer to be taken as the "mirror of nature" any more than that nature can be taken as what can be "mirrored." It is the one valid point that Richard Rorty manages to eke out of his imaginary "conversation" with Dewey, Wittgenstein and Heidegger in Philosophy and the Mirror of Nature. Margolis himself has addressed the matter elsewhere, as if to lend support to Rorty's meager results, in his essay on "Pragmatism Without Foundations." The problem with this thesis is that it is not at all clear that this shared claim of "antifoundationalism" in Rorty and Margolis entails a shared repudiation of any role for metaphysics. Does Margolis imply, as Rorty does, that no form of realism can be defended? Does Margolis imply, as Rorty does, that pragmatism must do without method as well as without metaphysics? If he does, Margolis may find himself naked to his enemies. For how shall he defend himself against the ontological reductionism of "physicalism" without either of these convenient tools? What Margolis seems to be saying is that, after all, there is a point to be made in favor of "psychologism." That there is, after all, something to be said for Lotze's Logik despite the combined efforts of Dewey and Frege to upset it. [2]

(iii)

Margolis: "the legitimation of cognitive claims only holistically or at least initially on non-epistemic grounds,"

Translation: This is not clear, for though there are all sorts of "holisms" around--a "pluralism of holisms" we might say--it is not easy to see them as "convergent." We are no better off when it comes to justifying cognitive claims on "non-epistemic grounds," at least if "non-epistemic" means--as I think it does in the vocabulary that Margolis favors--a rough equivalence to "praxis." In which case it is hard to see this thesis as adding anything to the three that follow, or--for that matter--to the two that precede it. "Praxis" is a notoriously slippery term, readily relativized to a variety of conceptual schemes and languages. "Holism" too, as the different relativizations of Quine and Davidson demonstrate, is equally compliant; Quine relativizes his to our common languages and our various Roots of Reference; Davidson to languages from which reference drops out altogether. [3]

(iv)

Margolis: "the priority of praxical functions over the cognitive, regarding matters of legitimation."

Translation: We should get our theory from our practice and take it back to practice for legitimation. This is sound pragmatic doctrine provided that we recognize the continuity between theory and practice as Peirce and Dewey taught us to do, each in his own way. But by speaking of the "priority" of the "praxical" Margolis ruptures this continuity in the direction of William James in "The Will to Believe." The reciprocity of "ends" and "means" is such that neither takes "priority" in the pragmatic logic of inquiry. Margolis' formulation leaves too much room for dualism here and the method of "tenacity" may slip in. I am relieved by the thought that what Margolis may be getting at is the recovery of "functionalism" in Continental thought after its early losses to "structuralism."

(v)

Margolis: "the rejection of all forms of totalizing, and an insistence on the underdetermination of universal claims with regard to available evidence."

Translation: Although hesitant about the reference for "totalizing,"

358 R. W. Sleeper

for possible contradiction with the use of "holistic" above, I see by the phrase after the copula that Margolis merely intends to note that we can never be certain of our "holistic" claims. It is this "tension" that Margolis finds in both Quine and Heidegger--neither is able to wholly legitimate his respective holistic claims--and that is a central theme of his essay. It is where Dewey's Experience and Nature clearly converges with Heidegger's Being and Time. [4]

(vi)

Margolis: "the historicizing of praxical activity as well as of cognitive inquiry."

Translation: Both pragmatism and phenomenology are "historicist." Margolis simply adds here another label to what he has already laid out. It would seem to me that he might also add that both pragmatism and phenomenology, after repudiating "psychologism," nevertheless work out their respective accommodations to it; a trend as evident in Dewey and Quine as in Heidegger and Husserl, and one which accords with their "historicism."

(vii)

Margolis: "the increasing tolerance of relativism both with regard to first-order and second-order cognitive claims."

Translation: Margolis wants to make sure that we understand that the underdetermination of theory applies across the board; which it would--of course--if the preceding six points have been articulated correctly and are correct as articulated. It is an effort on Margolis' part to show that his seven points of convergence are themselves underdetermined by the evidence available; a worthy reminder that fallibilism applies tout court and not merely en passant. [5]

Admitting what Quine calls the "indeterminacy of translation" as applicable to the above--as well as the "elusiveness of reference"--it still seems clear that Margolis thinks that the future of the psychological sciences--at least to the extent that such a future may be determined by a convergence of phenomenology and pragmatism--must be protected against the subversive influence of "reductionism." What Margolis wants us to see is that the future of the psychological sciences is seriously in danger from the "reductionist" programs that are subliminally present in both Heidegger's early commitments and Quine's. He wants the psychological sciences to avoid both the "physicalistic" reductions lurking in the underbrush of Quine's treatment of "propositional attitudes" and Husserl's treatment of "phenomenological reduction." Again and again, in the bulk of his essay, Margolis directs his arguments against the "extensionalist" interpretation of "intensional meanings" which he ascribes to the "inexorable tradition" that moves from Carnap to Quine to Davidson and that would block the convergence that Margolis envisages. That he makes no comparable attempt at refuting similar "reductionist" subversion among Continental thinkers--among Marxists, say--is both a signal weakness of his program and evidence of the importance that Margolis attaches to his own version of the predicament that we find ourselves in with respect to the philosophy of science generally, as well as to the future of the psychological sciences in particular. [6]

What, then, is Margolis' version of this predicament? And why should Margolis conclude from his assessment of the predicament that he must defend the future against the "inexorable" tradition which he represents as moving from Carnap to Quine to Davidson in much the fashion that a baseball was once said to move "inexorably" from Tinker to Evers to Chance? Clues abound, though succinct answers to neither of these two questions can be readily discerned from the surface of Margolis' text. Emulating Margolis' own technique in his approach to Quine and Davidson, we must look beyond the text and engage Margolis at the conceptual level. In order to do so, of course, we shall be examining not merely Margolis' expressed "propositional attitudes"--as Quine would call them--but the "conceptual ideology" upon which those attitudes are based. We examine the text, in short, for what lies outside the text; thus ignoring as irrelevant to our task the principle of literary criticism that tells us that there is "nothing outside the text." Our justification for doing so hinges upon our success in following Margolis' example. [7]

It is instructive to note at the outset that Margolis notes as a "prophetic" feature of the arguments employed by both Quine and Heidegger that, as he puts it, " ... in each case, their distinctive power is also the source of their most distinctive weakness." We may, then, find a way to read Margolis' arguments as containing both a "distinctive power" and the source of their "distinctive weakness" lying within the same "power." Again, it is instructive to note how Margolis identifies the distinctive "power" of the arguments of his two protagonists. In Quine's case it is the demonstration that "there is no principled demarcation between the analytic and the synthetic," which is interpreted by Margolis as a demonstration that no hard and fast lines can be drawn between questions of "meaning" and questions of "belief." What this means, according to Margolis, is that: "Quine's own argument precludes advancing any grounds for favoring the physicalist and extensionalist programs he is known to favor." And it is to this "weakness" that Margolis points as a reverse parallel to the results of Heidegger's arguments. Where Quine "favors" physicalism, but cannot give either necessary or sufficient grounds for accepting it, Heidegger opposes physicalism, but cannot give either necessary or sufficient reasons for rejecting it. In each case, as Margolis presents matters, the initial arguments of these two philosophers preclude any and all reductions of the "en bloc" claims of "systematic science and philosophy." It is the point at which Quine and Heidegger "converge," Margolis tells us. The trouble is that the "traditions" that each is known to favor are clearly not convergent at all. Margolis concentrates upon showing us why this is so with respect to the "inexorable" tradition from Carnap to Quine to Davidson, but it may be readily assumed that a similar reversely parallel and "inexorable" tradition might be shown to move from, say, Husserl to Heidegger to Habermas. It is, of course, a distinctive weakness of Margolis' essay that he fails to trace this reversely parallel tradition in Continental thought. [8]

This last remark is instructive, for it is a feature of Margolis' technique that he is able to identify the distinctive "power" of Quine's and Heidegger's arguments by first concentrating upon their distinctive "weaknesses." Turning this technique upon Margolis' own arguments, then, may be the way to understand him. Focusing upon the distinctive "weakness" of Margolis' arguments may reveal the distinctive "power" which they contain.

One such weakness has already been noted. While we might readily assume there to be some "inexorable" tradition reversely parallel to the reductionist and extensionalist "tradition" that Margolis identifies as a feature of Anglo-American thought that could be identified in Continental thought, none has been identified. We may well suspect that there is none, given the absence of evidence to the contrary. Scanning the evidence adduced for the reversely parallel tradition--from Carnap to Quine to Davidson--it strikes us that there is indeed a parallel. This tradition too does not actually exist. Like the "fable" of Dasein that Margolis ascribes to Heidegger, Margolis here gives us a fable of "reductionism." What Margolis has shown by the distinctive "weakness" of his arguments is precisely where the equally distinctive "power" of those arguments lies. It remains, merely, to complete our task by tracing these weaknesses to their source. It is readily discovered when, near the end of his essay, Margolis tells us that the "final irony" of the story he has been telling can be found in the "convergence"--implicit "quite early in the work of the Wittgenstein of the Investigations." By comparing Wittgenstein's work with Husserl's Logical Investigations, Margolis suggests, we can see how open Wittgenstein is to the recovery of "intentionality and the theme of phenomenological subjectivity." [9]

It is now clear why it is that the very weakness of Margolis' arguments is a mark of their strength. Margolis is prevented by his own principle of the irreducibility of intensionalized discourse and the ineliminability of the inquiring subject from giving an extensionalized account of the "traditions" that he alleges. We see now why Margolis tells us that we must take seriously the reductively physicalistic program that Quine is known to "favor," rather than the program that he actually follows. It is why we must pay careful attention to the "unexamined" assumptions that Davidson does not--indeed cannot--express, instead of poring over the texts that he does express and the assumptions that he does examine. It is why Margolis can reveal the strength of his arguments only by displaying their weakness. We must reckon the strength of his arguments to the irreducible and ineliminable "intentionality" and "phenomenological subjectivity" of his own "conceptual ideology." [10]

How seriously, then, are we to take Margolis' arguments? I think that we must take them very seriously indeed. My reasons for doing so, however, are quite independent of the style in which they are articulated and which has been made familiar by the work of Leo Strauss. It is a style that is limited in its effectiveness by our willingness to accede to the dramatic demand for suspension of disbelief. We must see the dramatic irony in the stories that Margolis spins. And we do. There is, surely, a dramatic tension in Quine's philosophical incapacity to achieve the systematic and physicalistic reductionist program that Margolis correctly discerns that he is "known to favor." It is a tension clearly paralleled in Husserl and Heidegger and the work of their successors; a tension that I have noted as "reversely parallel" above. It is indeed at the point where these parallel tensions cross that convergence occurs in the respective trajectories of "analytic" and "phenomenological" movements. I concede, therefore, the soft impeachment that Margolis brings against physicalistic reductionism on the one side, and would bring against phenomenological reductionism on the other if he could find a way.

In a final burst of enthusiasm at the end of his essay Margolis presents us with a vision of the promised land once the prophesied "convergence" is at last achieved. It is a vision in which our philosophy of psychology is increasingly radicalized, socialized and historicized. It is a promised land in which there will be a "strong emphasis on intentionality" and in which "extensionalist" models of cognitive science will no longer be dominant. It is at this point that Margolis strains to the utmost our willing suspension of disbelief, and the dubiety that we have held in check all along breaks in. For what will be the language in which the affairs of this new age will be conducted? Will it be the "phenomenological" language of the Continentals? Or the "natural language" of Wittgenstein, Dewey and Quine? Margolis does not tell us of his intentions in this regard. Perhaps he cannot, and for reasons that I have already reckoned to his conceptual ideology. For it is a feature of that ideology, as I have been interpreting it, that intentions resist expression in the language of extensionalized discourse.

For my own part, I can see no alternative to the use of natural language as the vehicle of convergence. If that implies the naturalization of both epistemology and ontology in the philosophy of the human and social sciences, as I think that it does, and a consequent assimilation of the methods of the experimental sciences to the respective subject-matters of the psychological sciences, as I also think that it does, we have only nature itself to blame. For once we abandon the conviction that there is an order of reality that somehow transcends that to which our natural language provides access, the quest for knowledge becomes a matter of what we can make of what nature provides. Dewey once remarked that natural language is itself a "wonder by the side of which transubstantiation pales." Should Margolis reckon this as "reductionism," and attribute it to my own conceptual ideology, I shall willingly own the soft impeachment. [11]

* * *

It is not so much the convergence thesis itself that gives Sleeper pause, as it is Margolis' assertion that this apparently welcome development has been, and is being, impeded in its progress by that reductionist project which in the analytic tradition has been variously known as "extensionalism" and as "physicalism" (though the two are not identical). Although he is perhaps willing to suspend his disbelief on that point, Sleeper insists on a quid pro quo admission that a "parallel" charge can be lodged against the social-historical reductionism native to the Continental tradition.

Where exactly does this leave us? Are we (or, more specifically, are pragmatists), Sleeper asks, left "without method as well as without metaphysics?" (Sleeper: this volume, p. 356) Neither of the major traditions--here typified by the general critiques given, on the one side, by Quine and, on the other, by Heidegger--is able to show cause why its respective "favored" view of the relation of the sciences to the world is anything but arbitrary. That is, neither can successfully substantiate its philosophy of science.

Of course, it does not follow from this predicament that either of these views, or indeed any such view, is thereby arbitrary. It's just that we are stuck squarely in the middle: not knowing whether or not the sciences--particularly the social and psychological sciences--are ultimately incapable of anything but arbitrary development. Both Margolis and Sleeper see this clearly enough. However--and this is the very nub of Sleeper's criticism--Margolis errs in his view of what is to be done about it. Margolis, it appears, retreats to an essentially Continental stance from which he proceeds to predict the characteristics of the oncoming convergence of the two traditions. He says,

Commentary 363

we are predicting the next phase of the philosophy of psychology, as well as an increasing radicalization of the philosophy of all the human and cognitive sciences. To deny the bifurcation of the psychological and the social at the human level; to deny the independence of linguistic behavior from nonlinguistic socially shared experience; to historicize the processes of such sharing; to construe the functioning of society as involving considerable division of labor, the impossibility of total internalization of all or most social institutions by the members of that society, and the compensating work of consensual interpretation; to provide for irregular, partial, strongly varying, and improvisational idealizations of the intentional import of society's practices; to insist on the conceptual complications of phenomenological subjectivity beyond any merely descriptive concern with explaining behavior--to concede all these, and similar tendencies, is to incline toward an increasingly open-ended, unsystematizable conception of distinctly human existence--one in which history itself is increasingly radicalized, intentionally uncertain, and open to strongly relativistic constructions. (Margolis: this volume, p. 353)

Sleeper views this as unwarranted--as inconsistent with the real strength of the Margolian argument. For to take such a position is to ignore the "reversely parallel" charge of reductionism that can be brought against Continental philosophy. While continuing to embrace the pragmatic elements in Margolis' argument, Sleeper insists that the cognitive sciences should not--and likely will not--develop along such "ideological" lines as those imputed in the Margolian prophecy. Just how current commonalities will actually affect the development of the sciences is simply unclear. But this is not so bland a result as one might initially suppose. For as Sleeper astutely asks,

what will be the language in which the affairs of this new age will be conducted? Will it be the "phenomenological" language of the Contin­entals? Or the "natural language" of Wittgenstein, Dewey and Quine? (Sleeper, p. 361)

For without a language--a scientific "common coin," as it were--there can be no method, and as Sleeper himself intimates, without method, no science is possible. So the real result of this inquiry is quite "spicy," for it is nothing less than a realization that yet another, wholly new, "linguistic turn" must be taken--one that somehow permits the expression and integration of the insights of both schools of thought currently party to this discussion. Sleeper says,


For my own part, I can see no alternative to the use of natural language as the vehicle of convergence. If that implies the naturalization of both epistemology and ontology in the philosophy of the human and social sciences, as I think that it does, and a consequent assimilation of the methods of the experimental sciences to the respective subject-matters of the psychological sciences, as I also think that it does, we have only nature itself to blame. For once we abandon the conviction that there is an order of reality that somehow transcends that to which our natural language provides access, the quest for knowledge becomes a matter of what we can make of what nature provides. (pp. 361-362)

It is, of course, precisely this direction in which the later Wittgenstein tugged at the philosophical community, and it is this same direction in which Otto in his essay attempted to move by making a systematic assault on the problem of translation. For in his proposal, while there is a clear aim at objectivity, there is an explicit reliance upon a base of natural language accompanied by an equally explicit rejection of fixed ontological commitment. If indeed "no hard and fast lines can be drawn between questions of meaning and questions of belief," which Sleeper reports to be Margolis' interpretation of Quine's argument against the analytic/synthetic distinction, then translation cannot, and ought not, wait on (or require, or presuppose) prior specification of meaning (extensional or otherwise!). Rather, translation must deal with meaning-equivalences, not meanings: much as internationally there are monetary equivalences but no absolute monetary value per se.
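The monetary analogy in the paragraph above can be made concrete with a small sketch: pairwise equivalences can be perfectly well defined even though no currency supplies an absolute standard of value. This is an illustrative toy only; the currencies, the exchange rates, and the `equivalent` helper are hypothetical inventions, not anything drawn from the text.

```python
# Toy model of "equivalence without an absolute standard."
# All currencies and rates below are hypothetical illustrations.
RATES = {
    ("USD", "GBP"): 0.5,   # 1 USD ~ 0.5 GBP (invented rate)
    ("GBP", "FRF"): 12.0,  # 1 GBP ~ 12 FRF (invented rate)
}

def equivalent(amount, src, dst):
    """Return the dst-equivalent of an amount of src currency.

    Equivalence is defined only pairwise (directly, or by chaining
    rates); no currency plays the role of an absolute unit of value.
    """
    if src == dst:
        return amount
    for (a, b), rate in RATES.items():
        if (a, b) == (src, dst):
            return amount * rate
        if (b, a) == (src, dst):
            return amount / rate
    # No direct rate: chain through an intermediate currency.
    for (a, b), rate in RATES.items():
        if a == src:
            return equivalent(amount * rate, b, dst)
    raise ValueError(f"no conversion path from {src} to {dst}")
```

On these invented rates, an amount in "USD" has determinate equivalents in "GBP" and, via the chain, in "FRF", yet the model assigns no currency an absolute value: the analogue of a translation that trades in meaning-equivalences without fixed meanings.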

Additional insight into the current situation is provided by the next commentary. In it, James Munz critically examines possible senses of the term 'convergence', reminding us that even in the most appropriate sense, convergence is no assurance of correctness. In the end, Professor Munz cautions against taking the Margolian critique and predictions regarding the future of the cognitive sciences too literally.

JAMES MUNZ

In Defense of Pluralism

Professor Margolis recognizes a two level "convergence" in contemporary philosophy. First, within the analytic tradition and the Continental tradition separately "convergence" has occurred. Since the first level "convergences" have identical themes in these two traditions, there is a second level "convergence," between the analytic and Continental traditions.

Margolis identifies seven "themes" of convergence. His goal is to use these themes to place limits on the "psychological sciences" (the term used in his title) or "theories of psychological sciences" (the term used in his first sentence) which will be acceptable to the converging traditions. My comments have three foci. First, it is worth considering what the observation of these themes does and does not mean and what the observation portends. Second, I have minor reservations about the formulation of one of the themes. Third, it seems to me that the themes themselves make Margolis's goal, the critique of psychological theories, virtually unreachable in any strong sense.

Obviously the fact of consensus does not imply the correctness or truth of the result nor that there is some correct product in the offing. The issue of truth or correctness is not raised by Margolis. I too will set aside the issue.

I have misgivings about calling the current state a convergence. The term 'convergence' has several senses. It can mean the result of a deliberate process of trying to achieve community. The present case is clearly not convergence in this sense. When the similarities are pointed out to the principals the responses range from delight to disgust but none claims authorship.

There is a mathematical sense of 'convergence' which implies the existence of a limit. If this is the sense of 'convergence' intended, it is at best a moot metaphor. This sense may avoid the suggestion of deliberate action by philosophers, but it replaces that by the suggestion of an inexorable process whose existence is far from obvious. With this sense of 'convergence' there is a suggestion of the correctness of a solution of a converging series. The existence of a unique, correct solution cannot be presupposed--nor should it be prejudiced by using 'convergence' in its mathematical sense.
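For readers who want the mathematical sense spelled out, the standard textbook definition (a conventional formulation, not part of Munz's text) runs:

```latex
% A sequence (x_n) converges to the limit L iff:
\[
\lim_{n \to \infty} x_n = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists N \;\; \forall n \ge N : \; |x_n - L| < \varepsilon .
\]
```

Note that the definition is stated in terms of an antecedently given limit L; this is precisely the presupposition of a determinate end-point that, on Munz's warning, the metaphor should not be allowed to smuggle in.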

Finally there is the biological sense of 'convergence' according to which two species only very distantly related genotypically may have similar forms (phenotypes) as a result of environmental pressures. Bats and birds are good examples of convergence in this sense (I will not attempt to offer identifications with philosophers, living or dead). In this sense we do not require cooperation, inexorable process, or the existence of a limit (the fusion of bats and birds). If there is a danger with this sense it is that the possibility of a "limit" is virtually precluded. My preference, given the ambiguity of 'convergence' and the seductiveness of the mathematical sense, would be to use a less prejudicial term like 'similarity' or 'consensus'. I will follow my own advice.

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 365-370. © 1988 by D. Reidel Publishing Company.

In the spirit of the speculative tone of Margolis's paper and in the biological spirit of 'convergence', I'll offer a guess about the prognosis for Western philosophy allowing the consensus described by Margolis. What are the prospects that the two streams of contemporary Western philosophy will produce a hybrid offspring capable of eliminating both parents in wholesome, Darwinian competition--or for the followers of S. J. Gould, an offspring at least capable of filling the niche of the parents should they disappear? My guess is that no viable hybrid will result. The two traditions diverged quite some time ago. They have acquired too many shibboleths, too much jargon, and standards of clarity and precision too disparate to successfully cross-breed.

II

There is one of Margolis's themes of consensus about which I have reservations. Professor Margolis and I have had extensive discussions about relativism--his seventh theme--and have not reached consensus. We are in agreement about the positions referred to and most of the pertinent facts about those positions. What we disagree about are the conditions required to appropriately apply the label 'relativism'.

On examination, the positions which either espouse relativism or may be accused of being relativistic appear to reduce to claims like the following. (1) We have disagreements about beliefs and lack means of resolving at least some of the disagreements. (2) We do not deal with truth values but only with estimates of truth value. Estimates are determined by a variety of tests and procedures which change with time. At no time are we absolutely assured that the estimated truth values assigned to a proposition will not have to be changed. I will not supply the argument for this reduction here.

Relativism has been disputed for well over 2000 years. It would be intolerable if we have been disputing a non-issue. Both (1) and (2) above are uncontroversial. It strikes me that rather than resolving the dispute between absolutism and relativism in favor of the latter we have stepped outside the arena which we must be in for the dispute to exist. In an uncharacteristic show of humility philosophers are increasingly willing to forego the criterial use of truth values (though a regulative use remains). I suggest that if we identify the consensual position as relativistic, we are confusing the disqualification of relativism with its triumph.

III

It is apparent that Professor Margolis construes psychological science rather broadly. In his critique he mentions linguistics, cognitive science, psychology, history, and sociology as appropriate targets. What is crucial is that the object of study is the cognizing human. Margolis hints that even the physical and formal sciences are appropriate targets for the same sort of criticism. Yet when Margolis begins his critique, what actually comes under attack are extensionalist programs and philosophies offered by Carnap, Quine, Sellars, Davidson, Dretske, Dennett, Stich, Chomsky and Fodor rather than the more mundane empirical research which the title suggests.

Since Margolis, at least initially, talks about psychological science and not just philosophies of psychology, it is appropriate to consider whether there could be a meaningful philosophic critique of the procedures and results of hard core experimental psychological science. The case is even clearer in experimental sciences outside the human sciences, and since Margolis suggests there is no sharp demarcation between the psychological and non-psychological sciences in this regard, it is fair to start with experimental work in the extra-psychological sciences.

The question then is this, "Can a philosophy which espouses the seven themes provide a meaningful and forceful critique of the concrete research decisions of non-psychological science?" According to the consensual philosophic position, actual, fleeting research decisions will be governed by a variety of values and standards, many non-cognitive, which have evolved in the research community. One could criticize a research decision by showing that it was not in accord with the prevailing values and standards. Such criticisms more often come from research scientists, who are aware of prevailing standards, than from philosophers, who are not normally familiar with local values. If the consensual position is correct, then this sort of criticism is already an important part of the internal process of development within a science. It is not a characteristically philosophic kind of activity. Another kind of criticism, which would be in the province of philosophy, would be to show that a research decision disagreed with the consensual philosophy directly. To criticize research, say, on the properties of black holes, because it was extensionalist or failed to depict the communal and historical nature of astronomical research or ignored the intensional complexity of the language of astronomy would not have any impact on astronomy.

368 Chapter Four

Now let us return to concrete research decisions in psychological sciences. Consider moves like a decision to undertake a piece of explicitly behavioristic research, a study in linguistics which ignores both intensions and intentions, or a historical study on census and tax records to determine patterns of affluence by class and area. These seem to fly in the face of the consensual philosophic position. Such decisions might well be in accord with the values and standards of the research community. Further, the decisions, values, and standards can be justified by considerations like what data are available, what applications are important, what analytic tools are available, time constraints, and the like. Actual research decisions--even in the psychological sciences--are relatively immune to philosophic criticism.

Margolis's de facto targets are philosophies of psychological science or formulations of agendas for psychological sciences. One consequence of adopting the themes of the consensual philosophy is that strong demarcation between science and non-science must be forsaken. If so, then such higher level theories may be protected from criticism in much the same way lower level research decisions are. Such theories are supported, according to the consensual position, by successful research produced, historical developments in the science, procedures and theories in related sciences, intended applications, etc. A theory which is anomalous with the themes of consensus need not yield ipso facto. In fact the consensual philosophy suggests that a theory, research program, or "philosophy" which has strong local support will not yield.

From the consensual point of view one cannot even be sure that such anomalies and aberrations will inevitably be replaced by less anomalous positions. The development of "right-headed" science may be thwarted by new technologies, new applications, and accidents which are not under philosophic control. In this situation the consensual position forces itself to allow pluralism though that may rankle the philosopher of the new consensus.

--V--

A review of what convergence might mean is worthwhile; but has Munz adequately and exhaustively examined the possibilities? For example, when he says that "the present case is clearly not convergence in this sense ... [i.e.] a deliberate process of trying to achieve community," (Munz, this volume, p. 365) it seems that he is overlooking emerging trends in such things as the more recent philosophical literature, conference themes and agendas, and even course descriptions appearing at upper levels of study. More importantly, however, Munz seems to leave out the possibility of a convergence of concerns, problems, and insights, as contrasted with a convergence of presuppositions, methods, and results. Nevertheless, the discussion is useful and perhaps on a certain reading the "biological" sense of 'convergence' which Munz favors will indeed assist us, provided we are not misled by its strongly technical orientation and the constraints this would imply.

Munz suggests that, "similarities" aside, "no viable hybrid will result," simply because the divergent traditions have acquired "too many shibboleths, too much jargon, and standards of clarity and precision too disparate to successfully cross-breed." (p. 366) Yet aside from the fact that such linguistic and conceptual features are just the sort of things known to be changeable, and so could quite conceivably change in the direction of convergence, the more pointed objection one might raise is that there is no clear reason why a means of translation might not be found that would effectively bridge the gap between the two linguistic/conceptual traditions of Western philosophical inquiry. Indeed, one of the ongoing themes of this volume is that just such a possibility exists.

The point Munz makes about relativism further underscores the incipient role of translation in any genuine convergence that might be underway. If the thrust of most contemporary discussion about absolutism and relativism is as Munz supposes, then as he states,

rather than resolving the dispute between absolutism and relativism in favor of the latter, we have stepped outside the arena which we must be in for the dispute to exist. (pp. 366-367)

But if that is so, then what becomes important is fixing at least some minimal standards or conventions for moving between one mode of expression and another. That is to say, with our withdrawal from the arena of metaphysical dispute, we are required to provide an alternate means for carrying on objective discourse. Otherwise, we are driven directly into the morass of a thorough-going scepticism--a result which would probably not sit well with either Munz or Margolis.

After pointing out that the "consensual philosophy" projected by Margolis implies that a "strong demarcation between science and non-science must be forsaken," Munz goes on to argue that it follows that "a theory which is anomalous [relative to] the themes of consensus need not yield ipso facto." (p. 368) The net result is that the philosophy which Margolis himself heralds must of necessity allow for a pluralism of views on all matters, among which we must number, of course, extensionalism and physicalism. Consequently, Munz is saying, the "shortfall" of Margolis' position is the very one he had attributed to Quine and company. This, too, was the upshot of Professor Sleeper's commentary.

It would seem to follow from all this that there is no such position from which to effectively criticize other positions, or theories grounded in other positions; nor even to muster a defense of the position (or theory) in question that would actually count for something in the "coin" of the others. Indeed, this predicament presents itself so insistently that it seems to have the force of a theorem, or more precisely, a metatheorem about positions and theories grounded in those positions. We might call it the "Limitation Theorem," for in effect it limits meaningful critique to just those theories which are within the position that sustains their expression.

The practical implication of the Limitation Theorem would be this: for any two theories formulated from different positions, meaningful (objective) critique (or defense) of either theory is possible only if there exists an independently justifiable translation procedure between the expressions of the two positions. A "position" is a language together with its manner of use. A "language" is a set of syntactically governed symbols; and a "manner of use" is a set of semantic principles exclusive of an ontology. A "theory," then, is a set of statements within a position which is logically organized around a subset of ontological postulates, and which has a consequent set, the truth value of which is not in principle logically determinable.

4.2 Dialogue

The "computer metaphor of mind" has given important new dimensions to the problem of mind. This, in turn, has raised new and strikingly similar questions for the West's two main ways of doing philosophy. And as those traditions re-examine their goals while continuing to press their respective inquiries, it has become progressively more apparent that what each has been seeking to understand is much the same thing. To understand the natural world includes understanding the human mind, even as understander. Thus, the two traditions become simply two approaches to the same set of issues.

This fact, we argue, implies a critical need for cooperative investigation and dialogue. In an effort to capture some of the implications of this for ongoing research--the principal direction of which we see as based on the idea of convergence as exemplified and discussed in this anthology--we have included an epilogue of our own. We feel it takes advantage of the momentum developed by our contributors. In it, we propose a new agenda of themes and issues that have arisen in the course of our study of the various perspectives set forth in this volume.


EPILOGUE

Toward a New Agenda for the Philosophy of Mind

1. First Person vs. Third Person Ontologies of Mind

Several of the authors in this volume have concentrated on the problem of how to gain insight into the complexities of cognitive processing. Some favor a descriptive analysis of the workings of the mind, emphasizing the necessity of operating from the standpoint of a first person ontology. Others argue for an explanatory analysis of mental function operating from a third person ontology. The former stress the importance of studying "semantic content" thought to be intrinsic to complex "meaning networks" (horizons, schemas, backgrounds, prescriptions, glossaries, and the like). The latter focus on syntax with the aim of formulating such syntax as a computational system capable of replicating the input/output relations which characterize the "inner workings" of a mind.

Adherents of the former approach argue that third person ontologies are insensitive to the intentional character of mental life. Proponents of the latter respond that intentionality must be explained in terms of a third person ontology if the thesis that intentionality is an integral feature of mental activity is to have any credibility. Since neither side has a decisive argument against the other--despite having presented their own positions rather forcefully--we might seem to be at an impasse. What are the implications of this?

Suppose we were to accept what Rey takes as sufficient for mentality. What ramifications would this have for advocates of the first person standpoint, particularly if his views on discrete mental states were to be fruitful in studying cognitive processes? Even though we might require that his theory at least entail an undeniable appearance of conscious life structured in accordance with intentionality, might not such a move thoroughly undercut the status of the first person orientation? Would a descriptive, phenomenological analysis of intentionality, qualia, or mental reference continue to have any value? Or would it simply become obsolete, just as Aristotle's "telic factor" became obsolete as a principle of physics once the enhanced explanatory power of 17th century mechanistic theory became fully evident?

Rey has argued that, properly conceived, cognitive science is limited to the study of mental features and operations sufficient to explain the problem-solving capacity of an intelligent being. Moreover, it is his view that careful analysis shows that it is far from obvious that consciousness is among these factors. Indeed, if he is correct in his claim that "intentionality operations" can be analyzed in terms of specific rational regularities, a major stumbling block to the computer model of mind is thereby removed. Has Smith demonstrated the fallacy of this argument? Certainly not for those who tend toward Rey's biases, for it must seem to them that Smith's assumptions beg the very points at issue! So long as the exchange remains on this level, it is not likely that fruitful dialogue will transpire. For this reason, we would suggest that an item be placed on the philosophical agenda calling proponents of the first person mode of analysis to address Rey's "disturbing possibility" thesis in a way that renders it harmless. Though Smith addresses this challenge, his attempt falls short to the extent that he merely asserts the primacy of consciousness and intentionality without explaining why they should be viewed as "intrinsic" to intelligent activity, and why their nature cannot be captured by the types of clauses proposed by Rey.

H. R. Otto and J. A. Tuedio (eds.), Perspectives on Mind, 371-376. © 1988 by D. Reidel Publishing Company.

But should we not also assess Rey's basic premise that all genuine intelligent processing is reducible to computational functions on discrete states? Smith and others contend that consciousness and mental operations stand in holistic relationship to one another. Thus, they deny what Rey affirms, that consciousness can play a role in cognitive processing only if it is a distinct mental operation linked to other such operations. Smith has proposed investigating how the intentionality of mental operations gives structure to psychological phenomena--a proposal that our agenda should not overlook. The analysis which Lurie has provided of the intricate relation between psychological phenomena (e.g. beliefs, desires, etc.) might incline us to place the burden of proof on Rey's shoulders rather than Smith's. But are we really faced with an either/or situation? Are there not some decent prospects for complementarity between the first and third person modes of analysis?

To illustrate: consider semantic reference. We have seen arguments from both perspectives. McIntyre holds semantic reference to be dependent on intentional reference, and thus analysis of the latter to be the proper domain of first person descriptive philosophy of mind. Emmett has countered by pointing to the extent to which linguistic reference depends on whether or not the use of an expression "comports" with the practices of the linguistic community. She maintains that, since such "consonance" is crucial to linguistic reference, considerable suspicion is cast upon McIntyre's strategy and the first person approach it presumes possible. It must be asked, therefore, whether "consonance" between a speaker's intentions and a listener's expectations can be determined without appealing to a first person theory of semantic reference grounded in the intentional character of mental transactions. For can it be denied that the meaning of another's words is experienced from one's own first person standpoint? Further, it would seem imperative also to give consideration to the intentional content--especially the operative conditions of satisfaction--intrinsic to the intentions of the speaker. We are not saying that Emmett


is wrong and McIntyre right--rather, we are suggesting that Emmett's appeal to consonance as the ground of linguistic reference is perhaps more compatible with first person ontologies than her conclusion would lead one to believe. Hence, we think the new agenda should include analyzing her issues from the standpoint of a first person ontology (particularly the issue of how to account for consonance); for then we might begin to appreciate more fully the potential for a symbiotic relationship between the first and third person modes of analysis.

How about the qualia issue? Might it also provide prospects for symbiosis between the two philosophical traditions? Moor argued against such prospects, intending to deflect our attention from this question, which he regards as fruitless to pursue. His "tests" for detecting the presence of qualia in robots fail, not simply for want of competence to apply them, but in principle. This is why the question is to be avoided; indeed, it was implied that we face many of the same obstacles when trying to establish the status of qualia in human experience. So why speak of qualia at all? Moor's opinion was that by postulating the presence of qualia, we are provided with explanatory leverage otherwise unavailable when it comes to accounting for ordinary human behavior. Hence, the notion becomes embedded in our developing ontology of mental processing only because of its instrumental value, not because of any empirical or phenomenological evidence dependent on a first person standpoint.

Johnstone responded with the observation that machines lack the ability to attend to their operations; such operations are entirely mechanical. Humans, on the other hand, perceive stimuli as signs indicative of things about the surrounding world. Attending to these signs, they shape their behavior accordingly. But first the meaning of the signs must emerge. This would seem to imply that qualia arise on the basis of "taking" activities involving attention and interpretation. Johnstone proposed a communication test as the key to determining if something possesses the requisite "interpretive process" for experiencing qualia. But might it not be possible to program a computer in an especially clever way, so that it appears to be communicating when in fact it is not? Then a machine might pass Johnstone's test without possessing the requisite intentional makeup for entertaining qualia. But what would it mean for a computer to behave "as if" it were communicating? How are we to define communication? Accepting Moor's counsel, favoring as it does methodological considerations, 'communication' is to be defined in a way that yields maximum explanatory leverage. If this means a definition in terms of behavioral characteristics, then the "as if" clause becomes a meaningless appendage--possibly an attractive gambit, but hardly ontologically neutral. One would need to show why "good explanations" determine ontology; or else we need an explanation which makes it clear why humans have "qualia-laden" experiences. In other words, we are looking for a theory which would


explain from the third person standpoint why we have the kind of experience which can only be entertained from a first person standpoint. This needs both careful description of first person experience and proper translation of this description into terms compatible with a third-person account of neurophysiological processing--plainly an item of joint interest.

2. Expertise and Tacit Intentionality

The new agenda should also include questions associated with the relation between attentive and non-attentive modes of awareness. What is the precise nature and role of so-called "tacit" intentionality? How is it related to the "intentional content" of attentive modes of awareness? Do expert problem solvers really operate at a different level of intentional involvement than those with lesser skills? Or have they merely forgotten the rules they follow (perhaps because it is no longer necessary to attend to the rules in order to incorporate rule-ordered procedures within the scope of problem-solving routines)? Dreyfus argued that tacit intentionality plays a pivotal role in the problem solving approach of experts. Since experts have so much difficulty explaining their expertise in terms of rule-following behavior, it appears that intentional content is not implicated in non-attentive forms of awareness (e.g. "know-how" and "bodily skills"). Yet it would seem that there must be intentional content the moment there are conditions of satisfaction to be put to the test of intentional transaction. Don't experts operate with conditions of satisfaction when exercising their expertise? Surely they are enough in tune with what they are doing to recognize the presence of negative feedback. This would seem to indicate that some form of intentional content is involved in non-attentive modes of awareness after all.

Suppose this background is a system of tacit information which includes non-propositional hypotheses shaping our anticipations of feedback from transactions with the world. We might think of the background as a network of propensities and dispositions which harbor implicit rather than explicit mental descriptions. Such a view explains why experts have a hard time describing the specific rule-governed procedures which make their expertise possible: since they are directing their skills to solving problems, they do not pay attention to them; hence they are not really aware of what they are doing. This hardly implies that they aren't doing something which an expert observer could describe in terms of rule-governed behavior. What we need, of course, is an expert observer with first-person access to the problem-solving activities of an expert problem-solver. Perhaps then we would begin to understand how experts go about the business of defining and solving problems.

Other papers in the anthology have dealt with these issues in a way more in keeping with functionalist strategies. Nelson distinguishes


between tacit and conscious recognition schemes, arguing that the latter differ from the former only because they include an attentive mode of awareness. In all other respects the two kinds of recognition schemes are identically "intentional" in character. In both cases, the system (computer or human) acts on expectations and inferences that have arisen in response to feedback from prior transactions with problem-solving environments. When there is ambiguity, the recognition scheme moves to determine the state it would be in if it were receiving clear input; in the process, it "takes" the input in a way consistent with its expectations. Can this position be reconciled with the conclusions drawn earlier by Dreyfus? A possible link is provided in the proposal given by Fields; if his notion of "background understanding" is merged with Dreyfus' view of expertise, we have the hypothesis that perception as a set of "selective search procedures" is used to obtain explicit information from the implicit "data base" of the common sensible world in a way that is more like a skill than a form of intelligence. An agenda task then is to unlock the secrets of this search process and translate them into computational functions.

3. Transaction Schemas and the "Taking" Function

Several papers in the anthology have dealt with the nature of "intentional transactions" between mind and environment. One unresolved question is the extent to which cognitive processes are dependent on the interpretive function of "schemas" and "prescriptions." Assuming their identification is possible, the next challenge would be to determine how these functions might be translated into computational routines. A related question concerns the source from which a mind gains its sense or awareness of the potentially corrective character of environmental feedback. If such awareness is dependent on interpretive functions, by what process does a mind determine the specific features of the corrective prescription? Is this prescription itself a by-product of interpretive activity? How deep do the layers of interpretation reach? Our general position is that philosophical perspectives on mind can no longer afford to avoid these sorts of questions. But in the process of including them within the scope of the new agenda, we are brought face to face with a new formulation of the enigma of transcendence: how does the first person point of view gain its sense of other points of view and, with it, its sense of being in the midst of a world of challenges and opportunities? How does it come to be a person?

In our work here, Nelson, Tuedio and Arbib have tried to lay some of the groundwork for this project. Nelson's analysis of self-referencing recognition states calls for careful examination. In particular, we need to determine more precisely how a self-referencing mechanism differs from a mere decoding device. What does it mean for an automaton to describe its


structure to itself in a way that allows it to assign probabilities to possible recognition states? Granting that such a mechanism could deal more effectively with ambiguity, and might even develop goal-related relations to incoming stimuli, what underlying capacity are we presupposing? Is there phenomenological evidence to support Nelson's claims? If so, does it support Tuedio's reflections on intentional transaction? How does it relate to Arbib's analysis of schema networks? Once again it begins to look like we need the means for an integrated, multi-leveled analysis of mental processing--one that can be secured on a base of careful phenomenological description. As Nelson has suggested, such an analysis would need to relate phenomenological evidence to levels of neurological processing, and would culminate in an effort to identify abstract features intrinsic to mental activity.

4. Translation and Convergence

If we are to answer the questions posed on this new agenda, and move toward accomplishment of the goals it sets, we need to ascertain the possibility for a more coordinated research effort. Are there means for systematically reformulating the concerns, insights, and claims of a given perspective into those of another? Can the constraints of what we earlier called the Limitation Theorem (p. 370) be confronted without frustration of our philosophical aims? For example, a specific challenge at this point is, to what extent are propositions grounded in a third person ontology translatable into those of a first person ontology? What exactly are these "ontologies," how do they arise, and how are they implicated in the mental activity of the individual person?

Suppose we were to say, as a first approximation, that an "ontology" is a logical means of indicating what is to be taken as the elemental "things" of a theoretical enterprise. An ontology, thus, is part of the apparatus of a system of expression--it is part of our effort to get at a useful map of reality, it is not that reality itself. It is, as Otto has pointed out, like the "glossary" associated with the effort to translate ordinary discourse into logic. What goes into a glossary is tentative, and relative to the purpose set at each level of analysis. What is supposed for an ontology is, although on a grander scale, much the same: with its help we seek a valid account of the world (or some piece of it we happen to think important at the time). Applying this to the problems at hand regarding the nature of mind and mental processing, it would seem that exploring further the potentials of systematic translation at the level of sensory activity and intentional transaction is a pivotal project--one which if taken up would fulfill the sense of "convergence" we continue to think possible. Of course, this sketch raises more questions than it answers, but that just goes to show our agenda must remain open-ended.


FOOTNOTES: CHAPTER ONE

GEORGES REY A Question About Consciousness

• This paper is a heavily revised version of Rey (1983a), which was originally written for the Eastern Association of Electroencephalographers, Mont Gabriel, Quebec, March 1979. Intermediate drafts have served as bases for talks at the Universities of Arizona, Illinois, Belgrade, Ljubljana, Graz, and Vienna, at Bates College, at the 1984 Dubrovnik Conference in the Philosophy of Science, and at the filozofski fakultet, Zadar (Yugoslavia). I am indebted to these audiences for their hospitality and stimulating responses. I am also grateful to the National Endowment for the Humanities and the Fulbright Commission for financial support, and to Louise Antony, Ned Block, Richard Davidson, Joe Levine, Gary Matthews, Elizabeth Robertson, Michael Slote, and Eleanor Saunders for helpful discussions and suggestions.

1. For more recent developments of the Humean argument, see Parfit (1971a), and for replies, Lewis (1976) and Rey (1976).

2. See many of the essays in e.g. Eccles (1966); Globus, Maxwell, and Savodnik (1976); Schwartz and Shapiro (1976); Davidson and Davidson (1980); and Davidson, Schwartz, and Shapiro (1983).

3. See e.g. Wigner (1967, 1982) and Wheeler (1982).

4. The Oxford English Dictionary lists ten definitions of 'conscious', seven of 'consciousness', on the latter of which Natsoulas (1978) expanded, although Ryle (1949) considered only seven for both.

5. That the business of providing definitions of even commonly used words is not an entirely a priori activity, but may go hand in hand with the development of a theory of their referents seems a natural corollary of recent discussions of reference in Putnam (1970, 1975) and Kripke (1972/1980). I develop this in relation to concept identity in Rey (1983b, 1985).

6. I shall use 'inductive' here very broadly so as to include not only principles of enumeration, but also any "reasonable" relations between evidence and explanatory hypotheses, especially those associated with "abduction," "analysis-by-synthesis," and "inference to the best explanation"; see e.g. Peirce (1901/1955), Mackay (1951), Harman (1965), Chomsky (1972).

7. Behaviorists thought they had a better one that didn't advert to any mental terms at all. I take for granted the standard criticisms of that view, as found in e.g. Chomsky (1959), Taylor (1963), Dennett (1978, Pt. 2, § 4, pp. 53-70), Gleitman (1981: Ch 5), many of which I summarize in Rey (1984).

8. I discuss practical reason and its status as a law of thought in Rey (1987). See also e.g. Wiggins, Horgan and Woodward (1985); and Elster (1986).

9. The properties of a sentential model that seem to me to recommend it are its capabilities of (a) expressing structured propositions; (b) capturing rational relations into which attitudes enter; (c) individuating attitudes sufficiently finely to distinguish ones containing, e.g., synonymous descriptions, co-referential names, and indexicals (cf. Burge 1978, Kripke 1979, Bealer 1982: Ch 3); (d) explaining how attitudes can be causally efficacious; and (e) allowing different roles and access relations for different attitudes. It is difficult to imagine any sort of representation other than a sentence that can perform all these roles (e.g. images certainly couldn't).

10. This seems to be the main issue bothering Searle (1980) in his well-known example of the "Chinese Room." In Rey (1986), I discuss this and related issues that I think are confounded in Searle's discussion. The Stampe (1977) proposal that I advocate here and there has also been advanced in slightly different form by Dretske (1981), Stalnaker (1984), and Fodor (1987).

11. This is not the place to defend such a theory of meaning in detail. Suffice it to say that (a) I intend it only as a sufficient condition of meaning: there may be other ways that meaning may arise, but should something be able to operate as described, its states would seem to play the role that meaningful states seem ordinarily to play; and (b) such a basis figures in a variety of theories of meaning, from truth-conditional proposals of Wittgenstein (1920) and Davidson (1967), to possible world proposals of Stalnaker (1984), to discrimination proposals one finds in the work of Behaviorists like Quine (1960: Ch 2) and even in Searle (1983: 177). To avoid extravagant idealizations, and to capture the structure of language, it is probably a good idea to look to such a theory only for establishing meanings of atomic elements in a language, relying upon recursion for the rest.

12. I have in mind here the "frame problem" currently vexing work in artificial intelligence (see McCarthy and Hayes 1969, and Dennett 1984 for useful discussion). Although I disagree with many of the ways Dreyfus (1972) proposes for thinking about the problem, I do think it a merit of his book that it anticipated many aspects of it, particularly what Fodor (1983) has called the "globality" and "Quinity" of central belief fixation. In distinguishing artificial intentionality from artificial intelligence, I hope I've made it plain that none of this discussion depends on a solution to these problems.

13. Some, e.g. Lucas (1961, 1968), have argued that it is a consequence of the Gödel Incompleteness theorem that the human mind is no machine: so, the machine I am imagining would necessarily fall short of human mentation. The short answer to this argument is that it presumes human minds are consistent. Aside from the vast implausibility of that claim generally, the mechanist may also reply that, if indeed we can decide the formally undecidable sentence, we do it only on pain of inconsistency. See Lewis (1969, 1979a) for longer answers, and Cherniak (1986) for an excellent discussion of the "minimal rationality" that needs to be required of us, and consequently of any machine.

14. In considering here and below the consciousness of machines programmed in this and other ways, I shall be citing what I take to be ordinary pre-theoretic intuitions about the notion of consciousness.

15. For similar, but less complex, proposals by leading neurophysiologists, see Moruzzi (1966), Knapp (1976), John (1976). Wilks (1984: 123) makes the criticism I make here of such proposals in computer science of e.g. Sayre (1976).

16. One might argue that genuine thoughts, intentions, perceptions presuppose consciousness at least as a background condition. But they would need to account for the explanatory power, adumbrated above, that such ascription seems to possess without it. In any case we would still be entitled to an answer to

the question addressed in this paper: whence this presupposition.

17. See also Castañeda (1968), Anscombe (1975), Perry (1979).

18. Where being "self-aware" or "self-conscious" doesn't merely verbally entail consciousness. That is, to avoid begging the question, these expressions must be used merely for emphatic reflexive propositional attitudes.

19. What is wanted here to clinch the case are experimentally controlled examples à la Nisbett and Wilson (1977) of the effects of specifically second-order emphatic reflexive thoughts that the agent is unable to express. Unfortunately, Laing et al. (1966) don't provide any, and I've been so far unable to find any studies that do. Gur and Sackheim (1979), in their interesting work on self-deception, provide some, but inconclusive, evidence in this regard.

20. Cf. Wilks (1984: 122), who backs a similar criterion, citing Danto (1960).

21. Dennett (1978: Pt. 3, § 9, p. 156) makes such a proposal, one that, incidentally, permits ascribing consciousness to animals and to the humanly inarticulate. In Rey (1987) I develop this proposal in some detail, distinguishing "avowed" from "central" attitudes, and arguing that the distinction is important in a number of ways; for example, as a basis for an account of akrasia and self-deception. I think the proposal does capture a "weak" notion of consciousness; but it would seem to fail to capture the full, ordinary notion insofar as it is equally applicable to existing desk top computers that few people would seriously regard as conscious for a moment.

22. The problem here is not that raised by Searle's "Chinese Room," concerning whether anyone following any program could be said to understand Chinese. That problem is solved simply by considering richer (particularly recursive semantic) programs than Searle considers, as well as by avoiding fallacies of division. See Rey (1986: § 1).

23. So that the machine won't spend all of its time merely brooding over increasingly nested beliefs, a limit might need to be placed on the nestings generated by (6), particularly as it might interact with (7).

24. Bealer (1982: 247) claims that "attending, concentrating, meditating, contemplating" do not have "mechanically recognizable functions." I fail to see why the mechanical operations described here wouldn't suffice.

25. Searle (1981) argues that, since someone could follow a computer program like the one I am sketching without feeling pain, a machine running on such a program wouldn't feel pain either. This, of course, is simply another version of his "Chinese Room" argument, and would seem to be guilty of the same fallacy of division that advocates of the "systems" reply have deplored, and that I have further criticized in Rey (1986).

26. The important (im)possibility of scepticism about one's own present sensations that I shall be considering in what follows was, I believe, first raised by Wittgenstein (1953/1967: §§ 243-315) in his famous "private language argument." Shoemaker (1975/1984: 189-190), in a passage that was the inspiration for much that I have written here, takes the impossibility of such scepticism to be an argument for functionalism, a specific version of which I am developing in this paper. I think Shoemaker is right in regarding functionalism as the only plausible account of mental states that preserves first-


person privileges. Unfortunately it seems to do so at the cost of third-person attributions of such states to a computer of the sort I am describing.

27. See e.g. Descartes (Meditations, II), Locke (Essay, IV, 2), Hume (Treatise, I, iv, 2), Ayer (1956: 55), Malcolm (1963: 85), Chisholm (1981: 79-81). For a useful survey of these and other versions of privileged access, see Alston (1971). For interesting experimental data undermining the privileges even in the human case, see Hilgard (1977).

28. I am no longer confident of that claim. It now seems that all that is necessary are the probably very subtle cognitive effects of hormones and neuroregulators (e.g. beliefs about how one's body feels, the timing of cognitive processes). If so, then the appeal to physiology would be even less plausible than I argue it to be below.

29. Lycan (1981) has pointed out that in general one would expect our various psychological faculties to be characterizable on many different levels of functional abstraction from the basic physical properties of the brain.

30. That is, it is possible within the world(s) described by the psychological laws being considered here. It is important that the possibility here is not epistemic but metaphysical. Were it only epistemic, it could be dismissed as due merely to ignorance, as many previous arguments, e.g. against materialism, reasonably have been; cf. Maloney (1985: 32).

31. I say "unacceptable," not "false." I leave open as part of the puzzle I am raising whether one is entitled to move from the former to the latter.

32. By 'x is infallible about p' here I intend no claim about knowledge, but only 'it is impossible for x to think or believe that p when not-p'; this latter is closed under logical implication even if 'x knows p' isn't. For contrast with "arcane" conditions, I'm also presuming that the agent is cognitively fully normal, thinking itself conscious clearly and confidently, not as one might do when asleep or in some other cognitively marginal state.

33. Proponents of life as a condition on consciousness include Wittgenstein (1953/1967: p. 97e), Ziff (1964), Cook (1969), Gareth Matthews (1977).

34. Why shouldn't the computer I am imagining have the status of an angel, a god, a ghost, or some other non-biological, but still psychological agent?

35. It is an indictment of our ordinary notion of consciousness that we seem to have contradictory intuitions about its application, our judgments about something's consciousness being sensitive to the way in which we encounter it. Described from the "bottom up," i.e. imagining first a non-biological machine, and then a program that would realize various human mental abilities, we are disinclined to regard it as conscious; but proceeding from the "top down" and discovering that close friends, or we ourselves, are just such machines seems to elicit the opposite verdict. I suspect this has to do with the ways in which our understanding of people and animals is "triggered" by certain patterns of stimulation (cf. the penultimate paragraph of this essay).

36. The image of consciousness as an inner light is advanced in many places, perhaps most recently in Smith (1986). Even Wittgenstein, so critical of traditional pictures of the mind, compares "turning my attention to my own consciousness" to the glance of someone "admiring the illumination of the sky and drinking in the light" (1953/1967: p. 124). (Other passages in


Wittgenstein, e.g. § 308, do suggest, however, a version of the eliminativist proposal being advocated here.) The peculiar amalgamation of epistemology and ethics that the traditional image involves is explored at some length by Richard Rorty (1979: 42-69), when he discusses what he regards as the traditional belief in "Our Glassy Essence," a term he draws from Shakespeare and Peirce. The view I am entertaining in the present paper might be called "eliminativist" with respect to consciousness, although (token ontologically) "reductionist" with regard to the propositional attitudes.

37. Cf. Wittgenstein (1953/1967: p. 178e): "My attitude towards him is an attitude towards a soul. I am not of the opinion that he has a soul."

38. James (1912) also denied the existence of consciousness, although he went on to explain that he meant "only to deny that the word stands for an entity, but to insist most emphatically that it does stand for a function" (p. 4). When I say there may be no such thing, I mean no such thing whatsoever.

DAVID WOODRUFF SMITH The Unquestionability of Consciousness

1. Cf. G. Rey, "A Question about Consciousness," in the present volume.

2. Cf. D.W. Smith, "The Structure of (Self-)Consciousness," Topoi, 1986, and a longer treatment in my The Circle of Acquaintance (forthcoming).

YUVAL LURIE Brain States and Psychological Phenomena

I wish to thank my friends, H. Marantz and A. Zaitchick, for their helpful comments.

1. A detailed discussion supporting this hypothesis with respect to certain psychological phenomena can be found in Brandt and Kim (1969). They define it as follows: "For every phenomenal property, M, there is a physical property P such that it is lawlike and true that for every x at every t an M-event (i.e. an event instancing M) occurs to x at t if and only if a P-event occurs in the body of x at t. Further, distinct phenomenal properties have distinct physical correlates." Ibid., p. 219.

2. Counter to what many physicalists have supposed, this assumption is compatible with many different kinds of mind-body theories.

3. That the relationship between brain states and psychological phenomena may be a one-to-many relationship has generally seemed implausible, since it conflicts with determinism and points in the direction of supernaturalism.

4. Deutsch (1960), for example, offers this suggestion: "... the change which occurs in learning in [a] machine could be engineered in many different ways. Any component which could be made to assume either of two steady states could be used. Similarly, the rest of the 'central nervous system' could be constructed of completely different types of components without affecting the behavioural capacities of the machine. The precise properties of the parts do not matter; it is only their general relationships to each other which give the machine as a whole its behavioural properties. These general relationships can


be described in a highly abstract way, for instance, by the use of Boolean algebra. This highly abstract system thus derived can be embodied in a theoretically infinite variety of physical counterparts. Nevertheless, the machines thus made will have the same behavioural properties, given the same sensory and motor side." See Deutsch (1960), p. 13.

5. See Putnam, (i) "Minds and Machines," in Hook (1961); (ii) "Robots: Machines or Artificially Created Life," in O'Connor (1969); (iii) "The Mental Life of Some Machines," in O'Connor (1969).

6. Putnam, Ibid., (ii), p. 248.

7. See Block and Fodor (1972).

8. Ibid., p. 162.

9. Brandt and Kim (1969), p. 220, for instance, argue that the hypothesis "has a logical status more like that of a Principle of Universal Causation than of any particular causal law in science ... Its empirical basis is much more complex and indirect than that of specific laws. In effect, it asserts that there are specific laws of a certain kind. As such it cannot be refuted by observations that upset particular laws; although this is not to say that the facts could not force its abandonment--say in the light of persistent failures to discover even approximate correlations of the required kind."

10. It is easiest to formulate my argument when it is directed at the one-to-one version of the correspondence hypothesis. But the argument loses none of its force when brought to bear against the many-to-one assumptions as well.

11. To set up this example for other versions of the correspondence hypothesis, one need only replace the term 'brain state'--in the physical side of the relationship--by whatever is desired: e.g. 'a physical state,' 'a machine-table state,' 'a computational state,' 'a distinct disjunction of physical states,' or whatever. This has no bearing on the argument which follows.

12. This discussion is aimed mostly at one kind of functional theory: so-called "machine versions" of functionalism, as opposed to "analytical versions" of functionalism. The latter, put forward, for example, by Lewis and Armstrong, are discussed in Lurie (1979b).

FORREST WILLIAMS Psychological Correspondence: Sense and Nonsense

1. This is the description given in Lurie (1979a), pp. 138-139.

2. Cf. Wittgenstein (1966), p. 54.

3. Cf. Quine (1960), §§ 31-32 (passim).

4. There are "propositions asserting that something is believed, doubted, desired, and so on, insofar as such propositions are known independently of inference ... It will be observed that propositions of this class are usually, if not always, psychological. I am not sure that we could not use this fact to define psychology .... However that may be, there is certainly an important department of knowledge which is characterized by the fact that, among its basic propositions, some contain subordinate propositions." Russell (1940), p. 164.

5. Cf. Edmund Husserl, "Philosophy as Rigorous Science," in McCormick and Elliston (1981), p. 172: "the absurdity of a theory of knowledge based on


natural science, and thus, too, of any psychological theory of knowledge."

6. See especially §§ 32 and 33 of Heidegger's Being and Time as translated by Macquarrie and Robinson (1962): "As the disclosedness of the 'there,' understanding always pertains to the whole of Being-in-the-world ... When an assertion is made, some fore-conception is always implied, but it remains for the most part inconspicuous, because the language already hides in itself a developed way of conceiving."

RONALD MCINTYRE Husserl and the Representational Theory of Mind

* My thanks to David Woodruff Smith and Frank McGuinness for their invaluable help with the issues addressed in this paper.

1. Dreyfus (1982), p. 2.

2. See McIntyre (1984).

3. My views in fact are in basic agreement with those of Emmett (1983), although she seems to attribute Dreyfus' views to the contributors as well.

4. Putnam (1975), p. 220.

5. Fodor (1980), p. 64.

6. Husserl (1913), § 49, p. 115; my translation.

7. Husserl (1913), § 54, pp. 132-133; my translation.

8. Cummins (1983), p. 34.

9. Fodor (1981), p. 114.

10. Husserl (1913), § 54, pp. 132-133; my translation.

11. See Smith and McIntyre (1982), pp. 96-99.

12. See Husserl (1954), § 72, p. 264.

13. Husserl (1925), § 3(e), pp. 27-28; with translation changes.

14. Husserl (1925), § 4, pp. 36-37.

15. Cf. Husserl (1913), esp. §§ 87-91, 129-133. The interpretation of Husserl's notions of noema and noematic Sinn I appeal to here is developed in Smith and McIntyre (1982), esp. Chs. 3 and 4.

16. Fodor (1984), pp. 8-11.

17. See Husserl (1900), I, § 7; Husserl (1913), § 124; Husserl (1929), § 3. For development, and some qualifications, of this view see Smith and McIntyre (1982), pp. 170-187.

18. Cf. Husserl (1913), §§ 130-131; Smith and McIntyre (1982), pp. 194-219.

19. See Bach (1982), pp. 123-127; Stich (1983), pp. 127-148.

20. Husserl (1913), § 43, pp. 98-99; § 90, pp. 224-225. Cf. Husserl (1900), I, § 23.

21. Fodor (1980), p. 300.

22. Husserl (1925), § 3d, pp. 22-23; with trans. changes. Cf. Husserl (1900), V, § 25, p. 603; Husserl (1913), § 36, p. 80; Husserl (1931), § 14, pp. 32-33.

23. Cf. Smith (1984).

24. Husserl (1900), I, § 15, p. 293; with translation changes.

25. Husserl (1913), § 128, p. 315; my translation.

26. Searle (1984).

27. Churchland and Churchland (1981), p. 140.

28. Putnam (1975), esp. pp. 223-227.


29. See Bach (1982); Searle (1983), Ch. 8; and Smith (1984).

30. See Smith and McIntyre (1982), Ch. 5.

31. Dreyfus (1982), p. 10.

32. Fodor (1985), pp. 85-88, 96-99.

33. See Husserl (1931), §§ 19, 20.

34. McGinn (1982), for example, advocates an explication of one component of meaning in terms of an "intra-individual causal" or "cognitive" role, but he denies that reference is determined by this component of meaning. However, since by reference McGinn means de facto reference relations, it is not clear what happens to referential or intentional character on this theory of meaning.

35. For example, the whole of Husserl (1900), I.

36. Husserl (1913), § 86, p. 213; my translation. Cf. Husserl (1900), V, § 14, p. 565.

37. Husserl (1913), § 86, pp. 214-215; my translation.

38. Husserl (1913), § 142, p. 350; my translation and my emphasis.

39. Husserl (1931), § 21, p. 52; my emphasis.

40. Cf. Husserl (1913), §§ 59, 134.

41. Husserl (1913), § 71, p. 164; my translation.

42. Husserl (1913), § 72, p. 165; my translation.

43. Husserl (1913), § 75, pp. 173-174; my translation.

44. See Husserl (1913), § 134; cf. Husserl (1929).

45. Searle (1984), pp. 28-41.

46. Husserl (1913), § 75, p. 174; my translation.

47. See Dreyfus (1982), pp. 17-19.

KATHLEEN EMMETT Meaning and Mental Representation

1. Strictly speaking, the intentional object of a state need not be extramental. I can have a belief about a desire of mine; the object of my belief would then be another mental state. In the ensuing discussion I shall use "extramental" to mean outside the particular act under consideration.

2. R. McIntyre, this volume, pp. 74-75.

3. Fodor (1980), p. 70.

4. Ibid., p. 68.

5. McIntyre, p. 28.

6. Ibid., p. 22.

7. Searle (1983), p. 199.

8. Searle argues that the causal theory is a variant of the descriptivist theory and does not constitute an independent theory of naming at all. See Searle (1983), Ch. 9.

9. Kripke (1980), p. 97.

10. Ibid., p. 95.

11. Grice (1957), pp. 377-388.


HUBERT DREYFUS Husserl's Epiphenomenology


1. Heidegger (1982), henceforth referred to as BP. In all quotations I have changed Hofstadter's translation to fit conventions adopted by the translators of Being and Time.

2. Heidegger (1962), henceforth referred to as BT.

3. H. Dreyfus and S. Dreyfus (1986).

4. See Rumelhart and McClelland (1986).

5. Rumelhart and Norman (1982).

6. Gurwitsch (19xx), p. 67.

7. Husserl (1900/1970), Investigation I, Chapter 1.

8. Edmund Husserl (1970), p. 149.

9. McIntyre (1986), p. 110. My emphases.

10. For an example see Common-sense Summer: Final Report, Report No. CSLI-85-35, CSLI, Stanford University, p. 3.22, 1985.

11. McIntyre (1986).

12. Ibid.

13. Ibid., p. 106.

14. Ibid., p. 107.

15. Ibid., p. 108.

16. Ibid., p. 112.

17. Ibid.

18. Husserl (1913/1950), paragraph 75, p. 174; McIntyre's translation.

FOOTNOTES: CHAPTER TWO

ROBERT VAN GULICK Qualia, Functional Equivalence, and Computation

1. Kalke (1969).

2. Lewis (1972).

3. Block, "Introduction: What is Functionalism?" in Block (1980a), pp. 171-184.

4. Kalke (1969), Dennett (1971), and Van Gulick (1982).

5. Shoemaker (1975).

6. Searle (1980).

RAYMOND J. NELSON Mechanism and Intentionality: The New World Knot

1. Nelson Goodman, Ways of Worldmaking, Hackett Publishing Company, Indianapolis, 1978, pp. 4-5.

2. Holism of this variety is energetically represented, for example, by Grice (1974), pp. 32-35; and more recently by Peacocke (1979).

3. A computer logic model is any one of the following or perhaps a mixture: Turing machines, program schemes, finite automata, finite transducers, phrase


structure grammars, computer switching network and neural network formulas (recursive arithmetic). A treatment of these is presented in Nelson (1968), and their applications to philosophy of mind in Nelson (1982). A more up-to-date volume on automata is Hopcroft and Ullman (1979).

4. Most of this Section and parts of III and IV derive from my paper "Can Computers have Intentional Attitudes?" read at a Symposium, Language and Mind, John Carroll University, October 1984.

5. Early functionalists referred to everything in the head as a "state". This has created confusion (e.g. dispositions are not states, nor are belief attitudes, skills, talents, feelings, etc.). 'Entity' is a term traditionally used for any category or predicament. It might get strained here, but not as much as 'state' in the philosophical literature.

6. Hook (1960).

7. Cf. Putnam (1967).

8. See Nelson (1976), pp. 365-385.

9. This is a "place-holder" theory of dispositions in which the microstructure--the hardware adder--is already known. In psychology we speculate that attitudes qua dispositional are analogous computer-like neural network microstructures.

10. See Nelson (1982), p. 73.

11. Fodor (1982), pp. 18ff; and Searle (1980), also reprinted in Hofstadter and Dennett (1980).

12. Fodor, Op. Cit., p. 232.

13. This is no place to take up Kripke's arguments against the identity theory, which many consider to be devastating. See Kripke (1980), also in Davidson and Harman (1972). I think his argument can be defused. See Nelson (1982), pp. 331-335; and my article "On Causal Reference," forthcoming.

14. Quine (1960), pp. 221-222.

15. Fodor, Op. Cit.

16. D. Lewis, "General Semantics," in Davidson and Harman (1972), p. 170.

17. See Thatcher (1963). To see in more detail how self-description is used in taking, see Nelson (1976), pp. 24-52.

18. Chisholm (1957), p. 77.

19. In my "On Causal Reference," forthcoming.

20. See Nelson (1978), pp. 105-139; also Nelson (1982), pp. 254-257.

21. Quine (1967), pp. 147ff.

22. Chisholm, Op. Cit., p. 182.

23. See Grice, Op. Cit., p. 24; and Dennett (1978).

24. Fodor (1984).

JOHN W. BENDER Knotty, Knotty: Comments on Nelson's "New World Knot"

1. This objection can be found in Fodor and Block (1972), pp. 159-181. Reprinted in Block (1980a), volume I.

2. See Searle (1980), reprinted in Hofstadter and Dennett (1981).

3. Block (1978), in Block (1980a), p. 270.


4. Lewis, "Mad Pain and Martian Pain," in Block (1980a), pp. 216-222.

5. Putnam, "The Nature of Mental States," in Block (1980a), pp. 223-231.

6. Searle (1983).

CHRISTOPHER S. HILL Intention, Folk Psychology, and Reduction


* I have been helped considerably by conversations with Richard Boyd and Chris Swoyer. Also, I have benefitted from Swoyer's comments on an earlier version of the paper.

1. See, for example, Boolos and Jeffrey (1974), pp. 245-246.

2. See Quine (1960), § 45. See also Churchland (1981), pp. 67-80.

3. The view that ascriptions of beliefs carry a teleological presupposition is well argued in a still unpublished paper by Fodor ("Psychosemantics").

4. The principle based on (a)-(d) is similar in some respects to a principle that has been endorsed by Dennett:

(x)(x believes that p ≡ x can be predictively attributed the belief that p).

There are significant differences between the two principles. For example, Dennett does not spell out "x can be predictively attributed the belief that p" in terms of the explanatory power that "x believes that p" has in conjunction with the laws of Folk Psychology (and other hypotheses about x's intentional states). He prefers to appeal to a somewhat different set of laws that he calls "Intentional Systems Theory." See Dennett, "Three Kinds of Intentional Psychology," in Healey (1981), pp. 37-61. See especially p. 59.

5. See Nelson (1982), and the references given in that work.

6. Grice (1974), pp. 23-53.

JAMES A. TUEDIO Husserl on Objective Reference:

Intentional Transaction as a Primary Structure of Mind

1. Husserl's interest in mental experience can be traced to his earliest publications, written under the influence of Franz Brentano and Kasimir Twardowski. Many of these writings have been collected together in Aufsätze und Rezensionen (1890-1910), edited by Bernhard Rang, Husserliana 22 (Nijhoff: The Hague, 1979). See also Klaus Hedwig, "Intention: Outlines for the History of a Phenomenological Concept," Philosophy and Phenomenological Research 39, pp. 326-340.

2. Husserl calls this the "natural attitude," and discusses its relation to phenomenology in Ideas I (1913).

3. Op. Cit., Chapter 3.

4. In this context Husserl speaks of a return to "the things themselves."

5. I am indebted to the Smith/McIntyre interpretation of Husserl's "transcendental-phenomenological reduction," and to their presentation of the nature of Husserl's phenomenological method in general. See Smith and McIntyre (1982), pp. 88-104 for more details of their reading of Husserl's position.

6. One needn't perform transcendental-phenomenological reduction prior to an eidetic reduction, as Husserl makes clear in his Preface to the English trans-


lation of Ideas I. But one must perform the transcendental-phenomenological reduction prior to an eidetic reduction if one's intention is to grasp the essential structures of conscious life. It may be that one can grasp the essence of 'redness' independently of transcendental-phenomenological reflection, but one cannot grasp the essence of conscious life without such reflection. For this reason, I am suggesting (following Smith and McIntyre) that eidetic reduction constitutes the fourth and final stage of phenomenological method, and that it is dependent (within this domain of focus) on the prior stages of reduction and reflection. Husserl's emphasis on eidetic reduction and eidetic insight is less and less prominent as he passes from his "pre-transcendental" to his "transcendental" standpoint (a turn inaugurated in the aftermath of his 1907 Lectures on The Idea of Phenomenology and nearly completed by 1913, a year marked by the publication of Ideen I).

7. This issue ceases to be a concern once one has performed the transcendental-phenomenological reduction.

8. To my knowledge, this concept of "intentional correlation" is not explicitly developed by Husserl, but I take it to be compatible with his position.

9. I am indebted to Ronald McIntyre for this distinction. See his discussion and elaboration in the essay "Intending and Referring," in Dreyfus and Hall (1982), pp. 215-231. Compare Husserl's own reflections in Cartesian Meditations, especially his discussion of "horizon analysis," pp. 44ff.

10. William R. McKenna (1982), p. 194.

11. Susanne K. Langer, Philosophy in a New Key (Harvard University Press, 1957), pp. 26ff.

12. Cf. a similar discussion by Husserl in Erste Philosophie (1923/24), Zweiter Teil: Theorie der phänomenologischen Reduktion, Husserliana 8, edited by R. Boehm (Nijhoff, 1959), pp. 47ff.

13. As Husserl puts the matter in his Lectures on Passive Synthesis (1918-26), the conception under which a given object is intended can never be known to have exhausted the possible determinations of a given object. But our prescriptions may nevertheless "suffice" for our pragmatic or practical interests. See his discussion in Analysen zur passiven Synthesis (1918-26), Husserliana 11, edited by Margot Fleischer (Nijhoff, 1966), pp. 23ff.

14. McKenna (1982), p. 195.

15. For an elaboration of the notion of "definite description" viewed in relation to Husserl's theory of intentionality, see McIntyre's essay on "Intending and Referring" (note 9 above).

16. Cf. Husserl's Crisis, pp. 162-163. Husserl is equally explicit in Appendixes IV and IX of the Crisis, and takes up the idea of "confirmation once and for all" in Experience and Judgment, pp. 62-63.

17. For a detailed discussion of Husserl's notion of "horizon" in relation to his theory of intentionality, see the discussion by Smith and McIntyre in chapter 5 of Husserl and Intentionality, especially pp. 233ff, where they emphasize the two-fold dimension of a priori structure and sedimented experiential content. Compare Husserl's discussion in Cartesian Meditations, pp. 69ff, where the point is made that intentional experience would be forever "anonymous," were it not for the role of horizons in the constitution of


object-senses or noematic prescriptions. Compare also the claim that "there is no vision without the screen," in Merleau-Ponty (1969), p. 150.

STEVE FULLER Sophist vs. Skeptic: Two Paradigms of Intentional Transaction

1. For more on the thesis that "methodological solipsism" has been the implicit research program of both cognitive scientists and phenomenologists, see the introduction to Dreyfus (1982).

2. For more on the third-person perspective on organism/environment interactions, see, among psychologists, Brunswik (1956), Gibson (1979), and, among philosophers, Barwise and Perry (1983).

3. Briefly, Fichte (1910) articulated this project as a way of reconciling Kant's free moral self and his determined physical self. As Fichte saw it, the apparently material side of the human condition was really just a projection of those features of our minds--the feelings of external resistance to conscious intentions (Fichte's term: "unconscious")--of which we are not in full control: ignorance rendered into inertness, so to speak.

4. Much of the modern tendency to confuse the Skeptics and the Sophists can be traced to the fact that one of the few extant sympathetic sources of information about the Sophists, Sextus Empiricus' Outlines of Pyrrhonism, portrays the Sophists as precursors of Skepticism. And while both schools were indeed equally opposed to the "dogmatic" philosophies of Plato and Aristotle, only the Sophists clearly thought that ignorance could be relieved through the criticism of dogma; hence, the Sophists saw themselves as masters of a marketable skill (dialectic), whereas the Skeptics tended to withdraw from the public eye.

5. An attempt at the Sophist's phenomenology in this vein entirely outside the German idealist tradition is Dennett (1984), especially pp. 36-40.

6. For the case of Derrida, see Fuller (1983).

7. Given his general antipathy to German idealism and its ideological fruits (esp. Marxism), Popper's evolutionary epistemology is remarkable in its similarity to the Sophist's phenomenological project, even to the point of arguing that World Three--the realm of "problems" and other objects of reason--results as the unintended and unanticipated consequence of human practical engagement with their environment. Where Popper stops short, of course, is in refusing to claim that World One--the realm of matter--is itself contained within World Three. Petersen (1984) provides an excellent account of these Popperian ideas.

8. "Eliminative idealism" is the author's play on Paul Churchland's (1982) "eliminative materialist" position in contemporary philosophy of mind. However, the sense of "elimination" involved in the two cases is somewhat different. Whereas Churchland claims that there is no need for non-material entities, the eliminative idealist claims that there is no need for non-epistemic relations: that is, all entities that seem to exceed our

representational capacities are really reified forms, or projections, of our ignorance of the manner in which we have conceptualized them.

9. As Heidegger (1967, pp. 130-133) puts it etymologically, whereas Objekt is the natural complement of an act of thinking (i.e. the thing thought),


Gegenstand is the thing that stands "against" the thought itself (i.e., one can have false thoughts about it). The Objekt causes the subject to persist in the face of resistance from the Gegenstand.

10. Indeed, Hegel was so aware that "learning from one's mistakes" implied having control over the memory of one's previous intentional transactions that he postulated a faculty, Erinnerung, designed with precisely the required power. For discussion of this in light of cognitive science, see Risser (1986).

11. It has been unfortunately common among post-analytic philosophers such as Richard Bernstein and Richard Rorty to assimilate both Dewey and the later Wittgenstein to the view that any system of representation is necessarily "open-textured" and thus requires context in order to represent any particular case. Stated this vaguely, the view is equally applicable to Dewey and Wittgenstein, but the reasons that each would give differ sharply. The later Wittgenstein clearly does not believe that there is a fact of the matter about whether a concept applies to a case until it has been negotiated between the particular concept-appliers. In contrast to this sort of underdetermination, however, Dewey believes that cases are often overdetermined as a result of several concepts being applied rather unsystematically in the past, but all now laying claim to the case in question. For more, see Dewey (1922).

12. The difference between optimizing and satisficing arises from Herbert Simon's efforts to develop a model of the economic agent as a resource manager rather than as an ideal utility maximizer. Satisficing is now the leading rational decision-making strategy taught in business administration. The locus classicus is Simon (1957), Part IV. A good application of this strategy to theory choice in philosophy of science is Nickles (1986). Cf. Fuller (1985).

WILLIAM R. McKENNA Commentary on Tuedio's "Husserl on Objective Reference"

1. Kant (1783/1966), p. 38.

2. See Føllesdal (1982), pp. 73-80.

3. See Miller (1984), p. 48.

4. Husserl (1954/1970), pp. 142-147. For a discussion of this, see McKenna (1982), pp. 33-35.

5. Husserl, Crisis, pp. 148-152.

6. Husserl, Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy, translated by F. Kersten, Martinus Nijhoff, The Hague, 1982, Sections 41, 85 and 98. Kersten translates Auffassung as "construing."

7. Husserl, Analysen zur passiven Synthesis, Husserliana 11, edited by Margot Fleischer, Martinus Nijhoff, The Hague, 1966, p. 20.

8. Husserl, Ideas, § 131, pp. 313-316; and § 143, pp. 342-343.

9. Koffka (1935), pp. 27-28.

10. Husserl, Ideas, § 52, p. 119.

11. Ibid., pp. 119-120.

12. Ibid., pp. 122-123.

13. Ibid., p. 122.

14. See especially Crisis, Part III A.


FOOTNOTES: CHAPTER THREE

MICHAEL A. ARBIB Schemas, Cognition and Language: Toward a Naturalist Account of Mind

This paper grew out of my collaboration with Mary Hesse in our preparations for the Gifford Lectures given at the University of Edinburgh in 1983 (an expanded version of these lectures has been published as Arbib and Hesse 1986). I thank Professor Hesse once again for the great stimulation of our work together. The present paper is adapted from two chapters of my related book, In Search of the Person: Philosophical Explorations in Cognitive Science (University of Massachusetts Press, 1985), with kind permission of the University of Massachusetts Press. I refer the interested reader to this book for a fuller treatment of the topics treated herein.

HARRISON HALL Naturalism, Schemas, and the Real

Philosophical Issues in Contemporary Cognitive Science

1. See Fodor (1980) in Haugeland (1981).
2. Searle (1980).
3. Dreyfus (1979); and Dreyfus and Dreyfus (1986).
4. Minsky (1968), p. 26.
5. Dennett (1984), p. 134.
6. Fodor (1983), p. 129.
7. Benner (1984).
8. For a brief account of all five of these stages and the progression through them, see Hall (1987).

JAN EDWARD GARRETT Schemas, Persons, and Reality--A Rejoinder

1. Olen (1983), pp. 220-230; J.A. Fodor, "Functionalism," in Cahn, Kitcher, and Sher (1984).
2. Lonergan (1978), p. 110. For a thorough treatment of the relations between classical and statistical sciences and between sciences investigating different "levels," see Lonergan, chapters 2-4.
3. D.C. Dennett, "Artificial Intelligence as Philosophy and Psychology," in Cahn, Kitcher, and Sher (1984), p. 17.
4. Cf. Heraclitus, Fragment 107, in Kirk and Raven (1973), p. 189; Thomas Aquinas on "habit," Summa Theologiae I-II, qq. 49-55; and Aristotle on "hexeis," Nicomachean Ethics, II, 1-6, and VI, 1-7.
5. Husserl (1913/1962), especially pp. 91-93; also The Phenomenology of Internal Time-Consciousness, translated by James S. Churchill (Bloomington: Indiana University Press, 1973).
6. Heidegger (1962), especially §§ 24, 29 and 32.
7. Merleau-Ponty (1945/1978); Gadamer (1975), pp. 235-274.

8. Heidegger (1962), ch. 3, §§ 15-16.
9. Ibid., §§ 23-24.
10. Keen (1975), ch. 1.
11. Ihde (1977), pp. 44-45, 137-143; Ihde (1979), chs. 1-4.
12. M. Heidegger, Discourse on Thinking (New York: Harper Torchbooks, 1966); Heidegger, "On the Origin of the Work of Art," in Poetry, Language, Thought, translated by A. Hofstadter (New York: Harper and Row, 1971).
13. Gadamer (1975), pp. 317-333.
14. Cf. Gadamer (1975), pp. 265-266.
15. Heidegger (1962), § 28.
16. Jean-Paul Sartre, The Philosophy of Jean-Paul Sartre, ed. R.D. Cumming (New York: Random House, 1965), pp. 51-54.
17. Whitehead (1929), pp. 100-130; see Charles Hartshorne, "A World of Organisms" in The Logic of Perfection (La Salle, IL: Open Court, 1973).
18. Bernard Lonergan (1978), p. 516.
19. Aristotle, On the Soul, III, 7, 431a 15-17; III, 8, 432a 6-9; III, 5; 7, 431a 1; III, 7, 431b 16-17.
20. Aristotle, III, 4, 429b 5.
21. Aristotle, III, 4, 429a 18-20.
22. David Hume, A Treatise of Human Nature, vol. I, reprinted in part in Hume's Moral and Political Philosophy, ed. Henry D. Aiken (New York: Hafner Publishing Co., 1966), p. 25; Friedrich Nietzsche, Beyond Good and Evil, tr. W. Kaufmann (New York: Random House, 1966), sections 5-6, and 191; The Portable Nietzsche, tr. W. Kaufmann (New York: The Viking Press, 1968), p. 146. More recently, Richard Taylor has built an entire "voluntarist" ethics on the basis of a denial of the potential governing role of reason. Cf. his Good and Evil: A New Direction (New York: The MacMillan Co., 1970).
23. Aristotle, III, 4, 429b 8-9.

CHRISTOPHER FIELDS Background Knowledge and Natural Language Understanding

1. My paraphrase of Kant's position contains a significant interpretive bias. Most cognitive scientists take the view that perceivers actively select some features of the sensory field for further cognitive processing, and process these segments in particular, sometimes idiosyncratic ways. Ecological realists or Gibsonians take the opposing view that organisms are only sensitive to--i.e. can only detect and process--certain meaningful features of the environment. This view is found in Gibson (1979), Michaels and Carello (1981), and Turvey et al. (1981). It is criticized by Ullman (1980) and by Fodor and Pylyshyn (1981). I have argued the two approaches are structurally, but not explanatorily, equivalent (Fields, 1983).
2. This is true in all theories that postulate representations with contents similar to those of propositional attitudes. A class of "new connectionist" models in which propositional attitudes are not represented has recently been proposed (McClelland and Rumelhart, 1981; Feldman and Ballard, 1982; and Anderson, 1983). A discussion of meaning in these is beyond the present scope.

Some philosophers (e.g. Searle, 1979c) distinguish representations from presentations, arguing that we experience presentations, not representations. This is surely true, but it is beside the present point. Our experiences of presentations depend on the proper functioning of representational states, i.e. states that carry information about the world. We do not experience these states; they rather allow us to experience the world. If we experience a meaningful world, then these states must represent the world as meaningful.
3. The theory of noemata is developed in Husserl (1970) and extended in Husserl (1972). Dreyfus (1982) provides a clear and succinct review of the theory, and argues that it closely parallels much of current cognitive science. The latter view has been criticized by Emmett (1983).
4. Fregeans such as Searle disagree with proponents of the historical theory of reference (reviewed in Schwartz, 1977) about whether internal satisfaction conditions are sufficient to fix the references of terms. This leads to a disagreement about what the referents are. Situation semantics (Barwise and Perry, 1983) assumes that the referents of sentences are not truth values, but abstract "situation-types". There is every reason to think that a finite set of rules cannot pick out a situation-type, even if the rules in some sense represent the situation-type. For the present purposes, reference-fixing will be considered to be someone else's problem. As Fodor (1980) emphasizes, cognitive science is interested in how the agent uses knowledge to attempt reference, not in how or whether reference actually succeeds.
5. This situation is complicated further if the machine is running a code written in a programming language. Programs define "virtual machines," i.e. they make a physical machine appear, to the programmer, as if it were hard-wired to run programs in the programming language. A general purpose machine running a program only imitates, i.e. is input/output equivalent to, the virtual machine defined by the program. The information that allows this imitation is stored not in the program, but in the compilers or interpreters that use the program to generate executable machine code. Thus, explaining the behavior of the machine requires knowing the program, the compiler or interpreter that takes the program as input, and the architecture of the machine.
6. Methodological solipsism must be violated, however, if the goal includes explaining how the language is learned or how the language, or linguistic ability in general, evolved. These topics are considered by Lewis (1969)--the conventionalization approach; and Dretske (1981)--the causal approach.

NORTON NELKIN Internality, Externality, and Intentionality

* I would like to thank Carolyn Morillo, Ed Johnson, Graeme Forbes, and Radu Bogdan, all of whom helped me get clearer on the issues involved.
1. Fodor (1985), pp. 76-100.
2. Kripke (1982), p. 21.
3. See, for instance, Wittgenstein (1953), § 154, pp. 60-61.
4. But why would such results be bleak? Kripke calls Wittgenstein's denial of Internalism a form of skepticism. It's not clear to me why such a result is

in any way skeptical (see Kripke, esp. Chapter 2). Only if one goes on to claim that there can be no good theory which explains the relevant phenomena does one become a skeptic. But to deny Internalism simply to offer an Externalist account of intentionality is neither bleak nor skeptical.
5. For instance, there are some freshwater fish which don't even have a visual cortex, but which make the same color distinctions we make. See, for instance, Hurvich (1981), p. 138.
6. Even more dramatic cases are provided by persons born with water on the brain, who have had shunts inserted into their brains shortly after birth to drain the fluid. Some of these people have gone on to live normal and even exceptionally productive lives. However, CAT scans of these people's heads show them to be mostly hollow with only a few millimeters of brain tissue formed around the inside of the skull. See, for instance, Paterson (1980).
7. This is the point of Leibniz's example of walking into the eye, seeing all of its workings, and still having no idea of what the eye sees. It is, of course, also the point in Nagel (1974).
8. This illustration is meant as an analogy, not as an instance of the functions the CIs are talking about. The purpose of the analogy will become clear.
9. Dennett, who thinks they can be eliminated, is not a psychological realist. He thinks there is a sense of "really" such that no organism really has intentional states; so he needn't face these problems in the form the CIs do. In that way, Dennett is an Externalist. See his "Intentional Systems" in Dennett (1978), pp. 3-22.
10. See, for instance, Searle (1980), 3, pp. 417-457.
11. Compare his "Twin-Earth problem" from Putnam (1981), pp. 18-19. Also, Kripke (1982) raises similar difficulties, querying how rules that can be applied in potentially infinite situations could get encoded.
12. See, for instance, Wittgenstein (1953) and Kripke (1982).

ROBERT C. RICHARDSON Objects and Fields

This work was completed with the support of a grant from the National Endowment for the Humanities, and the Taft Committee of the University of Cincinnati. I am grateful for some discussion with H. Dreyfus, which helped to crystallize my understanding of many of the issues discussed here.
1. The means for explaining the harmony vary considerably, from a pre-established harmony to the guarantee of a Cartesian God.
2. It is useful to compare the account in the Passions with that in Meditation VI, and in particular the discussion of imagination, in which Descartes describes imagination as "nothing else but an application of the cognitive faculty to a body which is intimately present to it" (1641, p. 50). For some related discussion, see Richardson (1982a, 1985).
3. This view foreshadows Michael Dummett's construal of the sense of an expression as the conditions for its applicability.
4. The principal difference is that classical associationists took impressions and "simple ideas" as representational in their own right. New mechanists

take representational content to result from the network of associations.
5. For a discussion of the treatment of psychologism and the Turing Test in Block (1981) which exemplifies this point, see Richardson (1982b).
6. As I understand it, the point of H. Dreyfus' What Computers Can't Do (1972) is that the pattern of failures in model systems suggests that the models cannot be successfully generalized. As he sees, this is essentially an empirical issue, but that hardly counts against his view that there is indeed a pattern to the failures. See also Dreyfus (1982) for related discussions.
7. See Richardson (1981, 1983) for discussion of such a prospect.

HERBERT R. OTTO Meaning Making: Some Functional Aspects

1. Searle (1971).
2. Austin (1962).
3. Searle, Op. Cit.
4. Otto (1978).
5. W.V. Quine, "Methodological Reflections on Current Linguistic Theory," in Davidson and Harman (1972).
6. Ibid.
7. Katz (1972).
8. Dowty, Wall, and Peters (1981), p. 181.
9. Barwise and Perry (1983).
10. Hintikka (1984).
11. Otto, Op. Cit.
12. Dowty, Wall, and Peters, Op. Cit., p. 68.
13. Haack (1978).

HERBERT E. HENDRY Comments on Otto on Translation

* I am grateful to my colleagues Richard J. Hall and Carol Slater for helpful comments on an earlier draft of this essay.
1. See Otto (1978).
2. Jackson (1979), pp. 565-689.

STEVE FULLER Blindness to Silence: Some Dysfunctional Aspects of Meaning Making

1. The source/target language distinction, which Otto occasionally uses, is professional translator's jargon for the difference between the original language in which a text was uttered (SL) and the language into which the text is translated (TL). For subtleties of the notion, see Bassnett-McGuire (1980).
2. "Translation" and "interpretation" are used interchangeably in this paper, following Quine's (if not Davidson's) usage. A little noticed feature of these two terms is that there is no agreement as to which is the more primitive. Whereas analytic philosophers tend to view construction of a translation manual

as a precondition for interpreting particular alien speakers, continental philosophers and practitioners of the human sciences tend to treat interpretation as a largely prelinguistic understanding of an alien culture that often never advances to the stage of explicit translation.
3. Skinner (1969) is a rich discussion of the consequences of the idea that historical agents did not utter with the historian in mind.
4. Foucault has been the beneficiary of the usual historical myopia, insofar as his "strikingly original" archaeological approach had always been a key feature of mainstream French historiography. Fustel de Coulanges, Durkheim's mentor at the Sorbonne, applied what he called "the Cartesian method of doubt" to historical documents: to wit, if there is no obvious record of the intentions we so readily attribute to the "founders" of some institution, then we should assume that they had no such intentions. See Cassirer (1950), chap. 18.

FOOTNOTES: CHAPTER FOUR

JOSEPH MARGOLIS Pragmatism, Phenomenology, and the Psychological Sciences

1. I have attempted a pertinent comparison between Quine and Heidegger, in Margolis (1985a).
2. I have pursued the issue in a number of places. See particularly Margolis (1985b).
3. Quine, "Two Dogmas of Empiricism," in Quine (1953); Heidegger (1962).
4. See Quine (1960), Ch. 2.
5. See Martin Heidegger, "Kant's Manner of Asking about the Thing" in Heidegger (1967).
6. This, for instance, is why it is preposterous, though tempting, to read Thomas Kuhn as a Heideggerean theorist of science. See Rouse (1981).
7. See Husserl (1954/1970).
8. See Rorty (1979, 1982).
9. Cf. Rorty (1979), Ch. 5.
10. Perhaps the most telltale version of this sort of philosophical confidence, formulated for the psychological sciences, naive about its own legitimation but instructive in seemingly justifying the elimination of cognizing agents, may be found in Gibson (1979).
11. Popper (1982), pp. 161-62.
12. See Margolis (1984).
13. See Otto Neurath, "Protocol Sentences," translated by George Schick, in Ayer (1959); and Rudolf Carnap, "Psychology in Physical Language," translated by George Schick, in the same volume.
14. Wilfrid Sellars, "The Language of Theories," in Sellars (1963), p. 126.
15. Sellars, "Philosophy and the Scientific Image of Man," op. cit., p. 40.
16. Dennett (1969), p. 190. Cf. also Dennett (1978).
17. See Donald Davidson, "Mental Events," in Davidson (1980).
18. See for instance Margolis (1984), Ch. 2.

19. See Donald Davidson, "Reality without Reference," "The Inscrutability of Reference," "In Defence of Convention T," in Davidson (1984). I have discussed Davidson's view and other views, in some detail, in "Eliminating Selves in the Psychological Sciences," in Young-Eisendrath and Hall (1987).
20. The neglect of the double nature of the theme may be seen in W.V. Quine, "Epistemology Naturalized" in Quine (1967).
21. See Chisholm, "Brentano's Descriptive Psychology," in McAlister (1976).
22. The contrast between first-phase and second-phase simulation is very usefully employed by Fodor (1968). The issue bears on the famous Turing Game puzzle. See for instance Searle (1980). This is not, of course, to subscribe to Searle's own formulation and resolution of the issue.
23. See Hilary Putnam, "The Mental Life of Some Machines," "The Nature of Mental States," in Putnam (1975), particularly p. 435.
24. Dretske (1981), p. vii; italics added; cf. also, Ch. 3.
25. See Margolis (1986).
26. The same difficulty may be ascribed to Jerry Fodor's nativism. See Fodor (1981); and Margolis (1986).
27. Fodor (1981).
28. Stich (1983), p. 149.
29. Loc. cit.
30. Ibid., p. 5.
31. Ibid., pp. 149, 151.
32. Cf. ibid., pp. 157-160.
33. Ibid., p. 156.
34. See also Churchland (1979); and Churchland and Churchland (1981).
35. Cf. Davidson, "Mental Events" in Davidson (1980).
36. The interrelationship between language and culture is explored in Margolis (1984).
37. See Chomsky (1980); Fodor (1975, 1981).
38. See Gadamer (1975); also Ricoeur (1967, 1981).
39. See Margolis (1984), Ch. 7.
40. See Merleau-Ponty (1973).
41. See Gier (1981).

RALPH W. SLEEPER The Soft Impeachment: Responding to Margolis

1. Gadamer (1981), p. 167. Quotations from Margolis are to the present essay.
2. Cf. Margolis (1984a); also Margolis (1984d). Margolis has attacked Quine in both essays for lending tacit support for "foundationalism" by covertly implying that "... science as a whole is somehow cognitively testable" (Margolis, 1984a, p. 71). The argumentative technique of the present essay, therefore, is a familiar one. I first noticed it in Margolis's contribution to New Studies in the Philosophy of John Dewey, ed. S.M. Cahn (Hanover, N.H.: University Press of New England, 1977).
3. Cf. Quine (1974); Donald Davidson, "Reality Without Reference," in Davidson (1984).

4. By far the best essay comparing Dewey and Heidegger is Michael Sukale's treatment directly relating them via their respective metaphysics. It is an approach that Rorty cannot take owing to his distaste for metaphysics. See Sukale (1976). Unlike Margolis, Sukale limits his comparison to explicit (textual) material.
5. I owe the observation that en passant fallibilism is very different from fallibilism tout court to John J. McDermott of Texas A&M University. I share his commitment to the latter form.
6. I shall return to this "signal weakness" later. It is crucial to the understanding of Margolis's argument.
7. I extrapolate the term "conceptual ideology" from the context of his treatment of "perceptual ideology" in Quine (1983).
8. See note 6 above. I do not question Margolis's ability to trace the "reversely parallel" tradition in Continental thought as the "fable" about Dasein. It is just that it must be traced in this way that is the "signal weakness" already identified.
9. Margolis forgets that the Wittgenstein of the Investigations had dropped the "phenomenological language" that is still retained by Husserl in his Logical Investigations. If Wittgenstein's Investigations opens the way to analysts to treat of "intentionality and the theme of phenomenological subjectivity"--as Margolis suggests--it will not be by means of the use of "phenomenological language," but the "everyday" natural language. Thus Wittgenstein: "I used to believe that there was the everyday language that we all spoke and a primary language that expressed what we really knew, namely phenomena ... I do not now have phenomenological language, or 'primary language' as I used to call it, in mind as my goal. I no longer hold it to be necessary ... I think that essentially we have only one language, and that is our everyday language." These passages indicate that Wittgenstein had cast off the thrall of Frege's theory of language and meaning while Husserl was still under its spell. (Cf. Wittgenstein in McGuinness (1979), p. 45; McGuinness § 1, pp. 1-51. Cf. Husserl (1970), pp. 104, 109-10, 139, 181-2 and 208-9.)
10. I put matters this way to suggest that behind Margolis's "conceptual ideology" is an a priori "conceptual ontology." All that we can know of this ontology is that it is not "physicalistic" in the sense that Quine is "known to favor." What other traits it might have are not disclosed by Margolis. It may be that they could be represented only in a "private language."
11. John Dewey (1981).

BIBLIOGRAPHY

Albert, K. (1981) "Mechanism and Metamathematics", unpublished senior thesis, State University of New York at Purchase.

Alston, W. (1971) "Varieties of Privileged Access" in American Philosophical Quarterly, 8 (3), pp. 223-241.

Anderson, J. (1983) The Architecture of Cognition. Cambridge: Harvard University Press.

Anscombe, G. (1975) "The First Person" in Mind and Language edited by S. Guttenplan.

Arbib, M.A. (1981) "Perceptual Structures and Distributed Motor Control" in Handbook of Physiology--The Nervous System II: Motor Control edited by V.B. Brooks, pp. 1449-1480. American Physiological Society.

Arbib, M.A. (1985) In Search of the Person: Philosophical Explorations in Cognitive Science. The University of Massachusetts Press.

Arbib, M.A. (1987) "Levels of Modeling of Mechanisms of Visually Guided Behavior" in The Behavioral and Brain Sciences, in press.

Arbib, M.A. (1987) The Metaphorical Brain: An Introduction to Schemas and Brain Theory, Second Edition. Wiley-Interscience.

Arbib, M.A. and M.B. Hesse (1986) The Construction of Reality. Cambridge: Cambridge University Press.

Arbib, M.A., E.J. Conklin, and J.C. Hill (1987) From Schema Theory to Language. Oxford: Oxford University Press.

Austin, J.L. (1962) How to Do Things With Words. Oxford: Clarendon Press.

Ayer, A.J. (1956) The Problem of Knowledge. London: Macmillan.

Ayer, A.J., ed. (1959) Logical Positivism. Glencoe, Illinois: Free Press.

Bach, K. (1982) "De re Belief and Methodological Solipsism" in Thought and Object: Essays on Intentionality edited by A. Woodfield, pp. 121-151. Oxford: Clarendon Press.

Bain, A. (1861) On the Study of Character, Including an Estimate of Phrenology. London: Parker.

Barr, A. and E.A. Feigenbaum (1981) The Handbook of Artificial Intelligence. Los Altos, California: Kaufmann.

Bartlett, F.C. (1932) Remembering. Cambridge: Cambridge University Press.

Barwise, J. and J. Perry (1983) Situations and Attitudes. Cambridge: MIT Press.

Bassnett-McGuire, Susan (1980) Translation Studies. London: Methuen.

Bealer, G. (1982) Quality and Concept. New York: Oxford University Press.

Benner, P. (1984) From Novice to Expert: Excellence and Power in Clinical Nursing Practice. Addison-Wesley.

Berliner, H. (1980) "Computer Backgammon" in Scientific American, 242 (6), pp. 64-85.

Beth, E.W., and J. Piaget (1966) Mathematical Epistemology and Psychology, translated from the French by W. Mays. Reidel.

Block, N.J. (1978) "Troubles with Functionalism" reprinted in Block (1980a).

Block, N.J., ed. (1980a) Readings in the Philosophy of Psychology. Cambridge: Harvard University Press.

Block, N.J. (1980b) "Are Absent Qualia Impossible?" in The Philosophical Review 89, pp. 257-274.

Block, N.J. (1981) "Psychologism and Behaviorism" in The Philosophical Review 90, pp. 5-43.

Block, N.J. and J.A. Fodor (1972) "What Psychological States are Not" in Philosophical Review 81, No. 2, pp. 159-181.

Boolos, G. and R. Jeffrey (1974) Computability and Logic. Cambridge: Cambridge University Press.

Brady, J.M. and R.C. Berwick, eds. (1983) Computational Models of Discourse. Cambridge: MIT Press.

Brandt, R. and J. Kim (1969) "The Logic of the Identity Theory" in Modern Materialism: Readings on Mind-Body Identity edited by J. O'Connor. Harcourt Brace & World.

Brentano, F. (1874/1973) Psychology from an Empirical Standpoint edited by L. McAlister, translated by A.C. Rancurello, D.B. Terrell, and L. McAlister. Humanities Press.

Brown, G. (1974) "The Believer System" in Technical Report RUCBM-TR-34, July, Department of Computer Science, Rutgers University.

Brown, H. (1977) "On Being Rational" in American Philosophical Quarterly.

Brunswik, E. (1956) Perception and the Representative Design of Psychological Experiments. Berkeley: University of California Press.

Burge, T. (1978) "Belief and Synonymy" in Journal of Philosophy 75 (3), March, pp. 119-138.

Cahn, S.M. (1977) New Studies in the Philosophy of John Dewey. Hanover, New Hampshire: University Press of New England.

Cahn, S.M., P. Kitcher, and G. Sher, eds. (1984) Reason at Work. New York: Harcourt Brace Jovanovich.

Cassirer, E. (1950) The Problem of Knowledge. New Haven: Yale University Press.

Castañeda, H. (1968) "On the Logic of Attributions of Self-Knowledge to Others" in Journal of Philosophy 65, pp. 439-456.

Cherniak, C. (1986) Minimal Rationality. Cambridge: MIT Press/Bradford Books.

Chisholm, R.M. (1957) Perceiving: A Philosophical Study. Ithaca: Cornell University Press.

Chisholm, R.M. (1981) The First Person. Minneapolis: University of Minnesota Press.

Chomsky, N. (1957) Syntactic Structures. Mouton and Co.

Chomsky, N. (1959) Review of Skinner's Verbal Behavior in Language 35, pp. 26-58.

Chomsky, N. (1965) Aspects of the Theory of Syntax. Cambridge: MIT Press.

Chomsky, N. (1968) "Recent Contributions to the Theory of Innate Ideas" in Boston Studies in the Philosophy of Science 3, pp. 81-107. Reidel.

Chomsky, N. (1972) Language and Mind (enlarged edition). New York: Harcourt Brace Jovanovich.

Chomsky, N. (1980) Rules and Representations. New York: Columbia University Press.

Church, A. (1946) "A Review of Morton White's 'A Note on the Paradox of Analysis'" in Journal of Symbolic Logic 11, pp. 132-33.

Churchland, P.M. (1979) Scientific Realism and the Plasticity of Mind. Cambridge: Cambridge University Press.

Churchland, P.M. (1981) "Eliminative Materialism and the Propositional Attitudes" in Journal of Philosophy 78, pp. 67-80.

Churchland, P.M. (1982) Matter and Consciousness. Cambridge: MIT Press.

Churchland, P.M. and P.S. Churchland (1981) "Functionalism, Qualia, and Intentionality" in Philosophical Topics 12, pp. 121-145.

Churchland, P.S. (1986) Neurophilosophy: Toward a Unified Science of the Mind/Brain. Cambridge: MIT Press/Bradford Books.

Conklin, E.J. (1983) "Salience as a Heuristic in Planning Text Generation," Ph.D. dissertation, Department of Computer and Information Science, University of Massachusetts at Amherst.

Cook, J. (1969) "Human Beings" in Studies in the Philosophy of Wittgenstein, edited by P. Winch, pp. 117-151. New York: Humanities Press.

Craik, K.J.W. (1943) The Nature of Explanation. Cambridge: Cambridge University Press.

Culbertson, J. (1982) Consciousness: Natural and Artificial. Roslyn Heights, New York: Libra Publishers.

Cummins, R. (1983) The Nature of Psychological Explanation. Cambridge: MIT Press/Bradford Books.

Danto, A. (1960) "On Consciousness in Machines" in Hook (1961), pp. 180-187.

Darley, J.M. and E. Berscheid (1967) "Increased Liking as a Result of the Anticipation of Personal Contact" in Human Relations 20, pp. 29-40.

Davidson, D. (1967) "Truth and Meaning" in Synthese, 17, pp. 304-323.

Davidson, D. (1980) Essays on Actions and Events. Oxford: Clarendon Press.

Davidson, D. (1984) Inquiries into Truth and Interpretation. Oxford: Oxford University Press.

Davidson, D. and G. Harman, eds. (1972) Semantics of Natural Language. Dordrecht: Reidel Publishing Company.

Davidson, J. and R. Davidson, eds. (1980) The Psychobiology of Consciousness. New York: Plenum Press.

Davidson, R., G. Schwartz, and D. Shapiro, eds. (1983) Consciousness and Self-Regulation 3. New York: Plenum Press.

Davis, L.H. (1982) "Functionalism and Absent Qualia" in Philosophical Studies 41, pp. 231-249.

Dennett, D. (1969) Content and Consciousness. London: Routledge & Kegan Paul.

Dennett, D. (1971/1978) "Intentional Systems" in Journal of Philosophy 68, pp. 87-106.

Dennett, D. (1975) "Brain Writing and Mind Reading" in Minnesota Studies in the Philosophy of Science 7, pp. 403-415.

Dennett, D. (1978) Brainstorms: Philosophical Essays on Mind and Psychology. Cambridge: MIT Press/Bradford Books.

Dennett, D. (1984a) "Cognitive Wheels: The Frame Problem of AI" in Hookway (1984).

Dennett, D. (1984b) Elbow Room: The Varieties of Free Will Worth Wanting. Cambridge: MIT Press.

Descartes, R. (1641/1911) "Meditations on First Philosophy (with replies and objections)" in The Philosophical Works of Descartes edited by E.S. Haldane and G.R.T. Ross. London: Cambridge University Press.

Descartes, R. (1649/1984) The Passions of the Soul, translated by John Cottingham, R. Stoothoff, and D. Murdoch. Cambridge: Cambridge University Press.

Deutsche, J.A. (1960) The Structural Basis of Behavior. Chicago: University of Chicago Press.

Dewey, John (1922) Human Nature and Conduct. New York: Random House.

Dewey, John (1981) Experience and Nature. Later Works, Volume 1. Carbondale, Illinois: Southern Illinois University Press.

Doore, Gary (1981) "Functionalism and Absent Qualia" in Australasian Journal of Philosophy 59, pp. 387-402.

Dowty, D., R. Wall, and S. Peters (1981) Introduction to Montague Semantics. Dordrecht, Holland: Reidel Publishing Company.

Dretske, F. (1981) Knowledge and the Flow of Information. Cambridge: MIT Press/Bradford Books.

Dretske, F. (1985) "Machines and the Mental" in Proceedings and Addresses of the American Philosophical Association 59, pp. 23-33.

Dreyfus, H. (1972/1979) What Computers Can't Do. New York: Harper and Row.

Dreyfus, H. and H. Hall, eds. (1982) Husserl, Intentionality, and Cognitive Science. Cambridge: MIT Press/Bradford Books.

Dreyfus, H. and S. Dreyfus (1986) Mind Over Machine: The Power of Human Intuition in the Era of the Computer. MacMillan/Free Press.

Eccles, J.C., ed. (1966) Brain and Conscious Experience. Berlin: Springer Verlag.

Elster, J., ed. (1986) Rational Choice. New York: New York University Press.

Elvee, R., ed. (1982) Mind in Nature (Nobel Conference 17). San Francisco: Harper & Row.

Emmett, K. (1983) "Phenomenology and Cognitive Science: Should You Be Reading Husserl?" in Cognition and Brain Theory 6, pp. 509-516.

Feldman, J. and D. Ballard (1982) "Connectionist Models and Their Properties" in Cognitive Science 6, pp. 205-254.

Festinger, L. (1957) Cognitive Dissonance. Stanford: Stanford University Press.

Feyerabend, P. (1975) Against Method. London: New Left Books.

Feyerabend, P. (1981) "Two Models of Epistemic Change: Mill and Hegel" in Problems of Empiricism, pp. 65-79. Cambridge: Cambridge University Press.

Fichte, J.G. von (1910) The Vocation of Man. Chicago: Open Court.

Field, H. (1978) "Mental Representation" in Block (1980a), pp. 78-114.

Fields, C.A. (1983) "Computational and Ecological Approximations to Perceptual Systems" in Abstracts of the 7th International Congress of Logic, Methodology and Philosophy of Science 5, pp. 31-34.

Flanagan, 0, (1984) The Science of the Mind, Cambridge: MIT Press/Bradford Books.

Fleischer M., ed. (1966) Analysen zur Passiven Synthesis (19 J 8-26), Husserliana II. Nijhoff.

Føllesdal, D. (1982) "Husserl's Notion of Noema" in Dreyfus and Hall (1982).

Fodor, J.A. (1968) Psychological Explanation. New York: Random House.

Fodor, J.A. (1975) The Language of Thought. New York: Crowell.

Fodor, J.A. (1980) "Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology" in The Behavioral and Brain Sciences 3, pp. 63-109; reprinted in Haugeland (1981); and in Dreyfus (1982), pp. 277-303.

Fodor, J.A. (1981) Representations. Cambridge: MIT Press.

Fodor, J.A. (1981a) "The Mind-Body Problem" in Scientific American 244, pp. 114-123.

Fodor, J.A. (1982) Representation: Philosophical Essays in the Foundations of Cognitive Science. Cambridge: MIT Press.

Fodor, J.A. (1983) The Modularity of Mind. Cambridge: MIT Press.

Fodor, J.A. (1984) "Mental Representation: An Introduction", photocopy of colloquium presentation at the University of California, Irvine, 1984.

Fodor, J.A. (1985) "Fodor's Guide to Mental Representation: The Intelligent Auntie's Vade-Mecum" in Mind 94, pp. 76-100.

Fodor, J.A. (1987) Psychosemantics. Cambridge: MIT Press.

Fodor, J.A. and Z. Pylyshyn (1981) "How Direct is Visual Perception?" in Cognition 9, pp. 139-196.

Foucault, M. (1975) Archaeology of Knowledge. New York: Harper and Row.

Fuller, S. (1983) "A French Science (With English Subtitles)" in Philosophy and Literature 7, pp. 3-14.

Fuller, S. (1985) Bounded Rationality in Law and Science. Unpublished Ph.D. dissertation, Department of the History and Philosophy of Science, University of Pittsburgh.

Gadamer, H-G. (1975) Truth and Method translated by G. Barden and J. Cumming. New York: Seabury.

Gadamer, H-G. (1981) Reason in the Age of Science translated by F.G. Lawrence. Cambridge: MIT Press.

Gallie, W.B. (1957) "Essentially Contested Concepts" in Proceedings of the Aristotelian Society.

Gödel, K. (1931) "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I" in Monats. Math. Phys. 38, pp. 173-198.

Gibson, J.J. (1979) The Ecological Approach to Visual Perception. Boston: Houghton-Mifflin.

Gier, N.F. (1981) Wittgenstein and Phenomenology. Albany: SUNY Press.

Gleitman, H. (1981) Psychology. New York: Norton & Co.

Globus, G., G. Maxwell, and I. Savodnik, eds. (1976) Consciousness and the Brain. New York: Plenum Press.

Goffman, E. (1974) Frame Analysis: An Essay on the Organization of Experience. Harper and Row.

Goodman, N. (1949) "On Likeness of Meaning" in Analysis.

Goodman, N. (1963) Fact, Fiction, and Forecast. Indianapolis: Bobbs-Merrill.

Goodman, N. (1978) Ways of World Making. Indianapolis: Hackett Publishing Co.

Graesser, A.C. and L.F. Clark (1984) The Structures and Procedures of Implicit Knowledge. Norwood, New Jersey: Ablex.

Gregory, R.L. (1969) "On How So Little Information Controls So Much Behavior" in Towards a Theoretical Biology, 2: Sketches edited by C.H. Waddington. Edinburgh University Press.

Grene, J. (1972) Psycholinguistics. Baltimore: Penguin.

Grice, H.P. (1957) "Meaning" in Philosophical Review 66, pp. 377-388.

Grice, H.P. (1974) "Method in Philosophical Psychology" in Proceedings and Addresses of the American Philosophical Association 48, pp. 32-35.

Grice, H.P. (1975) "Logic and Conversation" in Syntax and Semantics, Vol. 3: Speech Acts, edited by P. Cole and J.L. Morgan. New York: Academic Press.

Gur, R. and H. Sackheim (1979) "Self-Deception: A Concept in Search of a Phenomenon" in Journal of Personality and Social Psychology 37 (2).

Gurwitsch, A. (19??) Human Encounters in the Social World. Pittsburgh: Duquesne University Press.

Guthrie, W.K.C. (1969) A History of Greek Philosophy: The Fifth Century Enlightenment. Cambridge: Cambridge University Press.

Haack, S. (1978) Philosophy of Logics. Cambridge: Cambridge University Press.

Hacking, I. (1975) Why Does Language Matter to Philosophy? Cambridge: Cambridge University Press.

Hacking, I. (1979) "Michel Foucault's Immature Science" in Noûs.

Hacking, I. (1982) "Language, Truth, and Reason" in Rationality and Relativism edited by M. Hollis and S. Lukes. Cambridge: MIT Press.

Hall, H. (1987) "Phenomenology" in the Encyclopedia of Artificial Intelligence. John Wiley.

Hanson, A.R., E.M. Riseman, J. Griffith, and T. Weymouth (1986) "A Methodology for the Development of General Knowledge-Based Systems" in IEEE Trans. Pattern Analysis and Machine Intelligence.

Harman, G. (1965) "Inference to the Best Explanation" in Philosophical Review 74, pp. 88-95.

Harman, G. (1973) Thought. Princeton: Princeton University Press.

Haugeland, J. (1979) "Understanding Natural Language" in Journal of Philosophy 76, pp. 619-632.

Haugeland, J., ed. (1981) Mind Design. Cambridge: MIT Press/Bradford Books.

Head, H. and G. Holmes (1911) "Sensory Disturbances from Cerebral Lesions" in Brain 34, pp. 102-254.

Healey, R., ed. (1981) Reduction, Time and Reality. Cambridge: Cambridge University Press.

Hegel, G.W.F. (1892) Lectures on the History of Philosophy. London: Routledge and Kegan Paul.

Heidegger, M. (1927) Sein und Zeit in the Jahrbuch für Philosophie und phänomenologische Forschung, VIII.

Heidegger, M. (1962) Being and Time, translated by J. Macquarrie and E. Robinson. New York: Harper and Row.

Heidegger, M. (1967) What Is a Thing? translated by W.B. Barton, Jr. and V. Deutsch. Chicago: Henry Regnery.

Heidegger, M. (1982) The Basic Problems of Phenomenology translated by A. Hofstadter. Indiana University Press.

Hempel, C. (1965) Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. New York: Free Press.

Hess, R. (1975) "The Role of Pupil Size in Communication" in Scientific American 233 (5), pp. 110-118.

Hesse, M.B. (1980) "The Explanatory Function of Metaphor" in Revolutions and Reconstructions in the Philosophy of Science, pp. 111-124. Indiana University Press.

Hilgard, E. (1977) Divided Consciousness: Multiple Controls in Human Thought and Action. New York: Wiley & Sons.

Hill, J.C. (1983) "A Computational Model of Language Acquisition in the Two-Year-Old" in Cognition and Brain Theory 6, pp. 287-317.

Hill, J.C., and M.A. Arbib (1984) "Schemas, Computation and Language Acquisition" in Human Development 27, pp. 282-296.

Hintikka, J. (1984) The Game of Language. Dordrecht, Holland: Reidel Publishing Company.

Hofstadter, D. and D. Dennett, eds. (1980) The Mind's I. New York: Basic Books.

Hook, S., ed. (1961) Dimensions of Mind. New York: New York University Press.

Hookway, C. (1984) Minds, Machines and Evolution. Cambridge: Cambridge University Press.

Hopcroft, J.E. and J.D. Ullman (1979) Introduction to Automata Theory, Languages and Computation. Reading, Massachusetts: Addison-Wesley.

Horgan, T. and J. Woodward (1985) "Folk Psychology is Here to Stay" in Philosophical Review 94 (2), pp. 197-225.

Hume, D. (1739/1965) A Treatise of Human Nature edited by L.A. Selby-Bigge. Oxford: Clarendon.

Hurvich, L.M. (1981) Color Vision. Sunderland, Massachusetts: Sinauer and Associates.

Husserl, E. (1913/1950) Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie, Erstes Buch (Husserliana V) edited by Marly Biemel. The Hague: Nijhoff.

Husserl, E. (1913/1962) Ideas: General Introduction to Pure Phenomenology, translated by W.R. Boyce Gibson. New York: Collier.

Husserl, E. (1900/1970) Logical Investigations, translated by J. Findlay. New York: Humanities Press.

Husserl, E. (1925/1962) Phenomenological Psychology, translated by J. Scanlon. The Hague: Nijhoff.

Husserl, E. (1929/1969) Formal and Transcendental Logic, translated by D. Cairns. The Hague: Nijhoff.

Husserl, E. (1931/1960) Cartesian Meditations, translated by D. Cairns. The Hague: Nijhoff.

Husserl, E. (1931/1973) Experience and Judgment translated by K. Ameriks and J.S. Churchill. Northwestern University Press.

Husserl, E. (1954/1970) The Crisis of European Sciences and Transcendental Phenomenology, translated by D. Carr. Evanston, Illinois: Northwestern University Press.

Ihde, D. (1977) Experimental Phenomenology. New York: G.P. Putnam's Sons.

Ihde, D. (1979) Technics and Praxis. Boston: D. Reidel.

Jackson, F. (1979) "On Assertion and Indicative Conditionals" in Philosophical Review 78, pp. 565-689.

Jackson, F. (1982) "Epiphenomenal Qualia" in Philosophical Quarterly 32, pp. 127-136.

James, W. (1912) "Does Consciousness Exist?" in Essays in Radical Empiricism and a Pluralistic Universe edited by R.B. Perry. New York: Dutton.

Jaynes, J. (1977) The Origins of Consciousness in the Breakdown of the Bicameral Mind. New York: Houghton-Mifflin.

John, E. (1976) "A Model of Consciousness" in Consciousness and Self-Regulation, Vol. 1, edited by G. Schwartz and D. Shapiro. New York: Plenum.

Kahneman, D. (1973) Attention and Effort. Englewood Cliffs: Prentice Hall.

Kalke, W. (1969) "What is Wrong with Fodor and Putnam's Functionalism?" in Noûs 3, pp. 83-93.

Kant, I. (1783/1966) Prolegomena to Any Future Metaphysics translated by Peter G. Lucas. Manchester University Press.

Kaplan, D. (1979) "On the Logic of Demonstratives" in Contemporary Perspectives in the Philosophy of Language edited by P. French, T. Uehling, and H. Wettstein, pp. 401-414. Minneapolis: University of Minnesota Press.

Katz, J.J. (1972) Semantic Theory. New York: Harper & Row.

Keen, E. (1975) A Primer in Phenomenological Psychology. New York: Holt, Rinehart, and Winston.

Kirk, G.S. and J.E. Raven (1973) The Presocratic Philosophers. Cambridge: Cambridge University Press.

Knapp, P. (1976) "The Mysterious 'Split': a Clinical Inquiry into Problems of Consciousness and Brain" in Globus, Maxwell, and Savodnik (1976).

Koffka, K. (1935) Principles of Gestalt Psychology. New York: Harcourt, Brace & World.

Koyré, A. (1969) Newtonian Studies. Chicago: University of Chicago Press.

Kripke, S.A. (1972/1980) Naming and Necessity. Cambridge: Harvard University Press.

Kripke, S.A. (1979) "A Puzzle about Belief" in Meaning and Use edited by A. Margalit, pp. 239-283. Dordrecht: Reidel.

Kripke, S.A. (1982) Wittgenstein on Rules and Private Language. Cambridge: Harvard University Press.

Kuhn, T. (1977) The Essential Tension. Chicago: University of Chicago Press.

Lackner, J.R. and M. Garrett (1972) "Resolving Ambiguity: Effects of Biasing Context in the Unattended Ear" in Cognition 1.

Laing, R., H. Phillipson, and A. Lee (1966) Interpersonal Perception: A Theory and Method of Research. London: Tavistock.

Lakatos, I. (1981) "The History of Science and Its Rational Reconstructions" in Scientific Revolutions edited by I. Hacking. Oxford: Oxford University Press.

Latané, B. and J.M. Darley (1970) The Unresponsive Bystander: Why Doesn't He Help? New York: Appleton-Century-Crofts.

Lewis, D. (1969) Convention: A Philosophic Study. Cambridge: Harvard University Press.

Lewis, D. (1972) "Psychophysical and Theoretical Identification" in Australasian Journal of Philosophy 50, pp. 249-258.

Lewis, D. (1976) "Survival and Identity" in The Identities of Persons edited by A. Rorty, pp. 1-40. Berkeley: University of California Press.

Lewis, D. (1979a) "Lucas against Mechanism II" in Canadian Journal of Philosophy 9, pp. 373-376.

Lewis, D. (1979b) "Attitudes de dicto and de se" in Philosophical Review 88, pp. 513-543.

Lonergan, Bernard (1978) Insight. New York: Harper and Row.

Lucas, J. (1961) "Minds, Machines, and Gödel" in Philosophy 36, pp. 112-137.

Lucas, J. (1968) "Satan Stultified: A Rejoinder to Paul Benacerraf" in Monist 52, pp. 45-48.

Luria, A. (1978) "The Human Brain and Conscious Activity" in Schwartz and Shapiro (1978).

Lurie, Y. (1979a) "Correlating Brain States with Psychological Phenomena" in Australasian Journal of Philosophy 57 (2).

Lurie, Y. (1979b) "Inner States" in Mind 88 (350).

Lycan, W. (1981) "Form, Function, and Feel" in Journal of Philosophy 78 (1), pp. 24-50.

Lyotard, J. (1984) The Post-Modern Condition. Minneapolis: University of Minnesota Press.

MacKay, D. (1951) "Mind-Like Behavior in Artifacts" in British Journal for the Philosophy of Science 2, pp. 105-121.

MacKay, D. (1966) "Cerebral Organization and the Conscious Control of Action" in Brain and Conscious Experience, edited by J.C. Eccles, pp. 422-440. Springer-Verlag.

Maier, N. (1931) "Reasoning in Humans: II. The Solution of a Problem and its Appearance in Consciousness" in Journal of Comparative Psychology 12, pp. 181-194.

Malcolm, N. (1963) Knowledge and Certainty. Englewood Cliffs: Prentice-Hall.

Maloney, J. (1985) "About Being a Bat" in Australasian Journal of Philosophy, March, 63 (1), pp. 26-47.

Margolis, J. (1984a) "Pragmatism Without Foundations" in American Philosophical Quarterly 21 (1).

Margolis, J. (1984b) "Relativism, History, and the Objectivity of the Human Sciences" in Journal for the Theory of Social Behavior 54.

Margolis, J. (1984c) Culture and Cultural Entities. Dordrecht: Reidel.

Margolis, J. (1984d) Philosophy of Psychology. Englewood Cliffs: Prentice Hall.

Margolis, J. (1985a) "A Sense of Rapprochement Between Analytic and Continental Philosophy" in History of Philosophy Quarterly 2.

Margolis, J. (1985b) "Objectivism and Relativism" in Proceedings of the Aristotelian Society 75.

Margolis, J. (1986a) "Information, Artificial Intelligence, and the Praxical" in Philosophy and Technology, II edited by C. Mitcham and A. Huning. Dordrecht: Reidel.

Margolis, J. (1986b) "Thinking about Thinking" in Grazer Philosophische Studien 27.

Matson, W. (1966) "Why Isn't the Mind/Body Problem Ancient?" in Mind, Matter, and Method: Essays in Honor of Herbert Feigl edited by P. Feyerabend and G. Maxwell, pp. 92-102. Minneapolis: University of Minnesota Press.

Matthei, E. (1979) "The Acquisition of Prenominal Modifier Sequences: Stalking the Second Green Ball", Ph.D. dissertation, Department of Linguistics, University of Massachusetts at Amherst.

Matthews, G. (1977) "Consciousness and Life" in Philosophy 52, pp. 13-26.

McAlister, L., ed. (1976) The Philosophy of Brentano. Atlantic Highlands, New Jersey: Humanities Press.

McCarthy, J. and P. Hayes (1969) "Some Philosophical Problems from the Standpoint of Artificial Intelligence" in Machine Intelligence Volume 4, edited by B. Meltzer and D. Michie, pp. 463-502. Edinburgh: Edinburgh University Press.

McClelland, J.L. and D.E. Rumelhart (1981) "An Interactive Activation Model of Context Effects in Letter Perception: Part I. An Account of Basic Findings" in Psychological Review 88, pp. 375-407.

McCormick, P. and Elliston, F., eds. (1981) Husserl: Shorter Works. Notre Dame: University of Notre Dame Press.

McDermott, D. (1981) "Artificial Intelligence Meets Natural Stupidity" in Haugeland (1981).

McDonald, D.D. (1983) "Natural Language Generation as a Computational Problem: An Introduction" in Brady and Berwick (1983) pp. 209-266.

McGinn, C. (1982) "The Structure of Content" in Woodfield (1982) pp. 207-258.

McGuinness, B.F., ed. (1979) Wittgenstein and the Vienna Circle. New York: Harper and Row.

McGuinness, B.F. (1980) Philosophical Remarks. Oxford: Blackwell's.

McIntyre, R. (1984) "Searle on Intentionality" in Inquiry 27, pp. 468-483.

McIntyre, R. (1986) "Husserl and the Representational Theory of Mind" in Topoi 5.

McKenna, W.R. (1982) Husserl's "Introduction to Phenomenology". The Hague: Martinus Nijhoff.

Merleau-Ponty, M. (1945/1978) Phénoménologie de la perception, translated by C. Smith as The Phenomenology of Perception. London: Routledge & Kegan Paul.

Merleau-Ponty, M. (1969) The Visible and the Invisible translated by A. Lingis. Northwestern University Press.

Merleau-Ponty, M. (1973) Consciousness and the Acquisition of Language translated by H.J. Silverman. Evanston: Northwestern University Press.

Michaels, C. and C. Carello (1981) Direct Perception. Englewood Cliffs: Prentice-Hall.

Miller, G. (1956) "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" in Psychological Review 63, pp. 81-97.

Miller, I. (1984) Husserl, Perception, and Temporal Awareness. Cambridge: MIT Press.

Minsky, M.L. (1968) Semantic Information Processing. Cambridge: MIT Press.

Minsky, M.L. (1975) "A Framework for Representing Knowledge" in The Psychology of Computer Vision edited by P.H. Winston, pp. 211-277. New York: McGraw-Hill.

Moll, R.N., M.A. Arbib and A.J. Kfoury (1987) An Introduction to Formal Language Theory. Springer-Verlag.

Montague, R. (1974) Formal Philosophy. New Haven: Yale University Press.

Moor, J. (1976) "An Analysis of the Turing Test" in Philosophical Studies 30, pp. 249-257.

Moor, J. (1978a) "Explaining Computer Behavior" in Philosophical Studies 34, pp. 325-327.

Moor, J. (1978b) "Three Myths of Computer Science" in The British Journal for the Philosophy of Science 29, pp. 213-222.

Moruzzi, G. (1966) "Functional Significance of Sleep with Particular Regard to the Brain Mechanisms Underlying Consciousness" in Eccles (1966).

Nagel, T. (1974) "What Is It Like to Be a Bat?" in Philosophical Review 83.

Natsoulas, T. (1978) "Consciousness" in American Psychologist 12, pp. 906-914.

Nelson, R.J. (1968) Introduction to Automata. John Wiley and Sons.

Nelson, R.J. (1976a) "Mechanism, Functionalism, and the Identity Theory" in The Journal of Philosophy 83, pp. 365-385.

Nelson, R.J. (1976b) "On Mechanical Recognition" in Philosophy of Science 43, pp. 24-52.

Nelson, R.J. (1978) "Objects of Occasion Beliefs" in Synthese 39, pp. 105-139.

Nelson, R.J. (1982) The Logic of Mind. Dordrecht: Reidel Publishing Company.

Newmark, P. (1981) Approaches to Translation. Oxford: Pergamon.

Nickles, T. (1986) "Remarks on the Use of History as Evidence" in Synthese 69, pp. 253-266.

Nida, E. (1964) Towards a Science of Translating. The Hague: E.J. Brill.

Nisbett, R. and T. Wilson (1977) "Telling More Than We Can Know" in Psychological Review, pp. 231-259.

O'Connor, J., ed. (1969) Modern Materialism: Readings on Mind-Body Identity. New York: Harcourt, Brace & World.

Olen, J. (1983) Persons and Their World. New York: Random House.

Otto, H.R. (1978) The Linguistic Basis of Logic Translation. Washington, DC: University Press of America.

Parfit, D. (1971a) "Personal Identity" in Philosophical Review 80 (1), pp. 3-28.

Parfit, D. (1971b) "On the Importance of Self-Identity" in Journal of Philosophy, pp. 683-690.

Patterson, D. (1980) "Is Your Brain Really Necessary?" in World Medicine, May 3, 1980.

Peacocke, C. (1979) Holistic Explanation. New York: Oxford University Press.

Peirce, C.S. (1901/1955) "Abduction and Induction" in Philosophical Writings of Peirce edited by J. Buchler, pp. 150-156. New York: Dover.

Penfield, W. (1975) The Mystery of the Mind. Princeton: Princeton University Press.

Perry, J. (1979) "The Problem of the Essential Indexical" in Noûs 13, pp. 3-21.

Petersen, A.F. (1984) "The Role of Problems and Problem Solving in Popper's Early Work on Psychology" in Philosophy of the Social Sciences 24, pp. 239-250.

Piaget, J. (1971) Biology and Knowledge. Edinburgh University Press.

Piaget, J. (1976) The Child and Reality: Principles of Genetic Epistemology. New York: Penguin.

Popper, K.R. (1966) The Open Society and its Enemies. Oxford: Oxford University Press.

Popper, K.R. (1982) The Open Universe (The Postscript to the Logic of Scientific Discovery II) edited by W.W. Bartley, III. Totowa, New Jersey: Rowman and Littlefield.

Popper, K.R. and J. Eccles (1977) The Self and its Brain. London: Routledge & Kegan Paul.

Pribram, K. (1976) "Problems Concerning the Structure of Consciousness" in Globus, Maxwell, and Savodnik (1976).

Putnam, H. (1960) "Minds and Machines" in Hook (1961), pp. 138-164.

Putnam, H. (1967) "Psychological Predicates" in Art, Mind, and Religion edited by Capitan and Merrill. Pittsburgh: University of Pittsburgh Press.

Putnam, H. (1970/1975) "Is Semantics Possible?" in his Collected Papers, Vol. III, pp. 139-152. Cambridge: Cambridge University Press.

Putnam, H. (1975) "The Meaning of Meaning" in Putnam (1975), pp. 215-271.

Putnam, H. (1975) Philosophical Papers, Volume II. Cambridge: Cambridge University Press.

Putnam, H. (1982) Reason, Truth, and History. Cambridge: Cambridge University Press.

Pylyshyn, Z. (1984) Computation and Cognition. Cambridge: MIT Press/Bradford Books.

Quine, W.V. (1953) From a Logical Point of View. Cambridge: Harvard University Press.

Quine, W.V. (1960) Word and Object. Cambridge: MIT Press.

Quine, W.V. (1967) Ontological Relativity and Other Essays. New York: Columbia University Press.

Quine, W.V. (1972) "Methodological Reflections on Current Linguistic Theory" in Davidson and Harman (1972).

Quine, W.V. (1974) The Roots of Reference. LaSalle, Illinois: Open Court.

Quine, W.V. (1983) "Ontology and Ideology Revisited" in Journal of Philosophy 80 (9).

Rey, G. (1976) "Survival" in The Identities of Persons edited by A. Rorty, pp. 41-66. Berkeley: University of California Press.

Rey, G. (1980) "Functionalism and the Emotions" in Explaining Emotions edited by A. Rorty. Berkeley: University of California Press.

Rey, G. (1983a) "A Reason for Doubting the Existence of Consciousness" in Consciousness and Self-Regulation, Volume 3, edited by R. Davidson, G. Schwartz, and D. Shapiro. New York: Plenum.

Rey, G. (1983b) "Concepts and Stereotypes" in Cognition 15, pp. 237-262.

Rey, G. (1984) "Ontology and Ideology of Behaviorism and Mentalism", commentary on B.F. Skinner, "Behaviorism at Fifty" in The Behavioral and Brain Sciences 7 (4), pp. 640-641.

Rey, G. (1985) "Concepts and Conceptions: a Reply to Smith, Medin, and Rips" in Cognition 19.

Rey, G. (1986) "What's Really Going On in Searle's Chinese Room" in Philosophical Studies.

Rey, G. (1987) "Beliefs, Avowals, Akrasia, and Self-Deception" in Essays on Self-Deception edited by A. Rorty and B. McLaughlin. Berkeley: University of California Press.

Richardson, R. (1981) "Internal Representation: Prologue to a Theory of Intentionality" in Philosophical Topics 12, pp. 171-211.

Richardson, R. (1982a) "The 'Scandal' of Cartesian Interactionism" in Mind 91, pp. 20-37.

Richardson, R. (1982b) "Turing Tests for Intelligence: Ned Block's Defense of Psychologism" in Philosophical Studies 41, pp. 421-426.

Richardson, R. (1983) "Brentano on Intentional Inexistence and the Distinction Between Mental and Physical Phenomena" in Archiv für Geschichte der Philosophie 65, pp. 250-282.

Richardson, R. (1985) "Union and Interaction of Body and Soul" in Journal of the History of Philosophy 23, pp. 221-226.

Ricoeur, P. (1967) Husserl: An Analysis of His Phenomenology translated by E.G. Ballard and L.E. Embree. Evanston: Northwestern University Press.

Ricoeur, P. (1981) Hermeneutics and the Human Sciences translated and edited by J.B. Thompson. Cambridge: Cambridge University Press.

Riseman, E.M. and A.R. Hanson (1986) "A Methodology for the Development of General Knowledge-Based Vision Systems" in Arbib and Hanson (1986), pp. 285-328.

Risser, J. (1986) "To Forget to Forget: Hermeneutics and Artificial Intelligence" in Logos 7, pp. 83-92.

Rorty, R. (1965) "Mind-Body Identity, Privacy, and Categories" in Review of Metaphysics 14 (1).

Rorty, R. (1970) "In Defense of Eliminative Materialism" in Review of Metaphysics 24 (1), pp. 112-121.

Rorty, R. (1972) "The World Well Lost" in Journal of Philosophy.

Rorty, R. (1979) Philosophy and the Mirror of Nature. Princeton: Princeton University Press.

Rorty, R. (1982) The Consequences of Pragmatism. Minneapolis: University of Minnesota Press.

Rosenthal, D. (1984) "Armstrong's Causal Theory of the Mind" in D.M. Armstrong edited by R. Bogdan, pp. 79-120. Dordrecht: Reidel.

Rosenthal, D. (1986) "Two Concepts of Consciousness" in Philosophical Studies 49 (3), pp. 329-359.

Rouse, J. (1981) "Kuhn, Heidegger, and Scientific Realism" in Man and World 14.

Rumelhart, D. and J. McClelland (1986) Parallel Distributed Processing. Cambridge: MIT Press/Bradford Books.

Russell, B. (1940) Inquiry into Meaning and Truth. London: George Allen & Unwin.

Ryle, G. (1949) The Concept of Mind. London: Hutchinson.

Sarraute, N. (1984) Childhood translated by B. Wright in consultation with the author. George Braziller.

Sayre, K. (1969) Consciousness. New York: Random House.

Sayre, K. (1976) Cybernetics and the Philosophy of Mind. Atlantic Highlands, New Jersey: Humanities Press.

Schank, R.C. and R.P. Abelson (1977) Scripts, Plans, Goals, and Understanding. Hillsdale, New Jersey: Erlbaum.

Schiffer, S. (1972) Meaning. Oxford: Oxford University Press.

Schiffer, S. (1980) "Truth and the Theory of Content" in Meaning and Understanding edited by H. Parret and J. Bouveresse. Berlin: de Gruyter.

Schmidt, C. and G. D'Addami (1973) "A Model of the Common Sense Theory of Intention and Personal Causation" in Proceedings of the Third International Joint Conference on Artificial Intelligence. Stanford: Stanford University Press.

Schwartz, S.P. (1977) Naming, Necessity and Natural Kinds. Ithaca, New York: Cornell University Press.

Schwartz, G. and D. Shapiro, eds. (1976) Consciousness and Self-Regulation, Volume 1. New York: Plenum Press.

Searle, J.R. (1969) Speech Acts. Cambridge: Cambridge University Press.

Searle, J.R. (1971) "What is a Speech Act?" in The Philosophy of Language edited by J.R. Searle. London: Oxford University Press.

Searle, J.R. (1978) "Literal Meaning" in Erkenntnis 13, pp. 207-224; reprinted in Searle (1979a).

Searle, J.R. (1979a) Expression and Meaning. Cambridge: Cambridge University Press.

Searle, J.R. (1979b) "What is an Intentional State?" in Mind 88, pp. 74-92.

Searle, J.R. (1980) "Minds, Brains, and Programs" in Behavioral and Brain Sciences 3, pp. 417-451.

Searle, J.R. (1981) "Analytic Philosophy and Mental Phenomena" in Midwest Studies in Philosophy, Vol. 6, edited by P. French, T. Uehling, and H. Wettstein. Minneapolis: University of Minnesota Press.

Searle, J.R. (1983) Intentionality: An Essay in the Philosophy of Mind. Cambridge: Cambridge University Press.

Searle, J.R. (1984) "Intentionality and Its Place in Nature" in Synthese 61, pp. 3-16.

Sellars, W. (1956/1963) Science, Perception, and Reality. London: Routledge & Kegan Paul.

Shoemaker, S. (1975) "Functionalism and Qualia" in Philosophical Studies 27, pp. 291-315; reprinted in Shoemaker (1984).

Shoemaker, S. (1981) "Absent Qualia are Impossible" in The Philosophical Review 90, pp. 581-599.

Shoemaker, S. (1984) Identity, Cause, and Mind. Cambridge: Cambridge University Press.

Simon, H. (1957) Models of Man. New York: John Wiley.

Skinner, Q. (1969) "Meaning and Understanding in the History of Ideas" in History and Theory.

Sloman, A. (1978) The Computer Revolution in Philosophy: Philosophy, Science, and Models of Mind. Atlantic Highlands: Humanities Press.

Smith, D.W. (1984) "Content and Context of Perception" in Synthese 61, pp. 61-87.

Smith, D.W. (1986) "The Structure of (Self-)Consciousness" in Topoi.

Smith, D.W. and R. McIntyre (1982) Husserl and Intentionality: A Study of Mind, Meaning, and Language. Dordrecht: Reidel.

Smullyan, R. (1957/1969) "Languages in Which Self-Reference is Possible" in Philosophy of Mathematics edited by J. Hintikka. New York: Oxford University Press.

Sperber, D. (1982) "Apparently Irrational Beliefs" in Rationality and Relativism edited by M. Hollis and S. Lukes. Cambridge: MIT Press.

Stalnaker, R. (1984) Inquiry. Cambridge: MIT Press/Bradford Books.

Stampe, D. (1977) "Towards a Causal Theory of Linguistic Representation" in Midwest Studies in Philosophy II: Studies in the Philosophy of Language edited by H. Wettstein and T. Uehling, pp. 42-63. Morris, Minnesota: University of Minnesota Press.

Stich, S.P. (1983) From Folk Psychology to Cognitive Science: The Case Against Belief. Cambridge: MIT Press/Bradford Books.

Sukale, M. (1976) Comparative Studies in Phenomenology. The Hague: Martinus Nijhoff.

Taylor, C. (1963) The Explanation of Behavior. London: Routledge & Kegan Paul.

Thatcher, J.W. (1963/1976) "The Construction of a Self-Describing Turing Machine" in Mathematical Theory of Computation. Brooklyn: Polytechnic Press.

Trevarthen, C. (1982) "The Primary Motives for Cooperative Understanding" in Social Cognition: Studies of the Development of Understanding edited by G. Butterworth and P. Light, pp. 77-109. Harvester Press.

Turvey, M.T., R.E. Shaw, E.S. Reed, and M.W. Mace (1981) "Ecological Laws of Perceiving and Acting" in Cognition 9, pp. 237-304.

Twardowski, K. (1894/1977) On the Content and Object of Presentations translated by R. Grossman. Nijhoff.

Ullman, S. (1980) "Against Direct Perception" in Behavioral and Brain Sciences 3, pp. 373-415.

Van Fraassen, B. (1980) The Scientific Image. Oxford: Oxford University Press.

Van Gulick, R. (1982) "Functionalism as a Theory of Mind" in Philosophy Research Archives 7, pp. 185-204.

Wason, P. and J. Evans (1975) "Dual Processes in Reasoning" in Cognition 3.

Wason, P. and P. Johnson-Laird (1972) Psychology of Reasoning: Structure and Content. London: B.T. Batsford.

Weber, M. (1922/1980) "The Nature of Social Action" in Weber: Selections in Translation edited by W.G. Runciman. Cambridge: Cambridge University Press.

Wiener, N. (1954) The Human Use of Human Beings. Garden City, New York: Doubleday.

Weizenbaum, J. (1965) "ELIZA--A Computer Program for the Study of Natural Language Communication Between Man and Machine" in Communications of the Association for Computing Machinery 9 (1), pp. 36-45.

Wernicke, C. (1874) Der Aphasische Symptomencomplex: Eine psychologische Studie auf anatomischer Basis. Breslau: Cohn & Weigert.

Wheeler, J. (1982) "Bohr, Einstein, and the Strange Lesson of the Quantum" in Elvee (1982), pp. 1-30.

Whitehead, A.N. (1929) Process and Reality. New York: MacMillan Company.

Wiggins, D. (1975/1978) "Deliberation and Practical Reason" in Practical Reasoning edited by J. Raz, pp. 144-152. Oxford: Oxford University Press.

Wigner, E. (1967) "Remarks on the Mind-Body Question" in Symmetries and Reflections, pp. 171-184. Bloomington: Indiana University Press.

Wigner, E. (1982) "The Limitation of the Validity of Present Day Physics" in Elvee (1982), pp. 118-133.

Wilkes, K. (1984) "Machines and Consciousness" in Hookway (1984), pp. 105-128.

Winch, P. (1970) "Comment on Jarvie" in Explanation in the Behavioral Sciences edited by R. Borger and F. Cioffi. Cambridge: Cambridge University Press.

Winograd, T. (1972) Understanding Natural Language. New York: Academic Press.

Wittgenstein, L. (1920/1967) Tractatus Logico-Philosophicus translated by D. Pears and B. McGuinness. London: Routledge & Kegan Paul.

Wittgenstein, L. (1953/1967) Philosophical Investigations translated by G.E.M. Anscombe, 3rd edition. Oxford: Blackwell.

Wittgenstein, L. (1958) The Blue and Brown Books. New York: Harper & Row.

Wittgenstein, L. (1966) "A belief isn't like a momentary state of mind, 'At five o'clock he had a very bad toothache.'" in Lectures and Conversations on Aesthetics, Psychology, and Religious Belief edited by C. Barrett. Berkeley: University of California Press.

Woodfield, A., ed. (1982) Thought and Object: Essays on Intentionality. Oxford: Clarendon Press.

Young, R. (1970) Mind, Brain and Adaptation in the Nineteenth Century. Oxford: The Clarendon Press.

Young-Eisendrath, P. and J. Hall, eds. (1987) The Book of the Self: Person, Pretext, Process. New York: New York University Press.

Zajonc, R. (1968) "The Attitudinal Effects of Mere Exposure" in Journal of Personality and Social Psychology 8.

Ziff, P. (1964) "The Feelings of Robots" in Minds and Machines edited by A.R. Anderson, pp. 98-103. Englewood Cliffs, New Jersey: Prentice-Hall.

INDEX

Subject Index

adequacy conditions 79, 150, 151, 173, 174

analysis 296

artificial intelligence 10, 60, 75, 222

artificial intentionality 10

ascription 145

assimilation/accommodation 223

associationism 284, 290

automata 151, 152

awareness 143
    attentive 15, 128, 129
    non-attentive 248

Babel thesis 330, 332

background 55, 97, 217, 218, 238, 242, 243, 260, 262, 266

Believer System 13, 14

bracketing (see epoche)

brain states 36, 48, 51

Cartesian Intuition 4, 6, 20, 22, 24, 26-29, 33

chauvinism 23, 147, 163

Chinese Room (Box) argument 166, 220, 240, 280

circumspection 87

cognitivism 85, 98

cognitive psychology 182, 183, 219

communication test 126-128

computationalism 57, 60, 63, 71, 72, 103, 104, 142, 146, 159, 246, 275, 276, 287

computational functions 135, 136

conditions of satisfaction (see adequacy conditions)

consciousness 4-6, 11, 14, 16, 17, 23-31, 56, 58, 86, 95
    definition of 7

convergence 339, 341, 365, 366
    counter-convergence 346

correspondence 33, 34
    debate 35
    hypothesis 35, 36, 47, 51

415

ideal of perfect 194 incoherence of 38. 51 one-to-one 36, 37

"cuddliness" criterion 23 cybernetics 221 Dasein 86, 96, 252, 254, 343. 344,

348, 360 detection regularities II discrete 24, 34. 48 disturbing possibility 24, 33, 76 eidetic features 53-55. 185

eidetically meaningless 54. 55 emphatic reflexive 14. 15 epiphenomenology 85 epoche (bracketing) 52. 53, 58, 59,

200. 202. 313. 323. 337 Erlebnisse 52. 53 expectation function 150, 151 experiential content extensionalism 349. 359, 367 externalism 77

vs. internalism 83.275.281. 282. 291

feedback 12 Folk Psychology 169-171.173.176.

177 formalism 71. 72 formality condition 78. 260 frame analysis 227. 246 functional equivalence I 17, 120,

121 functionalism 24, 43. 60, 61, 108.

116. 117, 125. 140,249.250,320 glossary 302, 314 Gode!'s theorem 228. 229. 233 Golemites 155. 156. 164. 165 hermeneutic 253. 339 holism 137.138-140.142.153.166 horizon 92.97. 186.253.254.258.

314 Implicit Definition Thesis 170 incomensurability 327 inscrutability of silence 326. 328 ideal of perfect correspondence 194

intentional
  processes 52
  system 349
  transaction 104, 182, 197, 205, 209, 210, 277
intensionality 8-10, 24, 31, 32, 53, 57, 59, 67, 69, 74, 78, 86, 92-96, 134, 137, 158, 169, 171, 195, 294
  n-order 13
interpretation 128, 296-299, 302, 316, 318
introspection
lambda calculus 306
language 234
  as metaphor 236, 237
  game 300-302
  error 311
language of thought 9, 16, 64
Limitation Theorem 374
linguistic representation
mathesis 75, 85, 100, 101, 103, 104
meaning 67, 70, 83, 99, 289, 293-296
  natural meaning (meaning N) 81
  stimulus 298, 299
mechanism 146, 162
mental states 52, 55, 214
methodological solipsism 56, 58, 60, 64, 67, 83, 85, 260
micro-world analysis 264
mind-body 43
naturalism 219, 239, 249
nesting 14, 15, 234
noemata (noema) 99
noematic
  Sinn (see Sinne)
  phase of experience 197
  prescriptions 191, 208
noetic-noematic correlation 190
ontology 45, 62, 115, 141, 182, 259, 300, 301, 321-323, 337, 343, 356
ontological neutrality 182, 202
open space (see horizon)
paraphrase 296, 297, 302-304
perception 151, 157, 158, 159, 269-271, 283, 324
perceptual acceptance 157, 158
phenomenological reduction 184
phenomenological reflection (see epoche)
phenomenology 74, 340, 341
  transcendental 58, 60, 75, 219
pluralism 365, 368
pragmatism 198, 340, 341
propositional attitude 50, 145, 146
qualia (qualitative experience) 105, 107, 115, 117, 122, 126, 130, 131, 133, 162
  absent 108, 116, 117, 157
rational regularities 8-10, 13
recognition states 149
recursion 13, 14
reductionism 169, 225, 226, 257, 345, 356, 358, 363
reference 76, 79, 83, 183
  linguistic 83
  causal theory of 80
  self 159
relativism 366
replacement test 111-114, 117, 118, 127
representational
  content 76
  theory of mind (RTM) 57, 63, 66, 68, 77, 79, 81, 349, 350
representation 56, 70, 73, 76, 279
  mental 58, 67, 99
rules
  ceteris paribus 96
  translation 302-305, 306
satisficing 207
schema 217, 227-231, 250-252, 257
  assemblage (network) 218, 224
  theory 222, 226
second person mode of analysis 199, 200, 207
self-attribution (experiments) 11, 12
self awareness 14
semantic
  content 67, 72, 263, 314, 316, 322, 325
  game theoretic 300, 301
  theory 298, 309
  reference 159
  state 99

semiotic relation (interpretation)
sensations 17, 18
Sinne 58, 65, 70, 72, 73, 79, 188
Skeptic vs. Sophist 200, 201, 204-207
skill (expertise, know how) 87-91, 93, 102, 144, 238, 244, 245
soul 5, 6, 21, 22
speech act 293
subject/object distinction 138
subjective experience 1, 345
subjectivising (erroneous) 95
syntactic
  analysis 222, 349, 351
  theory of mind (STM) 333
tacit intentionality 158
"taking" relation (function) 134, 149-151, 160
transmission 109, 110, 114, 117, 118, 122, 127
third person ontology 83, 199
transaction 181
transcription 296, 323
translation 296, 297, 311-315, 324, 329-331, 333, 334, 369
  derivation 295
  indeterminacy of 327
  internal 15
  radical (RTE) 329
  systematic 296, 313
  rules 305-309
Twin-Earth argument 71
type-type identity 143, 147, 154, 158
Verstehen 8
World-Knot 137, 153, 156, 164, 165
"wunderbar" phenomenon 99

Name Index

Abelson 262, 264, 269
Arbib 217, 218, 237-241, 243, 245-251, 253, 255-258, 260, 314
Aristotle 93, 256, 336
Austin 293
Bach 66, 71
Bain 283-285
Bartlett 223
Barwise and Perry 272, 299
Bender 158, 167, 168
Bentham 204
Berkeley 281
Beth 233
Block 23, 36, 120, 163
Brentano 148, 152, 290, 347
Brown 13
Carnap 138, 150, 294, 331, 342, 345-347, 354, 359, 360, 367
Chisholm 14, 149-151
Chomsky 154, 222, 235, 296, 299, 352, 367
Churchland 30, 70
Clark 264
Conklin 230
Craik 223
Culbertson 110
Cummins 262
D'Addami 13
Davidson 310-314, 324-331, 336-338, 346, 347, 355, 357, 359-361, 367
Dedekind 140
Dennett 10, 13-15, 243, 267, 280, 346, 352, 367
Derrida 203
Descartes 6, 11, 26-29, 31, 98, 127, 138, 201, 255, 284, 285
Dewey 204, 341, 352, 354, 356-358, 361-363
Dirichlet 221
Dowty, Wall, and Peters 299, 305
Dretske 75, 99-101, 286, 348, 349, 367
Dreyfus 56, 57, 59, 67, 72-74, 76, 83, 84, 101-104, 238, 241-244, 246-248, 262, 272, 274, 289, 339
Emmett 76, 83
Feyerabend 326
Fichte 200
Fields 8, 218, 260, 273-276, 279-282, 287-291
Fodor 8, 36, 56-61, 63-69, 72, 73, 76-78, 83-85, 98-101, 146, 148, 201, 243, 275, 349, 352, 367

Føllesdal 247
Foucault 336
Fourier 221
Frege 65, 69, 71, 73, 79, 261, 356, 357
Freud 26, 156, 232, 233, 286, 290
Fuller 198, 207, 208, 324, 337, 338
Gadamer 252, 258, 352, 356
Garrett 248, 257, 258-260, 314
Gestalt 122, 125, 137, 151, 199, 213, 327
Gibbs 139
Gödel 148, 228, 229, 233
Goffman 227
Goodman 138
Graesser 264
Gregory 223
Grice 14, 16, 81, 179, 326, 327
Gould 366
Gurwitsch 92, 214
Haack 211, 212, 310
Habermas 360
Hacking 327
Hall 238, 246-248
Hanson 224, 253
Harman 8
Hartshorne 255
Haugeland 262, 272, 289
Head 222, 223
Hegel 137, 200, 203, 204, 356
Heidegger 54, 57, 59, 84-87, 92-98, 101-103, 200, 203, 238, 242, 252, 254, 255, 258, 262, 266-268, 272, 276, 289-291, 332, 341-345, 352-354, 356, 358-362
Hendry 314, 321-323, 337
Hilbert 229
Hill 170, 179, 180, 234-236
Hintikka 299-301, 309, 313
Hume 5-7, 22, 48, 256, 287, 347
Husserl 24, 48, 52-79, 82-86, 94-104, 159, 182-201, 204, 207-216, 238, 247, 252, 257, 261-263, 286, 289, 313, 323, 341, 352-354, 358, 360
Huxley 156
Jackson 320

James 290, 357
Johnstone 117, 126, 132, 133, 135
Kant 52, 98, 200, 203, 209, 222, 261, 287, 290, 291, 347
Kaplan 9, 14, 91
Kepler 219
Kripke 80, 275, 298
Kuhn 326
Laing 14
Langer 193
Lee 14
Leibniz 98, 294
Lewis 120, 148, 163
Locke 11, 110, 111, 123
Lonergan 256
Lotze 357
Lurie 3, 34, 47-56, 105, 248
MacKay 223
Margolis 340, 354-369
Matthei 235, 236
Maxwell 139
McIntyre 4, 56, 76-78, 83-85, 97-101, 104-106, 159, 182, 248
McKenna 194, 196, 208, 214-216
Merleau-Ponty 242, 243, 252, 258, 283, 285-291, 353
Mill 137
Minsky 76, 223, 227, 243, 246
Montague 298, 299, 305, 306, 322
Moor 106, 116-127, 131-135, 157, 181, 248
Moruzzi 23
Müller 283
Munz 364, 368, 369
Nelkin 265, 282
Nelson 134-136, 157-169, 178-182, 248
Neurath 345, 347, 356
Newmark 327
Nietzsche 203, 256
Ockham 203
Otto 218, 292, 313-316, 319-321, 323-325, 337, 338, 356, 364
Peirce 129, 141, 142, 149, 227, 357
Penfield 23
Phillipson 14

Piaget 16, 218, 223, 226, 229, 231, 233
Plato 98, 143, 254, 302
Popper 203, 207, 345, 351
Pribram 23
Putnam 36, 56, 58-60, 71, 78, 141, 163, 347
Quine 30, 147, 148, 152, 173, 294, 298-301, 306, 324-326, 329-331, 333-338, 341-347, 351-364, 367, 369
Ramsey 120
Rey 3, 4, 14, 24-27, 29-33, 56, 76, 105, 106, 134, 157, 181, 248
Richardson 282, 290, 291
Ricoeur 352
Riemann 221
Riseman 224, 253
Roberts 67
Rorschach 15
Rorty 30, 344, 356
Rosenthal 14
Russell 50, 137, 229, 331, 356
Sartre 255
Schank 220, 222, 246, 262, 264, 269
Schiffer 10, 14, 16
Schleiermacher 335, 336
Schmidt 13
Schopenhauer 137
Searle 7, 24, 57, 70, 71, 75, 77-79, 93, 94, 98, 100, 101, 124, 146, 164, 197, 220, 221, 238-241, 246, 262, 265, 266, 272, 276, 293, 297
Sellars 345, 346, 367
Shannon 141
Shoemaker 113, 123
Simon 141
Skinner 59
Sleeper 354, 362-364, 369
Sloman 339

Smith 14, 24, 31-33, 61, 68, 69, 71
Socrates 2, 8, 256
Spencer 283
Stamp 10
Stich 8, 66, 349-352, 367
Strauss 361
Tarski 148, 150, 300, 309, 322, 346, 347
Trevarthen 233
Tuedio 182, 197-200, 204, 208-217, 248, 260
Turing 142, 148
Van Fraassen 334
Van Gulick 117, 118, 125, 126, 134
Von Neumann 141, 148
Weber 8
Weizenbaum 15
Wernicke 283-285
Whitehead 137, 255
Wiener 12, 221
Williams 48, 55
Winch 328
Winograd 273
Wittgenstein 23, 25, 45, 59, 82, 234, 294, 299, 300, 314, 331, 353, 356, 360, 361, 363, 364
Ziff 108

LIST OF AUTHORS

Georges Rey University of Maryland

David Woodruff Smith University of California, Irvine

Yuval Lurie Ben-Gurion University

Forrest Williams University of Colorado, Boulder

Ronald McIntyre California State University, Northridge

Kathleen Emmett University of Tennessee

Hubert L. Dreyfus University of California, Berkeley

James H. Moor Dartmouth College

Robert Van Gulick Syracuse University

Henry W. Johnstone, Jr. Pennsylvania State University

Raymond J. Nelson Case Western Reserve University

John W. Bender Ohio University

Christopher S. Hill University of Arkansas

James A. Tuedio California State University, Stanislaus

Steve Fuller University of Colorado, Boulder

William R. McKenna Miami University (Ohio)

Michael A. Arbib University of Southern California

Harrison Hall University of Delaware

Jan Edward Garrett Western Kentucky University

Christopher A. Fields New Mexico State University

Norton Nelkin University of New Orleans

Robert C. Richardson University of Cincinnati

Herbert R. Otto Plymouth State College, USNH

Herbert E. Hendry Michigan State University

Joseph Margolis Temple University

Ralph W. Sleeper Queens College, Emeritus

James Munz Western Connecticut State University
