CHAPTER FIVE - COGNITION AS COMPUTATION


    PART II

    Retracing the Path


    CHAPTER 5

    Cognition as Computing

    This chapter takes a closer look at the fundamental claims of computationalism. The discussion divides into four sections. The first explicates the central theses of computationalism and analyzes how they are related to one another. The second looks into the concept of computation as defined in terms of the Turing machine, and examines the difference between the classical and connectionist models of computationalism. The third investigates the kind of intelligence assumed in the claims of computationalism. And the fourth examines the extent of what the claims of computationalism are intended to cover about the nature of the mind.

    1. The Central Theses

    Herbert Simon and Craig Kaplan (1989, 2) define cognitive science as the study of intelligence and its computational processes in humans (and animals), in computers, and in the abstract. This definition identifies the levels on which a computationalist investigation of the nature of intelligence is to be carried out; namely, on the abstract level, on the human (and animal) level, and on the machine level. Based on these levels, the central claims of computationalism can be said to consist of a general thesis, which concerns the abstract level of intelligence, and two sub-theses, which concern the levels of humans and machines.

    Accordingly, the general thesis corresponds to the claim that thinking or cognition is a type of computational process. Some put this as: cognition is a species of computing. Cognition is here defined abstractly, not as pertaining specifically to the intelligence of humans or of machines, but as something that can be instantiated by both humans and machines. Consequently, the two sub-theses are precisely the human and machine instantiations of this abstract thesis, which we can respectively call the thesis of human computationality and the thesis of machine intelligence. The former corresponds to the claim that human cognition is a computational process, while the latter corresponds to the claim that machines capable of computationally simulating human cognitive processes are intelligent.

    The difference between human intelligence and machine intelligence is here regarded simply as a matter of degree, in that human intelligence is seen as just more complex and sophisticated than machine intelligence. But this difference is a contingent matter, and hence it is possible in the future for machine intelligence to equal or even surpass human intelligence in terms of complexity and sophistication. Furthermore, while we speak of humans and machines as that in which the general thesis of computationalism is instantiated, the abstract level of this thesis requires that it also be instantiated in any other conceivable type of entity that can be considered intelligent, say extraterrestrials. Meaning to say, if it is


    true that cognition is a species of computing, then any conceivable entity considered to be intelligent must be an entity whose intelligence is a species of computing.

    To further understand the theses of computationalism, we need to examine how computationalism regards the relationship between the two features assumed in these theses, namely the feature of being cognitive, which we can refer to as cognitivity, and the feature of being computational, which we can refer to as computationality. The relation between these two features, according to computationalism, is neither that they are identical nor that computationality falls under cognitivity, but that cognitivity falls under computationality. Meaning to say, while it is not necessary that all computational systems be cognitive, it is necessary that all cognitive systems be computational.

    The points of reference for these two features are not the same. Machines are the point of reference for computationality, while humans are the point of reference for cognitivity. On the one hand, humans are given as cognitive systems, and machines are judged to be cognitive or not based on their similarities to humans; on the other hand, machines are given as computational systems, and humans are judged to be computational or not based on their similarities to machines. It is not that machines are given as both computational and cognitive and then human cognitivity is said to be computational on the basis of the similarities of human cognitivity with machine cognitivity, or that humans are given as both cognitive and computational and then machine computationality is judged to be cognitive on the basis of the similarities of machine computationality with human computationality.

    Now, if the basis for saying that computing machines are cognitive is that they are capable of simulating human cognitive processes, what about the claim that human cognition is necessarily computational; what is its basis? Definitely not the fact that humans are capable of simulating the computing processes of a machine, for the mere fact that humans perform computations, as they do when doing mathematical calculations, does not necessarily mean that their cognitive processes are computational. All it entails is that performing computations is one of the many types of processes that the human mind, regardless of whether it is computational or not, is capable of performing.

    What happens, it seems, is that since a computing machine is said to be cognitive when it simulates human cognitive processes, it is then thought that cognition, in the abstract, must be a kind of computation. The cognitive nature of the machine's computationality is then thought to be an instantiation of the abstract idea that cognition is a kind of computing. Consequently, it is also then thought that human cognition must likewise just be an instantiation of this abstract idea, and thus human cognition must also be a kind of computation. The line of reasoning seems to be as follows. The general thesis of computationalism is abstracted from the thesis of machine intelligence and is then attributed to humans, thereby forming the thesis of human computationality. Now what this implies is that ultimately the basis for the thesis of machine intelligence (the simulation of human cognitive processes) also serves as the basis for the thesis of human computationality.

    Our observations will definitely raise some questions. But we need to begin with the basics

    before we can appropriately deal with these questions. In what follows, we shall try to clarify


    what it means to say that something is computational, and for something to be cognitive, both in the context of the theses of computationalism. Accordingly, we shall clarify the concept of computation as it is defined generally and in the context of the Turing machine, and shall examine the two models or approaches in understanding the nature of computation. After which, we shall examine the concept of intelligence as defined in artificial intelligence.

    2. The Concept of Computation

    What does it really mean to say that thinking is a kind of computing? But first, what does it really mean for anything to be computing? Computing is generally the process of implementing a computation, which is also called an algorithm. To compute is simply to implement a computation or an algorithm. What then is a computation? A computation is generally defined as a step-by-step effective procedure for getting a desired result. Harnish (2002, 125) puts it as follows: "The essence of an algorithm (or effective procedure) is that it is a series of steps for doing something that is guaranteed to get a result." More precisely, it is a finite sequence of well-defined steps, each of which takes only a finite amount of memory and time to complete, and which comes to an end on any finite input. Computing is not limited to solving mathematical problems or functions, for there can be an algorithm or an effective procedure for solving other types of problems, such as how to cook eggs or wash the laundry.

    As Tim Crane (1995, 88) explains:

    Like the notion of a function, the notion of an algorithm is extremely general. An effective procedure for finding the solution to a problem can be called an algorithm, so long as it satisfies the following conditions: (A) At each stage of the procedure, there is a definite thing to do next. Moving from step to step does not require any special guesswork, insight or inspiration. (B) The procedure can be specified in a finite number of steps. So we can think of an algorithm as a rule, or a bunch of rules, for giving the solution to a given problem.
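An algorithm in Crane's sense can be made concrete with a standard example, Euclid's procedure for finding the greatest common divisor of two numbers; the Python rendering below is our own illustrative sketch, not part of Crane's text:

```python
def gcd(a, b):
    """Euclid's algorithm: an effective procedure for the greatest
    common divisor of two positive integers."""
    while b != 0:          # (A) at each stage, a definite thing to do next
        a, b = b, a % b    # replace (a, b) with (b, a mod b)
    return a               # (B) guaranteed to halt: b strictly decreases

print(gcd(48, 18))  # 6
```

At no stage is guesswork or insight required, and the procedure terminates on any finite input, which is exactly what conditions (A) and (B) demand.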

    In this regard, thinking as computing means nothing but that thinking is a process of carrying out certain computations or effective procedures. But more importantly, it also means that the process of thinking can be spelled out in terms of a well-defined series of steps. But of what kind must computations be such that the process of carrying them out constitutes thinking? For obviously not all sorts of implementing computations constitute thinking. As we noted earlier, there can be an effective procedure, and hence a computation, for cooking eggs, but the process of cooking eggs does not by itself constitute thinking.

    Roger Schank (1984, 68) speaks of human algorithms for understanding in describing what researchers in the discipline of artificial intelligence intend to accomplish: "AI researchers are still trying to determine the processes that comprise intelligence so that they can begin to develop computer algorithms that correspond to human algorithms for understanding." Schank speaks of a special type of algorithm in humans which, when implemented, constitutes human thinking. We can call algorithms of this type cognitive algorithms, which artificial intelligence simply assumes to be what constitute human thinking, and therefore sets for itself the task of discovering what these cognitive algorithms are.


    2.1. The Turing Machine

    The notion of computation was initially defined in terms of the Turing machine. The Turing machine does not refer to a particular type of machine that one can buy and use. It is an abstract specification of any possible computing machine. It specifies the basic operations that a physical system must be capable of performing in order to qualify as a computing machine. These basic operations consist of receiving an input and executing a command or instruction to produce an output. For a physical system to perform these operations it must, however, have a storage system, a reading and printing device, a set of symbols, a translation system, and a set of instructions or commands.

    The storage system (corresponding to what is presently called memory) can be anything so long as it is divided into certain portions such that a bit of data or information can symbolically be stored in it. In the case of Turing, he conceives of the storage system as a tape of infinite length that is divided into squares. In each square, a symbol can be written in case the square is empty or does not yet contain a symbol. If the square already contains a symbol, that symbol can be erased and the square either left empty or a new symbol written in its place. The symbols written and stored in the tape are also called representations, for they are intended to represent certain data. These symbols are finite in number, but they can be combined in unlimited ways. These symbols serve as the language of the machine for its operations (corresponding to the 0s and 1s of modern computers). Needless to say, the machine does not receive inputs or information in the form of the symbols that it uses. The machine translates the inputs that it receives into its own language, so to speak. And this requires that the machine have a translation system. Translating the input data into symbols allows the machine to operate on a very general level, very much in the same way that the use of symbols in symbolic logic allows us to speak of reasoning in a very general manner. It enables the machine to represent a wide variety of data and process them in a wide variety of ways.

    For the symbols to be written, erased, overwritten, and stored, there must be a scanning and printing device. Since the storage system is a tape consisting of squares, this device must be capable of moving the tape from left to right and vice-versa.1 And of course, whatever the device does is in accordance with the machine's set of instructions or commands, called by Turing the machine table. These commands are stated in conditional form. For instance: if it reads "0" in square A, it should move to square B and write "1"; or if it reads "10" in square X, it should move to square Y and overwrite the symbol already written there with the symbol "1010". Now, as the machine is performing a particular task, Turing describes the machine as being in a particular internal state. There is nothing subjective or mysterious about these internal states, for their being internal only means that they refer to the physical states of the machine on the level of performing its tasks, in contrast to the physical states of the machine on the level of its physical composition, which would then be the machine's external states. From the viewpoint of functionalism, these internal states are the functional states of the machine, which take place on the level of

    1 In some accounts of the Turing machine, it is not the tape that moves but the scanning and printing device.


    the functional organization of the machine, in contrast to the machine's lower-level physical states, which take place on the level of its material components.
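The components described above (a tape of squares, a scanning and printing head, a finite set of symbols, and a machine table of conditional commands) can be sketched as a small simulator. Both the simulator and the sample machine table below, which appends a "1" to a string of "1"s, are our own illustrative constructions, not Turing's own examples:

```python
def run_turing_machine(table, tape, state="start", steps=1000):
    """Minimal Turing machine. `table` maps (state, symbol) to
    (symbol_to_write, move, next_state); `tape` maps square positions
    to symbols, with blank squares read as ' '."""
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(pos, " ")
        write, move, state = table[(state, symbol)]  # conditional command
        tape[pos] = write                            # print/overwrite symbol
        pos += 1 if move == "R" else -1              # move the head
    return "".join(tape[i] for i in sorted(tape)).strip()

# A machine table that appends one "1" to a string of 1s (unary increment).
table = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
    ("start", " "): ("1", "R", "halt"),   # write a 1 on the first blank
}
print(run_turing_machine(table, {0: "1", 1: "1", 2: "1"}))  # 1111
```

Note that the simulator takes the machine table itself as an input, which is already the germ of the Universal Turing Machine discussed below.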

    This machine was originally conceived by Alan Turing for purposes of determining the computability of any given mathematical function or, what comes to the same thing, of determining whether a given mathematical problem is solvable. Before we proceed with our discussion, it may be helpful to briefly discuss what a mathematical function is as Gottlob Frege sees it (see Frege 1960).

    According to Frege, a function is any incomplete expression. Examples are: "the capital of ___", "2 plus ___", and "___ is the author of Noli Me Tangere". The missing parts can be represented by variables (x, y, z, etc.) and thus we say: "the capital of x", "2 plus x", and "x is the author of Noli Me Tangere". Now, what we replace the variables with to complete the function are called arguments. And once a certain argument is used to complete the function, the resulting complete expression yields a value. For instance, if we have the argument "Philippines" for the function "the capital of x", the value is "Manila". If we have the argument "6" for the function "2 plus x", the value is "8". And if we have the argument "Rizal" for the function "x is the author of Noli Me Tangere", the value is (the) True. A mathematical function is no different, as in the case of the example "2 plus x". Of course, the meaningful arguments for a given function are not unlimited. If, for instance, we use the argument "Philippines" for the function "2 plus x", no value will be yielded. The set of meaningful arguments for a given function is called by Frege the function's value-range. Obviously the value-range of the function "the capital of x" would include names of countries. And if we have the function "x is the present king of France", this function has a null or zero value-range, since there is no possible argument for x that would yield a meaningful value (in this case, a truth-value).
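Frege's function-argument-value scheme corresponds closely to functions in a programming language, where supplying an argument to an incomplete expression yields a value. The sketch below is our own illustration of the examples above; the helper names are hypothetical:

```python
# Frege's incomplete expressions rendered as functions: supplying an
# argument completes the expression and yields a value.
capitals = {"Philippines": "Manila", "France": "Paris"}

def capital_of(x):            # "the capital of x"
    return capitals.get(x)    # None: argument outside the value-range

def two_plus(x):              # "2 plus x"
    return 2 + x

def is_author_of_noli(x):     # "x is the author of Noli Me Tangere"
    return x == "Rizal"       # yields a truth-value

print(capital_of("Philippines"))   # Manila
print(two_plus(6))                 # 8
print(is_author_of_noli("Rizal"))  # True
```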

    If we have a mathematical function, the effective step-by-step procedure to complete it, to find its value, or to find the appropriate argument so that it will yield the desired value, is called a computation or an algorithm. Now, while there are mathematical functions that are obviously computable, there are some that are not obviously so. But more importantly, there are also some that are not computable at all. Trying to find the solution to a mathematical function that is actually not computable, needless to say, is a waste of time and energy, and some of these mathematical problems were realized not to be solvable only after a long period of trying to solve them.

    And so the great German mathematician David Hilbert raised the question of whether there is a mechanical procedure by which a given mathematical function can be determined to be computable. This question has been called the Entscheidungsproblem, and Alan Turing was one of those who came up with an answer.

    Turing's answer was precisely his Turing machine. Accordingly, a mathematical function is computable if it can be run on the Turing machine, or if it can be translated into the basic operations of the Turing machine. [Turing in fact was able to demonstrate through his machine that some problems in mathematics are not computable or soluble, an example of which is the halting problem (see Penrose 1994, 28-29).] And what results from Turing's machine is a definition of what computation is or what computability consists in. Accordingly, a computation is anything that can be run by a Turing machine. This definition,


    which is widely accepted among mathematicians, has come to be known as the Church Thesis, or sometimes as the Church-Turing Thesis.2

    Turing's genius and amazing discoveries, however, do not stop here. In the course of developing his concept of the Turing machine, he realized that all possible Turing machines could be run by a single Turing machine. If the operations of all Turing machines can be reduced to the same basic operations, then a single Turing machine will suffice to run the operations of all other Turing machines. What is simply needed is for the machine tables (the programs) of the other Turing machines to be inputted into the tape (memory) of this single machine. This single Turing machine is called the Universal Turing Machine.

    The Universal Turing Machine is in fact the theoretical forerunner or model of the modern-day, general-purpose, digital computer.3 Think, for instance, of the individual machines that enable us to do mathematical calculations (the calculator), to view DVD movies (the DVD player), to view television shows (the TV set), to hear music (the radio, CD player, and MP3 player), to organize our activities (the electronic organizer), to communicate with other people (the telephone and cell phone), to write and print documents (the electronic typewriter), and to play games (the PlayStation). Each of these machines, being an input-output device with memory and a set of instructions or program, is an instantiation of a particular Turing machine. But all of them can be put into one single machine, as is already done in our present computers.

    If computation is defined in terms of being run on a Turing machine and the computer is the approximate embodiment of the Universal Turing Machine, then computation can also be defined in terms of the actions of the computer, which is precisely what Roger Penrose (1994, 17) does: "What is a computation? In short, one can simply understand that term to denote the activity of an ordinary general-purpose computer. To be more precise, we must take this in a suitably idealized sense: a computation is the action of a Turing machine." Consequently, any computational system, being an input-output device with memory and a set of instructions, is a Turing machine; and so if the human mind is regarded as a computational system, then it too is an instantiation of a Turing machine (more precisely, of a Universal Turing Machine). Furthermore, if the computer and the human mind are both regarded as computational systems and instantiations of a Turing machine, then the human mind must be a certain type of computer. This reasoning paves the way for the view that the human mind is a type of computer.

    Couched in the modern language of computers, the general thesis of computationalism has consequently been expressed as the view that the mind is to the brain as software is to hardware. The computer software or program is here understood as a class of encoded computation that is run or implemented by the computer hardware. Accordingly, the thesis of human computationality is thus expressed as the view that the human mind is a (digital)

    2 For it is said that Alonzo Church independently arrived at the same conclusions as Turing (see Crane 1995, 99; Penrose 1994, 20-21). It is, however, noted that another logician, by the name of Emil Post, had done the same even earlier than Church (see Penrose 1994, 20).

    3 One difference between a Turing machine and any concrete machine that instantiates it is that the Turing machine has an infinite storage capacity.


    computer, while the thesis of machine intelligence is expressed as the view that a digital computer also has a mind. Consider the following two questions that Tim Crane (1995, 84) asks about the relation between computers and the human mind: (1) Can a computer think? Or more precisely: can anything think simply by being a computer? (2) Is the human mind a computer? Or more precisely: are any actual mental states and processes computational? The first question corresponds to the thesis of machine intelligence, while the second to the thesis of human computationality.4

    2.2. Classical and Connectionist Models

    At present, it is usual to distinguish between two types of computationalism: the classical or symbolic, and the connectionist. These two types of computationalism present different models for how the human mind/brain does its computations. The computationalism that we have discussed thus far is of the classical type. It is called classical because it is the type of computationalism that existed prior to the advent of the connectionist type. It is called symbolic because it regards the computational process as a process performed over symbols or representations, whether in the case of humans or in the case of machines. A classic pronouncement to this effect comes from Simon and Kaplan (1989, 40): "Computers and (in our view) human beings are symbol systems. They achieve their intelligence by symbolizing external and internal situations and events and by manipulating those symbols. They all use about the same symbol-manipulating process." Pylyshyn (1989, 57) makes the same point: "The important thing is that, according to the classical view, certain kinds of systems, including both minds and computers, operate on representations that take the form of symbolic codes." Basically, therefore, from a symbolic computationalist point of view, computation is essentially a symbol-manipulating process.

    One important formulation of this type of computationalism, in the area of artificial intelligence, is the physical symbol system hypothesis advanced by Newell and Simon (1995). According to Newell and Simon, intelligent systems, such as the human mind and the computer, are physical symbol systems. In this regard, intelligence is understood as symbol-manipulation. As they (1995, 97) explain:

    A physical symbol system is an instance of a universal machine. Thus the symbol system hypothesis implies that intelligence will be realized by a universal computer. However, the hypothesis goes far beyond the argument, often made on general grounds of physical determinism, that any computation that is realizable can be realized by a universal machine, provided that it is specified. For it asserts specifically that the intelligent machine is a symbol system, thus making a specific architectural assertion about the nature of intelligent systems.

    In the area of philosophy, symbolic computationalism is best represented, and well defended as well, by Jerry Fodor in his theory of mental representation, which is more

    4 That the human mind is a computational system is, strictly speaking, the claim of computationalism, while that the human mind is a type of computer is, strictly speaking, the claim of strong artificial intelligence (or strong AI). Generally speaking, however, computationalism and strong AI are equated with one another, which is also what we do here.


    popularly known as the mentalese or language-of-thought hypothesis. Fodor's hypothesis supplements the view already articulated in Newell and Simon's physical symbol system hypothesis. Fodor begins with the idea, which appears to him as self-evident, that there can be no computation without a system of representation. Given this, his task is then to investigate the nature of the human mind/brain's system of representation. And his investigations have led him to suppose that the human mind/brain's system of representation is inherent to the human mind/brain, and that this system of representation has a language-like structure.

    Fodor was greatly influenced by the idea of the famous linguist-philosopher Noam Chomsky that we are born with the same linguistic categories that enable us to learn various natural languages. This view is Chomsky's alternative to the claim of the behaviorist B. F. Skinner that the learning of natural languages is a matter of conditioning or association between stimuli and responses, a claim which Chomsky has shown to be mistaken. In the hands of Fodor, these innate linguistic categories have become the language of thought. The language of thought may be compared to the programming language of computers, as Fodor himself (1979, 67) explains: "On this view, what happens when a person understands a sentence must be a translation process basically analogous to what happens when a machine understands (viz., compiles) a sentence in its programming language."5

    Fodor advances three basic arguments for his language-of-thought hypothesis; namely, the productivity of thoughts, the systematicity of thoughts, and the systematicity of reasoning. The productivity of thoughts refers to the capacity of the human mind/brain to produce new thoughts from a number of old thoughts. For instance, from the old thoughts that "the red book is on the brown table" and that "the yellow bag is under the table", one can produce new thoughts such as "the red bag is on the table", "the yellow book is under the table", etc. The systematicity of thoughts refers to the capacity of the human mind/brain to produce and understand new thoughts built on already-understood old thoughts. For instance, anyone who understands "John loves Mary" will also understand "Mary loves John", and anyone who understands the statement "A small red ball is in a large blue box" will also understand the statement "A large blue ball is in a small red box". And the systematicity of reasoning refers to the capacity of the human mind/brain to recognize inferences that are of the same logical structure. For instance, anyone who can infer the statement "It is sunny" from the statement "It is sunny and warm and humid" can also infer the statement "It is sunny" from the statement "It is sunny and warm" (or anyone who can infer P from "P and Q and R" can also infer P from "P and Q"). The idea is that without the assumption that the human mind/brain has its own system of representation with a language-like structure, it would be impossible to account for the possibility of these three features or capacities of the human mind/brain.
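These capacities can be illustrated by giving thoughts an explicit combinatorial structure, which is the heart of the language-of-thought proposal. The toy representation below is our own sketch, not Fodor's notation:

```python
# Thoughts as structured expressions built from reusable constituents.
def thought(subject, relation, obj):
    return (subject, relation, obj)

# Systematicity: whoever can form "John loves Mary" can, by recombining
# the same constituents, also form "Mary loves John".
t1 = thought("John", "loves", "Mary")
t2 = thought(t1[2], t1[1], t1[0])
print(t2)  # ('Mary', 'loves', 'John')

# Systematicity of reasoning: one rule infers a conjunct from any
# conjunction of the same logical form, whatever its length.
def infer_first(conjunction):
    return conjunction[0]

print(infer_first(("It is sunny", "It is warm", "It is humid")))  # It is sunny
print(infer_first(("It is sunny", "It is warm")))                 # It is sunny
```

Without such shared structure among representations, each new thought or inference would have to be accounted for separately, which is exactly Fodor's point.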

    5 We can also compare the mind's language of thought with the programming language of our cellular phones. Notice that we can easily change the natural language of our cell phones, say from English to Filipino or German, for these languages are just programmed into the programming language of our cellular phones. As such, the rules governing the computational states of our cell phones are not the grammatical rules of the natural languages, but the rules of their programming language. In the same way, our mental states do not follow the rules of our natural languages, but the rules of the language of thought.


    Let us now examine the connectionist type of computationalism, or simply connectionism. According to this view, cognition, and thus computing as well (for, as a type of computationalism, connectionism likewise adheres to the view that cognition is a species of computing), is basically the interaction among the units in neural networks (or neural nets). This view was advanced by Paul Smolensky (1993) and David Rumelhart (1989), among others. In what follows, we shall discuss some of the basic concepts of connectionism, enough to give us a general picture of its difference from symbolic computationalism. In particular, we shall explain what units in neural networks are, how the interaction among such units comes about, and where computation comes in.

    To begin with, units, also called nodes, are the basic elements of a connectionist system called a network or net. These units receive, process, and send information; and they are interconnected, thereby forming a network or net. Based on their functions, units are classified into three kinds: the input units, which receive information from outside the network; the output units, which carry the resulting processed information; and the hidden units, which are the units in between the input units and output units. The hidden units, which may come in various layers, receive data from the input units, process these data, and then pass the processed data to the output units. The direction of the flow of information, or activations, may be forward, that is, from the input units the information is passed to the hidden units, where it is processed, and then the processed information is sent to the output units; or it may be recurrent, where from the input units the information is passed to the hidden units, where it first passes through the several layers of hidden units in a back-and-forth manner before the processed information is finally sent to the output units.

The flow of information among the units is made possible by the connections among them. These connections are said to have weights or strengths, which affect the amount of information flowing through them; the weight of a connection is called a connection weight. Each unit is also said to have a threshold level, an amount of incoming information that must be exceeded before the unit responds: it is when the information received by a unit exceeds its threshold level that the unit is activated to pass information on to other units. The amount of information received by a unit is therefore a combination of the strength of the connection through which the information passes and the amount of information given off by the sending unit.
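The behavior just described, a unit summing the weighted activation arriving over its connections and firing only when its threshold level is exceeded, can be sketched in a few lines. This is an illustrative sketch, not from the text; the function name, the particular numbers, and the all-or-nothing (0/1) output are assumptions made for the example.

```python
# A minimal sketch of a single connectionist unit: it sums the activation
# arriving over each weighted connection and fires (outputs 1.0) only if
# that sum exceeds its threshold level; otherwise it stays quiet (0.0).

def unit_output(incoming_activations, connection_weights, threshold):
    """Return 1.0 if the weighted sum of inputs exceeds the threshold, else 0.0."""
    total = sum(a * w for a, w in zip(incoming_activations, connection_weights))
    return 1.0 if total > threshold else 0.0

# Two sending units emit activations 0.9 and 0.4; the connections to the
# receiving unit have weights 0.8 and 0.5, and its threshold is 0.6.
print(unit_output([0.9, 0.4], [0.8, 0.5], 0.6))  # weighted sum 0.92 > 0.6, so 1.0
```

Note that the weighted sum combines exactly the two factors the text names: the strength of each connection and the amount of information given off by each sending unit.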

Where does computation come in? The strengths of the connections among the units can be adjusted or manipulated so that, given a certain input to a network, one gets a desired output. This process of adjusting or manipulating the connections among the units of a network is called the process of teaching the network, and it is this process that is governed by a computation. A computation here specifies how much adjustment should be made to the connections of a network such that, given a certain input, the network will give off a certain output. To get the appropriate computation for a desired output, however, one needs to experiment first with various computations; so teaching the network basically proceeds by trial and error. But why are the networks called neural? Basically, because it is believed, though arguably, that the connectionist model of computing is close to how the human brain works: the units correspond to the human brain's neurons and the connections correspond to its synapses.
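The trial-and-error character of teaching a network can be made concrete with a small sketch. The text does not commit to any particular adjustment rule, so the classic perceptron learning rule is used here purely as an illustration; the function names, learning rate, and training data are all invented for the example.

```python
# Hedged sketch of "teaching the network": repeatedly adjust the connection
# weights until the network gives the desired output for each input. The
# update rule below (the classic perceptron rule) is one standard way of
# doing this; the chapter itself does not specify any particular rule.

def train(samples, n_inputs, rate=0.1, epochs=50):
    """samples: list of (input_vector, desired_output) pairs, outputs 0 or 1."""
    weights = [0.0] * n_inputs
    threshold = 0.0
    for _ in range(epochs):
        for inputs, desired in samples:
            total = sum(a * w for a, w in zip(inputs, weights))
            actual = 1 if total > threshold else 0
            error = desired - actual          # trial: compare actual to desired
            for i in range(n_inputs):         # error: nudge each weight a little
                weights[i] += rate * error * inputs[i]
            threshold -= rate * error         # and adjust the threshold as well
    return weights, threshold

# Teach a two-input network the logical AND function.
and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, threshold = train(and_samples, n_inputs=2)
for inputs, desired in and_samples:
    total = sum(a * w for a, w in zip(inputs, weights))
    print(inputs, 1 if total > threshold else 0)  # matches the desired outputs
```

The loop structure mirrors the text's description: the computation specifies how much adjustment to make to each connection, and the appropriate weights are found only by repeated trial against the desired outputs.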


Based on our general accounts of the two types of computationalism, the main difference between symbolic computationalism and connectionism is that in symbolic computationalism a physical system processes an input to generate a desired output by manipulating symbols following a certain program, while in connectionism a physical system processes an input to generate a desired output by adjusting the connections among the units in networks following a computation or learning program. Another difference is that computation in the classical type is serial, for it is a step-by-step process, while computation in the connectionist type is parallel, for the interactions among the units in networks take place simultaneously. This accounts for why connectionism is also sometimes referred to as PDP, or parallel distributed processing. Also, computation in the classical type is described as symbolic, for computations are carried out through and over symbols, while that of the connectionist type is described as sub-symbolic, for the computations are carried out without the use of symbols but simply by means of adjustments of the connection weights. But if computation in connectionism does not make use of symbols, how is information represented? There are two possible ways: localized representation, where one unit is made to represent one item of information; and distributed representation, where one item of information is represented by the patterns of connections among the units.
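The two representation schemes can be illustrated with a toy example. Everything below is invented for the illustration, and the distributed scheme is rendered here as a pattern of activation spread across the units, which is one common way the idea is cashed out.

```python
# Illustrative contrast between the two schemes: in a localized
# representation one unit stands for one item of information; in a
# distributed representation every item is a pattern over many units.

ITEMS = ["cat", "dog", "bird"]

def localized(item):
    """One unit per item: only the unit assigned to this item is active."""
    return [1.0 if item == x else 0.0 for x in ITEMS]

# Distributed: each item is a pattern of activation shared across all the
# units (the values here are hand-picked purely for illustration).
DISTRIBUTED = {
    "cat":  [0.9, 0.1, 0.7],
    "dog":  [0.8, 0.2, 0.1],
    "bird": [0.1, 0.9, 0.6],
}

print(localized("dog"))    # [0.0, 1.0, 0.0] -- only the "dog" unit is active
print(DISTRIBUTED["dog"])  # every unit carries part of the representation
```

In the localized scheme, damaging one unit destroys one item outright; in the distributed scheme, each item is smeared across the whole set of units, which is part of what makes such representations sub-symbolic.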

In response to the connectionist model, Fodor and Pylyshyn (Fodor's close ally), in a joint publication ("Connectionism and Cognitive Architecture: A Critical Analysis," 1988), launched criticisms against this model. Their main contention is that only a symbolic type of computationalism can account for the productivity and systematicity of the thoughts and reasoning of the human mind/brain. Fodor and Pylyshyn further claim that at best connectionism is just a theory concerning the implementation of symbolic computationalism. In this regard, they treat connectionism not as a rival to classical computationalism but simply as a sub-theory of it. In any case, the difference between classical and connectionist computationalism does not touch the core of the claims of computationalism, for whether computation is symbol-manipulation or network-manipulation, it is still computation. But since the classical model is, relatively speaking, the better-established type of computationalism, we shall assume this type in discussing computationalism throughout the remaining part of this book.

    3. The Nature of Intelligence

Intelligence or cognition has a functional and a conscious aspect. Its functional aspect has to do with the type of activities it can perform or tasks it can accomplish; these activities involve solving problems and replying appropriately to certain questions. Its conscious aspect, on the other hand, has to do with the conscious processes it undergoes in performing certain activities or accomplishing certain tasks. And as these processes are conscious, they can have all the properties of consciousness. As cognitive processes primarily concern intentional states or propositional attitudes, they necessarily have intentional properties, in that they necessarily have contents that are about some objects or states of affairs in the world. And as conscious processes, they can have phenomenal properties as well, as cognitive phenomenology6

6 Cognitive phenomenology claims that the intentional also has a phenomenology. More specifically, it claims that there is something it is like to believe or know that p. To recall, one example used to demonstrate this


argues. In light of these two aspects of intelligence, we can distinguish between the functional and the conscious view of the nature of intelligence. This distinction is not meant to deny the existence of either the functional or the conscious properties of intelligence; it only concerns whether it is possible to define intelligence sufficiently by its functional aspect alone. The functional view says yes, while the conscious view says no.

    3.1. The Functional View of Intelligence

Perhaps the best illustration of the functional view of intelligence is Turing's imitation game or test. To determine whether a computing machine is intelligent, Turing does not examine the internal processes of the machine when it performs certain tasks but the kind of tasks the machine is capable of performing. Simply put, if a machine is capable of performing tasks which, when performed by a human, lead us to call the human intelligent, then the machine is intelligent.

Most of those working in artificial intelligence follow the Turing test. Roger Schank (1984, 51, 39), for instance, writes that "When we ask 'What is intelligence?' we are really only asking 'What does an entity, human or machine, have to do or say for us to call it intelligent?'" and "We really have no standards for measuring intelligence other than by observing the behavior of something we suspect is intelligent." Simon and Newell (1995, 96-97), in their Physical Symbol System Hypothesis, define intelligent action in the following way: "By 'general intelligent action' we wish to indicate the same scope of intelligence as we see in human action: that in any real situation behavior appropriate to the ends of the system and adaptive to the demands of the environment can occur, within some limits of speed and complexity." And Simon and Kaplan (1989, 1): "people are behaving intelligently when they choose courses of action that are relevant to achieving their goals, when they reply coherently and appropriately to questions that are put to them, when they solve problems of lesser or greater difficulty, or when they create or design something useful or beautiful or novel."

Schank's discussion of the kind of understanding that humans can share with machines assumes this view of intelligence. Schank (1984, 62) first qualifies that AI "is an attempt to build intelligent machines, not people," the idea being that AI does not attempt to reproduce in machines the whole of human mentality. Machines may be as intelligent as, or even more intelligent than, humans; but still they are not humans, precisely because there is more to being human than being intelligent, or, more precisely, more to human mentality than intelligence. Speaking of intelligence in terms of understanding, Schank then distinguishes three levels of understanding. From lowest to highest, these are the levels of making sense, cognitive understanding, and complete empathy. Schank argues that while humans are capable of all three levels, machines are capable of only the first two. Let us elaborate on this point.

phenomenon is the phenomenal difference between reading a statement in a language that one does not understand and reading it in a language that one does understand. What it is like for someone to read a statement in a language that he or she does not understand is different from what it is like for him or her to read it in a language that he or she understands.


On the level of making sense, understanding simply means knowing what is happening in a given situation; hence this level of understanding requires only simple recognition of the terms used and the actions performed in the situation (Schank 1984, 45). On the level of cognitive understanding, understanding means being able to do some of the following: (1) learn or change as a result of one's experiences; (2) relate present experiences to past experiences intelligently, i.e., by making useful comparisons and picking out important and interesting relationships; (3) formulate new information for oneself, that is, come to one's own conclusions based on one's experiences; and (4) explain oneself, that is, say why one made the connections that one did, and what thought process was involved in reaching one's conclusions (Schank 1984, 47-48). (With computers, experiences would mean inputs.) And on the level of complete empathy, understanding means identifying with or being able to relate to the experiences and feelings of another. The assumption here is that the one who understands and the one being understood share some memories and experiences, or that one's memories and experiences resemble those of the other (Schank 1984, 45). To further illustrate the differences among these levels, let us take a look at the following example from Schank (1984, 57-58):

INPUT: Did I tell you about what happened with Mary last night? We were at this party drinking and talking about nothing in particular and all of a sudden we weren't talking anymore and I noticed her face was right near mine. She moved closer and we kissed.

OUTPUT (MAKING SENSE): You and Mary were at a party. You were close to each other. You kissed each other. You didn't talk while kissing.

OUTPUT (COGNITIVE UNDERSTANDING): Mary must like you. From what you've told me about other women you know, she is more assertive than they are. She must have felt good being with you at the party.

OUTPUT (COMPLETE EMPATHY): That's like what happened to Cindy and me after the party. She came up to me and asked if I could give her a lift, and while we were in the coatroom she threw her arms around me and started kissing me.

In this example, one understands the situation on the level of making sense by simply recognizing what is happening in it. On the level of cognitive understanding, one understands it by relating present experiences to past ones and formulating new information based on this relation. And on the level of complete empathy, one understands it by identifying or relating with the experiences of the person in the situation on account of one's own similar experiences in the past. According to Schank (1984, 58), while an appropriately programmed computer can understand the situation up to the level of cognitive understanding, "no computer is going to understand this story at the level of COMPLETE EMPATHY for the simple reason that no computer is ever going to go to parties and get kissed and hugged by people." Actually, it is conceivable for a machine run by a computer, say an android, to go to a party and get kissed and hugged by people. But what this machine would not be able to know is


what it is like to go to a party, to get kissed, and to be hugged by people. Schank (1984, 45) concludes that:

Computers are not likely to have the same feelings as people, or to have the same desires and goals. They will not be people. The complete empathy level of understanding seems to be out of reach of the computer for the simple reason that the computer is not a person.

Accordingly, the conscious aspect of intelligence, in particular its phenomenal feature, matters only on the level of complete empathy, and thus computers, whose intelligence is limited to the functional aspect, are capable of understanding only on the levels of making sense and cognitive understanding. It is important to note that Schank does not deny the reality of the conscious aspect of intelligence; he claims only that this aspect becomes important on the level of complete empathy. On the levels of making sense and cognitive understanding, intelligence can sufficiently be defined functionally.

    3.2. The Conscious View of Intelligence

The conscious view claims that intelligence cannot be sufficiently defined by functional capacities alone. If a system exhibits functional capacities that are normally regarded as intelligent, but these capacities are not accompanied by conscious processes on the part of the system, then, according to this view, the system is not truly intelligent. Roger Penrose can be considered a subscriber to this view when he argues that one cannot talk of intelligence without talking of consciousness at the same time. Penrose (1994, 38-39), more specifically, regards the terms "intelligence," "understanding," and "awareness" as related in the following way: (a) intelligence requires understanding, and (b) understanding requires awareness. Intelligence, or the activity of thinking, is a necessarily conscious phenomenon, and thus to say that something can be intelligent without being conscious is to misuse, or to deviate from the original meaning of, the word "intelligence." Consequently, attributing intelligence to machines necessarily implies attributing consciousness to machines as well. And this explains why Penrose attributes to strong AI the claim that certain types of machines can be constructed such that they will be capable not only of thinking but of feeling as well. The following remarks by Penrose (1989) are to this effect: "One of the claims of AI is that it provides a route towards some sort of understanding of mental qualities, such as happiness, pain, hunger" (p. 14); "The supporters of strong AI would claim that whenever the algorithm were run it would, in itself, experience feelings; have a consciousness; be a mind" (p. 180).

The view that intelligence necessarily requires consciousness is also assumed in Searle's Chinese room argument, where he criticizes the claim of strong AI that machines that can simulate the intelligent behaviors of humans are capable of genuine understanding. According to Searle, genuine understanding, as in the case of humans, requires intentionality, or the awareness of what the symbols involved in one's thinking activity represent or are about. In his analysis, computers merely manipulate symbols according to the syntactical properties of these symbols, without knowing what these symbols mean. As such, computers are not really capable of genuine understanding, though their output behaviors seem to suggest that they are.

  • 8/7/2019 CHAPTER FIVE - COGNITION AS COMPUTATION

    15/20

    81

Searle is particularly reacting to the Turing test and to Schank's concept of understanding. According to Searle, even if a machine passes the Turing test and is capable of making inferences from input information (Schank's level of cognitive understanding), still the machine cannot be said to be intelligent, because it lacks the conscious aspect of intelligence, namely the intentionality of its internal states. Consequently, Searle disagrees with Turing that passing the Turing test is sufficient for the attribution of intelligence, and likewise with Schank that there are levels of understanding on which intelligence can sufficiently be defined functionally.

The absent qualia and inverted qualia arguments, which are also used to criticize functionalist theories of mind, including computationalism, likewise assume the conscious view of intelligence. These arguments claim that if two systems exhibit the same functional capacities, the two systems need not be the same in terms of conscious experiences, for it is possible that one, unlike the other, does not really have conscious experiences, or that one has conscious experiences totally different from those of the other. These arguments assume that the conscious experiences that go with the functional capacities of the mind cannot be ignored: ignoring them would mean leaving out some critical features of the mind. And so the functional view of intelligence, or of the mind for that matter, must be mistaken.

And if we grant the possibility of cognitive phenomenology (that cognitive states such as beliefs also have phenomenal features), Frank Jackson's knowledge argument would apply to intelligence as well, as intelligence or cognition involves cognitive or intentional mental states, the so-called propositional attitudes. In support of cognitive phenomenology, Alvin Goldman (1993, 24) gives Jackson's famous knowledge argument the following twist:

Just as someone deprived of any experience of colour would learn new things upon being exposed to them, namely what it feels like to see red, green, and so forth, so (I submit) someone who had never experienced certain propositional attitudes, for example, doubt or disappointment, would learn new things on first undergoing these experiences. There is something that it is like to have these attitudes, just as much as there is something that it is like to see red.

Just as knowing the physicality of seeing a color (knowing the brain processes that go with seeing a color and the physics of color, say, its particular wavelength) does not suffice to really know what it means to see a color (for one needs to know as well what it is like to see a color), so knowing the physicality of understanding or knowing something (which includes knowing the behavioral manifestations and functional capacities that go with understanding or knowing something) also does not suffice to really know what it means to understand or know something. What this means is that the conscious aspect of intelligence, in this particular case its phenomenal features, cannot be ignored in defining intelligence, or in attributing intelligence to some entity.


    4. Computationalism: The Question of Scope

While computationalism is a theory of the mind, it is not clear to what extent of the mind its claims are intended to apply. We saw that computationalism generally claims that cognition is a species of computing. We also know that cognition or intelligence is not all there is to having a mind, for as the mind has a cognitive aspect it also has an affective aspect, which primarily refers to emotions and where the phenomenal features stand out. Steven Pinker (2005, 4) specifies what comprises each of these aspects: "Our intelligence, for example, consists of faculties dedicated to reasoning about space, number, probability, logic, physical objects, living things, artifacts, and minds. Our affective repertoire comprises emotions pertaining to the physical world, such as fear and disgust, and emotions pertaining to the social and moral worlds, such as trust, sympathy, gratitude, guilt, anger, and humor." If computationalism merely concerns cognition, then it is not really intended to be a comprehensive theory of the mind, that is, a theory that accounts for the mind in all its varied aspects.

But it is also usual to attribute to computationalism the claim that the mind is a kind of computer or computational system, the implication of which is that computationalism is a comprehensive theory of the mind. These considerations give rise to a question regarding the scope of computationalism as a theory of the mind. Accordingly, we can distinguish between a broad construal of computationalism, where computationalism is understood as a comprehensive theory of the mind, and a narrow construal of it,7 where computationalism is understood as a specialized theory of the mind, one that focuses on a specific aspect of the mind, namely its cognitive aspect, and more specifically the purely cognitive or functional aspect of intelligence, that is, cognition or intelligence without regard to its conscious features.

    4.1. The Broad Construal

On the broad construal, the claims of computationalism are taken to apply to the mind in its entirety, which includes the phenomenal and intentional features of the mind. It is interesting to note that most critics of computationalism take the broad construal, for in accusing computationalism of failing to account for certain features of the mind, usually the phenomenal features, these critics understand the claims of computationalism as intended to cover the whole of the mind's nature.

One proponent of computationalism who does take the broad construal is Steven Pinker, in his book How the Mind Works (1997) and in his essay "So How Does the Mind Work?", where he answers Jerry Fodor's criticisms (Fodor 2002) of the views in that book. We earlier quoted Pinker distinguishing between the cognitive and affective aspects of the mind. Now, without regard to this distinction, Pinker (2005, 2) claims that our mind or mental life "consists of information-processing or computation. Beliefs are a kind of

7 The expressions "broad construal" and "narrow construal" were originally used by Robert Harnish (2002, 2-7) to distinguish between two ways of understanding cognitive science, where the broad construal regards cognitive science as an interdisciplinary study of cognition while the narrow construal regards it as the computational study of cognition. We shall touch on this distinction by Harnish in Chapter 7.


information, thinking a kind of computation, and emotions, motives, and desires are a kind of feedback mechanism in which an agent senses the difference between a current state and a goal state and executes operations designed to reduce the difference." Pinker (2005, 1-2), though, combines computationalism with the modular theory of mind8 and evolutionary theory, saying that "the mind is not a single entity but is composed of a number of faculties specialized for solving different adaptive problems. In sum, the mind is a system of organs of computation that enabled our ancestors to survive and reproduce in the physical and social worlds in which our species spent most of its evolutionary history."9 In short, Pinker claims that the human mind is an evolved computer that works in terms of modules, the various faculties specialized for solving different adaptive problems. And when he says the human mind here, he is not referring only to the cognitive aspect of the human mind but to the whole of what comprises it.

Pinker's ideas, as earlier noted, encountered direct criticisms from Fodor in the latter's book, interestingly entitled The Mind Doesn't Work That Way (2000), and in his essay "The Trouble with Psychological Darwinism" (1998). We shall not go into the various arguments of Fodor and how Pinker responds to them. What is important for our purposes here is that Fodor's general reaction to Pinker's brand of computationalism is to insist on a narrow construal of the claims of computationalism, implying that Fodor takes Pinker's computationalism to adopt the broad construal. The first two paragraphs of the Introduction to The Mind Doesn't Work That Way (2000, 1) immediately show this:

Over the years, I've written a number of books in praise of the Computational Theory of Mind (CTM often hereinafter). It is, in my view, far the best theory of cognition that we've got; indeed, the only one we've got that's worth the bother of a serious discussion. There are facts about the mind that it accounts for and that we would be utterly at a loss to explain without it; and its central idea (that intentional processes are syntactic operations defined on mental representations) is strikingly elegant. There is, in short, every reason to suppose that the Computational Theory is part of the truth about cognition.

But it hadn't occurred to me that anyone could suppose that it's a very large part of the truth; still less that it's within miles of being the whole story about how the mind works. (Practitioners of Artificial Intelligence have sometimes said things that suggest they harbor such convictions. But, even by its own account, AI was generally supposed to be about engineering, not about science; and certainly not about philosophy.) So, then, when I wrote books about what a fine thing CTM is, I generally made it a point to include a section saying that I certainly don't suppose that it could comprise more than a fragment of a full and satisfactory cognitive psychology; and that the most interesting (certainly the hardest) problems about thinking are unlikely to be much illuminated by any kind of computational theory we are now able to

8 Pinker's modules, however, differ from Fodor's: Fodor's are essentially encapsulated processors while Pinker's are functionally specialized mechanisms. See Pinker 2005, 15-17 and Fodor 2000, chap. 4 for a more detailed discussion of this difference.
9 Fodor (2000, 5) has called the combination of these theories the "New Synthesis." This synthesis is sometimes also called evolutionary psychology.


imagine. I guess I sort of took it for granted that even us ardent admirers of computational psychology were more or less agreed on that.

And in a footnote to the last statement of the first paragraph, Fodor states: "This is not to claim that CTM is any of the truth about consciousness, not even when the cognition is conscious. There are diehard fans of CTM who think it is; but I'm not of their ranks." Clearly, Fodor here distinguishes cognition from consciousness, for he is saying that CTM works only for cognition and not for consciousness. So if it is not the conscious aspect of cognition (as he states, "not even when the cognition is conscious"), then it can only be the purely functional aspect of cognition to which he regards CTM as applying.

    4.2. The Narrow Construal

In contrast to the broad construal, the narrow construal regards computationalism as a theory not of the whole of the mind but of its cognitive aspect. Furthermore, it is a theory of cognition in its purely functional aspect, where consciousness, as Fodor states above, is not part of the concern. We have already seen Fodor as an example of a proponent of computationalism taking the narrow construal. It should also be observed that most prominent cognitive scientists and scientists working in artificial intelligence take the narrow construal, as shown by how they define the goal of cognitive science and artificial intelligence, both computational in framework: the understanding of the nature of intelligence or cognition as exhibited in certain types of behavior or as manifested in the performance of certain types of actions. We have previously touched on the views of Schank (his three levels of understanding) and of Simon and Newell (their physical symbol system), all endorsing this construal. Observe now how Simon and Kaplan (1989, 1) in particular define the discipline of cognitive science: "Cognitive science is the study of intelligence and intelligent systems, with particular reference to intelligent behavior"; and the discipline of artificial intelligence (1989, 29): "Artificial intelligence is concerned with programming computers to perform in ways that, if observed in human beings, would be regarded as intelligent." And as regards intelligence per se, they (1989, 1) understand it in the following way:

Although no really satisfactory intentional definition of intelligence has been proposed, we are ordinarily willing to judge when intelligence is being exhibited by our fellow human beings. We say that people are behaving intelligently when they choose courses of action that are relevant to achieving their goals, when they reply coherently and appropriately to questions that are put to them, when they solve problems of lesser or greater difficulty, or when they create or design something useful or beautiful or novel. We apply a single term, "intelligence," to this diverse set of activities because we expect that a common set of underlying processes is implicated in performing all of them.

By mind, these scientists specifically mean intelligence or cognition, hence the expressions "intelligent machines," "cognitive computer," and "cognition as a species of computing." Furthermore, by intelligence, they specifically mean the capacity to behave in certain ways or to perform certain types of actions. Thus, in addition to limiting mentality to


cognition, these scientists further limit cognition to its purely functional aspect, thereby putting aside the conscious or phenomenal features of cognition. In short, these scientists limit their investigations of the nature of the mind to the purely functional aspect of intelligence, thereby holding the view that we earlier called the functional view of intelligence.

One critical consequence of this distinction between the two ways of construing the claims of computationalism concerns the appropriateness of the criticisms leveled against computationalism. If we take the broad construal, these criticisms are appropriate regardless of their validity; but if we take the narrow construal, these criticisms are inappropriate, or better, misplaced, for they impute to computationalism claims that computationalism itself does not make. And if these arguments are misplaced, then even if they are valid in themselves they have no bearing on the claims of computationalism. Between the broad and narrow construals, there seems to be more reason to adopt the narrow construal, as this is the one taken by most proponents of computationalism in cognitive science and artificial intelligence.

    5. Summary and Conclusion

We saw that computationalism has a general thesis, which claims that cognition is a species of computing, and two sub-theses: the thesis of human computationality, which states that human cognition is computational, and the thesis of machine intelligence, which states that computing machines capable of simulating human intelligent actions are intelligent. To further understand these theses we needed to clarify the concept of computation and the nature of intelligence assumed in them.

The concept of computation, generally referring to a finite, step-by-step effective procedure for obtaining a desired result, was first given a theoretical definition by Alan

Turing through his Turing machine, in the course of finding a mechanical procedure to

determine the computability of mathematical functions. Accordingly, computation is whatever can be implemented on a Turing machine. At present, however, there are two models for

understanding how computation works in the context of human cognition. One is the

symbolic model, which defines computation as symbol manipulation; the other is the connectionist model, which defines computation as the manipulation of the connections among the

units in neural networks. On the other hand, with regard to the nature of intelligence, we

    distinguished between the functional view, which claims that intelligence can be sufficiently

defined in terms of functional capacities alone, and the conscious view, which maintains that any definition of intelligence would be incomplete if it does not take into account the mind's

conscious properties (most especially its phenomenal and intentional properties).
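The two notions of computation summarized above can be made concrete with a small illustration. The following sketch, which is not from the chapter itself but is offered purely as an illustrative example, shows a minimal Turing machine in the symbolic spirit (computation as rule-governed symbol manipulation on a tape) alongside a single connectionist-style unit (computation as weighted connections among units). The rule table and weights are arbitrary toy choices.

```python
# Illustrative sketch only: a minimal Turing machine, in the spirit of
# Turing's definition of computation as the stepwise transformation of
# symbols on a tape by a finite table of rules.

def run_turing_machine(rules, tape, state="start", blank="_"):
    """Run a one-tape Turing machine until it reaches the 'halt' state.

    `rules` maps (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 (left), +1 (right), or 0 (stay).
    """
    tape = list(tape)
    head = 0
    while state != "halt":
        # Extend the tape with blank cells as the head moves past its ends.
        if head == len(tape):
            tape.append(blank)
        if head < 0:
            tape.insert(0, blank)
            head = 0
        state, tape[head], move = rules[(state, tape[head])]
        head += move
    return "".join(tape).strip(blank)

# A toy rule table: flip every bit, halting at the first blank cell.
flip_rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(flip_rules, "1011"))  # -> 0100

# By contrast, a connectionist "computation": a single unit whose output
# depends on the weighted sum of its connections, not on explicit symbol
# rules. The weights and threshold here are arbitrary.
def unit(inputs, weights, threshold=0.5):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

print(unit([1, 0, 1], [0.4, 0.9, 0.3]))  # -> 1
```

The contrast makes the chapter's distinction vivid: in the first case the "program" is an explicit table of symbolic rules one can read off; in the second, whatever the unit computes is implicit in the pattern of its connection weights.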

Finally, we distinguished between the broad and the narrow construals of the claims of

computationalism. The former regards such claims as covering the whole of the mind, while the latter regards them as limited to the purely functional aspect of

intelligence, or the cognitive aspect of the mind. We noted that taking the narrow construal would render many of the

criticisms leveled against computationalism misplaced, and that there are good reasons to

take this construal, considering that it is endorsed by most proponents of computationalism in cognitive science and artificial intelligence. Given this consideration, what is needed is an

evaluation of the theses of computationalism that will be appropriate to the narrow construal,


    or that will be appropriate regardless of which construal one takes. And this is precisely what

    the immediately following chapter intends to provide.