
The Turing Test

Computing Machinery and Intelligence

Alan Turing

Some Theories of Mind

• Dualism

– Substance Dualism: mind and body are different substances. Mind is unextended and not subject to physical laws.

• Interactionism: mind and body interact

• Occasionalism/Parallelism: mind and body don’t interact

– Property/Event Dualism

• Epiphenomenalism: physical events cause mental events but mental events don’t cause anything

• Property Dualism: (some) mental states are irreducibly non-physical attributes of physical substances

Some Theories of Mind

• Physicalism: mental states are identical to physical states, in particular, brain states or, minimally, supervene upon physical states.

– (“Analytical” or “Logical”) Behaviorism: talk about mental states should be analyzed as talk about behavior and behavioral dispositions

– The Identity Theory (Type-Physicalism): mental states are identical to (so nothing more than) brain states

– Functionalism: mental states are to be characterized in terms of their causal relations to sensory inputs, behavioral outputs and other mental states, that is, in terms of their functional role.

Dualism(s)

Pro

• Qualia

• Irreducibility of psychology

• The Zombie Argument

• The Cartesian Essentialist Argument

Con

• Causal closure of the physical

• Simplicity

Descartes’ Arguments for Dualism

• Essentialist Argument

– It is conceivable that one’s mind might exist without one’s body

– Whatever is conceivable is logically possible

– Therefore, it is possible one’s mind might exist without one’s body

• Empirical Argument

– The complexity and flexibility of human behavior, including linguistic behavior, couldn’t be achieved by mere mechanism so we need to assume some non-physical substance as an explanation for such behavior.

The Zombie Argument

• A (philosophical) zombie is a being which is a perfect duplicate of a normal human being—including brain and neural activity—but which is not conscious.

• The Zombie Argument for property dualism

– Zombies are conceivable (David Chalmers singing the Zombie Blues)

– Whatever is conceivable is logically possible

– Therefore, zombies are logically possible

– So (some) mental states/properties/events are not identical to any brain states/properties/events

• Note: this argument doesn’t purport to establish substance dualism or, as Descartes wished to show, that minds/persons could exist in a disembodied state.

Problem with Cartesian Dualism

• “We do not need that hypothesis”: complex behavior can be explained without recourse to irreducibly non-physical states.

– Contra Descartes, purely physical mechanisms can exhibit the kind of complex, flexible behavior, including learning (or “learning”) characteristic of humans.

• All physical events have sufficient causes that are themselves physical events

– Physicalism is an aggressor hypothesis: we explain more and more without recourse to non-physical events/states

– Agency explanations are eliminated in favor of mechanistic explanations—including explanations for agency itself.

– Claims to the effect that non-physical events cause physical events introduce an even bigger mystery: what is the mechanism?

Epiphenomenalism

• Motivation for Epiphenomenalism

– All physical events have sufficient causes that are themselves physical events

– But some mental events—qualitative states, the what-it-is-like experience—seem to be irreducibly nonphysical: it seems implausible to identify them with brain events.

• Problem: intuitively some mental states cause behavior

– E.g. pain causes people to wince

– Moreover, part of what we mean by “pain” seems to involve an association* with characteristic behavior

*We’ll leave “association” intentionally vague

(Philosophical) Behaviorism

• Motivation

– We want to hold that there are no irreducibly non-physical causes of physical events

– But we also need to accommodate the fact that what we mean by terms designating mental states involves an association with characteristic behavior.

• Problems

– Intuitively, there’s more to some mental states: the problem of qualia

– Intuitively, there can be less to mental states: it’s conceivable that one may be in a given state without even being disposed to characteristic behavior—or that one may be disposed to uncharacteristic behavior

– Dispositions aren’t causes, so while behaviorism associates mental states with behavior, mental states still don’t cause behavior.

The Identity Theory

• Motivation

– We want to hold that there are no irreducibly non-physical causes of physical events

– But we also want to understand mental states as “inner states” that are causally responsible for behavior

• Problems

– Qualia again: intuitively there is more to consciousness than brain states

– Species chauvinism: if we identify a type of mental state, e.g. pain, with a type of brain state that is responsible for pain in humans, e.g. the firing of C-fibers, what do we do about non-humans who don’t have the same kind of brain states but who, we believe, can nevertheless have the same kind of mental states?

What a theory of mind should do

• Make sense of consciousness: “The Hard Problem”

• Avoid commitment to irreducibly non-physical states, events or substances

• Explain the causal role of mental states as

– Effects of physical events

– Causes of behavior

– Causes of other mental events

• Allow for multiple realizability in order to avoid species chauvinism

– We want to be able to ascribe the same kinds of mental states we have to humans who may be wired differently, to other animals, and possibly to beings that don’t have brains at all, e.g. Martians or computers

Functionalism

• What makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part.

– Note: “function” here is also related to “function” in the mathematical sense.

• Topic Neutrality: mental state concepts don’t specify their intrinsic character, whether physical or non-physical—that’s a matter for empirical investigation.

– So Functionalism is in principle compatible with both physicalism and dualism

• Multiple Realizability: A single mental kind (property, state, event) can be "realized" by many distinct physical kinds.

– The same type of mental state could, in principle, be “realized” by different physical (or non-physical) states

– Disagreement about how “liberal” we should be in this regard

An Example: Pain

• We’re interested in analyzing our ordinary concept of pain

• We understand it in terms of its causal role

– As being typically produced by certain stimuli, e.g. bodily injury

– As tending to produce certain behavior, e.g. wincing

– As producing further mental states, e.g. resolving to avoid those stimuli in the future

• We recognize that different kinds of physical (or non-physical) mechanisms may play that role (see the sketch below)

– Compare to other functional concepts like “can opener”

– We leave empirical questions to empirical investigation
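
To make the causal-role idea concrete, here is a minimal sketch in Python of “pain” specified purely by its typical causes and effects. This is our own toy illustration, not anything from Turing or the lecture; the class name, stimulus string, and outputs are all invented. Nothing in it says what realizes the state.

```python
# A toy functional-role sketch (illustrative only): "pain" is whatever state is
# typically caused by injury and typically causes wincing plus an avoidance resolve.

from dataclasses import dataclass, field

@dataclass
class Agent:
    in_pain: bool = False
    resolutions: set = field(default_factory=set)

    def receive(self, stimulus: str) -> list:
        """Map a sensory input to behavioral outputs and further mental states."""
        behavior = []
        if stimulus == "bodily injury":                # typical cause of pain
            self.in_pain = True
        if self.in_pain:
            behavior.append("wince")                   # typical behavioral effect
            self.resolutions.add(f"avoid {stimulus}")  # typical effect on other mental states
        return behavior

a = Agent()
print(a.receive("bodily injury"))   # -> ['wince']
print(a.resolutions)                # -> {'avoid bodily injury'}
```

On a functionalist reading, anything with the same input/output and state-transition profile would count as being in the same state, whatever it is made of.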

The Big Questions About Functionalism

• Consciousness: some mental states appear to have intrinsic, introspectable features—and those features seem to be essential

– Inverted Qualia (see Block “Inverted Earth”)

– Zombies

– The Knowledge Argument (see Jackson “What Mary Didn’t Know”)

• Understanding: controversial whether understanding can be reduced to the ability to mediate input and output by manipulating symbols (see Turing, “Computing Machinery and Intelligence” vs. Searle on the Chinese Room)

The Turing Test

• Functionalism: mental states are to be characterized in terms of their causal relations to sensory inputs, behavioral outputs and other mental states, that is, in terms of their functional role.

• A Turing Machine can do this!

• So if Functionalism is true, a machine should in principle be able to do anything a person can do

• Can a machine do whatever a person can do?

• And can it meet…

The Cartesian Challenge

If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others. For we can certainly conceive of a machine so constructed that it utters words, and even utters words that correspond to bodily actions causing a change in its organs. … But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do. Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding, but only from the disposition of their organs. For whereas reason is a universal instrument, which can be used in all kinds of situations, these organs need some particular disposition for each particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act. [Descartes, Discourse on Method]

What can people do that computers can’t do?

• Telling Humans and Computers Apart Automatically

• A CAPTCHA is a program that protects websites against bots by generating and grading tests that humans can pass but current computer programs cannot. For example, humans can read distorted text, but current computer programs can't (see the sketch below).

• The term CAPTCHA (for Completely Automated Public Turing Test To Tell Computers and Humans Apart) was coined in 2000 by Luis von Ahn, Manuel Blum, Nicholas Hopper and John Langford of Carnegie Mellon University.
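
As a rough illustration of the generate-and-grade structure described above, here is a minimal text-only sketch in Python. It is our own stand-in, not an actual CAPTCHA implementation: real CAPTCHAs render the code as a distorted image, which is the part that defeats current programs.

```python
# Minimal sketch of a CAPTCHA-style generate-and-grade loop (text stand-in only;
# a real CAPTCHA displays the code as distorted text that OCR software struggles with).

import random
import string

def generate_challenge(length: int = 6) -> str:
    """Produce a random code that the site would display in distorted form."""
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

def grade(challenge: str, response: str) -> bool:
    """Pass the respondent iff they read the code back correctly."""
    return response.strip().upper() == challenge

code = generate_challenge()
print(code)                       # e.g. 'X7QK2M'
print(grade(code, code.lower()))  # -> True: a human who read the image would pass
```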

Empirical and Conceptual Questions

• The Turing Test: Can a machine* meet the Cartesian challenge?

– Use language in a way that humans do rather than merely uttering sounds?

– Exhibit the complexity and flexibility of behavior in a wide range of areas as humans do?

• What, if anything, of philosophic interest would it show if a machine could pass the Turing Test?

– Is passing the test necessary for intelligence?

– Is passing the test sufficient?

* What is a “machine”? Aren’t our brains themselves machines?

The Babbage Engine

ENIAC

Build your own Turing Machine!

A Turing machine is a theoretical computing machine invented by Alan Turing (1937) to serve as an idealized model for mathematical calculation. A Turing machine consists of a line of cells known as a "tape" that can be moved back and forth, an active element known as the "head" that possesses a property known as "state" and that can change the property known as "color" of the active cell underneath it, and a set of instructions for how the head should modify the active cell and move the tape (Wolfram 2002, pp. 78-81). At each step, the machine may modify the color of the active cell, change the state of the head, and then move the tape one unit to the left or right. [Read more at Wolfram MathWorld]
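
For readers who want to “build” one in software, here is a minimal sketch of the tape/head/state description above, in Python. It is our own illustration under the assumptions that "_" is the blank symbol and that a program is a table of (state, symbol) transitions; the state names and example program are invented.

```python
# A minimal Turing machine sketch. The program is a table mapping
# (state, symbol at the active cell) -> (symbol to write, head move, next state).

from collections import defaultdict

def run(program, tape, start_state, halt_state, max_steps=10_000):
    """Run the machine and return the visited portion of the tape."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    state, head = start_state, 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        write, move, state = program[(state, cells[head])]
        cells[head] = write                # change the "color" of the active cell
        head += 1 if move == "R" else -1   # move the head (equivalently, the tape)
    return "".join(cells[i] for i in sorted(cells))

# Example program: flip every bit on the tape, then halt at the first blank.
flip_bits = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}
print(run(flip_bits, "10110", "scan", "halt"))  # -> 01001_
```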

A Turing Machine is an Abstract Machine

An abstract machine is a model of a computer system (considered either as hardware or software) constructed to allow a detailed and precise analysis of how the computer system works. Such a model usually consists of input, output, and operations that can be performed (the operation set), and so can be thought of as a processor. An abstract machine implemented in software is termed a virtual machine, and one implemented in hardware is called simply a "machine." [Wolfram MathWorld]

Turing Machine here: try it!

Another Turing Machine

A concrete Turing Machine

Different hardware – same abstract machine

• Mental states are like computational states of computers

• The same computational or mental state can be realized by different hardware or brainware!

We’re in the same computational state!
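
A minimal sketch of the “different hardware, same abstract machine” point, in Python. This is our own illustration: the two “realizers” and their numbers are invented, and the abstract machine is deliberately trivial (it just stores and toggles one bit).

```python
# Multiple realizability in miniature: the same abstract one-bit machine,
# realized by two different kinds of "hardware".

from abc import ABC, abstractmethod

class BitRealizer(ABC):
    """Anything that can play the role of storing one bit."""
    @abstractmethod
    def write(self, value: int) -> None: ...
    @abstractmethod
    def read(self) -> int: ...

class SiliconRealizer(BitRealizer):      # a flip-flop on a chip, say
    def __init__(self):
        self._register = 0
    def write(self, value):
        self._register = value
    def read(self):
        return self._register

class NeuronRealizer(BitRealizer):       # a neuron's firing rate, say
    def __init__(self):
        self._rate_hz = 0.0
    def write(self, value):
        self._rate_hz = 40.0 if value else 0.0
    def read(self):
        return 1 if self._rate_hz > 20.0 else 0

def toggle(m: BitRealizer) -> int:
    """One abstract computational step, specified only by its functional role."""
    m.write(1 - m.read())
    return m.read()

chip, brain = SiliconRealizer(), NeuronRealizer()
print(toggle(chip), toggle(brain))   # -> 1 1: same computational state, different stuff
```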

The Imitation Game

• Turing proposes a ‘game’ in which we have a person, a machine, and an interrogator—separated from the other person and the machine.

• The object of the game is for the interrogator to determine which of the other two is the person, and which is the machine.

“I believe that in about fifty years’ time,” Turing wrote in 1950, “it will be possible to programme computers…to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning…I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

• So far this hasn’t happened but…there is a contest on:

The Empirical Question: Can a machine pass?

The Loebner Prize: In 1990 Hugh Loebner agreed with The Cambridge Center for Behavioral Studies to underwrite a contest designed to implement the Turing Test. Dr. Loebner pledged a Grand Prize of $100,000 and a Gold Medal (solid—not gold-plated!) for the first computer whose responses were indistinguishable from a human's.

The Conceptual (Philosophical) Question

If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

• How is the question (of whether a machine could pass the Turing Test) related to the question of whether a machine can think?

• What would it show if a machine could pass the Turing Test?

– Is being able to pass the Turing Test a necessary condition on intelligence?

– Is being able to pass the Turing Test a sufficient condition on intelligence?

Behaviorism?

The new problem has the advantage of drawing a fairly sharp line between the physical and intellectual capacities of a man. No engineer or chemist claims to be able to produce a material which is indistinguishable from the human skin…but even supposing this invention available we should feel there was little point in trying to make a ‘thinking machine’ more human by dressing it up in such artificial flesh.

• What matters for ‘intelligence’—or whatever Turing is testing for?

– Does ‘the right stuff’ (brain-stuff, ‘spiritual substance’, or whatever) matter?

– Does the right internal structure or pattern of inner workings matter? If so, at what level of abstraction?

– Does the right history, social role or interaction with environment beyond interrogation and response in the Turing Test matter?

Objections Turing Considers

1. The Theological Objection

2. The ‘Heads in the Sand’ Objection

3. The Mathematical Objection

4. The Argument from Consciousness

5. Arguments from Various Disabilities

6. Lady Lovelace’s Objection

7. Argument from Continuity in the Nervous System

8. Argument from the Informality of Behavior

9. Argument from Extrasensory Perception

The Theological Objection

Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.

• Turing’s response: God could give a machine a soul if he wanted to

• Some questions:

– Zombies. On this account it would be a contingent fact that intelligent computers (or humans) had souls—soulless zombies could perfectly simulate ensouled humans or machines.

– Are souls, if there are such things, what matter for consciousness (vide Locke)?

The ‘Heads in the Sand’ Objection

• The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.

• Turing notes that there’s no real argument here.

• Nevertheless, the prospect of intelligent machines raises a number of ethical questions…

The Mathematical Objection

• Gödel’s theorem…shows that in any sufficiently powerful logical system statements can be formulated which can neither be proved nor disproved within the system.

• Consequently there will be some questions a machine (being essentially an automated formal system) cannot answer.

• Turing notes, however, that there are questions that humans can’t answer, and it could be that, beyond this, we’re bound by the same constraint that restricts the capacity of machines.

The Argument from Consciousness

No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.

• A machine that passed the Turing Test would, ipso facto, be able to give appropriate responses to questions about poetry, emotions, etc.

• If we require more than the Turing Test as evidence of consciousness then we have no good reason to believe that other humans are conscious.

• But we do have good reason to believe that other humans are conscious.

• Therefore the Turing Test would be evidence of consciousness in a machine if that machine could pass the test.

Arguments from Various Disabilities

These arguments take the form, ‘I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to…be kind, be resourceful, be beautiful, be friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behavior as a man, do something really new.’

• It seems likely that we can construct machines that will be able to do a great many of these things—including learning and making mistakes but

• We should also ask whether various items on the list are requirements for intelligence or whether we’re building in a species-chauvinistic requirement that would exclude intelligent beings that aren’t like us humans.

Lady Lovelace’s Objection

‘The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform’

• But computers can surprise us and

• People aren’t all that original anyway

Final Objections

• Argument from Continuity of the Nervous System

– Response: a digital machine can imitate an analogue machine

• Argument from the Informality of Behaviour

– Response: no reason to think human behavior is any less rule-governed

• Argument from Extrasensory Perception

– Taking ESP seriously, we could find ways to rule it out by putting competitors in a ‘telepathy-proof room.’ Surely, even if ESP were a reality it wouldn’t be any more of a requirement for intelligence than the ability to appreciate strawberries and cream.

• Learning

– In fact computers can at least ‘learn’ and, unless we’ve established independently that they aren’t intelligent, there is no reason to deny that this constitutes genuine learning.

Imitation and Replication

• When is imitating X replication—i.e. another instance of X—rather than mere simulation?

• When does the ‘right stuff’ matter:

– Margarine is only simulated butter but

– Walking with an artificial leg is real walking

• When do the right extrinsic features, e.g. right history matter:

– Counterfeit money and art forgeries are fakes but

– A copy of a file or application is the real thing

Are inputs/outputs all that matter?

Consider, for example, Ned Block's Blockhead…a creature that looks just like a human being, but that is controlled by a “game-of-life look-up tree,” i.e. by a tree that contains a programmed response for every discriminable input at each stage in the creature's life. If we agree that Blockhead is logically possible, and if we agree that Blockhead is not intelligent (does not have a mind, does not think), then Blockhead is a counterexample to the claim that the Turing Test provides a logically sufficient condition for the ascription of intelligence. After all, Blockhead could be programmed with a look-up tree that produces responses identical with the ones that you would give over the entire course of your life (given the same inputs).
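
Here is a toy sketch of the look-up structure in Python. It is hypothetical and absurdly small: Block's Blockhead would need an entry for every discriminable conversation history over a whole lifetime, which is combinatorially enormous, and the few entries below are our own inventions.

```python
# Blockhead in miniature: responses are looked up by the exact conversation so far.
# No understanding anywhere, just a (here, tiny) table.

lookup_tree = {
    (): "Hello.",
    ("Hello.", "How are you?"): "Fine, thanks. And you?",
    ("Hello.", "What is 2 + 2?"): "4.",
}

def blockhead_reply(history):
    """Return the canned response keyed to this exact history of exchanges."""
    return lookup_tree.get(tuple(history), "I don't follow.")

print(blockhead_reply(["Hello.", "What is 2 + 2?"]))   # -> 4.
```

If the table covered every input you could ever receive, Blockhead's outputs would match yours, which is exactly the worry about treating input/output equivalence as sufficient for intelligence.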

Objections to the Turing Test as What Matters

• Intentionality (The Chinese Room: Searle, “Minds, Brains and Programs”)

– You can’t crank semantics out of syntax: mere symbol-manipulation, however adept, doesn’t create meaning or understanding.

• Consciousness (The Inverted Spectrum: Block, “Inverted Earth”)

– Neither behaviorism nor functionalism can capture the felt, intrinsic character of phenomenal mental states, e.g. “what it is like” to see red.

• Semantic Externalism (Swampman: Davidson, “Knowing One’s Own Mind”)

– What one's words mean—if they mean anything—is determined not merely by some internal state, but also by the causal history of the speaker and the role he plays within his environment.

Intentionality Objection

What does “CKApqrr” mean? According to the syntactic rules of the first game, “Shak-A-WFF,” it’s a WFF, but when I construct and manipulate WFFs I don’t know what I’m doing.
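
To see how far pure syntax goes, here is a minimal well-formedness checker for this Polish-style notation, in Python. It is our own sketch under the assumption that 'C', 'K', and 'A' are binary connectives, 'N' is unary, and lowercase letters are propositional variables; it certifies "CKApqrr" as a WFF without representing what any of it means.

```python
# Syntax without semantics: check well-formedness in Polish notation by arity alone.

ARITY = {"C": 2, "K": 2, "A": 2, "N": 1}   # conditional, conjunction, disjunction, negation

def is_wff(formula: str) -> bool:
    def parse(i: int) -> int:
        """Consume one well-formed subformula starting at index i; return the index past it."""
        if i >= len(formula):
            raise ValueError("ran out of symbols")
        ch = formula[i]
        if ch.islower():                    # a propositional variable
            return i + 1
        if ch not in ARITY:
            raise ValueError(f"unknown symbol {ch!r}")
        j = i + 1
        for _ in range(ARITY[ch]):          # parse each argument of the connective
            j = parse(j)
        return j
    try:
        return parse(0) == len(formula)     # exactly one formula, nothing left over
    except ValueError:
        return False

print(is_wff("CKApqrr"))   # -> True, yet the checker has no idea what the formula means
print(is_wff("CKApq"))     # -> False
```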

Consciousness: the Inverted Qualia Objection

[T]he inverted spectrum argument is this: when you and I have experiences that have the intentional content looking red, your qualitative content is the same as the qualitative content that I have when my experience has the intentional content of looking green.

We use color words in the same way, make the same inferences, and respond in the same way to the same stimuli but (it seems to be conceivable that) our experiences are different in their intrinsic, qualitative character: ‘what it is like’ for you to see red is different from ‘what it is like’ for me. The Turing Test can’t capture the ‘what it is like’ feature of experience.

Semantic Externalism

Consciousness: The Zombie Problem

It seems conceivable that a being with NO qualia could pass the Turing Test. Do qualia matter? If so, for what?