Transcript

PHIL*3180, 1

Descartes’ Argument 1:

1) I cannot doubt the existence of my mind. (“My mind has the property of indubitable existence”?)

2) I can doubt the existence of my body. (“My body does not have the property of indubitable existence”?)

3) Therefore my mind has a property my body lacks.

4) If two things have different properties then they are not identical. (Leibniz’s Law: the indiscernibility of identicals.)

5) Therefore: my mind is not identical with my body.
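The shared logical engine of both arguments can be made explicit. A standard second-order rendering (my formalization, with a for my mind, b for my body):

```latex
% Leibniz's Law (the indiscernibility of identicals):
\forall x\,\forall y\,\bigl(x = y \;\rightarrow\; \forall F\,(Fx \leftrightarrow Fy)\bigr)

% Contrapositive form used in steps 3--5:
\exists F\,(Fa \wedge \neg Fb) \;\rightarrow\; a \neq b
```

A common objection worth noting: ‘indubitable existence’ is an intensional property (it concerns what I can doubt, not the thing itself), and applying Leibniz’s Law in intensional contexts is contested.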

PHIL*3180, 2

Descartes’ Argument 2:

1) My body is extended in space (and so is spatially divisible).

2) My mind is not extended or divisible.

3) Therefore my body has a property my mind lacks.

4) If two things have different properties then they are not identical. (Leibniz’s Law: the indiscernibility of identicals.)

5) Therefore: my mind is not identical with my body.

PHIL*3180, 3

The dualistic intuition: mental things and physical things are totally different.

The causal intuitions:

a) causes and effects must occupy the same domain / be relevantly similar;

b) the physical is causally closed.

The solution to this tension? Epiphenomenalism: the view that mental events have no causal impact on physical events—that the physical would continue just the same even if the mental changed or ceased to exist.

PHIL*3180, 4

An argument for behaviourism:

1) Dualism suffers serious problems:
• mind-body causation;
• the ‘problem of other minds’ (radical privacy);
• implausible account of the meaning of mental terms;
• bad fit with empirical psychology.

2) Behaviourism solves or dissolves these problems.

3) So behaviourism is a more adequate theory than dualism.

PHIL*3180, 5

What is behaviourism?

Logical/analytic behaviourism is a theory of the mind (rather than just a methodology). Statements containing mentalistic expressions are wholly translatable into statements containing only descriptions of publicly observable dispositions to behave in particular ways. According to Ryle, a theory of mind allows us to predict and explain human behaviour; it is not a theory of some ‘thing’ or set of inner states.

PHIL*3180, 6

Mind-brain type-type identity theory:

Consciousness (as it is experienced / introspected) is identical with—exactly the same thing as—some physical pattern of brain activity. NB: the ‘is’ of definition vs. the ‘is’ of composition (i.e., this is a contingent identity claim).

Initial reason 1: Problems with behaviourism
Initial reason 2: Scientific parsimony

PHIL*3180, 7

Mind-brain type-type identity theory: The main argument:

i) Generally we conclude that ‘two’ things are really the same thing when a) they are systematically correlated, and b) observations of ‘one’ of the things explain our observations of the ‘other.’ E.g. electrical charge and lightning.

ii) Our observations of the brain might systematically correlate with and explain our conscious experience (as long as we can get past the ‘phenomenological fallacy’).

iii) So consciousness could be identical with a brain process. [Tacitly: this is the only way for physicalism to be true, and physicalism probably is true, so consciousness probably is identical with a brain process.]

PHIL*3180, 8

Non-Reductive Materialism—Functionalism

MULTIPLE REALIZABILITY:

a) If ‘two’ things are numerically identical then it is logically impossible for ‘one’ to occur without ‘the other.’

b) However, pain (and other mental states) can occur without the firing of a particular sort of neural fibre (or whatever is the candidate physical state).

c) Hence, mental states can occur without the physical states with which they are supposed to be identical … and so they cannot be identical with them … and so mind-brain identity theory must be false.

PHIL*3180, 9

Non-Reductive Materialism—Functionalism

Role vs. occupant

↓ Input-output relations

↓ Machine tables

↓ Software vs. hardware

NB: Functional properties are not physical properties.

PHIL*3180, 10

A sample machine table:

          State S1    State S2    State S3
Input I1  B1, S3      B1, S1      B2, S2
Input I2  B2, S1      B1, S3      B3, S1
Input I3  B3, S1      B3, S2      B2, S3

(Each cell gives the behaviour emitted and the next state to enter.)
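The machine table can be rendered directly in code. A minimal Python sketch (the `run` driver and variable names are illustrative, not from the slides):

```python
# The sample machine table as a Python dict.
# Each (current state, input) pair maps to (behaviour emitted, next state).

TABLE = {
    ("S1", "I1"): ("B1", "S3"), ("S2", "I1"): ("B1", "S1"), ("S3", "I1"): ("B2", "S2"),
    ("S1", "I2"): ("B2", "S1"), ("S2", "I2"): ("B1", "S3"), ("S3", "I2"): ("B3", "S1"),
    ("S1", "I3"): ("B3", "S1"), ("S2", "I3"): ("B3", "S2"), ("S3", "I3"): ("B2", "S3"),
}

def run(start, inputs):
    """Feed a sequence of inputs to the machine; return behaviours and final state."""
    state, behaviours = start, []
    for i in inputs:
        behaviour, state = TABLE[(state, i)]
        behaviours.append(behaviour)
    return behaviours, state

# Any physical system realizing this table -- neurons, silicon, beer cans --
# is in the same functional states: the table specifies the 'role', not the
# 'occupant'.
```

This is the point of multiple realizability: the table constrains input-output-state relations, while staying silent about the hardware that implements them.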

PHIL*3180, 11

Mind as syntactic engine

PHIL*3180, 12

Cognition = the manipulation of symbols in accordance with rules (algorithms).

The rules operate only on the shapes of mental symbols, but the transitions mirror semantic, rational processes.
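A toy illustration of that idea (my example, not from the lecture): a rule that operates only on the shapes of symbols, yet whose transitions mirror addition.

```python
# A miniature 'syntactic engine'. The rewrite rule is purely shape-based:
# it deletes the ' + ' pattern, looking only at characters, never at what
# they mean. Under the interpretation 'a block of n strokes denotes n',
# this syntactic transition mirrors the semantic operation of addition.

def rewrite(expr: str) -> str:
    """Shape-level rule: erase the plus sign and the spaces around it."""
    return expr.replace(" + ", "")

result = rewrite("||| + ||")
# Semantically: len('|||') + len('||') == len(result), i.e. 3 + 2 == 5,
# though the rule itself never 'knows' anything about numbers.
```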

PHIL*3180, 13

Causal-theoretical functionalism

1. Mental states are whatever plays a particular causal role (i.e. a causal analysis of mentalistic concepts).

2. The relevant causal role is specifiable only within a complete psychological theory (e.g. ‘folk psychology’).

3. That theory is subject to empirical confirmation or disconfirmation.

PHIL*3180, 14

Some flavours of functionalism:

• Machine functionalism (Putnam)
• Analytic functionalism (Armstrong, Lewis)
• Cognitivism (modern cognitive science, e.g. Fodor, Pylyshyn)
• Teleological functionalism (see later—Dretske, Millikan)

Functional state identity theory (Putnam) vs. functional specification theory (Armstrong)

PHIL*3180, 15

Problems for functionalism:

1) Chauvinism (species-specific receptors and actuators)
2) Liberalism (Blockheads; Chinese rooms)
3) The inverted spectrum
4) Absent qualia (zombies)

The problem of qualia!

PHIL*3180, 16

The anti-functionalist argument form:

i) Case X is a possible case of mental (phenomenal / semantic / cognitive) difference without functional difference.

ii) If functionalism were true then Case X would be impossible.

iii) Therefore functionalism is not true.

PHIL*3180, 17

• Blockheads = functionally the same + cognitively different

• Chinese rooms = functionally the same + semantically different

• Spectrum inverts = functionally the same + phenomenally different

• Zombies = functionally the same + phenomenally different

• Aliens = mentally the same + functionally different (chauvinism)

PHIL*3180, 18

The Chinese Room:

PHIL*3180, 19

Can the mental be reduced to the physical?

No: the ‘special sciences’ are autonomous (Fodor)—some (true, scientific) facts cannot be explained or predicted using only physics.

Yes(ish): multiple realizability is not an obstacle to reduction (Kim)—all causation is ultimately physical causation, hence covered by physical law.

PHIL*3180, 20

What is reduction?

a) All the laws of theory S can be translated into laws of physics.

vs.

b) All the events which fall under the laws of S are physical events and hence fall under the laws of physics.

Classical theory reduction is a), not b).

PHIL*3180, 21

A theory reduction schema:

Higher-level law: S1x → S2x
Bridge law 1: S1x ↔ P1x
Bridge law 2: S2x ↔ P2x
Physical law: P1x → P2x

Plus Claim NK: “there are natural kind predicates in an ideally completed physics which correspond to each natural kind predicate in any ideally completed special science” (Fodor).

PHIL*3180, 22

BUT: Claim NK is false (Fodor argues). Interesting generalizations can often be made about events whose physical descriptions have nothing in common … i.e. multiple realizability (our old friend). E.g. Gresham’s law (‘bad money drives out good’) E.g. rewards improve learning So there are no laws of physics corresponding to laws of the special sciences.

PHIL*3180, 23

The Physical Realization Thesis (Kim):

(i) Mental states occur in a system exactly when appropriate physical conditions are present in the system.

(ii) Causal properties of mental states are due to, and explainable in terms of, the causal properties of their physical substrates.

This, plus MR, leads to the following conclusion:

PHIL*3180, 24

Psychology is not a science with a unified subject matter: mental states are not natural kinds. Instead, there are structure-restricted reductions of mental states.

PHIL*3180, 25

What is Consciousness?

• Creature consciousness
• Transitive consciousness
• State consciousness
• Phenomenal consciousness
• Access consciousness
• Self-consciousness
• Monitoring consciousness

P-consciousness ≠ A-consciousness

PHIL*3180, 26

The Problem of Phenomenal Consciousness

What is it like to be a bat?

PHIL*3180, 27

The Explanatory Gap

“Consciousness is a mystery. No one has ever given an account, even a highly speculative, hypothetical, and incomplete account of how a physical thing could have phenomenal states. Suppose that consciousness is identical to a property of the brain, say activity in the pyramidal cells of layer 5 of the cortex involving reverberatory circuits from cortical layer 6 to the thalamus and back to layers 4 and 6, as Crick and Koch have suggested for visual consciousness. Still, that identity itself calls out for explanation!” (Block and Stalnaker 1998)

PHIL*3180, 28

Qualia Quined

Qualia are:
1. Ineffable
2. Intrinsic
3. Private
4. Immediately apprehensible in consciousness

Dennett argues there are no such things.

PHIL*3180, 29

Intuition pumps:

1) The taste of cauliflower
2) Alternative intra-personal spectrum-inversion neurosurgery
3) Chase and Sanborn / the beer drinker
4) Visual field inversion and recovery
5) The cry of the osprey

Experience is constituted by extrinsic properties—judgements and discriminations—not ‘qualia.’

PHIL*3180, 30

Zombies!

1) Zombies [P & ~Q] are conceivable.

2) If zombies are conceivable then they are (metaphysically) possible.

3) If zombies are (metaphysically) possible then phenomenal consciousness is non-physical.

4) Therefore phenomenal consciousness is non-physical.
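Writing ◊c for conceivability, ◊ for metaphysical possibility, and taking physicalism (as is standard) to require ☐(P → Q), the argument’s skeleton can be rendered as follows (my formalization):

```latex
\begin{align*}
&1.\ \Diamond_c (P \wedge \neg Q) \\
&2.\ \Diamond_c (P \wedge \neg Q) \rightarrow \Diamond (P \wedge \neg Q) \\
&3.\ \Diamond (P \wedge \neg Q) \rightarrow \neg\Box(P \rightarrow Q) \\
&\therefore\ \neg\Box(P \rightarrow Q)
  \quad \text{(physicalism is false: the physical does not necessitate the phenomenal)}
\end{align*}
```

Step 3 is just modal logic (◊¬(P → Q) is equivalent to ¬☐(P → Q)); the philosophical weight falls on premises 1 and 2.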

PHIL*3180, 31

The Knowledge Argument

1) Mary is a brilliant scientist who knows all the physical facts about colour.

2) Mary—trapped in her black-and-white room—has never had a colour sensation.

3) When Mary sees colour for the first time she will learn something new.

4) Therefore Mary didn’t already know all the facts about colour experience.

5) Therefore not all the facts about colour experience are physical facts.

PHIL*3180, 32

A response to the knowledge argument: The Ability Hypothesis Mary does learn something new. But what she learns is how to do something she could not do before—she does not learn any new fact. Mary gains new abilities to remember and to imagine. This is knowledge but not information (know-how, but not knowledge-that). Thus the physical facts are all the facts there are.

PHIL*3180, 33

Another response to the knowledge argument: Phenomenal concepts

phenomenal concepts ≠ physical concepts There are no entailments between the two. But it does not follow from this that:

phenomenal properties ≠ physical properties

Mary does learn a new fact, but only in an ‘opaque’ and not in a ‘transparent’ sense.

PHIL*3180, 34

This is so, even though phenomenal concepts are special. They do not connote merely contingent ‘modes of presentation’ of the property, but ‘essential’ ones.

Compare: gold vs. Au; pain vs. C-fibres firing.

Phenomenal concepts are (a species of) recognitional concepts: type-demonstratives. They ‘directly’ pick out certain internal properties. These properties are physical-functional brain properties, and are picked out ‘indirectly’ by certain theoretical concepts.

PHIL*3180, 35

What about the explanatory gap? It seems that finding out that that property is cff ought to be explanatory, and it is not. Loar argues that the mistake is ‘expected transparency’: we think that “a direct grasp of a property ought to reveal how it is internally constituted.”

PHIL*3180, 36

Kripke’s argument:

1) Identities between ‘rigid designators’ are, if true, necessarily true. [a=b → ☐(a=b)] E.g.: “Heat is the motion of molecules.” The appearance of contingency can be explained away.

2) “Pain is the firing of C-fibres” is not necessarily true (i.e. it is possibly false). [~☐(a=b)] The appearance of contingency here cannot be explained away.

3) So “Pain is the firing of C-fibres” is false (and ditto for any other physical or functional example). [~a=b]

PHIL*3180, 37

Levine’s Explanatory Gap:

• If we ask why heat is the movement of molecules, there is a satisfying answer.
• If we ask why pain is C-fibres firing, there is no satisfactory answer.

Heat is a ‘causal role’ concept—and so suitable for explanation by underlying mechanisms—in a way pain is not.

PHIL*3180, 38

The rediscovery of light:

1) Various arguments attempt to show that consciousness is epistemically or metaphysically ‘irreducible’ to the physical.

2) Exact analogs of these arguments can be constructed to ‘show’ that light is non-physical.

3) The arguments for non-physical light are unsound.

4) So the arguments for non-physical consciousness are unsound as well.

PHIL*3180, 39

For example:

i. Light has both extrinsic features—such as reflection and refraction—and a special intrinsic feature, luminance. Luminance is epistemically accessible only from ‘the visual point of view.’

ii. Mary could know all there is to know about electro-magnetic waves, and thereby all the facts about the extrinsic properties of light. But Mary is blind, and so would not know about luminance.

iii. So luminance must be non-physical.

PHIL*3180, 40

Conceptual Analysis and the Explanatory Gap

CLAIM: If there is no a priori conceptual analysis of consciousness that connects it to the physical/functional, then there is an explanatory gap (and hence [?] physicalism is false). [~C → E] The intuition: an explanation is satisfying iff it renders it inconceivable that the lower-level description could be true and the higher-level phenomenon not obtain.

Block and Stalnaker argue this CLAIM is false, by arguing that we could close the explanatory gap without a conceptual analysis of consciousness. [◊(~C & ~E)]

PHIL*3180, 41

1. Water: Consider the boiling of water: B&S argue that the boiling of water does not follow a priori from the micro-structural process—all that follows is that there will be a transparent, bubbling liquid, not that it is water (as opposed to, say, twater). We need to know, in addition, that the relevant liquid is water. In just the same way—B&S argue—it could turn out that pain is pca, even though pca doesn’t entail pain. Furthermore: “[e]ven if it is a microphysical fact that H2O is a waterish stuff around here, it is not a microphysical fact that it is the waterish stuff around here”

PHIL*3180, 42

2. Life:

a) We pick out a class of paradigm cases of living things, though without any conceptual analysis of necessary and sufficient conditions.

b) We understand completely how some of the simpler forms of life work.

c) “We have reason to think that more complicated living things work by similar principles, and see no bar, in principle, to our extending our explanations of simple living things to all forms of life” (378).

PHIL*3180, 43

Naturalised Semantics The aim is to show how meaning—mental content—is constructed from components we already understand and that are not already meaningful: a ‘recipe for thought’ (Dretske). One idea is to build mental content out of basic, ‘original’ intentional relations.

PHIL*3180, 44

Dretske: Causal/Informational Semantics

Some things naturally track other things: storm clouds, compass needles, smoke, animal tracks…. Typically the relation that underpins this is causation. The required extra ingredient for meaning is misrepresentation: the capacity to represent that Fa even when it is not the case that Fa. Hence content must be ‘detached’ from (actual) causes.

PHIL*3180, 45

States that track other states can misrepresent if they have the natural function of providing a certain sort of information. (Functions introduce normativity to causality: they provide a telos.) States can acquire information-carrying functions either phylogenetically (e.g. hearts, eyes) or ontogenetically (learning). Both these things require a certain kind of history.

PHIL*3180, 46

Dretske’s recipe for thought:

1) A system with a need for the information that F, and the ability to adjust its behaviour in the presence of F.

2) A means for the system to detect F.

3) A natural process that confers on the element that tracks F the function of carrying information about F.
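These three ingredients can be put in schematic code. This is a loose illustrative sketch only (all names and the toy ‘learning’ rule are my inventions, not Dretske’s), but it shows where misrepresentation enters: only after a state has acquired the function of indicating F does firing-without-F count as error.

```python
# Toy model of Dretske's recipe (illustrative names throughout):
# (1) a system that needs the information that F and adjusts its behaviour,
# (2) a detector for F, (3) a history that confers on the detector the
# function of carrying information about F.

class Creature:
    def __init__(self):
        # The indicator function is acquired ontogenetically (by learning).
        self.detector_has_function = False

    def detect(self, stimulus: str) -> bool:
        # The detector tracks F imperfectly: it also fires on F-lookalikes.
        return stimulus in ("F", "F-lookalike")

    def learn(self, episodes):
        # Toy learning history: if firing in the presence of F has paid off,
        # the detector acquires the function of indicating F.
        if any(stim == "F" and payoff > 0 for stim, payoff in episodes):
            self.detector_has_function = True

    def misrepresents(self, stimulus: str) -> bool:
        # Misrepresentation = a state with the function of indicating F
        # occurs, but F is absent. Without the function there is no error,
        # only causation.
        return (self.detector_has_function
                and self.detect(stimulus)
                and stimulus != "F")

c = Creature()
c.learn([("F", 1), ("noise", 0)])
# A firing caused by a mere lookalike now counts as an error about F.
```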

PHIL*3180, 47

Millikan: Biosemantics Representation is teleo-functional. But focussing on the ‘normal causes’ of representations leads to problems. Instead, we should focus on representation consumption and on what Millikan calls the normal conditions for representation.

PHIL*3180, 48

So what is it for a system to use a representation as a representation?

a) “…[T]hat the representation and the represented accord with one another, so, is a normal condition for proper functioning of the consumer device as it reacts to the representation.”

b) “…[R]epresented conditions are conditions that vary, depending on the form of the representation, in accordance with specifiable correspondence rules that give the semantics for the relevant system of representation.”

PHIL*3180, 49

E.g. beaver warning tail-slaps. The splash means danger because the avoidance behaviour it causes in beavers serves a purpose only when there is danger present—the ‘proper function’ of the tail-slap is to signal danger. The representation is also articulated: a splash-at-a-time-and-place signals danger-at-a-time-and-place. This allows us to single out the particular thing among the various causes of the representation that it represents, whether or not this is the most frequent, or even a reliable, cause.

PHIL*3180, 50

Brandom’s Linguistic Rationalism A contentful state is one that plays the right sort of role in inference. The content of these states is fixed by inferential practices grounded in the social phenomenon of communication.

PHIL*3180, 51

Sentience: the biological phenomenon of ‘reliable differential responsiveness’

Sapience: the capacity to be a rational agent and believer.

According to Brandom, sapience is not biological but social; it is not causal but normative.

PHIL*3180, 52

The central semantic unit—the ‘minimum graspable’—is the judgement or the assertion: claiming that something (some proposition) is true. All other elements of language and thought—e.g. names, concepts—are to be understood in terms of the role they play in judging.

PHIL*3180, 53

What makes something a judgement—a propositional content—is that it can be used as the premise or conclusion of an inference. The core of linguistic practice is giving and asking for reasons. Compare: a parrot/thermostat vs. a knower. Judging is a kind of commitment. It involves staking a claim, making one liable to a) demands for justification, and b) further commitments.

PHIL*3180, 54

Linguistic commitments are grounded in social practices of communication and justification—to understand a propositional content just is to know how it functions in linguistic practice: it is ‘scorekeeping in the language game.’ E.g.: to take a claim to be true just is to undertake a commitment to endorse the claim. But not all commitments are undertaken, in the game—some are just attributed to others. Content is articulated by keeping straight who is committed to what.

