Georgetown University Law Center
Public Law & Legal Theory Working Paper Series
Business, Economics and Regulatory Policy Research Paper No. 762385
Working Paper No. 762385

JOHN MIKHAIL

Aspects of the Theory of Moral Cognition: Investigating Intuitive Knowledge of the Prohibition of Intentional Battery and the Principle of Double Effect

This paper can be downloaded without charge from the Social Science Research Network Electronic Paper Collection at: http://ssrn.com/abstract=762385
© John Mikhail, 2002. All rights reserved.
Aspects of the Theory of Moral Cognition: Investigating Intuitive Knowledge of the Prohibition of Intentional Battery
and the Principle of Double Effect
John Mikhail 1
Abstract: Where do our moral intuitions come from? Are they innate? Does the brain contain a module specialized for moral judgment? Questions like these have been asked in one form or another for centuries. In this paper, we take them up again, with the aim of clarifying them and developing a specific proposal for how they can be empirically investigated. The paper presents data from six trolley problem studies of over five hundred individuals, including one group of Chinese adults and one group of American children, which suggest that adults and children ages 8–12 rely on intuitive or unconscious knowledge of specific moral principles to determine the permissibility of actions that require harming one person in order to prevent harm to others. Significantly, the knowledge in question appears to be merely tacit: when asked to explain or justify their judgments, experimental subjects were consistently incapable of articulating the operative principles on which their judgments appear to have been based. We explain these findings with reference to an analogy to human linguistic competence. Just as normal persons are typically unaware of the principles guiding their linguistic intuitions, so too are they often unaware of the principles guiding their moral intuitions. These studies pave the way for future research by raising the possibility that specific poverty of the stimulus arguments can be formulated in the moral domain. Differences between our approach to moral cognition and those of Piaget (1932), Kohlberg (1981), and Greene et al. (2001) are also discussed.
1. Introduction
Where do our moral intuitions come from? Are they innate? Does the brain contain a
module specialized for moral judgment? Does the human genetic program contain instructions
for the acquisition of a sense of justice or moral sense? Questions like these have been asked in
one form or another for centuries. In this paper we take them up again, with the aim of clarifying
them and developing a specific proposal for how they can be empirically investigated.
In Section 1, we summarize our approach to the theory of moral cognition and explain
some basic elements of our theoretical framework. We also introduce examples of the
perceptual stimuli used in our research and discuss some of the properties of the moral intuitions
they elicit. In Sections 2–7, we present the results of six trolley problem studies designed to
investigate the moral competence of adults and of children ages 8–12; in particular, their intuitive
or unconscious knowledge of the prohibition of intentional battery and the principle of double
effect. In Section 8, we provide a general discussion of our findings and contrast our approach to
moral cognition with those of Piaget (1932/1965), Kohlberg (1981), and Greene, Sommerville,
Nystrom, Darley & Cohen (2001). Section 9 is an Appendix containing both the stimulus
materials used in our experiments and our subjects’ responses to them.
1.1 Theoretical Framework
Like many theorists, we begin from the assumption that the theory of moral cognition
may be usefully modeled on aspects of the theory of linguistic competence (see, e.g., Chomsky,
1978; Cosmides & Tooby, 1994; Dwyer, 1999; Goldman, 1993; Harman, 2000; Mahlmann,
1999; Mikhail, 2000; Mikhail, Sorrentino & Spelke, 1998; Rawls, 1971; Stich, 1993). Our
research is thus organized, in the first instance, around three questions, close analogues of the
fundamental questions in Chomsky’s (1986) framework for the investigation of human language.
(1) (a) What constitutes moral knowledge?
    (b) How is moral knowledge acquired?
    (c) How is moral knowledge put to use?
A brief overview of some of the concepts and terminology we use to clarify these
questions may be helpful. In our framework, the answer to (1a) is given by a particular moral
grammar or theory of moral competence: a theory of the mind/brain of a person who possesses a
system of moral knowledge, or what might be referred to informally as a “moral faculty,” “moral
sense” or “conscience.” The answer to (1b) is given by Universal Moral Grammar (UMG): a
theory of the initial state of the moral faculty—which, in keeping with conventional assumptions
of modularity (see, e.g., Fodor, 1983; Gazzaniga, 1992; Gazzaniga, Ivry & Mangun, 1998;
Pinker, 1997), we provisionally assume to be a distinct subsystem of the mind/brain—along with
an account of how the properties UMG postulates interact with experience to yield a mature
system of moral knowledge. The answer to (1c) is given by a theory of moral performance: a
theory of how moral knowledge enters into the actual representation and evaluation of human
acts and institutional arrangements, as well as other forms of actual conduct (see, e.g., Dwyer,
1999; Mikhail, 2000; compare Rawls, 1971; Nozick, 1968).
Following Chomsky (1965), we use the terms “observational adequacy,” “descriptive
adequacy” and “explanatory adequacy” to refer to increasing levels of empirical success a theory
of moral cognition might achieve. A moral theory is observationally adequate with respect to a
given set of moral judgments to the extent that it provides a correct description of those judgments
in some manner or other, for example, by listing them or by explicitly stating a set of principles
from which they can be derived. A moral theory is descriptively adequate with respect to the
mature individual’s moral competence to the extent that it correctly describes that system, in other
words, to the extent it provides a correct answer to (1a). Finally, a moral theory meets the
condition of explanatory adequacy to the extent it correctly describes the initial state of the moral
faculty and correctly explains how the properties of the initial state it postulates interact with
experience to yield a mature system of moral competence; in other words, to the extent that it
provides a correct answer to (1b) (Mikhail, 2000). 2
Unlike Kohlberg (1981), we distinguish sharply between an individual’s operative moral
principles (those principles actually operative in her exercise of moral judgment) and her express
principles (those statements she makes in the attempt to describe, explain, or justify her
judgments). We make no assumption that the normal individual is aware of the operative
principles which constitute her moral knowledge, or that she can become aware of them through
introspection, or that her statements about them are necessarily accurate. On the contrary, we
hypothesize that just as normal persons are typically unaware of the principles guiding their
linguistic or visual intuitions, so too are they often unaware of the principles guiding their moral
intuitions. In any event, the important point is that, as with language or vision, the theory of moral
cognition must attempt to specify what the properties of moral competence actually are, not what a
person may report about them (Haidt, 2001; Mikhail, 2000; Mikhail, Sorrentino & Spelke, 1998).
Finally, we follow Chomsky (1995), Lewontin (1990), Marr (1982), and other
commentators in assuming that the problems of descriptive and explanatory adequacy possess a
certain logical and methodological priority over more complicated inquiries into the neurological
and evolutionary foundations of moral cognition and behavior. Hence we carefully distinguish
(1a)–(1c) from two further questions a complete theory of moral cognition must answer:

(1) (d) How is moral knowledge physically realized in the brain?
    (e) How did moral knowledge evolve in the species?
Although many researchers have addressed questions like these, their efforts seem at this
juncture to be somewhat premature. Just as our ability to ask well-focused questions about the
evolution and physical bases of language depends on solving the problems of descriptive and
explanatory adequacy in the linguistic domain (Chomsky, 1995; Hauser, Chomsky & Fitch,
2002), so too is our understanding of (1d) and (1e) advanced by achieving reasonably correct
solutions to questions like (1a) and (1b) in the moral domain. Put simply, we cannot profitably
ask how moral knowledge evolved in the species or where it resides in the brain until what
constitutes moral knowledge and how it is acquired are better understood.
1.2 Perceptual Stimuli and Perceptual Model
Research in the Piagetian tradition has attempted to answer questions like (1a) and (1b)
by investigating the developing child’s mental representations of the “subjective” and
“objective” elements of moral judgment, the former consisting of the goals and intentions of an
action, the latter consisting of an action’s effects and material consequences. In Piaget’s
(1932/1965) original studies, children were found to base their moral judgments on mental
representations of effects, not intentions, until around age nine. More recently, many
investigators have suggested that these findings were an artifact of the methods and assessment
procedures Piaget employed. Some researchers (e.g., Baird, 2001; Berndt & Berndt, 1975;
Costanzo, Coie, Grumet & Farnill, 1973; Lillard & Flavell, 1990; Nelson, 1980) have
discovered that children as young as three use information about motives and intentions when
making moral judgments, if that information is made explicit and salient. Moreover, a
considerable body of research on infant cognition (e.g., Gergely, Nadasdy, Csibra & Biro, 1995;
Johnson, 2000; Meltzoff, 1995; Woodward, Sommerville & Guajardo, 2001) suggests that even
young infants are predisposed to interpret the actions of animate agents in terms of their goals
and intentions.
Our research seeks to build on these prior studies by investigating how experimental
subjects reconstruct and utilize information about intentions and effects when evaluating
“morally complex acts” – that is, acts and omissions that are composed of multiple intentions
and which generate both good and bad effects. To illustrate, consider the following examples of
the so-called “trolley problem” and related thought experiments invented by Foot (1967) and
Thomson (1985).
The Trolley Problem

Charlie is driving a train when the brakes fail. Ahead, five people are working on the track with their backs turned. Fortunately, Charlie can switch to a side track if he acts at once. Unfortunately, there is also someone on that track with his back turned. If Charlie switches his train to the side track, he will kill one person. If Charlie does not switch his train, he will kill five people.
Is it morally permissible for Charlie to switch his train to the side track?
The Transplant Problem

Dr. Brown has five patients in the hospital who are dying. Each patient needs a new organ in order to survive. One patient needs a new heart. Two patients need a new kidney. And two more patients need a new lung. Dr. Brown can save all five patients if he takes a single healthy person and removes her heart, kidneys, and lungs to give to these five patients. Just such a healthy person is in Room 306. She is in the hospital for routine tests. Having seen her test results, Dr. Brown knows that she is perfectly healthy and of the right tissue compatibility. If Dr. Brown cuts up the person in Room 306 and gives her organs to the other five patients, he will save the other five patients, but kill the person in Room 306 in the process. If Dr. Brown does not cut up the person in Room 306, the other five patients will die.
Is it morally permissible for Dr. Brown to cut up the person in Room 306?
The Bystander Problem

Edward is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Edward sees what has happened: the train driver saw five workmen ahead on the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men; the banks are so steep that they will not be able to get off the track in time. Fortunately, Edward is standing next to a switch, which he can throw, that will turn the train onto a side track. Unfortunately, there is one person standing on the side track with his back turned. Edward can throw the switch, killing the one; or he can refrain from doing this, letting the five die.
Is it morally permissible for Edward to throw the switch?
The Footbridge Problem

Nancy is taking her daily walk near the train tracks when she notices that the train that is approaching is out of control. Five men are walking across the tracks. The train is moving so fast that they will not be able to get off the track in time. Nancy is standing next to a man, whom she can throw in front of the train, thereby preventing it from killing the men. Nancy can throw the man, killing him but saving the five men; or she can refrain from doing this, letting the five die.
Is it morally permissible for Nancy to throw the man?
As we discuss below, when experimental subjects were presented with these scenarios,
they judged Charlie’s turning the train in The Trolley Problem to be permissible, Dr. Brown’s
cutting up the patient in the Transplant Problem to be impermissible, Edward’s throwing the
switch in the Bystander Problem to be permissible, and Nancy’s throwing the man in the
Footbridge Problem to be impermissible (Table 1). These responses confront us with a
potentially surprising contrast between the Trolley and Bystander Problems, on the one hand,
and the Transplant and Footbridge Problems, on the other. In the former problems, saving five
people at the cost of killing one person is thought to be permissible. In the latter problems, by
contrast, saving five at the cost of killing one is held to be impermissible.
Table 1: Moral Intuitions of Trolley, Transplant, Bystander, and Footbridge Problems

Problem      Action                              Good Effect          Bad Effect   Deontic Status
Trolley      Charlie’s turning the train         Preventing 5 deaths  1 death      Permissible
Transplant   Dr. Brown’s cutting up the patient  Preventing 5 deaths  1 death      Impermissible
Bystander    Edward’s throwing the switch        Preventing 5 deaths  1 death      Permissible
Footbridge   Nancy’s throwing the man            Preventing 5 deaths  1 death      Impermissible
These facts lead us to speculate about the cognitive mechanisms the mind employs in
responding to these four scenarios. In the first instance, they lead us to ask the following
question: what are the operative principles of moral competence that are responsible for these
divergent responses? The problem is more difficult than it may seem at first. On the one hand,
comparatively simple deontological and consequentialist moral principles (e.g., “If an act causes
death, then it is wrong,” “If the consequences of an act are better than the consequences of any
available alternative, then it is required,” etc.) are incapable of explaining the pattern of
intuitions elicited by these problems. For example, a simple deontological principle forbidding
all killing would generate the intuition that Charlie’s switching tracks in the Trolley Problem and
Edward’s switching tracks in the Bystander Problem are impermissible. But these actions are
judged to be permissible. Likewise, a simple utilitarian principle requiring agents to perform
actions with the best foreseeable consequences would presumably generate the intuition that Dr.
Brown’s cutting up the patient in the Transplant Problem and Nancy’s throwing the man in the
Footbridge Problem are obligatory, or at least permissible; yet these actions are judged to be
impermissible.
On the other hand, conditional principles whose antecedents simply restate those action
descriptions found in the stimulus (e.g., “If an act is of the type ‘throwing the switch,’ then it is
permissible”; “If an act is of the type ‘throwing the man,’ then it is impermissible”) are also
descriptively inadequate. This is because they lead us to make inaccurate predictions of how
these action descriptions will be evaluated when they are embedded in materially different
circumstances. For example, as we discuss below, when the costs and benefits in the Bystander
Problem are manipulated, so that an action described as “throwing the switch” will save $5
million of equipment at the cost of killing one person, individuals judge the action so described
to be impermissible. Likewise, when the circumstances of the Footbridge Problem are modified
so that the action described as “throwing the man” is presumed to involve consensual touching,
subjects judge the action to be permissible. In general, it is easy to show that the action
descriptions used in these problems are “morally neutral” (Baird, 2001; Nelson, 1980), in the
sense that the permissibility judgments they elicit are circumstance-dependent.
Since the circumstances of an action can vary along an indefinite number of dimensions (e.g., D’Arcy, 1963; Donagan, 1977; Lyons, 1965; Stone, 1964), considerations like these quickly lead us to conclude that any attempt to explain the moral intuitions elicited by these examples by means of a simple stimulus-response model is doomed from the start.
Although each of these moral intuitions is occasioned by an identifiable stimulus, how the mind
goes about interpreting these hypothetical fact patterns, and separating the actions they depict
into those that are permissible and those that are not, is not something revealed in any obvious
way by the surface properties of the stimulus itself. Instead, an intervening step between
stimulus and response must be postulated: a pattern of organization of some sort that is imposed
on the stimulus by the mind itself. Hence a simple perceptual model such as the one in Figure 1
is inadequate for explaining these moral intuitions. Instead, as is the case with language
perception (Chomsky, 1964), an adequate perceptual model must, at a minimum, look more like
the one in Figure 2.
Fig. 1: Simple Perceptual Model for Moral Judgment

[Input → ? → perceptual response: PERMISSIBLE or IMPERMISSIBLE.]

Fig. 2: Expanded Perceptual Model for Moral Judgment

[Stimulus (fact pattern) → conversion rules → structural description (unconscious mental representation) → deontic rules → perceptual response (moral judgment): PERMISSIBLE or IMPERMISSIBLE.]
The expanded perceptual model in Figure 2 implies that, like grammaticality judgments,
permissibility judgments do not necessarily depend on the surface properties of an action
description, but on more fundamental properties of how that action is mentally represented. Put
differently, it suggests that the problem of descriptive adequacy in the theory of moral cognition
may be divided into at least two parts: (a) the problem of determining the nature of the
computational principles (i.e., “deontic rules”) operative in the exercise of moral judgment, and
(b) the problem of determining the representational structures (i.e., “structural descriptions”)
over which those computational operations are defined.
What are the properties of these intervening mental representations? In our view, it
seems reasonable to suppose that morally cognizable fact patterns are mentally represented in
terms of abstract categories like act, consequence, and circumstance; agency, motive, and
intention; proximate and remote causes; and other familiar concepts that are the stock in trade of
philosophers, lawyers, and jurists (Mikhail, 2000; see also Donagan, 1977; Sidgwick, 1907). But
which specific concepts does the system of moral cognition in fact use? In what manner, i.e.,
according to what principles or rules, does it use them? Answers to questions like these, if
available, would begin to solve the problem of descriptive adequacy.
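The two-part division just described can be made concrete with a minimal computational sketch. The sketch is ours, not the paper’s: the function names, the toy lookup table, and the single deontic rule it applies (battery as a means is impermissible) are illustrative assumptions rather than the authors’ model; the point is only that deontic rules operate over structural descriptions, not over the surface text of the stimulus.

```python
# Illustrative sketch (hypothetical names): the expanded perceptual model
# factors moral judgment into conversion rules (stimulus -> structural
# description) and deontic rules (structural description -> judgment).

from dataclasses import dataclass

@dataclass
class StructuralDescription:
    """Unconscious mental representation of an act."""
    means: list          # acts performed as means to the end
    end: str             # the agent's goal
    side_effects: list   # foreseen but unintended effects

def conversion_rules(fact_pattern: str) -> StructuralDescription:
    """Stub: a real model would parse the stimulus into an act tree."""
    lookup = {
        "bystander": StructuralDescription(
            means=["throw switch", "turn train"],
            end="prevent train from killing the five",
            side_effects=["battery of the one", "homicide of the one"]),
        "footbridge": StructuralDescription(
            means=["throw man", "battery of the one"],
            end="prevent train from killing the five",
            side_effects=["homicide of the one"]),
    }
    return lookup[fact_pattern]

def deontic_rules(sd: StructuralDescription) -> str:
    """Toy deontic rule: battery as a means is impermissible;
    battery as a mere side effect is not (cf. double effect)."""
    if any("battery" in m for m in sd.means):
        return "IMPERMISSIBLE"
    return "PERMISSIBLE"

def moral_judgment(fact_pattern: str) -> str:
    return deontic_rules(conversion_rules(fact_pattern))

print(moral_judgment("bystander"))   # PERMISSIBLE
print(moral_judgment("footbridge"))  # IMPERMISSIBLE
```

Note that the identical cost-benefit profile (one death against five) yields opposite judgments depending solely on where the battery sits in the structural description, which is the pattern in Table 1.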
1.3 Our Hypothesis
Our hypothesis is that the moral intuitions generated by the Trolley, Transplant, Bystander,
and Footbridge problems and structurally similar thought experiments (henceforth, “trolley
problems”) can be best explained by postulating intuitive knowledge of specific moral principles,
including the prohibition of intentional battery and the principle of double effect. The former is
a familiar principle of both common morality and the common law proscribing acts of
unpermitted, unprivileged bodily contact, that is, of touching without consent (Prosser, 1941;
Shapo, 2003). The latter is a complex principle of justification, narrower in scope than the
traditional necessity or “choice of evils” defense, which in its standard formulation holds that an
otherwise prohibited action may be permissible if the act itself is not wrong, the good but not the
bad effects are intended, the good effects outweigh the bad effects, and no morally preferable
alternative is available (Mikhail, 2000; see also Fischer & Ravizza, 1992). Both of these
principles require clarification, but taken together and suitably elaborated they can be invoked to
explain the relevant pattern of intuitions in a relatively simple and straightforward manner. The
key structural difference between the two sets of examples is that, in the Transplant and Footbridge
problems, the agent commits a series of distinct trespasses prior to and as a means of achieving
his good end, whereas in the Trolley and Bystander problems, these violations are subsequent
and foreseen side effects. Figures 3 and 4 illustrate this difference in the case of the Footbridge
and Bystander problems.
Fig. 3: Mental Representation of Footbridge Problem

[Act tree. Means: D’s throwing the man at t(0); D’s committing battery at t(0); D’s causing the train to hit the man at t(+n); D’s committing battery at t(+n). End: D’s preventing the train from killing the men at t(+n+o). Side effects: D’s killing the man at t(+n+p); D’s committing homicide at t(+n+p).]
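The standard formulation of the principle of double effect quoted above can be read as a decision procedure: a conjunction of conditions, each of which must hold for the otherwise prohibited act to be permissible. The following encoding is our own illustrative sketch (all parameter names are hypothetical), not the authors’ formalization.

```python
# Hedged sketch of the principle of double effect's standard formulation,
# as stated in the text: an otherwise prohibited action may be permissible
# if (i) the act itself is not wrong, (ii) the good but not the bad effects
# are intended, (iii) the good effects outweigh the bad effects, and
# (iv) no morally preferable alternative is available.

def double_effect_permits(act_itself_wrong: bool,
                          good_effects_intended: bool,
                          bad_effects_intended: bool,
                          good_outweighs_bad: bool,
                          better_alternative_exists: bool) -> bool:
    return (not act_itself_wrong
            and good_effects_intended
            and not bad_effects_intended
            and good_outweighs_bad
            and not better_alternative_exists)

# Bystander: turning the train is not wrong in itself; the one death is
# foreseen but not intended; five lives outweigh one; no better option.
print(double_effect_permits(False, True, False, True, False))  # True

# Footbridge: the bad effect (the battery of the man) is intended as a
# means, so the justification fails even though the sums are identical.
print(double_effect_permits(False, True, True, True, False))   # False
```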
Our computational hypothesis holds that when people encounter the Footbridge and
Bystander problems, they spontaneously compute unconscious representations like those in
Figures 3 and 4. 3 Note that in addition to explaining the relevant intuitions, this hypothesis has
further testable implications. For example, we can investigate the structural properties of the
underlying representations by asking subjects to evaluate certain probative descriptions of the
relevant actions. Descriptions using the word “by” to connect individual nodes of the tree in the
downward direction (e.g., “D turned the train by throwing the switch,” “D killed the man by
turning the train”) will be deemed acceptable; by contrast, causal reversals using “by” to connect
nodes in the upward direction (“D threw the switch by turning the train,” “D turned the train by
killing the man”) will be deemed unacceptable. Likewise, descriptions using the phrase “in order
to” to connect nodes in the upward direction along the vertical chain of means and ends (“D threw the switch in order to turn the train”) will be deemed acceptable. By contrast, descriptions of this type linking means with side effects (“D threw the switch in order to kill the man”) will be deemed unacceptable. In short, there is an implicit geometry to these representations, which an adequate theory can and must account for.

Fig. 4: Mental Representation of Bystander Problem

[Act tree. Means: D’s throwing the switch at t(0); D’s turning the train at t(+n). End: D’s preventing the train from killing the men at t(+n). Side effects: D’s causing the train to hit the man at t(+n+o); D’s committing battery at t(+n+o); D’s killing the man at t(+n+o+p); D’s committing homicide at t(+n+o+p).]
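This implicit geometry can be illustrated with a toy encoding. The sketch is ours: the chain below follows the means-end nodes of the Bystander representation (Figure 4), and the two predicates are illustrative assumptions about how “by” and “in order to” descriptions track direction along that chain, not a claim about the authors’ formal machinery.

```python
# Illustrative sketch of the probative-description test described above.
# The means-end chain of the Bystander problem is encoded as an ordered
# list from first means to final end; "by" should read acceptably only
# downward (a later act by an earlier act), and "in order to" only upward.

means_end_chain = ["throw the switch", "turn the train",
                   "prevent the train from killing the men"]

def acceptable_by(later: str, earlier: str) -> bool:
    """'D <later> by <earlier>': acceptable iff earlier precedes later."""
    return means_end_chain.index(earlier) < means_end_chain.index(later)

def acceptable_in_order_to(earlier: str, later: str) -> bool:
    """'D <earlier> in order to <later>': acceptable iff later follows."""
    return means_end_chain.index(earlier) < means_end_chain.index(later)

# "D turned the train by throwing the switch" -- downward, acceptable.
print(acceptable_by("turn the train", "throw the switch"))           # True
# "D threw the switch by turning the train" -- causal reversal.
print(acceptable_by("throw the switch", "turn the train"))           # False
# "D threw the switch in order to turn the train" -- upward, acceptable.
print(acceptable_in_order_to("throw the switch", "turn the train"))  # True
```

A fuller model would also mark side-effect nodes off the main chain, so that “D threw the switch in order to kill the man” comes out unacceptable, as the text predicts.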
Our hypothesis is interesting and controversial for several reasons. First, while many
theorists have suggested that the principle of double effect may be part of a descriptively
adequate theory of trolley intuitions (e.g., Harman, 1977, 2000), and of human morality
generally (e.g., Nagel, 1986; Quinn, 1993), no prior experimental studies have directly tested this
assumption. The experiments by Petrinovich and his colleagues (Petrinovich & O’Neill, 1996;
Petrinovich, O’Neill & Jorgensen, 1993), which utilize trolley problems, do not adequately
clarify this issue, because of their focus on behavioral predictions (e.g., asking participants to
answer the question “What would you do?”) rather than on deontic judgments per se (e.g., asking
participants to answer the question “Is X morally permissible?”). Likewise, Greene et al. (2001),
who also use trolley problems as probes, appear to leave this issue unresolved (see §8.2.3).
Second, our hypothesis is significant because, if it is true, it implies that the mental
operations involved in the exercise of moral judgment are more complex than is commonly
thought. For example, for the principle of double effect to be operative in its standard
formulation, adults and children must possess a list of intrinsically wrong acts, a set of rules for
generating morally cognizable act-representations, and a calculus of some sort for computing—
and comparing the probabilities of—an action’s good and bad effects. They must also have the
cognitive resources to distinguish the “act itself” from its effects and further consequences, to
distinguish the act’s “foreseen effects” from its “intended effects,” and, more generally, to
differentiate the act’s causal and intentional properties from those of its alternatives. Further,
they must compute act-representations in terms of properties like ends, means, and side effects,
even though the stimulus contains no direct evidence of these properties. In short, our
hypothesis implies that “ordinary” people—not just trained lawyers or philosophers—possess a
complex sense of justice that incorporates subtle elements of a fully articulated legal code,
including abstract theories of causation and intention.
Finally, our hypothesis raises interesting and novel questions for the theory of moral
development. Specifically, it leads us to ask whether children are explicitly taught the principle
of double effect, and if not, whether the principle or some variant of it is in some sense innate.
As Harman (2000) explains, this question naturally arises as soon as one settles on an
explanation of the structure of our moral intuitions that makes reference to this principle. “An
ordinary person was never taught the principle of double effect,” Harman observes, and “it is
unclear how such a principle might have been acquired by the examples available to the ordinary
person. This suggests that [it] is built into . . . morality ahead of time” (Harman, 2000, p. 225).
Similar reasoning may be thought to apply to the prohibition of intentional battery, at least as
that prohibition is defined and utilized here. 4 On reflection, it seems doubtful that children are
affirmatively taught to generate the specific representations presupposed by this principle to any
significant extent. We thus seem faced with the possibility that certain moral principles emerge
and become operative in the exercise of moral judgment that are neither explicitly taught, nor
derivable in any obvious way from the data of sensory experience. In short, we appear
confronted with an example of what Chomsky calls the phenomenon of the “poverty of the
stimulus” in the moral domain (Dwyer, 1999; Mikhail, 2000; compare Chomsky, 1986).
Figure 5: Acquisition Models for Language and Morality

[Two parallel acquisition models. Language: child’s linguistic data → UG → linguistic grammar (English, Japanese, Zapotec, Malagasy, Arabic, …). Morality: child’s moral data → UMG → moral grammar (how much diversity?).]
The “argument from the poverty of the moral stimulus” (Mikhail, 2000) can be depicted
graphically by means of an acquisition model similar to the one Chomsky (1964) initially proposed
in the case of language (Figure 5). In the linguistic version of this model, Universal Grammar
(UG) “may be regarded as a theory of innate mechanisms, an underlying biological matrix that
provides a framework within which the growth of language proceeds,” and proposed principles of
UG “may be regarded as an abstract partial specification of the genetic program that enables the
child to interpret certain events as linguistic experience and to construct a system of rules and
principles on the basis of that experience” (Chomsky, 1980, p. 187). Likewise, in the case of
moral development, Universal Moral Grammar (UMG) may be regarded as a theory of innate
mechanisms that provides the basic framework in which the development of moral competence
unfolds, and specific principles of UMG may be regarded as a partial characterization of the innate
function that maps the developing child’s relevant moral experience (her “moral data”) into the
mature state of her acquired moral competence (i.e., her “moral grammar”).
The linguistic grammars children acquire are hopelessly underdetermined by the data
available to them as language learners; linguists therefore postulate a significant amount of innate
knowledge to fill this gap (e.g., Baker, 2001; Pinker, 1994). Further, because every normal human
child can and will learn any of the world’s natural languages simply by being placed in an
appropriate environment, UG must be rich and specific enough to get the child over the learning
hump, but not so specific as to preclude her ability to acquire every human language (Chomsky,
1986). Turning to UMG, it is unclear whether a similar situation and a similar tension between
descriptive and explanatory adequacy obtains. Nevertheless, the acquisition model we have
sketched, though abstract, can be made more concrete by considering the specific example of
trolley intuitions. If a computational moral grammar does in fact enter into the best explanation of
these intuitions, then two further questions arise within the framework of this model: First, what
are the properties of the moral grammars that people do in fact acquire, and how diverse are they?
Second, what informational gaps, if any, can be detected between the inputs and outputs of the
model? That is, what if any principles of moral grammar are acquired for which the environment
contains little or no evidence? According to the argument from the poverty of the moral stimulus,
if specific principles emerge and become operative in the course of normal moral development,
but the acquisition of these principles cannot be explained on the basis of the child’s moral data,
then the best explanation of how children acquire these principles may be that they are innate, in
Chomsky’s dispositional sense (Chomsky, 1986; see also Baker, 2001; Dwyer, 1999; Mikhail,
2002; Pinker, 1994; Spelke, 1998).
2. Experiment 1
Having introduced some elements of our theoretical framework, we turn directly to a
discussion and analysis of our experimental findings. At the outset of our investigations, we
were interested in a variety of questions that might be asked about thought experiments like the
trolley problems and the moral intuitions they elicit, including the following: First, are these
intuitions widely shared? Are they shared across familiar demographic categories like gender,
race, nationality, age, culture, religion, or level of formal education? Second, what are the
operative principles? How precisely can we characterize the relevant mental operations and to
what extent are they open to conscious introspection? Third, how are the operative principles
learned or acquired? What might examples like these eventually tell us about moral
development and the acquisition of the moral sense?
Our first study attempted to address only a subset of these questions, including (1) whether and to what extent these intuitions are widely shared; (2) what the operative principles are; and (3) whether the operative principles are open to conscious introspection.
2.1 Method
2.1.1 Participants
Participants were 40 adult volunteers from the M.I.T. community between the ages of 18 and 35. The group consisted of 19 women and 21 men.
2.1.2 Stimuli and Procedure
Eight scenarios were used, all of which were adapted from Foot (1967), Thomson (1986),
and Harman (1977) (see §9 for the complete text of these scenarios; see also Mikhail, 2000). In
all eight scenarios, an agent must choose whether to perform an action that will result in one
person being killed and five other persons, who would otherwise die, being saved.
The scenarios were divided according to our hypothesis into two groups. Four scenarios,
which were modeled on the Transplant and Footbridge Problems, described a choice between (a)
committing an intentional battery in order to prevent five other people from dying, knowing that the battery will also constitute a foreseeable but non-intentional homicide, and (b) refraining from doing so, thereby letting the five die. Four other scenarios, which were modeled on the Trolley and Bystander Problems, described a choice between (a) doing something in order to prevent five people from dying, knowing that the action will constitute a foreseeable but non-intentional battery and a foreseeable but non-intentional homicide, and (b) refraining from doing so, thereby letting the five die.
The morally salient difference between the two sets of cases, in other words, concerned
the type of battery embedded in the agent’s action plan. In the first group of scenarios, the
battery was intentional, embedded within the agent’s action plan as a means (henceforth
“Intentional Battery”). In the second group, the battery was foreseeable (but not intentional),
embedded within the agent’s action plan as a side effect (henceforth “Foreseeable Battery”).
Each participant received a written questionnaire containing one scenario. The
participant was first instructed to read the scenario and to judge whether or not the proposed
action it described was “morally permissible.” The participant was then asked on a separate page
of the questionnaire to provide reasons explaining or justifying his or her response. Twenty
participants were given an Intentional Battery scenario. The other twenty participants were
given a Foreseeable Battery scenario. The assignment of participants to scenario type was
random.
2.2 Results
2.2.1 Judgments
The main results of Experiment 1 are presented in Figure 6. 2 of 20 participants in the Intentional Battery condition judged the action constituting intentional battery to be permissible. By contrast, 19 of 20 participants in the Foreseeable Battery condition judged the action constituting foreseeable battery to be permissible. This difference is significant: χ²(1, N=40) = 28.96, p < .001, suggesting that the scenarios evoke different action representations whose properties are morally salient. 5
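The reported statistic can be checked directly from the counts above. The following is a minimal sketch in Python (the helper `chi_square_2x2` is ours, not part of the study) that computes the Pearson chi-square statistic for a 2×2 contingency table without continuity correction:

```python
# Pearson chi-square statistic for a 2x2 contingency table
# (no continuity correction), applied to the Experiment 1 counts:
# Intentional Battery: 2 permissible, 18 impermissible;
# Foreseeable Battery: 19 permissible, 1 impermissible.

def chi_square_2x2(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column factors
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

exp1 = [[2, 18],    # Intentional Battery: permissible, impermissible
        [19, 1]]    # Foreseeable Battery: permissible, impermissible
print(round(chi_square_2x2(exp1), 2))  # 28.97, matching the reported 28.96 up to rounding
```

The same formula applied to a perfectly discriminating 2×2 table yields χ² = N exactly; for example, the women's data in the Intentional Battery vs. Consensual Contact comparison reported in §3.2.3 (0 of 11 vs. 16 of 16 permissible) gives χ² = 27, as reported there.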
Male and female responses in Experiment 1 are presented in Figure 7. Of the 10 men given an Intentional Battery scenario, 2 judged the action constituting intentional battery to be permissible and 8 judged it to be impermissible. Of the 10 women given an Intentional Battery scenario, all 10 judged the action constituting intentional battery to be impermissible. Meanwhile, all 11 of the men and 8 of the 9 women who were given a Foreseeable Battery scenario judged the action constituting foreseeable battery to be permissible. These differences are also significant, χ²(1, N=19) = 15.44, p < .001 (women) and χ²(1, N=21) = 14.6, p < .001 (men), suggesting that there are no significant gender differences in the way the two types of scenario are mentally represented and morally evaluated.
[Figure 6: Moral Judgments of Two Act Types in Experiment 1 (Intentional Battery vs. Foreseeable Battery). Bar chart of the number of subjects judging each act type permissible vs. impermissible. χ²(1, N=40) = 29.0, p < .001.]
[Figure 7: Judgments of Act Types in Experiment 1 by Gender (Intentional Battery vs. Foreseeable Battery). Bar chart of permissible vs. impermissible judgments by men and women. χ²(1, N=19) = 15.44, p < .001 (women); χ²(1, N=21) = 14.6, p < .001 (men).]
2.2.2 Justifications
Subjects’ expressed principles—the responses they provided to justify or explain their
judgments—were also coded and analyzed. Three categories of increasing adequacy were used
to classify these responses: (1) no justification, (2) logically inadequate justification, and (3)
logically adequate justification. Responses that were left completely blank were categorized
under the heading of “no justification.” Responses that were not blank but which failed to state a
reason, rule, or principle—or to identify any feature whatsoever of the given scenario—that
could in principle generate the corresponding judgment were classified as logically inadequate
justifications. Finally, responses that did state a reason, rule, or principle, or did otherwise
identify at least one feature of the given scenario—even one that was obviously immaterial,
irrelevant, arbitrary, or ad hoc—that could in principle generate the corresponding judgment
were classified as logically adequate justifications.
Utilizing this taxonomy, two researchers independently coded a subset of justifications
and achieved an interobserver reliability of 89% (n=36). One researcher then coded the
complete set of justifications collected in Experiment 1. 32.5% (13/40) of participants gave no justification, 17.5% (7/40) provided logically inadequate justifications, while only 50% (20/40)
provided logically adequate justifications. Furthermore, many of the logically adequate
justifications consisted of simple deontological or consequentialist principles that were evidently
incapable of generating the conflicting pattern of intuitions in Experiment 1. These justifications
thus failed the test of observational adequacy in the sense defined in §1.1. These findings
together with the data on expressed justifications gathered in our remaining studies are discussed
again in §8.
2.3 Discussion
Experiment 1 was designed to achieve several different objectives. First, it was meant to
investigate a set of untested empirical claims implicit in the philosophical and legal literature
about how the trolley problems are mentally represented and morally evaluated. In their
accounts of trolley problems, philosophers and legal theorists often take for granted the deontic
status readers will assign to a given action sequence (e.g., Fischer & Ravizza, 1992; Katz, 1987;
Thomson, 1985). Prior to our studies, however, no controlled experiments had directly tested
these assumptions or attempted to extend them to broader populations. Instead, prior
experimental research using trolley problems as probes (Petrinovich & O’Neill, 1996;
Petrinovich et al., 1993) had left these issues largely unresolved. As we predicted, conventional
assumptions about the deontic intuitions elicited by these problems were confirmed, and the
intuitions themselves were widely shared.
Second, Experiment 1 was designed to investigate whether the participants in our
experiments could, when asked, provide coherent and well-articulated justifications for their judgments about individual trolley problems. Based on informal observation, as well as theory-dependent considerations arising from the linguistic analogy—in particular, the inaccessible status of principles of grammar—we predicted that many or most of our subjects would be incapable of doing so. This prediction also held: even under an extremely liberal coding scheme, according to
which a justification was deemed logically adequate if it picked out at least one distinguishing
feature of the given scenario, even one that was obviously immaterial, irrelevant, arbitrary, or ad
hoc, that could in principle “serve as part of the premises of an argument that arrives at the
matching judgments” (Rawls, 1971, p. 46), only 50% of the participants in our study provided
logically adequate justifications for their judgments. Additionally, as indicated, many of these
justifications were inadequate to account for the pattern of intuitions generated in Experiment 1
and thus failed the test of observational adequacy in the sense defined in §1.1. This suggested that a within-subject design would elicit considerably fewer logically adequate justifications than a between-subject design, because in the former condition subjects would be required to reconcile and explain two contrary intuitions by means of an overarching rationale or principle. On this basis, we decided to utilize a within-subject design in Experiment 2 (see §3).
A further objective of Experiment 1 was to investigate our hypothesis that the moral
intuitions generated by the trolley problems could be explained by postulating intuitive
knowledge of the prohibition of intentional battery and the principle of double effect. As
interpreted here, the combined effect of mechanically applying these principles to these scenarios
would be to permit throwing the switch and turning the train in the Trolley and Bystander
conditions but to prohibit cutting up the patient and throwing the man in the Transplant and
Footbridge conditions. This is how participants did, in fact, respond in these conditions, thus
confirming to a limited extent our hypothesis about operative principles.
Finally, Experiment 1 was also meant to begin the process of investigating the potential
universality of a certain class of moral intuitions, such as those elicited by the trolley problems,
by determining whether one sample of adult men and women would share intuitive responses to
these problems. Again, based upon informal observation, as well as various theory-dependent
considerations (Mikhail, 2000), we predicted that there would be no statistically significant
gender differences. This prediction also held—a finding that is at least potentially in conflict
with the claims of Gilligan (1982) and others that men and women typically differ in how they
evaluate moral problems.
In sum, the findings of Experiment 1 constitute evidence that one component of moral
knowledge, deontic knowledge, consists of a system of rules or principles (a “moral grammar”)
capable of generating and relating mental representations of various elements of an action plan
(Mikhail et al., 1998). Our findings also constitute evidence that the moral grammar contains
principles capable of distinguishing intentional battery (battery embedded within an agent’s
action plan as a means) and foreseeable battery (battery embedded within an agent’s action plan
as a side effect), as well as a further principle, such as the principle of double effect or some
comparably complex ordering principle (Donagan, 1977), prohibiting intentional battery but
permitting foreseeable battery in the context of cases of necessity such as the trolley problems.
Because subjects displayed only a limited ability to provide adequate justifications of their
intuitions, Experiment 1 also implies that, as is the case with linguistic intuitions, the principles
generating moral intuitions are at least partly inaccessible to conscious introspection. Finally,
our findings also suggest that at least some moral intuitions are widely shared, irrespective of
gender.
3. Experiment 2
In Experiment 1, we discovered an apparent difference between the way intentional
battery and foreseeable battery are mentally represented and morally evaluated, at least in the
context of cases of necessity such as the trolley problems. We also discovered that the moral
competence of both men and women appears to consist, at least in part, of intuitive or
unconscious knowledge of the prohibition of intentional battery and the principle of double
effect. Experiment 2 was designed to bring additional evidence to bear on these hypotheses, in
three different ways.
The first way was to investigate the concept of battery that was used in our analysis of
Experiment 1. In Experiment 1, we drew on established legal doctrine in assuming that battery
could in effect be defined as “unpermitted or unprivileged contact with a person,” that is, as
contact without consent (Prosser, 1941; Shapo, 2003). Moreover, we followed the traditional
law of tort in assuming that the notion of unprivileged contact “extends to any part of the body,
or to anything which is attached to it” and includes any touching of one person by another or by
“any substance put in motion by him” (Hilliard, 1859). In Experiment 2, we investigated this
concept of battery by modifying one of the Intentional Battery scenarios used in Experiment 1,
so that an action described as “throwing the man,” which previously constituted battery, no
longer did so, because under the modified circumstances the action would likely be represented
as consensual. We did this by constructing a scenario in which a runaway trolley threatens to kill a man walking across the tracks and the only way to save the man is to throw him out of the path of the train, thereby seriously injuring him.
The second way we extended the results of Experiment 1 was to investigate our subjects’
knowledge of the consequentialist provision of the principle of double effect. As stated in §1,
the principle of double effect is a complex principle of justification requiring, among other
things, that the intended and foreseen good effects of an action outweigh its foreseen bad effects.
Our implicit assumption in Experiment 1 was that each of the scenarios used in that experiment
was mentally represented by our subjects as satisfying that condition. In particular, we took for
granted in Experiment 1 that individuals represented preventing the deaths of five people as an
intended and foreseen good effect that outweighed the foreseen bad effect of the death of one
person.
In Experiment 2, we tested our subjects’ presumed knowledge of this consequentialist
provision of the principle of double effect directly by modifying one of the Foreseeable Battery
scenarios used in Experiment 1, so that an action described as “throwing the switch,” which previously generated intended and foreseen good effects that outweighed its foreseen bad effects, no longer did so, but instead generated the opposite result: its intended and foreseen good effects were now outweighed by its foreseen bad effects. We did this by taking one version of the standard Bystander Problem and substituting a valuable thing (“five million dollars of railroad equipment lying idle across the tracks”) for what was previously described as “five men walking across the tracks.” We reasoned that our subjects would take for granted that the moral worth of a person is greater than that of a valuable thing.
Finally, our third modification was to convert the experimental design of Experiment 1
from a between-subject design to a within-subject design. Although fully half of the participants
in Experiment 1 provided logically adequate justifications of their judgments, on inspection it
was clear that many of these justifications were unable to explain the alternating pattern of
intuitions generated in Experiment 1. For example, many of these nominally adequate responses
took the form of either simple deontological justifications (e.g., “killing is wrong”) or simple
utilitarian justifications (“five versus one”) that were manifestly incapable of explaining the
essential contrast between the Foreseeable Battery scenarios (i.e., the Trolley and Bystander
Problems), on the one hand, and the Intentional Battery scenarios (i.e., the Transplant and
Footbridge Problems), on the other. Hence in Experiment 2 we decided to examine this issue
and to begin to explore related issues such as framing effects by asking participants to provide
judgments and justifications of a pair of trolley problems presented to them simultaneously.
3.1 Method
3.1.1 Participants
Participants were 65 adult volunteers between the ages of 22 and 35, 55 of whom were
graduate students at the John F. Kennedy School of Government at Harvard University, and 10
of whom were employees of the Cable News Network in Washington, DC. There were 33
women and 30 men. Two subjects did not provide information about their gender.
3.1.2 Stimuli and Procedure
Five scenarios were used, one of which was the Trolley Problem presented in §1.2. The
other four scenarios, which were our main concern in Experiment 2, are reprinted below. Each
was derived from a template that was designed to allow for systematic control and variation in
this and future investigations (Mikhail, 2000). Differences between the last three scenarios and
the first are underlined and italicized here to make them more noticeable. Participants were
given questionnaires without these markings.
The Bystander Problem (Hank: Foreseeable Battery) Hank is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Hank sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Hank is standing next to a switch, which he can throw, that will turn the train onto a side track, thereby preventing it from killing the men. Unfortunately, there is a man standing on the side track with his back turned. Hank can throw the switch, killing him; or he can refrain from doing this, letting the five die.
Is it morally permissible for Hank to throw the switch?
The Footbridge Problem (Ian: Intentional Battery) Ian is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Ian sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men. It is moving so fast that
they will not be able to get off the track in time. Fortunately, Ian is standing next to a heavy object, which he can throw onto the track in the path of the train, thereby preventing it from killing the men. Unfortunately, the heavy object is a man, standing next to Ian with his back turned. Ian can throw the man, killing him; or he can refrain from doing this, letting the five die.
Is it morally permissible for Ian to throw the man?
The Modified Bystander Problem (Karl: Bad Effects Outweigh Good Effects) Karl is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Karl sees what has happened: the driver of the train saw five million dollars of new railroad equipment lying idle across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the equipment. It is moving so fast that it will be destroyed. Fortunately, Karl is standing next to a switch, which he can throw, that will turn the train onto a side track, thereby preventing it from destroying the equipment. Unfortunately, there is a man standing on the side track with his back turned. Karl can throw the switch, killing him; or he can refrain from doing this, letting the equipment be destroyed.
Is it morally permissible for Karl to throw the switch?
The Modified Footbridge Problem (Luke: Consensual Contact) Luke is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Luke sees what has happened: the driver of the train saw a man walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the man. It is moving so fast that he will not be able to get off the track in time. Fortunately, Luke is standing next to the man, whom he can throw off the track out of the path of the train, thereby preventing it from killing the man. Unfortunately, the man is frail and standing with his back turned. Luke can throw the man, injuring him; or he can refrain from doing this, letting the man die.
Is it morally permissible for Luke to throw the man?
In the first scenario, the agent (“Hank”) must choose whether to throw a switch in order to
prevent a runaway train from killing five people, knowing that doing so will cause the train to
run down and kill an innocent bystander (henceforth “Foreseeable Battery”). In the second
scenario, the agent (“Ian”) must choose whether to throw a man in front of a runaway train in
order to prevent the train from killing five people (henceforth “Intentional Battery”). In the third
scenario, the agent (“Karl”) must decide whether to throw a switch in order to prevent a runaway
train from destroying five million dollars of equipment, knowing that doing so will kill an
innocent bystander (henceforth “Bad Effects Outweigh Good Effects”). Finally, in the fourth
scenario, the agent (“Luke”) must decide whether to throw a man walking across the tracks out
of the path of the train, knowing that doing so will injure him (henceforth “Consensual
Contact”).
Unlike Experiment 1, which used a between-subject design, Experiment 2 employed a within-subject design. Each of the 65 participants received a written questionnaire containing
two scenarios, including one or more of the four scenarios reprinted above. 6 Participants were
first asked whether the proposed actions were “morally permissible” and then to explain or
justify their responses. 25 participants were given the Intentional Battery scenario, 25
participants were given the Foreseeable Battery scenario, 25 participants were given the Bad
Effects Outweigh Good Effects scenario, and 25 participants were given the Consensual Contact
scenario. The assignment of participants to scenario type was random.
3.2 Results
3.2.1 Intentional Battery vs. Foreseeable Battery
We present the main results of Experiment 2 in stages, beginning with the comparison
between intentional and foreseeable battery (Figure 8). 2 of 25 subjects in the Intentional Battery
condition judged the action constituting intentional battery (“throwing the man”) to be
permissible. Meanwhile, 19 of 25 subjects in the Foreseeable Battery condition judged the
action constituting foreseeable battery (“throwing the switch”) to be permissible. This difference is significant: χ²(1, N=50) = 24.4, p < .001.
[Figure 8: Moral Judgments of Two Act Types in Experiment 2 (Intentional Battery vs. Foreseeable Battery). Bar chart of the number of subjects judging each act type permissible vs. impermissible. χ²(1, N=50) = 24.4, p < .001.]
[Figure 9: Judgments of Two Act Types in Experiment 2 by Gender (Intentional Battery vs. Foreseeable Battery). Bar chart of permissible vs. impermissible judgments by men and women. χ²(1, N=21) = 11.4, p < .001 (women); χ²(1, N=27) = 13.38, p < .001 (men).]
Male and female participants who were given these two scenarios showed a similar pattern of responses (Figure 9). 2 of 14 men and 0 of 11 women in the Intentional Battery condition judged the action constituting intentional battery (“throwing the man”) to be permissible. Meanwhile, 11 of 13 men and 6 of 10 women in the Foreseeable Battery condition judged the action constituting foreseeable battery (“throwing the switch”) to be permissible. These differences are also significant, χ²(1, N=21) = 11.4, p < .001 (women) and χ²(1, N=27) = 13.38, p < .001 (men).
3.2.2 Good Effects Outweigh Bad Effects vs. Bad Effects Outweigh Good Effects
Next, we describe the results of Experiment 2 in terms of the weighing of good and bad
effects (Figure 10). As indicated, 19 of 25 subjects who were given the Hank scenario (now re-categorized as “Good Effects Outweigh Bad Effects”) judged Hank’s throwing the switch to be permissible. By contrast, none of the 25 subjects who were given the Karl scenario (“Bad Effects Outweigh Good Effects”) judged Karl’s throwing the switch to be permissible. This difference is significant: χ²(1, N=50) = 30.65, p < .001.
Men’s and women’s responses followed the same pattern (Figure 11). 11 of 13 men and 6 of 10 women judged throwing the switch to be permissible in the Good Effects Outweigh Bad Effects condition. By contrast, 0 of 14 men and 0 of 11 women judged throwing the switch to be permissible in the Bad Effects Outweigh Good Effects condition. These results are also significant, χ²(1, N=21) = 11.4, p < .001 (women) and χ²(1, N=27) = 19.99, p < .001 (men).
[Figure 10: Moral Judgments of Two Act Types in Experiment 2 (Good Effects vs. Bad Effects). Bar chart of the number of subjects judging each act type permissible vs. impermissible. χ²(1, N=50) = 30.65, p < .001.]
[Figure 11: Judgments of Act Types in Experiment 2 by Gender (Good Effects vs. Bad Effects). Bar chart of permissible vs. impermissible judgments by men and women. χ²(1, N=21) = 11.4, p < .001 (women); χ²(1, N=27) = 19.99, p < .001 (men).]
3.2.3 Intentional Battery vs. Consensual Contact
Third, we examine the comparison between intentional battery and consensual contact
(Figure 12). As indicated, 2 of 25 subjects in the Intentional Battery condition (“Ian”) judged the
action constituting intentional battery (“throwing the man”) to be permissible. By contrast, 24 of
25 subjects in the Consensual Contact condition (“Luke”) judged the action constituting
consensual contact (“throwing the man”) to be permissible. This difference is significant: χ²(1, N=50) = 38.78, p < .001.
Again, male and female responses conformed to the same pattern (Figure 13). 2 of 14
men and none of the 11 women in the Intentional Battery condition judged throwing the man to
be permissible. By contrast, all 7 of the men and all 16 of the women in the Consensual Contact
condition judged throwing the man to be permissible, χ²(1, N=27) = 27.0, p < .001 (women) and χ²(1, N=21) = 14.0, p < .001 (men). 7
3.2.4 Justifications
Finally, we turn to our subjects’ expressed justifications, that is, the responses they
provided to justify or explain their judgments. Because we utilized a within-subject design in Experiment 2, we expected that these justifications would be significantly less adequate than the corresponding justifications in Experiment 1, which relied on a between-subject design. In
addition, we predicted that subjects presented with both an Intentional Battery scenario and a
Foreseeable Battery scenario, in particular, would not be able to justify their conflicting
intuitions.
[Figure 12: Moral Judgments of Two Act Types in Experiment 2 (Intentional Battery vs. Consensual Contact). Bar chart of the number of subjects judging each act type permissible vs. impermissible. χ²(1, N=50) = 38.78, p < .001.]
[Figure 13: Judgments of Act Types in Experiment 2 by Gender (Intentional Battery vs. Consensual Contact). Bar chart of permissible vs. impermissible judgments by men and women. χ²(1, N=27) = 27.0, p < .001 (women); χ²(1, N=21) = 14.0, p < .001 (men).]
Both of these predictions were confirmed. First, 35.4% (23/65) of participants gave no
justification and 38.5% (25/65) provided logically inadequate justifications, while only 20.0%
(13/65) provided logically adequate justifications. This contrasts sharply with Experiment 1, in which 50% of subjects provided logically adequate justifications.
Second, only 10% (1/10) of those subjects who were given both the Intentional Battery and
Foreseeable Battery scenarios and who attempted to provide some sort of explanation for their
judgments provided logically adequate justifications. The other 90% (9/10) provided logically
inadequate justifications. Further, as Table 2 reveals, this group’s expressed principles were
widely divergent. Many participants merely restated the problem they were asked to resolve or
otherwise provided answers which were nonresponsive. Moreover, several participants
appeared puzzled by the nature and strength of their intuitions and by how those intuitions
shifted as a result of apparently minor and inconsequential differences in the relevant action
descriptions.
3.3 Discussion
The results of Experiment 2 corroborate and extend those of Experiment 1. First, they
lend further support to the hypothesis that both men and women possess intuitive or unconscious
knowledge of the prohibition of intentional battery and the principle of double effect. By
imputing knowledge of these principles to our subjects, we can explain and predict their moral
intuitions. Specifically, we can explain why their intuitions flip so predictably when the standard
Bystander Problem is modified so that the costs of throwing the switch outweigh its benefits and
the standard Footbridge Problem is modified so that throwing the man no longer constitutes
intentional battery.
Table 2: Justifications for the Bystander and Footbridge Pair of Problems in Experiment 2

(1) Bystander (Foreseeable Battery): Permissible; Footbridge (Intentional Battery): Impermissible. "Very odd. I don't know why I chose differently in the second scenario. The end result is the same. I just chose my gut response, and now am intrigued with how to reconcile them."

(2) Bystander: Permissible; Footbridge: Impermissible. "It's amazing that I would not throw a person but throw a switch to kill a person. I really wish there was more I could do for the 1 guy on the other track."

(3) Bystander: Permissible; Footbridge: Impermissible. "In either case, the moral decision rule depends on how close to the active killing of the man is."

(4) Bystander: Permissible; Footbridge: Impermissible. "Not acceptable to decide to risk someone else's life to save others."

(5) Bystander: Permissible; Footbridge: Impermissible. "I know, five lives are five lives, it's all about the guts. That's what it comes down to. Blaise Pascal got it all wrong."

(6) Bystander: Permissible; Footbridge: Impermissible. "The man, Hank, can here actively influence a sequence of events which will limit damage (# of deaths). In the second event, he cannot throw another man onto the tracks because he will actively and deliberately kill an innocent bystander. Really an impossible choice."

(7) Bystander: Permissible; Footbridge: Impermissible. "Moral actors may be forced to make a decision between two passive choices where both will end rights. But to make action over passive choices requires another kind of analysis and degree of benefit."

(8) Bystander: Permissible; Footbridge: Impermissible. "In the first scenario it would be permissible to act as a utilitarian optimizer. In the second rights come into question."

(9) Bystander: Permissible; Footbridge: Permissible. "I believe that the ultimate question is that of lives lost. Some would argue that Hank and Ian would be morally justified in not stopping the train. While this may be true, it does not necessitate that it be morally unjustified to stop the train."

(10) Bystander: Impermissible; Footbridge: Impermissible. "For the first scenario, I wanted to draw a distinction between 'is it permissible for him to throw the switch' and 'does he have a duty to throw the switch,' though I don't know if that would have changed my answer."
Second, the results of Experiment 2 suggest that individuals have limited conscious
access to these principles (or to whichever principles are actually responsible for generating their
intuitions). Even under a liberal coding scheme, only 20% of subjects provided logically
adequate justifications for their judgments. Further, only 10% did so when asked to explain the
most challenging pair of moral intuitions, namely, the perceived contrast between the Bystander
and Footbridge problems.
Third, Experiment 2 provides some initial evidence of framing effects. Most notably,
only 76% (19/25) of respondents in the Foreseeable Battery condition judged Hank’s throwing
the switch to be permissible, a much lower percentage than the 95% (19/20) of participants who
gave this response in Experiment 1. These effects were slightly less pronounced in males than in
females, but they were discernible in both groups: 85% (11/13) of men gave this response, as
compared with 100% (11/11) in Experiment 1, whereas 60% (6/10) of women gave this
response, as compared with 89% (8/9) in Experiment 1. These sample sizes are obviously quite
small, and it therefore would be premature to draw any firm conclusions about these effects at
this point. It seems likely, however, that a more systematic investigation of framing effects in
larger populations would yield significant results, perhaps including significant gender
differences. Nevertheless, the main pattern of intuitions in Experiment 2 fell in line with those of
Experiment 1, in that both men and women in the aggregate recognized the relevant distinctions
among the Bystander, Footbridge, Good Effects Outweigh Bad Effects, and Consensual Contact
Problems. Hence Experiment 2 provides additional evidence that at least some moral intuitions
and the principles that generate them are widely shared, irrespective of demographic variables
like gender.
4. Experiment 3
Participants in Experiments 1–2 included persons from countries other than the United
States, including Belgium, Canada, Colombia, Denmark, France, Germany, Israel, Japan, Korea,
Lebanon, Mexico, and Puerto Rico. Nonetheless, only one or a few individuals from each of
these countries were represented, and the majority of participants were United States citizens or
members of other Western nations. Accordingly, Experiment 3 was designed to investigate the
moral intuitions of a “non-Western” population.
4.1 Method
4.1.1 Participants
Participants were 39 adult volunteers ages 18–65 from the broader Cambridge,
Massachusetts community, all of whom had emigrated from China within the previous five years
and most of whom had done so within the previous two years. The group included 19 women
and 19 men; 1 participant did not volunteer information about his or her gender. 8
4.1.2 Stimuli and Procedure
Same as Experiment 2, except that participants in this study were not asked to justify
their judgments. 14 participants were given the Intentional Battery scenario, 16 participants
were given the Foreseeable Battery scenario, 15 participants were given the Bad Effects Outweigh
Good Effects scenario, and 16 participants were given the Consensual Contact scenario. The
assignment of participants to scenario type was random.
4.2 Results
4.2.1 Intentional Battery vs. Foreseeable Battery
Once again, we present the results of Experiment 3 in stages, beginning with the
comparison between intentional and foreseeable battery (Figure 14). 2 of 14 subjects in the
Intentional Battery condition judged the action constituting intentional battery (“throwing the
man”) to be permissible. Meanwhile, 11 of 14 subjects in the Foreseeable Battery condition
judged the action constituting foreseeable battery (“throwing the switch”) to be permissible.
This difference is significant: χ²(1, N=28) = 11.72, p < .001.
4.2.2 Good Effects Outweigh Bad Effects vs. Bad Effects Outweigh Good Effects
Due to the limited number of subjects in Experiment 3, we refrain from analyzing our
responses by gender. Instead, we turn directly to the comparison between good and bad effects
(Figure 15). 11 of 14 subjects in the Good Effects Outweigh Bad Effects condition judged
throwing the switch to be permissible. Meanwhile, only 1 of 15 subjects in the Bad Effects
Outweigh Good Effects condition judged throwing the switch to be permissible. This difference
is significant: χ²(1, N=29) = 16.81, p < .001.
4.2.3 Intentional Battery vs. Consensual Contact
Third, we examine the contrast between intentional battery and consensual contact
(Figure 16). 2 of 16 subjects in the Intentional Battery condition judged throwing the man to be
permissible. Meanwhile, 14 of 16 subjects in the Consensual Contact condition judged throwing
the man to be permissible. This difference is significant: x 2 (1, N=32) = 18.0, p < .001.
[Figure 14: Moral Judgments of Two Act Types in Experiment 3 (Intentional Battery vs. Foreseeable Battery). χ²(1, N=28) = 11.72, p < .001]

[Figure 15: Moral Judgments of Two Act Types in Experiment 3 (Good Effects vs. Bad Effects). χ²(1, N=29) = 16.81, p < .001]
4.3 Discussion
The results of Experiment 3 suggest that the central findings of Experiments 1–2 are not
limited to persons educated or raised in the United States or other Western nations. Instead, they
suggest that at least some operative principles of moral competence, including the prohibition of
intentional battery and the principle of double effect, are transnational and may be universal.
While claims of universality are often controversial and should be made with care, this
hypothesis is consistent with the role these principles already play in international law (i.e., the
“law of nations”). For example, the principle of double effect’s implied norm of noncombatant
immunity—that is, its prohibition against directly targeting civilians, together with its qualified
[Figure 16: Moral Judgments of Two Act Types in Experiment 3 (Intentional Battery vs. Consensual Contact). χ²(1, N=32) = 18.0, p < .001]
acceptance of harming civilians as a necessary side effect of an otherwise justifiable military
operation—has long been part of customary international law and is codified in Article 48 of the
First Protocol (1977) to the 1949 Geneva Conventions (e.g., Henkin, Pugh, Schacter & Smit,
1993, pp. 364–65). Likewise, the principle of double effect’s implied norm of proportionality is
also part of customary international law and is codified in Articles 22–23 of the Hague
Convention of 1907 (e.g., Henkin et al., 1993, p. 368). Further, many important legal doctrines,
in both American law and the domestic law of other nations, turn on an analysis of purpose and
the distinction between intended and foreseen effects (Mikhail, 2002). Hence it is perhaps not
surprising to discover that thought experiments like trolley problems, which implicate these
concepts, elicit widely shared moral intuitions from individuals of different cultural
backgrounds.
Nevertheless, while Experiment 3 provides some initial support for the existence of moral
universals, this support is obviously quite limited. More empirical investigation on a much wider
scale is necessary before specific claims about universality could be defensible. In the context of
our hypothesis, what would perhaps be most compelling in this regard would be to collect
additional evidence on trolley intuitions from individuals from around the world, in particular
those from markedly different cultural, social, religious, and socioeconomic backgrounds. To do
this, one would presumably need to translate these thought experiments into different languages.
One might also need to modify them in culturally specific ways, insofar as certain inessential
elements of the scenarios (e.g., trolleys) may be unfamiliar. We do not attempt these extensions
in this paper but merely identify them as objectives of future research which flow naturally from
the studies presented here. 9
5. Experiment 4
Experiments 1–3 suggest that the moral competence of adults includes the prohibition of
intentional battery and the principle of double effect. By attributing intuitive knowledge of these
principles to our subjects, we can explain and predict their moral intuitions.
As Table 3 indicates, the computations presupposed by this explanation can be
reconstructed in the simple form of a series of yes–no questions, or a decision tree. Presented with a
presumptively wrong action, such as those harmful actions at issue in the Trolley, Transplant,
Bystander, and Footbridge problems, the decision-maker first asks whether the proposed action’s
good effects outweigh its bad effects. If the answer is no, then the decision-maker concludes that
the action is impermissible. If the answer is yes, then the decision-maker next asks whether the
action involves committing a battery as a means to achieve a given end. If the answer is no, then
the decision-maker concludes that the action is permissible. If the answer is yes, then the
decision-maker concludes that the action is impermissible.
Table 3: Explanation of Trolley, Transplant, Bystander, and Footbridge Intuitions as a Function of the Principle of Double Effect

Problem (Agent)              Homicide?  Battery?  Good Effects Outweigh Bad Effects?  Battery as a Means?  Deontic Status
Trolley (Charlie)            Yes        Yes       Yes                                 No                   Permissible
Transplant (Dr. Brown)       Yes        Yes       Yes                                 Yes                  Impermissible
Bystander (Denise)           Yes        Yes       Yes                                 No                   Permissible
Footbridge (Nancy)           Yes        Yes       Yes                                 Yes                  Impermissible
Bystander (Hank)             Yes        Yes       Yes                                 No                   Permissible
Footbridge (Ian)             Yes        Yes       Yes                                 Yes                  Impermissible
Modified Bystander (Karl)    Yes        Yes       No                                  No                   Impermissible
Modified Footbridge (Luke)   No         No        Yes                                 No                   Permissible
Table 3 illustrates that our central findings up to this point can be explained in the
foregoing terms. However, our findings are also consistent with an alternative explanation,
according to which trolley intuitions do not depend primarily on the mental state properties of an
agent’s action plan, but on its temporal properties, in particular whether its bad effects (or its
prima facie wrongs such as battery) are mentally represented as occurring before or after its
good effects. In particular, our central findings could be equally explained by the so-called
“Pauline Principle,” which holds that “it is impermissible to do evil that good may come”
(Anscombe, 1970; Donagan, 1977). Suitably formalized, a temporal interpretation of this
principle would in effect compute as “impermissible” any action plan which represents either a
bad effect or a battery occurring before a good effect. As Table 4 reveals, all but one of the
impermissible act representations examined thus far possess this property, the lone exception,
Karl’s throwing the switch in the Modified Bystander Problem, being explainable on other
grounds. 10 Hence the Pauline Principle (or some suitable formalization of it) also constitutes
(part of) an observationally adequate explanation of the results of Experiments 1–3.
Table 4: Explanation of Trolley, Transplant, Bystander, and Footbridge Intuitions as a Function of the Pauline Principle

Problem (Agent)              Homicide?  Battery?  Good Effects Outweigh Bad Effects?  Battery or Bad Effects Prior to Good Effects?  Deontic Status
Trolley (Charlie)            Yes        Yes       Yes                                 No                                             Permissible
Transplant (Dr. Brown)       Yes        Yes       Yes                                 Yes                                            Impermissible
Bystander (Denise)           Yes        Yes       Yes                                 No                                             Permissible
Footbridge (Nancy)           Yes        Yes       Yes                                 Yes                                            Impermissible
Bystander (Hank)             Yes        Yes       Yes                                 No                                             Permissible
Footbridge (Ian)             Yes        Yes       Yes                                 Yes                                            Impermissible
Modified Bystander (Karl)    Yes        Yes       No                                  No                                             Impermissible
Modified Footbridge (Luke)   No         No        Yes                                 No                                             Permissible
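The temporal alternative can be sketched the same way; again this is an illustrative reconstruction with feature codings from Table 4, not code from the study. On the scenarios examined so far, the two principles yield identical verdicts, which is why a further experiment is needed to distinguish them.

```python
def pauline_status(good_outweighs_bad, harm_before_good):
    """Temporal interpretation of the Pauline Principle: an action plan is
    impermissible if a battery or bad effect is represented as occurring
    before the good effect, or if its good effects do not outweigh its bad
    effects (the separate proportionality ground that covers Karl's case)."""
    if not good_outweighs_bad or harm_before_good:
        return "Impermissible"
    return "Permissible"

# On the feature combinations in Tables 3-4, this principle delivers the same
# verdicts as the double-effect decision tree.
for good, feature in [(True, False), (True, True), (False, False)]:
    assert pauline_status(good, feature) == (
        "Permissible" if good and not feature else "Impermissible")
```

This observational equivalence is exactly the situation Experiment 4 was designed to break.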
Experiment 4 was designed to investigate this logically possible alternative, as well as to
provide an additional check on the abstract concept of battery utilized in Experiments 1–3. To
accomplish these objectives, we constructed the following two new scenarios (Mikhail, 2000):
The Looped Track Problem—Intentional Battery (Ned)

Ned is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Ned sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Ned is standing next to a switch, which he can throw, that will temporarily turn the train onto a side track. There is a heavy object on the side track. If the train hits the object, the object will slow the train down, thereby giving the men time to escape. Unfortunately, the heavy object is a man, standing on the side track with his back turned. Ned can throw the switch, preventing the train from killing the men, but killing the man. Or he can refrain from doing this, letting the five die.
Is it morally permissible for Ned to throw the switch?
The Looped Track Problem—Foreseeable Battery (Oscar)

Oscar is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Oscar sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Oscar is standing next to a switch, which he can throw, that will temporarily turn the train onto a side track. There is a heavy object on the side track. If the train hits the object, the object will slow the train down, thereby giving the men time to escape. Unfortunately, there is a man standing on the side track in front of the heavy object, with his back turned. Oscar can throw the switch, preventing the train from killing the men, but killing the man. Or he can refrain from doing this, letting the five die.
Is it morally permissible for Oscar to throw the switch? 11
In the first scenario (“Ned”), battery is embedded within the agent’s action plan as a means. In
the second scenario (“Oscar”), battery is embedded within the agent’s action plan as a side
effect. Unlike the scenarios used in Experiments 1–3, however, the Ned–Oscar pair is not
distinguishable in terms of their morally neutral basic actions (e.g., “throwing the switch” vs.
“throwing the man”) or the temporal properties of their good effects, bad effects, and batteries.
Instead, five fundamental properties are held constant between these two scenarios: (1) good
effects, (2) bad effects, (3) ultimate purpose or goal, (4) morally neutral basic action (“throwing
the switch” in each case), and (5) the temporal order of good effects, bad effects, and batteries.
Further, both are “impersonal” scenarios in the sense defined by Greene and colleagues (Greene
et al., 2001). The Ned–Oscar pair is therefore the purest “minimal pair” of scenarios used thus
far in our investigations.
5.1 Method
5.1.1 Participants
Participants were 309 adult volunteers ages 18–35 from the M.I.T. community. Because
the postulated difference between the relevant scenarios was quite subtle, we greatly increased
our sample sizes in order to be able to detect statistically significant differences in their
underlying representations. For the purposes of this study, we did not actively collect
information on participants’ gender. However, a retrospective analysis of participants’ names
indicated that at least 119 men and at least 117 women participated in this study. An additional
73 individuals did so whose gender was not readily ascertainable in this manner.
5.1.2 Stimuli and Procedure
Two scenarios were used. In one (“Ned”), battery was embedded within the agent’s
action plan as a means (henceforth “Intentional Battery”). In the other (“Oscar”), battery was
embedded within the agent’s action plan as a side effect (henceforth “Foreseeable Battery”). In
both scenarios, good effects, bad effects, ultimate purpose or goal, and morally neutral basic
action (“throwing the switch”) were held constant. In addition, temporal order of good effects,
bad effects, and batteries were also held constant.
A between-subjects design was utilized. Each participant received a questionnaire
containing one written scenario, accompanied by a diagram designed to make the scenario fully
comprehensible (on file with author). The participant was instructed to read the scenario and to
determine whether the proposed action it described was “morally permissible.” Unlike
Experiments 1 and 2, the participants were not asked to provide justifications for their
judgments; nevertheless, many individuals did provide justifications on their own initiative, and
these responses are analyzed below. 159 individuals were given the Intentional Battery scenario
(“Ned”) and 150 were given the Foreseeable Battery scenario (“Oscar”). The assignment of
participants to scenario type was random.
5.2 Results
5.2.1 Judgments
The main results of Experiment 4 are summarized in Figure 17. 76/159 or 48% of
participants who were given the Intentional Battery scenario (“Ned”) judged throwing the switch
to be permissible. Meanwhile, 93/150 or 62% of participants who were given the Foreseeable
Battery scenario (“Oscar”) judged throwing the switch to be permissible. This difference is
significant: χ²(1, N=302) = 6.52, p < .025.
Judgments of men and women are presented in Figure 18. 54% (29/54) of men and 45%
(32/71) of women in the Intentional Battery condition judged throwing the switch to be
permissible. Meanwhile, 72% (47/65) of men and 59% (27/46) of women held the same action
to be permissible in the Foreseeable Battery condition. These contrasts are significant for men,
[Figure 17: Moral Judgments of Two Act Types in Experiment 4 (Intentional Battery vs. Foreseeable Battery). χ²(1, N=302) = 6.52, p < .025]

[Figure 18: Judgments of Act Types in Experiment 4 by Gender (Intentional Battery vs. Foreseeable Battery). χ²(1, N=119) = 4.42, p < .05 (men); χ²(1, N=117) = 2.07, p < .2 (women)]
χ²(1, N=119) = 4.42, p < .05, but not for women, χ²(1, N=117) = 2.07, p < .20. Hence, based
on this data, the null hypothesis that these scenarios are indistinguishable is falsified for men but
not for women. However, larger sample sizes would presumably support the same conclusion
with respect to women. These results also indicate a slight but discernible trend in which, in the
aggregate, men appear more willing than women to permit throwing the switch in these
circumstances.
5.2.2 Justifications
Although we did not ask for justifications, 49 subjects provided some sort of verbalized
explanation of their judgments on their own initiative. 30 of these 49 responses, or 9.9%
(30/302) of the overall total, were logically adequate justifications, while 19 of these 49
responses, or 6.3% (19/302) of the overall total, were logically inadequate. Meanwhile, 83.8%
(253/302) of subjects in Experiment 4 gave no justification.
5.3 Discussion
According to our hypothesis, when individuals encounter hypothetical fact patterns like the
Trolley, Transplant, Footbridge and Bystander problems, they spontaneously compute
unconscious representations of the relevant actions in terms of ends, means, and side effects.
They also distinguish battery as a means from battery as a side effect, prohibiting the former but
permitting the latter in the specific circumstances depicted by these problems. Consequently, we
were led to make two related predictions about the Ned and Oscar pair of scenarios. First, we
predicted that subjects would perceive an intuitive distinction between these scenarios, even
though their overt differences are quite subtle, because in only one of them (Ned) does the agent
intend to commit a battery as a means of furthering his good end. In the Oscar scenario, by
contrast, we assumed that subjects would compute a representation according to which battery is
not a means but a foreseen side effect. Second, we predicted that, on the basis of this distinction
between means and side effects, subjects would judge Oscar’s act of throwing the switch to be
permissible but Ned’s act of throwing the switch to be impermissible.
The results of Experiment 4 confirmed the first prediction. Although the differences
between these fact patterns are minimal, our subjects did in fact distinguish the two scenarios to
a statistically significant extent. That is, we were able to falsify the null hypothesis that these
scenarios are intuitively indistinguishable. This implies that, despite sharing the five
fundamental properties described above, the Ned and Oscar scenarios trigger distinct mental
representations whose properties are morally salient. This in turn lends at least some support to
the hypothesis that the operative distinction between these scenarios is the distinction between
battery as a means and battery as a side effect.
By contrast, the second prediction did not hold, or rather, it held only to a limited extent.
Although a majority (62%) of those participants in the Foreseeable Battery condition held
Oscar’s throwing the switch to be permissible, while a minority (48%) of those in the Intentional
Battery condition held Ned’s throwing the switch to be permissible, the contrast between these
percentages was less sharp than in our previous studies. Further, the Ned responses were no
different than chance in this regard. This was also a departure from our prior studies, in which
the number of participants holding acts constituting intentional battery to be permissible was
small enough to warrant the claim that, as a general matter, individuals regard these acts to be
impermissible.
Nevertheless, Experiment 4 did confirm an intuitive distinction between this pair of
cases, despite their close similarities. Further, several explanations of these comparatively
anomalous results suggest themselves and raise interesting problems for future research. We
briefly mention two such possibilities here, leaving their investigation for another occasion.
First, although trolley intuitions are normally quite sharp (Thomson, 1986), it is also a
familiar observation that they “begin to fail above a certain level of complexity” (Nagel, 1986, p.
174). Indeed, some trolley problems are so complex and bizarre that they do not appear to be
particularly useful given the central aims and methods of cognitive science (e.g., Unger, 1996).
While the Ned–Oscar pair arguably does not fit the latter category, these scenarios also are
undeniably more complex and difficult to process than the problems used in our prior studies.
Indeed, this is one reason we provided participants in this study with a diagram to facilitate
comprehension. Considerations like these suggest that the comparatively anomalous results of
Experiment 4 may be understood as the predictable effect of increasing the amount of relevant
information subjects are required to process in the course of fastening upon a morally salient
structural description. In the case of language, it is well understood that certain nonlinguistic
factors, such as memory limitations and other general limits on how the mind processes
information, can interrupt or interfere with the parsing of linguistic expressions. This, of course,
is one reason why linguists draw the competence–performance distinction and a related
distinction between grammaticality and acceptability judgments (Chomsky, 1965; Haegeman,
1994; Mikhail, 2002). A similar situation has been thought to obtain in the moral domain
(Dwyer, 1999; Mikhail, 2000). If so, then it is possible that the comparatively anomalous
findings of Experiment 4 can be explained within the framework of a moral competence–performance distinction.
Second, a disparity between intended and actual result (LaFave, 2003) may also help
explain these findings. The standard operational test for distinguishing necessary means from
unnecessary side effect is a counterfactual test, according to which one asks whether the actor
would have acted differently if the negative result could have been avoided. By this measure,
Ned but not Oscar is presumed to intend a battery as a means to achieving his end, because Ned’s
objectives include causing the train to hit the man, whereas Oscar’s objectives do not. Put
differently, if circumstances were altered and the man were no longer on the side track, then
presumably Ned would not throw the switch, because his immediate purpose in doing so is to
cause the train to hit the man. By contrast, Oscar presumably would still throw the switch in
these circumstances, because his immediate purpose is to cause the train to hit the object, of
which the man is standing in front.
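The counterfactual test just described can be sketched as a toy model (a hypothetical encoding of ours, not the paper's): delete the man from the agent's represented situation and ask whether the plan still achieves its goal.

```python
def train_is_slowed(side_track_contents):
    """The five are saved iff something on the side track slows the train."""
    return len(side_track_contents) > 0

def battery_is_a_means(side_track_contents):
    """Counterfactual test: remove the man and ask whether the agent's goal
    could still be achieved. If not, hitting the man was a necessary means."""
    without_man = [x for x in side_track_contents if x != "man"]
    return not train_is_slowed(without_man)

# Ned's side track holds only the man; Oscar's holds the man plus a heavy object.
print(battery_is_a_means(["man"]))                  # True: battery as a means
print(battery_is_a_means(["man", "heavy object"]))  # False: battery as a side effect
```

The ex post complication discussed next does not show up in this ex ante encoding, which is the point: the counterfactual test classifies plans by their intended structure, not by what actually happens.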
The disparity between intended and actual result complicates this analysis. This concept
refers to the fact that, although Oscar’s intentions do not include saving the men by causing the
train to hit the man, the actual result of his action will likely be just that, because unless the man
jumps off the track in time, Oscar’s throwing the switch will cause the train to hit the man before
the train hits the object. If so, then, from an ex post perspective, there is a sense in which it will
be true that Oscar committed a battery as a means to saving the five men, even though, from an
ex ante perspective, it was not his intention to do so.
Significantly, this potential disparity between intended and actual result is not present in
any of the other Foreseeable Battery scenarios used in our investigations. Instead, in these other
scenarios, the intended good result of saving the five men necessarily occurs before, and
independently of, the foreseen bad result of committing a battery by causing the train to hit the
man. 12 Hence this important structural difference between the Oscar scenario and the
Foreseeable Battery scenarios utilized in our previous experiments may also help explain the
comparatively anomalous results of Experiment 4.
6. Experiment 5
As interpreted here, the principle of double effect is a complex principle of justification
which holds that an otherwise prohibited action may be permissible if (1) the act itself is not
wrong, (2) the good but not the bad effects are intended, (3) the good effects outweigh the bad
effects, and (4) no morally preferable alternative is available. In Experiments 1–4, we focused
mainly on the first and third provisions of this principle. In Experiment 5, we turned our
attention to the second and fourth provisions.
6.1 Method
6.1.1 Participants
Participants were 60 adult volunteers ages 18–65 from the Washington, D.C. area. The
group consisted of 32 women and 28 men.
6.1.2 Stimuli and Procedure
Four scenarios were used, one of which was the Bystander Problem (see §3.1.2) and the
other three of which were the following modifications of this problem:
Modified Bystander Problem #2 (Mark: Intentional Homicide)

Mark is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Mark sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed, and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Mark is standing next to a switch, which he can throw, that will turn the train onto a side track, thereby preventing it from killing the men. Unfortunately, there is a man on the side track. Mark can throw the switch, killing him; or he can refrain from doing this, letting the men die.
Mark then recognizes that the man on the side track is someone who he hates with a passion. “I don’t give a damn about saving those five men,” Mark thinks to himself, “but this is my chance to kill that bastard.”
Is it morally permissible for Mark to throw the switch in order to kill the man on the side track? 13
Modified Bystander Problem #3 (Richard: Better Alternative)

Richard is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Richard sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed, and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Richard is standing next to a switch, which he can throw, that will turn the train onto a side track, thereby preventing it from killing the men. Unfortunately, there is a man on the side track with his back turned. Richard can throw the switch, killing him; or he can refrain from doing this, letting the men die.
By pulling an emergency cord, Richard can also redirect the train to a third track, where no one is at risk. If Richard pulls the cord, no one will be killed. If Richard throws the switch, one person will be killed. If Richard does nothing, five people will be killed.
Is it morally permissible for Richard to throw the switch?
Modified Bystander Problem #4 (Steve: Disproportional Death)

Steve is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Steve sees what has happened: the driver of the train saw a man walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the man. It is moving so fast that he will not be able to get off the track in time. Fortunately, Steve is standing next to a switch, which he can throw, that will turn the train onto a side track, thereby preventing it from killing the man. Unfortunately, there are five men standing on the side track with their backs turned. Steve can throw the switch, killing the five men; or he can refrain from doing this, letting the one man die.
Is it morally permissible for Steve to throw the switch?
In the first scenario, the agent (“Mark”) contemplates whether to throw the switch, not for the
purpose of saving the men, but for the purpose of killing the man on the side track (henceforth
“Intentional Homicide”). 14 In the second scenario, the agent (“Richard”) must choose whether to
throw the switch in order to prevent the train from killing five at the cost of killing one, but there
is a third option that will result in no one being killed (henceforth, “Better Alternative”). Finally,
in the third scenario, which was designed to replicate and extend the results of Experiments 2–3
concerning the manipulation of good and bad effects, the agent (“Steve”) must choose whether to
throw a switch in order to prevent a runaway train from killing one person, knowing that doing
so will cause the train to kill five other people (henceforth “Disproportional Death”).
A within-subjects design was utilized. Each of the 60 participants received a written
questionnaire containing two scenarios, including (1) one of these three scenarios and (2) the
original Bystander Problem (Hank). Hence there were three conditions, with 20 participants
assigned randomly to each condition. Participants were asked to read the scenarios and decide
whether the proposed actions were “morally permissible” and then to explain or justify their
responses.
6.2 Results
6.2.1 Intentional Homicide v. Bystander Problem
We present the results of Experiment 5 in stages, beginning with the comparison between
the Intentional Homicide Problem and the original Bystander Problem (Figure 19). 4 of 20
subjects in this condition judged the action constituting intentional homicide (“Mark’s throwing
the switch”) to be permissible. Meanwhile, 16 of 20 subjects in this condition judged the same
action (“Hank’s throwing the switch”) in the original Bystander Problem to be permissible. This
difference is significant: χ²(1, N=20) = 14.4, p < .001.
[Figure 19 (bar chart omitted): Moral Judgments of Two Act Types in Experiment 5 (Intentional Homicide vs. Bystander Problem); χ²(1, N=20) = 14.4, p < .001]
[Figure 20 (bar chart omitted): Moral Judgments of Two Act Types in Experiment 5 (Better Alternative vs. Bystander Problem); χ²(1, N=20) = 8.12, p < .01]
6.2.2 Better Alternative v. Bystander Problem
Due to the limited number of subjects in Experiment 5, we refrain from analyzing participants' responses by gender. Instead, we turn directly to the comparison between the Better Alternative Problem and the Bystander Problem (Figure 20). 15 of 20 subjects in this condition judged Hank’s throwing the switch to be permissible in the Bystander Problem. By contrast, only 6 of 20 subjects judged the same action (“Richard’s throwing the switch”) to be permissible in the presence of a better alternative. This difference is significant: χ²(1, N=20) = 8.12, p < .01.
6.2.3 Disproportional Death v. Bystander Problem
Third, we turn to the comparison between the Disproportional Death Problem and the Bystander Problem (Figure 21). 3 of 20 subjects in this condition judged the action generating Disproportional Death (“Steve’s throwing the switch”) to be permissible. By contrast, 11 of 20 subjects in this condition judged the same action (“Hank’s throwing the switch”) in the original Bystander Problem to be permissible. This difference is significant: χ²(1, N=20) = 7.03, p < .01.

[Figure 21 (bar chart omitted): Moral Judgments of Two Act Types in Experiment 5 (Disproportional Death vs. Bystander Problem); χ²(1, N=20) = 7.03, p < .01]
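Each of the three reported statistics can be reproduced from the raw counts. The sketch below is our own, not the paper's: it assumes the uncorrected Pearson chi-square over each 2×2 table of permissible/impermissible judgments, treating the two conditions as independent samples.

```python
# Sketch (ours, not the paper's): reproduce the Experiment 5 chi-square
# statistics from the reported counts, assuming the uncorrected Pearson
# statistic over each 2x2 table of permissible/impermissible judgments.

def pearson_chi2(a, b, c, d):
    """Pearson chi-square for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Intentional Homicide (4/20 permissible) vs. Bystander (16/20 permissible)
print(round(pearson_chi2(4, 16, 16, 4), 2))   # 14.4
# Better Alternative (6/20) vs. Bystander (15/20)
print(round(pearson_chi2(6, 14, 15, 5), 2))   # 8.12
# Disproportional Death (3/20) vs. Bystander (11/20)
print(round(pearson_chi2(3, 17, 11, 9), 2))   # 7.03
```

These values match the reported 14.4, 8.12, and 7.03; note that because each comparison pools 40 judgments from the same 20 subjects, a within-subject test such as McNemar's would be a natural alternative.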
6.2.4 Justifications
Finally, we turn to participants’ justifications for their judgments. The fact that we
utilized a within-subject design led us to expect that justifications in Experiment 5 would be less adequate than those in Experiment 1 (which used a between-subject design) and more like those in Experiment 2 (which used a within-subject design). This prediction was confirmed. Fifty-three percent (32/60) of participants gave no justification, 23% (14/60) provided logically
inadequate justifications, while 23% (14/60) provided logically adequate justifications—a figure
much closer to the percentage of logically adequate justifications in Experiment 2 (20%) than in
Experiment 1 (50%).
6.3 Discussion
The results of Experiment 5 lend further support to our hypothesis that the moral
competence of adults includes intuitive knowledge of the principle of double effect. By
attributing knowledge of this principle to our subjects, we can explain why their deontic
intuitions change when the standard Bystander Problem is modified such that (1) the bad but not
the good effects are intended, or (2) a morally preferable alternative is available. Additionally,
the results of Experiment 5 confirm our previous finding that individuals’ deontic intuitions are
also susceptible to systematic manipulation of good and bad effects.
Experiment 5 also provides additional evidence of framing effects. For example, 80% (16/20) of participants in the Intentional Homicide condition (in which participants were given both the Intentional Homicide and Bystander Problems in a within-subject design) and 75% (15/20) of participants in the Better Alternative condition (in which participants were given both the Better Alternative and Bystander Problems in a within-subject design) judged throwing the switch in the Bystander Problem to be permissible. Further, only 55% (11/20) of participants in the Disproportional Death condition (in which participants were given both the Disproportional Death and Bystander Problems in a within-subject design) judged throwing the switch in the Bystander Problem to be permissible. These percentages, and in particular the last figure, contrast sharply with the 95% of respondents who judged throwing the switch in the Bystander Problem to be permissible in the between-subject design utilized in Experiment 1. This in turn suggests
that whether individuals are prepared to permit foreseeable battery or homicide on broadly
utilitarian grounds in these circumstances may depend on how that question is framed. However,
because the number of participants in Experiment 5 was again relatively small, we refrain from
drawing any firm conclusions about framing effects at this stage of our inquiry and merely
identify this issue as a topic for future research.
7. Experiment 6
Experiments 1-5 were designed to investigate the moral competence of adults only. In
Experiment 6, we extended this inquiry in a provisional way to include the moral competence of
children ages 8-12. Our objectives in this regard were limited. First, we wished to determine
whether children in this age group had moral intuitions about a pair of relatively simple trolley problems that were similar to those of adults. Second, and more generally, we wished to
problems that were similar to the intuitions of adults. Second, and more generally, we wished to
explore the potential of using these and similar thought experiments to investigate the moral
competence of young children. A central premise of both Piaget’s (1932/1965) and Kohlberg’s
(1981, 1984) theories of moral development is that the moral conceptions of adults and children
consist of fundamentally different principles. A corollary is the assumption that moral
development is something that happens gradually over the course of one’s lifetime, and thus
should be investigated by means of longitudinal studies. In this study, we began testing these
assumptions by presenting a group of children with two of the scenarios we used in studying the
moral intuitions of adults. Finally, we wanted to investigate whether these children’s moral
intuitions could be explained with reference to the prohibition of intentional battery and the
principle of double effect. That is, we wished to discover whether children, like adults, would
treat these two cases of necessity differently, depending on whether battery is used as an
intended means to a given end or as a foreseen side effect. In this way, we sought to inquire
whether these principles emerge and become operative relatively early in mental development,
thereby raising the possibility that specific poverty of the stimulus questions could be formulated
in the moral domain.
7.1 Method
7.1.1 Participants
Participants were 30 children ages 8-12 who were recruited with parental consent from
four metropolitan areas: Cambridge, Massachusetts; Knoxville, Tennessee; Toledo, Ohio; and
the District of Columbia. There were 14 girls and 16 boys.
7.1.2 Stimuli and Procedure
Two scenarios were used. In the first scenario (“Dr. Brown”), which was modeled on the
Transplant Problem, battery was embedded within an agent’s action plan as a means (henceforth
“Intentional Battery”). In the second scenario (“Charlie”), which was modeled on the Trolley
Problem, battery was embedded within the agent’s action plan as a side effect (henceforth
“Foreseeable Battery”) (see §9 for actual text).
A between-subject design was utilized. Each of the 30 children was given a written
questionnaire containing either the Intentional Battery or the Foreseeable Battery scenario. The
child was first instructed to read the scenario and then to decide whether the proposed action it
described was “wrong.” For the purposes of this experiment, we took for granted the standard
assumption in deontic logic that “wrong” is logically equivalent to “not morally permissible”
(Prior, 1955; Von Wright, 1951) and reasoned that the children would have an easier time
answering a question using the term “wrong” than one using the phrase “morally permissible.”
The child was also asked on a separate page to provide an explanation for his or her response. Fifteen children were given the Intentional Battery scenario and 15 were given the Foreseeable Battery scenario. Assignment of participant to scenario type was random.
7.2 Results
7.2.1 Judgments
The main results of Experiment 6 are summarized in Figure 22. 6 of 15 children in the
Intentional Battery condition judged the action constituting intentional battery to be permissible.
By contrast, 14 of the 15 children in the Foreseeable Battery condition judged the action
constituting unintentional battery to be permissible. This difference is significant: χ²(1, N=30) = 9.6, p < .01.
7.2.2 Justifications
Due to the limited number of participants in this experiment, we refrain from analyzing
participants’ responses by gender. 15 Instead, we turn to children’s expressed justifications for
their judgments. These justifications were categorized according to the same coding scheme
used in our previous experiments. 46.7% (14/30) of participants gave no justification, 13.3%
(4/30) provided logically inadequate justifications, while 40% (12/30) provided logically
adequate justifications.
7.3 Discussion
Although the results of Experiment 6 are limited, they constitute at least some initial evidence that the moral competence of 8-12 year old children includes the prohibition of intentional battery and the principle of double effect.

[Figure 22 (bar chart omitted): Moral Judgments of Two Act Types in Experiment 6 (Intentional Battery vs. Foreseeable Battery); χ²(1, N=30) = 9.6, p < .01]

Put differently, they suggest that simple
deontological or consequentialist principles alone may be inadequate to describe the intuitive
moral knowledge of children ages 8-12. More generally, these results support the efficacy of
using trolley problems to investigate the moral competence of children, not only those of this age
group but possibly even much younger populations. This conclusion is significant in part
because trolley problems are qualitatively more complex than the questions used by researchers
in the Piagetian tradition (see §8.2.1).
Turning to adequacy of justifications, it is notable that the justifications offered by
children in Experiment 6 were only marginally less adequate than the corresponding
justifications offered by adults in Experiment 1 (which also relied upon relatively simple trolley
problems presented in a betweensubject design). In particular, the percentage of children who
provided logically adequate justifications (40% or 12/30) compares favorably with the percentage of adults (50% or 20/40) who did so in Experiment 1. Nevertheless, as was
the case with adults, many of these logically adequate justifications were manifestly incapable of
accounting for the divergent pattern of intuitions elicited in this experiment, in which saving five
people at the cost of killing one person is felt to be permissible in one case but not the other.
Hence Experiment 6 provides further evidence that, like adults, children ages 8-12 possess
unconscious moral knowledge that may be largely inaccessible to deliberate introspection.
Two further tentative conclusions may be drawn from Experiment 6. First, together with
the results of our previous experiments, the results of Experiment 6 imply that at least some of
the operative principles of moral competence, such as the distinction between intended means
and foreseen side effect, are invariant throughout the course of moral development, at least
between ages 8-65. This conclusion runs counter to one of the most basic assumptions of both
Piaget’s (1932/1965) and Kohlberg’s (1981, 1984) approach to moral development, according to
which the adult’s and the child’s moral competence are composed of fundamentally different
principles. A corollary of this assumption is the view that moral development is something that
happens gradually over the course of one’s lifetime, and thus should be investigated by means of
longitudinal studies. The findings of Experiment 6 call at least some aspects of this investigative
procedure into question, raising the possibility that, like language, vision, and other cognitive
systems, moral development involves predetermined critical stages, after which moral
competence more or less stabilizes.
Finally, the results of Experiment 6 also suggest, at least tentatively, that it may be
possible to formulate poverty of the stimulus arguments in the moral domain. While this
possibility is theoretically intriguing, we refrain from drawing any firm conclusions about it here.
Instead, we simply note that more experimental work must be done to determine whether certain
complex moral principles, such as those investigated here, are explicitly taught or otherwise
available to the developing child during the acquisition process (Harman, 2000; Mikhail, 2000).
Offhand, this seems unlikely, particularly in light of the discovery that adults do not explain or
justify their own moral intuitions with reference to these principles. Indeed, the fact that at least
some operative moral principles appear to be non-introspectible makes it plausible to suppose
that these principles are not taught to successive generations explicitly. Hence we may
reasonably assume, as a working hypothesis, that they are the developmental consequences of an
innate cognitive faculty (Mikhail, 2000; Mikhail et al., 1998). However, this assumption is
largely speculative and the issue requires more empirical investigation.
8. General Discussion
Taken together, the studies presented here constitute significant evidence that adults
possess intuitive or unconscious knowledge of complex moral principles, including the
prohibition of intentional battery and the principle of double effect. Additionally, Experiment 6
provides some evidence for inferring that the same may be true of children ages 8 to 12. By attributing
this knowledge to experimental subjects, we can predict and explain their moral intuitions.
Because the imputed knowledge is intuitive and not fully open to conscious introspection, we can
also advance a tentative explanation of why relatively few individuals appear capable of
providing logically adequate justifications of their judgments, even on an extremely permissive
interpretation of what counts as logically adequate, and why virtually no individuals appear
capable of providing observationally adequate justifications, that is, justifications from which the
systematic pattern of intuitions elicited by these experiments can be mechanically derived (see
§1.1). The explanation, simply put, is that moral cognition appears to involve unconscious
computation, that is, mental operations which are not consciously accessible. In this respect,
moral cognition may be compared to other cognitive capacities, such as language, vision, object
perception, and face recognition, all of which also involve unconscious computation. Human
language, in particular, is well understood to depend on “mental processes that are far beyond the
level of actual or even potential consciousness” (Chomsky, 1965, p. 8). Hence the discrepancy
between moral judgments and justifications observed here is in keeping with the nontransparent
character of mental activity generally. Within the expository framework we have adopted, the
phenomena can be explained rather easily with reference to the analogy to linguistic competence:
Just as normal persons are typically unaware of the principles guiding their linguistic
intuitions, so too are they often unaware of the principles guiding their moral intuitions. 16
In §8.2, we elaborate on these findings and place them within a broader context by
comparing and contrasting our approach to the theory of moral cognition with those of Piaget
(1932/1965), Kohlberg (1981), and Greene et al. (2001). Before doing so, however, we turn
directly in §8.1 to a more extensive discussion of the unconscious mental representations that
appear to be triggered by thought experiments like the trolley problems.
8.1 Describing the Operative Principles
In Figure 2 (see §1.2), we sketched an expanded perceptual model for moral judgment,
according to which permissibility judgments do not necessarily depend on the surface structure
of an action-description, but on how that action is mentally represented. The main theoretical
problem within this framework is an informationprocessing problem, namely: How do people
manage to compute a full structural description of the action that incorporates certain properties,
such as ends, means, side effects, and prima facie wrongs like battery, when the stimulus
contains no direct evidence for these properties? This is similar in principle to determining how
people manage to extract a threedimensional representation from a twodimensional stimulus in
the theory of vision (e.g., Marr, 1982), or to determining how people recognize the word
boundaries in an undifferentiated auditory stimulus in the theory of language (e.g., Chomsky &
Halle, 1968). In our case, the question is how and why individuals make the inferences they do
about the various agents and actions in our examples, even when we deliberately deprive them of
direct evidence of those agents’ mental states and other morally salient properties.
As Figure 23 depicts, this problem may be divided into at least four main parts.
Presumably, to compute a morally cognizable structural description of a given action, one must
[Figure 2 (diagram omitted): Expanded Perceptual Model for Moral Judgment: the stimulus (fact pattern) is converted by conversion rules into a structural description (an unconscious mental representation), to which deontic rules apply to yield the perceptual response (moral judgment: permissible or impermissible)]
[Fig. 23 (diagram omitted): Components of Conversion from Stimulus to Structural Description: the stimulus (fact pattern) is converted by “conversion rules” into temporal, causal, moral, and intentional structure]
generate a complex mental representation of that action which encodes relevant information
about its temporal properties, its causal properties, its moral properties, and its intentional
properties. But how does the individual manage to extract the relevant cues from the stimulus
and convert what is given into a full structural description? The following is one hypothesis
(Mikhail, 2000).
8.1.1 Temporal Structure
The process appears to include the following steps. First, one must identify the morally
relevant action-descriptions contained in the stimulus and order them serially according to their
relative temporal properties. For example, in the Bystander Problem, one must recognize that
“Hank’s seeing what happened” occurs before “Hank’s throwing the switch,” which occurs
before “Hank’s killing the man” (Figure 24).
[Fig. 24: Temporal Order of Three Act-Representations in the Bystander Problem: [Hank’s seeing what happened] at t(-m); [Hank’s throwing the switch] at t(0); [Hank’s killing the man] at t(+n)]
There is an important convention that this timeline incorporates, which is to date an action from its time of completion. An act that begins at t(0) and ends at t(+n) is in a sense performed neither at t(0) nor at t(+n), but in the period of time bounded by them. For present purposes, we simplify this situation by following traditional jurisprudence in locating the time of an action according to when it is completed (Salmond, 1902/1966).
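The ordering step described above can be sketched as sorting act representations by their completion times. The sketch is ours; the numeric times are hypothetical placeholders, since the stimulus supplies only relative order.

```python
# Sketch of the temporal-ordering step: act representations are dated by
# their time of completion (per the convention noted above) and ordered
# serially. The numeric completion times are hypothetical placeholders.

acts = [
    {"description": "Hank's killing the man", "completed_at": 2},       # t(+n)
    {"description": "Hank's seeing what happened", "completed_at": -1},  # t(-m)
    {"description": "Hank's throwing the switch", "completed_at": 0},    # t(0)
]

timeline = sorted(acts, key=lambda act: act["completed_at"])
print([act["description"] for act in timeline])
# ["Hank's seeing what happened", "Hank's throwing the switch",
#  "Hank's killing the man"]
```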
8.1.2 Causal Structure
Second, one must interpret the morally relevant action-descriptions contained in the
stimulus in terms of their basic causal and other semantic properties. For example, one must
identify and interpret causative expressions such as “Hank killed the man,” “Hank prevented the
train from killing the men,” and “Hank let the men die” in terms of their underlying semantic
structures. Figure 25 illustrates both the surface and semantic structures of “Hank killed the
man.” Figures 26 and 27 do the same for the semantic structures of “Hank prevented the train
from killing the men” and “Hank let the men die,” respectively.
[Fig. 25 (tree diagrams omitted): Surface and Semantic Structures of “Hank killed the man”: a syntactic tree (NP, VP) paired with a semantic structure in which an Agent causes an Effect comprising a Patient (person) and an Event (death)]
[Fig. 26 (tree diagram omitted): Semantic Structure of “Hank prevented the train from killing the men”: an Agent causes the negation (Neg) of an embedded structure in which an Agent causes an Effect comprising a Patient (person) and an Event (death)]
[Fig. 27 (tree diagram omitted): Semantic Structure of “Hank let the men die”: an Agent causes the negation (Neg) of a structure in which an Agent causes the negation of a third structure whose Effect comprises a Patient (person) and an Event (death)]
Additionally, presumably by relying in part on temporal information (Kant, 1787/1953),
one must compute a representation of the causal structure of the relevant acts and omissions in
the form of a causal chain or ordered sequence of causes and effects. Figure 28 illustrates one of
three such causal chains at issue in the Bystander Problem, namely, the chain linking the agent’s
throwing the switch to the outcome of killing the man. At least two further causal chains must
be generated, the first linking the same action (“throwing the switch”) to the outcome of
preventing the train from killing the men, and the second connecting the forbearance of this
action (“not throwing the switch”) to the outcome of letting the men die. (In Figure 28, the effect of causing the train to hit the man is placed in brackets to signify that this representation, unlike the others, is not derived directly from an action-description contained in the stimulus, but rather
must be inferred from assumptions about how objects interact with one another, presumably in
accord with certain core knowledge of contact mechanics (Carey & Spelke, 1994; Spelke,
Breinlinger & Jacobson, 1992). In other words, the brackets identify one location in the causal
chain where the mind “supplies the missing information” (Pinker, 1997, p. 28) that killing the
man in these circumstances requires causing the train to come into contact with him.)
[Fig. 28 (diagram omitted): Causal Chain Generated by Throwing Switch in Bystander Problem: Hank throws the switch, causing the switch to move, causing the train to turn, [causing the train to hit the man], killing the man]
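This causal chain can be sketched as a linked sequence in which one link is marked as inferred rather than given. The encoding and node labels below are ours, not the paper's.

```python
# Sketch of the causal chain generated by throwing the switch in the
# Bystander Problem. 'inferred=True' marks the link that is not stated in
# the stimulus but supplied from core knowledge of contact mechanics.

chain = [
    {"cause": "Hank throws the switch", "effect": "the train turns",
     "inferred": False},
    {"cause": "the train turns", "effect": "the train hits the man",
     "inferred": True},
    {"cause": "the train hits the man", "effect": "the man dies",
     "inferred": False},
]

# A well-formed chain: each link's effect is the next link's cause.
for prev, nxt in zip(chain, chain[1:]):
    assert prev["effect"] == nxt["cause"]

# The single inferred link is where the mind "supplies the missing information".
print([link["effect"] for link in chain if link["inferred"]])
# ['the train hits the man']
```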
8.1.3 Moral Structure
Third, one must apply the moral principle “death is bad” to the semantic structures in
Figures 25-27, transforming each of those structures into a new one that encodes a bad effect.
Figure 29 illustrates this operation in the case of the semantic structure of “Hank killed the man.”
[Fig. 29 (tree diagrams omitted): Moral Transformation of “Hank killed the man”: the Effect comprising a Patient (person) and an Event (death) is rewritten as a Bad Effect]
This operation states that an effect which consists of the death of a person is a bad effect, and
may be rewritten as such. Likewise, one must apply the logical principle “preventing a bad
effect is good” to the morally transformed structure of sentences like “Hank prevented the train
from killing the men,” thereby converting that structure into one that encodes a good effect.
[Fig. 30 (tree diagrams omitted): Conversion of Transform of “Hank prevented the train from killing the man”: the Effect consisting of the negation (Neg) of a Bad Effect is rewritten as a Good Effect]
This operation states that an effect which consists of the negation of a bad effect is a good effect,
and may be rewritten as such (Figure 30).
Additionally, one must apply the logical principle “failing to prevent a bad effect is bad”
to the morally transformed structure of sentences like “Hank let the men die,” thereby converting
that structure into another one that encodes a second, overarching bad effect:
[Fig. 31 (tree diagrams omitted): Conversion of Transform of “Hank let the men die”: the Effect consisting of the negation (Neg) of a Good Effect is rewritten as a Bad Effect]
This operation states that an effect which consists of the negation of a good effect is a bad effect,
and may be rewritten as such (Figure 31).
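The three rewrite operations of this subsection ("death is bad," "preventing a bad effect is good," "failing to prevent a bad effect is bad") can be collapsed into a single recursive valence rule, sketched below. The tuple encoding of negation is our own simplification.

```python
# Sketch of the moral-transformation step: the death of a person is bad,
# and each negation (preventing, or failing to prevent) flips the valence.
# The ("neg", ...) tuple encoding is a hypothetical simplification.

def valence(effect):
    """Return 'good' or 'bad' for an effect, flipping on each negation."""
    if effect == "death":
        return "bad"
    op, inner = effect  # e.g. ("neg", "death")
    assert op == "neg"
    return "good" if valence(inner) == "bad" else "bad"

print(valence("death"))                    # bad:  "Hank killed the man"
print(valence(("neg", "death")))           # good: "Hank prevented ... killing"
print(valence(("neg", ("neg", "death"))))  # bad:  "Hank let the men die"
```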
8.1.4 Intentional Structure
Fourth, one must apply what may be called a “presumption of good intentions” to the
representational structures which have been generated up to this point, thereby converting them
into new structures that represent the intentional properties of the given action. That is, taking
the representation of an action with both good and bad effects as input, one must (absent
countervailing information of the type presented in the Intentional Homicide Problem in §6)
generate the intentional structure of the action by identifying the good effect as the intended
effect or goal of the action and the bad effect as the foreseen but unintended side effect. This
operation can also be represented graphically (Mikhail, 2000; see also Donagan, 1977; Goldman,
1970). In Figure 32, for example, the tree diagram on the left represents one part of the
underlying structure, not of a sentence, but of an action with both good and bad effects that we
may presume the mind spontaneously constructs upon encountering thought experiments like the
trolley problems. The arrow in Figure 32 indicates that this act tree must be converted into the
tree diagram on the right, which in turn signifies that the intended outcome of S’s action, her
ultimate aim or goal, is to achieve the good effect (as represented by the vertical chain of means
and ends connecting the base of the tree to its final end), whereas the bad effect is an unintended
side effect.
[Fig. 32 (act-tree diagrams omitted): Generation of Intentional Structure of Act with Good and Bad Effects: the act tree for S’s V-ing at t(α), which has both a Good Effect and a Bad Effect, is converted into a tree whose vertical chain of means and ends terminates in the Good Effect, leaving the Bad Effect as a side effect]
Note that some operation of this general character must be postulated to explain how the
mind/brain computes intended and non-intended effects, since—crucially—there is no mental
state information in the stimulus itself. In the operation depicted here, the presumption of good
intentions acts as a default principle which says, in effect, that unless evidence to the contrary is
presented, the decision-maker is to assume that the agent, S, is a person of good will, who
intends and pursues good effects and avoids bad ones. 17 In particular, the presumption directs
one to assume that S intends to prevent the train from killing the men and regards killing the man
as an unintended side effect (Figure 33).
[Fig. 33 (act-tree diagrams omitted): Generation of Intentional Structure in Trolley Problems: the act tree for throwing the switch, with the effects of killing the man and preventing the train from killing the men, is converted into one in which preventing the train from killing the men is the intended end and killing the man is a side effect]
8.1.5 Additional Moral/Deontic Structure
The act-representations conceptualized thus far are complex in that they represent the
given actions in terms of morally salient properties like ends, means, and side effects, even
though the stimulus contains no direct evidence of these properties. Nevertheless, these
representations are necessary but not sufficient to distinguish the trolley problems in accord with
their corresponding moral intuitions. The tree structure depicted on the right side of Figure 33,
for example, applies equally to both of the looped-track scenarios used in Experiment 4, and the
general distinction it captures, between the goal of preventing the death of five persons and the
side effect of killing one person, is a property shared by most of the trolley problems we have
discussed so far. Hence these intuitions can be adequately explained only by attributing some
additional moral or deontic structure to the representations these problems elicit.
What additional structure is needed? The specific hypothesis we have advanced here is
that the relevant actions must be represented in terms of prima facie wrongs, such as battery or
homicide. For example, in the Footbridge Problem, one must derive a representation of battery
by inferring that (1) the agent must touch the man in order to throw him onto the track in the path
of the train, and (2) the man would not consent to being touched in this manner, because of his
desire for self-preservation (and because no evidence is given to the contrary). Utilizing
standard notation in action theory (e.g., Mikhail, 2000; Goldman, 1970) and deductive logic
(e.g., Leblanc & Wisdom, 1993), this line of reasoning can be formalized in the following
derivation (Figure 34):
1. [Ian’s throwing the man at t(0)] C
2. [Ian’s throwing the man at t(0)]  (Given)
3. [Ian throws the man at t(0)]  (2; Linguistic Transformation)
4. [Ian throws the man at t(0)] ⊃ [Ian touches the man at t(0)]  (Analytic)
5. [Ian touches the man at t(0)]  (3, 4; Modus Ponens)
6. [The man has not expressly consented to be touched at t(0)]  (Given)
7. [Ian throws the man at t(0)] ⊃ [Ian kills the man at t(+n)]  (Given)
8. [[Ian throws the man at t(0)] ⊃ [Ian kills the man at t(+n)]] ⊃ [[the man has not implicitly consented to be touched at t(0)] . [the man would not consent to being touched at t(0), if asked]]  (Self-Preservation Principle) 18
9. [the man has not implicitly consented to be touched at t(0)]  (7, 8; Modus Ponens)
10. [the man would not consent to be touched at t(0), if asked]  (7, 8; Modus Ponens)
11. [Ian touches the man without his express, implicit, or hypothetical consent at t(0)]  (4, 5, 9, 10)
12. [Ian touches the man without his express, implicit, or hypothetical consent at t(0)] ⊃ [Ian commits battery at t(0)]  (Definition)
13. [Ian commits battery at t(0)]  (11, 12; Modus Ponens)
14. [Ian’s committing battery at t(0)]  (Linguistic Transformation)

Fig. 34: Derivation of Representation of Battery in the Footbridge Problem
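The derivation in Figure 34 can be compressed into a short consent check, sketched below. The boolean encoding is ours; the key assumption, following the self-preservation principle, is that a touching known to be lethal defeats implicit and hypothetical consent.

```python
# Sketch of the battery inference in the Footbridge Problem: throwing
# entails touching (analytic); a touching to which the patient has not
# expressly, implicitly, or hypothetically consented is battery. Per the
# self-preservation principle, a lethal touching defeats implied and
# hypothetical consent.

def commits_battery(throws, express_consent, touching_is_lethal):
    touches = throws                      # throwing entails touching
    implied_or_hypothetical_consent = not touching_is_lethal
    consented = express_consent or implied_or_hypothetical_consent
    return touches and not consented

# Footbridge: throwing the man to his death, without his consent.
print(commits_battery(throws=True, express_consent=False,
                      touching_is_lethal=True))   # True: battery
```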
Additionally, one must locate the representation of these prima facie wrongs which are
derived in this manner in the correct temporal, causal, and intentional location in one’s act tree,
thereby identifying whether they are an intended and/or prior means to a given end, or a
subsequent and/or foreseen side effect. For example, one must locate the batteries committed in
the Footbridge and Bystander Problems in the manner depicted in Figures 3-4, but not in Figures 35-36.
[Fig. 4 (act-tree diagram omitted): Mental Representation of Bystander Problem: D’s throwing the switch at t(0) and turning the train at t(+n) lead to the end of preventing the train from killing the men at t(+n); D’s causing the train to hit the man at t(+n+o), committing battery at t(+n+o), and killing the man at t(+n+o+p) are side effects]

[Fig. 3 (act-tree diagram omitted): Mental Representation of Footbridge Problem: D’s throwing the man at t(0), committing battery at t(0), causing the train to hit the man at t(+n), and committing battery at t(+n) are means to the end of preventing the train from killing the men at t(+n+o); D’s killing the man at t(+n+p) is a side effect]
[Fig. 36 (act-tree diagram omitted): Inaccurate Representation of Bystander Problem, in which the battery is mislocated as a means rather than a side effect]

[Fig. 35 (act-tree diagram omitted): Inaccurate Representation of Footbridge Problem, in which the battery is mislocated as a side effect rather than a means]
Finally, once one has generated a full structural description of the agent’s action that
encodes information about its temporal, causal, moral, and intentional properties, as well as the
location of its prima facie wrongs such as battery, one must apply the relevant substantive moral
principles or deontic rules (e.g., the prohibition of intentional battery and the principle of double
effect) to that structural description.
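As a minimal sketch of this final step (the structural-description encoding and decision rule below are our simplifications, not the paper's formal statement), the two deontic rules can be applied to toy act trees for the Footbridge and Bystander Problems:

```python
# Sketch of the final deontic step: given a structural description that
# locates prima facie wrongs as means or side effects, the prohibition of
# intentional battery and the principle of double effect yield the
# permissibility judgment. The encodings of the two problems are
# hypothetical simplifications of Figures 3-4.

def permissible(description):
    # Prohibition of intentional battery: a prima facie wrong used as a
    # means to the end is impermissible.
    if any(w in ("battery", "homicide") for w in description["means"]):
        return False
    # Double effect's proportionality condition on foreseen side effects.
    return description["good_saved"] > description["bad_caused"]

footbridge = {"means": ["battery"], "side_effects": ["homicide"],
              "good_saved": 5, "bad_caused": 1}
bystander = {"means": [], "side_effects": ["battery", "homicide"],
             "good_saved": 5, "bad_caused": 1}

print(permissible(footbridge))  # False
print(permissible(bystander))   # True
```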
While the foregoing account is incomplete, a complete theory of the steps converting
proximal stimulus to perceptual response by means of a morally cognizable structural description
can be given along these lines. In principle, a computer program could be written to execute
these operations from start to finish. The theory presented here thus goes some way toward
achieving the first two of Marr's (1982) three levels at which any perceptual information-processing
task may be understood, namely, the computational level and the level of representation and
algorithm. In this sense, the theory arguably represents a significant step toward satisfying
the demands of both observational and descriptive adequacy.
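To make this concrete, the final step, applying the two deontic rules to a structural description, can be sketched in code. The data structures, field names, and numerical trade-off rule below are our own illustrative assumptions, not a specification of the full model:

```python
# Minimal sketch of applying the prohibition of intentional battery and the
# principle of double effect to a structural description of an action.
# All names and encodings here are our own illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Effect:
    description: str
    is_battery: bool       # a prima facie wrong located in the representation
    is_means: bool         # intended means (True) vs. foreseen side effect (False)
    lives_saved: int = 0
    lives_lost: int = 0

@dataclass
class StructuralDescription:
    agent: str
    effects: List[Effect]

def deontic_status(act: StructuralDescription) -> str:
    """Apply the prohibition of intentional battery, then the
    cost-benefit residue of the principle of double effect."""
    # Prohibition of intentional battery: battery committed as an
    # intended means renders the act impermissible.
    if any(e.is_battery and e.is_means for e in act.effects):
        return "impermissible"
    # Principle of double effect: foreseen but non-intended bad effects
    # must be outweighed by the intended good effect.
    saved = sum(e.lives_saved for e in act.effects)
    lost = sum(e.lives_lost for e in act.effects)
    return "permissible" if saved > lost else "impermissible"

# Footbridge: the battery (throwing the man) is an intended means.
footbridge = StructuralDescription("Ian", [
    Effect("throwing the man", is_battery=True, is_means=True, lives_lost=1),
    Effect("preventing the train from killing the men",
           is_battery=False, is_means=True, lives_saved=5),
])

# Bystander: the battery is a foreseen side effect of turning the train.
bystander = StructuralDescription("Hank", [
    Effect("causing the train to hit the man",
           is_battery=True, is_means=False, lives_lost=1),
    Effect("preventing the train from killing the men",
           is_battery=False, is_means=True, lives_saved=5),
])

print(deontic_status(footbridge))  # impermissible
print(deontic_status(bystander))   # permissible
```

On this encoding, the ordering of the checks mirrors the legal structure: the categorical prohibition is tested before the cost-benefit residue of double effect is consulted.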
8.2 Contrast with Alternative Frameworks
8.2.1 Contrast with Piaget’s Framework: Complex vs. Simple Acts
From the point of view of the theory of moral cognition and moral development, the
thought experiments tested in these investigations are fascinating for several reasons. Perhaps
the most important reason is that they are qualitatively more complex than the stimulus materials
used by researchers working within the Piagetian tradition. As we mentioned in §1, in his
original studies Piaget (1932/1965) sought to address the problem of descriptive adequacy by
focusing attention on the “subjective” and “objective” elements of moral judgment. To this end,
he asked children to compare an action characterized by “good intentions” (e.g., helping mom set
the table) whose negative consequences were significant (e.g., breaking fifteen cups) with an
action characterized by “bad intentions” (e.g., taking a cookie from the cookie jar) whose
negative consequences were slight (e.g., breaking one cup), and to determine which agent was
“naughtier.” Piaget discovered that until around age nine, children tended to judge the agent
whose action is characterized by the “good intention/greater negative consequences”
combination to be “naughtier.” After age nine, this response pattern changed: the older children
tended to judge that the agent whose action is characterized by the “bad intention/slight negative
consequence” combination was “naughtier.” It was on this basis that Piaget concluded that
children based their moral judgments on effects, not intentions, until around age nine.
As subsequent researchers have noted, Piaget’s conclusions were unjustified, because he
used stories that covaried two and sometimes three parameters at once. The exact role played
by intentions and consequences in the children’s judgments was therefore impossible to
determine. To remedy this situation, researchers modified Piaget’s procedure by presenting
children with stories which permitted the independent variation of intention and consequence
parameters along two dimensions, “good” and “bad.” In this way, subjects could be presented
with actions characterized by one of four possible combinations of features: (1) “good
intention/good effect,” (2) “good intention/bad effect,” (3) “bad intention/good effect,” and (4)
“bad intention/bad effect.”
In one of Nelson’s (1980) experiments, for example, children were given four different
stories to evaluate. In the first, a boy, who sees his friend is sad because he does not have
anything to play with, throws a ball toward his friend in order to play catch with him and cheer
him up (good intention). The friend catches the ball and is happy (good effect). In the second,
the boy throws the ball with the same intention (good intention) but ends up hitting his friend on
the head and making him cry (bad effect). In the third, the boy is mad at his friend that day and
throws the ball toward him in order to hit him with it (bad intention). However, the friend
catches the ball and is happy (good effect). Finally, in the fourth, the boy again throws the ball at
his friend in order to hit him with it (bad intention). This time he succeeds in hitting his friend
on the head with the ball and making him cry (bad effect). When Nelson presented these four
stories to her experimental subjects, she discovered that children as young as three years of age
utilized information about intentions when making moral judgments (Nelson, 1980).
Table 5: Moral Judgments Elicited by Trolley, Transplant, Bystander, and Footbridge Problems as a Function of Piaget's Variables

Problem (Agent) / Good Effect / Bad Effect / Good Intention (Ultimate Aim or Goal) / Deontic Status
Trolley (Charlie) / Preventing 5 deaths / 1 death / Preventing 5 deaths / Permissible
Transplant (Dr. Brown) / Preventing 5 deaths / 1 death / Preventing 5 deaths / Impermissible
Bystander (Denise) / Preventing 5 deaths / 1 death / Preventing 5 deaths / Permissible
Footbridge (Nancy) / Preventing 5 deaths / 1 death / Preventing 5 deaths / Impermissible
Bystander (Hank) / Preventing 5 deaths / 1 death / Preventing 5 deaths / Permissible
Footbridge (Ian) / Preventing 5 deaths / 1 death / Preventing 5 deaths / Impermissible
Although experiments such as Nelson’s, which permitted the independent variation of
intentions and effects, were an improvement on Piaget’s original studies, they nonetheless
limited their attention to what we might call “morally simple acts,” i.e., acts whose mental
representations are characterized by only one morally salient (good or bad) ultimate intention
(i.e., purpose or goal) and only one morally salient (good or bad) effect. By contrast, the
examples we investigate here are more complex, because they involve act-representations
comprising multiple intentions and generating both good and bad effects. Moreover, as
Table 5 reveals, three of the four variables in the Piagetian framework – good and bad effects
and good intention (i.e., ultimate aim, purpose or goal) – are held constant in the Trolley,
Transplant, Footbridge, and Bystander Problems, while the fourth – bad intention – is not
relevant. This suggests that some other property or properties are responsible for the divergent
moral judgments generated by these examples.
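The point can be made mechanical. In the sketch below (the feature coding is ours, read directly off Table 5), Piaget's variables assign every problem the same feature vector, while the elicited judgments split:

```python
# Our own coding of Table 5: each problem as (good effect, bad effect,
# good intention, elicited judgment). All six problems share a single
# Piagetian feature vector, yet the judgments diverge.
problems = {
    "Trolley (Charlie)":
        ("prevent 5 deaths", "1 death", "prevent 5 deaths", "permissible"),
    "Transplant (Dr. Brown)":
        ("prevent 5 deaths", "1 death", "prevent 5 deaths", "impermissible"),
    "Bystander (Denise)":
        ("prevent 5 deaths", "1 death", "prevent 5 deaths", "permissible"),
    "Footbridge (Nancy)":
        ("prevent 5 deaths", "1 death", "prevent 5 deaths", "impermissible"),
    "Bystander (Hank)":
        ("prevent 5 deaths", "1 death", "prevent 5 deaths", "permissible"),
    "Footbridge (Ian)":
        ("prevent 5 deaths", "1 death", "prevent 5 deaths", "impermissible"),
}

feature_vectors = {v[:3] for v in problems.values()}  # Piaget's variables only
judgments = {v[3] for v in problems.values()}

print(len(feature_vectors))  # 1: the cases are indistinguishable on these variables
print(sorted(judgments))     # ['impermissible', 'permissible']
```

Any descriptively adequate model must therefore appeal to at least one property outside this feature set.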
The key insight of our paper is that these intuitions can be adequately explained only by
drawing on complex moral and legal principles, such as the prohibition of intentional battery and
the principle of double effect. Piaget emphasized what he called the “juridical complexity” of
children’s moral judgments and observed that, to understand those judgments, psychologists
must familiarize themselves with “the common law” (Piaget, 1932/1965, pp. 20, 13). Yet
Piaget’s own stimulus materials often failed to test for common legal distinctions, such as those
between justification and excuse, recklessness and negligence, proximate and remote causation,
or—what appears most relevant here—intended and foreseen effects.
Piaget may not be alone in this regard. As Robinson and Darley (1995) have observed,
many social scientists seeking to describe social norms and “the community’s sense of justice”
appear to rely on definitions and concepts that are both descriptively inadequate and legally
inaccurate. As a result, they often underestimate the subtlety and complexity of laypersons’
intuitive grasp of legal concepts and distinctions. And they often beg important questions about
how moral knowledge is acquired. For example, in a recent wide-ranging law review article
criticizing the role of moral intuitions in legal analysis, Kaplow and Shavell insist that norms
“must be relatively simple because they must be imparted to children and applied without
sustained analysis” (Kaplow & Shavell, 2001, p. 1033). Kaplow and Shavell’s observation begs
the question whether and to what extent norms are “imparted to children” in any meaningful
sense. If morality is like language, some of the principles generating our moral intuitions may
turn out to be innate; indeed, as some commentators have argued, the principle of double effect
may turn out to be one such principle (Harman, 2000). In any case, we would simply observe
that it seems highly questionable for norms theorists like Kaplow and Shavell to invert the
logical relationship of descriptive and explanatory adequacy in this fashion. The success of
research programs in other cognitive domains suggests that the origin of our moral intuitions can
be adequately investigated only insofar as their structure is clearly understood.
8.2.2 Contrast with Kohlberg’s Framework: Operative vs. Express Principles
Many research programs in moral cognition do not distinguish sharply between moral
judgment and moral reasoning and tacitly assume that moral principles are introspectible.
Perhaps the most influential research program of this type is the one developed by Lawrence
Kohlberg (1958, 1981, 1984). On Kohlberg’s view, Piaget was correct to assume that the
capacity for moral judgment passes through a series of developmental stages. However, whereas
Piaget proposed two broad stages, Kohlberg’s theory postulates six. Further, although
Kohlberg’s theory is partially based on Piaget’s, his conclusions are far better supported
empirically.
The aspect of Kohlberg’s framework most relevant to our discussion concerns his focus
on the explicit statements people make to justify or explain their moral judgments. Kohlberg
assessed moral development by having trained researchers code an experimental subject’s stated
justifications for her decisions on a series of moral dilemmas. One of his best known puzzles is
the “Heinz dilemma,” which deals with the example of a man whose wife’s life can be saved
only by a medicine he cannot afford. Under these circumstances, would it be right for the man to
steal the drug? Kohlberg and his associates put these and similar questions to children and adults
of all ages, asking them to justify whatever decision they reached. By evaluating not the
decision itself but the justification accompanying it, Kohlberg (1981, 1984) claimed to discover
that moral development progresses through an unvarying sequence of six stages.
Kohlberg's overall theory is rich and complex, and we make no effort to evaluate it
systematically here. Instead, we simply note that our investigations appear to confirm, in rather
dramatic fashion, the criticisms of those commentators who have questioned Kohlberg's
methodological decision to focus on verbalized justifications rather than on moral intuitions
themselves (e.g., Darley & Shultz, 1990; Haidt, 2001; Kagan, 1987;
MacNamara, 1990, 1999; Mikhail, 2000).
Unlike Kohlberg, we distinguished at the outset of our investigations between operative
moral principles and express principles (§1.1). We also assumed that just as normal persons are
typically unaware of the principles guiding their linguistic intuitions, so too are they often unaware
of the principles guiding their moral intuitions. Further, based on both informal observation and a
review of the relevant literature, we predicted that an empirical investigation of the trolley
problems would reveal that our subjects’ “locus of moral certitude” (Jonsen & Toulmin, 1988)
would be their intuitive judgments themselves, not their underlying justification.
Our studies confirmed this prediction. When the participants in our experiments were
asked to provide verbal rationales for their judgments, they were consistently incapable of
articulating the operative principles on which their judgments were based (Table 6). Indeed, in
sharp contrast with Kohlberg’s (1981, 1984) findings concerning the relevance of demographic
variables like gender, nationality, age, and education, but in accord with our expectations, our
subjects’ moral judgments were widely shared, irrespective of these factors. But, as quotations
like these reveal, our subjects’ expressed principles were widely divergent. More importantly,
they consistently failed to state the operative reasons for their judgments in any theoretically
compelling sense. They often said things that were incompatible with their own judgments, or
even internally incoherent. And they often appeared puzzled by the nature and strength of their
intuitions, and by the way those intuitions shifted when we introduced small changes in the
wording of the action-sequences we gave them, in order to evoke distinct mental representations.
Table 6: Adequacy of Justifications in Experiments 1-2 and 4-6

Experiment 1: Between-Subject (n=40)
  No Justification: 13 responses (32.5%)
  Inadequate Justification: 7 responses (17.5%)
  Adequate Justification: 20 responses (50.0%)

Experiment 2: Within-Subject (n=65)
  No Justification: 23 responses (35.4%)
  Inadequate Justification: 25 responses (38.5%)
  Adequate Justification: 13 responses (20.0%)

Experiment 4: Between-Subject (n=309)
  No Justification: 260 responses (84.1%)
  Inadequate Justification: 19 responses (6.1%)
  Adequate Justification: 30 responses (9.7%)

Experiment 5: Within-Subject (n=60)
  No Justification: 32 responses (53.3%)
  Inadequate Justification: 14 responses (23.3%)
  Adequate Justification: 14 responses (23.3%)

Experiment 6: Between-Subject (n=30)
  No Justification: 14 responses (46.7%)
  Inadequate Justification: 4 responses (13.3%)
  Adequate Justification: 12 responses (40.0%)
A related point worth highlighting concerns a question of method underlying Kohlberg’s
controversial findings about the role of gender and other demographic variables in moral
development. As critics such as Gilligan (1982) have observed, the original research from which
Kohlberg derived his theory was based on a study of 84 American boys from suburban Chicago
whose development Kohlberg followed for a period of over twenty years. Although Kohlberg
claimed universality for his stage theory, subsequent research revealed that girls and other groups
who were not included in Kohlberg’s original sample tended to reach only the third or fourth
stage of his 6-stage sequence (Gilligan, 1982; see also Edwards, 1975; Holstein, 1976; Simpson,
1974). Gilligan and others have argued that these findings are the inevitable outcome of a
research program insensitive to the fact that, in moral matters, men and women often speak "in a
different voice" (Gilligan, 1982). Generally speaking, however, these critics have not strayed far
from Kohlberg’s paradigm and have continued to “measure” or otherwise evaluate the character
of an individual’s moral judgment by looking to her actual utterances.
We think the dissociation between moral judgments and justifications identified here
calls into question the entire approach of both Kohlberg and many of his critics. Simply put, our
studies suggest that these psychologists may have focused their attention on the wrong
phenomena. A comparison to the study of language and vision, neither of which constructs a
theory of development based on expressed justifications, brings the point into sharper focus.
Neither linguists nor vision theorists take the post hoc explanations of experimental subjects—
for example, statements explaining or justifying the intuition that a particular utterance is
ungrammatical—to be their primary source of data. Rather, their data are the subjects' intuitions
themselves.
While our findings clearly establish a significant discrepancy between judgments and
justifications, it would be a mistake to conclude from what has been said thus far that express
principles have no evidentiary role to play in the theory of moral cognition. On the contrary, an
individual’s introspective reports may often provide important and even compelling evidence for
the properties of her moral competence (cf. Chomsky, 1965). Indeed, many of the justifications
offered by adults and even some of the justifications offered by young children in our
experiments were illuminating in that they revealed an intuitive grasp of a specific legal concept.
For example, several of the children who were given the Transplant Problem and asked to
explain their judgments in Experiment 6 referred on their own initiative to the crucial issue of
lack of consent. One participant, an eight-year-old boy who judged Dr. Brown's cutting up the
patient to be wrong, made the following comment:
“Okay, if given consent”
A second participant, an eleven-year-old boy, said the following:
“I think that Dr. Brown should ask the person in Room 306 if they would like to be cut up to save the other peoples’ lives.”
A third participant, also an eleven-year-old boy, said:
“I said no because it never said that she gave permission to kill her; to give away her body parts…I did not feel good about it because I would not like somebody to take my body parts.”
Finally, a fourth participant, a twelve-year-old girl, said:
“I believe that it would be wrong to cut this 306 person up without them even knowing it. It would be different if Dr. Brown had asked this person if they would donate their organs and he had received their permission. That is why I would blame him if he took their life.”
As these remarks suggest, the distinction between operative and express principles does not
imply that researchers should disregard or discount the significance of verbalized justifications.
Although none of these children used the term “battery” to explain their judgments, their
comments clearly suggest an intuitive appreciation for one of the key elements of battery,
namely, lack of consent.
In sum, the distinction between operative and express principles appears to vitiate, or at
least seriously compromise, the Kohlberg paradigm, which arguably was the dominant approach
to the psychology of moral development in the second half of the twentieth century (e.g., Gilligan,
1982; Kohlberg, 1981, 1984; Rest, 1983; Turiel, 1983). Kohlberg’s stage theory of moral
development is one which primarily tracks the development of a person’s ability to express or
articulate moral principles. While this is an important skill, and perhaps corresponds with the
ability to engage in more complex acts of moral reasoning, it does not necessarily reveal the
properties of moral competence. On the contrary, as our subjects’ responses to the trolley
problems reveal, a person’s introspective verbal reports and general viewpoint about her moral
knowledge may sometimes be in error. Yet, at the same time, a subject’s reports may contain
important evidence of the speaker’s intuitive knowledge of a specific legal concept, knowledge
that runs the risk of being overlooked if the researcher is preoccupied with the search for
articulate justifications in the manner prescribed by Kohlberg. In the final analysis, the
important point is that, as is the case with a theory of language or vision, the goal of the theory of
moral cognition must be to explain an ideal observer’s actual moral intuitions and their
underlying cognitive mechanisms, rather than to account for an individual’s own statements,
explanations, or justifications of what she intuits and why. (For similar remarks on this
inadequacy of Kohlberg’s framework, see generally Darley & Shultz, 1990; Haidt, 2001; Kagan,
1987; MacNamara, 1990, 1999; Mikhail, 2000. For parallel remarks with respect to language,
see Chomsky, 1965).
8.2.3 Contrast with the Framework of Greene et al.: Computation vs. Emotion
A recent paper that relies on trolley problems to investigate moral cognition, and that has
received widespread attention, is Greene et al. (2001), which was published after the studies
presented here were largely concluded. In this section, we briefly comment on this paper
and contrast it with the investigation presented here.
To begin with, we note that the authors have written a highly stimulating paper. They
deserve credit for showing how a problem that has preoccupied philosophers can be studied
using the methods of brain imaging. However, the authors’ conclusion that the moral judgments
they examine are caused by differences in emotional engagement seems decidedly premature.
Their study appears to have serious methodological flaws which suggest this conclusion may be
suspect. 19 Additionally, Greene and colleagues have given insufficient consideration to the
competing hypothesis that their moral dilemmas elicit mental representations which differ in
their structural properties. Put simply, a computational theory of moral cognition has been ruled
out too soon.
The authors’ central thesis is that “the crucial difference between the trolley dilemma and
the footbridge dilemma lies in the latter’s tendency to engage people’s emotions in a way that the
former does not. The thought of pushing someone to his death is, we propose, more emotionally
salient than the thought of hitting a switch that will cause a trolley to produce similar
consequences, and it is this emotional response that accounts for people’s tendency to treat these
cases differently” (Greene et al., 2001, p. 2106). They advance a further generalization: “Some
moral dilemmas (those relevantly similar to the footbridge dilemma) engage emotional
processing to a greater extent than others (those relevantly similar to the trolley dilemma), and
these differences in emotional engagement affect people’s judgments” (Greene et al., 2001, p.
2106). Finally, on the basis of these observations, the authors predict and then confirm that
certain emotional centers of the brain are more active when subjects respond to the Footbridge
Problem than when they respond to the Trolley Problem.
These claims prompt three related observations. First, the authors’ data do not exclude
the possibility that the Footbridge and Trolley problems and related thought experiments engage
perceptual and cognitive processing in systematically different ways, and that it is these
differences, rather than (or in addition to) differences in emotion, that influence people’s moral
judgments. Rather, their data are consistent with assuming that people distinguish permissible
and impermissible actions for independent reasons and respond emotionally once these prior
determinations have been made.
Second, on the authors’ own view, some account of the process whereby subjects
interpret the verbal stimulus and extract informational cues is not merely possible, but necessary.
Indeed, Greene and colleagues presuppose just such an account, suggesting that people manage
to conclude that it is acceptable to sacrifice one person for the sake of five in the Trolley Problem
but not the Footbridge Problem by spontaneously analyzing these cases in terms of three
features: “whether the action in question (a) could reasonably be expected to lead to serious
bodily harm, (b) to a particular person or a member or members of a particular group of people
(c) where this harm is not the result of deflecting an existing threat onto a different party”
(Greene et al., 2001, p. 2107). Greene and colleagues explain their subjects’ moral judgments
and predict patterns of brain activity on the basis of these three features.
Third, the authors’ characterization of the function that maps verbal stimulus to moral
response is neither complete nor descriptively adequate. It is incomplete because we are not told
how people manage to recognize whether a given dilemma contains these features; surprisingly,
Greene and colleagues leave this first step in the perceptual process unanalyzed. More
importantly, the authors’ account is descriptively inadequate because it cannot explain simple but
compelling data to which a theory of moral cognition must answer.
Consider, for example, two marginal variations of the Footbridge and Bystander
problems, which we investigated in Experiments 23 and Experiment 5, respectively. In the first,
which we labeled the Consensual Contact Problem (see §3.1.2), a runaway trolley threatens to
kill a man walking across the tracks. The only way to save him is to throw him out of the path of
the train. Throwing the man, however, will seriously injure him. In the second, which we
labeled the Disproportional Death Problem (see §6.1.2), the same runaway trolley again
threatens to kill the man walking across the tracks. This time, the only way to save the man is to
throw a switch that will turn the trolley onto a side track, where it will kill five people.
Taken together, these two problems create a puzzle for Greene et al. (2001). Throwing
the man out of the path of the train is an action which “could reasonably be expected to lead to
serious bodily harm to a particular person . . . where this harm is not the result of deflecting an
existing threat onto a different party” (Greene et al., 2001, p. 2107). On the authors’ account,
therefore, it should be assigned to their "moral-personal" category and judged impermissible.
But a combined total of 93% (38/41) of participants in Experiments 2-3 thought this action was
permissible. Conversely, while turning a trolley onto a side track where it will kill five people
instead of one is an action which “could reasonably be expected to lead to serious bodily harm to
. . . a particular group of people,” it is also “the result of deflecting an existing threat onto a
different party” (Greene et al., 2001, p. 2107). On the authors’ account, therefore, it should be
assigned to their "moral-impersonal" category and judged permissible. Yet 85% (17/20) of
respondents in Experiment 5 thought this action was impermissible. How did our subjects
manage to come to these conclusions? The answer cannot be the one proposed by Greene et al.
(2001).
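The puzzle can be stated mechanically. A direct encoding of the three-feature rule quoted above (the boolean coding of each dilemma is our own assumption) classifies the Consensual Contact Problem as "moral-personal" and the Disproportional Death Problem as "moral-impersonal", the opposite of what the subjects' judgments require:

```python
# Sketch of the classification rule Greene et al. (2001, p. 2107) describe:
# an action is "moral-personal" if it (a) could reasonably be expected to
# lead to serious bodily harm, (b) to a particular person or group, (c) where
# the harm is not the result of deflecting an existing threat onto a
# different party. The boolean coding of each dilemma is our own assumption.

def greene_category(serious_harm: bool, particular_victim: bool,
                    deflects_existing_threat: bool) -> str:
    if serious_harm and particular_victim and not deflects_existing_threat:
        return "moral-personal"    # predicted impermissible
    return "moral-impersonal"      # predicted permissible

# Consensual Contact: throwing the man harms him directly (no deflection).
# The rule predicts impermissibility; 93% of subjects judged it permissible.
print(greene_category(True, True, False))  # moral-personal

# Disproportional Death: turning the trolley deflects an existing threat.
# The rule predicts permissibility; 85% of subjects judged it impermissible.
print(greene_category(True, True, True))   # moral-impersonal
```

Whatever features subjects actually compute, they cannot be exhausted by this rule, since it misclassifies both marginal cases.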
As we have argued, a simpler and more powerful explanation of all of these moral
intuitions is ready to hand, one that grows out of the rationalism that Greene and colleagues too
hastily reject. We need only assume people possess tacit knowledge of specific moral principles
and the ability to compute mental representations of various actions in morally cognizable terms.
The operative reason why pushing the man in the Footbridge Problem is impermissible is
that it constitutes intentional battery. The operative reason why turning the trolley in the
Bystander Problem is permissible is that the battery and homicide it generates are foreseen
but non-intended side effects, outweighed by the intended good effect of preventing the
train from killing the men. By contrast, the operative reason why turning the trolley in the
Disproportional Death Problem is impermissible is that its costs outweigh its benefits.
Finally, the operative reason why throwing the man out of the path of the train in the Consensual
Contact Problem is permissible is that its benefits outweigh its costs and that the action is not a
battery at all: given the reasonable presumption that a person would consent to being thrown and
injured if that were necessary to save his life, the man's hypothetical consent in these
circumstances may be assumed.
Greene and colleagues raise an important objection to the computational approach. They
observe that in an unusual variant of the Bystander Problem invented by Judith Thomson
(Thomson, 1985), in which the side track leading to the one person loops around to connect with
the track leading to the five people, most people agree that it would be permissible to turn the
trolley, even though doing so would appear to violate the Kantian injunction against “using” a
person to achieve a worthy end (Greene et al., 2001, p. 2106). But the authors fail to note that
the original Bystander Problem and Thomson’s looped track example differ in their temporal,
causal, and counterfactual properties (Costa, 1987). More importantly, Greene and colleagues do
not explain why one must accept their tacit assumption that the trolley would be turned in
Thomson’s looped track example for any purpose other than the one motivating the agent in the
original Bystander Problem—to prevent the train from killing the men. If one refrains from
making this assumption, then the intuition that turning the train is permissible in these
circumstances can be explained along familiar lines.
Recall that the “looped track” scenarios we utilized in Experiment 4 were designed in
part to investigate just this issue. In Experiment 4, we discovered that even in this context
subjects remained sensitive to the distinction between intended means and foreseen side effect.
Moreover, they did so even though the process of recovering these intentional properties from
the impoverished stimulus is nontrivial. Further, the relevant data presumably cannot be
explained by appealing to varying levels of emotional engagement, because both scenarios
involve an “impersonal” action (Greene et al., 2001) described as “throwing the switch.” The
data can be explained, however, with reference to unconscious computational principles.
Finally, we emphasize that Greene and colleagues do not provide a clear procedure for
determining whether the three features they identify are contained in (or otherwise derivable
from) their stimuli. We are told that patterns of brain activity can be predicted on the basis of
whether an action can “reasonably be expected to lead to serious bodily harm” to a person or
group “where this harm is not the result of deflecting an existing threat onto a different party”
(Greene et al. 2001, p. 2107), but we are not told how a given stimulus is to be analyzed in these
terms. A further virtue of our computational alternative is that these crucial first steps in the
perceptual process are fully analyzed. That is, not only can our subjects' judgments be generated
by fixed and recognizable moral principles once appropriate mental representations have been
computed; the mental representations themselves can also be derived from their corresponding
stimuli by a purely mechanical process (§8.1). In short, a complete and explicit theory of
the steps converting proximal stimulus to perceptual response can be given. In this sense, the
theory of moral cognition presented here is at least observationally if not descriptively adequate.
Greene et al. (2001) offer no comparable explanation of the conversion of proximal stimulus to
perceptual response. Hence their theory is not even observationally adequate.
8.3 Conclusion
This paper summarizes the results of six experiments performed on 543 individuals,
including 513 adults and 30 children ages 8-12. The results constitute evidence that both adults
and children ages 8-12 possess intuitive or unconscious knowledge of specific moral principles,
including the prohibition of intentional battery and the principle of double effect. Significantly,
this knowledge appears to be merely tacit: when participants were asked to explain or justify
their judgments, they consistently failed to provide adequate justifications for their judgments.
Our findings also suggest that at least some moral intuitions and the principles that generate them
are widely shared, irrespective of demographic variables like gender, nationality, age, and
education. Finally, our findings imply that longstanding questions in moral cognition may be
fruitfully investigated within the framework of theoretical models similar to those utilized in the
study of language and other cognitive systems. Specifically, we have shown how it may be
possible to pursue a “Galilean style” (Chomsky, 1980) of scientific explanation in this domain, in
which observable data are rigorously explained in terms of computational rules and
representations. Having gathered evidence that individuals possess intuitive knowledge of moral
principles, we are now in a better position to determine how this knowledge is acquired and
whether and to what extent it may be innate. Our study thus paves the way for future research
into the nature and origin of human moral intuitions.
Acknowledgments:
We thank the participants in our study and the following individuals who enabled us to recruit subjects and collect data: Cary Coglianese, Nagwa Hultquist, Laura Sandler, Amber Smith, Laila Waggoner, Kenneth Winston, Fei Xu, and Yaoda Xu. Izzat Jarudi and Martin Hewitt provided assistance coding and analyzing data. Thanks also to Kirsten Condry, who created the looped track diagrams used in Experiment 4, and to Paul Bloom, Noam Chomsky, Danny Fox, Steve Goldberg, Tom Grey, Marc Hauser, Lisa Heinzerling, Ray Jackendoff, Emma Jordan, Mark Kelman, Joshua Knobe, Don Langevoort, David Luban, Matthias Mahlmann, James McGilvray, Shaun Nichols, Philippe Schlenker, Mike Seidman, Steve Stich, Josh Tenenbaum, and Kathy Zeiler for helpful suggestions and encouragement. Research support was provided by the Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; the Department of Psychology, New York University; the Department of Psychology, Harvard University; Stanford Law School; Georgetown University Law Center; and the Peter Wall Institute for Advanced Studies at the University of British Columbia.
References:
Anscombe, G. E. (1970). War and murder. In R. Wasserstrom (Ed.), War and morality (pp. 42-53). Los Angeles: Wadsworth Publishing Company.
Baker, M. (2001). The atoms of language: The mind's hidden rules of grammar. New York: Basic Books.
Baird, J. A. (2001). Motivations and morality: Do children use mental state information to evaluate identical actions differently? Paper presented to the Biennial Meeting, Society for Research in Child Development, Minneapolis, MN.
Berndt, T. J., & Berndt, E. G. (1975). Children's use of motives and intentionality in person perception and moral judgment. Child Development, 46, 904-912.
Carey, S., & Spelke, E. (1994). Domain-specific knowledge and conceptual change. In L. A. Hirschfeld & S. A. Gelman (Eds.), Mapping the mind: Domain specificity in cognition and culture (pp. 169-200). New York: Cambridge University Press.
Chomsky, N. (1964). Current issues in linguistic theory. The Hague: Mouton.
Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky, N. (1978). Interview with Noam Chomsky. Linguistic Analysis, 4(4), 301-319.
Chomsky, N. (1980). Rules and representations. New York: Columbia University Press.
Chomsky, N. (1986). Knowledge of language: Its nature, origin, and use. Westport, CT: Praeger.
Chomsky, N. (1995). The minimalist program. Cambridge, MA: MIT Press.
Chomsky, N., & Halle, M. (1968). The sound pattern of English. Cambridge, MA: MIT Press.
Cosmides, L., & Tooby, J. (1994). Beyond intuition and instinct blindness: Toward an evolutionarily rigorous cognitive science. Cognition, 50, 41-77.
Costa, M. J. (1987). Another trip on the trolley. In J. M. Fischer & M. Ravizza (Eds.), Ethics: Problems and principles (pp. 303-307). Fort Worth, TX: Harcourt Brace Jovanovich.
Costanza, P. R., Coie, J. D., Grumet, J. F., & Farnhill, D. (1973). A re-examination of the effects of intent and consequence on children's moral judgments. Child Development, 44, 154-61.
D'Arcy, E. (1963). Human acts: An essay in their moral evaluation. Oxford: Clarendon Press.
Darley, J. M., & Shultz, T. R. (1990). Moral rules: Their content and acquisition. Annual Review of Psychology, 41, 525.
Dobbs, D. (2000). The law of torts. St. Paul, MN: West Group.
Donagan, A. (1977). The theory of morality. Chicago: University of Chicago Press.
Dwyer, S. (1999). Moral competence. In R. Stainton (Ed.), Philosophy and linguistics. Boulder, CO: Westview Press.
Edwards, C. (1975). Societal complexity and moral development: A Kenyan study. Ethos, 3, 505.
Fischer, J. M., & Ravizza, M. (1992). Ethics: Problems and principles. Fort Worth, TX: Harcourt Brace Jovanovich.
Fodor, J. (1983). Modularity of mind: An essay on faculty psychology. Cambridge, MA: MIT Press.
Foot, P. (1967). The problem of abortion and the doctrine of double effect. In J. M. Fischer & M. Ravizza (Eds.), Ethics: Problems and principles (pp. 59-66). Fort Worth, TX: Harcourt Brace Jovanovich.
Gazzaniga, M. S. (1992). Nature's mind: The biological roots of thinking, emotions, sexuality, language, and intelligence. New York: Basic Books.
Gazzaniga, M. S., Ivry, R. B., & Mangum, G. R. (1998). Cognitive neuroscience: The biology of the mind. Cambridge, MA: MIT Press.
Gergely, G., Nadasdy, Z., Csibra, G., & Biro, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-93.
Gilligan, C. (1982). In a different voice. Cambridge, MA: Harvard University Press.
Ginet, C. (1990). On action. New York: Cambridge University Press.
Goldman, A. (1970). A theory of human action. Princeton, NJ: Princeton University Press.
Goldman, A. (1993). Ethics and cognitive science. Ethics, 103, 337-60.
Greene, J., Sommerville, R., Nystrom, L., Darley, J., & Cohen, J. (2001, September 14). An fMRI investigation of emotional engagement in moral judgment. Science, 293.
Haegeman, L. (1994). Introduction to government and binding theory. Oxford: Basil Blackwell.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814-834.
Harman, G. (1977). The nature of morality: An introduction to ethics. New York: Oxford University Press.
Harman, G. (2000). Explaining value and other essays in moral philosophy. New York: Oxford University Press.
Hauser, M. D., Chomsky, N., & Fitch, T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science, 298, 1569-1579.
Hauser, M. D., Cushman, F., Young, L., Jin, R. K-X., & Mikhail, J. A dissociation between moral judgments and justifications. Manuscript submitted for publication.
Helmholtz, H. von. (1867/1962). Helmholtz's treatise on physiological optics (J. P. C. Southall, Ed. and Trans.). New York: Dover.
Henkin, L., Pugh, R. C., Schacter, O., & Smit, H. (1992). Basic documents supplement to international law: Cases and materials. St. Paul, MN: West.
Hilliard, F. (1859). The law of torts (Vols. 1-2). Boston: Little, Brown and Company.
Holstein, C. (1976). Development of moral judgment: A longitudinal study of males and females. Child Development, 47, 51-61.
Johnson, S. C. (2000). The recognition of mentalistic agents in infancy. Trends in Cognitive Science, 4(1), 22-28.
Jonsen, A. R., & Toulmin, S. (1988). The abuse of casuistry: A history of moral reasoning. Berkeley, CA: University of California Press.
Kagan, J. (1987). Introduction. In J. Kagan & S. Lamb (Eds.), The emergence of morality in young children (pp. ix-xx). Chicago: University of Chicago Press.
Kant, I. (1787/1965). Critique of pure reason (N. K. Smith, Trans.). New York: St. Martin's Press.
Kaplow, L., & Shavell, S. (2001). Fairness versus welfare. Harvard Law Review, 114, 961-1061.
Katz, J. (1972). Semantic theory. New York: Harper and Row.
Katz, L. (1987). Bad acts and guilty minds: Conundrums of the criminal law. Chicago: University of Chicago Press.
Kohlberg, L. (1958). The development of modes of thinking and choices in years 10 to 16. Unpublished doctoral dissertation, University of Chicago (on file with author).
Kohlberg, L. (1981). Essays on moral development: Vol. 1. The philosophy of moral development. New York: Harper and Row.
Kohlberg, L. (1984). Essays on moral development: Vol. 2. The psychology of moral development. New York: Harper and Row.
LaFave, W. R. (2003). Principles of criminal law. St. Paul, MN: West Group.
Leblanc, H., & Wisdom, W. (1993). Deductive logic (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.
Lewontin, R. (1990). The evolution of cognition. In D. N. Osherson & E. E. Smith (Eds.), An invitation to cognitive science: Vol. 3 (pp. 229-46). Cambridge, MA: MIT Press.
Lyons, D. (1965). Forms and limits of utilitarianism. Oxford: Clarendon Press.
Macnamara, J. (1990). The development of moral reasoning and the foundations of geometry. Journal for the Theory of Social Behavior, 21, 125-50.
Macnamara, J. (1999). Through the rearview mirror: Historical reflections on psychology. Cambridge, MA: MIT Press.
Mahlmann, M. (1999). Rationalismus in der praktischen Theorie: Normentheorie und praktische Kompetenz. Baden-Baden: Nomos Verlagsgesellschaft.
Marr, D. (1982). Vision. San Francisco: Freeman.
Meltzoff, A. (1995). Understanding the intentions of others: Re-enactment of intended acts by 18-month-old children. Developmental Psychology, 31, 838-50.
Mikhail, J. (2000). Rawls' linguistic analogy: A study of the generative grammar model of moral theory described by John Rawls in A Theory of Justice. Unpublished doctoral dissertation, Cornell University.
Mikhail, J. (2002). Law, science, and morality: A review of Richard Posner's The problematics of moral and legal theory. Stanford Law Review, 54, 1057-1127.
Mikhail, J. (in press). Rawls' linguistic analogy. Cambridge: Cambridge University Press.
Mikhail, J., Sorrentino, C., & Spelke, E. (1998). Toward a universal moral grammar. In M. A. Gernsbacher & S. J. Derry (Eds.), Proceedings, Twentieth Annual Conference of the Cognitive Science Society (p. 1250). Mahwah, NJ: Lawrence Erlbaum Associates.
Nagel, T. (1986). The view from nowhere. In J. M. Fischer & M. Ravizza (Eds.), Ethics: Problems and principles (pp. 165-79). Fort Worth, TX: Harcourt Brace Jovanovich.
Nelson, S. (1980). Factors influencing young children's use of motives and outcomes as moral criteria. Child Development, 51, 823.
Nozick, R. (1968). Moral complications and moral structures. Natural Law Forum, 13, 1-50.
Piaget, J. (1932/1965). The moral judgment of the child. New York: The Free Press.
Petrinovich, L., & O'Neill, P. (1996). Influence of wording and framing effects on moral intuitions. Ethology and Sociobiology, 17, 145-171.
Petrinovich, L., O'Neill, P., & Jorgensen, M. (1993). An empirical study of moral intuitions: Toward an evolutionary ethics. Journal of Personality and Social Psychology, 64(3), 467-478.
Pinker, S. (1994). The language instinct: How the mind creates language. New York: Harper Collins.
Pinker, S. (1997). How the mind works. New York: Norton.
Prior, A. N. (1955). Formal logic. Oxford: Clarendon Press.
Prosser, W. (1941). Casebook on torts. Minneapolis, MN: University of Minnesota Press.
Quinn, W. S. (1993). Morality and action. Cambridge: Cambridge University Press.
Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.
Rest, J. (1983). Morality. In J. Flavell & E. Markman (Eds.), Manual of child psychology: Vol. 3. Cognitive development (4th ed.). New York: Wiley.
Robinson, P. H., & Darley, J. M. (1995). Justice, liability, and blame: Community views and the criminal law. San Francisco: Westview Press.
Salmond, J. (1902/1996). Salmond on jurisprudence (12th ed., P. J. Fitzgerald, Ed.). London: Sweet & Maxwell.
Shapo, M. S. (2003). Principles of tort law. St. Paul, MN: Thomson-West.
Sidgwick, H. (1907). The methods of ethics. Indianapolis, IN: Hackett.
Simpson, E. (1974). Moral development research: A case study of scientific cultural bias. Human Development, 17, 81-106.
Spelke, E. (1998). Nativism, empiricism, and the origins of knowledge. Infant Behavior and Development, 21.
Spelke, E. S., Breinlinger, K., & Jacobson, K. (1992). Origins of knowledge. Psychological Review, 99, 605-32.
Stich, S. (1993). Moral philosophy and mental representation. In M. Hechter et al. (Eds.), The origin of values. New York: Aldine De Gruyter.
Stone, J. (1964). Legal system and lawyers' reasonings. Stanford, CA: Stanford University Press.
Thomson, J. J. (1985). The trolley problem. In J. M. Fischer & M. Ravizza (Eds.), Ethics: Problems and principles (pp. 67-76). Fort Worth, TX: Harcourt Brace Jovanovich.
Thomson, J. J. (1986). Rights, restitution, and risk. Cambridge, MA: Harvard University Press.
Turiel, E. (1983). The development of social knowledge: Morality and convention. Cambridge, MA: Cambridge University Press.
Unger, P. (1996). Living high and letting die: Our illusion of innocence. New York: Oxford University Press.
Von Wright, G. H. (1951). An essay in modal logic. Amsterdam: North-Holland Publishing Company.
Woodward, A. L., Sommerville, J. A., & Guajardo, J. J. (2001). How infants make sense of intentional action. In B. Malle, L. Moses, & D. Baldwin (Eds.), Intentions and intentionality: Foundations of social cognition (pp. 149-169). Cambridge, MA: MIT Press.
9. Appendix
In this Appendix, we reproduce for each experiment both the scenarios we used and our
subjects’ responses to them.
9.1 Experiment 1
9.1.1 Scenarios
1. Transplant Problem—Intentional Battery (Dr. Brown) Dr. Brown has five patients in the hospital who are dying. Each patient needs a new organ in order to survive. One patient needs a new heart. Two patients need a new kidney. And two more patients need a new lung. Dr. Brown can save all five patients if he takes a single healthy person and removes her heart, kidneys, and lungs to give to these five patients. Just such a healthy person is in Room 306. She is in the hospital for routine tests. Having seen her test results, Dr. Brown knows that she is perfectly healthy and of the right tissue compatibility. If Dr. Brown cuts up the person in Room 306 and gives her organs to the other five patients, he will save the other five patients, but kill the person in Room 306 in the process. If Dr. Brown does not cut up the person in Room 306, the other five patients will die.
Is it morally permissible for Dr. Brown to cut up the person in Room 306?
2. Footbridge Problem—Intentional Battery (Frank) Frank is on a footbridge over the train tracks. He knows trains, and can see that the one approaching the bridge is out of control. On the track under the bridge there are five people; the banks are so steep that they will not be able to get off the track in time. Frank knows that the only way to stop an out-of-control train is to drop a very heavy weight into its path. But the only available, sufficiently heavy weight is a large man wearing a backpack, also watching the train from the footbridge. Frank can shove the man with the backpack onto the track in the path of the train, killing him; or he can refrain from doing this, letting the five die.
Is it morally permissible for Frank to shove the man? Y or N
3. Footbridge Problem—Intentional Battery (Ian) Ian is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Ian sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Ian is standing next to a heavy object, which he can throw onto the track in the path of the train, thereby preventing it from killing the men. Unfortunately, the heavy object is a man, standing next to Ian with his back turned. Ian can throw the man, killing him; or he can refrain from doing this, letting the five die.
Is it morally permissible for Ian to throw the man?
4. Footbridge Problem—Intentional Battery (Nancy) Nancy is taking her daily walk near the train tracks when she notices that the train that is approaching is out of control. Five men are walking across the tracks. The train is moving so fast that they will not be able to get off the track in time. Nancy is standing next to a man, whom she can throw in front of the train, thereby preventing it from killing the men. Nancy can throw the man, killing him but saving the five men; or she can refrain from doing this, letting the five die.
Is it morally permissible for Nancy to throw the man?
5. Trolley Problem—Foreseeable Battery (Charlie) Charlie is driving a train when the brakes fail. Ahead five people are working on the track with their backs turned. Fortunately Charlie can switch to a side track, if he acts at once. Unfortunately there is also someone on that track with his back turned. If Charlie switches his train to the side track, he will kill one person. If Charlie does not switch his train, he will kill five people.
Is it morally permissible for Charlie to switch his train to the side track? Y or N
6. Trolley Problem—Foreseeable Battery (Denise) Denise is a passenger on a train whose driver has just shouted that the train's brakes have failed, and who then fainted of the shock. On the track ahead are five people; the banks are so steep that they will not be able to get off the track in time. The track has a side track leading off to the right, and Denise can turn the train onto it. Unfortunately there is one person on the right hand track. Denise can turn the train, killing the one; or she can refrain from turning the train, letting the five die.
Is it morally permissible for Denise to switch the train to the side track? Y or N
7. Bystander Problem—Foreseeable Battery (Edward) Edward is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Edward sees what has happened: the train driver saw five workmen ahead on the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men; the banks are so steep that they will not be able to get off the track in time. Fortunately, Edward is standing next to a switch, which he can throw, that will turn the train onto a sidetrack. Unfortunately, there is one person standing on the sidetrack, with his back turned. Edward can throw the switch, killing the one; or he can refrain from doing this, letting the five die.
Is it morally permissible for Edward to throw the switch? Y or N
8. Bystander Problem—Foreseeable Battery (Hank) Hank is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Hank sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted.
The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Hank is standing next to a switch, which he can throw, that will turn the train onto a side track, thereby preventing it from killing the men. Unfortunately, there is a man standing on the side track with his back turned. Hank can throw the switch, killing him; or he can refrain from doing this, letting the five die.
Is it morally permissible for Hank to throw the switch? Y or N
9.1.2 Responses
A. Intentional Battery Scenarios

Problem: Footbridge (Ian)
Judgment: Impermissible
Justification: Obviously, there is no right or wrong answer to this scenario. Both options presented are not "optimal" ones. In a sense the question may be interpreted if one life is more valuable than five lives. All lives were innocent bystanders in the scenario and either way, the killing of 5 versus 1 is still the loss of a life.
Adequacy: Adequate

Problem: Transplant (Bob)
Judgment: Impermissible
Justification: The moral permissibility of this action is dependent on the healthy patient's own judgments as to whether she would be willing to offer her body in this way. If she did offer permission, then things would be different.
Adequacy: Adequate

Problem: Footbridge (Frank)
Judgment: Impermissible
Justification: Based on the wording of the scenario, I am left to assume that Frank does not know the large man or the five people on the tracks. As such, he is a third-party, passive spectator to the upcoming tragedy, and has no moral position at stake; he is not responsible for the train being out of control or the presence of the five people in its path. But by causing the death of the large man, he becomes actively involved, in a morally negative way. Because the train is inanimate and makes no moral choice in killing the five, this scenario is not a moral dilemma for anybody. In considering the trade of one life for five, Frank assumes that his immoral act would fight the greater immorality of the train, thereby raising the moral equilibrium of the situation. In fact, because the train has no morals, he only lowers the moral equilibrium of the scenario. It is never morally permissible to lower the moral equilibrium. . . . I am fascinated that it took me 5 seconds to determine my answer, and 45 minutes to justify it, above. I have never given any thought to a moral equilibrium before, or whether all my decisions consider raising or lowering it, but as I think about choices I've made, it seems that my view of the "right" choice is tied to the idea of how it balances the morality of the situation. I wonder when this was instilled in me.
Adequacy: Adequate

Problem: Footbridge (Ian)
Judgment: Impermissible
Justification: In this case, Ian has the morally superior option of throwing himself in front of the train instead of the innocent bystander. Even if Ian is too light to stop the train, he cannot throw the bystander in front of the train without bearing moral responsibility for the bystander's death, even though the act would save five lives.
Adequacy: Adequate

Problem: Footbridge (Nancy)
Judgment: Impermissible
Justification: Shoving the man is tantamount to [her] taking some sort of responsibility for the otherwise randomly occurring situation. I don't feel that it is appropriate for her to decide to "sacrifice" this man, who is in as safe a position as she is, simply because only one person will be injured instead of 5. She could drop a heavy rock or the man's backpack, if that would be sufficient, but to choose on her own to put this man's life in jeopardy or cause him physical harm seems beyond the realm of what one human being should be allowed to do. "She cannot play God!" A nice simple crystallization of much more complicated situations that probably occur all the time in the world, on different levels and in different contexts.
Adequacy: Adequate

Problem: Transplant (Bob)
Judgment: Impermissible
Justification: As it was written (with the donor patient healthy), it was very easy to consider [the doctor's] action immoral. However, if she had been terminally ill and in a vegetative state, I would have had to think a little harder (though of course I would want to know the wishes of her and her family before deciding).
Adequacy: Adequate

Problem: Transplant (Bob)
Judgment: Impermissible
Justification: The scenario present is one which, to my thinking[/feeling], leaves the participants with [no] feeling of ambiguity [in] dilemma. My response was immediate, clear. Despite the mathematical gain in lives saved, to sacrifice one healthy life to save the lives of five others is unacceptable.
Adequacy: Adequate

Problem: Footbridge (Nancy)
Judgment: Impermissible
Justification: What if the driver was able to stop in time before hitting the five people but the heavy man was injured anyway? Besides, she does not have the right to risk his life without his permission—if he wanted to, he could jump in front of the train (or she could) but she can't force him.
Adequacy: Adequate

Problem: Footbridge (Ian)
Judgment: Impermissible
Justification: Ian is not justified in deciding who gets to live or die. I'm not sure how plausible it is for a train to be stopped by one man, no matter how heavy he is.
Adequacy: Adequate

Problem: Footbridge (Ian)
Judgment: Impermissible
Justification: Because the five workers will be killed in an accident where no one will be blamed, but the heavy man will have to be deliberately murdered to save the workers. After all, accidents happen and people die—the force of nature. It's more acceptable than a murder.
Adequacy: Adequate

Problem: Footbridge (Nancy)
Judgment: Impermissible
Justification: Replace: 5 with 100, people with school children => no judgment (or even: "no" becomes "yes")
Adequacy: Inadequate

Problem: Transplant (Bob)
Judgment: Impermissible
Justification: In principle (absolute secrecy etc.) should do for greater good.
Adequacy: Inadequate

Problem: Footbridge (Frank)
Judgment: Impermissible
Justification: He can jump himself, if he wants to...
Adequacy: Inadequate

Problem: Footbridge (Nancy)
Judgment: Permissible
Justification: I dislike large men with backpacks, but some would [like me] to be pushed myself.
Adequacy: Adequate
B. Foreseeable Battery Scenarios

Problem: Trolley (Denise)
Judgment: Permissible
Justification: I don't have a strong preference. It seems that either way something bad will happen, and you can argue either action or inaction is the moral route. But there isn't time to decide in the real situation, so there is no moral obligation to act one way or the other.
Adequacy: Adequate

Problem: Bystander (Hank)
Judgment: Permissible
Justification: I would think that Hank's intentions should be examined. If Hank's intention was fed by personal desire (i.e., desire to participate in the act of killing), then there would be a different light thrown on the subject.
Adequacy: Adequate

Problem: Trolley (Denise)
Judgment: Permissible
Justification: Since D. knows what is going to happen if she does nothing, she would be remiss not to act. In a perverse sort of way it is better to kill one person than five, all else being equal. On the other hand, if the one person on the spur sees the train and believes s/he is safe and fails to move, while the 5 on the main track know what's happening, the situation might be different. But if there's not time for people to run off a train track, how is there time to switch tracks safely?
Adequacy: Adequate

Problem: Trolley (Charlie)
Judgment: Permissible
Justification: I am a utilitarian, so the scenario of 1 death is preferable to five deaths. Such decisions are made every day, although the differences between mortality in alternate scenarios are in "statistical" terms, rather than attached to particular people.
Adequacy: Adequate

Problem: Trolley (Charlie)
Judgment: Permissible
Justification: Given that in either outcome someone will die (and barring any way to alert any of the people on the tracks), it seems morally justifiable to take an action which lessens the loss of life.
Adequacy: Adequate

Problem: Trolley (Denise)
Judgment: Permissible
Justification: Denise has only two choices: 1) action turning to the side track and killing one person or 2) inaction which will lead to five deaths. I think that the only viable choice is action, turning the train. By doing so, she will save four lives that would have been lost. The scenario's wording makes it seem that Denise is in some way responsible for killing the person on the right sidetrack rather than being a hero for saving the lives of five people.
Adequacy: Adequate

Problem: Bystander (Edward)
Judgment: Permissible
Justification: By yes, I mean EITHER option is "morally permissible." The other would have been just as morally permissible. Either option is suboptimal. I feel that 5 lives lost equals one life lost without further info. Both are awful. But say the 5 people were serial murderers and rapists plus the one person was say, some saintly person, then one might consider their views on capital punishment and reevaluate.
Adequacy: Inadequate

Problem: Bystander (Edward)
Judgment: Permissible
Justification: I went through several stages in answering this and generally found it troubling/morally difficult. 1st reaction: yes, throw switch. 2nd reaction: no! Is active killing. 3rd reaction: ramp up to large city/bomb vs. countryside. Obviously would divert to countryside.
Adequacy: Inadequate

Problem: Bystander (Hank)
Judgment: Permissible
Justification: It was a difficult decision to make!! The words that stand out are "killing the man" vs. "letting the five die"...which is a greater responsibility? Do we determine the value of life by number (5 vs. 1)? I'm left looking for a better solution to the problem. Could Hank warn the single man standing on the track? Is it really Hank's right to choose who will die that day? Does it make him more responsible for death in general now that he has played an active role? It also brings up the question of bystander apathy—are we perpetrators by not "acting"?
Adequacy: Inadequate

Problem: Trolley (Denise)
Judgment: Permissible
Justification: I think that the obvious answer is "Yes" (and in fact it is likely the answer you would predict that most people would choose). For that reason, I would word the question differently (so that it is less loaded). I would ask: "Is it morally permissible for Denise to refrain from turning the train."
Adequacy: Inadequate

Problem: Trolley (Charlie)
Judgment: Permissible
Justification: The gut reaction is to turn. In the given scenario, I read it as if I was the driver, moving quickly in a panic situation. I believe that I would turn off the main track, THEN see the 1 person working there. If the driver (me) had taken the time to look at the 1 person, then decide, the opportunity to...[rest is too light to read]
Adequacy: Inadequate

Problem: Bystander (Hank)
Judgment: Permissible
Justification: Compare with following scenario: Hank is a physician with five sick patients. Each needs an organ transplant to live: one needs a lung, one a kidney, one a spleen, one a heart, one a brain (this is hypothetical). Fortunately, there is a healthy visitor (with a backpack, coincidentally) who has a healthy set of organs—lung, kidney, spleen, heart, brain. Is it morally permissible for Dr. Hank to sacrifice the backpack guy for organs?
Adequacy: Inadequate

Problem: Bystander (Hank)
Judgment: Impermissible
Justification: The reason I wrote no is because I don't believe that one can evaluate the worth of human lives by numbers alone. The scenario outlined here, if it mandated one of two actions, then I would choose to pull the switch, not because it is morally permissible but because it is the best available option.
Adequacy: Inadequate
9.2 Experiment 2
9.2.1 Scenarios
1. Bystander Problem—Foreseeable Battery (Hank) Hank is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Hank sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Hank is standing next to a switch, which he can throw, that will turn the train onto a side track, thereby preventing it from killing the men. Unfortunately, there is a man standing on the side track with his back turned. Hank can throw the switch, killing him; or he can refrain from doing this, letting the five die.
Is it morally permissible for Hank to throw the switch?
2. Footbridge Problem—Intentional Battery (Ian) Ian is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Ian sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Ian is standing next to a heavy object, which he can throw onto the track in the path of the train, thereby preventing it from killing the men. Unfortunately, the heavy object is a man, standing next to Ian with his back turned. Ian can throw the man, killing him; or he can refrain from doing this, letting the five die.
Is it morally permissible for Ian to throw the man?
3. Modified Bystander Problem—Bad Effects Outweigh Good Effects (Karl) Karl is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Karl sees what has happened: the driver of the train saw five million dollars of new railroad equipment lying idle across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the equipment. It is moving so fast that the equipment will be destroyed. Fortunately, Karl is standing next to a switch, which he can throw, that will turn the train onto a side track, thereby preventing it from destroying the equipment. Unfortunately, there is a man standing on the side track with his back turned. Karl can throw the switch, killing him; or he can refrain from doing this, letting the equipment be destroyed.
Is it morally permissible for Karl to throw the switch?
4. Modified Footbridge Problem—Consensual Contact (Luke) Luke is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Luke sees what has happened: the driver of the train saw a man walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted.
The train is now rushing toward the man. It is moving so fast that he will not be able to get off the track in time. Fortunately, Luke is standing next to the man, whom he can throw off the track out of the path of the train, thereby preventing it from killing the man. Unfortunately, the man is frail and standing with his back turned. Luke can throw the man, injuring him; or he can refrain from doing this, letting the man die.
Is it morally permissible for Luke to throw the man?
5. Trolley Problem—Foreseeable Battery (Charlie) Charlie is driving a train when the brakes fail. Ahead five people are working on the track with their backs turned. Fortunately Charlie can switch to a side track, if he acts at once. Unfortunately there is also someone on that track with his back turned. If Charlie switches his train to the side track, he will kill one person. If Charlie does not switch his train, he will kill five people.
Is it morally permissible for Charlie to switch his train to the side track? Y or N
9.2.2 Responses
A. Pair #1: Bystander Problem (Hank) and Footbridge Problem (Ian) Problem Judgment Justification Adequacy Bystander Permissible
Footbridge Impermissible
The man, Hank can here actively influence a sequence of events which will limit damage (# of deaths). In the second event, he cannot throw another man onto the tracks because he will actively and deliberately kill an innocent bystander. Reallyan impossible choice.
Inadequate
Bystander Permissible
Footbridge Impermissible
It's amazing that I would not throw a person but throw a switch to kill a person. I really wish there was more I could do for the 1 guy on the other track.
Inadequate
Bystander Permissible
Footbridge Impermissible
In either case, the moral decision rule depends on how close to the active killing of the man is.
Inadequate
Bystander Permissible
Footbridge Impermissible
Not acceptable to decide to risk someone else's life to save others.
Inadequate
Bystander Permissible
Footbridge Impermissible
I know – five lives are five lives – it's all about the guts. That's what it comes down to. Blaise Pascal got it all wrong.
Inadequate
Bystander Impermissible
Footbridge Impermissible
For the first scenario, I wanted to draw a distinction between "is it permissible for him to throw the switch" and "does he have a duty to throw the switch," though I don't know if that would have changed my answer.
Inadequate
Bystander Permissible
Footbridge Permissible
I believe that the ultimate question is that of lives lost. Some would argue that Hank and Ian would be morally justified in not stopping the train. While this may be true, it does not necessitate that it be morally unjustified to stop the train.
Adequate
B. Pair #2: Bystander Problem (Hank) and Modified Bystander Problem (Karl)
Problem Judgment Justification Adequacy
Bystander Permissible
Modified Bystander
Impermissible
First scenario – morally justifiable if he feels that a net savings of 4 lives (and knowingly taking an action to kill a man) would be better than nonaction and therefore, saving the life of one. Second scenario – life is more important than money.
Adequate
Bystander Permissible
Modified Bystander
Impermissible
For the first scenario, saving 5 at expense of 1 is better than saving 1 at expense of 5. For the second scenario, value of even 1 person's life is greater than $5 million, if only because equipment can be replaced.
Adequate
Bystander Permissible
Modified Bystander
Impermissible
It is morally permissible for Hank to throw the switch. This can be framed as a "binary" decision. If we assume that Hank's decision is 100% accurate, he will choose to throw rather than not to throw. Karl is thus directly responsible for the man's life. Not switching the train would make him morally responsible for the man's death (Throw switch => 1 killed; Don't throw switch => 5 killed).
Inadequate
Bystander Permissible
Modified Bystander
Impermissible
While a life should not always be preferred to $5 million, in this case the railroad equipment is probably not going to be put to any better useit will probably just serve the greedy interests of the railroad firm itself. (Furthermore, the railroad firm probably got the $5 million unjustly.)
Inadequate
C. Pair #3: Footbridge Problem (Ian) and Modified Footbridge Problem (Luke)
Problem Judgment Justification Adequacy
Footbridge Impermissible
Modified Footbridge
Permissible
One cannot place another in harm's way. Ian may choose to throw himself in front of the train only. It is morally permissible for Luke to risk himself if he so chooses. These are all individual, private decisions relating to control over one's self.
Adequate
Footbridge Impermissible
Modified Footbridge
Permissible
I guess I didn't think Ian should actively murder someone, but if Luke didn't mind risking his own life, that's fine. He shouldn't be morally obligated to do so, however.
Adequate
Footbridge Impermissible
Modified Footbridge
Permissible
For Question 1, it's OK to throw yourself on the tracks to save the five. It's not OK to kill somebody else (who's innocent) to spare your own life. For Question 2, it's OK to force people if you think it's in their best interest, in circumstances like this.
Adequate
D. Pair #4: Footbridge Problem (Ian) and Modified Bystander Problem (Karl)
Problem Judgment Justification Adequacy
Footbridge Impermissible
Modified Bystander
Impermissible
In scenario 1, the issue is essentially allowing 5 to die (passive) or killing 1 (active). I don't feel that utility justifies the killing of the one to save the 5. Similarly in scenario 2, rights not to be killed trump utility.
Adequate
Footbridge Impermissible
Modified Bystander Impermissible
In the first case, killing a person is much different than letting people live. You can chose to kill yourself if you feel that sacrifice will benefit society, but you cannot choose to kill someone else without their consent. Second case, money is a cheap substitute for life. I realize that we cannot spend limitless money to save lives, but he cannot kill a person due to our choice to reduce costs.
Adequate
Footbridge Impermissible
Modified Bystander
Impermissible
My [judgment] is intuitive and I realize not logically justifiable. My intuition is that throwing 1 man on track of a passing train will not stop the train. I also am reluctant to grade life and thus equate [the] value of one life as worth more than 5, even though I know this can be done.
Inadequate
E. Pair #5: Bystander Problem (Hank) and Modified Footbridge Problem (Luke)
Problem Judgment Justification Adequacy
Bystander Impermissible
Modified Footbridge
Permissible
While in situation 1 Hank might not have the moral authority to kill the man on the side track, it may be the best policy decision to do so.
Inadequate
Bystander Impermissible
Modified Footbridge
Permissible
Option 1: the utility of killing a man to save 5 does not warrant a transgression of one's right not to be killed unjustly. And I am not morally obligated (it is not right) to deliberately kill someone to save other lives.
Inadequate
F. Pair #6: Modified Bystander Problem (Karl) and Modified Footbridge Problem (Luke)
Problem Judgment Justification Adequacy
Modified Bystander
Impermissible
Modified Footbridge
Permissible
Money should not determine whether you save a life or not if those are the options given. $5,000,000 vs. killing life.
Inadequate
Modified Bystander
Impermissible
Modified Footbridge
Permissible
Luke is not morally required to throw the man, but it is permissible and even nice of him to do so.
Inadequate
Modified Bystander
Impermissible
Modified Footbridge
Permissible
On the 2nd question, clearly it is morally permissible for him to do so but he is not morally obligated to do so.
Inadequate
G. Pair #7: Trolley Problem (Charlie) and Bystander Problem (Hank)
Problem Judgment Justification Adequacy
Trolley Impermissible
Bystander Impermissible
It is hard for me to allow one person to make the choice regarding whether other people die, regardless of the number of people.
Adequate
Trolley Permissible
Bystander Permissible
Of course it's wrong to allow anyone to die in both scenarios, but killing one is not as bad as losing 5 lives.
Adequate
Trolley Permissible
Bystander Permissible
My immediate instinct would be to envision the families of 5 vs. 1
Inadequate
H. Pair #8: Trolley Problem (Charlie) and Footbridge Problem (Ian)
Problem Judgment Justification Adequacy
Trolley Permissible
Footbridge Impermissible
Very odd. I don't know why I chose differently in the second scenario. The end result is the same. I just chose my gut response – and now am intrigued with how to reconcile them.
Inadequate
Trolley Permissible
Footbridge Impermissible
Moral actors may be forced to make a decision between two passive choices where both will end rights. But to make action over passive choices requires another kind of analysis and degree of benefit.
Inadequate
Trolley Permissible
Footbridge Impermissible
In the first scenario it would be permissible to act as a utilitarian optimizer. In the second rights come into question.
Inadequate
I. Pair #9: Trolley Problem (Charlie) and Modified Bystander Problem (Karl)
Problem Judgment Justification Adequacy
Trolley Permissible
Modified Bystander
Impermissible
I believe that it is always best to minimize the number of lives lost in a choice situation, through direct action. Although I have a problem with comparing any sum of money to human life, the situation could be more difficult for me if the equipment were so valuable it would save life in the future, or irreplaceable.
Adequate
Trolley Permissible
Modified Bystander
Impermissible
Yes for the first scenario is you will save five lives at the expense of one. And I would treat all lives equally. For the second scenario, life has to be greater than dollar amounts.
Adequate
Trolley Permissible
Modified Bystander
Impermissible
For the first scenario, first one [would be an utility reasoning?] (more utility for society if you kill one compare to five). For the second scenario, this answer depends on the value you put on life, which is equal to the utility of the person. I personally put a large value on life.
Adequate
Trolley Permissible
Modified Bystander
Impermissible
Doesn't the train have a horn?
Inadequate
Trolley Permissible
Modified Bystander
Impermissible
If I am directly responsible for the outcome in the given situations I will always choose these answers.
Inadequate
Trolley Permissible
Modified Bystander Impermissible
On the second case, I am able to place a boundary between the right to life and a utilitarian option. In the first case, I am dealing also with the right to life, but in this case I prefer to save 5 lives (is that utilitarian? I don't think is comparable).
Inadequate
J. Pair #10: Trolley Problem (Charlie) and Modified Footbridge Problem (Luke)
Problem Judgment Justification Adequacy
Trolley Permissible
Modified Footbridge
Permissible
It was not difficult to answer because the question was about "morally permissible." Ask me whether or not I would risk my life to save another person, I might say yes but if I was in Luke's situation (2nd scenario) I do not know what I would do.
Inadequate
Trolley Permissible
Modified Footbridge
Permissible
I had no doubt at all on my answers.
Inadequate
Trolley Permissible
Modified Footbridge
Permissible
In the second case, it is morally "permissible" for Luke to throw the man. However, he is not morally obligated to do so as he may risk his own life in the process.
Inadequate
9.3 Experiment 3
9.3.1 Scenarios
Same as Experiment 2.
9.3.2 Responses
Subjects in Experiment 3 were not asked to provide justifications.
9.4 Experiment 4
9.4.1 Scenarios
1. Looped Track Problem—Intentional Battery (Ned)

Ned is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Ned sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Ned is standing next to a switch, which he can throw, that will temporarily turn the train onto a side track. There is a heavy object on the side track. If the train hits the object, the object will slow the train down, thereby giving the men time to escape. Unfortunately, the heavy object is a man, standing on the side track with his back turned. Ned can throw the switch, preventing the train from killing the men, but killing the man. Or he can refrain from doing this, letting the five die.
Is it morally permissible for Ned to throw the switch?
2. Looped Track Problem—Foreseeable Battery (Oscar)

Oscar is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Oscar sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Oscar is standing next to a switch, which he can throw, that will temporarily turn the train onto a side track. There is a heavy object on the side track. If the train hits the object, the object will slow the train down, thereby giving the men time to escape. Unfortunately, there is a man standing on the side track in front of the heavy object, with his back turned. Oscar can throw the switch, preventing the train from killing the men, but killing the man. Or he can refrain from doing this, letting the five die.
Is it morally permissible for Oscar to throw the switch?
9.4.2 Responses
A. Looped Track Problem – Intentional Battery (Ned)
Judgment Justification Adequacy
Impermissible The man has the same right to live as the 5 men. I think that Ned must not interfere with what naturally is going to happen.
Adequate
Impermissible He doesn’t have the right to impose his morals on others (without consent/agreement).
Adequate
Impermissible would be playing god . . . (besides, if Ned had time to hatch this plan, then how come the 5 men didn't have time to jump?)
The only decisions of morality you can make involve yourself. i.e. should I die to save them.
Maybe the man wouldn't want to sacrifice his life to save the 5
Adequate
Impermissible If Ned were able to save the 5 men without killing a sixth, I believe he should. If there were some way to alert the man on the side track and let him escape (being that his back is turned and he has no reason to believe he is going to be hit by the train), then I could justify throwing the switch.
I could see how someone would believe it was moral in that 5 lives would be saved at the expense of one. And honestly, I don't know what I would do if I were Ned, but somehow choosing who is going to die . . . that is, deciding which life has more value, is not "morally permissible" in my value system.
Adequate
Impermissible Question: how can a man slow a train down?
I wouldn't know what to do. I do not think I have the right to judge the importance value of life based on the quantity, or the age. Look for different alternative. I do not have the right to take one's life in order to save 5 others. It would be wrong.
Adequate
Impermissible This question sucks. Who is to decide one life for many? One life is worth as much as the rest—or do they not weigh more for the greater need? Yuck. I can’t decide morality!
Inadequate
Impermissible But I think I would. (?)
Inadequate
Impermissible Although I believe Ned should hit the switch anyway for nonmoral reasons, I also believe that both hitting and not hitting the switch is morally wrong. Assuming that Ned has absolute knowledge that one act kills five men and the opposite act kills one, there is no moral option. In addition, there is no "more" moral option. Taking action (or not taking action) that results in the death of human being is immoral regardless of whether the same action saves another human being or other human beings. The making of this decision is beyond the bailiwick of morality.
Inadequate
Impermissible (1) How could you know it would give them sufficient time to escape? Would you guarantee a man's death for the chance of saving a few others. Would you be so bold? What if you misjudged the viscosity?
Inadequate
Impermissible This situation is exactly alike, in every respect, to one in which a doctor has five patients on a given day, each needing a transplant of a different organ; without such a transplant, they will all die within a day. A sixth perfectly healthy patient comes in. The question is, assuming that the doctor can perform all five transplants perfectly, guaranteed, should he kill the healthy man to save the five? I must answer this no, hence my response to the question above.
Inadequate
Impermissible Respondent drew arrows from the five men to the train, and from there to the object on the other side of the tracks, and commented: Time is longer here than here.
Give more time a chance.
Inadequate
Permissible (1) But try yelling at the guy to get off the track
(2) It sucks but 5 lives to 1 right? What if those 5 are criminals and the 1 is a saint.
There's also the Fat Man in the Cave Problem. Where you're with a group of people exploring a cave and the entrance caves in trapping a fat man in the entrance (his rear is sticking out of the cave and torso in the cave). And you have one stick of dynamite. The tide is rising and you will all drown if you don't get out. Blow up the fat guy or not? One of the most interesting answers I've ever heard is that if the entire group decides to blow up the entrance thereby blowing up the fat man, then at least the person telling him could hold his hand. 2 lives for the group. Is that the ethical thing to do?
Adequate
Permissible The problem as stated is imperfect in that Ned has some uncertainty about the situation – he cannot truly know that flipping the switch will trade one life for five (all six could die, or the driver could regain consciousness, etc.). However, the essence of the question is "kill one to save five?" I consider this "permissible" from a theoretical view; it is the logically superior choice given no other information. "Yes." In practice, though, I don't know if I would personally have the strength to directly kill another human under any circumstances; there may be an almost physical limit on my ability to do harm. I agree that doing −100 points to save +500 has a net +400 balance, but my "evil" limit may be −10, beyond which I might not be able to act.
Adequate
Permissible By throwing the switch, he would be "killing" the one man but he'll be saving the lives of the 5 men. That's the only way I can justify it. Otherwise, he'd be ignoring a grave problem and watching it happen. That is morally irresponsible.
Adequate
Permissible Yell at the 5 men.
Inadequate
Permissible Either way Ned will have nightmares about it.
Inadequate
Permissible "This is sick"
Inadequate
Permissible "This sucks"
Inadequate
Permissible But he shouldn’t because the 5 men know what’s coming. The single man has his back turned and has no idea what is going on. But it is perfectly moral for him to throw the switch and spare 5 lives.
Inadequate
B. Looped Track Problem – Foreseeable Battery (Oscar)
Judgment Justification Adequacy
Impermissible I don't know. I don't think that Oscar has the right to decide who lives or who dies, but he should do something. I guess, no.
Adequate
Impermissible No, because Oscar has no right to play God.
Adequate
Impermissible Because Oscar is killing an individual himself, therefore being accountable for a murder rather than a mishap that he was not responsible for.
Adequate
Impermissible The man on the side track might know that no train was due, so was justified in having his back turned—however, the 5 men crossing may have been doing so knowing about the potential risk.
Adequate
Impermissible No, since it would not be morally right to involve and kill an innocent bystander, even though it would save the lives of 5 men.
Adequate
Impermissible It’s never morally right to kill someone even if it means that 5 other lives can be saved.
Adequate
Impermissible My gut feeling is no, it's not morally permissible he is playing God and human beings aren't "quantifiable" like that (5 more important than one). On the other hand, I think the action is to throw the switch.
Adequate
Impermissible If he throws the switch he is taking a deliberate action which will (or likely will) result in the death of an innocent bystander.
If he does nothing, he is allowing an act of god to proceed over which he has no control other than to decide to kill someone (morally unacceptable).
The 5 potential victims are accountable for their own situation (i.e. standing on a train track which is in service).
Adequate
Impermissible Morally I believe that humans do not have the right to decide to take the lives of other humans, and in this situation nature should be allowed to run its course. Therefore, I believe that it is morally impermissible to throw the switch. In other cases, however, such as the taking of one's own life (euthanasia), I believe that there are certain circumstances where the switch could morally be "thrown."
Adequate
Impermissible Respondent drew an arrow pointing to the five men and commented: Shouldn't have been there in the first place.
Adequate
Impermissible Those 5 people were wrong in being on the track. To kill a man that is innocent and not doing anything wrong is wrong. The other people are walking on the track in danger.
Adequate
Impermissible Since the man on the side track is innocent. The side track is not open for operation because there is a heavy object on the track. So it’s ok for the man standing on the side track.
Adequate
Impermissible Oscar is in no position to play God deciding whose life (lives) are more important.
Adequate
Impermissible It's not morally permissible for him to let the 5 men die if it were in his power to change that. Meanwhile, it is also not permissible for him to take this other man's life. I think Oscar is caught in a situation in which neither choice would be morally right; he is going to have to base his decision on some other standards other than morality.
Inadequate
Impermissible Overpopulation is a serious problem in many areas of the world.
Inadequate
Impermissible This is not a question of the value of life.
Inadequate
Permissible If he can save the lives of five people, he has the moral right to do so. He can save more people this way.
Adequate
Permissible BTW, me and my friends were talking about this yesterday and the man is actually Oscar's friend in his version. And would it be more wrong for Oscar to turn the switch while the train is going towards his friend on while the train is going towards the main track. I think neither way is permissible but if I have to choose, saving more lives is more desirable.
Adequate
Permissible I think either action – throwing the switch or not – is permissible with respect to any "universal" moral law, that is, whatever Hank does, we must presume was the lesser of two evils in his mind. Therefore, Hank would have tried to do the right thing, as he understands it. Doing what one thinks is right seems to be the equivalent of doing one's best, which is all that could ever be expected of anyone, and therefore seems universally defensible.
Adequate
Permissible The question is which is safer for the train; how many people are on it. Assuming the train would be fine and the only worry is the people, one is to lose the one to save the 5. For the most part, it seems down to number or morality [Too light to read well]
Adequate
Permissible The needs of the many outweigh the needs of the few, or the one.
Adequate
Permissible Instinctively yes because you’re going to save 5 people.
Adequate
Permissible Simple math: 1 man dead vs. 5 men. None of them should have been crossing the tracks!!
Adequate
Permissible Murder is not immoral if it prevents other murders. But the law would disagree. Were I placed in the role of Oscar, I might have to consider the option of not taking any action, and pretending not to be able to help.
Adequate
Permissible 1 < 5
Adequate
Permissible It is morally permissible for Oscar to do either, but if you want to know what I would do if I was Oscar, I would not throw the switch.
Inadequate
Permissible When answering this question, I in no way use the argument that one death is better than five, but instead look at the possibility of having no deaths. If the train stays on its present course, 5 people will die. However, if the train is redirected, there is a chance, though a slight one, that everyone may live. I would redirect the train, doing everything I could to get the man's attention, and, God willing, he'd turn around and be able to escape the oncoming train.
Inadequate
Permissible On the one hand, nobody may act as God, and decide who lives and who dies. An alternative is only feasible if it does not implies the death of a human being. On the other hand, the real world is complex. Since a long time ago that that human lives are lost to save a culture, a nation, etc. Furthermore, I spent almost 10 minutes trying to decide and I am still undecided. So, it is reasonable that the man is also undecided, and any option will therefore be acceptable.
Inadequate
Permissible The man on the side track can hear train and thus try to jump off track. Thus saving 5 men and probably 6.
Inadequate
Permissible I don’t know if that’s what I would do, but I don’t think it’s immoral to do it.
Inadequate
9.5 Experiment 5
9.5.1 Scenarios
1. Bystander Problem—Foreseeable Battery (Hank)

Hank is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Hank sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Hank is standing next to a switch, which he can throw, that will turn the train onto a side track, thereby preventing it from killing the men. Unfortunately, there is a man standing on the side track with his back turned. Hank can throw the switch, killing him; or he can refrain from doing this, letting the five die.
Is it morally permissible for Hank to throw the switch?
2. Modified Bystander Problem #2 (Mark: Intentional Homicide)

Mark is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Mark sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed, and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Mark is standing next to a switch, which he can throw, that will turn the train onto a side track, thereby preventing it from killing the men. Unfortunately, there is a man on the side track. Mark can throw the switch, killing him; or he can refrain from doing this, letting the men die.
Mark then recognizes that the man on the side track is someone who he hates with a passion. “I don’t give a damn about saving those five men,” Mark thinks to himself, “but this is my chance to kill that bastard.”
Is it morally permissible for Mark to throw the switch in order to kill the man on the side track?
3. Modified Bystander Problem #3 (Richard: Better Alternative)

Richard is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Richard sees what has happened: the driver of the train saw five men walking across the tracks and slammed on the brakes, but the brakes failed, and the driver fainted. The train is now rushing toward the five men. It is moving so fast that they will not be able to get off the track in time. Fortunately, Richard is standing next to a switch, which he can throw, that will turn the train onto a side track, thereby preventing it from killing the men. Unfortunately, there is a man on the side track with his back turned. Richard can throw the switch, killing him; or he can refrain from doing this, letting the men die.

By pulling an emergency cord, Richard can also redirect the train to a third track, where no one is at risk. If Richard pulls the cord, no one will be killed. If Richard throws the switch, one person will be killed. If Richard does nothing, five people will be killed.
Is it morally permissible for Richard to throw the switch?
4. Modified Bystander Problem #4 (Steve: Disproportional Death)

Steve is taking his daily walk near the train tracks when he notices that the train that is approaching is out of control. Steve sees what has happened: the driver of the train saw a man walking across the tracks and slammed on the brakes, but the brakes failed and the driver fainted. The train is now rushing toward the man. It is moving so fast that he will not be able to get off the track in time. Fortunately, Steve is standing next to a switch, which he can throw, that will turn the train onto a side track, thereby preventing it from killing the man. Unfortunately, there are five men standing on the side track with their backs turned. Steve can throw the switch, killing the five men; or he can refrain from doing this, letting the one man die.
Is it morally permissible for Steve to throw the switch?
9.5.2 Responses
A. Pair #1: Bystander Problem (Hank) and Intentional Homicide Problem (Mark)
Problem Judgment Justification Adequacy
Bystander Permissible
Intentional Homicide
Impermissible
1. Because I don't think there is a right and wrong answer and that there are multiple alternatives to this scenario, I think Hank can throw the switch and probably also save the man standing on the side track. I think Hank can warn the man to move while he is throwing the switch. Furthermore, I think that by not throwing the switch the train can cause more damage than what is now foreseeable.
2. While Mark would be saving the men walking across the tracks, it is not right for Mark to throw the switch primarily to kill the man on the side track.
Inadequate
Bystander Permissible
Intentional Homicide
Impermissible
Scenario totally implausible from a time/speed/distance/safety equipment perspective. Also, throwing the switch might derail the train, killing all six pedestrians & the engineer.
Inadequate
Bystander Permissible
Intentional Homicide
Impermissible
Although I'm not advocating taking a person's life in this situation I believe that the loss of one would be far greater an outcome than five. With that being said, I might not feel that way if the one person was someone I had personal ties with such as a father or brother. Question #2: I do not believe that malicious intent of any kind is an acceptable solution to causing anyone harm or loss of life even if I despised an individual. I'm a firm believer that what goes around comes around and I believe that God can and will take care of the individual in His own time.
Inadequate
Bystander Permissible
Intentional Homicide
Impermissible
Mark's choice to kill is the amoral decision in the 2nd scenario.
Inadequate
Bystander Permissible
Intentional Homicide
Impermissible
Hank Scenario: Do what's best for the greater good as long as the decision is not impacted by personal feeling.
Mark Scenario: Personal feeling to make lifethreatening decision is not morally correct.
Adequate
Bystander Impermissible
Intentional Homicide
Impermissible
I'm not sure what is meant by the phrase "morally permissible." I'm not sure if that is supposed to mean whether I think a particular act is justified or whether I think a majority of people would approve of a particular decision. Ultimately I decided to avoid taking affirmative steps to end another person's life. I have no idea if that is "moral," however.
Adequate
Bystander Permissible
Intentional Homicide
Permissible
The hypo made it sound like the choice was between certainly saving 5 people v. certainly saving 1. If the chances of saving those 5 were not as certain, I would feel differently; even 90% would change my answer.
Inadequate
B. Pair #2: Bystander Problem (Hank) and Better Alternative Problem (Richard)
Problem Judgment Justification Adequacy
Bystander Permissible
Better Alternative
Impermissible
1) While it is morally permissible, it is not morally advisable.
Inadequate
Bystander Permissible
Better Alternative
Impermissible
He should pull the cord and save everyone's lives. Inadequate
Bystander Permissible
Better Alternative
Impermissible
Easy questions. The questions about terminating life without dignity are harder. Questions about the death penalty are harder still.
Inadequate
Bystander Permissible
Better Alternative
Impermissible
Without knowing the individuals in the scenarios, I believe that, if it is possible to save 5 while harming only 1, then you should do so. However, if you have a third option which would harm no one, then that is the ideal choice.
Adequate
Bystander Permissible
Better Alternative
Impermissible
"First do no harm," and then within that context "do as little as possible." Since Richard has a third and vastly preferable choice, it is not morally permissible for him to take an active role in choosing what he considers the lesser of 2 evils. If he can "first do no harm" he should morally always choose that role.
Adequate
Bystander Impermissible
Better Alternative
Impermissible
Reason for first: the five are at fault and though it would be nice to help them, I don't think it's moral to kill a completely innocent person to do this. If I would say yes, I feel like I would be making a statement that five guilty people's lives are more important than one innocent person's. I think that's wrong.
Reason for second: he has an alternative to save everyone and it's morally right to choose that one.
Adequate
Bystander Impermissible
Better Alternative
Impermissible
I think it would be fine if he pulled the cord to save everyone, but it is not morally permissible to decide if 1 man or 5 has the right to live or die.
Adequate
Bystander Impermissible
Better Alternative
Permissible
First scenario: The five men on the tracks chose to be there on the tracks and know the risks as the train is approaching. The man on his own is on a different track unknowing . . . And I would be choosing to murder him to save 5 others . . . My liability. Second scenario: A chance to save all 6 . . . only issue is . . . what about the speed of the train? Is it a passenger train? By making the switch would it derail? Could I put more people at risk by attempting to make the switch and it not be successful?
Inadequate
C. Pair #3: Bystander Problem (Hank) and Disproportionate Death Problem (Steve)
Problem Judgment Justification Adequacy
Disproportional Death
Permissible
Bystander Impermissible
Life of 5 is greater than 1. Adequate
Bystander Permissible
Disproportional Death
Impermissible
While no innocent person's fate should be in the hands of another in an ideal world, in this case, death is inevitable, and it would be morally best to do the greatest good for the greatest number. Therefore, intervening and putting the train off of its course for a net of four lives saved is morally justifiable, if not ideal. In this case, my definition of moral is doing the right/rational thing.
Adequate
Bystander Permissible
Disproportional Death
Impermissible
They should both take the action that saves the most lives
Adequate
Disproportional Death
Permissible
Bystander Impermissible
Due to the preservation of life, it is better to lose one life than to lose five. My vote is do not throw the switch, but pray that the one that is on the track is able to get off safely.
Adequate
Bystander Permissible
Disproportional Death
Impermissible
These don't take into account: A) That Hank could yell to the people/the man on the side of the track (who ought have [illegible])
B) That his throwing of the switch could actually derail the train and kill/injure even more people.
Nevertheless, while 5 people is more than one person in the standpoint of potential casualties, the five are also 5 times more likely as a unit to notice the sound of the speeding train.
Hank is in a position to play God. Perhaps.
Inadequate
Disproportional Death
Permissible
Bystander Impermissible
Both scenarios provoke a lot of thought and human introspect. The fact that you (Hank) has the ability to change fate is somewhat scary. There are a lot of questions that should be answered prior to making a truly informed decision. My hope is that given the circumstances Hank would have the ability to warn all of the men to get them out of harm's way.
Inadequate
Disproportional Death
Impermissible
Bystander Impermissible
Who am I to say who should live or die. It is the Lord's will.
Adequate
Disproportional Death
Impermissible
Bystander Impermissible
Hank/Steve cannot be certain of the outcome of his actions or how his actions will affect the outcome.
Adequate
Bystander Impermissible
Disproportional Death
Impermissible
I felt that whether it was 5 men on the tracks or 1, it was partly their responsibility for being in harm's way. The bystanders shouldn't interfere and cause an innocent man or man's lives to be lost basically. The men on the tracks should have been more careful; their lack of care should not result in another's death.
Adequate
Bystander Impermissible
Disproportional Death
Impermissible
The main reason that I answered "No" to both questions is that the thought of making a value judgment regarding human life seems immoral (or at least uncomfortable) to me. I understand that from a utilitarian standpoint, 5 lives are worth more than 1, but that 1 person is still valued immensely by those who love him. It doesn't seem appropriate for me to intervene, thereby deciding whose life or lives are more precious.
Adequate
Bystander Impermissible
Disproportional Death
Impermissible
It sounds as if the men walking across the tracks see the train coming. They should have made sure the tracks were clear before crossing. It's not fair to kill some man (or men) whose back is turned and has no idea what is going on.
Adequate
Disproportional Death
Permissible
Bystander Permissible
Question 1: The dilemma involving Hank (1st Paragraph) asks the question if it is "morally permissible" for Hank to throw the switch. It is "permissible" using a Utilitarian moral system the greatest good for the greatest number of people. Hence, there is at least one moral system where this act is permissible. The irony is that this Enlightenment position devalues life (in its totality) and it can also be reasoned that the man with his back turned, who will presumably be killed, may have a more valuable life than the others. Ultimately, then, either action would permit a "moral" result depending on your cognitive framework. Question 2: Same answer as above, though culturally a harder argument to make. The position of modernity is to count human beings and thereby assume their equivalence. This is seen, for example, in the economic presumption of ["homo economics"] which economics (contemporary) is based upon. The ultimate equivalence of human beings, however, shares the same fate as this concept in practice homo economics does not exist, though it must be assumed intellectually.
Inadequate
9.6 Experiment 6
9.6.1 Scenarios
1. Transplant Problem—Intentional Battery (Dr. Brown) Dr. Brown has five patients in the hospital who are dying. Each patient needs a new organ in order to survive. One patient needs a new heart. Two patients need a new kidney. And two more patients need a new lung. Dr. Brown can save all five patients if he takes a single healthy person and removes her heart, kidneys, and lungs to give to these five patients. Just such a healthy person is in Room 306. She is in the hospital for routine tests. Having seen her test results, Dr. Brown knows that she is perfectly healthy and of the right tissue compatibility. If Dr. Brown cuts up the person in Room 306 and gives her organs to the other five patients, he will save the other five patients, but kill the person in Room 306 in the process. If Dr. Brown does not cut up the person in Room 306, the other five patients will die.
Is it wrong for Dr. Brown to cut up the person in Room 306?
2. Trolley Problem—Foreseeable Battery (Charlie) Charlie is driving a train when the brakes fail. Ahead five people are working on the track with their backs turned. Fortunately Charlie can switch to a side track, if he acts at once. Unfortunately there is also someone on that track with his back turned. If Charlie switches his train to the side track, he will kill one person. If Charlie does not switch his train, he will kill five people.
Is it morally permissible for Charlie to switch his train to the side track? Y or N
9.6.2 Responses
A. Transplant Problem – Intentional Battery (Dr. Brown)
Judgment Justification Adequacy
Impermissible I believe that it would be wrong to cut this 306 person up without them even knowing it. It would be different if Dr. Brown had asked this person if they would donate their organs and he had received their permission. That is why I would blame him if he took their life.
Adequate
Impermissible I think that Dr. Brown should ask the person in room 306 if they would like to be cut up to save the other peoples' lives.
Adequate
Impermissible I said no because it never said that she gave permission to kill him; to give away her body parts…I did not feel good about it because I would not like somebody to take my body parts.
Adequate
Impermissible Okay, if given consent.
Adequate
Impermissible For question #1 I said yes, because the person in room 306 was not informed she would die and neither was her family. But the 5 patients are prepared and their families were aware that they were close to dying.
Adequate
Impermissible Dr. Brown should ask the person in room 306 permission.
Adequate
Impermissible It's not fair to kill a perfectly healthy person that has not done anything wrong.
Adequate
Permissible It depends on how old they are.
Adequate
Permissible I think it's ok because your saving 5 lives for one.
Adequate
B. Trolley Problem – Foreseeable Battery (Charlie)
Judgment Justification Adequacy
Permissible It's better to kill one person than five if you have no other choice. Less people will die.
Adequate
Permissible Because if you kill 5 people it is worse. Just killing 1 person is not as bad as killing 5 people. I would shout out in front of the people to get out so I wouldn't hit the person. Could turning work?
Adequate
Permissible I think Charlie should change tracks because then he saves at least 5 but one is killed unless he hears the train so all and all to change would be better.
Adequate
Permissible I have no comments or questions. I do not know what to say.
Inadequate
Permissible If he switches tracks, will he still be going to the same place? If he didn't switch he may be blamed more and probably fired. He would be doing a good thing if he switched to the side track.
Inadequate
Permissible Wouldn't they hear the train coming?
Inadequate
Impermissible Not blaming him because he had to make a choice either way.
Inadequate
Notes
1 Although this paper was written by me, it summarizes several years of research conducted with my colleagues Professor Elizabeth Spelke of the Department of Psychology, Harvard University, and Dr. Cristina Sorrentino of the Department of Psychology, New York University. Hence I use plural forms throughout. I was principal investigator in these studies, which began while I was a Lecturer and Research Affiliate in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology and continued while I was a law student at Stanford Law School. Although our findings have not yet been published, they have been presented in posters at annual meetings of the Cognitive Science Society and the Society for Research in Child Development, as well as in lectures, workshops, and graduate seminars at Cornell University, Harvard University, M.I.T., Stanford University, the University of Berlin, and the Peter Wall Institute for Advanced Studies at the University of British Columbia.
2 The distinction between descriptive and explanatory adequacy is potentially confusing because correct answers to both problems are both descriptive and explanatory in the usual sense. A solution to the problem of descriptive adequacy is a description of the mature individual's moral competence; at the same time it is an explanation of her moral intuitions. Likewise, a solution to the problem of explanatory adequacy is a description of the initial state of the moral faculty; at the same time, it is an explanation both of the individual's acquired moral competence and (at a deeper level) those same intuitions (Mikhail, 2000; Mikhail, in press).
3 The syntactic form of the action-descriptions located on the nodes of these act trees (Goldman, 1970) calls for comment. Drawing on Goldman (1970) and Ginet (1990), we take the central element of what we call a complex act-token representation (Mikhail, 2000) to be a gerundive nominal, whose grammatical subject is possessive.
Following Katz (1972), we use the symbol 'at t' to denote some unspecified position on an assumed time dimension, and we use superscripts on occurrences of 't' to refer to specific positions on this dimension. We assume that superscripts can be either variables or constants. We take 't' with the superscript constant '0', i.e., 't(0)', to function as an indexical element in the complex act-token representation, serving to orient the temporal relationships holding between it and other such representations. Superscript variables ('n', 'm', etc.) denote members of the set of natural numbers and appear in superscripts with prefixed signs '+' and '-', indicating an appropriate number of positive and negative units from the origin point ('t(0)') of the time dimension. For example, the symbol 't(+n)' signifies 'n units to the right of the origin,' and the symbol 't(-n)' signifies 'n units to the left of the origin'. We also use additional variables after the signs '+' and '-' in our superscripts whose interpretation proceeds in accord with the conventions for adding and subtracting in algebra. For example, the symbol 't(+n + (-m))' signifies 'n - m units to the right of the origin,' whereas the symbol 't(-n + (-m) + (-o))' signifies 'n + m + o units to the left of the origin.' Thus the ordered sequence of representations on the vertical line (i.e., the trunk) of the act tree in Figure 3 indicates that throwing the man occurs before causing the train to hit the man, which occurs before preventing the train from killing the men. As we discuss in §8.1, this notational system for representing temporal structure also adopts an important convention, which is to date an action from its time of completion. See Mikhail (2000) for further discussion.
4 The elements of intentional battery in tort law are more complex and vary among commentators and jurisdictions (e.g., Dobbs, 2000; Shapo, 2003).
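As a purely illustrative aside (not part of the original study), the superscript time-index arithmetic described in note 3 can be sketched in a few lines of Python. The function name `time_index` and all of the sample offsets below are invented for this example.

```python
# Hypothetical sketch of note 3's time-index convention: a complex
# act-token representation carries a signed integer offset from the
# origin t(0), and a compound superscript like t(+n + (-m)) combines
# its signed terms by ordinary integer addition.

def time_index(*offsets: int) -> int:
    """Combine signed superscript offsets, e.g. t(+n + (-m)) -> n - m."""
    return sum(offsets)

# t(+n + (-m)) with n = 3, m = 1 lies 2 units to the right of the origin.
assert time_index(+3, -1) == 2
# t(-n + (-m) + (-o)) with n = 1, m = 2, o = 1 lies 4 units to the left.
assert time_index(-1, -2, -1) == -4

# Ordering acts on the trunk of an act tree by (invented) completion
# times, per the convention of dating an action from its completion:
acts = {
    "throwing the man": -2,
    "causing the train to hit the man": -1,
    "preventing the train from killing the men": 0,
}
assert sorted(acts, key=acts.get)[0] == "throwing the man"
```

Only the integer bookkeeping is modeled here; nothing in the paper's deontic analysis depends on this sketch.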
For the purposes of this study, we characterize battery as touching without consent, intentional battery as touching without consent used as a means to a given end, and foreseeable battery (a neologism) as touching without consent embedded in an agent's action plan as a side effect (see §2.1). These are not meant to be legally adequate definitions, but they are adequate given our present purposes.
5 An alpha level of .05 was used for all statistical tests.
6 The exact breakdown of participants and scenario pairs is listed below. Each participant received a questionnaire with one of the following ten pairs, with the number of participants in each condition listed in brackets:
• Pair #1: Bystander Problem (Hank) and Footbridge Problem (Ian) [10]
• Pair #2: Bystander Problem (Hank) and Modified Bystander Problem (Karl) [5]
• Pair #3: Footbridge Problem (Ian) and Modified Footbridge Problem (Luke) [5]
• Pair #4: Footbridge Problem (Ian) and Modified Bystander Problem (Karl) [5]
• Pair #5: Bystander Problem (Hank) and Modified Footbridge Problem (Luke) [5]
• Pair #6: Modified Bystander Problem (Karl) and Modified Footbridge Problem (Luke) [5]
• Pair #7: Trolley Problem (Charlie) and Bystander Problem (Hank) [5]
• Pair #8: Trolley Problem (Charlie) and Footbridge Problem (Ian) [5]
• Pair #9: Trolley Problem (Charlie) and Modified Bystander Problem (Karl) [10]
• Pair #10: Trolley Problem (Charlie) and Modified Footbridge Problem (Luke) [10]
7 Recall that two subjects in Experiment 2 did not provide information about their gender. Both were given a Consensual Contact scenario. One judged the action to be permissible and one judged it to be impermissible.
8 We thank Fei Xu and Yaoda Xu for their assistance in collecting this data.
9 For one notable effort in this direction, see Hauser, Cushman, Young, Jin & Mikhail (manuscript submitted for publication).
10 According to our hypothesis, this action is judged to be impermissible because its bad effects are perceived to be disproportional to its good ones.
11 Differences between the two scenarios are underlined and italicized to make them more noticeable. Participants were given questionnaires without these markings. The Ned-Oscar pair and its looped-track design were also inspired by a notable debate over the principle of double effect in the philosophical literature (see Thomson, 1985; Costa, 1987).
12 This is merely an implication of the temporal order of good effects, bad effects, and batteries in these representations, which it may be recalled is what led us to create the Ned-Oscar pair in the first place.
13 Differences between the two scenarios are underlined and italicized to make them more noticeable. Participants were given questionnaires without these markings.
14 To clarify, we assumed that respondents would represent Mark as intending the bad effect but not the good effect. A more complex scenario would be one in which the agent has both intentions and the opportunity to hit two birds with one stone, so to speak. We leave investigation of this alternative for another occasion.
15 Although we do not pursue the issue here, the breakdown of responses by gender did appear significant. 7 of 8 girls and all 7 of the boys in the Foreseeable Battery condition judged the action constituting foreseeable battery ("Charlie's turning the train") to be permissible. By contrast, 1 of 6 girls in the Intentional Battery condition judged the action constituting intentional battery ("Dr. Brown's cutting up the patient") to be permissible, but—surprisingly—a majority of boys (5 out of 9) disagreed.
These results suggest, albeit provisionally, that boys may be slower than girls in arriving at the standard adult view that certain deontological violations are impermissible. More research is needed to clarify this issue.
16 The same holds, of course, for other cognitive domains, like vision, musical cognition, face recognition, and so on. In each of these domains, we take for granted—and can discover empirically—that experimental subjects are consistently incapable of articulating the operative principles on which their intuitive judgments are based. Moral cognition appears to be similar to other cognitive domains in this respect, that is, to be both principled and intuitive, involving patterns of "unconscious inference" (e.g., Helmholtz, 1867/1962).
17 So conceived, the presumption of good intentions appears related to the so-called "first principle of practical reason," according to which "good is to be done and pursued and evil avoided" (Macnamara, 1990; Mikhail, 2000).
18 The "Self-Preservation Principle" holds that if an agent's doing something to a moral patient necessitates killing her, then the moral patient would not consent to it (Mikhail, 2000). The principle may presumably be overridden in certain circumstances, but we do not address that topic here.
19 Like Petrinovich and colleagues, Greene et al. (2001) do not appear to investigate deontic knowledge as such, because instead of asking subjects to decide whether a given action is morally permissible, they ask whether the action is "appropriate." See Greene et al. 293 (5537): 2105 Data Supplement—Supplemental Data at http://www.sciencemag.org/cgi/content/full/293/5537/2105/DC1 (last visited 9/25/2001).
That this question appears inapposite can be seen by considering the analogous inquiry in the study of language: asking whether a linguistic expression is "appropriate" rather than "grammatical." Chomsky (1957, p. 15) emphasized the importance of distinguishing grammatical from closely related but distinct notions like significant or meaningful, and the same logic appears to apply here. Additionally, Greene et al. (2001) do not provide evidence that trolley intuitions are stable, systematic, or widely shared. Instead, they merely report that "most people" say that one ought to turn the trolley in the Trolley Problem but one ought not to push the man in the Footbridge Problem, and they then note in passing that "[p]articipants' responses to versions of the trolley and footbridge dilemmas were consistent with the intuitions described above" (Greene et al., 2001, p. 2108, n. 11). Further, whether one ought to perform a given action is distinct from whether the action is morally permissible, and the authors conflate this crucial distinction. Finally, none of the intuitions studied by Greene and colleagues appear to qualify as "considered judgments" in Rawls' (1971) sense—that is, as moral judgments "in which our moral capacities are most likely to be displayed without distortion" (Rawls, 1971, p. 47)—because all of their probes are phrased in the first-person (e.g., "You are on a footbridge over the tracks … Is it appropriate for you to push the stranger?") (emphasis added). This not only contravenes Rawls' sensible warning that the theorist's data set should exclude those judgments which "are likely… to be influenced by an excessive attention to our own interests" or which are given "when we are upset or frightened, or when we stand to gain in one way or the other" (Rawls, 1971, p. 47).
It also appears to stack the deck in favor of the authors’ hypothesis that “variations in emotional engagement” (Greene et al., 2001) are responsible for generating this particular class of moral intuitions.