BOOK SYMPOSIUM
With great power comes great responsibility
John Forge: The responsible scientist: a philosophical inquiry. Pittsburgh: University of Pittsburgh Press, 2008, pp. 272, US$39.95 HB
Bernard Gert • Nicholas Evans • Heather Douglas •
John Forge
Published online: 13 March 2010
© Springer Science+Business Media B.V. 2010
Bernard Gert
This is a very good book, carefully argued and well written. Forge summarizes the
account of common morality that I provide in my books on morality in Chapter 8 of
his book and uses that account to support his claims about how scientists should
behave. It is one of the best summaries of my account of common morality that I
know of and Forge applies it with great care to his recommendations about how
scientists should behave. I have some very minor objections to some of what Forge
says about common morality; I mention them only because I have a personal
interest in preventing even the smallest misunderstanding of my account of common
morality. Indeed, I agree with almost all of Forge’s applications of my account of
common morality, which is very heartening to me, for it would be quite troubling if
my account of common morality led to recommendations about how scientists ought
to behave that I did not agree with. My most serious objection to what Forge says is
B. Gert (&)
Department of Philosophy, Dartmouth College, 6035 Thornton Hall, Hanover, NH 03755, USA
e-mail: [email protected]
N. Evans
Centre for Applied Philosophy and Public Ethics, Australian National University, Canberra,
Australia
e-mail: [email protected]
H. Douglas
Department of Philosophy, University of Tennessee at Knoxville, Knoxville, USA
e-mail: [email protected]
J. Forge
Unit for History and Philosophy of Science, Sydney University, Sydney, Australia
e-mail: [email protected]
Metascience (2010) 19:29–43
DOI 10.1007/s11016-010-9322-6
to his category of ‘‘weapons research,’’ which he says that scientists have a duty not
to participate in. I shall discuss this objection at the end of these comments.
First I shall comment on what I take to be a small but common mistake in
summarizing my account. Most people agree that the first five moral rules, those prohibiting killing, causing pain, disabling, depriving of freedom, and depriving of pleasure, are genuine moral rules, i.e., that it is immoral to violate any of these rules without an adequate justification. Most also agree that three of the next five rules,
‘do not deceive,’ ‘keep your promise,’ and ‘do not cheat,’ are also genuine moral
rules. Some, like Forge, have problems with the final two rules, ‘obey the law’ and
‘do your duty.’ These last two rules are indeed more controversial than the first
eight, but I think that some of the reasons that people have trouble with these last
two rules are due to a mistake in their description of the first eight rules. Forge
claims, and it is common to claim, that all of the first eight rules are prohibitions, whereas the last two require positive actions. Closely related to this is the claim
that the first eight rules only prohibit actions, not omissions. However, both of these
claims are false. The rule, ‘keep your promise,’ is not a prohibition although it can
be rephrased as one, ‘do not break your promise,’ and it does prohibit omissions. In
both of these respects ‘keep your promise’ is no different from the last two rules that
can be rephrased as ‘do not break the law’ and ‘do not neglect your duty.’ All three
rules have exactly the same formal characteristics, so the above criticisms of the last two rules cannot be taken to show that they are not genuine moral rules without
showing that ‘keep your promise’ is also not a genuine moral rule.
Forge’s criticism of the last two rules turns on his application of the concept of
impartiality (pp. 143–144). He is quite right that the first eight rules must be applied
impartially. People are prohibited from killing, causing pain, or deceiving anyone anywhere unless they have an adequate justification. However, Forge seems to hold
that the last two rules cannot be followed impartially, i.e., if I have a duty to help
one person, that is the person I must help, and my duty does not extend to helping
everyone impartially. For example, a nurse has a duty to provide pain medication to
her patients; she is not morally required to provide pain medication to all that need
it. He does not seem to appreciate that if I promise to help someone, then unless I
have an adequate justification I must help him, and am not morally required to help
others in the same situation, e.g., if I promise to drive him to the airport, I must drive
him to the airport and am not morally required to drive all who would benefit from
my driving them to the airport. Impartiality is maintained, but it is a more complex
kind of impartiality. Rephrasing the rules so that they are prohibitions makes this point clearer. I am morally required not to violate the rule, 'do not break your promise,' with regard to everyone everywhere, but I have only made a promise to this one
person. Obeying the rule concerning keeping promises with regard to this person is
compatible with my not violating it with regard to anyone anywhere. The same is
true with regard to the rule ‘do your duty.’ When rephrased as ‘do not neglect your
duty,’ doing what I have a duty to do with regard to one person is compatible with
my not violating this rule with regard to anyone anywhere.
Forge may not use the word ‘‘duty’’ as literally as I do. He talks about scientists’
negative duties not to cause harm (or increase the likelihood of harm being caused)
and of scientists’ positive duties to prevent harm. He realizes that talking about
‘positive duties’ in this way is a technical use for he says in the first note to Chapter
10, (p. 254) that he uses ‘‘positive duties’’ to refer to what scientists are encouraged
to do by the moral ideals. He does not seem to realize that it is just as much a
technical use of ‘‘negative duties’’ to refer to what scientists are required to do by
the moral rules. Both of these technical uses are misleading because they suggest
that scientists have duties that other people do not have. But the whole point of
Forge’s application of common morality to scientists is to claim that common
morality applies to scientists in exactly the same way that it applies to everyone
else. He is simply denying the claim, which some people have made, that when scientists are doing pure science they are not responsible for what they should know increases the likelihood of harm being caused. Forge is quite right in maintaining
that scientists do not normally have duties in the way that doctors and lawyers do, so
that when we are concerned with the responsibilities of scientists, we are not
concerned with whether they are violating the rule ‘do your duty.’ Rather we are
concerned with their responsibility for the harmful outcomes that their scientific
work may contribute to.
Forge, correctly it seems to me, does not hold that scientists are responsible for
the psychological harm that results from showing that some common religious
beliefs are false. However, he does not try to explain why they are not responsible
for this psychological harm when such harm is not only foreseeable, but is generally
actually foreseen. I suggest that a useful analogy is that of a doctor telling a patient
that he has an incurable and fatal disease. The doctor knows that providing the
patient with this information will result in psychological harm, and knowing this the
doctor has a responsibility to report the information in the kindest, gentlest manner
he can, but he has no responsibility to deceive or to withhold that information. Of
course if the doctor intends to cause the patient psychological harm, then he is
behaving immorally, so that sometimes the same act, reporting bad news, can be
immoral if done in order to cause psychological harm but not immoral if one only
foresees but does not intend the psychological harm that will result from his telling.
Of course, most of the time when it is immoral to intentionally cause a bad outcome,
it will also be immoral to do the same act when one only knows, but does not intend
that a bad outcome will result, but this is not always true.
Reporting the relevant facts about a patient’s prognosis is part of a doctor’s duty,
and so unless there are exceptional circumstances, a doctor must report these facts
even when he knows that this will result in the patient suffering some
psychological harm. It is not implausible to hold that, unless there are other
harmful outcomes, scientists qua scientists have a duty to report or publish the facts
that they have discovered even when they know that publishing those facts will
result in some people suffering psychological harm because their favored beliefs
have been shown to be incorrect. Would any rational person favor a public moral
system that prohibited reporting or publishing the facts because this would cause
psychological harm to those whose favored beliefs were challenged by these facts?
Simply explicitly asking this question makes it clear that this kind of harm cannot be
taken to support a prohibition on publishing well-supported scientific facts. This
leaves it open whether rational persons would favor a public moral system that
prohibited reporting or publishing the facts when this would contribute to the risk of
the development of some technology that causes far more than the harm of having
one’s cherished beliefs challenged.
I shall now discuss the criticism that I mentioned in the first paragraph, that is,
my criticism of Forge’s category of weapons research. Forge says, ‘‘ I understand
‘weapons research’ to refer to research conducted with the intention of designing or
improving weapons systems or the means for carrying out activities associated with
the use of weapons, such as command, control, and communication.’’ (p. 153) I do
not deny Forge’s claim that the primary aim of weapons research is to ‘‘provide and
improve the means for fighting wars’’ (Ibid.). Thus, the primary aim of weapons
research is to provide and improve the means for harming people. Scientists who
engage in weapons research therefore need to show that there is an adequate reason
for engaging in such research. Forge correctly points out that once these means for
fighting wars have been discovered, they are available to everyone and so the
particular purpose that the scientist might be concerned with, e.g., enabling his
country to have a better chance of winning their defensive war, is not sufficient
justification for engaging in weapons research. This argument has its greatest force
when one concentrates on the example that Forge does concentrate on, namely the
scientific research necessary to build the first atomic bomb.
When one is concerned with developing new weapons of mass destruction such
as the hydrogen bomb, or various biological weapons, Forge has a very strong
argument. Given the enormous amount of harm that might result from the scientific
research involved in developing new weapons of mass destruction, no amount of
harm prevented by a particular country being better able to defend itself is an
adequate justification for doing scientific research that might provide the means for
developing a new weapon of mass destruction. However, when one is concerned
with developing an airplane that has greater defensive or even offensive capabilities,
then it is not clear that providing the means for developing such an airplane has the
potential to cause more harm in the indefinite future than the harm that is prevented
in the particular situation that one is now facing. Weapons research is too broad and varied a category for all equally informed impartial rational persons to agree that the harm that is prevented in the particular case is not an adequate justification for risking the harm that might result in the indefinite future.
This criticism of Forge is based on the very system of common morality that he
uses to show that scientists are responsible for publishing papers that they should
know increase the likelihood of harmful outcomes. I agree that scientists are
responsible for engaging in pure research that they should know may provide the
means for improving the killing of people and of destroying their cities. But being
responsible for such outcomes does not mean that they should not engage in those
activities, for there may be secondary or derivative outcomes that provide adequate
justification for engaging in these activities. Forge's argument establishes only that all impartial rational persons would agree that scientists are morally prohibited from engaging in weapons research when it may provide the means for producing new or improved weapons of mass destruction. This criticism does not invalidate
any other of Forge’s arguments, but rather supports his primary aim of showing that
scientists are morally responsible for the outcomes of their research when they
should know that these outcomes could be harmful. It is unfortunate that critics
often pounce on a relatively unimportant but overstated conclusion in order to cast
doubt on the extremely well supported and important arguments that the author has
presented. My minor criticisms of Forge are meant to provide a preemptive defense
against such criticisms, for I believe that Forge has provided extremely strong
arguments for scientists to take responsibility for research even when they can
legitimately claim that they intend only to be doing pure research.
Nicholas Evans
The moral duties of a scientist toward the larger community have been of concern
since the inception of modern science. The current state of scientific progress makes
this issue of paramount importance, and John Forge’s The Responsible Scientist is a
timely reengagement of the issues of responsibility and duty as they pertain to the
scientist. Taking care to continually swing back to historical examples, Forge takes
us through his account of responsibility, and how it ties in with a ‘‘common
morality’’ to develop a model for the responsible scientist.
Forge’s book is in four parts: Outcomes and Responsibility; Looking Back;
Looking Forward; and Science and Group Responsibility. Outcomes and Respon-
sibility begins with an analysis of the actions of a scientist as they relate to a
scientist’s responsibilities over the outcomes of their research. Within this section is
a canvassing of the Manhattan project, where Forge introduces the achievements of
a few figures who will play a role throughout the book: Szilard and his attributed
conception of the bomb; Frederic Joliot-Curie's publication on neutron multiplication of uranium; the Frisch-Peierls memorandum; and Neddermeyer's work on
implosion. Forge’s conversational style does a lot to introduce these varied episodes
and he provides us with some excellent historical context which is later incorporated
into his arguments. The crux of part one is perhaps found in Forge’s assertion that
science is not neutral and removed from the world. While certainly not a new claim,
Forge’s demand for both a ‘‘backwards-looking’’ and ‘‘forwards-looking’’ approach
to scientific responsibility in order to secure a safer future is well received.
‘‘Looking Back’’ is where much of the philosophical meat of The ResponsibleScientist lies. Forge develops his account of responsibility as three elements of control.
These are causal responsibility, omission of action, and omission of prevention. This is
achieved through study of the actions of Frederic Joliot, in both historically accurate
and alternative fictional examples. Demonstrating the conflict between the willfulness
of the Merton’s disinterested scientist (Joliot) and the socially aware scientist
(Szilard), Forge rejects Strawson’s ‘‘standard view’’ of responsibility as it relates to
intentionality, and presents his modified view. Forge argues that an agent’s
intentionality only plays a role in as much as her preferences are represented by the
outcomes of her actions, and that we must also take into account the possible outcomes of her research when assigning responsibility. Forge asserts that ignorance and uncertainty are not excusable, as they are part of a scientist's role responsibilities. He
then addresses Polanyi’s challenge over the unpredictability of science through a
Kuhnian perspective, stating that over the course of normal science, a scientist is able
to predict the majority of outcomes her work will create.
‘‘Looking Forward’’, as the title suggests, is where Forge puts the theory to work.
Forge introduces Gert’s common morality as an acceptable moral system for
scientists, and calls on the scientists to accept responsibility not only for the
primary but also the secondary effects of their research. In particular, he calls weapons researchers to task for the harms their research causes, and advocates that scientists take responsibility by declining to work in institutions which claim weapons research as a primary aim.
In the final phase of the book, Science and Group Responsibility, Forge devises
an account of the responsibility for science as shared by groups of scientists,
correctly noting that much ‘‘big science’’ occurring today is a collaborative effort.
Drawing on the case of the Exxon Valdez oil spill of 1989, Forge argues that while
scientists are not ‘‘mature’’ in the sense of possessing a collective membership like a
corporation would they nonetheless have collective responsibility over their work.
He also alludes to larger research networks in the corporate sphere, which may incur
group responsibility.
Forge’s conclusions are heartfelt, if somewhat unsurprising. He asserts that
scientists are responsible for their work, not only for intended but also secondary
and derivative outcomes. He goes further to state that, with regard to values, the ethical implications of scientific research should play as much of a role in research as scientifically established values for judging hypotheses.
The Responsible Scientist is a broad, timely addition to a growing number of
publications that are emerging as the state of science rapidly undergoes an almost
mythical transformation, what many are referring to as ‘‘the technological
singularity.’’ John Forge, as much activist as ethicist, champions a more ethical
mode of scientific enquiry, in which scientists work not only for the pursuit of
knowledge wholesale, but also the pursuit of knowledge in seeking the betterment
of humanity as a whole. While his primary case study is considered by many to be a
darker chapter in the history of modern scientific inquiry, he also outlines a hopeful
scenario, in which scientists act not as apolitical, semi-monastic seekers of truth
removed from the life of folk around them, but as champions of a more progressive
society. He seeks a more nuanced approach to reconciling the responsibilities of scientists, and giving an account of their behaviour that departs from Mertonian
norms, without falling into some of the cynicism that Latourian ‘‘scientists who
write papers’’ fall prey to. He neither paints a picture of a mercenary scientist
working in pursuit of a material goal external to the work she does, nor does he
falsely construe scientists as pure, selfless seekers of knowledge.
With this in mind, I have some concerns about a few of Forge’s arguments. I
found his manipulation of Frederic Joliot’s history somewhat disingenuous: I found
at times that I was not sure if he is painting Joliot as a Nazi sympathiser in order to
express a counterfactual, or if he genuinely (and contrary to what we know about
Joliot) believes Joliot intended to aid the Nazis. As it stands, the wide and varied
history of the physical sciences around this time, surely accessible to a historian of
John’s pedigree, would have served him better than this historical fiction.
Forge’s rejection of the consequentialist model of responsibility also struck me as
somewhat odd. Forge’s philosophy is predicated on actions being chosen in order to
prevent harms, and to say a scientist should not pursue a cause of action based on
the harms it may potentially cause sounds (perhaps deceptively so) like a form of
rule-utilitarianism. This is confusing, as Forge rejects rule-consequentialism out of
hand within his book. To be sure, Forge makes an attempt at separating Gert from
consequentialist principles (p. 147), but I don’t find it convincing. As Forge so
rightly points out, the knowledge, and hence power, bestowed upon professionals places them in a privileged position in society. While a doctor may resent being put in a diagnostic role at a dinner party, on seeing a woman suffering a heart attack or seizure in the street her position of power must surely obligate her to help. Although Forge asserts some difference between this and Gert, his
interpretation of Gert in this context seems to be similar if not identical to a rule-
based consequentialism. Highlighting other ways in which a scientist could align
himself with Gert’s moral system outside of a consequentialist view would have
helped me, as it did not seem obvious why Gert (who Forge appears to want to
divorce from consequentialist thought) was the ideal candidate.
My next concern is directed at Forge’s rebuttal of Polanyi. Forge uses Kuhnian
notions of ‘‘normal science’’ to rebut Polanyi, but I think this is somewhat mistaken;
while it may provide a partial rebuttal, as Forge himself says, this normal science is
often diagnostic in nature, particularly in the physical sciences. Forge is looking for
something that applies to all scientists, but it is quite often those at the cutting edge who
are engaged in morally problematic science. John only mentions it in a footnote, but
emerging life sciences technologies sparking the ‘‘dual use dilemma’’ are generating
significant debate about scientific responsibility in both philosophical and policy
circles. Although a model where crisis/new science and normal science occur concurrently may depart from Kuhn's picture, it more accurately describes today's reality. So it can be said that Forge's argument is limited in scope. We can expect a
diagnostician doing radiation statistics (now a common practice, and not the cutting
edge it was 60 years ago) to know where her research is going (often the statistics used
in radiomedicine calculations for dosages); asking a geneticist, however, where her
work on sequencing is going might be a bit more of a stretch. As mentioned above, as
the convergence of multiple branches of scientific inquiry progresses, and the boundary
between Forge’s distinction between the physical and ‘‘other’’ sciences is shown to be
more artificial, what can potentially happen is more uncertain. The term ‘‘singularity’’
should be a hint here. For those not familiar with astrophysics, a gravitational
singularity (i.e. a black hole) is an entity from which information cannot escape, and in fact "disappears" beyond the event horizon of the black hole. Similarly one cannot
determine what happens beyond the event horizon of the ‘‘technological singularity.’’
John Forge has provided us with a fantastic platform from which he, I, and
others will continue to advocate a responsible, educated scientific community aware
of its place in the world. My concerns aside, it’s a thought-provoking piece and well
worth reading for anyone interested in these issues.
Heather Douglas
Controversies have intensified over which science should be done and about what
responsibilities scientists have for the products, both epistemic and technological, of
science. Yet little sustained attention has been paid to the question of what
responsibilities scientists have towards those outside of science, what John Forge
calls ‘‘external responsibilities.’’ This book is thankfully about such external
responsibilities, and thus does not lead us down another examination of the moral
issues surrounding authorship, mentorship, data fabrication, and the treatment of
sentient research subjects.
The book instead focuses on what scientists are responsible for, given the
profound effects science can have on society. Having written on and taught courses
on the topic, I looked forward to reading it with relish. I hoped that a book-length
treatment might shed new and interesting light on the subject. And in some places, it
does. However, the book is more opaque than it should be, particularly given the
author’s hope that scientists might read it. The overall structure only becomes clear
as one nears the end, so for the majority of the book, it is not apparent why the
author is moving through the topics as he is (no overview of the book’s structure is
given). And within chapters and sections, it is not always obvious what the author is
up to, making it dense reading. In addition, the way in which Forge structures his
attention limits the reach of the book. He is focused on science’s role in technology
to the exclusion of its influence in policy-making (which is dealt with cursorily on p.
177), insistent that the main philosophical doctrine to be defeated is a view of
responsibility limited to what we intend to do, and primarily concerned with nuclear
weapons research for his exemplar of scientific activity. As a result, the book has
only glimmers of illumination, rather than the flood lights for which I had hoped.
The book is divided into four sections. The first, ‘‘Outcomes and Responsibility,’’
lays out the basic groundwork of how Forge views science, background on the key
historical examples from the Manhattan Project that reappear throughout the book,
and some ground-clearing on the philosophy of responsibility. On the whole, it is a
long and desultory continuation of the introduction. The second, ‘‘Looking Back,’’
describes the boundaries of what we should or should not hold people responsible
for. The third, ‘‘Looking Forward,’’ provides the moral content for how we should
evaluate the actions of scientists—what we should view as morally acceptable and
what we should hold up as morally ideal. Finally, the fourth section tackles the topic
of group responsibility.
The first section does make some interesting distinctions. The outcomes of
science are described as being either technology or ideas written up in a scientific
paper (although Forge occasionally writes as though technology were the only real
outcome—e.g., p. 47). Pure and applied science are distinguished not by the content
of the research, but rather the intentions of the scientist, whether the aim of research
is the scientific paper (pure research) or a new technological product (applied
research) (p. 16). Because Forge argues that intentions alone do not bound
responsibility, this distinction becomes less relevant to the issue of responsibility
than one might suppose—the scientist cannot claim they were only interested in
truth (and in publishing their papers) to deflect all responsibility if their work is used
to produce easily foreseeable, horrible technologies. While intriguing, Forge's typology is not sufficiently robust to handle science aiming to produce knowledge
intended for use in decision-making (or evidence for use), instead of technology.
This lacuna weakens the book.
The most frustrating section for me was the second one. Forge’s main
philosophical concern here is to defend what he calls a ‘‘wide view of
responsibility’’ for scientists. The alternative on which most of his criticisms fall
is the supposed ‘‘standard view,’’ that we are each only responsible for that which
we intend. While some philosophers apparently defend this view, it is so absurd that
it is difficult to see how it is worth the extended critique Forge develops of it. If the
‘‘standard view’’ were in fact standard, I could toss a lit cigarette into a tinder dry
wood with the sole intention of disposing of it, and I would be in no way morally
responsible for any ensuing wildfire. Yet we would hold such a reckless (if I had
thought about the possibility of wildfire and decided I did not care) or negligent (if I
had not managed to conceive of the possibility of wildfire) person morally
responsible, for I should have thought of and been concerned about such a
possibility. Such expectations of reasonable foresight and concern over unintended
consequences are a standard aspect of our everyday conception of moral
responsibility, and they accrue not just to particular roles, as Forge seems to think
(p. 60), but to everyday actions of moral agents like disposal of incendiary material,
driving cars, and walking dogs. So if you are convinced that we have responsibilities
for unintended outcomes through our responsibilities to be neither negligent nor
reckless, you can skip most of part 2 of the book.
The one thing that seems special to science, whether or not foresight is possible,
is addressed at the end of part 2, but in a less than convincing framework. Forge
critiques Polanyi’s assertion that scientific research is ‘‘doubly unpredictable,’’ that
it is neither predictable what knowledge will be produced nor how that knowledge
can be applied in technologies (p. 118). And well he should. His best argument
consists of pointing out that scientists often can and do make such predictions in
patent applications (pp. 127–132). So Polanyi’s blanket claim cannot hold.
However, we need more than this to proceed; we need to know when we should
expect reasonable foresight and when we should not. In attempting to provide
guidance here, Forge makes the stunning claim that ‘‘the most widely accepted
description of the methodology of science [is] Thomas Kuhn’s’’ (p. 119). I would be
very surprised if this were actually the case. The hay Forge makes from this
assertion is that scientists can be expected to have predictable results within
paradigmatic science, but not within revolutionary science. Given the difficulties of
deciding when a scientist is in one phase rather than the other, and the problems
philosophers have had in pinning down what counts as a paradigm, this guidance is
not very helpful. Looking more closely at actual scientific practice, such as claims
made for research projects before they are performed and discussions among
scientists about the potentials for various research directions would provide more
concrete assistance.
The third part of the book, dealing with the content of what we should want from
our scientists, tackles a difficult set of topics in an intriguing way. Starting with
Bernard Gert’s ‘‘common morality,’’ Forge argues that scientists have clear negative
duties to not do harm, and some positive moral ideals to prevent harm (or one could
say, even if Forge does not, to do good). Forge happily does not base these
obligations on some professional ethic (p. 145), and instead grounds the duties in the
obligations we all share. The account of positive duties seems spot on—that they are
ideals that we cannot always be expected to meet, and that attempting to maximize
the good we could do all the time is simply too demanding (p. 174). Ultimately,
Forge argues that it would be preferable if all scientists attempted to do practical
good with their work, and thus to do applied research rather than pure (p. 233). But
he does not require this of scientists.
The requirements of the negative duties, on the other hand, provide the basis for
the most provocative claim in the book—that weapons research is never morally
permissible. Here, I think Forge’s arguments ultimately fall short. He begins by
characterizing all weapons as means to harm others, and scientists who work on
weapons as providing a means to inflict harm on others (p. 155). This
characterization may seem uncontroversial at first, but because scientists have a
duty not to harm, and Forge suggests that knowingly providing a means for others to
inflict harm is also a violation of this duty, he uses this characterization to carve
off any excuses for the moral acceptability of weapons research, of any kind, at any time.
But his arguments are inadequate at several points. In arguing that there is no
such thing as a defensive weapon (which would be characterized not primarily as a
means to inflict harm but as one to prevent harm, and thus may be morally permissible to
pursue), he merely provides one example of a supposedly defensive weapon (missile
defense systems) that could also be construed as an offensive weapon (p. 160).
While this is an important example, Forge's claim that there are no genuinely
defensive weapons is a universal claim, not an existential one, and thus one example is
insufficient. He needs to do more to make his case.
Many scientists justify weapons research on the grounds that their weapons have
a deterrent effect, thus reducing harms. Forge dismisses this angle by claiming that
the deterrent effect of a weapon is derivative of its ability to produce harm, and thus
the ability to produce harm has ‘‘priority’’ (p. 162). While this might make sense
ontologically, it is unclear why the assertion of such priority is devastating to the
justification for weapons research. And this points to the underlying problem—that
of how to characterize the research one is doing. How one describes and understands
a technology is more malleable than Forge seems to realize. Just think of how we
can understand Alfred Nobel’s dynamite—as an explosive that is a tool to build
roads or as an explosive that destroys things we value. What is the correct
description?
Finally, he provides a critique of scientists who do weapons research in what
might be thought the ideal historical context: a national emergency during a just
war. Surely, here it is justifiable to assist with the development of new
weapons, when faced with the potential annihilation of one's people and culture.
Surprisingly, Forge argues that it is nearly impossible for this to be the case, because
in order for such research to be justified, even in this extreme context, the scientist
would have to be assured that the knowledge would be well and fully destroyed
once the emergency has passed, lest the weapon produce new unjustified harms in
new contexts (pp. 164–166). Thus, Forge argues that utilizing science to the fullest
in self-defense is not morally acceptable.
Perhaps this apparently absurd result comes from a complete rejection of
consequentialism, argued for elsewhere in the book (pp. 62–67, 147, 174). But if so,
one wonders if the full implications of the demand not to do harm, and not to
provide means for others to do harm, are appreciated by Forge. If this maxim is
upheld strictly even in the face of a need for self-defense, should it not also be
upheld in other areas of research not labeled ‘‘weapons research’’? For even if the
primary purpose of an area of research is not so problematic (a device to cause
harm), should not the maxim to avoid providing a means to do harm make scientists
wary of many areas of biotech, nanotech, and other cutting-edge research? If
scientists must be assured that the knowledge they produce will not harm before
producing it, it seems that more than weapons research should be halted. Forge
cannot rely upon the intentions of scientists to shield them from these concerns—he
has already argued that intentions do not protect scientists from responsibility for
foreseeable consequences of their research. The brush with which he condemns
weapons research is so crude and broad, it seems it will tar much of the scientific
enterprise.
The fourth part of the book focuses on group responsibility. Forge distinguishes
between shared responsibility (p. 196) and collective responsibility (p. 204). For
shared responsibility, individual members of a group share responsibility for the
group's actions, and the total responsibility is simply the sum of that borne by
the individual members. Collective responsibility, on the other hand, requires a clear
group structure, with well-defined roles, and involves the submission of one’s
individual decision-making (usually voluntarily) to the processes of the group. It is
in collective responsibility that Forge finds ‘‘an account of genuine group
responsibility’’ in that the individuals in the group may have their own responsibility
plus that of the group (p. 204).
Forge develops his account using the example of the Exxon Valdez, and then
disappointingly, after all this philosophical work, only hints at how it may apply to
science (pp. 220–221). In order for collective responsibility to take hold, one needs
a fairly clear and historically robust institution with well-defined goals, policies, and
procedures. Few scientific groups have such characteristics, although Forge thinks
they might occur in research institutes. He has not looked into the matter, however,
and he does not discuss scientists working within corporations (the obvious place to
consider group responsibility) until the final concluding chapter (pp. 228–233), and
then there is no mention of group responsibility. One wonders what all the
philosophical work was for.
Historically minded readers will be irked by errors in the book. Logical
empiricism is presented as the worst sort of caricature (pp. 123, 224). Rudner's
famous 1953 paper is misunderstood as being primarily about cognitive values,
whereas Rudner clearly focused on ethical values (p. 225). And Forge seems to
think that the Manhattan Project was the first time scientists openly agonized about
their external moral responsibilities (pp. 34–35). Students of the First World War
know that chemists of that era experienced similar throes of conscience.
Despite all these flaws, it is helpful to have the framework of negative and
positive duties for scientists as a starting point for future discussions of how
research projects should be selected and/or encouraged. And it is a welcome thing to
have a full treatise, even one as flawed as this, on the bookshelf for such an
important topic. Hopefully, it will spur further engagement by philosophers and
scientists alike.
Author’s response
John Forge
As an editor and subject editor of this journal I was involved with many review
symposia, and was delighted to see how (nearly always) authors expressed their
sincere thanks to their reviewers for the time and trouble taken and the insights
provided. I’m happy now to be in the role of an author, and to be able to express my
thanks to Bernie Gert, Nick Evans and Heather Douglas for their reviews. Indeed,
with a review like Douglas’s I’m glad we decided to have a right of reply with
symposia when we introduced this format in the first issue of the new series of
Metascience in 1992.
Evans and Douglas both give an overview of the book, but I would still like to state
my overall aims. These were to provide reasons to accept the ‘wide view’ of
the scientist's backward-looking responsibility and the ‘two-tiered’ account of
forward-looking responsibility. The key to the latter was my re-reading of Gert’s
work on common morality. It is the only theory of normative ethics that has made
good sense to me, and it makes such good sense that I had no hesitation in using it. I
am therefore extremely pleased to have Gert review my book and endorse (for the
most part) my use of his theory. I am grateful to him for his remarks on impartiality
and the last two moral rules. I do not believe this changes my conclusions
substantially, but I will make use of common morality again and so it is important
for me to state the theory correctly. Gert does not go the whole way with me on
weapons research, but more on that later.
The second part of the book was devoted to establishing the wide view, which
holds that we can be responsible not only for what we intend to do and foresee that
we do, but also what we should have foreseen that we do. Douglas thinks that this is
entirely obvious and anyone who accepts it can skip most of Part 2 of the book,
namely a little over a quarter of it. This remark, and some others in her review, alas
show that Douglas is deeply out of sympathy with the aims of the book, in spite of
her comment that she has ‘‘written and taught courses on the topic’’. On the first
page of the book, in the preface, I make it clear that the book deals with the
philosophical aspects of science and responsibility, albeit in a way that was
informed by HPS-STS issues. So in Part 2 I was engaged in trying to justify the wide
view of responsibility and work out its implications. The business of philosophy is
not just to state claims, however mundane and obvious they might seem to some, and
then move on; it is to give reasons for accepting such claims.
The main issue about backward-looking responsibility is just what it is about our
acts and omissions that makes us responsible for them, given that we know what it is
to be responsible (the latter was the topic of Chapter 3 of Part 1). Part 1 comprises
three chapters that Douglas dismisses as a ‘‘long and desultory continuation of the
introduction’’. With regard to actions that are intended, matters about responsibility
seem to be reasonably clear: since intentions stand as reasons for actions, then
unless the agent has an excuse, she is responsible for what she does because
(roughly) she states that that is what she intended to do. But what of merely foreseen
actions and of those that the agent was in fact ignorant of but should have foreseen?
Douglas thinks it is obvious that something like the wide view is correct, but why is
it correct? I go to some lengths to try to show that we should accept the wide view
because I am not aware of any adequate justification of it, in particular in regard to
unforeseen actions. That is why Part 2 has four quite long chapters, and why I need
to address the alternative view of Strawson, Mackie and Wallace, and so on.
When it comes to unforeseen actions, that is to say, actions that can be attributed
to the agent ‘under a description’ that she was not aware of, there is the issue of just
what limits these. What we do has causal consequences and there are ways of
incorporating these into the description of the action, what Feinberg calls playing
the accordion, but the agent may not be aware of these even though we think she
should be. Evans takes me up on my discussion here for my use of the case of Joliot-
Curie and Szilard’s proposed moratorium on publication of nuclear data in 1938,
which I use as an example in Chapter 5. He thinks perhaps I don't do Joliot-Curie
justice. However, I don't mean to paint Joliot-Curie as a Nazi sympathizer, but I
needed an example that I could treat counterfactually. After all, if someone did not
foresee something but should have done, then we need to talk about what things
would have, or might have, been like had that person foreseen what in fact they did not.
So I was discussing the possible ways of talking about what Joliot-Curie did when
he worked on neutron multiplication, such as doing an interesting piece of research,
aiding the Nazis in their quest for weapons, etc.
So, to come back to Douglas, yes, you could skip most of Part 2 if you are not
interested in a philosopher’s account of backward-looking responsibility, but if
that’s the case, you bought the wrong book. Douglas makes a number of further
critical comments about various other ‘flaws’ in the work. I won’t respond to these:
the interested reader can go and read the passages referred to and make her own
judgement as to whether there are flaws there.
Moving on, Gert and Douglas are not convinced by my argument that weapons
research, undertaking the design of new weapons or the improvement of existing
ones, is never justified, whatever the circumstances. Evans does not object to this
claim, but my rejection of consequentialism strikes him as odd, so perhaps I can
infer that he takes it that a consequentialist would come to a different conclusion
about weapons research. Weapons research is the provision of the means to harm,
and so—there is an argument here that I cannot summarise—it requires justification.
It does so because it is seen to come under the scope of the moral rules; these are not
absolute, but breaking them by doing weapons research requires justification. As I
read his review, Gert accepts this characterisation. I then distinguish two possible
kinds of justification: those that depend on the sort of weapon in question and those
that depend on the historical circumstances in which the work is done. The former I
call non-historical justifications and the latter historical.
Weapons that only defend against harm or deter harms would seem to count as
‘good’ weapons and so it seems that work on them would require no justification
besides pointing to their capabilities. I use the example of the Strategic Defense
Initiative to illustrate that one cannot make inferences from weapons’ capabilities,
their mission or tactical employment, to the intentions of those deploying them. A
state bent on aggressive war will need to protect its assets with defensive systems
and will acquire them, and (most likely) will not wage war until it has them. I should
like to have been able to discuss this point more thoroughly in the book, but I would
stress that I was not trying to give a (really bad) inductive argument in the section in
question, but to use the case to illustrate the general point that weapons of all kinds
are used in aggressive, and hence unjust, wars. The idea that there can be deterrent
weapons is easier to deal with. Discounting bluffing, a weapon deters because it can
be used to do certain things, like kill people and destroy their property. Thus
deterrence is what I call a derivative purpose: it is, so to speak, parasitic on the
primary purpose of killing and destroying. Therefore there is no such thing as
a weapon that can only deter.
An historical justification makes reference, for instance, to the particular threats
one is facing at a given time and place, and as Gert clearly sees, it will tend to set the
harm prevented against other possible harms caused. Gert accepts that it is hard to
find justifications of this kind for developing weapons of mass destruction, but for
other ‘more classical’ weapons systems, like planes, he thinks that impartial rational
persons—those who set the standards for justifications in common morality—might
well agree that the harms to be prevented now are not obviously outweighed by
possible future harms caused. And here Gert and I disagree. My claim is that
because weapons research produces knowledge, of how to make the means to harm,
it cannot be known where this will lead. Doing weapons research is not like actually
manufacturing weapons. A given batch of guns, for instance, will eventually wear
out and hence the number of times they can be fired will be limited. But the number
of times a gun of that design can be fired is not limited. I think the onus is on those
who would give the justification for weapons research, and hence they must show
that future harms do not outweigh present harms prevented. And that, I believe,
cannot be done.
Perhaps not surprisingly, this ‘apparently absurd result’ does not convince Douglas,
who thinks I am not aware of the implications of the maxim not to harm. But I do
not deny that self-defence is justifiable nor that defensive wars are justifiable. My
claim is only that weapons research is never justifiable, and hence these other cases
are irrelevant. Douglas then asks about other fields, like nanotech and biotech that
might have implications for weapons research (and so the issue here is not about
actually doing weapons research), and suggests that I tar them unfairly. Recall that I
have a theory of responsibility. If a scientist can foresee or should foresee that her
work can provide the means to harm, then she requires justification to carry on, in
whatever field she works in and whether the research is pure or applied. That is a
consequence of my theory. And it’s one that I welcome. Do we want scientists to
engage in research in fields that can have harmful outcomes and not be responsible
for them and therefore not be obliged to think about them? What we want from
responsible scientists is for them to be sensitive to the possible harmful outcomes of
their work and try to look ahead and see how their work might be used, whether it is
pure or applied, and then act accordingly.
I don’t think the particular moral theory I have adopted makes any difference in
the debate about weapons research, which is perhaps why Evans is surprised that I
reject consequentialism. It does not make any difference in this case because there is
no ‘good’ to be maximised that is not factored into harms prevented. That is to say,
all the work to be done by way of justification is done by balancing harms caused
and prevented, and no positive concept of the good is involved. For instance, no
sensible example could be constructed that took account of things like the pleasure
sadists take from harming as part of a justification. However, when it comes to
positive duties, consequentialism would make us try to balance goods promoted
with harms prevented, and I have yet to see a convincing treatment of the idea of a
universal good. That is one reason, among others, that I reject consequentialism.