
AI & Soc (1995) 9:29-42 © 1995 Springer-Verlag London Limited

Ethics is Fragile, Goodness is Not

Fernando Leal University of Guadalajara, Mexico

Abstract: This paper first illustrates what kind of ethical issues arise from the new information, communication and automation technology. It then argues that we may embrace the popular idea that technology is ethically neutral or even ambivalent without having to close our eyes to those issues and, in fact, that the ethical neutrality of technology makes them all the more urgent. Finally, it suggests that the widely ignored fact of normal responsible behaviour offers a new and fruitful starting point for any future thinking about such issues.

Keywords: Skills; Deskilling; Communication; Control; Privacy; Identity; Ethical neutrality; Discourse; Responsibility

Graham: Harry doesn't know any better. He's never thought of anybody but himself.
Billie: Who does?
Graham: Oh God, Billie, millions of people do.

Born Yesterday, the film

Although the prestige philosophers once enjoyed in general society is pretty worn out by now, a kind of secret hope sometimes lingers that they may still be able to offer good advice or even answer the big questions about what the good life is (Williams, 1986). This hope is bound to be disappointed for several reasons, not the least among which is that there are no ethical recipes and never have been. On the other hand, while I would like to maintain that there is some useful work for philosophers to do in our increasingly complex societies, if only we pull ourselves together and dare to see what is in front of our eyes, I would also like to persuade my readers that they should not wait for philosophers to build an ethical theory free from any possible objection or immune to abuse. Ethical issues always arise in concrete settings, and nobody is better qualified to attend to them with the required subtlety and feeling for detail than the persons directly acquainted with those settings. Remote managers and bureaucrats, let alone philosophers, are in comparison rather like the proverbial elephant in the china shop. But don't ethical knowledge and moral thinking require professional skills? The classical moral philosophers, most famously among them Immanuel Kant, did not think so.¹ And they were right. Only the insidious dualisms mushrooming in our modern industrial life could produce the absurd idea of "ethical expertise".²

The particular subject of this paper is the new information, communication and automation technology, or ICA technology for short; and its argument runs as follows: (1) ICA technology is related to a lot of ethical issues, some of which are wholly new and all of which are at least partly new; (2) we may embrace the popular idea that technology is ethically neutral or even ambivalent without having to close our eyes to those issues; in fact, the ethical neutrality of technology makes them all the more urgent; and (3) the starting point for a new thinking about them has to be the widely ignored fact of normal responsible behaviour.

Dr Frankenstein's Ethical Legacy: Five Examples

Different writers may want to classify the ethical issues arising from ICA technology in different ways (see e.g. Ermann et al., 1990). My favourite five categories are all in terms of loss: what we are losing are skills, communication, privacy, control and, last but not least, our own sense of self (or our identity, to use the more fashionable expression in the social sciences). Although each one of these complex ethical issues can be said to be a direct consequence of (some use of) ICA technologies, it could be argued that there are also causal connections between some of them, which reinforce the original impact of technology. Figure 1 suggests some of those further connections, which I hope to explore in a future publication. For lack of space, what follows is a very brief review of the issues considered in isolation.

Perhaps the most widely publicised ethical issue is that associated with the concept of deskilling. Ever since Braverman published his famous book there has been a great debate as to whether ICA technology has effectively destroyed the old skills of workers by first incorporating them in ever more sophisticated systems and then letting these be run or supervised by unskilled operators (Wood, 1982, 1989). But the arguments purporting to show that technological developments are unethical show a great deal of diversity. The simplest argument just says that through ICA technologies jobs are lost, unemployment rises, and more people become welfare dependent while their lives lose the meaning associated with working. On the other hand, it is also said that automation creates a new kind of dualism: some people get the more interesting work of, say, designing the systems, whereas other people have to perform the unskilled jobs, such as entering data routinely into computers or thoughtlessly following the instructions issued by the system at ever higher speeds (Garson, 1988). This dualism is considered to be bad in itself, because it reinforces old or fosters new divisions in society, some of which are based on differences of class, race or gender. But it also has the consequence that the particular jobs in the lower ranges are soul-destroying, inhumane and debasing. A yet different line of argument says that the products of such a new division of labour tend to be of a much poorer quality than in the good old days. This might be considered bad in itself, but it has yet another consequence which is sometimes mentioned, viz that the standards of excellence of the consumer or end user become lowered in the process. Finally, at a slightly more philosophical level it might even be argued that creating systems, such as expert systems, which incorporate the knowledge and skills of real people coping with real situations effectively destroys such knowledge and skills (Cooley, 1987; Dreyfus and Dreyfus, 1986). Some argue that this is bad in itself, whilst others point out the terrible consequences which might ensue if in the future there should be a shortage of energy preventing the new systems from working properly. As far as I can see, these are the arguments purporting to show the unethical consequences of deskilling. There are, of course, other people who claim it is all very much exaggerated, that automation actually eliminates only jobs which were uninteresting in the first place, and that all deskilling is accompanied by processes of reskilling, whereby new knowledge and skills are born into the world (Wood, 1982, 1989; Hirschhorn, 1984; Ropohl, 1985). Wherever the truth may lie, moral philosophers manage to keep very quiet about these debates.³

Fig. 1. Some causal connections between the five identified complex ethical issues arising from the new ICA technology. (The dotted lines are rather more tentative than the continuous ones.)

New communication technologies have arguably a paradoxical effect in that they either change the actual process of communication or even destroy it (see Ord, in this issue). This sort of problem was probably first experienced when the very first communication technology appeared, viz writing (Havelock, 1982). We know that writing and reading are acquired tastes and demand the development of special skills to cope with them. These skills go hand in hand with the creation of new codes, registers and jargons; written language is very different from spoken language, a fact which has a particular effect upon ordinary human communication by speech. And this effect becomes even more marked when writing is supported by printing. We all know from United Nations reports how widespread functional illiteracy is, even in industrial societies, and what the consequences for education, training, and cultural division and discrimination are. Now the temporal distance between the invention of writing and the invention of printing embraces more than two millennia, whereas the distance between the invention of printing and the development of new communication technologies spans only a few centuries. First we had the telegraph, the gramophone and the daguerreotype, then the telephone, the radio and moving pictures, later the television. And now we have all these as enhanced by satellite communications, plus the personal computer, the modem, the fax, email, the video, and so on. Communication, and thereby human life itself, has been changing at an incredible pace. It can be argued that our very being, character and personality are changing. There is talk of new varieties of artificial 'autism', and of the inability to cope with as much information as is being produced by the minute. There is talk of the effects of visual violence on the souls of children and young people. There is talk of visual and acoustic pollution in our towns and cities. All these and other things can be said to be very relevant to ethics, but they have been, and still are, by and large ignored by our moral philosophers.

A third complex issue has to do with privacy, or rather the loss thereof. It has been known for many years that ICA technologies allow for a rapid transfer of information, some of which used to be the property of individual people in the old days. Thus you subscribe to a journal and in some mysterious way you start receiving all sorts of offers from all sorts of people who know who you are, where you live and what you do. This may be mildly annoying in some cases, and words like 'junk mail' have been coined to talk about it. Yet quite a few people are worried about what might happen if some of this private information ends up in the wrong hands. There is a new mistrust growing in industrial societies about answering polls and surveys. New laws have been passed to prevent the worst, but they may not be enough. When communication technologies enter the picture, you even see things like sexual harassment via electronic mail. Finally, at the workplace we find quite a lot of monitoring of people's work. The new managers are capable not only of counting one's keystrokes and phone calls, but also of listening in when one is talking to clients to see whether one is really following the instructions of the newest expert system; why, they can even analyse one's general patterns of performance! Up to a few years ago this sort of monitoring affected workers only on the shopfloor or at the counter; then it reached middle management; but recently it has begun to affect the work of the big bosses themselves. They don't like it, naturally, but they were rather naive if they thought they were going to be exempt. Absolutely no job is safe from this kind of sophisticated surveillance. So the invasion of privacy is something moral philosophers should be worrying about, but they don't seem much interested.

The invasion of privacy is, of course, only the beginning. The information gained about people can, of course, be used to manipulate and control them (Garson, 1988; Ermann et al., 1990). On the other hand, people are supposed to control things themselves, and this is becoming increasingly difficult because of the sheer complexity of some of the tasks assigned. First of all, a breakdown in power supply might disrupt the functioning of the new system and thereby cause lots of trouble, sometimes suddenly forcing operators to take desperate steps to remedy a situation for which they are scarcely prepared. When such things happen at critical moments, such as during heavy traffic in sophisticated transport systems, accidents may occur resulting in loss of lives and property (for further elaboration of this problem, see Shipley, this issue). Yet it has been powerfully argued that human beings are not 'biologically designed' to cope with these or similar situations, which are becoming increasingly common in the 'nonstop society'. The new ICA technology makes it possible, and even imperative at least under capitalism, for workers all over the world to labour under extremely stressful conditions and at times of the day which may run counter to their own internal clocks. This creates an unprecedented danger which the economy does not address, preferring as it does to invest in systems rather than in people (Moore-Ede, 1993). And if it escapes the economy, it is no wonder that it also escapes professional philosophy.

Finally, one of our most cherished values has to do with our sense of self. It can be argued that all the things listed above have an impact on our identity going well beyond the changes they effect in the ways we communicate and gather or transfer information, and even beyond the ways we work (hence the middle position of the 'loss of identity' circle in Fig. 1). Namely, it can be argued that the very fact that our computers and systems are so clever invites comparison. On the one hand, people can get quite intimidated by ever more complicated software, faulty design of control buttons or levers, and hard-to-read instruction manuals (Norman, 1989). Their self-confidence falters and they may fall prey to depression and stress. But there may be a more direct, probably deeper and partly independent influence of ICA technologies on our sense of self. Comparison between people and systems produces, as we know, the so-called computer metaphor, which is a sophisticated elaboration of the older machine metaphor. Now the machine metaphor as such was rather harmless in comparison. After all, it is difficult to feel jealous of a motorcycle or a car because they can run faster than us, or of an airplane because it can fly and we can't. But when machines start to think, or at least when we believe they do, it is far more difficult to retain our poise. We are tempted to describe ourselves in a new way. The point is, of course, that according to some authors this redescription is all wrong and has tremendous ethical implications. It has even been argued that the unethical aspects of information, automation and communication here briefly reviewed are ultimately dependent on the computer metaphor; that we could never design the jobs we are designing, or pry into people's lives the way we do, or suffer the television programmes we suffer, if we didn't think of people as machines, and particularly as computers (Rosenbrock, 1989, 1990, and this issue). If this is so, then a certain loss of identity will not only be influenced by the above-mentioned four losses but will actually have an impact upon them through the peculiar kind of technology we have (Fig. 2 is a possible partial representation of this). For the purposes of this essay, it doesn't matter whether these claims are true or not.⁴ The important thing is that our smartest pundits and gurus working in artificial intelligence and the cognitive sciences, and of course also their philosophical allies, do not give a damn about ethical issues. So their use of the computer metaphor can be safely described as ethically neutral. This leads us to the general question of the ethical neutrality of technology.

Is Technology Neutral?

A thesis that has been defended by politicians, business persons, scientists and engineers is that 'there is nothing wrong with the tools per se', or to put it in a terminology which has become familiar from an old debate in the social sciences, that technology is value-free. The social science debate actually didn't revolve around a statement but rather around the demand that social science should be value-free, a demand which was designed to prevent political commitments from interfering with scientific objectivity. I suppose the thesis as related to technology could also be framed as a demand that technology should be value-free, because politics would otherwise prevent it from working. So working in technology would be the equivalent of objectivity in science. Sham technology which doesn't do what it is supposed to do but only benefits politicians is not what we want. We want it to work; but of course, how we use it is a completely different thing: we can use atomic technology to kill people or to cure cancer. This is up to us and the politicians.

Fig. 2. Possible causal connections between loss of skills and loss of identity in relation to the new ICA technology.

Regardless of whether it is a statement or a demand, the following discussion might become easier if we refer to the above thesis as neutralism. By far the most popular argument against neutralism counters that there are at least some technologies, like those specifically pertaining to nuclear weapons, which can only be used wrongly. Inasmuch as this argument depends on the further premise that nuclear weapons are inherently destructive, one could argue that it is just a particular application of the standard philosophical argument against neutralism, viz that our technologies embody our values, whether we are aware of the fact or not. In a parallel fashion, some people might be prepared to argue that, say, expert systems will destroy existing knowledge and skills however we design them. Thus nuclear weapons and expert systems would be ethically wrong in themselves. The counter-argument would of course be that expert systems may destroy knowledge but do not have to. We can find a better use for them and design them so that human life and work is enhanced instead. If we generalise this position, we would then say that any technology can be used both rightly and wrongly. This corresponds to the most popular form of neutralism; but it seems to me to be a bit too strong. Maybe there are inherently bad technologies after all. And anyway I don't want to go into debates as to whether the atom bomb or expert systems are capable of ethically impeccable application. So I propose to settle for a weaker version, viz that any technology can be used wrongly. Following standard philosophical jargon we might call this position weak neutralism.

Now, enemies of neutralism, as I mentioned before, want to insist that technologies have inbuilt values. Although they usually have in mind values they disagree with, unless they are totally opposed to any kind of technology they necessarily imply that there might be technologies which embody the right values. An example might be a computer program built in such a way as to save effort when completing our annual income tax return. The point is that such a program may eventually be used to deskill people, to destroy valuable knowledge or even to put an innocent person in prison. It is clear that both the good and the bad guys can use practically any technology for their own purposes, which will eventually result in good and bad (or wrong) uses made of it. In fact, technology is not only independent of the high ethical standards and integrity of its designers or inventors, it is also independent of the moral qualities of its users, because, as we know, any action can have bad consequences for all or some of the people affected by that action. Indeed, given the vagaries of human intentions and the reality of unintended and perverse effects, I suggest that we have to accept that weak neutralism is true.

However, there is a further consideration emerging from an unprejudiced acceptance of weak neutralism. Even weak neutralism presupposes that we can distinguish good and bad consequences of technology. The question is whether we can do that in a way that really satisfies everyone involved. I said before that any action can have bad consequences for all or some of the people affected by that action. But what very commonly happens is that some actions have good consequences for some people and bad consequences for others. This is a question which has to be addressed when considering the ethical issues raised by technology.

Barbara Garson's book, The Electronic Sweatshop (1988), is an impressive source of examples of job design and expert systems which destroy skills and cause anxiety, stress and loss of self-confidence. Some of her cases have to do with ICA technology and some do not. But even the latter, she argues, obey the same kind of logic. For Garson, a certain kind of job design is always the harbinger of technological changes: you first change the job, then you introduce the systems. Her case is powerfully argued and profusely illustrated, so that one wonders whether she can see any positive effect accruing from ICA technology. She presents findings from interviews with all sorts of people at all levels, who either explicitly articulate their anger at the workplaces made possible by ICA technology or else unwittingly betray in some other way the horror of those new jobs. But there is a very interesting exception: her interview with former secretary Sophia Crandall in Chapter 7. The topic of this chapter is the changes in the office which started taking place in the 1980s, when the personal secretaries of managers were separated from the latter and integrated into a secretarial department of their own, a rearrangement of jobs made possible by the introduction of word processing. Managers who thus lost their 'office wife', personalised assistance and quite a bit of status complained just a little bit louder than the secretaries themselves, who became more strictly monitored and controlled under the new system. But then Sophia Crandall's interview makes visible a hitherto unnoticed aspect of the new arrangements in the office. She says that managers resent the fact that women have their own department, usually run by one of them instead of by a male, so that bosses can no longer impose extra-official duties on secretaries or harass them any more. The secretarial department, on the other hand, gives women a new sense of their own power and importance with respect to the general output of the company; at the same time, they now have more control over their own time and duties than they ever had when 'monogamically' attached to just one manager.

This relatively simple example proves that we shouldn't be too quick in judging whether a given consequence of the new ICA technology is good or bad. There may be cases in which a group of people will find that ICA technology has displaced them or made their work more difficult, whereas another group of people will instantly welcome it because it benefits them enormously. Recent ethical theories which say moral philosophy should be based on what is called 'discourse' (Habermas, 1983) are relevant here. By that is meant a kind of discussion which does not seek to control or manipulate other people but to bring genuine differences of opinion to full expression; reasoning instead of force is then used to achieve some form of reconciliation which respects as fairly as possible all interests involved in a given situation. There is, of course, no unique method for reaching agreement, or even compromise, say in the workplace, when the system gets very complicated (Cropanzano, 1993). And the deeper one looks, the more complicated things get. Thus, most conflicts and controversies which have a moral edge are doubtless connected with the relatively simple clash of particular personal or corporate interests, but it is becoming increasingly clear that at least some of them have to do with a deeper struggle between different or even incompatible value systems. The most obvious of these differences emerge from the multiethnic and multicultural nature of today's societies, which is caused by global communication as well as by migration; migration being in its turn caused not only by politics but also by economic factors connected with technological progress.⁵ Less obvious but immensely important are the differences within what would appear to be a single culture, as was first argued by Machiavelli in the 16th century (Berlin, 1955), then by Montesquieu and Steuart in the 18th century (Hirschman, 1977), and more recently by several authors writing from quite diverse perspectives (see, e.g., Hampshire, 1989; Klein and Eason, 1991; Jacobs, 1993).⁶ Clearly, 'discourse' isn't easy, and we shouldn't be naive about that or believe good will is all we need. Tolerance and good judgement are rare skills, which we will have to cultivate much further before we can make any real progress. In this context, I think that we still have much more to learn from the kind of discourse which Socrates introduced into general philosophy. But a discussion of this topic would go far beyond the limits of this paper.⁷

But can ethical reasoning and ethical discussion deliver the goods by themselves? In trying to promote them, we may easily forget the most important lesson which weak neutralism can teach us. Bad consequences of our actions and our use of tools may be just a contingency we have to learn to live with, but the same thing is true of ethics as long as we think of it as a kind of theory. It makes no difference whether the inventor or user of a new technology is not just an ordinary human being but actually a moral philosopher, even a professor of ethics. Even if the theory were somehow part of a master plan underlying our technology, weak neutralism warns us that such a technology, like any other, is liable to misuse of one sort or another. Yet technologies are not always misused. We have not yet said anything about why this should be so, although it is, in my opinion, the other half of the ethical story and by far the most vital one.

Normal Responsible Behaviour: A Neglected Quantity

Third World countries are famous for a kind of corruption which seems to be endemic to the social system. There may or may not be a significant difference in the amount of corruption actually existing in a Third World country such as Mexico and a First World country such as Britain, but a bad reputation has at least one consequence which often escapes notice. I was once teaching logic to a group of high school students in Mexico when something they said made it clear to me that they actually thought that nobody in our country abides by the law. I was amazed by the idea and spent the rest of the hour trying to prove by means of examples that such a state of affairs could simply not be true, that social life is impossible unless the majority of people's actions conform to the written law and also obey all sorts of implicit norms and rules. Thus I pointed out to them that they would have been unable to come to school if it hadn't been for a bus driver who for the most part obeyed public regulations; in fact, it was necessary for most people driving that morning to do so, otherwise the traffic would have been in chaos. I also pointed out to them that even before going out to take the bus, all sorts of other people would have had to do their jobs, starting with those who collected the garbage from the streets and their own parents who woke them up and prepared breakfast for them, and ending with the farmers who produced the food their parents then prepared, the merchants and grocers who made sure that food was of good quality, transported it across the country and put it on sale in the right places, and so on. After an hour or so of describing all those things, they had to accept that, in spite of all the corruption there might otherwise be, such a statement as 'nobody abides by the law in Mexico' is simply outrageous. Most people must just be doing their duty most of the time, even in Mexico.

Simple as this statement might appear to be, no moral philosopher has given it the weight it should have. As the celebrated zoologist J.Z. Young (1951) once said, it is not rationality or consciousness which makes us the peculiar biological species that we are, but rather that intricate network of cooperation, on a steadily increasing space-time scale, which constitutes social life. What I am calling 'cooperation' seems to have two aspects, a collective aspect and an individual one. The collective aspect is what social scientists commonly refer to as cooperation but which I would prefer to call 'coordination' (Argyle, 1991). Coordination ranges from the intricacies of face-to-face conversation to the complexities of telecommunications, and from simple localised division of labour in a workshop to the international transfer of capital and human resources. It is an amazing thing to look at, and trying to understand how it works can be very difficult indeed. But it does work, and it does make us human. On the other hand, there is an individual aspect of cooperation without which coordination would not be possible. I propose to call this 'normal responsible behaviour'.⁸ Of course, it could equally well be argued that normal responsible behaviour is also impossible without coordination. I am ready to concede that; but the fact is that coordination is something which at least social scientists are ready to recognise and research into, whereas normal responsible behaviour is receiving much less attention than it should. However that may be, the fact that coordination and normal responsible behaviour necessitate each other does not affect the argument.

I would like to suggest that normal responsible behaviour should be the centrepiece of ethical consideration. There was a famous writer who once said that all great literature is about a human misfortune, and that nobody writes about happiness because it is so terribly boring. A similar gestalt effect applies to normal responsible behaviour. It is always bad conduct which makes the headlines (no news is good news, as they say). The result is that we become blind to the normality of cooperative social life, viz to that which happens most of the time. We are obsessed with evil, sin and human error. This is as true of literature as it is of moral philosophy and social science. And goodness strikes us only under the guise of heroism, even if it is a tragic failure. I don't want to dismiss heroic deeds, but I think that we need heroes only insofar as the system has broken down. Thus we should be thinking more of how to design systems so as to make heroism as unnecessary as possible. But we can only do that if we pay sufficient attention to normal responsible behaviour.

To push my argument a bit further, I would like to say that normal responsible behaviour embodies human goodness at its best. Normal responsible behaviour is very robust and very reliable, so reliable in fact that badly designed systems are sometimes prevented from breaking down only because responsible human operators save the situation by doing the right thing at the right time.⁹ But of course normal responsible behaviour, like all things human, is not infallible. It works most of the time but not always. And one of the reasons why it sometimes doesn't work is the way we design the system (Moore-Ede, 1993, and almost all contributions in this issue). To see this more clearly, I think we have to go into a bit of philosophising, for which I offer my apologies in advance.

My main thesis, then, is that the embodiment of human goodness in normal responsible behaviour makes it invisible to the usual ethical theorising and psychological or sociological analysis. As far as I can tell, there was only one philosopher who actually tried to make the right connection here, viz Socrates, who, having been a skilled craftsman himself, pointed out again and again that human goodness is akin to a skill. This point was partially recovered by Aristotle when he distinguished the practical philosopher from the theoretical philosopher. The theoretical philosopher is a person who can think her way through mazes of argument, dissect pros and cons, and come up at the end of all this with a neat, well-grounded logical solution to some philosophical question, say how movement is possible, what the ultimate components of matter are, or whether the universe has a beginning in time. The practical philosopher, by contrast, is a person who has become wise, i.e. who hasn't just thought about philosophical problems or come to know the right solutions to such problems, but a person who incarnates philosophy in all her dealings with the social and political world around her; a person who enacts philosophy, who makes it work in the real world. The problem is that this insight was drowned by Aristotle's own vision of the good life as contemplative and intellectual. In fact, we scholars and scientists have inherited this prejudice, which in some way or other infects all current models underlying cognitive science and their practical applications within the new technology (Shipley and Leal, 1991; Leal and Shipley, 1992).

What I therefore would like to suggest is that we should reassess Socrates and even perhaps go beyond him in saying that goodness is not just a skill among others, but that it is an integral part of all human skills, effectively embedded in any one of them. I don't think that we can extricate ordinary goodness from ordinary skills. Of course, there is slipshod craftsmanship, skills can occasionally be misused, and ordinary goodness as embedded in a skill is far from infallible. However, a familiar point from the psychology of skills is that skills are not supposed to hit the target all the time but only most of the time (Singleton, 1978). I want to suggest that what is true of ordinary skills is also true of human goodness. In fact, this is bound to be the case if my suggestion that they belong together is anywhere near the mark. The important thing is that such limited success is usually enough. Being able to hit the target only most of the time does not make ordinary skills and ordinary goodness fragile, except if the system puts inordinate pressure upon them. This leads to the importance of keeping the reality of normal responsible behaviour in mind when applying the new technologies.

One of the old philosophical dreams was the attainment of absolute certainty and absolute perfection. The whole history of philosophical thinking can be read that way (Dewey, 1929). Now the form in which such certainty and perfection was to be attained was the theoretical form. Practice and experience were just too accidental and variegated to serve the purpose of such a magnificent dream. Our own century has seen such revolutions in theory that it is nowadays commonplace to think of theories as fallible and replaceable. But it hasn't quite taught us to think of ordinary human experience and practice, of ordinary human judgement and skill, in one word of ordinary human goodness, as the only genuine article. I think that by doing so we might yet be able to design better systems; systems, that is, which better enable people to do what they would try to do anyway. Doctors, psychologists and ergonomists are fond of depicting themselves as in a perpetual struggle with engineers to protect human operators from unnecessary hazards (Oborne et al., 1993; Moore-Ede, 1993). This is, of course, largely although not completely true. On the one hand, some engineers have put up a valiant fight to advance human-centred technology (Rosenbrock, 1989, 1990; Cooley, 1987, this issue; Eason et al., this issue). On the other hand, the truth is that most people involved in human factors, whether they are engineers or not, have the deplorable habit of describing the human operator as a worrisome 'source of error' and/or 'the weakest link' in the system; consequently they constantly strive to 'close the loop' and 'design the man out' of the system. At the same time, however, designers and consultants implicitly and even unthinkingly take for granted what I've been calling normal responsible behaviour. When all is said and done, this means that they rely on the common decency of human operators, who will indeed do their sometimes desperate best to protect the health, safety or even life of the people immediately affected by any malfunctioning of their precious systems. I wonder how many of them really obey the Golden Rule of putting themselves in other people's shoes.

Conclusion

In conclusion, if technology is ethically neutral in the sense that it is liable to misuse, then ethics under the dispensation of powerful new technologies is a very fragile thing indeed.¹⁰ This makes ethics all the more precious, meaning by 'ethics' not only ethical theory in the abstract but also the 'moral' structures and the inbuilt 'values' of, say, technology.¹¹ Nevertheless, I want to insist here that it is ethics, and not human goodness, which is fragile, contrary to what has recently been suggested (Nussbaum, 1986). Believing in the fragility of goodness comes, I think, from the strong influence that the tragic vision of humankind and human life still has upon our imaginations.¹² As against this tragic vision, I would like to insist that people, ordinary real people, do not normally lead tragic lives, and so their goodness is a firm rock upon which we can build and use ever better technologies.

Acknowledgements

Part of the above material was used in a seminar I gave on 'Language, Ethics, and the Philosophy of Work' at the Organisational Studies Area (Department of Economics) of the Metropolitan University in Mexico City in November 1993. My heartfelt thanks go to all participants and especially to the organisers, Dr Luis Montano and Eduardo Ibarra, for their helpful comments. I am also very grateful to Dr Rene Saran and Dr Patricia Shipley for reading the first draft of this paper and improving both its form and its content. The Society for the Furtherance of the Critical Philosophy (London) provided financial assistance and a congenial environment which allowed me to finish the work at leisure.

Notes

1. When someone tried to criticize Kant by saying that his 'categorical imperative' was only a formulation of common human knowledge, he replied that he never intended it to be anything else, and that it would be ridiculous to try to pose as a moral master to humankind, as though ethics were something a philosopher could invent from scratch (Kant 1786, Preface).

2. These dualisms have been explored in recent papers by the guest-editors of this special issue (Shipley & Leal 1991; Leal & Shipley 1992; see Shipley, this issue).

3. The reader will remark that this as well as the next four paragraphs reviewing the ethical issues arising from ICA technology always finish by indicating that contemporary moral philosophy is guilty of neglect. This theme is developed further in the introduction to this special issue.

4. Thus, Kenneth Gergen has interestingly argued that we shouldn't mourn the loss of a "stable center of identity": by losing it the world stands a fair chance of getting rid of self-righteousness, bigotry and jingoism. For a popular summary of Gergen's ideas, see "The decline and fall of personality" in Psychology Today, vol. 25, no. 6, November-December 1992, pp. 59-63.

5. People working at what might be called the 'cultural interface' - social workers, educators, and so on - thus acquire a particular relevance in helping us to sort things out and achieve 'discourse'. Like engineers, doctors and ergonomists in system design, those persons are best qualified to think about ethical issues arising from ICA technology in their particular workplaces.

6. I suspect that the distinctions between the right and the good, or between duties and virtues, or between justice and care, are "splits of (ethical) truth", if I may be allowed to use a phrase from classical German philosophy. This would explain the increasingly acrimonious tone of many recent debates in moral philosophy, in which people tend to conceive of each other as ethical morons. But this whole subject goes beyond the scope of this paper.

7. The contributions by Eason et al. and Ord to this issue are in this context exemplary, as observed in the introduction to this special issue.

8. This theme is explored in several chapters of Oborne et al. 1993, even if the term 'normal responsible behaviour' does not explicitly appear in any of them (see also Shipley & Leal 1991, Leal & Shipley 1992).

9. Notice that I am not asking the question why (most) people are (for the most part, most of the time) good. I don't know the answer to that question and probably no one else does. However, it is not necessary either to ask that question or to answer it in order to see that (a) as a matter of fact, people are good; (b) things in general run relatively smoothly for everyone because people are good; (c) people "save the situation" very often because they persist in being good despite inhuman arrangements; and (d) everyone would be better served if people were enabled to persist in being good without having to "save the situation", i.e. if inhuman arrangements were improved. That is all we need to know.

10. A good example is genetic screening as presented by Sanmartin in this issue.


11. It should be clear by now that by 'ethics' I understand a framework or structure within which things are done. Such a framework is constituted by ideas, values, precepts, codes, and whatnot; and so is an intellectual structure, or rather several intellectual structures, in Rosenbrock's sense of the word (see his contribution to this volume). Calling such a structure "fragile" means first of all that it is liable to abuse at any given time. One example among many is the value shift in hospital life examined by Ord in this issue.

12. "Only in very wicked world would decency inspire awe", says quite justly Mary Mothersill (1984) when criticizing Kant's temporary lapse into estheticizing ethics. Anyone can succumb to this ever present temptation; and maybe it is a good thing, too, or even avoidable when dealing with literary works or art (such as those Nussbaum analyzes in her beautiful book); but we should resist it when discussing real- life issues.

References

Argyle, Michael (1991) Cooperation: the basis of sociability. London: Routledge.

Berlin, Isaiah (1955) Against the current: essays in the history of ideas. London: The Hogarth Press.

Cooley, Mike (1987) Architect or bee? The human price of technology. New edition. London: The Hogarth Press.

Cropanzano, Russell, ed. (1993) Justice in the workplace: approaching fairness in human resource management. Hillsdale, NJ: Lawrence Erlbaum Associates.

Dewey, John (1929) The quest for certainty: a study of the relation of knowledge and action. London: Allen & Unwin.

Dreyfus, Hubert L. & Dreyfus, Stuart E. (1986) Mind over machine: the power of human intuition and expertise in the era of the computer. New York: The Free Press.

Ermann, M. David, Williams, Mary B. & Gutierrez, Claudio (1990) Computers, ethics, and society. New York: Oxford University Press.

Garson, Barbara (1988) The electronic sweatshop: how computers are transforming the office of the future into the factory of the past. New York: Simon & Schuster.

Habermas, Jürgen (1983) Theorie des kommunikativen Handelns. Frankfurt: Suhrkamp.

Hampshire, Stuart (1989) Innocence and experience. Cambridge, MA: Harvard University Press.

Havelock, Eric (1982) The literate revolution in Greece and its cultural consequences. Princeton, NJ: Princeton University Press.

Hirschhorn, Larry (1984) Beyond mechanization: work and technology in a postindustrial age. Cambridge, MA: The MIT Press.

Hirschman, Albert O. (1977) The passions and the interests: political arguments for capitalism before its triumph. Princeton, NJ: Princeton University Press.

Jacobs, Jane (1993) Systems of survival: a dialogue on the moral foundations of commerce and politics. London: Hodder & Stoughton.

Kant, Immanuel (1786) Critik der practischen Vernunft (Critique of Practical Reason). Riga: Hartknoch.

Klein, Lisl & Eason, Ken (1991) Putting social science to work: the ground between theory and use explored through case studies in organizations. Cambridge, UK: Cambridge University Press.

Leal, Fernando & Shipley, Patricia (1992) 'Deep dualism'. International Journal of Applied Philosophy, 7, 33-44.

Moore-Ede, Martin (1993) The 24-hour society: the risks, costs and challenges of a world that never stops. London: Piatkus.

Mothersill, Mary (1984) Beauty restored. Oxford: Clarendon Press.

Norman, Donald A. (1989) The design of everyday things. New York: Doubleday.

Nussbaum, Martha (1986) The fragility of goodness: luck and ethics in Greek tragedy and philosophy. Cambridge, UK: Cambridge University Press.

Oborne, D.J., Branton, R., Leal, F., Shipley, P. & Stewart, T., eds. (1993) Person-centred ergonomics: a Brantonian view of human factors. London: Taylor & Francis.

Ropohl, Günter (1985) Die unvollkommene Technik. Frankfurt am Main: Suhrkamp.

Rosenbrock, Howard, ed. (1989) Designing human-centred technology: a cross-disciplinary project in computer-aided manufacturing. London: Springer-Verlag.

Rosenbrock, Howard (1990) Machines with a purpose. Oxford: Oxford University Press.

Shipley, Patricia & Leal, Fernando (1991) 'The active self: beyond dualism'. Newsletter of the British Psychological Society, History and Philosophy Section, 13, 4-16.

Singleton, W.T., ed. (1978) The analysis of practical skills. Lancaster: MTP Press.

Williams, Bernard (1986) Ethics and the limits of philosophy. London: Fontana.

Wood, Stephen, ed. (1982) The degradation of work? London: Hutchinson.

Wood, Stephen, ed. (1989) The transformation of work? London: Routledge.

Young, J.Z. (1951) Doubt and certainty in science. Oxford: Clarendon Press.

Correspondence and offprint requests to: Dr F Leal, Paseo de los Robles 4169-4, Guadalajara JAL 45110, Mexico.