
    Education and debate

Human error: models and management

James Reason

The human error problem can be viewed in two ways: the person approach and the system approach. Each has its model of error causation, and each model gives rise to quite different philosophies of error management. Understanding these differences has important practical implications for coping with the ever present risk of mishaps in clinical practice.

Person approach

The longstanding and widespread tradition of the person approach focuses on the unsafe acts (errors and procedural violations) of people at the sharp end: nurses, physicians, surgeons, anaesthetists, pharmacists, and the like. It views these unsafe acts as arising primarily from aberrant mental processes such as forgetfulness, inattention, poor motivation, carelessness, negligence, and recklessness. Naturally enough, the associated countermeasures are directed mainly at reducing unwanted variability in human behaviour. These methods include poster campaigns that appeal to people's sense of fear, writing another procedure (or adding to existing ones), disciplinary measures, threat of litigation, retraining, naming, blaming, and shaming. Followers of this approach tend to treat errors as moral issues, assuming that bad things happen to bad people: what psychologists have called the just world hypothesis.1

    System approach

The basic premise in the system approach is that humans are fallible and errors are to be expected, even in the best organisations. Errors are seen as consequences rather than causes, having their origins not so much in the perversity of human nature as in upstream systemic factors. These include recurrent error traps in the workplace and the organisational processes that give rise to them. Countermeasures are based on the assumption that though we cannot change the human condition, we can change the conditions under which humans work. A central idea is that of system defences. All hazardous technologies possess barriers and safeguards. When an adverse event occurs, the important issue is not who blundered, but how and why the defences failed.

    Evaluating the person approach

The person approach remains the dominant tradition in medicine, as elsewhere. From some perspectives it has much to commend it. Blaming individuals is emotionally more satisfying than targeting institutions. People are viewed as free agents capable of choosing between safe and unsafe modes of behaviour. If something goes wrong, it seems obvious that an individual (or group of individuals) must have been responsible. Seeking as far as possible to uncouple a person's unsafe acts from any institutional responsibility is clearly in the interests of managers. It is also legally more convenient, at least in Britain.

Nevertheless, the person approach has serious shortcomings and is ill suited to the medical domain. Indeed, continued adherence to this approach is likely to thwart the development of safer healthcare institutions.

Although some unsafe acts in any sphere are egregious, the vast majority are not. In aviation maintenance (a hands-on activity similar to medical practice in many respects) some 90% of quality lapses were judged as blameless.2 Effective risk management depends crucially on establishing a reporting culture.3 Without a detailed analysis of mishaps, incidents, near misses, and "free lessons," we have no way of uncovering recurrent error traps or of knowing where the "edge" is until we fall over it. The complete absence of such a reporting culture within the Soviet Union contributed crucially to the Chernobyl disaster.4

    Summary points

Two approaches to the problem of human fallibility exist: the person and the system approaches

The person approach focuses on the errors of individuals, blaming them for forgetfulness, inattention, or moral weakness

The system approach concentrates on the conditions under which individuals work and tries to build defences to avert errors or mitigate their effects

High reliability organisations (which have less than their fair share of accidents) recognise that human variability is a force to harness in averting errors, but they work hard to focus that variability and are constantly preoccupied with the possibility of failure

Department of Psychology, University of Manchester, Manchester M13 9PL

James Reason, professor of psychology

[email protected]

BMJ 2000;320:768-70


Trust is a key element of a reporting culture and this, in turn, requires the existence of a just culture: one possessing a collective understanding of where the line should be drawn between blameless and blameworthy actions.5 Engineering a just culture is an essential early step in creating a safe culture.

Another serious weakness of the person approach is that by focusing on the individual origins of error it isolates unsafe acts from their system context. As a result, two important features of human error tend to be overlooked. Firstly, it is often the best people who make the worst mistakes: error is not the monopoly of an unfortunate few. Secondly, far from being random, mishaps tend to fall into recurrent patterns. The same set of circumstances can provoke similar errors, regardless of the people involved. The pursuit of greater safety is seriously impeded by an approach that does not seek out and remove the error provoking properties within the system at large.

The Swiss cheese model of system accidents

Defences, barriers, and safeguards occupy a key position in the system approach. High technology systems have many defensive layers: some are engineered (alarms, physical barriers, automatic shutdowns, etc), others rely on people (surgeons, anaesthetists, pilots, control room operators, etc), and yet others depend on procedures and administrative controls. Their function is to protect potential victims and assets from local hazards. Mostly they do this very effectively, but there are always weaknesses.

In an ideal world each defensive layer would be intact. In reality, however, they are more like slices of Swiss cheese, having many holes, though unlike in the cheese, these holes are continually opening, shutting, and shifting their location. The presence of holes in any one slice does not normally cause a bad outcome. Usually, this can happen only when the holes in many layers momentarily line up to permit a trajectory of accident opportunity, bringing hazards into damaging contact with victims (figure).
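The probabilistic logic of the model can be illustrated with a minimal simulation. This is a sketch rather than anything from the article itself: the layer count and hole probabilities below are hypothetical, and each defensive layer is treated (unrealistically) as independent. A hazard reaches the victim only in the rare moments when a hole happens to be open in every layer at once.

```python
import random

def penetrates(hole_probabilities):
    """One simulated moment: is a hole open in every defensive layer at once?"""
    return all(random.random() < p for p in hole_probabilities)

def accident_rate(hole_probabilities, moments=1_000_000):
    """Estimate how often an accident trajectory passes through all layers."""
    hits = sum(penetrates(hole_probabilities) for _ in range(moments))
    return hits / moments

# Hypothetical chances of a momentary hole in each layer:
# engineered defences, people, procedures and administrative controls.
layers = [0.05, 0.10, 0.02]

print("Per-layer hole probabilities:", layers)
print(f"Estimated rate of complete penetration: {accident_rate(layers):.5f}")
# With independent layers this is roughly the product 0.05 * 0.10 * 0.02 = 0.0001:
# each added or strengthened layer makes a bad outcome rarer, but never impossible.
```

In practice the layers are not independent: as the next paragraphs explain, a single upstream latent condition can weaken several layers at once, which is why the question to ask after an adverse event is how and why the defences failed rather than who blundered.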

The holes in the defences arise for two reasons: active failures and latent conditions. Nearly all adverse events involve a combination of these two sets of factors.

Active failures are the unsafe acts committed by people who are in direct contact with the patient or system. They take a variety of forms: slips, lapses, fumbles, mistakes, and procedural violations.6 Active failures have a direct and usually short lived impact on the integrity of the defences. At Chernobyl, for example, the operators wrongly violated plant procedures and switched off successive safety systems, thus creating the immediate trigger for the catastrophic explosion in the core. Followers of the person approach often look no further for the causes of an adverse event once they have identified these proximal unsafe acts. But, as discussed below, virtually all such acts have a causal history that extends back in time and up through the levels of the system.

Latent conditions are the inevitable "resident pathogens" within the system. They arise from decisions made by designers, builders, procedure writers, and top level management. Such decisions may be mistaken, but they need not be. All such strategic decisions have the potential for introducing pathogens into the system. Latent conditions have two kinds of adverse effect: they can translate into error provoking conditions within the local workplace (for example, time pressure, understaffing, inadequate equipment, fatigue, and inexperience) and they can create longlasting holes or weaknesses in the defences (untrustworthy alarms and indicators, unworkable procedures, design and construction deficiencies, etc). Latent conditions, as the term suggests, may lie dormant within the system for many years before they combine with active failures and local triggers to create an accident opportunity. Unlike active failures, whose specific forms are often hard to foresee, latent conditions can be identified and remedied before an adverse event occurs. Understanding this leads to proactive rather than reactive risk management.

To use another analogy: active failures are like mosquitoes. They can be swatted one by one, but they still keep coming. The best remedies are to create more effective defences and to drain the swamps in which they breed. The swamps, in this case, are the ever present latent conditions.

    Error management

Over the past decade researchers into human factors have been increasingly concerned with developing the tools for managing unsafe acts. Error management has two components: limiting the incidence of dangerous errors and, since this will never be wholly effective, creating systems that are better able to tolerate the occurrence of errors and contain their damaging effects. Whereas followers of the person approach direct most of their management resources at trying to make individuals less fallible or wayward, adherents of the system approach strive for a comprehensive management programme aimed at several different targets: the person, the team, the task, the workplace, and the institution as a whole.3

High reliability organisations (systems operating in hazardous conditions that have fewer than their fair share of adverse events) offer important models for what constitutes a resilient system.

Figure: The Swiss cheese model of how defences, barriers, and safeguards may be penetrated by an accident trajectory, with hazards at one end and losses at the other



Such a system has intrinsic "safety health"; it is able to withstand its operational dangers and yet still achieve its objectives.

    Some paradoxes of high reliability

Just as medicine understands more about disease than health, so the safety sciences know more about what causes adverse events than about how they can best be avoided. Over the past 15 years or so, a group of social scientists based mainly at Berkeley and the University of Michigan has sought to redress this imbalance by studying safety successes in organisations rather than their infrequent but more conspicuous failures.7 8 These success stories involved nuclear aircraft carriers, air traffic control systems, and nuclear power plants (box). Although such high reliability organisations may seem remote from clinical practice, some of their defining cultural characteristics could be imported into the medical domain.

Most managers of traditional systems attribute human unreliability to unwanted variability and strive to eliminate it as far as possible. In high reliability organisations, on the other hand, it is recognised that human variability in the shape of compensations and adaptations to changing events represents one of the system's most important safeguards. Reliability is "a dynamic non-event."7 It is dynamic because safety is preserved by timely human adjustments; it is a non-event because successful outcomes rarely call attention to themselves.

High reliability organisations can reconfigure themselves to suit local circumstances. In their routine mode, they are controlled in the conventional hierarchical manner. But in high tempo or emergency situations, control shifts to the experts on the spot, as it often does in the medical domain. The organisation reverts seamlessly to the routine control mode once the crisis has passed. Paradoxically, this flexibility arises in part from a military tradition: even civilian high reliability organisations have a large proportion of ex-military staff. Military organisations tend to define their goals in an unambiguous way and, for these bursts of semiautonomous activity to be successful, it is essential that all the participants clearly understand and share these aspirations. Although high reliability organisations expect and encourage variability of human action, they also work very hard to maintain a consistent mindset of intelligent wariness.8

Perhaps the most important distinguishing feature of high reliability organisations is their collective preoccupation with the possibility of failure. They expect to make errors and train their workforce to recognise and recover them. They continually rehearse familiar scenarios of failure and strive hard to imagine novel ones. Instead of isolating failures, they generalise them. Instead of making local repairs, they look for system reforms.

    Conclusions

High reliability organisations are the prime examples of the system approach. They anticipate the worst and equip themselves to deal with it at all levels of the organisation. It is hard, even unnatural, for individuals to remain chronically uneasy, so their organisational culture takes on a profound significance. Individuals may forget to be afraid, but the culture of a high reliability organisation provides them with both the reminders and the tools to help them remember. For these organisations, the pursuit of safety is not so much about preventing isolated failures, either human or technical, as about making the system as robust as is practicable in the face of its human and operational hazards. High reliability organisations are not immune to adverse events, but they have learnt the knack of converting these occasional setbacks into enhanced resilience of the system.

    Competing interests: None declared.

1 Lerner MJ. The desire for justice and reactions to victims. In: McCauley J, Berkowitz L, eds. Altruism and helping behavior. New York: Academic Press, 1970.

2 Marx D. Discipline: the role of rule violations. Ground Effects 1997;2:1-4.

3 Reason J. Managing the risks of organizational accidents. Aldershot: Ashgate, 1997.

4 Medvedev G. The truth about Chernobyl. New York: Basic Books, 1991.

5 Marx D. Maintenance error causation. Washington, DC: Federal Aviation Authority Office of Aviation Medicine, 1999.

6 Reason J. Human error. New York: Cambridge University Press, 1990.

7 Weick KE. Organizational culture as a source of high reliability. Calif Management Rev 1987;29:112-27.

8 Weick KE, Sutcliffe KM, Obstfeld D. Organizing for high reliability: processes of collective mindfulness. Res Organizational Behav 1999;21:23-81.

    High reliability organisations

So far, three types of high reliability organisations have been investigated: US Navy nuclear aircraft carriers, nuclear power plants, and air traffic control centres. The challenges facing these organisations are twofold:

Managing complex, demanding technologies so as to avoid major failures that could cripple or even destroy the organisation concerned

Maintaining the capacity for meeting periods of very high peak demand, whenever these occur.

The organisations studied7 8 had these defining characteristics:

They were complex, internally dynamic, and, intermittently, intensely interactive

They performed exacting tasks under considerable time pressure

They had carried out these demanding activities with low incident rates and an almost complete absence of catastrophic failures over several years.

Although, on the face of it, these organisations are far removed from the medical domain, they share important characteristics with healthcare institutions. The lessons to be learnt from these organisations are clearly relevant for those who manage and operate healthcare institutions.


