

Review

A critical review of methods and models for evaluating organizational factors in Human Reliability Analysis

M.A.B. Alvarenga (a), P.F. Frutuoso e Melo (b,*), R.A. Fonseca (a)

(a) National Commission of Nuclear Energy (CNEN), Rua General Severiano, 90, 22290-900 Rio de Janeiro, RJ, Brazil
(b) COPPE, Graduate Program of Nuclear Engineering, Federal University of Rio de Janeiro, Caixa Postal 68509, 21941-972 Rio de Janeiro, RJ, Brazil

Article info

Article history: Received 1 October 2013; Received in revised form 7 April 2014; Accepted 8 April 2014

Keywords: Human Reliability Analysis; Organizational factors; THERP; ATHEANA; FRAM; STAMP

* Corresponding author. Tel.: +55 21 2562 8438; fax: +55 21 2562 8444.
http://dx.doi.org/10.1016/j.pnucene.2014.04.004
0149-1970/© 2014 Elsevier Ltd. All rights reserved.

Abstract

This work makes a critical evaluation of the deficiencies concerning human factors and evaluates the potential of quantitative techniques that have been proposed in the last decades, like THERP (Technique for Human Error Rate Prediction), CREAM (Cognitive Reliability and Error Analysis Method), and ATHEANA (A Technique for Human Event Analysis), to model organizational factors, including cognitive processes in humans and interactions among humans and groups. Two important models are discussed in this context: STAMP (Systems-Theoretic Accident Model and Process), based on system theory, and FRAM (Functional Resonance Analysis Method), which aims at modeling the nonlinearities of socio-technical systems. These models, however, are not yet being used in risk analysis similarly to Probabilistic Safety Analyses for the safety assessment of nuclear reactors. However, STAMP has been successfully used for retrospective analysis of events, which would allow an extension of these studies to prospective safety analysis.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Organizational factors are addressed in first generation models of human reliability analysis by means of performance shaping factors such as training, experience, procedures, management, communication and culture. Several individual psychological and physiological stressors for humans are also treated by such factors. Organizations are made up of humans, and these models suffer from a chronic deficiency in terms of modeling the cognitive processes in humans. Human error is treated similarly to a physical component error. These models lack a cognitive architecture of human information processing, with cognitive error mechanisms, Swain (1990), Kantowitz and Fujita (1990), Cacciabue (1992), Fujita (1992), Hollnagel (1998).

Second generation HRA methods have some kind of cognitive architecture or cognitive error mechanisms. Organizational factors are taken into account by performance shaping factors. The evolution here was to establish a mapping between these factors and the error mechanisms being influenced or triggered in a given operational context, since not all performance shaping factors influence a specific error mechanism. Thus, one can generate tables of influences between performance shaping factors and error mechanisms, and between these and specific types of human errors associated with a given operational context, for each stage of information processing (detection, diagnosis, decision making and action). In fact, ATHEANA contains comprehensive tables with such interrelationships, NRC (2000). CREAM (Hollnagel, 1998) proceeds similarly.
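To make the shape of such influence tables concrete, here is a minimal sketch of one possible data structure for them. The stage names follow the text above, while the PSF entries, error mechanisms and their pairings are purely illustrative assumptions, not entries from the ATHEANA or CREAM tables.

```python
# Illustrative sketch of an influence table linking performance shaping
# factors (PSFs) to cognitive error mechanisms per information-processing
# stage. All entries are hypothetical examples, not ATHEANA/CREAM data.

INFLUENCES = {
    # stage -> PSF -> error mechanisms plausibly triggered in that stage
    "detection": {
        "workload/time pressure/stress": ["attention narrowing", "alarm masking"],
        "ergonomic quality of interface": ["misreading of indicators"],
    },
    "diagnosis": {
        "quality of training": ["wrong mental model", "fixation on familiar fault"],
        "complexity of required diagnosis": ["premature closure"],
    },
    "decision_making": {
        "quality of procedures": ["rule misapplication"],
    },
    "action": {
        "crew dynamics": ["omission after interruption"],
    },
}

def mechanisms_for(stage: str, active_psfs: list[str]) -> list[str]:
    """Return the error mechanisms whose triggering PSFs are active in a
    given operational context, for one information-processing stage."""
    table = INFLUENCES.get(stage, {})
    found = []
    for psf in active_psfs:
        found.extend(table.get(psf, []))
    return found

# Example: a high-workload context during the detection stage.
print(mechanisms_for("detection", ["workload/time pressure/stress"]))
# -> ['attention narrowing', 'alarm masking']
```

The point of the structure is the conditionality stressed in the text: a PSF is mapped to mechanisms only within the stage and context where it can actually act, instead of being a global multiplier.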

Although the above methods have evolved in matters relating to human cognition, organizational factors do not have a proper model that can highlight the social, political and economic processes that influence such factors, in a similar way as error mechanisms in human cognition. Such processes involve complexity that models of the first or second generation cannot handle properly, Qureshi (2008).

Digital technology systems require an analysis that takes into account complexity not found in analog technology. Digital systems may be at intermediate fault modes before reaching a final failure state that will be revealed to human operators in the human-machine interface. These intermediate states are mostly invisible to operators and can move the system to often catastrophic conditions, where human beings have no awareness of or information about what the system is doing, NRC (2008).

In addition to digital systems, complex systems deal with social, political and economic levels of individual, group and organization relationships (Leveson, 2004a; Qureshi, 2008). Traditional models are based on a successive chain of events, each relating to a previous event causation. This is a strictly linear view of the cause-effect relationship. In contrast to sequential models, epidemiological models evolved later, which distinguish latent failures (design, maintenance, management) that can converge to a catastrophic event when a "trigger" occurs, i.e., by combining operational failures (unsafe acts, active failures) and typical system conditions (operating environment, context), Leveson (2004a), Qureshi (2008), NRC (2008), Dekker et al. (2011).


List of acronyms

ACCI-MAP: Accident Map
AHP/DEA: Analytical Hierarchical Process/Data Envelopment Analysis
APJ: Absolute Probability Judgment
AOE: Analysis of Operational Events
ASEP: Accident Sequence Evaluation Program
ATHEANA: A Technique for Human Event Analysis
BBN: Bayesian Belief Network
BHEP: Basic Human Error Probability
BN: Bayesian Network
BWR: Boiling Water Reactor
CAHR: Connectionism Assessment of Human Reliability
CESA: Commission Errors Search and Assessment
CBDT: Cause-Based Decision Tree
CC: Common Conditions
CODA: Conclusions from Occurrences by Descriptions of Actions
CPC: Common Performance Conditions
CREAM: Cognitive Reliability and Error Analysis Method
DT: Decision Tree
EPRI-HRA: Electric Power Research Institute Human Reliability Analysis
ESD: Event Sequence Diagram
FCM: Fuzzy Cognitive Map
FLIM: Failure Likelihood Index Methodology
FRAM: Functional Resonance Accident Model
FT: Fault Tree
GST: General Systems Theory
HCR: Human Cognitive Reliability
HEART: Human Error Assessment and Reduction Technique
HEP: Human Error Probability
HFACS: Human Factors Analysis and Classification System
HF-PFMEA: Human Factors-Process Failure Mode and Effect Analysis
HFI: Human Factors Integration
HOF: Human and Organizational Factors
HORAAM: Human and Organizational Reliability Analysis in Accident Management
HRA: Human Reliability Analysis
HSC: High Speed Craft
IDAC: Information, Decision, and Action in Crew
IF: Influencing Factor
INTENT: Not an acronym
IPSN: Institute for Nuclear Safety and Protection
JHEDI: Justified Human Error Data Information
K-HPES: Korean-Human Performance Enhancement System
LISREL: Linear Structure Relation
MERMOS: Méthode d'Evaluation de la Réalisation des Missions Operateur pour la Sûreté
MTS: Maritime Transport System
NARA: Nuclear Action Reliability Assessment
NPP: Nuclear Power Plant
NRC: Nuclear Regulatory Commission
ORE: Operator Reliability Experiment
PC: Paired Comparisons
PRA: Probabilistic Risk Assessment
PSA: Probabilistic Safety Assessment
PSF: Performance Shaping Factors
QRA: Quantitative Risk Assessment
SADT: Structured Analysis and Design Technique
SD: System Dynamics
SHARP1: Systematic Human Action Reliability Procedure (enhanced)
SLIM-MAUD: Success Likelihood Index Methodology, Multi-attribute Utility Decomposition
SMAS: Safety Management Assessment System
SPAR-H: Standardized Plant Analysis Risk-Human Reliability Analysis
STAMP: Systems-Theoretic Accident Model and Process
STEP: Sequential Time Events Plotting
STPA: System-Theoretic Process Analysis
THERP: Technique for Human Error Rate Prediction
TRC: Time Reliability Correlation
UMH: University of Maryland Hybrid



These two classical approaches work well when applied to components of conventional (non-digital) systems that have well-defined failure modes and exhibit linear relationships between these failure modes and their causes, even when not expected in the design, since these failure modes are quite "visible". Nonlinear interactions, on the other hand, are unplanned, unfamiliar, unexpected sequences of failures and, in addition, invisible and incomprehensible, Leveson (2004a), Qureshi (2008), Dekker et al. (2011).

In complex nonlinear interactions, failures do not arise from the relationship (which may not be exhaustive) of component failure modes and their causes, but "emerge" from the relationships between these components during operational situations. To study these interrelationships, it is necessary to identify the laws that rule them. The only model that can do that is a model based on systems theory, which aims to study the laws that govern any system, be it physical, biological or social, Leveson (2004a), Qureshi (2008), Dekker et al. (2011).

Human factors should be evaluated at three hierarchical levels. The first level concerns the cognitive behavior of human beings during the control of processes that occurs through the man-system interface. Here, one evaluates human errors through human reliability techniques of the first and second generations, like THERP (Swain and Guttman, 1983), ASEP (Swain, 1987), and HCR (Hannaman et al., 1984) (first generation), and ATHEANA (NRC, 2007) and CREAM (Hollnagel, 1998) (second generation). At the second level, the focus is on the cognitive behavior of human beings when they work in groups, as in nuclear power plants. The focus here is on the anthropological aspects that rule the interaction among human beings. At the third level, one is interested in the influence of organizational culture on human beings as well as on the tasks being performed. Here, one adds to the factors of the second level the economical and political aspects that shape the company organizational culture. Nowadays, Human Reliability Analysis (HRA) techniques incorporate organizational factors and organization levels through performance shaping factors.


Our aim in this paper is to make a critical evaluation of the deficiencies concerning human factors and evaluate the potential of quantitative techniques that have been proposed in the last decade to model organizational factors, including the interaction among groups.

Three important techniques are discussed in this context: STAMP (Leveson, 2002), based on system theory, ACCI-MAP (Rasmussen and Svedung, 2000), and FRAM (Hollnagel, 2004, 2012), which aims at modeling the nonlinearities of socio-technical systems.

This paper is organized as follows: Section 2 presents a literature review on human reliability analysis and organizational factors. Section 3 addresses the candidate models for treating human reliability analysis and organizational factors in the context of probabilistic safety assessment. Section 4 displays a critique of non-systemic approaches for human reliability analysis in face of the current socio-technical system approach. Section 5 discusses how THERP (Swain and Guttman, 1983) takes organizational factors into account. Section 6 is dedicated to discussing the same subject with respect to SPAR-H (Gertman et al., 2005). This same subject is discussed in Section 7 in the context of ATHEANA (NRC, 2000). FRAM (Hollnagel, 2004, 2012), which models the nonlinearities of socio-technical systems, is discussed in Section 8. STAMP (Leveson, 2002), an approach based on system theory, is the subject of Section 9. Features and pros and cons of the discussed HRA techniques are the subject of Section 10. Conclusions and recommendations are presented in Section 11.

2. Literature review

Papers on the use of HRA models that include organizational factors have been searched for. The period covered spans from 1999 to 2012. The domain of application, where mentioned, has been displayed. Also, qualitative and/or quantitative approaches have been discussed, and potential PRA/PSA applications have also been displayed.

Tables 1-5 display the findings. Table 1 displays the papers found in the period from 1999 to 2006. Table 2 presents those papers that appeared in 2007, while Table 3 is devoted to those papers published in 2008. Papers published in 2009 are the subject of Table 4. Finally, Table 5 is dedicated to the papers published in the 2010-2012 period.

The literature review just presented shows that there are a number of papers dealing with organizational factors, among which some consider quantification approaches in the light of probabilistic safety assessment and quantitative risk analysis (as the chemical process industry traditionally calls the former). Papazoglou and Aneziris (1999), Papazoglou et al. (2003), Sträter and Bubb (1999), Øien (2001), Chang and Mosleh (2007), Díaz-Cabrera et al. (2007), Ren et al. (2008), Trucco et al. (2008), Mohaghegh and Mosleh (2009), Mohaghegh et al. (2009), Baumont et al. (2000), and Schönbeck et al. (2010) fall into this category.

Applications may fall into several plant categories [like the chemical process field addressed by Papazoglou and Aneziris (1999) and Papazoglou et al. (2003)], or the nuclear field, like Sträter and Bubb (1999) and their discussion on the lack of human reliability data. It is to be considered that different industry fields have different regulatory requirements, which implies that possible candidate methodologies have to be evaluated as to how they comply with the nuclear regulatory culture, for instance. In this sense, the quantitative approaches highlighted may be of interest in the sense that they eventually address probabilistic safety assessments, like the discussions in Mohaghegh and Mosleh (2009) and Mohaghegh et al. (2009). However, eventual applications to the nuclear industry should be considered in advance due to specific regulatory issues, like eventual validation by regulatory bodies.

Table 1. Literature review: 1999-2006.
Reference | Domain | Qualitative approach | Quantitative approach | PRA/PSA usage

Hee et al. (1999) | Offshore | SMAS to assess marine systems for human and organizational factors. Case study conducted at a marine terminal in California. | No | Yes
Papazoglou and Aneziris (1999) (a) | Chemical | No | Quantitative effects of organizational and management factors by linking the results of a safety management audit with the basic events of a QRA. Case study of an ammonia storage facility. | Yes
Sträter and Bubb (1999) | Nuclear (BWR) | Method developed to describe and analyze human interactions observed within events. An approach to its implementation as a database is outlined. Method proposed for the analysis of cognitive errors or organizational aspects. | Application to 165 BWR events and estimate of probabilities and their comparison with THERP data. | Yes
Baumont et al. (2000) | Nuclear | Describes the method developed to introduce in PSA the Human and Organizational Reliability Analysis in Accident Management (HORAAM) to consider human and organizational reliability aspects during accident management. | Based on decision trees (DT). Observation of crisis center exercises, in order to identify the main influence factors (IFs), which affect human and organizational reliability. IFs used as headings in the decision tree method. Expert judgment was used to verify the IFs, to rank them, and to estimate the value of the aggregated factors to simplify the tree quantification. | Yes
Øien (2001) | Offshore | Qualitative organizational model developed. | Proposal of risk indicators as a complement to QRA-based indicators. Quantification methodology for assessing the impact of the organization on risk. | Yes
Lee et al. (2004) | Nuclear | Operational events related to reactor trips in Korean reactors. | Use of conditional core damage frequency to consider risk information in prioritizing organizational factors. | No
Reiman et al. (2005) | Nuclear | Assesses the organizational culture of Forsmark (Sweden) and Olkiluoto (Finland) NPP maintenance units. Uses core task questionnaires and semi-structured interviews. | No | No
Carayon (2006) | Not specific | Addresses the interaction among people who work across organizational, geographic, cultural, and temporal boundaries. | No | No

(a) Similar results to those found here may be found in Papazoglou et al. (2003).


Table 2. Literature review: 2007.
Reference | Domain | Qualitative approach | Quantitative approach | PRA/PSA usage

Bertolini (2007) | Food processing | Fuzzy cognitive map (FCM) approach to explore the importance of human factors in plants. | No | No
Chang and Mosleh (2007) | Nuclear | Information, Decision, and Action in Crew Context (IDAC) operator response model. Includes cognitive, emotional, and physical activities during the course of an accident. | Probabilistically predicts the response of an NPP control room operating crew in accident conditions. Assesses the effects of performance influencing factors. | Yes
Díaz-Cabrera et al. (2007) | Miscellaneous | Evaluation of a safety culture measuring instrument centered on relevant safety-related organizational values and practices. Seven dimensions that reflect underlying safety measures are proposed. Explores the four cultural orientations in the field of safety arising from the competing values framework: human relation or support, open system or innovation, internal process or rules, and rational goal or goal models. | No | No
Galán et al. (2007) | Nuclear | No | u-factor approach explicitly incorporates organizational factors into the probabilistic safety assessment of NPPs. Bayesian networks are used for quantification purposes. | No
Grote (2007) | Not specific | No | Management of uncertainties as a challenge for organizations. Discussion on minimizing uncertainties versus coping with uncertainties. | No
Reiman and Oedewald (2007) | Not specific | Four statements presented and discussed: current models of safety management are largely based on either a rational or a non-contextual image of an organization; complex socio-technical systems are socially constructed and dynamical structures; in order to be able to assess complex socio-technical systems an understanding of the organizational core task is required; effectiveness and safety depend on the cultural conception of the organizational core task. | No | No

Table 3. Literature review: 2008.
Reference | Domain | Qualitative approach | Quantitative approach | PRA/PSA usage

Bellamy et al. (2008) | Chemical | Describes preparatory ground work for the development of a practical holistic model to help stakeholders understand how human factors, safety management systems and wider organizational issues fit together. | No | No
Li et al. (2008) | Aviation | Analyzed 41 civil aviation accidents occurring to aircraft registered in the Republic of China in the period 1999-2006 using the HFACS framework. The authors claim that their research lent further support to Reason's organizational model of human error, which suggests that active failures are promoted by latent conditions in the organization. | No | No
Ren et al. (2008) | Offshore | Propose a methodology to model causal relationships. Reason's Swiss cheese model is used to form a generic offshore safety assessment framework, and a Bayesian network is tailored to fit into the framework to construct a causal relationship model. Uses a five level structure model to address latent failures. | No | No
Trucco et al. (2008) | Maritime | An approach to integrate human and organizational factors into risk analysis. | A Bayesian belief network has been developed to model the maritime transport system (MTS) by taking into account its different actors. The BBN model has been used in a case study for the quantification of HOF in the risk analysis carried out at the preliminary design stage of a High Speed Craft (HSC). The study has focused on a collision in open sea hazard, carried out by an integration of a fault tree analysis of the technical elements with a BBN model of the influence of organizational functions and regulations. | -


Table 4. Literature review: 2009.
Reference | Domain | Qualitative approach | Quantitative approach | PRA/PSA usage

Grabowski et al. (2009) | Marine transportation | Focus on the challenges of measuring performance variability in complex systems using the lens of human and organizational error modeling. | A case study of human and organizational error analysis in a complex, large-scale system, marine transportation, is used to illustrate the impact of the data challenges on the risk assessment process. | No
Mohaghegh and Mosleh (2009) | Aviation | Explore issues regarding measurement techniques in a quantitative safety analysis context. A multi-dimensional perspective is offered through combinations of different measurement methods and measurement bases. | A Bayesian approach is proposed to operationalize the multi-dimensional measurements. Focus on extending PRA modeling frameworks to include the effects of organizational factors as the fundamental causes of accidents and incidents. | Yes
Mohaghegh et al. (2009) | Aviation | Discussion on the results of a research effort with the primary purpose of extending PRA modeling frameworks to include the effects of organizational factors as the deeper, more fundamental causes of accidents and incidents. The focus is on the choice of "representational schemes" and techniques. A hybrid approach based on System Dynamics (SD), Bayesian Belief Networks (BBN), Event Sequence Diagrams (ESD), and Fault Trees (FT) is proposed to demonstrate the flexibility of a hybrid approach that integrates deterministic and probabilistic modeling perspectives. | No | No
Tseng and Lee (2009) | Electronics | Propose an Analytical Hierarchical Process/Data Envelopment Analysis (AHP/DEA) model that helps in investigating the associated importance of human resource practices and organizational performance variables. The case study contains 5 human resource practice variables and 7 organizational performance variables through Linear Structure Relation (LISREL). | No | No

The models discussed in the literature presented are deficient for three reasons, in general: it is assumed that the system can be represented by a linear model and a linear combination of events; they do not simulate the system complexity, i.e., the complex cause-effect web by which the various system parameters and variables relate to each other; and they assume that the system can be decomposed into individual parts that can in turn be analyzed individually and not as a whole. In the next section, we will detail these shortcomings and subsequently evaluate models that try to overcome them.

BBN (Bayesian Belief Networks) and Fuzzy Cognitive Maps, which can model social, political and economic systems, can be constructed by building a network of causal relationships between functions or states of objects (nodes) that represent the system. In the first case, the network learns Bayesian probabilities; in the second, the coupling strengths of such causal relationships. In either case, these networks can be constructed as a neural network. Non-linear learning laws can be used to train the neural network, i.e., to set the Bayesian probabilities or connection strengths between network nodes. Nonlinear Hebbian learning laws have been developed in the case of Fuzzy Cognitive Maps, Stylios and Groumpos (1999), Papageorgiou et al. (2003), Song et al. (2011), and Kharratzadeh and Schultz (2013).
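As an illustration of these two ingredients (a causal network of concept nodes and a nonlinear Hebbian training rule), the sketch below runs a tiny fuzzy cognitive map. The three concepts, all coupling weights and the learning coefficients are hypothetical choices; the update rule follows the general form discussed in the FCM literature cited above rather than any specific published parameterization.

```python
import math

def sigmoid(x: float) -> float:
    """Squashing function keeping concept activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical concepts: C0 = management pressure, C1 = procedure quality,
# C2 = operator error likelihood. w[i][j] couples concept i to concept j.
w = [[0.0, -0.4, 0.6],
     [0.0,  0.0, -0.5],
     [0.0,  0.0,  0.0]]

a = [0.8, 0.5, 0.3]        # initial activations (illustrative values)

eta, gamma = 0.05, 0.98    # learning rate and weight-decay coefficient

for step in range(20):
    # Concept update: each node aggregates the influence of the others.
    new_a = []
    for j in range(3):
        total = a[j] + sum(a[i] * w[i][j] for i in range(3) if i != j)
        new_a.append(sigmoid(total))
    a = new_a
    # Nonlinear Hebbian update of the coupling strengths: existing causal
    # links between co-activated concepts are strengthened, with decay.
    for i in range(3):
        for j in range(3):
            if i != j and w[i][j] != 0.0:
                w[i][j] = gamma * w[i][j] + eta * a[i] * (a[j] - a[i] * w[i][j])

print("final activations:", [round(x, 3) for x in a])
print("final weights:", [[round(x, 3) for x in row] for row in w])
```

Only links that exist in the initial causal map are trained, which mirrors the usual FCM practice of letting expert-drawn causal structure constrain the learning.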

Table 5. Literature review in the period 2010-2012.
Reference | Domain | Qualitative approach | Quantitative approach | PRA/PSA usage

Schönbeck et al. (2010) | Process plants | A benchmarking study for comparing and evaluating HRA methods in assessing operator performance in simulator experiments is performed. Approach to address human and organizational factors in the operational phase of safety instrumented systems. It shows which human and organizational factors are most in need of improvement and provides guidance for preventive or corrective action. | No | No
Waterson and Kolose (2010) | Defense | Outline a framework which aims to capture some of the social and organizational aspects of human factors integration (HFI). The framework was partly used to design a set of interview questions that were used with a case study of human factors. | No | No
Peng-cheng et al. (2012) | - | Developed a fuzzy Bayesian network (BN) approach to improve the quantification of organizational influences in human reliability analyses. A conceptual causal framework was built to analyze the causal relationships between organizational factors and human reliability or human error. | The probability inference model for HRA was built by combining the conceptual causal framework with BN to implement causal and diagnostic inference. | Yes



3. On HRA models to be discussed

In order to justify the HRA techniques discussed next, a search was made on three critical HRA reviews: the discussion on good practices of NUREG-1842 (NRC, 2006), a review of HRA techniques prepared for the US National Aeronautics and Space Administration (NASA, 2006), and, finally, a review performed by the British Health and Safety Executive (HSE, 2009).

Table 6 displays the 24 techniques discussed in the references mentioned in the last paragraph. We have adopted the classification used in HSE (2009). As new tools emerge based on earlier first generation tools, such as HEART (Williams, 1988), they are being referred to as third generation techniques, as is the case with NARA (Kirwan et al., 2005). Note that some techniques are classified solely on expert opinion grounds and some of them do not fall in any of the categories considered.

The criterion we have followed is basically to consider techniques that have been recommended by the NRC and by at least one of the remaining reviews. This has led initially to the consideration of THERP, ASEP, SPAR-H, ATHEANA, SLIM-MAUD, and CBDT. However, we have decided to discard the techniques not classified in one of the generations (thus, SLIM-MAUD and CBDT have been discarded) and also ASEP, because it is an adaptation of THERP. On the other hand, as we are to discuss systemic approaches [at this point, we refer to FRAM, Hollnagel (2004, 2012)] for human reliability, we have decided to consider CREAM as well, although it has not been reviewed by the NRC (2006).

4. A critique of non-systemic models

NRC has published two reports related to good practices in the field of Human Reliability Analysis (HRA). The first of these, NRC (2005), establishes the criteria of good practices and the techniques and methodologies HRA should comply with. The second one, NRC (2006), performs an evaluation of the main HRA techniques in comparison with the criteria established in NRC (2005).

Table 6. Candidate HRA techniques reviewed by NRC, NASA and HSE.
Classification | Acronym | Reference | NUREG-1842 (NRC, 2006) | NASA (NASA, 2006) | HSE (HSE, 2009)

1st generation | THERP | Swain and Guttman (1983) | Yes | Yes | Yes
1st generation | ASEP | Swain (1987) | Yes | Yes | Yes
1st generation | HEART | Williams (1988) | No | Yes | Yes
1st generation | SPAR-H | Gertman et al. (2005) | Yes | Yes | Yes
1st generation | JHEDI | Kirwan (1990) | No | No | Yes
1st generation | INTENT | Gertman et al. (1992) | No | No | Yes
2nd generation | ATHEANA | NRC (2000) | Yes | Yes | Yes
2nd generation | CREAM | Hollnagel (1998) | No | Yes | Yes
2nd generation | CAHR | Sträter (2005) | No | Yes | Yes
2nd generation | CESA | Reer et al. (2004) | No | Yes | Yes
2nd generation | CODA | Reer (1997) | No | No | Yes
2nd generation | MERMOS | Bieder et al. (1998) | No | No | Yes
3rd generation | NARA | Kirwan et al. (2005) | No | Yes | Yes
Expert opinion | SLIM-MAUD | NRC (1985) | Yes | Yes | Yes
Expert opinion | APJ | Seaver and Stillwell (1983) | No | No | Yes
Expert opinion | PC | Seaver and Stillwell (1983) | No | No | Yes
Unclassified | ORE | Parry et al. (1992) | Yes | No | No
Unclassified | CBDT | Parry et al. (1992) | Yes | No | Yes
Unclassified | FLIM | Chien et al. (1988) | Yes | No | No
Unclassified | SHARP1 | Wakefield et al. (1992) | Yes | No | No
Unclassified | EPRI-HRA | Grobbelaar et al. (2003) | Yes | No | No
Unclassified | UMH | Shen and Mosleh (1996) | No | Yes | Yes
Unclassified | TRC | Dougherty and Fragola (1987) | No | Yes | No
Unclassified | HF-PFMEA | Broughton et al. (1999) | No | Yes | No


These reports recognize the fact that Performance Shaping Factors (PSFs) are important and help HRA analysts identify and understand some influences that PSFs exert on the actions of the tasks allocated to human beings. They also recognize the fact that databases on plant operational events are useful to obtain data on the influences the operational context (including organizational factors) exerts upon the unsafe actions involving human failures, and to establish some quantitative basis to quantify human error probabilities (HEPs) as a function of organizational factors.

However, the NUREG reports cited above do not have any criteria or specific guides for the implementation of organizational factors in HRA, leaving this task as a suggestion for future research. In spite of recognizing certain efforts in this sense, as for example Davoudian et al. (1994), NRC also admits that the state of the art in how to identify and understand the important organizational influences, and how to use this information for determining HEPs, is still not adequate.

The origin of the deficiency previously mentioned is in the model type that has been adopted so far for Probabilistic Safety Assessments (PSA) and the Analysis of Operational Events (AOE), including several types of incidents and accidents. Let us start then to discuss the main characteristics of these models and how they can be altered to establish a paradigm that treats organizational factors in an appropriate way; before that, however, some comments on systems theory are due.

A model based on systems theory studies the mechanisms that regulate systems by feedback control to keep them in a condition of stability. Therefore, in this theory, systems are not designed to be static in time, but to react to themselves and to the conditions of their environment, i.e., they are constantly trying to adapt. Accidents occur, then, as a result of the system's failure to adapt, i.e., of control system failures. The breaking of restrictions imposed by the control system to maintain the stability of the system is revealed as the main cause of failures or accidents. This concept can be extended to social, political and economic systems, which also have their own restrictions or rules of operation, Rasmussen and Svedung (2000), Leveson (2004a), Qureshi (2008), Dekker et al. (2011).




In the systems theory approach, the system has several states close to its stable operation limit, which are not visible at first, as in the above case of digital systems. In this limit state, systems may become catastrophically unstable, even when small oscillations occur in the parameters that influence the behavior of their physical components or human beings. This state is not captured by classical approaches, Rasmussen and Svedung (2000), Qureshi (2008).
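A toy numerical illustration of this drift toward the stability boundary is sketched below: a process variable is held near its set point by feedback, and a slow erosion of the controller gain (standing in for a weakening safety control structure) lets small disturbances grow until the operating limit is crossed. The dynamics and all numbers are invented for illustration; this is not a model of any particular plant or of STAMP itself.

```python
import random

random.seed(1)

x = 0.0            # deviation of a process variable from its set point
gain = 0.9         # feedback gain enforcing the safety constraint (assumed)
limit = 5.0        # boundary of stable/safe operation (assumed)

for step in range(600):
    disturbance = random.uniform(-0.5, 0.5)   # small environmental noise
    # Feedback pulls the deviation back toward zero each step; the gain
    # erosion models a slowly degrading control structure.
    x = (1.0 - gain) * x + disturbance
    gain -= 0.0025                            # slow degradation of control
    if abs(x) > limit:
        print(f"stability boundary crossed at step {step}")
        break
else:
    print("system stayed within its operating limits")
```

While the gain is healthy, the same disturbances are absorbed invisibly; nothing in the step-by-step behavior announces the approaching limit state, which is exactly the point made above about states that "are not visible at first".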

From the discussion above, we can see that complexity presents itself in four different ways: interactive complexity (arising from the interaction between components), nonlinear complexity (cause and effect not related by component failure modes), dynamic complexity (adaptive system behavior, reaching limit states on the boundary of system stability) and decomposition complexity (the system is not decomposed in its structure, but in the functions it performs, which appear in relationships between components), Leveson (2004a), Dekker et al. (2011).

The natural definition of a system also arises from the above discussion: a set of components that interact with each other and have relationships that define the functions of one or more processes occurring within a well-defined boundary, as well as functions of processes occurring in relationships with other systems, Laszlo and Krippner (1998).

The system can then be divisible, structurally speaking, but functionally it is an indivisible entity with emergent properties. To better understand this issue, we can observe biological systems. Life appears in an integrated manner. An organ of the human body will not function if separated from the body, i.e., it loses its emergent properties. For example, a hand separated from the body cannot write, Laszlo and Krippner (1998).

In Reason (1997), Leveson (2002), and Hollnagel (2004) one can find the comparison between the traditional (non-systemic) approach and the systemic approach to risk assessment. Below we describe the main differences between them, as pointed out by those authors. There are three types of accident models and associated risk analyses: sequential accident models, epidemiological accident models and systemic accident models.

The sequential models of accidents are those used in most HRA and PSA techniques. These models are based on the hypothesis that accidents evolve in a pre-defined sequence of events involving system, component and human failures. It is part of the initial hypothesis that the global system being modeled can be decomposed into individual parts, in other words, systems, subsystems and components, described in the bottom-up direction (from the lower, physical hierarchical level to the higher, abstract hierarchical level).

Risks and failures are, therefore, analyzed in relation to events and individual components, with their associated probabilities. Human failures are treated just as component failures. The outputs (the effects in terms of catastrophic failures, e.g., core meltdown) of the global system are proportional to the inputs (the causes in terms of individual failures), which are predictable for those who know the design of the subsystems and components; therefore, these systems are linear. The risks are represented by a linear combination of failures and malfunctions, as observed, for example, in event trees and fault trees. Therefore, accidents are avoided by the identification and elimination of the possible causes. The safety level can be assured by improving the response of the organization responsible for the plant in reacting to the triggered accidents (robustness feature).
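A minimal numerical rendering of this compositional view: in a fault tree, the top-event probability is assembled from independent basic-event probabilities through fixed AND/OR gates, so every contribution is traceable to an individual failure. The gate structure and the numbers below are invented for illustration.

```python
# Minimal fault-tree sketch: the top event probability is composed from
# independent basic events through AND/OR gates.

def and_gate(*p):
    """All inputs must fail: multiply probabilities (independence)."""
    out = 1.0
    for x in p:
        out *= x
    return out

def or_gate(*p):
    """Any input failure suffices: complement of all inputs surviving."""
    out = 1.0
    for x in p:
        out *= (1.0 - x)
    return 1.0 - out

pump_fails = 1e-3       # basic event probabilities (illustrative)
valve_fails = 5e-4
operator_fails = 1e-2   # a human failure treated like a component failure

# Top event: loss of cooling = (pump AND valve fail) OR operator error.
top = or_gate(and_gate(pump_fails, valve_fails), operator_fails)
print(f"P(top event) = {top:.6e}")
```

Note how the human contribution enters exactly like a hardware basic event, which is the treatment the systemic critique below objects to.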

The hypotheses (decomposition, linearity and simple combinations of individual failures) work well strictly for technological systems, because system and component designers know how these systems and components can be decomposed into individual parts and how they work. Therefore, they can make reliable inferences on the failure modes of such systems and components.

The epidemiological models of accidents are equally based on the linearity and decomposition hypotheses, but they have complex characteristics, because they are based on more complex combinations of failures and, mainly, on safety barrier weaknesses. In this case, simple failures of systems and components, or human failures, combined with latent failures (design, maintenance, procedures, management, etc.), contribute to the degradation of safety barriers, thus affecting the defense-in-depth concept. The global system should be modeled from top to bottom, from the hierarchically higher safety objectives and functions down to the lower functional levels. Risks and failures can, therefore, be described in relation to the functional behavior of the whole system. Accidents are avoided by reinforcing safety barriers and thus the defense-in-depth concept. Safety is assured by monitoring these barriers and defenses, through safety action indicators.

Socio-technical systems, on the other hand, are by their very nature complex, nonlinear and non-decomposable. These three features appear naturally from a unique outstanding characteristic of these systems: they are emergent. Being emergent means that the complex relationships among inputs (causes) and outputs (effects) make unexpected and disproportional consequences emerge, which leads one to the resonance concept; in other words, certain combinations of the variability of the system's functional actions as a whole can reach the threshold allowed for the variability of a system function. This is the approach of the systemic models of accidents.

The action variability is an immediate consequence of the nature of socio-technical systems, which survive in social, economical and political environments. Those environments determine the variability index, because they are composed of human beings, with a highly adaptive cognitive and emotional character and with their own cognitive mechanisms, and they are not of a technical nature as found in common systems. Therefore, the approach based on functional variability behaves quite differently from that of systems composed of components only.

In this case, the risk analysis associated with these systems should leave aside the individual failures of systems and components to simulate the system dynamics as a whole, seeking combinations of the individual variability of the system functions that can lead to undesirable functional behaviors, after the propagation of these combinations through the whole system. This means that an individual part of the system is linked to the system as a whole. The risk analysis, however, must pay attention to the dynamics of the whole system and not to the action or behavior of the individual part.

In this approach, accidents result from unexpected combinations (resonances) of the variability of action or behavior. Therefore, accidents can be avoided by monitoring and reducing variability. Safety is assured by the constant ability of anticipating future events.

Before detailing these systemic models of accidents, we will describe how the available first and second generation HRA techniques treat organizational factors.

5. Organizational factors in THERP (Swain and Guttman, 1983)

In the first generation techniques of HRA, like THERP, organizational factors are taken into consideration by means of performance shaping factors (PSFs), which are multiplied by the basic human error probabilities (BHEPs), increasing or decreasing the baseline values.


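A worked numerical example of this multiplicative scheme, under assumed values; the specific multipliers below are placeholders, not entries from the THERP handbook tables.

```python
# THERP-style adjustment sketch: a basic human error probability (BHEP)
# is scaled by PSF multipliers (values below are illustrative only).

bhep = 1e-3                     # nominal BHEP for the task (assumed)
psf_multipliers = {
    "high stress": 5.0,         # an unfavorable PSF raises the HEP
    "novice operator": 2.0,     # low experience raises the HEP
    "good procedures": 0.5,     # a favorable PSF lowers the HEP
}

hep = bhep
for name, factor in psf_multipliers.items():
    hep *= factor

print(f"adjusted HEP = {hep:.1e}")   # 1e-3 * 5 * 2 * 0.5 = 5e-3
```

The linearity criticized later in this paper is directly visible here: each PSF acts as an independent multiplier on the BHEP, so no interaction between factors can be expressed.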

NRC (2005) and NRC (2006) establish fifteen PSFs that are important for HRA. These PSFs are described in Table 7, together with a classification scheme that highlights those of an organizational nature.

The cognitive characteristics can be included in the error mechanisms of second generation HRA techniques. The group factors can be interpreted as human factors of the second level (between human-system interaction at the first level and organizational factors at the third level) and can be included in the organizational factors related to management and supervision activities. Man-system interface and ergonomics design characteristics are plant-specific features, but can be interpreted as the poor result of design processes having some deficient organizational factors as root causes. The characteristics of the design process are not modeled in THERP. NRC (2006) presents several critiques of the use of PSFs in THERP. These critiques are reproduced below.

PSFs are listed and discussed in Chapter 3 of THERP, which also presents their relevance; however, there is no detailed orientation on how to quantitatively evaluate each PSF.

THERP provides explicit factors only for stress levels and levels of experience. For the other, qualitatively analyzed PSFs, it displays no explicit factors.

Besides the PSFs with explicit factors (stress and experience), there are three additional groups of PSFs that may modify the nominal value of a HEP. However, that modification occurs in a subjective way: a) PSFs already included in the nominal value of a HEP that are listed in the THERP tables (for example: whether the written procedure to be followed is long or short); b) PSFs specified as rules that modify the nominal value of a HEP within its uncertainty limits (for example: use the upper limit if the operator involved in an action is not well trained); c) PSFs for which there are no specific orientations with relation to the quantitative evaluation (factors).

The quantitative evaluation of PSFs of the third group depends on the experience and judgment of human factors experts or human reliability analysts.

The lack of orientation in the quantitative evaluation of PSFs can become a source of analyst or specialist variability when THERP is used. This feature can distort the results obtained in HRA and consequently in Probabilistic Safety Assessments (PSA).

Table 7. Main Performance Shaping Factors according to NRC good practices, NRC (2005, 2006), and their classification.

1. Quality of training: organizational.
2. Quality of procedures and administrative controls: organizational.
3. Availability and clarity of instrumentation: man-system interface.
4. Time available and time required to complete the act, including the impact of concurrent activities: cognitive.
5. Complexity of the required diagnosis and response: cognitive.
6. Workload, time pressure, and stress: cognitive.
7. Crew dynamics and characteristics (a): group interaction factors.
8. Available staffing and resources: organizational.
9. Ergonomic quality of the human-system interface: design ergonomic factors.
10. Environmental factors: design ergonomic factors.
11. Accessibility and operability of the equipment to be manipulated: design ergonomic factors.
12. Need for special tools (b): design ergonomic factors.
13. Communication (strategy and coordination) and whether one can be easily heard: design ergonomic factors/group interaction factors.
14. Special fitness needs: cognitive.
15. Accident sequence diversions/deviations (c): cognitive.

(a) E.g., degree of independence among individuals, operator biases/rules, use of status checks, level of aggressiveness in implementing procedures.
(b) E.g., keys, ladder, hoses, clothing.
(c) E.g., extraneous alarms, outside discussions.


The quantitative evaluation of some PSFs can induce the analyst simply to assume that those are the most important PSFs to be treated, to the detriment of the remaining ones. Therefore, an inadequate quantification of HEPs can happen, because there can be other important PSFs not considered by the analyst.

In the quantitative evaluation, it is also necessary to point out that THERP, due to its characteristics, does not treat organizational factors, so that possible latent PSFs that can influence HEPs are not treated in the analysis.

Besides the critiques described above, THERP approaches the plant, and consequently the organization in which it is inserted, linearly; therefore, THERP does not consider the plant socio-technical characteristics that should be taken into account in the qualitative and quantitative analysis of PSFs.

6. Organizational factors in SPAR-H (Gertman et al., 2005)

SPAR-H evaluates the following eight PSFs: 1) available time; 2) stress; 3) complexity; 4) experience/training; 5) procedures; 6) ergonomics; 7) fitness for duty; and 8) work process. Among these, only 4, 5, 7, and 8 can be considered organizational factors. The others can be considered cognitive characteristics (1, 2, and 3) or ergonomics factors (6). SPAR-H suggests some metrics to calculate specific PSFs, like complexity, time pressure and available time; there are no specific suggestions, however, for the remaining ones, although it points to research in this area through the technical literature (a sketch of the SPAR-H computation scheme is given after the list of critiques below). NRC (2005, 2006) point out the following deficiencies of the SPAR-H methodology related to performance shaping factors:

- The approaches supplied by SPAR-H on how to evaluate the influences of a specific PSF are generally useful, but they can be insufficient to analyze and understand the conditions of a scenario. Those conditions affect the task of attributing levels (to generate factors) to each PSF, especially if analysts without enough knowledge of HRA and human factors are used in the measurement;

- The detailed analysis of PSFs presents inadequate solutions, because a nominal level (fixed factor) is almost always attributed to PSFs, due to the way they are defined, limiting their usefulness for identifying different situations. This approach has as an effect a generalization that assists some of the applications directed to SPAR-H, but it can be insufficient for detailed evaluations of plants or specific scenarios;




- In the analysis of complexity, SPAR-H considers a multiplicity of factors (PSFs), guiding the analyst to orientation in the technical literature that makes the evaluation of the factors possible. This approach does not seem appropriate for a simplified and standardized HRA method, because it may go beyond the capacity of an analyst who, for example, does not have enough knowledge of psychology. However, the discussion is healthy and can help the decision-making process in what concerns complexity;

- The orientation and the solution for measuring complexity are practical, useful and important for a generic analysis. However, they can be inadequate for a detailed plant or scenario analysis;

- In the SPAR-H analysis, the six months of training or experience represent a parameter that is not applicable in many situations, because the plant operation team may have a member with less than six months of training or experience. The level, frequency and type of training the team of a plant receives about the scenarios and the actions related to them are much more important for the success of the action, and these subjects are not approached in the PSFs;

- The analysis of the generic PSF named fitness for duty seems not to be useful, as long as there is a nominal value for almost all cases in commercial nuclear plants. It is worth pointing out that PSFs are very important in the retrospective analysis of current events.
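For context on the critiques above, the sketch below reproduces the general shape of the SPAR-H computation: a nominal HEP is scaled by a composite PSF multiplier, with an adjustment factor applied when three or more negative PSFs are present. The structure follows the published SPAR-H worksheets to the best of our reading; the multiplier values chosen in the example are illustrative placeholders.

```python
# SPAR-H-style HEP computation sketch. Nominal values follow the usual
# SPAR-H diagnosis/action figures; the example multipliers are assumed.

NHEP_DIAGNOSIS = 1e-2   # nominal HEP for diagnosis tasks
NHEP_ACTION = 1e-3      # nominal HEP for action tasks

def spar_h_hep(nhep: float, multipliers: list[float]) -> float:
    """Scale a nominal HEP by the product of PSF multipliers, with the
    SPAR-H adjustment when three or more negative (>1) PSFs stack."""
    composite = 1.0
    negative = 0
    for m in multipliers:
        composite *= m
        if m > 1.0:
            negative += 1
    if negative >= 3:
        # Adjustment factor keeps the result below 1 for stacked PSFs.
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(1.0, nhep * composite)

# Example: a diagnosis task with three unfavorable PSFs (assumed levels
# for stress, complexity and available time).
print(spar_h_hep(NHEP_DIAGNOSIS, [10.0, 5.0, 2.0]))   # about 0.50
```

Even with the adjustment, each multiplier is still chosen independently, which is exactly the independence limitation discussed in the remainder of this section.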

A good feature of the SPAR-H methodology is the interaction between PSFs, including organizational factors, through the qualitative interaction matrix, which is plant specific. SPAR-H does not suggest a quantification technique, but only a qualitative indication of how PSFs can be realistically and coherently quantified.

The approach of SPAR-H is to use plant and scenario specific information to evaluate the effects of PSFs. However, in the calculation of HEPs, each PSF is treated independently from the remaining ones; in other words, analysts make judgments separately on each PSF. Nevertheless, SPAR-H supplies a good discussion on the potential interaction effects among PSFs, due to the specificities of the event and scenario to be analyzed. Besides, it is certainly possible, and in many cases probable, that other factors (for example: team characteristics, procedure strategies) can influence the team action for a given scenario.

So, unless analysts attempt (independently, without explicit orientation) to take such effects into account (in other words, to consider interactions), it is possible that the results do not reflect important plant and scenario specific characteristics. In other words, if analysts do not try to incorporate the influences of the potential interaction effects among PSFs, and do not include the influences of other important PSFs when necessary, taking into account the accident scenario, there is the possibility that the generic analysis does not supply a realistic evaluation of a specific accident or plant condition. Therefore, this is a limitation of SPAR-H.

If a generic high-level analysis is considered appropriate for a specific application [for example, the analysis of Accident Sequence Precursors (NRC, 1992; Modarres et al., 1996)], or if, after some analyses, the independent evaluation of all PSFs is considered appropriate for the event and for the scenario that will be examined, a simple application of SPAR-H can be appropriate. Otherwise, the results might contain errors, and an important potential plant problem (for example, procedure limitations) might not be identified.

7. Organizational factors in ATHEANA (NRC, 2000)

ATHEANA evaluates sixteen PSFs, shown in Table 8. The PSFs treated in ATHEANA should undergo an analysis with the purpose of covering subjects from plant design to plant organization. This classification aims to break the linearity of classifying PSFs as mere multiplying factors; this approach needs to be enlarged. Table 8 sets a link between PSFs that can be considered as effects and their possible characteristics or generating factors.

Table 8. Classification of performance shaping factors in ATHEANA (NRC, 2007).

1. Quality of training/experience: organizational factors. Possible failure in quality assurance. Important: within that factor is another factor, the latent factor, which hides the problem.
2. Quality of procedures/administrative controls: organizational factors. Possible failure in quality assurance. Important: within that factor is another factor, the latent factor, which hides the problem.
3. Availability and clarity of instrumentation: man-machine interface design features.
4. Time available and time required to complete the act, including the impact of concurrent activities: cognitive characteristics.
5. Complexity of the required diagnosis and response: cognitive characteristics.
6. Workload/time pressure/stress: cognitive characteristics.
7. Crew dynamics and characteristics (e.g., degree of independence among individuals, operator biases/rules): group interaction factors.
8. Use of status checks, level of aggressiveness in implementing the procedures: group interaction factors.
9. Available staffing/resources: organizational factors. Possible failure in quality assurance. Important: within that factor is another factor, the latent factor, which hides the problem.
10. Ergonomic quality of the human-system interface: design ergonomics factors.
11. Environmental factors: design ergonomics factors.
12. Accessibility and operability of the equipment to be manipulated: design ergonomics factors.
13. The need for special tools (e.g., keys, ladder, hoses, clothing): design ergonomics factors.
14. Communications (strategy and coordination) and whether one can be easily heard: design ergonomics factors or group interaction factors.
15. Special fitness needs: cognitive characteristics.
16. Accident sequence diversions/deviations (e.g., extraneous alarms, outside discussions): special characteristics.

It can be observed that the organizational factors described in Table 8 involve critical points like training, procedures, administrative controls and human resources (personnel). Considering training and procedures only, one is already facing a pair of decisive factors in plant operation and safety. In spite of not being characterized as organizational factors, the other factor characteristics described in Table 8 can be effects whose causes may be due to possible organizational fragilities.



For example, factors related to ergonomic designs can be the result of incorrect options taken during the design phase, or even of economical issues. Incorrect options and economical aspects are characterized as causes of organizational decisions. These observations have the sole intention of enlarging the understanding of human reliability experts.

The sixteen PSFs described in Table 8 are discussed in ATHEANA. A deeper study of PSFs is found in NRC (2007), which supplies additional information on expert judgment that allows for developing a PSF quantification process; NRC (2000) does not focus on this subject. Also, Appendix B of NRC (2005) presents a discussion on PSFs that is consistent with ATHEANA's.

ATHEANA uses the context to identify PSFs and evaluates the most important plant conditions that influence the human action being analyzed and can trigger PSFs. Therefore, although ATHEANA lists several PSFs important for most HEPs, experts can introduce other PSFs (positive or negative) that are judged to influence HEPs. Their estimation is performed by taking the plant conditions into account. So, instead of using a group of predetermined PSFs, the PSFs are obtained from the evolution of the plant context, which is analyzed by experts.

In their judgment to estimate HEPs, experts use the general context information to reach the best decision for each HEP. Due to the way PSFs are identified and considered in ATHEANA, measuring or evaluating the degree of influence of each PSF should simply be avoided. The reason is that, as previously seen, ATHEANA uses the context evaluation to decide which PSFs are important or triggered by the context (for example: the context is complex because the procedure does not satisfactorily solve a specific situation) and, therefore, there are no pre-established multipliers for the degree of influence of each PSF. As in many other HRA methods, the task of deciding how PSFs affect the estimate of HEPs remains with the experts.

Due to the importance of an appropriate context study, which includes the PSFs important for the action to be evaluated, experts using ATHEANA must identify the specific plant information and PSA scenarios to define the context and its influence on the human actions under analysis. Second-generation HRA techniques, like NRC (2000) and Hollnagel (1998), use a cognitive model of the human being and the error mechanisms that influence human failures or unsafe actions. In these two approaches, a prospective analysis is proposed that links PSFs with error mechanisms and error types or modes (unsafe actions). This analysis has qualitative features and the proposed quantification involves the idea of taking into account the conditional probabilities linking PSFs (together with the operational context), error mechanisms, and unsafe actions (error modes or error types), Alvarenga and Fonseca (2009).
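In schematic form, and assuming purely for illustration that the chain can be factored by conditioning (our reading of the proposal, not a formula stated in NRC (2000) or Alvarenga and Fonseca (2009)), this quantification idea can be written as

    P(\mathrm{UA} \mid C) \;=\; \sum_{\mathrm{EM}} \sum_{\mathrm{PSF}} P(\mathrm{UA} \mid \mathrm{EM}, C)\, P(\mathrm{EM} \mid \mathrm{PSF}, C)\, P(\mathrm{PSF} \mid C),

where C is the operational context, the PSFs and error mechanisms (EM) are those triggered by C, and UA denotes an unsafe action (error mode or type).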

It is important to stress that ATHEANA does not use PSFs as many other HRA techniques do (NRC, 2006), as discussed in this section. There is no a priori list of PSFs to be considered by the analyst, with the HEP then adjusted by given multiplying factors. ATHEANA uses the context development process to identify which PSFs and plant conditions are relevant to the human action under consideration, and thus which PSFs could be triggered.

We must observe that in all first- and second-generation techniques the use of PSFs to describe factors that influence basic human error probabilities (BHEPs) is accomplished through a linear approach, because BHEPs are simply multiplied by these factors. Any organizational factors quantified in this way cannot take into account the nonlinearities of socio-technical systems.
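In symbols, writing w_i for the multiplier attributed to the i-th PSF, the adjustment used by these techniques takes the multiplicative form

    \mathrm{HEP} \;=\; \mathrm{BHEP} \cdot \prod_{i} w_i,

so the effect of each factor is independent of the state of the others; there is no cross term to represent, for instance, poor training and time pressure jointly degrading performance beyond the product of their separate effects.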

8. FRAM (Functional Resonance Accident Model) (Hollnagel, 2004, 2012)

In Section 1 we identified the concept of emergence as the main foundation of systemic models of accidents. This concept was introduced by the English philosopher of science Lewes (2005). He proposed a distinction between resultant and emergent phenomena, in which resultant phenomena can be predicted from their constituent parts and emergent phenomena cannot.

According to systemic models, failures emerge from the normal variability of the action of the functions that compose the system. The concept of functional variability leads to the concept of functional resonance, which in turn can be derived from stochastic resonance. Stochastic resonance appears when random noise is superposed on a weak signal at the output of the system or of one of its component functions; the mixture of the two can reach or surpass the detection threshold of this signal. Most of the time, in a stable condition, the variation or oscillation of the signal around a reference value remains within a range with well-defined boundaries, characterized by limiting values. The variation of each signal depends on its interaction with the other signals that exist in the system. For a specific signal, the other signals constitute the environment responsible for the noise, which represents the variability of this environment. Consequently, functional resonance can be considered as the detectable signal that emerges from the non-deliberate interaction of the weak variability of many signals interacting with each other.
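A minimal numerical illustration of this mechanism (our own sketch, not a model taken from Hollnagel (2004, 2012)): a weak periodic signal never reaches a detection threshold by itself, but superposed environmental noise produces threshold crossings, and these crossings cluster near the peaks of the weak signal.

    import numpy as np

    rng = np.random.default_rng(42)

    t = np.linspace(0.0, 10.0, 2000)
    weak_signal = 0.5 * np.sin(2.0 * np.pi * 0.5 * t)  # amplitude 0.5
    threshold = 1.0                                    # detection threshold

    # Alone, the weak signal never crosses the threshold.
    print("crossings without noise:", int(np.sum(weak_signal > threshold)))  # 0

    # Noise from the 'environment' (the variability of the other signals).
    noise = rng.normal(0.0, 0.6, size=t.size)
    mixed = weak_signal + noise

    # With noise, crossings appear, and they cluster near the signal peaks.
    crossings = mixed > threshold
    print("crossings with noise:", int(np.sum(crossings)))
    print("mean signal value at crossing times:",
          float(np.mean(weak_signal[crossings])))  # positive, i.e. near peaks

Read functionally rather than as a signal, this is the intuition behind functional resonance: the combined weak variability of many functions occasionally surpasses a threshold that none of them would reach alone.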

Systemic models also have roots in chaos theory (Lorentz, 2003), which describes complex and dynamic systems. This kind of system can present unstable behavior as a function of a temporary variation of its parameters in a random way, even when the system is ruled by physical laws. The consequence of this instability is that these systems can present a great sensitivity to disturbances (noise) and errors, which can be amplified by the system nonlinearities and the great number of interactions among the system components.

Chaos theory, however, has limited practical value as a model of accidents, according to Hollnagel (2004, 2012), because the equations of the noise processes have to be added to the system equations, which, in turn, are formulated from the physical laws describing the system. However, there are no deterministic equations and general physical laws for socio-technical systems, although an effort in this direction may be found in Sterman (2000), following earlier discussions by Bertalanffy (1971) on general system theory and by Hetrick (1971), who briefly discusses topics in nonlinear mechanics and their application to nuclear reactor systems from the point of view of system stability. On the other hand, the management concept, a typically organizational factor, represents a control function. Several formulations based on control systems have been proposed to model it (Rasmussen et al., 1994). Rasmussen (1997) proposed a systemic model of socio-technical systems based on control systems.

Rasmussen et al. (1994) display the basic model of control system theory for socio-technical systems. The model is composed of inputs, outputs, boundary conditions and feedback control. Another representation, the Structured Analysis and Design Technique (SADT), has been used for defining systems, analyzing software requirements, and designing systems and software (Pressman, 1992). SADT consists of procedures that allow the analyst to decompose the software (or system) into several functions. An application example for modeling nuclear power station systems has been described by Rasmussen and Petersen (1999). The diagrammatic notation of SADT consists of blocks of functions, each one with three types of input parameters (inputs, controls and resources) and one type of output (outputs). Hollnagel, in his FRAM model (Hollnagel, 2004, 2012), extended this basic model by adding two more parameters: available time and pre-requirements.


Table 9
Common Conditions (CCs) and functions they influence (Hollnagel, 2004, 2012).

Common condition (CC)                                      Human (M)   Technological (T)   Organizational (O)
1. Availability of resources                                  X              X
2. Training and experience                                    X
3. Quality of communication                                   X              X
4. Human-machine interface (HMI) and operational support                     X
5. Access to procedures and methods                           X
6. Work conditions                                            X              X
7. Number of goals and conflict solving                       X                                   X
8. Available time (time pressure)                             X
9. Circadian rhythm                                           X
10. Crew collaboration quality                                X
11. Organization quality and support                                                              X



In FRAM, the Functional Resonance Analysis is performed in four steps (Hollnagel, 2004):

1. Identify the essential system functions through the six basic parameters described below;
2. Characterize the potential variability (positive or negative) of each function as a function of the context;
3. Define functional resonance based on possible dependencies or couplings between these functions;
4. Identify barriers or damping factors to absorb the variability and specify required performance monitoring.

In the first step, the six basic parameters are described as follows (Hollnagel, 2004, 2012):

1. Input: what is used or transformed to produce the output;
2. Output: what is produced by a specific function;
3. Control: what supervises or adjusts the function;
4. Resource: what is needed or consumed by the function to process the inputs;
5. Precondition: system conditions that must be fulfilled before the function can be carried out;
6. Time available: it can be a constraint and can also be considered a special kind of resource.

In step 2 above, it becomes necessary to define the context to characterize the variability. There are two levels of context. The first-order context is defined internally by the complements of the system functions. The second-order context is defined by the system environment: for a given function, the variability of the rest of the system defines its environment. The complement of the functions is supplied by the Common Conditions (CCs). Each CC affects one or more types or categories of functions, depending on the nature of the function. There are three categories of functions, according to their nature: human (M), technological (T) and organizational (O).

Originally this was described by performance shaping factors, but in techniques like ATHEANA (NRC, 2000) it evolved into the concept of error-forcing conditions. In CREAM (Hollnagel, 1998), PSFs appear under the name of Common Performance Conditions (CPCs). All the CPCs in CREAM influence all task steps as a whole, differently from THERP, where specific PSFs influence specific task steps.

An extension of CREAM's Common Performance Conditions has been elaborated for FRAM. In this context, they are called Common Conditions (CCs). There are eleven CCs in FRAM (Hollnagel, 2004, 2012) and each one influences certain types of functions (M, O, or T), as displayed in Table 9.

In order to quantify the effect of CCs on function variability, a CC gradation becomes necessary, although CCs are qualitative. Hollnagel's proposal for this gradation is as follows (Hollnagel, 2004, 2012):

- Stable or variable but adequate: associated performance variability is low;
- Stable or variable but inadequate: associated performance variability is high;
- Unpredictable: associated performance variability is very high.

This proposal allows for quantification provided that we associate values to the CCs. The CCs influence all function parameters. If a parameter has the highest performance variability value, the connections associated with this parameter will fail or be degraded.

Consequently, we can see its impact on the whole network of functions by observing whether the outputs of each function will fail.
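A sketch of how such a semi-quantitative evaluation might be coded is shown below. This is our interpretation for illustration; FRAM itself prescribes no numeric scores. Each CC rating is mapped to a variability score, each function aggregates the scores of the CCs relevant to its category (an excerpt of Table 9 is hard-coded), and the output couplings of a function are flagged when its aggregate variability exceeds a chosen limit.

    # Hypothetical scores for Hollnagel's three CC ratings.
    VARIABILITY = {
        "adequate": 1,       # stable or variable but adequate -> low
        "inadequate": 2,     # stable or variable but inadequate -> high
        "unpredictable": 3,  # unpredictable -> very high
    }

    # Which CCs affect which function category (excerpt of Table 9).
    CCS_BY_CATEGORY = {
        "M": ["availability_of_resources", "training_and_experience",
              "available_time", "crew_collaboration_quality",
              "goals_and_conflict_solving"],
        "T": ["availability_of_resources", "hmi_and_operational_support"],
        "O": ["organization_quality_and_support", "goals_and_conflict_solving"],
    }

    def function_variability(category, cc_ratings):
        """Aggregate the variability of the CCs that affect this category."""
        relevant = CCS_BY_CATEGORY[category]
        return sum(VARIABILITY[cc_ratings[cc]] for cc in relevant) / len(relevant)

    # Analyst's (illustrative) ratings of the plant context.
    ratings = {
        "availability_of_resources": "adequate",
        "training_and_experience": "inadequate",
        "available_time": "unpredictable",
        "crew_collaboration_quality": "inadequate",
        "hmi_and_operational_support": "adequate",
        "organization_quality_and_support": "inadequate",
        "goals_and_conflict_solving": "adequate",
    }

    FAIL_LIMIT = 1.75  # chosen limit above which output couplings are suspect
    for cat in ("M", "T", "O"):
        v = function_variability(cat, ratings)
        flag = "couplings at risk" if v > FAIL_LIMIT else "stable"
        print(cat, round(v, 2), flag)

With these illustrative ratings, the human (M) functions exceed the limit, so their output couplings would be examined for resonance in step 3 of the analysis.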

In the third step of the analysis, functional resonance is determined for the couplings between functions, established by the coupling among different function parameters. In other words, the output of a function can be connected to different types of inputs of other functions (input, resource, control and precondition). Through these couplings, one can identify how the variability of a given function can affect the action of the other functions. Either the function performance at the output or unexpected connections (functional resonance) between the functions can be discovered, thus invalidating (failing) the connection or relaxing the coupling of the interconnected parameters in the connection. This is depicted in Fig. 1.

In the fourth step of the analysis, safety barriers are identified to avoid functional resonance. These barriers can be associated with any of the six function parameters, simply by adding one more connection (an AND logical node) to the pertinent parameter, which characterizes a restriction on the performance of that parameter in the function. There is, however, another approach that uses the same control system paradigm but a different kind of quantification: the STAMP model, described next. STAMP uses a mathematical model to describe the system dynamics, differently from FRAM, which uses a qualitative (structural) analysis based on connections and variability (semi-quantitative).

We briefly review some available FRAM applications. The FRAM website (http://functionalresonance.com/publications.html) may be consulted for further information.

Herrera and Woltjer (2010) compare the use of Sequential Time Events Plotting (STEP) (Hendrick and Benner, 1987) and FRAM in the framework of civil aviation. Their conclusion is that STEP helps to illustrate what happened, involving which actors at what time, whereas FRAM illustrates the dynamic interactions within socio-technical systems and lets the analyst understand the how and why by describing non-linear dependencies, performance conditions, variability, and their resonance across functions.

Belmonte et al. (2011) present a FRAM application to railway traffic supervision using modern automatic train supervision systems. According to them, examples taken from railway traffic supervision illustrate the principal advantage of FRAM in comparison to classical safety analysis models, i.e., its ability to take into account technical as well as human and organizational aspects within a single model, thus allowing a true multidisciplinary cooperation between specialists from the different domains involved.


[Fig. 1. A functional resonance with safety function control and monitoring. The FRAM functions shown include 'Follow reviewed procedures', 'Safety functions actuation', 'Monitor safety functions' and 'Verify the return of plant operation to normal conditions'; resonance arises through 'Use of not reviewed procedures', 'Control actions with procedures not reviewed', 'Monitoring tasks with procedures not reviewed' and 'Occurrence of a deviation with small alteration in the parameter'.]


The mid-air collision between flight GLO1907, a commercial Boeing 737-300 aircraft, and flight N600XL, an EMBRAER E-145 executive jet, was analyzed by means of FRAM to investigate key resilience characteristics of the air traffic management system, Carvalho (2011).

Pereira (2013) introduces the use of FRAM for the effectiveness assessment of a specific radiopharmaceutical dispatching process.

9. STAMP (Systems-Theoretic Accident Model and Process) (Leveson, 2002)

STAMP is based on system process dynamics and not on events and human actions individually. Rasmussen (1997) proposed a model for socio-technical systems where the accident is viewed as a complex process within several hierarchical control levels, involving legislators, regulators and associations, company policy, plant management, engineering and technical departments, and the operating staff [see Fig. 2 and Qureshi (2007) and Rasmussen (1997)].

Later, Rasmussen and Svedung (2000) applied this model to risk management. However, they described the downstream process at each level through an event chain similar to event trees and fault trees. On the other hand, a model of socio-technical systems using concepts of process control systems theory was applied by Forrester to business dynamics involving economic processes (Forrester, 1961). STAMP combines the Rasmussen and Svedung structure with Forrester's mathematical model for system dynamics to describe the process occurring at each level.

The systemic model of STAMP leans on four basic concepts: emergence, hierarchy, communication and control. As discussed in the previous section, dedicated to the FRAM methodology, systemic models need the concept of emergence to explain functional resonance. Accidents are seen as the result of the unexpected interaction (resonance) of system functions. Therefore, the components of these functions cannot be separately analyzed (as individual failures) and later combined in a linear way to evaluate safety. Safety can only be evaluated by considering the relationship of each component with the other plant components, that is, in the global context. Therefore, the first fundamental concept of STAMP is that of emergent properties, which are associated with the restrictions imposed on the degrees of freedom of the components that compose the functions belonging to a given system hierarchy. The concept of a hierarchy of system functions is, therefore, the second fundamental concept (Leveson, 2002, 2004a).

The emergent properties are controlled by a group of restrictions that represent the control actions on the behavior of the system components. Consequently, higher hierarchical levels impose restrictions or control actions on lower levels. Accidents appear, therefore, as the result of violations of the restrictions on the interactions between components in the global context, or because of the lack of appropriate control laws imposed in the execution of the restrictions (Leveson, 2002, 2004a). Due to the several hierarchical levels, the system is composed of several control loops nested through feedback mechanisms, another basic concept that comes from control theory (open systems that receive information from the environment to reach a steady state). These several feedbacks inside the system keep it permanently in a steady state, constantly adapting to variations in itself and in the external environment. From this point of view, the accident is seen as the incapacity of the feedback to make the controls reinforce the execution of restrictions. When the system components belong to the social and organizational levels, the restrictions take the form of internal policies, regulations, procedures, legislation, certifications, standards, authorizations, labor agreements and other instruments of economic, social, political and administrative control.


[Fig. 2. The socio-technical hierarchical system involved in risk management, Rasmussen (1997), Qureshi (2007). The hierarchy descends from government, through regulators and associations, company, and management, to staff and the hazardous work process; each level is paired with its research discipline (from political science, law, economics and sociology down to mechanical, chemical and electrical engineering) and is subject to environmental stressors: changing political climate and public awareness, changing market conditions and financial pressure, changing competency and levels of education, and the fast pace of technological change.]



The other two fundamental concepts of STAMP are the controllers that exert the controls on the restrictions at the lower hierarchy levels and the effective communication channels to transmit the control information and to receive the feedback information about the state of the restrictions. Comparing STAMP's control structure with the parameters of FRAM's hexagonal structure, one can identify two types of input parameters in FRAM: Controls and Pre-requirements are restrictions imposed on the behavior of the function to be controlled, while Resources and Available Time are part of the Inputs of the function. The output of the function is feedback for the controllers of the higher hierarchical level.

Human or automatic controllers should exist at all hierarchy levels. Even in the case of automatic controllers, human beings will still be present in the monitoring of the automatic functions. Both types of controllers need models to simulate the processes they control and the interfaces with the rest of the system to which they are interlinked. Some inputs to a controller are restrictions coming from higher levels. On the other hand, the controller output supplies the restrictions for the lower levels and the feedback on the state of the restrictions at its hierarchical level. These basic ideas are illustrated and detailed in Leveson (2002, 2004a).

The controllers are not necessarily physical control devices; they can be design principles, such as redundancy, interlocks and fail-safe design, or even processes, such as production and maintenance procedures. It is necessary to observe that human controllers possess a cognitive model (Alvarenga and Fonseca, 2009). Accidents happen when restrictions are not satisfied or the controls on them are not effective. Therefore, in STAMP, the following accident causes are identified (Leveson, 2002, 2004a):

1. Control actions exerting inadequate enforcement of restrictions:
   a. Unidentified hazards;
   b. Inadequate, inefficient or non-existing control actions for the identified hazards:
      i. Design of process control algorithms that do not enforce the restrictions:
         1. Failures in the creation process;
         2. Changes of processes without corresponding changes in the control algorithm (asynchronous evolution);
         3. Incorrect modifications or adaptations.
      ii. Inconsistent, incomplete or incorrect (lack of alignment) process models:
         1. Failures in the creation process;
         2. Failures in the updating process (asynchronous evolution);
         3. Time delays and measurement inaccuracies that are not taken into consideration.
      iii. Inadequate coordination between controllers and decision makers (overlapping areas and boundaries).
   c. Inadequate execution of control actions:
      i. Communication failures;
      ii. Inadequate actuator operation;
      iii. Time delays.
   d. Inadequate or non-existent feedback:
      i. Arrangements not provided in the system design;
      ii. Communication failures;
      iii. Time delays;
      iv. Inadequate operation of sensors (incorrect information or information not supplied).

The models should contain the same information whether for human beings or automatic systems. The fundamental information is: the relationships between the system variables, the current state of the system variables, and the available process mechanisms for changing the state of the system variables. The relationships between the system variables are modeled through the technique of system dynamics (Leveson, 2002), which is based on the theory of nonlinear dynamics and feedback control. There are three basic building blocks for the models of system dynamics, and these blocks perform basic feedback loops. The functions of each hierarchical level are composed by the complex coupling of several of these basic loops.

The first basic loop is the Reinforcement Loop, a structure that feeds itself, creating growth or decline, similarly to positive feedback loops in control systems theory. An increase in variable 1 implies an increase in variable 2, which causes an increase in variable 1, and so on, in an exponential way, if there are no external influences. The same reasoning is valid for a negative reinforcement, which generates an exponential decrease for one variable and an exponential increase for the other (reinforcements in opposite directions), because an increase in one variable implies a decrease in the other. The change does not necessarily mean a change in values but in direction. In many instances, a variable interacts with the variation rate of the other variable and not with the variable itself.

The second type of loop is the Balance Loop, in which the current value of a system variable or a reference parameter is modified through some control action. This corresponds to the negative feedback loop in control systems theory. In this case, the difference between the variable value and the desired or reference value is noted as an error. The action is proportional to the error, so that it brings the variable value to the reference value along time.

The third type is the Delay, which is used to model the time interval that elapses between causes (or actions) and effects. It can be the source of several instabilities in the system, depending on the complex interactions between the system variables along time. Once the whole system is modeled through these basic structures, after the hierarchical functional decomposition and the identification of the variables and parameters that define the functions, it becomes possible to simulate the system dynamic behavior, assuming certain initial values for the system variables. Consequently, the system can be observed along time to check for instabilities as well as abrupt behaviors, including probable system collapse.
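The three building blocks can be illustrated with a toy discrete-time simulation (our own sketch, not Forrester's or Leveson's actual models): a reinforcing loop grows exponentially, a balancing loop settles on its reference, and a delay inserted in the balancing loop's feedback makes the same controller overshoot and oscillate.

    # Toy discrete-time illustration of the three system dynamics blocks.

    def reinforcing(x0=1.0, gain=0.1, steps=50):
        """Reinforcement loop: x feeds its own growth -> exponential rise."""
        x = x0
        for _ in range(steps):
            x += gain * x
        return x

    def balancing(x0=0.0, ref=10.0, k=0.3, steps=50, delay=0):
        """Balance loop: action proportional to the error (ref - x).
        With delay > 0 the controller sees an old value of x."""
        history = [x0]
        for t in range(steps):
            seen = history[max(0, t - delay)]   # delayed observation of x
            error = ref - seen
            history.append(history[-1] + k * error)
        return history

    print(round(reinforcing(), 1))         # ~117.4: exponential growth
    print(round(balancing()[-1], 2))       # ~10.0: settles on the reference
    print([round(v, 1) for v in balancing(delay=6)[:12]])  # overshoot, oscillation

Composing such loops is what produces the archetypes discussed below; complacency, for instance, can be read as a balancing oversight loop whose corrective action arrives only after a long delay, while a reinforcing loop erodes the safety margin in the meantime.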

Marais and Leveson (2006) discuss basic structures that can be identified in the dynamic simulation of systems and that can erode safety or even cause the system to collapse. These structures were named safety archetypes:

1. Stagnant safety practices in the face of technological progress, due to delays in the control loops;

2. Decreased safety consciousness due to the absence of incidents;
3. Side effects of unintended safety actions;
4. Correction of symptoms instead of root causes;
5. Erosion of safety by:
   a. Complacency: low rates of incidents and accidents encourage an anti-regulatory feeling;
   b. Postponed safety programs: a production versus safety conflict;
   c. Incident and event reports with lower priority: effects of the reward versus punishment strategy.

Fig. 3 displays an example of an archetype (complacency). With the concept of safety archetypes associated with the STAMP methodology, STAMP becomes a complete tool to model, simulate and evaluate the safety of socio-technical systems. STAMP, however, is not yet used in risk analysis in the way probabilistic safety analysis is used for the safety assessment of nuclear power plants. For this purpose, safety archetypes need some type of associated metric that allows a quantitative assessment and, therefore, the identification of measurable safety criteria in practical terms. Examples of this can be found in the STAMP analysis of the Challenger and Columbia accidents, Leveson (2008).

We now review some STAMP applications. A detailed bibliography on this subject may be found at http://sunnyday.mit.edu/STAMP-publications.html.

[Fig. 3. An example of a safety archetype (complacency), Marais and Leveson (2006). The causal-loop diagram couples accident probability, risk, anti-regulatory behavior, external pressure, oversight, and training, inspection and monitoring through three interacting loops: an accident loop, an oversight loop and a risk monitoring loop.]

Leveson (2004b) describes how systems theory can be used to build new accident models that better explain system accidents. The loss of a Milstar satellite during launch by a Titan/Centaur launch vehicle is used as an illustration of the approach.

Leveson et al. (2012) demonstrate how to apply STAMP to healthcare systems. Pharmaceutical safety is used as the example in the paper, but the same approach is potentially applicable to other complex healthcare systems. System engineering techniques can be used to re-engineer the system as a whole while, at the same time, encouraging the development of new drugs.

Thomas et al. (2012) discuss a STAMP application to evaluate the safety of digital instrumentation and control systems in pressurized water reactors.

Ishimatsu et al. (2014) discuss a new STAMP-based hazard analysis technique, called System-Theoretic Process Analysis (STPA), which is capable of identifying potential hazardous design flaws, including software and system design errors and unsafe interactions among multiple system components; an application to the Japanese Aerospace Exploration Agency H-II Transfer Vehicle is presented. The results of this new hazard analysis technique are also compared with those of the standard fault tree analysis used in the design and certification of the H-II Transfer Vehicle.

10. A comparison between the models discussed

Several models based on systems theory have been proposed. The models presenting a full proposal to incorporate the four types of complexity discussed earlier are STAMP, FRAM and ACCI-MAP, the latter being an implementation of the Rasmussen-Svedung model (Rasmussen and Svedung, 2000).

STAMP implements the framework of Rasmussen-Svedung and is quantified by techniques of system dynamics, which were used in the 1960s to analyze economic systems (Forrester, 1961), unlike ACCI-MAP, which is a qualitative model. Moreover, STAMP identifies in the structures of control systems some archetypes (patterns) responsible for the collapse of the system (Marais and Leveson, 2006), causing accidents, as well as criteria for identifying deficiencies in these structures. This provides a preliminary evaluation of the system design, enabling corrections to the design. The techniques of system dynamics also enable the study of unstable oscillations of the system and of the sensitivity to critical parameters and variables, which also provides a prospective analysis, Dulac and Leveson (2004), Leveson (2004a,b).

However, to fully implement the Rasmussen-Svedung framework, STAMP has to use control structures with closed-loop feedback at the three cognitive levels (skills, rules and knowledge) and Rasmussen's hierarchical abstraction levels when representing the analysis of field work tasks, Rasmussen and Svedung (2000).

Moreover, FRAM does not identify how to recognize or quantify system resonance modes, nor how systems migrate to limiting resonance states on the system operation boundary. It is possible that the techniques of system dynamics can be used to quantify FRAM, as was done with STAMP, which may be the subject of future investigations, Stringfellow (2010).

In order to compare conventional techniques with techniques based on systems theory, a table has been prepared containing a set of 16 characteristics, some of which were discussed in Section 3.

The comparison has been performed by considering both the non-systemic and systemic approaches for treating organizational features. Each technique and/or approach has been checked in order to identify which features it takes into account. The results are shown in Table 10.


Table 10
Comparison among HRA techniques/approaches on relevant features.

Feature                                                Non-systemic                  Systemic
                                                       THERP   CREAM   ATHEANA       ACCI-MAP   FRAM   STAMP
Sequential linear model                                Yes     Yes     Yes           No         No     No
Epidemiological linear model                           No      No      Yes           No         No     No
System-theory-based model                              No      No      No            Yes        Yes    Yes
Decomposition complexity                               No      No      No            Yes        Yes    Yes
Nonlinear complexity                                   No      No      No            Yes        Yes    Yes
Interactive complexity                                 No      No      No            No(f)      Yes    Yes
Dynamic complexity                                     No      No      No            No(f)      Yes    Yes
Cognitive architecture(a)                              No      Yes     Yes           No(f)      No     No
Cognitive error mechanisms(a)                          No      Yes     Yes           No(f)      No     No
Hierarchical levels of abstraction(b)                  No      No      No            Yes        No     Yes
Levels of decision making(b)                           No      No      No            Yes        No     Yes
Performance shaping factors                            Yes     Yes     Yes           No         No     No
Metrics for PSFs(c)                                    No      No      No            NA(d)      NA     NA
Operational context                                    No      Yes     Yes           Yes        Yes    Yes
Control structures in 3 consecutive levels (S-R-K)(e)  No      No      No            No(f)      No     No

(a) NRC (2000).
(b) Rasmussen and Svedung (2000).
(c) Gertman et al. (2005).
(d) NA = not applicable; it does not make sense to consider PSF metrics for system dynamics models, as the underlying concepts are different.
(e) Skill, Rule, Knowledge.
(f) ACCI-MAPs are general and must be detailed to enable these features.


It may be inferred from this table that STAMP is the most suitable approach to treat organizational features in the context of human reliability analysis.

11. Conclusions

First-generation human reliability analysis techniques take organizational factors into consideration through performance shaping factors (PSFs). These factors are quantified subjectively by experts in the field of HRA, or through databases of plant-specific operational events that contain, among the root causes, the programmatic (or systematic) causes that characterize organizational factors. However, the interaction of these factors with the error mechanisms and unsafe actions of human failures, or among the factors themselves, is linear, because, according to the quantified state of each factor, the human error probabilities associated with unsafe actions are multiplied by adjusting factors that can decrease or increase them. The interconnection matrix between factors (Gertman et al., 2005) is also linear. Second-generation HRA methodologies continue to use PSFs, although they include in the network of conditional probabilities the error mechanisms as functions of the PSFs.

This approach has two basic deficiencies. The first is that the number of organizational factors is not enough to model all aspects of this nature, especially the political, economic and normative ones. The second is that the interaction of these factors with each other and with the error mechanisms, unsafe actions, and modes and types of errors observed at the individual and group levels is highly nonlinear.

Two modern approaches are the most promising to address these deficiencies, because they are based on nonlinear models: FRAM and STAMP. These two methodologies are based on General Systems Theory (GST) (Bertalanffy, 1971), which largely uses the concepts of control systems theory to put the basic ideas of GST into practice. FRAM extends the basic model of control systems theory, with its input and output variables or parameters, as well as resources and controls or restrictions (boundary conditions), by adding two more types of variables or parameters: time and pre-requirements, which are considered special boundary conditions or special types of resources and restrictions. The socio-technical system is decomposed into functions and each function has a hexagonal structure, as described in this paper. The system has internal feedback, because each hexagonal function is linked with the others through one or more of the six parameters or variables.

The nonlinear nature of FRAM is established by the resonance concept; in other words, given a limited variation in one of the parameters of one of the functions, the system can, in each information transmission to the interlinked functions, enhance the effect on other parameters of other functions and on the function itself, in such a way as to generate an effect of stochastic resonance along time in the parameters which, in certain cycles of information transmission, can surpass their variation thresholds, indicating a rupture in system safety or stability. These transmission cycles may or may not stabilize, since the cycles may or may not be damped. Apart from the mathematical formulation of stochastic resonance, Hollnagel (2004, 2012) establishes the concept of functional resonance, in which one tries to fail or relax the connections between the functions through the six interconnection parameters or variables. Thus, it is necessary to seek unexpected connections between the functions. The failure or relaxation of a connection is a function of the variability of the parameters or variables in the connection. This variability, in turn, is a function of the variability of the common conditions (CCs) that influence all functions at the same time. Therefore, several alterations in the parameter or variable values can occur at the same time. This analysis is, therefore, of a qualitative or semi-quantitative nature, depending on the external subjective evaluations of the CCs. On the other hand, the concept of stochastic resonance can be worked through mathematical models as long as one establishes, in each function to be modeled, a mathematical function for the dependence of each one of the six variables or parameters on the other five that compose the function. One should bear in mind, however, that each of the six components of the function can have one or more representative variables or parameters, and this feature makes the modeling quite complex.

A proposal for establishing a mathematical model, as requested above, comes from STAMP, which uses system dynamics modeling, originally applied to economic systems. Although this model does not use a hexagonal structure as FRAM does, it likewise establishes a functional decomposition of the organization according to the organizational structure, composed of several departments or divisions, each one with its specific function. These include the external organizations, such as the government, that interface with the organization. Each department has parameters and variables (P&Vs) at the input and output that make the interconnection with other departments or divisions. It also possesses P&Vs that represent the controls or restrictions of higher hierarchical levels, as well as P&Vs that represent the resources needed to perform the function, including time and pre-requirements, as in FRAM. The dynamic simulation of these variables in STAMP is equivalent to the functional resonance in FRAM, with the advantage of identifying the safety archetypes that are responsible for system erosion and collapse, which serve as safety criteria to evaluate socio-technical systems.

References

Alvarenga, M.A.B., Fonseca, R.A., 2009. Comparison of THERP quantitative tables with the human reliability analysis techniques of second generation. In: Proceedings of the International Nuclear Atlantic Conference (available in CD-ROM). Brazilian Association of Nuclear Energy, Rio de Janeiro.
Baumont, G., Menage, F., Schneiter, J.R., Spurgin, A., Vogel, A., 2000. Quantifying human and organizational factors in accident management using decision trees: the HORAAM method. Reliab. Eng. Syst. Saf. 70, 113–124.
Bellamy, L.J., Geyer, T.A.W., Wilkinson, J., 2008. Development of a functional model which integrates human factors, safety management systems and wider organisational issues. Saf. Sci. 46, 461–492.
Belmonte, F., Schön, W., Heurley, L., Capel, R., 2011. Interdisciplinary safety analysis of complex socio-technological systems based on the functional resonance accident model: an application to railway traffic supervision. Reliab. Eng. Syst. Saf. 96, 237–249.
Bertalanffy, L. von, 1971. General Systems Theory: Foundations, Development, Applications. Allen Lane, The Penguin Press, London.
Bertolini, M., 2007. Assessment of human reliability factors: a fuzzy cognitive maps approach. Int. J. Indus. Ergon. 37, 405–413.
Bieder, C., Le-Bot, P., Desmares, E., Bonnet, J.-L., Cara, F., 1998. MERMOS: EDF's new advanced HRA method. In: Mosleh, A., Bari, R.A. (Eds.), Probabilistic Safety Assessment and Management (PSAM 4). Springer-Verlag, New York.
Broughton, J., Carter, R., Chandler, F., Holcomb, G., Humeniuk, B., Kerios, B., Bruce, P., Snyder, P., Strickland, S., Valentino, B., Wallace, L., Wallace, T., Zeiters, D., 1999. Human Factors Process Failure Modes & Effects Analysis. PGOC91-F050-JMS-99286. Boeing Company, Seattle.
Cacciabue, P.C., 1992. Cognitive modelling: a fundamental issue for human reliability assessment methodology? Reliab. Eng. Syst. Saf. 38, 91–97.
Carayon, P., 2006. Human factors of complex sociotechnical systems. Appl. Ergon. 37, 525–535.
Carvalho, P.V.R., 2011. The use of Functional Resonance Analysis Method (FRAM) in a mid-air collision to understand some characteristics of the air traffic management system resilience. Reliab. Eng. Syst. Saf. 96, 1482–1498.
Chang, Y.H.J., Mosleh, A., 2007. Cognitive modeling and dynamic probabilistic simulation of operating crew response to complex system accidents. Part 4: IDAC causal model of operator problem-solving response. Reliab. Eng. Syst. Saf. 92, 1061–1075.
Chien, S.H., Dykes, A.A., Stetkar, J.W., Bley, D.C., 1988. Quantification of human error rates using a SLIM-based approach. In: IEEE Fourth Conference on Human Factors and Power Plants, Monterey, CA.
Davoudian, K., Wu, J.S., Apostolakis, G.E., 1994. Incorporating organizational factors into risk assessment through the analysis of work processes. Reliab. Eng. Syst. Saf. 45, 85–105.
Dekker, S., Gilliers, P., Hofmeyr, J.-H., 2011. The complexity of failure: implications of complexity theory for safety investigation. Saf. Sci. http://dx.doi.org/10.1016/j.ssci.2011.01.008.
Díaz-Cabrera, D., Hernández-Fernau, E., Isla-Díaz, R., 2007. An evaluation of a new instrument to measure organizational safety culture values and practices. Accid. Anal. Prev. 39, 1202–1211.
Dougherty, E., Fragola, J., 1987. Human Reliability Analysis: A Systems Engineering Approach with Nuclear Power Plant Applications. John Wiley & Sons Ltd, Canada.
Dulac, N., Leveson, N., 2004. An approach to design for safety in complex systems. In: Proceedings of the International Conference on System Engineering (INCOSE'04), Toulouse, France, pp. 393–407.
Forrester, J.W., 1961. Industrial Dynamics. MIT Press, Cambridge, MA.
Fujita, Y., 1992. Human reliability analysis: a human point of view. Reliab. Eng. Syst. Saf. 38, 71–79.
Galán, S.F., Mosleh, A., Izquierdo, J.M., 2007. Incorporating organizational factors into probabilistic safety assessment of nuclear power plants through canonical probabilistic models. Reliab. Eng. Syst. Saf. 92, 1131–1138.
Gertman, D.I., Blackmann, H.S., Haney, L.N., Seidler, K.S., Hahn, H.A., 1992. INTENT: a method for estimating human error probabilities for decision based errors. Reliab. Eng. Syst. Saf. 35, 127–136.
Gertman, D.I., Blackman, H.S., Byers, J., Haney, L., Smith, C., Marble, J., 2005. The SPAR-H Method. NUREG/CR-6883. U.S. Nuclear Regulatory Commission, Washington, DC.
Grabowski, M., You, Z., Zhou, Z., Song, H., Steward, M., Steward, B., 2009. Human and organizational error data challenges in complex, large-scale systems. Saf. Sci. 47, 1185–1194.
Grobbelaar, J., Julius, J., Lewis, S., Rahn, F., 2003. Guidelines for Performing Human Reliability Analysis: Using the HRA Calculator Effectively. Draft Report. Electric Power Research Institute, Monterey, CA.
Grote, G., 2007. Understanding and assessing safety culture through the lens of organizational management of uncertainty. Saf. Sci. 45, 637–652.
Hannaman, G.W., Spurgin, A.J., Lukic, Y., 1984. Human Cognitive Reliability Model for PRA Analysis. NUS-4531, Draft EPRI Document. Electric Power Research Institute, Palo Alto, CA.
Hee, D.D., Pickrell, B.D., Bea, R.G., Roberts, K.H., Williamson, R.B., 1999. Safety Management Assessment System (SMAS): a process for identifying and evaluating human and organization factors in marine system operations with field test results. Reliab. Eng. Syst. Saf. 65, 125–140.
Hendrick, K., Benner, L., 1987. Investigating Accidents with STEP. Marcel Dekker Inc., New York.
Herrera, I.A., Woltjer, R., 2010. Comparing a multi-linear (STEP) and systemic (FRAM) method for accident analysis. Reliab. Eng. Syst. Saf. 95, 1269–1275.
Hetrick, D.L., 1971. Dynamics of Nuclear Reactors. The University of Chicago Press, Chicago.
Hollnagel, E., 1998. Cognitive Reliability and Error Analysis Method (CREAM). Elsevier Science, New York.
Hollnagel, E., 2004. Barriers and Accident Prevention. Ashgate Publishing Company, Aldershot.
Hollnagel, E., 2012. FRAM: The Functional Resonance Analysis Method, Modelling Complex Socio-technical Systems. Ashgate, Aldershot, UK.
HSE, 2009. Review of Human Reliability Assessment Methods. Report RR679. Health and Safety Executive, Buxton, Derbyshire, UK.
Ishimatsu, T., Leveson, N.G., Thomas, J.P., Fleming, C.H., Katahira, M., Miyamoto, Y., Ujiie, R., Nakao, H., Hoshino, N., 2014. Hazard analysis of complex spacecraft using systems-theoretic process analysis. J. Spacecr. Rock. http://dx.doi.org/10.2514/1.A32449.
Kantowitz, B.H., Fujita, Y., 1990. Cognitive theory, identifiability and human reliability analysis (HRA). Reliab. Eng. Syst. Saf. 29, 317–328.
Kharratzadeh, M., Schultz, T.R., 2013. Neural-network modelling of Bayesian learning and inference. In: Proceedings of the 35th Annual Meeting of the Cognitive Science Society, Austin, TX, ISBN 978-0-9768318-9-1, pp. 2686–2691.
Kirwan, B., 1990. A resource flexible approach to human reliability assessment for PRA. In: Safety and Reliability Symposium. Elsevier Applied Sciences, Amsterdam, pp. 114–135.
Kirwan, B., Gibson, H., Kennedy, R., Edmunds, J., Cooksley, G., Umbers, I., 2005. Nuclear Action Reliability Assessment (NARA): a data based HRA tool. Saf. Reliab. 25, 38–45.
Laszlo, A., Krippner, S., 1998. Systems theories: their origins, foundations, and development. In: Jordan, J.S. (Ed.), Systems Theories and A Priori Aspects of Perception. Elsevier, Amsterdam, pp. 47–74.
Lee, Y.S., Kim, Y., Kim, S.H., Kim, C., Chung, C.H., Jung, W.D., 2004. Analysis of human error and organizational deficiency in events considering risk significance. Nucl. Eng. Des. 230, 61–67.
Leveson, N.G., 2002. System Safety Engineering: Back to the Future. Massachusetts Institute of Technology, Cambridge. http://sunnyday.mit.edu/book2.pdf.
Leveson, N.G., 2004a. A new accident model for engineering safer systems. Saf. Sci. 42, 237–270.
Leveson, N.G., 2004b. A systems-theoretic approach to safety in software-intensive systems. IEEE Trans. Depend. Secure Comput. 1, 66–86.
Leveson, N.G., 2008. Technical and managerial factors in the NASA Challenger and Columbia losses: looking forward to the future. In: Kleiman, D.L., Cloud-Hansen, K.A., Matta, C., Handelsman, J. (Eds.), Controversies in Science and Technology, vol. 2. Mary Ann Liebert Press, New York.
Leveson, N.G., Couturier, M., Thomas, J., Dierks, M., Wierz, D., Psaty, B., Finkelstein, S., 2012. Applying system engineering to pharmaceutical safety. J. Healthc. Eng. 3, 391–414.
Lewes, G.H., 2005. Problems of Life and Mind. First Series: The Foundations of a Creed, vol. 2. University of Michigan Library Reprint Series, Ann Arbor.
Li, W.-C., Harris, D., Yu, C.-S., 2008. Routes to failure: analysis of 41 civil aviation accidents from the Republic of China using the human factors analysis and classification system. Accid. Anal. Prev. 40, 426–434.
Lorentz, E., 2003. The Essence of Chaos. Routledge, London.
Marais, K., Leveson, N.G., 2006. Archetypes for organizational safety. Saf. Sci. 44, 565–582.
Modarres, M., Martz, H., Kaminskiy, M., 1996. The accident sequence precursor analysis. Nucl. Sci. Eng. 123, 238–258.
Mohaghegh, Z., Kazemi, R., Mosleh, A., 2009. Incorporating organizational factors into probabilistic risk assessment (PRA) of complex socio-technical systems: a hybrid technique formalization. Reliab. Eng. Syst. Saf. 94, 1000–1018.
Mohaghegh, Z., Mosleh, A., 2009. Measurement techniques for organizational safety causal models: characterization and suggestions for enhancements. Saf. Sci. 47, 1398–1409.
NASA, 2006. Human Reliability Analysis Methods: Selection Guidance for NASA. NASA Headquarters Office of Safety and Mission Assurance, Washington, D.C.
NRC, 1985. Application of SLIM-MAUD: A Test of an Interactive Computer-Based Method for Organizing Expert Assessment of Human Performance and Reliability. NUREG/CR-4016. U.S. Nuclear Regulatory Commission, Washington, DC.
NRC, 1992. Precursors to Potential Severe Core Damage Accidents: 1992, A Status Report. NUREG/CR-4674. Oak Ridge National Laboratory, U.S. Nuclear Regulatory Commission, Washington, D.C.
NRC, 2000. Technical Basis and Implementation Guidelines for A Technique for Human Event Analysis (ATHEANA). NUREG-1624. U.S. Nuclear Regulatory Commission, Washington, D.C.
NRC, 2005. Good Practices for Implementing Human Reliability Analysis. NUREG-1792. U.S. Nuclear Regulatory Commission, Washington, D.C.


NRC, 2006. Evaluation of Human Reliability Analysis Methods against Good Practices. NUREG-1842. U.S. Nuclear Regulatory Commission, Washington, D.C.
NRC, 2007. ATHEANA User's Guide, Final Report. NUREG-1880. U.S. Nuclear Regulatory Commission, Washington, D.C.
NRC, 2008. Human Factors Considerations with Respect to Emerging Technology in Nuclear Power Plants. NUREG/CR-6947, BNL-NUREG-79828. U.S. Nuclear Regulatory Commission, Washington, D.C.
Øien, K., 2001. A framework for the establishment of organizational risk indicators. Reliab. Eng. Syst. Saf. 74, 147–167.
Papageorgiou, E., Stylios, C., Groumpos, P., 2003. Fuzzy cognitive map learning based on non-linear Hebbian rule. In: Gedeon, T.D., Fung, L.C.C. (Eds.), AI 2003: Advances in Artificial Intelligence. Springer-Verlag, Berlin, Heidelberg.
Papazoglou, I.A., Aneziris, O., 1999. On the quantification of the effects of organizational and management factors in chemical installations. Reliab. Eng. Syst. Saf. 63, 33–45.
Papazoglou, I.A., Bellamy, L.J., Hale, A.R., Aneziris, O.N., Ale, B.J.M., Post, J.G., Oh, J.I.H., 2003. I-RISK: development of an integrated technical and management risk methodology for chemical installations. J. Loss Prev. Process Indus. 16, 575–591.
Parry, G., et al., 1992. An Approach to the Analysis of Operator Actions in PRA. EPRI TR-100259. Electric Power Research Institute, Palo Alto, CA.
Peng-cheng, L., Guo-hua, C., Li-cao, D., Li, Z., 2012. A fuzzy Bayesian network approach to improve the quantification of organizational influences in HRA frameworks. Saf. Sci. 50, 1569–1583.
Pereira, A.G.A.A., 2013. Introduction to the use of FRAM on the effectiveness assessment of a radiopharmaceutical dispatching process. In: Proceedings of the International Nuclear Atlantic Conference (available in CD-ROM). Brazilian Association of Nuclear Energy, Rio de Janeiro.
Pressman, R.S., 1992. Software Engineering: A Practitioner's Approach. McGraw-Hill Book Company, New York.
Qureshi, Z.H., 2007. A review of accident modelling approaches for complex socio-technical systems. In: Proceedings of the 12th Australian Workshop on Safety Related Programmable Systems, vol. 9. Australian Computer Society, Adelaide, pp. 47–59.
Qureshi, Z., 2008. A Review of Accident Modelling Approaches for Complex Critical Sociotechnical Systems. Report DSTO-TR-2094. Command, Control, Communications and Intelligence Division, Defence Science and Technology Organisation, Edinburgh, South Australia.
Rasmussen, B., Petersen, K.E., 1999. Plant functional modeling as a basis for assessing the impact of management on plant safety. Reliab. Eng. Syst. Saf. 64, 201–207.
Rasmussen, J., 1997. Risk management in a dynamic society: a modelling problem. Saf. Sci. 27, 183–213.
Rasmussen, J., Petersen, A.M., Goodstein, L.P., 1994. Cognitive System Engineering. John Wiley & Sons, New York.
Rasmussen, J., Svedung, I., 2000. Proactive Risk Management in a Dynamic Society. Swedish Rescue Services Agency, Karlstad.
Reason, J., 1997. Managing the Risks of Organizational Accidents. Ashgate Publishing Company, Aldershot.
Reiman, T., Oedewald, P., Rollenhagen, C., 2005. Characteristics of organizational culture at the maintenance units of two Nordic nuclear power plants. Reliab. Eng. Syst. Saf. 89, 331–345.
Reiman, T., Oedewald, P., 2007. Assessment of complex sociotechnical systems: theoretical issues concerning the use of organizational culture and organizational core task concepts. Saf. Sci. 45, 745–768.
Ren, J., Jenkinson, I., Wang, J., Xu, D.L., Yang, J.B., 2008. A methodology to model causal relationships on offshore safety assessment focusing on human and organizational factors. J. Saf. Res. 39, 87–100.
Reer, B., 1997. Conclusions from occurrences by descriptions of actions (CODA). In: Drottz Sjöberg, B.M. (Ed.), New Risk Frontiers, Proceedings of the 1997 Annual Meeting of the Society for Risk Analysis-Europe.
Reer, B., Dang, V.N., Hirschberg, S., 2004. The CESA method and its application in a plant-specific pilot study on errors of commission. Reliab. Eng. Syst. Saf. 83, 187–205.
Schönbeck, M., Rausand, M., Rouvroye, J., 2010. Human and organisational factors in the operational phase of safety instrumented systems: a new approach. Saf. Sci. 48, 310–318.
Seaver, D.A., Stillwell, W.G., 1983. Procedures for Using Expert Judgment to Estimate Human Error Probabilities in Nuclear Power Plant Operations. NUREG/CR-2743. U.S. Nuclear Regulatory Commission, Washington, D.C.
Shen, S.H., Mosleh, A., 1996. Human Error Probability Methodology. Report RAN 96-002. Calvert Cliffs Nuclear Power Plant, BGE.
Song, L., Yang, L., Huainan, A., Huainan, J.J., Guanghua, J.L., 2011. A Bayesian belief net model for evaluating organizational safety risks. J. Comput. 6, 1842–1846.
Sterman, J., 2000. Business Dynamics: Systems Thinking and Modeling for a Complex World. Irwin McGraw-Hill, Boston.
Stringfellow, M.V., 2010. Accident Analysis and Hazard Analysis for Human and Organizational Factors. PhD dissertation. Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge.
Stylios, C.D., Groumpos, P.P., 1999. Mathematical formulation of fuzzy cognitive maps. In: Proceedings of the 7th Mediterranean Conference on Control and Automation (MED99), Haifa, Israel, pp. 2251–2261.
Sträter, O., 2005. Cognition and Safety: An Integrated Approach to Systems Design and Performance Assessment. Ashgate, Aldershot, UK.
Sträter, O., Bubb, H., 1999. Assessment of human reliability based on evaluation of plant experience: requirements and implementation. Reliab. Eng. Syst. Saf. 63, 199–219.
Swain, A.D., 1987. Accident Sequence Evaluation Program Human Reliability Analysis Procedure. NUREG/CR-4772, SAND86-1996. Sandia National Laboratories, U.S. Nuclear Regulatory Commission, Washington, DC.
Swain, A.D., 1990. Human reliability analysis: need, status, trends and limitations. Reliab. Eng. Syst. Saf. 29, 301–313.
Swain, A.D., Guttman, H.E., 1983. Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. NUREG/CR-1278. U.S. Nuclear Regulatory Commission, Washington, D.C.
Thomas, J., Lemos, F.L., Leveson, N.G., 2012. Evaluating the Safety of Digital Instrumentation and Control Systems in Nuclear Power Plants. Research Report NRC-HQ-11-6-04-0060. Massachusetts Institute of Technology, Cambridge.
Trucco, P., Cagno, E., Ruggeri, F., Grande, O., 2008. A Bayesian belief network modelling of organisational factors in risk analysis: a case study in maritime transportation. Reliab. Eng. Syst. Saf. 93, 823–834.
Tseng, Y.-F., Lee, T.-Z., 2009. Comparing appropriate decision support of human resource practices on organizational performance with DEA/AHP model. Expert Syst. Appl. 36, 6548–6558.
Wakefield, D., Parry, G., Hannaman, G., Spurgin, A., 1992. SHARP1: A Revised Systematic Human Action Reliability Procedure. EPRI TR-101711, Tier 2. Electric Power Research Institute, Palo Alto, CA.
Waterson, P., Kolose, S.L., 2010. Exploring the social and organizational aspects of human factors integration: a framework and case study. Saf. Sci. 48, 482–490.
Williams, J., 1988. A data-based method for assessing and reducing human error to improve operational performance. In: IEEE Fourth Conference on Human Factors and Power Plants, Monterey, CA.