
Journal of Occupational Accidents, 9 (1987) 137-152

Elsevier Science Publishers B.V., Amsterdam - Printed in The Netherlands

Development of an Expert System for Human Reliability Analysis

H.M.N. RAAFAT and A.H. ABDOUNI

Department of Chemical Engineering, Aston University, Birmingham (U.K.)

(Received 17 October 1986; accepted 6 March 1987)

ABSTRACT

Raafat, H.M.N. and Abdouni, A.H., 1987. Development of an expert system for human reliability analysis. Journal of Occupational Accidents, 9: 137-152.

The assessment and quantification of risk in the process industry was developed from reliability technology, which tends to focus on the hardware aspects of potential plant/system failures. The human factor receives little or no attention despite the fact that human error can play a significant role in many plant safety and reliability assessments. Human performance modelling and analysis are still being developed and require special expert judgement from psychologists, human factors specialists and engineers.

This paper will examine the use of Expert Systems as a tool to assist the analyst in the modelling of human reliability within the overall framework of Probabilistic Risk Assessment (PRA). A computer-based approach is presented which utilises quantified expert judgement, through task analysis and decision-making trees. A case study is used to demonstrate the computer model.

1. INTRODUCTION

Despite the technological advances made to improve the safety and reliability of modern systems and plants, the human element continues to play a key role in the design, operation, maintenance and management of systems, and consequently in their failures.

It is known, for example, that 70% of aviation accidents are due to crew errors (Flight International, 1975); similar figures apply to the process industry. Some of the recent major accidents, e.g. the Three Mile Island, Chernobyl and Flixborough disasters, occurred as a direct result of human error. The Health and Safety Executive (1984, 1986) has acknowledged the significance of the human factor and the man-machine interface in the field of programmable electronic systems, and the considerable human role in software reliability and common cause failures.

Several techniques have been developed to analyse human reliability, particularly in the nuclear and process industries. The modelling of human performance and the quantification of human error are complex and time-consuming procedures which require specialist training plus engineering, statistical and psychological knowledge. Such complexities make the choice of modelling techniques and data collection very difficult.

The objective of the research work presented here is to promote the use of Artificial Intelligence techniques in order to develop an Expert System to overcome some of the difficulties in human performance modelling and the quantification of human error. The proposed system will provide a systematic analytical procedure that is easy to follow by both human factors specialists and non-specialists.

2. HUMAN RELIABILITY ANALYSIS TECHNIQUES

A bewildering variety of techniques has been developed over the past decade to analyse human reliability, mainly in the nuclear industry. The general approaches to the quantification of human error may be grouped into three categories:
1. Synthetic or decomposition techniques,
2. Classic reliability techniques, and
3. Subjective expert judgement.

The first category relies primarily on historical data on the probability of failure for relatively basic elements of human behaviour, such as closing switches or valves, or reading instruments. The probability of error for more complex tasks is then calculated by combining the basic elements in a sequence of sub-tasks.
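To make the combination rule concrete, here is a minimal sketch (in Python for readability; the paper's own system was written in LISP, and the element probabilities below are hypothetical):

```python
# Decomposition approach: a task succeeds only if every basic element
# succeeds, so (assuming independent elements performed in series) the
# task HEP is one minus the product of the element success probabilities.
# The element HEPs below are hypothetical, for illustration only.

basic_element_heps = {
    "read_instrument": 0.003,
    "close_switch": 0.001,
    "close_valve": 0.002,
}

def task_hep(sequence):
    """Probability of at least one error in a series of basic elements."""
    p_success = 1.0
    for element in sequence:
        p_success *= 1.0 - basic_element_heps[element]
    return 1.0 - p_success

print(task_hep(["read_instrument", "close_switch", "close_valve"]))  # ~0.006
```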

The second category attempts to apply theoretical time-dependent reliability modelling to predict parameters such as the probability of error as a function of time. This is an attractive approach, as it quantifies human error in the same terms as hardware failures. However, unlike hardware, a human can recover from a mistake, and this recovery factor should be taken into account when calculating human reliability.

The third category makes much greater use of quantified expert judgement, supplementing the currently inadequate human error probability data base for various types of tasks.

3. HUMAN RELIABILITY MODELLING

The approach adopted for the quantification of human error in this study starts by guiding the analyst through Task Analysis. It is then proposed that two human performance modelling techniques, THERP and SLIM, be considered together rather than separately, since they are intimately linked, for the purpose of application in the process industry.


3.1 Task Analysis

The most effective start to any human reliability analysis must involve some form of task analysis. Task analysis provides a systematic description of the human actions required by the system and some of the psychological functions that underlie these actions.

Systematic examination and description of performance in this way is often overlooked during the design and management of industrial plants. Hierarchical Task Analysis (HTA), developed by Annett et al. (1979), is the most useful form of this type of analysis, as it examines a considerable variety of tasks and is particularly effective for tasks such as process control, where the emphasis is on the operator's competence in planning and decision-making skills rather than motor skills.

The plan in HTA is crucial, since the difficulties facing the operator may be completely overlooked if the analyst uses an approach which concentrates on what should be done without systematically examining when things should be done. Many complex tasks may appear superficially simple if their planning aspects are ignored.

The sole use of any one task analysis technique is not recommended; in fact, this is an area where expert opinions differ as to which techniques to use. To date there are in excess of 60 task analysis techniques; it is therefore necessary to select a technique which suits the problem being analysed.

3.2 THERP (Technique for Human Error Rate Prediction)

This technique employs event trees and binary logic for its analytical procedure. Human error trees provide a flexible structure for representing the steps of a well-defined procedure. As the plant operator can take various decision paths, the analyst can determine whether they are important or not. Details of this method have been extensively described in the Handbook (Swain and Guttmann, 1983), which also includes data sets and a procedure guide.

In THERP, the task classification is done primarily according to levels of cognitive information processing, with the result that some types of tasks are difficult to classify, e.g. signal detection and man/man communication.

The human error tree is based on the task analysis approach, and the Handbook presents error/failure data which are estimated through judgement for a variety of task elements.

3.3 SLIM (Success Likelihood Index Methodology)

The basic principle of SLIM (Embrey and Humphreys, 1984) is that the likelihood of an error occurring in a particular situation depends on the combined effects of a relatively small set of Performance Shaping Factors (PSFs).


These include both the human traits and the conditions of the work setting that are likely to influence an individual's performance. Examples of human traits that 'shape' performance might include:
- level of skill,
- level of mental loading,
- activation and arousal, and
- error recovery ('correction').

The latter is probably the most important, and the most overlooked, typical human activity. Conditions of work which may affect performance include the time available to complete a task, alarms and other aids.

SLIM has the major advantage of providing a useful structure for modelling potential human error/failure modes. One of the objectives of the proposed expert system is to provide an expert opinion to assess the relative importance of each PSF with regard to its effect on the reliability of the task being evaluated, and to make a numerical rating of how good the PSFs are in the task under consideration.
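The arithmetic behind SLIM can be sketched briefly. In the fragment below (Python; every weight, rating and calibration anchor is invented for illustration), PSF importance weights and ratings are combined into a Success Likelihood Index (SLI), which is then mapped to a HEP through a log-linear calibration against two tasks of known probability, as in SLIM-MAUD:

```python
import math

def sli(weights, ratings):
    """Success Likelihood Index: weighted sum of PSF ratings, weights normalised."""
    total = sum(weights.values())
    return sum(weights[p] * ratings[p] for p in weights) / total

# Hypothetical PSF weights (relative importance) and ratings (1 = worst, 9 = best).
weights = {"skill": 4, "mental_loading": 2, "arousal": 1, "error_recovery": 3}
ratings = {"skill": 7, "mental_loading": 4, "arousal": 6, "error_recovery": 3}

# Log-linear calibration log10(HEP) = a * SLI + b, anchored on two
# reference tasks with known HEPs (anchor values invented for illustration).
sli_low, hep_low = 2.0, 0.1      # poor conditions, high error probability
sli_high, hep_high = 8.0, 0.001  # good conditions, low error probability
a = (math.log10(hep_high) - math.log10(hep_low)) / (sli_high - sli_low)
b = math.log10(hep_low) - a * sli_low

task_sli = sli(weights, ratings)
print(f"SLI = {task_sli:.2f}, HEP = {10 ** (a * task_sli + b):.4f}")
```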

4. THE EXPERT SYSTEM

Expert systems have been applied to complex problems which involve the use of incomplete data and many uncertainties, such that human judgement is needed in their solution.

Human Reliability Analysis (HRA) methodologies tend to rely heavily on knowledge, experience and judgement. The general lack of human reliability data and the short supply of human factors experts make a risk assessment study a long and tedious process. Because of the advantages of expert systems, it may therefore be possible to assist non-specialists in the modelling and quantification of human performance.

An expert system handles complex real-life problems requiring an expert's interpretation, and attempts to solve these problems by using a computer model of the human expert's reasoning. The conclusions reached are presumed to be similar to those the expert would reach if faced with a comparable problem. Expert systems have been successfully utilised in many fields of science, engineering and business, as well as in medical diagnosis.

The primary objective of the expert system presented in this research work is to develop a systematic methodology for human reliability analysis which can be used by both human factors experts and non-experts, in order to analyse the contribution of human error probabilities to the overall plant risk.

The expected benefits of the use of such an expert system in the field of human reliability analysis can be summarised as follows:
1. to embody domain-specific knowledge from more than one expert,
2. to provide a facility for updating and refining its knowledge and data,
3. to give the user an explanation of its line of reasoning,
4. to simplify the analytical procedure, and
5. to reduce the time and cost of analysis.

Fig. 1. Expert system components.

4.1 Components of the expert system

The proposed expert system will consist of six modules. (The interaction between the components is shown in Fig. 1.)

The proposed expert system "shell" will contain two modules, known as RAMS. These are:
1. The inference engine, which employs the information contained in the knowledge base in order to interpret the current data. It operates in a search-select-and-execute fashion; either forward or backward chaining can be selected.
2. The user interface, which controls the terminal. The information displayed relates directly to the knowledge base. This interface translates the input as specified by the user into the form used by the expert system.

The other four modules are separate from the "shell". These are:
3. The knowledge base, which stores information as frames and rules. Several kinds of information are attached to each frame, some of which relate to how the frame is used.
4. The knowledge base maintenance system, which is designed to update, modify and expand the knowledge base.
5. The situation data base module, which contains all information relevant to the situation under analysis, and
6. The explanation module, which explains to both the expert and the user how the system arrived at a specific conclusion (Niide et al., 1985).
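As a rough illustration of the search-select-and-execute cycle, the sketch below implements simple forward chaining over IF-THEN rules in Python (the actual system was written in LISP, and the rules and facts here are invented):

```python
# Forward chaining: repeatedly search for a rule whose conditions are all
# satisfied by the known facts, select it, and execute it by asserting its
# conclusion, until no further rule can fire.

rules = [
    ({"display_annunciated", "single_relevant_alarm"}, "low_detection_hep"),
    ({"novice_operator", "extremely_high_stress"}, "raise_step_heps"),
    ({"low_detection_hep", "raise_step_heps"}, "review_detection_step"),
]

def forward_chain(facts):
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # execute: assert the rule's conclusion
                fired = True
    return facts

print(forward_chain({"display_annunciated", "single_relevant_alarm",
                     "novice_operator", "extremely_high_stress"}))
```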

4.2 Knowledge acquisition procedure

The available human reliability data are insufficient to permit the building of a comprehensive expert system. Therefore, the following approach is proposed:
1. To extract, by using "IF-THEN" rules, data and information from the available literature.
2. To construct a simplified routine for human reliability analysis, based on the available knowledge.
3. To construct a sub-routine to utilise direct numerical estimation based on expert opinion, which would involve SLIM.
4. To use the above program as a general framework, to be constantly reviewed and updated in an interactive session between the domain expert and the developer.

4.3 Implementing the system

It was decided to use LISP as the programming language for the proposed expert system, as it offers flexibility in writing rules. The program is designed to run on an IBM-PC/AT. There are several methods for representing knowledge in expert systems, and the proposed system will employ a combination of two such methods: production rules and frames.

Rule-based systems, especially production rules, constitute the best currently available means for codifying the problem-solving know-how of human experts (Hayes-Roth, 1985). These systems are exceptionally flexible and can accommodate different problem-solving strategies.

"Frames" have emerged to support the complex task of sharing human knowledge in an expert system. Frames are discrete structures having individual properties, known as 'slots', into which all domain knowledge is partitioned. Frames are used in the proposed expert system to integrate diverse knowledge representations (e.g. rules, equations). This is particularly suited to a knowledge domain which has many forms, such as the human performance domain.
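The fragment below suggests, purely as an illustration, how a frame's slots might hold such mixed representations. It is a Python rendering of the idea, not the authors' LISP frame structure, and all slot names and values are assumptions:

```python
# A frame is a discrete structure whose 'slots' partition the knowledge
# attached to one concept. Slots may hold plain data, a rule (here a
# predicate over known facts) or an equation (here a callable).

detect_annunciator = {
    "task": "detect relevant annunciator",
    "equipment": "panel",
    "nominal_hep": 0.003,                                         # data slot
    "applies_if": lambda facts: "annunciated_display" in facts,   # rule slot
    "stress_adjust": lambda hep, factor: min(1.0, hep * factor),  # equation slot
}

facts = {"annunciated_display", "single_relevant_alarm"}
if detect_annunciator["applies_if"](facts):
    hep = detect_annunciator["stress_adjust"](detect_annunciator["nominal_hep"], 2.0)
    print(f"Adjusted HEP for '{detect_annunciator['task']}': {hep}")
```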

4.4 Structure of the expert system

The proposed expert system will guide the analyst systematically through the human reliability analysis procedure in two stages:

Stage 1: Modelling of human performance:

1. The user is first guided through the information gathering process by answering questions related to the specific situation.

2. The information gathered by the analyst during plant visits and operator interviews is analysed to determine the actions required by the operator: task analysis.

3. Potential operator errors and deviation from the required actions are identified and their consequences are modelled and analysed: event trees.

Stage 2: Quantification of human reliability:
1. Available human error probabilities are assigned to individual task/step error modes. Non-available probabilities are calculated using the SLIM methodology.
2. Performance Shaping Factors (PSFs) associated with each human error mode are identified, and their effects are assessed.
3. The effects of 'dependence' between individual tasks and operators are assessed.
4. The probability of each sequence of errors in the event tree is calculated, as well as the probability of success in completing the required sequence of steps/tasks.
5. The effects of 'recovery factors' are estimated, and the calculated probabilities are adjusted accordingly.

The flow-chart of the human reliability analysis procedure is shown in Fig. 2.

5. A SAMPLE SESSION (LOSS OF CONTAINMENT EVENT)

Perhaps the best way to demonstrate how the program works is to apply it to a typical situation in the process industry. Let us consider a control room example in a refinery tank farm. The objective of this hypothetical example is to calculate the probability of loss of containment due to a tank overfilling. Fig. 3 shows the layout of the control room and parts of the control system.

The operating system consists of a tank being filled with a hazardous material. Being a continuous process, the tank is filled in 50 min and discharges in 10 min. The plant operator will start the cycle by depressing the pump start button. When the tank is full, the measuring system, which consists of a float, a level transmitter (LT) and a level indicator (LI), signals the operator to stop the pump and open the motor-operated valve (MOV) to discharge the tank. This completes the cycle. A manually operated valve, situated between the pump and the tank, is normally open and is used to isolate the tank during maintenance and emergencies. There are obviously some safety features in the design, such as high-level alarms and overflow pipes, but these have been omitted for the sake of simplicity.

The potential overfilling of the product storage tank could lead to the formation of a large flammable/toxic cloud. The objectives of the analysis are to estimate the probability of the process operator failing to carry out the required actions, and to study the effect of human error on the overall system reliability in relation to the loss of containment event.

Fig. 2. Flow-chart of the HRA procedure.

Fig. 3. Operating system.

6. ANALYTICAL PROCEDURE

6.1 Plant visit

The program will start by asking the analyst some preliminary questions about the type, capacity and layout of the process plant plus more specific questions about the control room layout and design.

6.2 Operator actions

The analyst is then prompted to determine the operator actions that are regarded as critical for responding to an emergency. The following actions should be identified in our example:
(i) detect the relevant annunciator (ANN),
(ii) diagnose the situation correctly, and
(iii) follow the steps in the written procedure: stop the pump, open the MOV, close the manual valve, return to the control room and monitor the level indicator (LI).

6.3 Talk- or walk-through

This section is concerned with the number of personnel involved, the duration of the procedure, the number of procedures per shift and the maximum time available to act. In our example the following apply:
- the duration of one filling cycle is 60 min (the tank fills in 50 min and empties in 10 min),
- the required actions must be completed within a maximum of 10 min following the annunciating indicator, and
- at least two qualified operators will be in the control room at all times.

6.4 Task analysis and error identification

Ideally, a comprehensive HTA should be performed in order to identify all relevant operator activities in respect of the tank filling task, and possible deviations from the defined tasks. The program will present the task analysis in the format shown in Table 1.

This format will allow for future expansion of human error modes in order to account for several error modes for each step/task. The program will also allow other methods of task analysis to be accommodated.

TABLE 1

Task analysis

Task  Equipment        Action                       Indication  Time (min)  Error mode          Effect
A     Panel            Detect relevant annunciator  Light       -           Fail to detect      Danger
B     Display          Diagnose                     Light       10          Mis-diagnose        Danger
C     Pump             Stop pump                    None        -           Omit stop pump      Danger
D     MOV              Open MOV                     None        10          Omit open MOV       Critical
E     Manual valve     Close valve                  Stem        20          Omit close valve    Critical
F     Level indicator  Monitor LI                   Light       -           Fail to monitor LI  Critical

The tasks described in Table 1 are only a brief outline of the actions required by the operators in relation to the undesired event (storage tank overflows). These include:

Step A: Detect the relevant ANN. As the control room contains several annunciating indicators, the operator is required to recognise the correct indicator, relevant to a tank being full.

Step B: Diagnose the indicator. Having recognised the correct indicator, the operator should be able to relate this to a particular tank being full, in order to take action to stop the flow to that tank.

Step C: Stop the pump. The operator, by following written procedures, should stop the relevant pump within a maximum prescribed time. An error of omission or a delay in stopping the pump will lead to tank overflow and its related hazardous consequences.

Step D: Open MOV. This step is required in order to discharge the tank. Omitting to open the MOV, which is inside the control room, will have no immediate effect on the loss of containment event. However, if a new tank filling cycle is initiated without discharging the tank after the previous cycle, then a hazardous event could result.

Step E: Close manual valve. The operator is required to perform this step outside the control room. Closing the manual valve is not required during each filling cycle, as it is a requirement for emergency and maintenance work only. (This step can be omitted from the regular sequence of steps required by the operator under normal conditions.)

Step F: Monitor level indicator. The monitoring of the level indicator is required particularly towards the end of each filling cycle, and at the start of the following cycle to ensure that the tank is empty.

The program will initially consider only two types of human error mode, namely errors of commission and omission. An error of commission is the incorrect performance of a task or part of a task (step), e.g. wrong adjustment of a valve, while an error of omission is the 'forgetting' of a task or step that should have been performed.

Fig. 4. Error event tree.

It is possible to extend potential human error modes in a similar fashion to Failure Mode and Effect Analysis, and consider a number of possible human errors associated with each task, and analyse their effect on the process. These might include:
- Sequential Error: performing a task out of sequence,
- Extraneous Act: the introduction of a task or step that should not have been performed,
- Time Error: failure to perform a task or step within the prescribed time, either too late or too early.

6.5 Development of the error event tree

Once the task analysis is completed and the associated potential errors are identified, the program will display the sequence of steps in the form of an error event tree, as shown in Fig. 4.

Each branch of the event tree will lead to one of three possible outcomes: success in completing the task, critical (which might lead to danger under certain conditions) and danger (which in this case is the overfilling of the tank).
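A compact sketch of the event-tree computation follows (Python; the HEPs and the rules that classify each branch are invented simplifications, not the actual tree of Fig. 4):

```python
from itertools import product

# Enumerate every branch of the binary success/failure tree over the six
# steps, multiply the branch probabilities, and classify the outcome.
steps = ["A", "B", "C", "D", "E", "F"]
heps = {"A": 0.003, "B": 0.001, "C": 0.006, "D": 0.002, "E": 0.005, "F": 0.004}

def classify(failed):
    """Simplified outcome rules: A/B/C errors overfill the tank directly."""
    if failed & {"A", "B", "C"}:
        return "danger"
    if failed:
        return "critical"  # may lead to danger under certain conditions
    return "success"

outcomes = {"success": 0.0, "critical": 0.0, "danger": 0.0}
for pattern in product([False, True], repeat=len(steps)):
    failed = {s for s, did_fail in zip(steps, pattern) if did_fail}
    p = 1.0
    for s, did_fail in zip(steps, pattern):
        p *= heps[s] if did_fail else 1.0 - heps[s]
    outcomes[classify(failed)] += p

print(outcomes)  # the three outcome probabilities sum to 1.0
```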

6.6 Quantification of human error

Human Error Probability (HEP) data are derived from the Handbook by Swain and Guttmann (1983), where a great number of human error probabilities are compiled with particular reference to tasks performed by operators in the nuclear industry. These may be applied directly to the process industry due to the similarity of tasks.

The Handbook is probably the best source of human error data to date. The user will be advised to use the SLIM subroutine to calculate any missing human error probabilities, as the program goes through each task in turn, checking against a generic grouping of known step/task error probability data. The following format for data acquisition is proposed; the example in Fig. 5 shows how the program prompts the user to calculate the probability of not detecting the relevant ANN.

DETECT RELEVANT 'ANN' - Is the action:
  a - use of displays?
  b - manipulation of control?
  c - diagnosis?
  d - administrative?
** a - What class is the display?
  a - unannunciated (meter/counter)?
  b - annunciated (light/alarm)?
** b - Number of alarms?
** 10 - Number of relevant alarms?
> HEP = 0.003

Fig. 5. Assignment of HEPs.

Once estimates of the error probabilities are made for all steps/tasks, the computer will display the Human Reliability Analysis (HRA) event tree, with the assigned HEPs and the total probability for each sequence or branch of the tree, as shown in Fig. 6.

Fig. 6. HRA event tree.
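The question-tree prompting of Fig. 5 might be implemented along the lines of the sketch below (Python; the tree shape and the leaf HEPs are illustrative placeholders, not the Handbook's actual data base):

```python
# Walk a decision tree, prompting the user at each node, until a leaf
# holding a generic HEP is reached. Tree shape and values are invented.

tree = {
    "question": "Is the action (a) use of displays, (b) manipulation of "
                "control, (c) diagnosis, (d) administrative?",
    "a": {
        "question": "What class is the display: (a) unannunciated, (b) annunciated?",
        "a": {"hep": 0.01},
        "b": {"hep": 0.003},  # e.g. one relevant annunciated alarm
    },
    "b": {"hep": 0.005},
    "c": {"hep": 0.02},
    "d": {"hep": 0.01},
}

def assign_hep(node):
    while "hep" not in node:
        answer = input(node["question"] + " ").strip().lower()
        child = node.get(answer)
        if isinstance(child, dict):
            node = child  # descend one level
        # otherwise the same question is asked again
    return node["hep"]

if __name__ == "__main__":
    print("HEP =", assign_hep(tree))
```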

6.6.1 Effects of performance shaping factors

In order to estimate the relative effect of Performance Shaping Factors (PSFs) on each human error probability, the program will at this stage consider only three PSFs: experience, stress and task type. Stress may be taken as a function of the time available to act. The format shown in Fig. 7 is proposed to assist the user in selecting the appropriate level of each PSF; the example chosen is for the error of omitting to stop the pump (Step C).

OMIT STOP THE PUMP - Is the operator:
  a - skilled (≥ 6 months experience)?
  b - novice (< 6 months experience)?
** a - Is the level of stress:
  a - very low?
  b - optimum?
  c - moderately high?
  d - extremely high?
** c - Is the task (action):
  a - step-by-step (routine)?
  b - dynamic?
** a
> HEP = 0.006

Fig. 7. Effects of PSFs.
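One plausible way to apply the three PSFs is to scale a nominal HEP by a multiplier per selected level, as sketched below (Python; the nominal HEP and all multiplier values are assumptions chosen for illustration, although THERP does tabulate experience and stress modifiers of this general kind; these particular values happen to reproduce the 0.006 of Fig. 7):

```python
# Scale a nominal HEP by one multiplier per PSF level. All multiplier
# values are hypothetical stand-ins for the program's internal tables.

psf_multipliers = {
    "experience": {"skilled": 1.0, "novice": 2.0},
    "stress": {"very_low": 2.0, "optimum": 1.0,
               "moderately_high": 2.0, "extremely_high": 5.0},
    "task_type": {"step_by_step": 1.0, "dynamic": 2.0},
}

def adjust_hep(nominal_hep, experience, stress, task_type):
    """Apply the selected level of each PSF to the nominal HEP, capped at 1."""
    hep = nominal_hep
    hep *= psf_multipliers["experience"][experience]
    hep *= psf_multipliers["stress"][stress]
    hep *= psf_multipliers["task_type"][task_type]
    return min(hep, 1.0)

# Step C selections from Fig. 7: skilled, moderately high stress, routine task.
print(adjust_hep(0.003, "skilled", "moderately_high", "step_by_step"))  # 0.006
```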

6.6.2 Effects of dependence

The program will now consider the degree of dependence between operators in the control room in order to assess its effect on the HEPs. The user would have indicated earlier the number of operators in the control room and how many of them on a shift are involved directly with the tank filling tasks. The program will prompt the analyst to indicate the level of dependence between the two operators: high, moderate, low or zero. Once a level is selected, the program will calculate the Joint Human Error Probability (JHEP) for each step of the task. The HRA event tree is displayed again, showing the probabilities adjusted to reflect the effects of dependence.
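The dependence adjustment can be sketched with the conditional-probability equations given in the Handbook (Swain and Guttmann, 1983); how the program combines them per step is our assumption:

```python
# THERP conditional HEP of the second operator, given that the first has
# already failed, for each level of dependence (Swain and Guttmann, 1983).

def conditional_hep(hep, level):
    if level == "zero":
        return hep
    if level == "low":
        return (1 + 19 * hep) / 20
    if level == "moderate":
        return (1 + 6 * hep) / 7
    if level == "high":
        return (1 + hep) / 2
    raise ValueError(f"unknown dependence level: {level}")

def jhep(hep_a, hep_b, level):
    """Joint Human Error Probability: both operators fail on the same step."""
    return hep_a * conditional_hep(hep_b, level)

# Step C example with an assumed per-operator HEP of 0.006:
print(jhep(0.006, 0.006, "zero"))  # 3.6e-05
print(jhep(0.006, 0.006, "high"))  # ~0.003
```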

6.6.3 Effects of recovery factors

The last step in the human reliability analysis procedure is the estimation of the effects of recovery factors on the individual human error probabilities. The Recovery Factor (RF) is estimated in the same way as the PSF and dependence effects. It is noted that the overall probability of human success will increase as a result of taking RFs into account. In the example considered, the total probability of success increases from 0.9861 to 0.9987.
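Crediting recovery can be sketched as follows (Python; the recovery probabilities are invented, and the adjustment rule, multiplying each HEP by the probability that recovery fails, is our assumption about how the program applies RFs):

```python
# An error contributes only if it is not subsequently recovered, so each
# HEP is multiplied by the probability that recovery fails. RFs invented.

def apply_recovery(hep, p_recovery):
    return hep * (1.0 - p_recovery)

heps = {"B": 0.001, "C": 0.006, "D": 0.002, "F": 0.005}
recovery = {"B": 0.0, "C": 0.9, "D": 0.5, "F": 0.8}  # hypothetical RFs

adjusted = {step: apply_recovery(heps[step], recovery[step]) for step in heps}
print(adjusted)  # each adjusted HEP is no larger than the original
```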

6.7 Determination of system probability of success

Having calculated the HEP for each step of the task, and adjusted these probabilities to take into account the effects of PSFs, dependence and recovery factors, the program is now ready to present the data according to the analyst's requirements in three ways:
1. the weighted HEP for each step of the task;
2. the weighted HEP for a certain sequence of steps that could lead to a defined outcome (possible outcomes are shown in the error event tree in Fig. 4);
3. the total system success or failure probability for the human reliability analysis event tree shown in Fig. 6.

Fig. 8. HRA sequence probability (steps B, C, D and F; probability of success = 0.9861).

If, for example, steps B, C, D and F are requested for quantification, the program will calculate the weighted HEPs for that sequence as shown in Fig. 8. It can be seen from the above example that the operator is expected to successfully complete the sequence of steps B, C, D and F 987 times out of 1,000, which may or may not be acceptable in relation to the undesired event (loss of containment). The essential point here is that the program will provide the analyst with a number of options for the quantification of HEPs, depending on the requirements of the overall framework of probabilistic risk assessment.
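The sequence computation itself is a simple product of the per-step success probabilities, as the sketch below shows (Python; the HEPs are hypothetical values, chosen so that the product reproduces the 0.9861 quoted above):

```python
# Probability of completing a requested sequence of steps without error:
# the product of the per-step success probabilities (1 - HEP).
# The HEPs are hypothetical, tuned to match the 0.9861 in the text.

heps = {"B": 0.001, "C": 0.006, "D": 0.002, "F": 0.005}

def sequence_success(steps, heps):
    p = 1.0
    for step in steps:
        p *= 1.0 - heps[step]
    return p

print(round(sequence_success("BCDF", heps), 4))  # 0.9861
```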

7. PROBABILISTIC RISK ASSESSMENT (PRA)

The completed analytical procedure which the program provides can have many possible applications. One such application is the integration of human reliability within the overall framework of Probabilistic Risk Assessment (PRA).

The fault tree analysis shown in Fig. 9 illustrates how the operator errors identified in our example can influence the top event 'tank overflows'.

The quantitative analysis of human error probabilities and system hardware failures would identify particular aspects of the system in relation to the top event which would warrant improvements, and help in establishing priorities for action.

The top event probability of 0.129 (based on assumed data) is high and would not be acceptable. The example shows that human error is the major contributor to the top event probability. It is interesting to note that the relatively high probability of hardware failures was mitigated as a result of the operator monitoring the alarms. However, since there is an OR gate at the top of the fault tree, both the hardware failure and the human error probabilities must be reduced in order to reduce the top event probability to an acceptable level.
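The fault-tree arithmetic at the OR gate is straightforward, as the sketch below shows (Python; the two input probabilities follow the assumed data of Fig. 9, and independence of the inputs is assumed):

```python
# OR gate with independent inputs: the top event occurs unless every
# input event is absent, so P(top) = 1 - product(1 - p_i).

def or_gate(*probs):
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

p_pumping_system_fails = 0.002    # hardware branch (assumed data)
p_operator_fails_to_stop = 0.127  # human error branch (assumed data)

print(round(or_gate(p_pumping_system_fails, p_operator_fails_to_stop), 3))  # 0.129
```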

The example shows that mis-diagnosis is the most serious single error affecting the top event. The system reliability may be improved by incorporating high-level alarms and a pump trip. This, however, may not improve human reliability, as the operator will become reliant upon the control system, and the tank will ultimately overflow as a result of hardware failures. Human reliability can be improved by careful design of the process control system and by incorporating an alarm analysis system to reduce the likelihood of mis-diagnosis.

Fig. 9. Containment fault tree (top event 'storage tank overflows', P = 0.129; inputs include the pumping system failing to stop, P = 0.002, and the operator failing to stop the pump within 10 min, P = 0.127).

8. CONCLUDING REMARKS

The application of Expert Systems in the field of human reliability analysis promises to be of great value. The guidance through systematic procedures would reduce analytical errors of omission, and is expected to reduce the time and cost of human reliability analysis.

The proposed program presented in this paper is intended primarily to guide the analyst systematically through the qualitative modelling of human performance. The second part of the program is intended to quantify human error modes by allocating a probability to each error mode. The program can then provide the analyst with several options:
1. to determine the error mode probability for an individual task/step;
2. to determine the probability of a sequence of errors and assess the possible effects on the process, either inductively (starting from the error modes) or deductively (starting from the effects);
3. to select and quantify human error modes relevant to the type of probabilistic risk assessment study chosen, e.g. the top event of a fault tree analysis.

Although the quantification of human reliability is necessary for the overall PRA, it is crucial to have a better understanding of the nature of human errors. The emphasis in industry is on the qualitative analysis of human reliability, because the techniques available for quantification are still a matter of debate and research.

REFERENCES

Annett, J. et al., 1979. Task Analysis. Department of Employment, Training Information Paper 6. HMSO, London.

Embrey, D. and Humphreys, P., 1984. SLIM-MAUD. NUREG/CR-3518, March.

Flight International, 22 January 1975.

Hayes-Roth, F., 1985. Rule-based systems. Communications of the ACM, 28(9): 921-932.

Health and Safety Executive, 1984. Guidance on the safe use of programmable electronic systems. Draft document for consultation, HSE, June 1984, and a later issue in 1986.

Health and Safety Executive, 1986. Human Factors in Industrial Systems - Review of Reliability Analysis Techniques. Draft available from HSE, Chapel St., London.

Niide, K. et al., 1985. Some expert system experiments in process engineering. IChemE Symposium Series, 22.

Swain, A.D. and Guttmann, H.E., 1983. Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. Final Report, NUREG/CR-1278, August.