Human reliability
Human reliability (also known as human performance or HU) is related to the field of human factors and ergonomics, and refers to the reliability of humans in fields such as manufacturing, medicine and nuclear power. Human performance can be affected by many factors, including age, state of mind, physical health, attitude, emotions, susceptibility to common mistakes and errors, and cognitive biases.
Human reliability is very important due to the contributions of humans to the resilience of systems and to possible adverse consequences of human errors or oversights, especially when the human is a crucial part of a large socio-technical system, as is common today. User-centered design and error-tolerant design are just two of many terms used to describe efforts to make technology better suited to operation by humans.
Common traps of human nature
People tend to overestimate their ability to maintain control when they are doing work. The common characteristics of human nature addressed below are especially accentuated when work is performed in a complex work environment.[1]
Stress - The problem with stress is that it can accumulate and overpower a person, thus becoming detrimental to performance.
Avoidance of mental strain - Humans are reluctant to engage in lengthy concentrated thinking, as it requires high levels of attention for extended periods.
The mental biases, or shortcuts, often used to reduce mental effort and expedite decision-making include:
- Assumptions – A condition taken for granted or accepted as true without verification of the facts.
- Habit – An unconscious pattern of behavior acquired through frequent repetition.
- Confirmation bias – The reluctance to abandon a current solution or hypothesis, even in the face of contrary evidence.
- Similarity bias – The tendency to recall solutions from situations that appear similar.
- Frequency bias – A gamble that a frequently used solution will work.
- Availability bias – The tendency to settle on solutions or courses of action that readily come to mind.
Limited working memory - The mind's short-term memory is the “workbench” for problem solving and decision-making, and it can hold only a limited amount of information at a time.
Limited attention resources - The limited ability to concentrate on two or more activities challenges the ability to process information needed to solve problems.
Mind-set - People tend to focus more on what they want to accomplish (a goal) and less on what needs to be avoided, because human beings are primarily goal-oriented by nature. As such, people tend to “see” only what the mind expects, or wants, to see.
Difficulty seeing one's own error - Individuals, especially when working alone, are particularly susceptible to missing errors.
Limited perspective - Humans cannot see all there is to see. The inability of the human mind to perceive all facts pertinent to a decision challenges problem-solving.
Susceptibility to emotional/social factors - Anger and embarrassment adversely influence team and individual performance.
Fatigue - People get tired. Physical, emotional, and mental fatigue can lead to error and poor judgment.
Presenteeism - Some employees feel a need to be present at the workplace despite a diminished capacity to perform their jobs due to illness or injury.
Analysis techniques
A variety of methods exist for human reliability analysis (HRA).[2][3] Two general classes of methods are those based on probabilistic risk assessment (PRA) and those based on a cognitive theory of control.
PRA-based techniques
One method for analyzing human reliability is a straightforward extension of probabilistic risk assessment (PRA): in the same way that equipment can fail in a power plant, so can a human operator commit errors. In both cases, an analysis (functional decomposition for equipment and task analysis for humans) articulates the level of detail at which failure or error probabilities can be assigned. This is the basic idea behind the Technique for Human Error Rate Prediction (THERP).[4] THERP is intended to generate human error probabilities that can be incorporated into a PRA. The Accident Sequence Evaluation Program (ASEP) human reliability procedure is a simplified form of THERP; an associated computational tool is the Simplified Human Error Analysis Code (SHEAN).[5] More recently, the US Nuclear Regulatory Commission has published the Standardized Plant Analysis Risk - Human Reliability Analysis (SPAR-H) method to take account of the potential for human error.[6][7]
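To make the quantification concrete, the sketch below combines per-step human error probabilities (HEPs) into an overall task failure probability in the PRA style described above. The step names, HEP values, and recovery probabilities are hypothetical illustrations rather than values from the THERP handbook, and the steps are assumed to be independent, an assumption THERP itself relaxes through dependence modeling and performance shaping factors.

```python
# Minimal sketch of PRA-style human error quantification.
# All step names and probabilities are hypothetical; THERP's
# published tables (NUREG/CR-1278) are not reproduced here.

steps = [
    # (step description, nominal HEP, probability that a check recovers the error)
    ("read procedure step",    0.003, 0.5),
    ("select correct control", 0.001, 0.9),
    ("verify indicator",       0.010, 0.0),  # no independent check on this step
]

def task_failure_probability(steps):
    """P(task fails) = 1 - product of per-step success probabilities,
    where a step fails only if an error occurs and is not recovered."""
    p_success = 1.0
    for _description, hep, p_recovery in steps:
        p_unrecovered = hep * (1.0 - p_recovery)
        p_success *= 1.0 - p_unrecovered
    return 1.0 - p_success

print(f"Overall task HEP: {task_failure_probability(steps):.5f}")
```

The resulting probability would then enter a PRA as a basic event in the relevant fault or event tree, alongside equipment failure probabilities.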
Cognitive control based techniques
Erik Hollnagel has developed this line of thought in his work on the Contextual Control Model (COCOM) [8] and the Cognitive Reliability and Error Analysis Method (CREAM).[9] COCOM models human performance as a set of four control modes: strategic (based on long-term planning), tactical (based on procedures), opportunistic (based on present context), and scrambled (random). It also proposes a model of how transitions between these control modes occur. This model of control mode transition consists of a number of factors, including the human operator's estimate of the outcome of the action (success or failure), the time remaining to accomplish the action (adequate or inadequate), and the number of simultaneous goals of the human operator at that time. CREAM is a human reliability analysis method that is based on COCOM.
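As an illustration of the structure of such a model (not of Hollnagel's published transition criteria), the sketch below encodes the four COCOM control modes and a simple transition rule driven by the three factors just listed; the specific thresholds and rules are assumptions made for the example.

```python
# Illustrative COCOM-style control mode selection. The four modes follow
# Hollnagel's COCOM; the transition thresholds and rules below are
# simplifying assumptions for illustration, not the published model.

from enum import Enum

class ControlMode(Enum):
    STRATEGIC = "strategic"          # long-term planning
    TACTICAL = "tactical"            # procedure-following
    OPPORTUNISTIC = "opportunistic"  # driven by the immediate context
    SCRAMBLED = "scrambled"          # essentially random choice of action

def next_mode(last_action_succeeded: bool,
              time_adequate: bool,
              simultaneous_goals: int) -> ControlMode:
    """Pick a control mode from outcome estimate, available time, and goal load."""
    if not time_adequate:
        # Time pressure degrades control; a recent failure degrades it further.
        if not last_action_succeeded:
            return ControlMode.SCRAMBLED
        return ControlMode.OPPORTUNISTIC
    if simultaneous_goals > 3:  # hypothetical threshold on goal load
        return ControlMode.OPPORTUNISTIC
    if last_action_succeeded:
        return ControlMode.STRATEGIC
    return ControlMode.TACTICAL  # fall back on procedures after a failure

print(next_mode(last_action_succeeded=False, time_adequate=False,
                simultaneous_goals=2))  # ControlMode.SCRAMBLED
```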
Related techniques
Related techniques in safety engineering and reliability engineering include failure mode and effects analysis, hazard and operability study (HAZOP), fault tree analysis, and SAPHIRE (Systems Analysis Programs for Hands-on Integrated Reliability Evaluations).
Human Factors Analysis and Classification System (HFACS)
The Human Factors Analysis and Classification System (HFACS) was developed initially as a framework to understand the role of "human error" in aviation accidents.[10][11] It is based on James Reason's Swiss cheese model of human error in complex systems. HFACS distinguishes between the "active failures" of unsafe acts, and "latent failures" of preconditions for unsafe acts, unsafe supervision, and organizational influences. These categories were developed empirically on the basis of many aviation accident reports.
"Unsafe acts" are performed by the human operator "on the front line" (e.g., the pilot, the air traffic controller, the driver). Unsafe acts can be either errors (in perception, decision making or skill-based performance) or violations (routine or exceptional). The errors here are similar to the above discussion. Violations are the deliberate disregard for rules and procedures. As the name implies, routine violations are those that occur habitually and are usually tolerated by the organization or authority. Exceptional violations are unusual and often extreme. For example, driving 60 mph in a 55-mph zone speed limit is a routine violation, but driving 130 mph in the same zone is exceptional.
There are two types of preconditions for unsafe acts: those that relate to the human operator's internal state and those that relate to the human operator's practices or ways of working. Adverse internal states include those related to physiology (e.g., illness) and mental state (e.g., mental fatigue, distraction). A third kind of adverse internal state is a mismatch between the operator's ability and the task demands; for example, the operator may be unable to make visual judgments or react quickly enough to support the task at hand. Poor operator practices are another type of precondition for unsafe acts. These include poor crew resource management (issues such as leadership and communication) and poor personal readiness practices (e.g., violating crew rest requirements in aviation).
Four types of unsafe supervision are: inadequate supervision; planned inappropriate operations; failure to correct a known problem; and supervisory violations.
Organizational influences include those related to resources management (e.g., inadequate human or financial resources), organizational climate (structures, policies, and culture), and organizational processes (such as procedures, schedules, oversight).
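The four tiers and their categories lend themselves to a simple data structure; the sketch below encodes the taxonomy described above as a lookup table and validates a hypothetical classification against it. The category groupings follow the preceding paragraphs; the example event is invented for illustration.

```python
# The HFACS tiers, from active failures up to latent organizational
# conditions, as a lookup table. Category names follow the framework
# described above; the example classification is hypothetical.

HFACS = {
    "unsafe acts": ["errors", "violations"],
    "preconditions for unsafe acts": ["adverse internal states",
                                      "poor operator practices"],
    "unsafe supervision": ["inadequate supervision",
                           "planned inappropriate operations",
                           "failure to correct a known problem",
                           "supervisory violations"],
    "organizational influences": ["resource management",
                                  "organizational climate",
                                  "organizational processes"],
}

def classify(tier: str, category: str) -> str:
    """Validate a (tier, category) pair against the HFACS taxonomy."""
    if category not in HFACS.get(tier, []):
        raise ValueError(f"{category!r} is not a category of {tier!r}")
    return f"{tier} -> {category}"

# A hypothetical finding from an accident report:
print(classify("unsafe supervision", "failure to correct a known problem"))
```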
See also
- Absolute probability judgement – Technique used in the field of human reliability assessment
- ATHEANA – Technique used in the field of human reliability assessment (A Technique for Human Event Analysis)
- Human error assessment and reduction technique (HEART) – Technique used in the field of human reliability assessment and error identification
- Influence diagrams approach
- Latent human error – Term used in safety work and accident prevention
- Team error – Type of error
- TESEO – Technique used in the field of human reliability assessment (Tecnica Empirica Stima Errori Operatori)
- Incident pit – Conceptual model from diving for explaining incident development and recovery
Footnotes
- DOE-HDBK-1028-2009, Human Performance Improvement Handbook, Volume 1: Concepts and Principles. https://www.standards.doe.gov/standards-documents/1000/1028-BHdbk-2009-v1/@@images/file
- Kirwan and Ainsworth, 1992
- Kirwan, 1994
- Swain & Guttmann, 1983
- Simplified Human Error Analysis Code (Wilson, 1993)
- SPAR-H
- Gertman et al., 2005
- Hollnagel, 1993
- Hollnagel, 1998
- Shappell and Wiegmann, 2000
- Wiegmann and Shappell, 2003
References
- Gertman, D. L.; Blackman, H. S. (2001). Human reliability and safety analysis data handbook. Wiley.
- Gertman, D., Blackman, H., Marble, J., Byers, J. and Smith, C. (2005). The SPAR-H human reliability analysis method. NUREG/CR-6883. Idaho National Laboratory, prepared for U. S. Nuclear Regulatory Commission.
- Cappelli, M., Gadomski, A. M. and Sepielli, M. (2011). Human Factors in Nuclear Power Plant Safety Management: A Socio-Cognitive Modeling Approach using TOGA Meta-Theory. Proceedings of International Congress on Advances in Nuclear Power Plants. Nice (FR). SFEN (Société Française d'Energie Nucléaire).
- Hollnagel, E. (1993). Human reliability analysis: Context and control. Academic Press.
- Hollnagel, E. (1998). Cognitive reliability and error analysis method: CREAM. Elsevier.
- Hollnagel, E.; Amalberti, R. (2001). The Emperor's New Clothes, or whatever happened to "human error"? Invited keynote presentation at 4th International Workshop on Human Error, Safety and System Development. Linköping, June 11–12, 2001.
- Hollnagel, E., Woods, D. D., and Leveson, N. (Eds.) (2006). Resilience engineering: Concepts and precepts. Ashgate.
- Jones, P. M. (1999). Human error and its amelioration. In Handbook of Systems Engineering and Management (A. P. Sage and W. B. Rouse, eds.), 687-702. Wiley.
- Kirwan, B. (1994). A Guide to Practical Human Reliability Assessment. Taylor & Francis.
- Kirwan, B. and Ainsworth, L. (Eds.) (1992). A guide to task analysis. Taylor & Francis.
- Norman, D. (1988). The psychology of everyday things. Basic Books.
- Reason, J. (1990). Human error. Cambridge University Press.
- Roth, E.; et al. (1994). An empirical investigation of operator performance in cognitively demanding simulated emergencies. NUREG/CR-6208, Westinghouse Science and Technology Center. Report prepared for Nuclear Regulatory Commission.
- Sage, A. P. (1992). Systems engineering. Wiley.
- Senders, J.; Moray, N. (1991). Human error: Cause, prediction, and reduction. Lawrence Erlbaum Associates.
- Shappell, S.; Wiegmann, D. (2000). The human factors analysis and classification system - HFACS. DOT/FAA/AM-00/7, Office of Aviation Medicine, Federal Aviation Administration, Department of Transportation.
- Swain, A. D., & Guttman, H. E. (1983). Handbook of human reliability analysis with emphasis on nuclear power plant applications. NUREG/CR-1278 (Washington D.C.).
- Wallace, B.; Ross, A. (2006). Beyond human error. CRC Press.
- Wiegmann, D.; Shappell, S. (2003). A human error approach to aviation accident analysis: The human factors analysis and classification system. Ashgate.
- Wilson, J.R. (1993). SHEAN (Simplified Human Error Analysis code) and automated THERP. United States Department of Energy Technical Report Number WINCO--11908.
- Woods, D. D. (1990). Modeling and predicting human error. In J. Elkind, S. Card, J. Hochberg, and B. Huey (Eds.), Human performance models for computer-aided engineering (248-274). Academic Press.
- Federal Aviation Administration. 2009 electronic code of regulations. Retrieved September 25, 2009, from https://web.archive.org/web/20120206214308/http://www.airweb.faa.gov/Regulatory_and_Guidance_library/rgMakeModel.nsf/0/5a9adccea6c0c4e286256d3900494a77/$FILE/H3WE.pdf
Further reading
- Autrey, T.D. (2015). 6-Hour Safety Culture: How to Sustainably Reduce Human Error and Risk (and do what training alone can't possibly do). Human Performance Association. Archived from the original on 2021-04-11. Retrieved 2020-08-21.
- Davies, J.B., Ross, A., Wallace, B. and Wright, L. (2003). Safety Management: a Qualitative Systems Approach. Taylor and Francis.
- Dekker, S.W.A. (2005). Ten Questions About Human Error: a new view of human factors and systems safety. Lawrence Erlbaum Associates. Archived from the original on 2012-12-11. Retrieved 2010-05-24.
- Dekker, S.W.A. (2006). The Field Guide to Understanding Human Error. Ashgate. Archived from the original on 2012-03-06. Retrieved 2010-05-24.
- Dekker, S.W.A. (2007). Just Culture: Balancing Safety and Accountability. Ashgate. Archived from the original on 2012-03-06. Retrieved 2010-05-24.
- Dismukes, R. K., Berman, B. A., and Loukopoulos, L. D. (2007). The limits of expertise: Rethinking pilot error and the causes of airline accidents. Ashgate.
- Forester, J., Kolaczkowski, A., Lois, E., and Kelly, D. (2006). Evaluation of human reliability analysis methods against good practices. NUREG-1842 Final Report. U. S. Nuclear Regulatory Commission.
- Goodstein, L. P., Andersen, H. B., and Olsen, S. E. (Eds.) (1988). Tasks, errors, and mental models. Taylor and Francis.
- Grabowski, M.; Roberts, K. H. (1996). "Human and organizational error in large scale systems". IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans. 26: 2–16. doi:10.1109/3468.477856.
- Greenbaum, J. and Kyng, M. (Eds.) (1991). Design at work: Cooperative design of computer systems. Lawrence Erlbaum Associates.
- Harrison, M. (2004). Human error analysis and reliability assessment. Workshop on Human Computer Interaction and Dependability, 46th IFIP Working Group 10.4 Meeting, Siena, Italy, July 3–7, 2004.
- Hollnagel, E. (1991). The phenotype of erroneous actions: Implications for HCI design. In G. W. R. Weir and J. L. Alty (Eds.), Human-computer interaction and complex systems. Academic Press.
- Hutchins, E. (1995). Cognition in the wild. MIT Press.
- Kahneman, D., Slovic, P. and Tversky, A. (Eds.) (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press.
- Leveson, N. (1995). Safeware: System safety and computers. Addison-Wesley.
- Morgan, G. (1986). Images of Organization. Sage.
- Mura, S. S. (1983). Licensing violations: Legitimate violations of Grice's conversational principle. In R. Craig and K. Tracy (Eds.), Conversational coherence: Form, structure, and strategy (101-115). Sage.
- Perrow, C. (1984). Normal accidents: Living with high-risk technologies. Basic Books. ISBN 9780465051441.
- Rasmussen, J. (1983). Skills, rules, and knowledge: Signals, signs, and symbols and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 257-267.
- Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. Wiley.
- Silverman, B. (1992). Critiquing human error: A knowledge-based human-computer collaboration approach. Academic Press.
- Swets, J. (1996). Signal detection theory and ROC analysis in psychology and diagnostics: Collected papers. Lawrence Erlbaum Associates.
- Tversky, A.; Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
- Vaughan, D. (1996). The Challenger launch decision: Risky technology, culture, and deviance at NASA. University of Chicago Press. ISBN 9780226851761.
- Woods, D. D., Johannesen, L., Cook, R., and Sarter, N. (1994). Behind human error: Cognitive systems, computers, and hindsight. CSERIAC SOAR Report 94-01. Crew Systems Ergonomics Information Analysis Center, Wright-Patterson Air Force Base, Ohio.
- Wu, S., Hrudey, S., French, S., Bedford, T., Soane, E. and Pollard, S. (2009). "A role for human reliability analysis (HRA) in preventing drinking water incidents and securing safe drinking water" (PDF). Water Research. 43 (13): 3227–3238. doi:10.1016/j.watres.2009.04.040. PMID 19493557.
- CCPS, Guidelines for Preventing Human Error. This book describes qualitative and quantitative methodologies for predicting human error: a qualitative methodology, SPEAR (Systems for Predicting Human Error and Recovery), and quantitative methodologies including THERP.
External links
Research labs
- Erik Hollnagel at the Crisis and Risk Research Centre at MINES ParisTech
- Human Reliability Analysis Archived 2011-10-15 at the Wayback Machine at the US Sandia National Laboratories
- Center for Human Reliability Studies at the US Oak Ridge National Laboratory
- Flight Cognition Laboratory at NASA Ames Research Center
- David Woods at the Cognitive Systems Engineering Laboratory at The Ohio State University
- Sidney Dekker's Leonardo da Vinci Laboratory for Complexity and Systems Thinking, Lund University, Sweden
Media coverage
- “How to Avoid Human Error in IT” Archived 2016-03-04 at the Wayback Machine
- “Human Reliability. We break down just like machines”, Industrial Engineer, November 2004, 36(11): 66