Book contents
- Frontmatter
- Contents
- Illustrations
- Tables
- Preface
- Part I Introduction
- Part II Theoretical issues and background
- Part III System analysis and quantification
- 6 Fault and event trees
- 7 Fault trees – analysis
- 8 Dependent failures
- 9 Reliability data bases
- 10 Expert opinion
- 11 Human reliability
- 12 Software reliability
- Part IV Uncertainty modeling and risk measurement
- Bibliography
- Index
11 - Human reliability
from Part III - System analysis and quantification
Published online by Cambridge University Press: 05 June 2012
Introduction
In many complex systems involving interaction between humans and machines, the largest contribution to the probability of system failure comes from basic failures or initiating events caused by humans. Kirwan ([Kirwan, 1994], Appendix 1) reviews twelve accidents and one incident occurring between 1966 and 1986, including the Space Shuttle accident and Three Mile Island, all of which were largely caused by human error. The realization of the extent of human involvement in major accidents has, in the Netherlands, led to the choice of a completely automated decision system for closing and opening that country's newest storm surge barrier.
Since humans can both initiate and mitigate accidents, it is clear that the influence of humans on total system reliability must be considered in any complete probabilistic risk analysis.
The first human reliability assessment was made as part of the final version of the WASH-1400 study. At that time the methodology was largely restricted to studies of the failure probability for elementary tasks. A human error probability (HEP) is the probability that an error occurs when carrying out a given task. In many situations in which human reliability is an important factor, the operator has to interpret (possibly incorrect) instrumentation data, make deductions about the problems at hand, and take decisions involving billion-dollar trade-offs under conditions of high uncertainty.
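The HEP definition above can be illustrated with a minimal sketch. This is not a method from the book: the function names are illustrative, and the Jeffreys Beta(1/2, 1/2) prior in the Bayesian variant is an assumption chosen to keep the estimate sensible when few errors have been observed.

```python
from math import isclose


def hep_point_estimate(errors: int, opportunities: int) -> float:
    """Empirical HEP: observed errors per task opportunity."""
    if opportunities <= 0:
        raise ValueError("need at least one opportunity")
    return errors / opportunities


def hep_posterior_mean(errors: int, opportunities: int,
                       alpha: float = 0.5, beta: float = 0.5) -> float:
    """Posterior mean of the HEP under a Beta(alpha, beta) prior
    (Jeffreys prior by default) with binomially distributed error counts.
    The prior is an illustrative assumption, not the book's choice."""
    return (errors + alpha) / (opportunities + alpha + beta)


# Example: 3 errors observed in 1000 executions of an elementary task.
print(hep_point_estimate(3, 1000))   # 0.003
print(hep_posterior_mean(3, 1000))
```

The Bayesian variant matters for rare errors: with zero observed errors the raw ratio gives an HEP of exactly 0, whereas the posterior mean stays small but positive, reflecting residual uncertainty.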
Probabilistic Risk Analysis: Foundations and Methods, pp. 218–239. Publisher: Cambridge University Press. Print publication year: 2001.