Book contents
- Frontmatter
- Contents
- List of Figures and Tables
- Preface and Acknowledgment
- Acronyms and Abbreviations
- Introduction: How Teamwork Is More Important than Technical Prowess
- 1 Rethinking Normal Accidents and Human Error – A New View of Crisis Management
- 2 USS Greeneville – The Downside of Charismatic Leadership
- 3 The Hillsborough Football Disaster – Explosive Team Chemistry
- 4 American Airlines Flight 587 – Latent Failures Align
- 5 Bristol Royal Infirmary – The Price of Organizational Overreach
- 6 US Airways Flight 1549 – Thinking through Crisis
- 7 Team Resource Management
- Notes
- Bibliography
- Index
1 - Rethinking Normal Accidents and Human Error – A New View of Crisis Management
Published online by Cambridge University Press: 05 June 2012
Summary
In 1984, Charles Perrow published Normal Accidents, one of the first texts to consider the impact of our increasing use of technology in high-risk fields and to analyze its implications for everyday life. Because risk can never be entirely eliminated, Perrow argued, system designers can neither predict every possible failure scenario nor create perfect contingency plans for front-line operators. In other words, no matter how effectively conventional safety devices such as warning systems, overflow valves, or automatic shutdown features perform, some accidents are unpredictable because some failures are simply not ‘conventional.’ Particularly challenging is the fact that as one unexpected failure stresses different parts of the system in unusual ways, compound failures emerge with increasingly unanticipated results. In fact, these types of unpredictable, compound failures are so inevitable, Perrow argues, that we should call them ‘normal accidents’ – not because of their frequency, but because such accidents are the ‘normal’ consequence of ever-evolving technologies generating increasingly complex operating systems that stress team operations and sense making in unpredictable ways.
As a result, one of the major factors precipitating compound failures in complex systems is the inability of operators, trained to respond ‘by the book,’ to evolve their mental picture of the system failure as new data emerge. Such a failure is either so catastrophic or so complex that it overwhelms people's sense-making capacities. It becomes literally incomprehensible.
- Type
- Chapter
- Information
- Thinking Through Crisis: Improving Teamwork and Leadership in High-Risk Fields, pp. 7–13
- Publisher: Cambridge University Press
- Print publication year: 2011