So far, we have focused mainly upon the causes of errors, that is, upon the conditions that precede their occurrence and on the cognitive mechanisms that shape their more predictable forms. For the remaining chapters, the emphasis will shift towards their consequences, beginning here with a consideration of the processes involved in the detection of errors and in recovery from them.
To err is human. No matter how well we come to understand the psychological antecedents of error, or how sophisticated the cognitive ‘prostheses’ – devices to aid memory or decision making – that we eventually provide for those in high-risk occupations, errors will still occur. Errors are, as we have seen, the inevitable and usually acceptable price human beings have to pay for their remarkable ability to cope with very difficult informational tasks quickly and, more often than not, effectively. Where machines ‘botch up’, humans ‘degrade gracefully’ (Jordan, 1963). But, as we shall discuss further in the ensuing two chapters, the centralised supervisory control of complex, hazardous, opaque, tightly-coupled and incompletely understood technologies can, on occasion, transform these normally adaptive properties into dangerous liabilities.
If it is impossible to guarantee the elimination of errors, then we must discover more effective ways of mitigating their consequences in unforgiving situations. Many have suggested that this is really the only sensible way of combating the human error problem in high-risk technologies.