  • Print publication year: 1990
  • Online publication date: June 2012

5 - A design for a fallible machine

Summary

This chapter sketches out one possible answer to the following question: What kind of information-handling device could operate correctly for most of the time, but also produce the occasional wrong response characteristic of human behaviour? Of special interest are those error forms that recur so often that any adequate model of human action must explain not only correct performance, but also these more predictable varieties of fallibility.

Most of the component parts of this ‘machine’ have been discussed at earlier points in this book. The purpose of this chapter is to assemble them in a concise and internally consistent fashion.

It is called a ‘fallible machine’ rather than a theoretical framework because it is expressed in a potentially computable form. That is, it borrows from Artificial Intelligence (AI) the aim of making an information-handling machine do “the sorts of things that are done by human minds” (Boden, 1987, p. 48). As Boden indicates, the advantages of this approach are twofold: “First, it enables one to express richly structured psychological theories in a rigorous fashion (for everything in the program has to be precisely specified, and all its operations have to be made explicit); and secondly, it forces one to suggest specific hypotheses about precisely how a psychological change can come about.”

The description of the ‘fallible machine’ is in two parts. In the first seven sections of the chapter, it is presented in a notional, nonprogrammatic form.

Human Error
  • Online ISBN: 9781139062367
  • Book DOI: https://doi.org/10.1017/CBO9781139062367