This chapter sketches out one possible answer to the following question: What kind of information-handling device could operate correctly most of the time, yet also produce the occasional wrong responses characteristic of human behaviour? Of special interest are those error forms that recur so often that any adequate model of human action must explain not only correct performance, but also these more predictable varieties of fallibility.
Most of the component parts of this ‘machine’ have been discussed at earlier points in this book. The purpose of this chapter is to assemble them in a concise and internally consistent fashion.
It is called a ‘fallible machine’ rather than a theoretical framework because it is expressed in a potentially computable form. That is, it borrows from Artificial Intelligence (AI) the aim of making an information-handling machine do “the sorts of things that are done by human minds” (Boden, 1987, p. 48). As Boden (1987, p. 48) indicates, the advantages of this approach are twofold: “First, it enables one to express richly structured psychological theories in a rigorous fashion (for everything in the program has to be precisely specified, and all its operations have to be made explicit); and secondly, it forces one to suggest specific hypotheses about precisely how a psychological change can come about.”
The description of the ‘fallible machine’ is in two parts. In the first seven sections of the chapter, it is presented in a notional, nonprogrammatic form.