Cognitive neuropsychology is too young a field to have an accepted set of methods, let alone a training or apprenticeship procedure common to different centres of research. So those who come to work or study in the field tend to continue using the principles of the disciplines from which they arrived, and these are many and varied! However, two main approaches can be distinguished.
For the majority – those arriving from the neurosciences and some branches of psychology, particularly traditional human experimental psychology and those areas of clinical neuropsychology where test batteries are widely used – cognitive neuropsychology as represented in the earlier chapters of the book must at first seem to be a field fixed in a nineteenth-century mould. There tends to be little equipment, hard neuroscience evidence on patients tends to be ignored, and in its concentration on the individual case it uses methods that seem highly idiosyncratic if not positively dubious. For readers of this persuasion, it will be obvious that theoretical inferences based on single cases, and the lack of discussion of hard neuroscience evidence such as lesion localisation, need to be justified.
There is, however, an increasing minority who have arrived from other parts of cognitive psychology and the speech sciences. For some of them, the emphasis on the individual case, a rejection of the group study, and a lack of concern with the neurological basis of behaviour are becoming almost elements of a creed.
One of the major causes of the sea-change in psychology that occurred in the 1960s was that the nature and organisation of knowledge, a topic that had been virtually dormant, began to become a highly active area of research. This increase of interest in the field did not, however, lead to any great degree of agreement. Views became polarised on a range of issues. Are the systems that mediate knowledge in some sense distinct from other cognitive systems, particularly those involving the processes that underlie our experience of remembering? Are different types of knowledge the province of distinct semantic systems, or does all knowledge depend on a system that has a common operating principle? Is knowledge mediated by the operation of networks or schemata? Is it based on individual instances, prototypes, or what? Is there a special status for so-called basic level concepts (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976) – the highest level at which all members have many perceptual attributes in common – for example, dog, knife, and apple?
The wealth of experiments that have been carried out on normal subjects has no counterpart for neurological patients. Yet the findings that have been obtained are surprising and highly counterintuitive. In this chapter, I will not attempt a survey.
The cognitive neuropsychologist shares a conceptual problem with the archaeologist – to infer from a changed system the properties of the original one. Archaeologists have recently begun to consider the inference problem from the other perspective (for a review, see Hodder, 1982). What physical legacy, they have asked, would a society of the type being studied in fact leave? As an illustration of the dangers of making too direct an inference from the data, consider what could be inferred in future about recent Eskimo settlements from the tool remains that would be likely to be left. One would, for instance, find it very difficult to deduce the existence of some of the tools most valuable to the society. The use of other easily made or frequently broken artefacts would be simple to observe, but tools such as bone spears are highly ‘curated’: carried round, cared for, and reused, and if they are lost or discarded, this normally occurs away from the home site (Binford, 1976). Other things would be left behind, but probably not bone spears! Thus there would be real dangers of theorising inappropriately about the society from what was left. To lessen this danger, there has developed a new field intermediate between archaeology and anthropology – ethnoarchaeology – which studies what traces existing societies would or would not leave.
That neuropsychology might be subject to an analogous danger is illustrated by the development of ideas on dissociations.
Interest in neuropsychology has increased greatly over the past 20 years. This has mainly occurred for direct clinical reasons, such as the increase in the ability of neuropsychological investigations to assess the crippling problems in thought, memory, and language that can result from brain damage. There is, though, a second reason for the increase of interest in the subject. The human brain is still the organ we understand least well, and the process by which its highest function – cognition – operates remains mysterious. The dramatic effects of brain damage appear to provide valuable evidence about how the systems underlying cognition operate. The aim of this book is to assess this evidence. Is neuropsychological evidence of any real value in understanding normal cognition? If so, what form or forms should this evidence take, and what substantive conclusions can be drawn?
The relevance of neuropsychological findings remains controversial because neuropsychology – in a somewhat parallel fashion to psychology – has rejected many of its former doctrines in virtually every generation. The approach that is the subject matter of this book, cognitive neuropsychology, is hardly a generation old. In many respects, cognitive neuropsychology seems very healthy. Interest in it is increasing rapidly, and surprising findings are frequently being made. Yet its practitioners use conflicting methodologies, each of which balances on a set of barely examined assumptions.
In the 1970s, I wrote two articles (Shallice, 1979a, 1979b) that assessed, in a positive fashion, a methodological approach then unfashionable within neuropsychology – the single-case study.
There is an even more severe conceptual difficulty for the modular view of mind than how it can operate efficiently. The efficiency problem can at least be posed in a ‘mechanistic’ framework closely related to the framework in which ‘modularity’ itself is expressed. A more complex issue is why the human cognitive-processing system, which is apparently modular, should have the property of being conscious, unlike most modular systems – for example, present-day complex machines.
The obvious strategy within a modular approach is to identify some aspects of the operation of some particular module – say, its input – as conscious experience. However, one is then faced with the question of what could be so special about the processing in that module as to give its input such exceptional status. No real progress appears to have been made. One appears merely to be taking the first step on the road to infinite regress.
The situation is worse than it appears. Not only is there no apparent line of attack on how and why a modular system might be conscious, but an explanation of consciousness within the conceptual framework of modularity would probably need to be ‘functionalist’ (Putnam, 1960). In other words, consciousness would correspond to some ‘system-level’ property (i.e. an information-processing characteristic) of the brain and not to some aspect of its material constituents.
The modular model of the mind that is suggested by cognitive neuropsychology research contains a conceptual lacuna. The existence of many special-purpose processing systems, each of which can operate autonomously, would not seem sufficient to produce coherent and effective operation for the whole system. Does this function itself require special-purpose units?
Another aspect of the functional architecture, in addition to modularity, needs to be considered in responding to this question. It has been widely assumed in theorising on cognitive processes that all the routine cognitive and motor skills that we have are controlled by more or less program-like entities, such as ‘productions’ (e.g. Newell & Simon, 1972) or action or thought ‘schemata’ (Schmidt, 1975; Rumelhart & Ortony, 1977). It is presumed that just as the mind contains representations of a finite but large and extensible set of words, so it contains a large but finite set of action or thought schemata. What would any individual thought or action schema control? There would be an enormous variety of operations. Take, for instance, how to use a table knife for cutting, for pushing, or for spreading food; how to subtract one number from another; how to rotate an object mentally; and so on. Moreover, schema control would be on multiple levels; so the schema for making a sandwich could call those for cutting bread and for spreading butter.
Separate or Common Input and Output Processes: Are the Two Empirically Distinguishable?
The last three chapters have been concerned with the application of the cognitive neuropsychology method to whole domains, not just with the isolation of individual subsystems. Yet if some form of modularity framework is assumed as a general design principle for cognition, the conclusion that the orthographic, phonological, and semantic analyses of words should be conducted by functionally distinct subsystems is not too surprising. The sights, sounds, and meanings of words are phenomenologically very different. If one were to design a system to categorise words orthographically from the output of earlier visual processing, another to categorise them phonologically from the output of earlier auditory processing, and a third to specify them semantically from the outputs of the orthographic and phonological analyses, then the computational requirements of the three processes would be sufficiently distinct to make a modular ‘solution’ plausible.
In this chapter, an issue will be addressed for which general design principles and phenomenology do not provide any obvious answer. What is the relation between the sets of conclusions reached in chapters 5 and 6? To put it more generally, are the central representations and processes used by output systems the same as those used by input systems? On the claims being made for the cognitive neuropsychology method, this is just the sort of question that the approach should be suited to answer.
For 100 years, it has been well known that the study of the cognitive problems of patients suffering from neurological diseases can produce strikingly counterintuitive observations. From time to time, research workers studying normal function have been strongly influenced by such observations or by the ideas of the neurologists who made them. Bartlett (1932) and Hebb (1949) are two examples. However, in general, neuropsychology has had little impact on the study of normal function.
With any knowledge of the history of clinical neuropsychology, it is easy to understand why this neglect occurred. The standard of description of the psychological impairments of patients was low, often being little more than the bald statement of the clinical opinion of the investigator. There was frequently a dramatic contrast between the vagueness of the psychological account of the disorder and the precision with which anatomical investigation of the lesion that had given rise to it was carried out at post-mortem. Also, the field, like psychology itself, could agree on little but the most obvious and basic theories. Typical are the disputes about the existence of the syndrome visual object agnosia, a specific difficulty in the perception of objects when both sensation and the intellect are intact. The syndrome was widely accepted as real in the golden age of the flowering of neuropsychology (1860–1905) (e.g. Lissauer, 1890). Yet its existence was still being denied nearly a century later (e.g. Bay, 1953; Bender & Feldman, 1972).
Ten years of work on the acquired dyslexias has been basically positive as far as the broader cognitive neuropsychology research program is concerned. However, the overall picture is complicated: the use of the syndrome-complex approach, together with the large variety of syndromes and sub-syndromes that have been isolated, has obscured the natural lines of functional cleavage in the domain. As a counterpoint, it would be useful to take another domain where the correspondence between syndromes and normal function is simpler. The complementary set of disorders – the agraphias, impairments in the writing process – provides an excellent example in this respect.
Before 1980, agraphia was treated by neuropsychologists as a poor relation of aphasia. Writing was viewed as a highly complex secondary skill, with forms of breakdown of little theoretical interest. Most work was concerned with the pattern of the concomitant aphasic or apraxic disorders that occurred with agraphic difficulties (see, e.g., Marcie & Hécaen, 1979). One influential view was that cases of agraphia that appeared to be pure were not the result of damage to specific mechanisms concerned with writing, but were the secondary effect of a confusional state characterised by a reduction and/or ready shifting of attention (Chedru & Geschwind, 1972). Writing, it was argued, was affected because it is a complex skill that is rarely overlearned.
In chapter 9, it was shown that the use of neuropsychological group studies is not likely to lead to rapid advance in our understanding of normal function. Earlier, it was argued that, by contrast, the single-case study approach is an effective source of evidence. The argument was, however, a pragmatic one. The method leads to conclusions that are internally consistent and that mirror those arrived at by other means. Yet the theoretical structures used to interpret the different types of evidence may, as Rosenthal (1984) has pointed out, seem satisfactory as much for the ease and simplicity with which we can use them as for their empirical adequacy in modelling reality. If one examines the theoretical inferences made from single-case studies in earlier chapters, it becomes clear that they are delicately balanced on a set of implicit supporting assumptions. The inference procedures therefore need to be examined directly to assess whether they can bear the theoretical weight placed on them. In fact, those who have adopted the single-case approach have only rarely attempted to justify their leap from findings on a single patient to a general conclusion.
The most rigorous treatment of the inference procedure is that of Caramazza (1986).