Unravelling the mysteries of the mind is perceived to be one of the greatest challenges of the twenty-first century. The development of new biomedical and computational technologies and their application in brain research, which was accelerated during the 1990s by the US ‘Decade of the Brain’ initiative, led to an exponential increase in methods and possibilities to examine, analyse and represent the central nervous system and its workings on many different levels. These developments foster the vision that one day it may be possible to explain psychological and cognitive phenomena as the causal effects of brain chemistry.
For at least some neuroscientists, such as the Nobel Prize winner Eric Kandel, the declared goal is to explore all classical philosophical and psychological questions related to cognitive functioning with the methods and concepts of cellular biology. Other researchers think that one day it may even be possible to locate the origin of religious feelings and spirituality in the structures of the brain. According to sociobiologist Edward O. Wilson, the final goal of scientific naturalism will be reached if neurosciences succeed in explaining traditional religion as a material phenomenon. Wolf Singer, a leading neurophysiologist, claims that in the light of the new neuroscientific insights we have to say goodbye to our traditional understanding of human freedom. Even if only parts of such far-reaching claims are realistic, it can be anticipated that the advancement of modern neurosciences will change not only current beliefs about fundamental phenomena of the mind, but also our conception of humans in general.
A cynical view regards the criminal law as a crude system of state-enforced sanctions designed to ensure social order: punishment follows the commission of a proscribed act. In one sense this is true: criminal law involves rules, the breaking of which results in punishment, and often rather crude punishment at that (a tyrant from an earlier age would certainly recognise a contemporary prison cell in a British prison, even if he might remark on its relative degree of comfort). This view of criminal law, however, is a very shallow one, and an anachronistic one too. Criminal justice is not purely concerned with the undiscriminating regulation of anti-social behaviour; a modern criminal justice system has far more sophisticated tasks than the mere punishment of those who break the law. These include the need to be imaginative in response to crime and to take into account the range of possible measures that can be invoked to deal with the offender. They also include the need to take into account the moral implications of the system and to ensure that the system is fair to those to whom it reacts. This is demanded not only by the moral imperative that the state should only punish the guilty, but by the pragmatic requirement that to be effective, the criminal justice system should enjoy a reasonable measure of popular support. A harsh system of criminal justice, which lacks the consent of the governed, will simply not work in an open and liberal society.
Attention deficit/hyperactivity disorder (AD/HD) is a medical diagnosis applied to children and adults who are experiencing characteristic behavioural and cognitive difficulties in important aspects of their lives, for example in familial and personal relationships, at school or at work. The diagnosis attributes these difficulties to problems of impulse control, hyperactivity and inattention. It is claimed that these problems are caused primarily by dysfunctions in the frontal lobes of the brain and that there are predisposing genes. Currently the diagnosis is claimed to apply to between 2% and 5% of all children of compulsory school age in England and Wales.
THE DIAGNOSIS OF AD/HD
In 1968 the American Psychiatric Association (APA) produced the first standardised set of criteria for what was then called hyperkinetic reaction of childhood. This gave way in 1980 to attention deficit disorder with hyperactivity (ADDH), which was revised in 1987 to attention deficit disorder (ADD). A subsequent revision (American Psychiatric Association, 1994) produced the current diagnostic criteria for attention deficit/hyperactivity disorder (AD/HD). These changes in nomenclature reflect changing conceptualisations of the nature of the condition, with a shift away from an emphasis on causation to a continuing emphasis on behavioural symptoms as the defining characteristics of the condition. According to the APA, children with AD/HD fall into one of three main subtypes: predominantly inattentive and distracted, predominantly hyperactive–impulsive, and combining hyperactivity with inattention and distractibility.
By
Sir Dai Rees, Fellow of the Royal Academy of Medicine, Fellow of the Royal Society and Member of Academia Europaea,
Steven Rose, Professor of Biology and Director of the Brain and Behaviour Research Group, The Open University, Walton Hall, Milton Keynes MK7 6AA, UK
By
Helen Hodges, Professor Emeritus of Psychology, King's College London,
Iris Reuter, Consultant Neurologist, Department of Neuroscience, Institute of Psychiatry, King's College, De Crespigny Park, London SE5 8AF, UK,
Helen Pilcher, Leading Scientist, Cell Biology Group
Stem cells are a very special category of building-block in the human body, versatile in that they can not only divide to make copies of themselves but also turn into many mature final forms that no longer divide. For example, stem cells from blood or bone marrow can turn into nerve cells, and those from the brain can turn into blood. There is intense interest in medical applications to restore and renew body parts by inducing stem cell grafts to multiply into new types of tissue needed for repair. This is a particularly exciting prospect for diseases of brain degeneration which are presently incurable. This chapter explains the important concepts in simple terms and offers an account of the extent to which this promise is being realised in practice and of the hurdles that still remain. The chapter to follow considers the ethical issues raised by these actual and potential advances.
WHERE DO STEM CELLS COME FROM?
Stem cells are found in embryonic, fetal and adult brain and body. The fertilised egg is definitively ‘totipotent’, meaning that all other types of cell derive ultimately from it. As the embryo develops into a fetus, stem cells become progressively programmed to become specific cell types, and before their final evolution into mature non-dividing cells they are often called ‘progenitor’ or ‘precursor’ cells.
By
Sir Dai Rees, Fellow of the Royal Academy of Medicine, Fellow of the Royal Society and Member of Academia Europaea,
Steven Rose, Professor of Biology and Director of the Brain and Behaviour Research Group, The Open University, Walton Hall, Milton Keynes MK7 6AA, UK
It is often supposed that all shreds of human agency succumb in the face of advances in the understanding of evolutionary process, genetics and brain function. Conventional wisdom collapses and all responsibility for the consequences of our actions is diminished to the point at which, it is claimed, no blame can be attached to anything we do.
Or so the argument goes, but is it really the case that science has had such serious implications for the way we should think about our own capacity for choice? The importance of the emotions in controlling human behaviour certainly suggests to some that all of us are in the grip of our instincts and our genes. We seem to be surrounded by examples of irrational behaviour, such as when people are in love, in lynching mode or maddened with war fever. The brain (and the genes that contribute to its construction) is such that, when people make conscious choices, they don't really know what they are doing; if so, the presumptions of law, morality and common sense must be wrong.
In 1978 the Mayor of San Francisco and one of his officials were gunned down by one Dan White. At his trial White was convicted of manslaughter instead of the first-degree murder of which he was accused. His lawyers produced an original argument which came to be known as the ‘Twinkie defence’.
By
Steven Rose, Professor of Biology and Director of the Brain and Behaviour Research Group, The Open University, Walton Hall, Milton Keynes MK7 6AA, UK
The US government designated the 1990s as ‘The Decade of the Brain’. The huge expansion of the neurosciences which took place during that decade has led many to suggest that the first ten years of this new century should be claimed as ‘The Decade of the Mind’. Capitalising on the scale and technological success of the Human Genome Project, understanding – even decoding – the complex interconnected web between the languages of brain and those of mind has come to be seen as science's final frontier. With its hundred billion nerve cells, with their hundred trillion interconnections, the human brain is the most complex phenomenon in the known universe – always of course excepting the interaction of some 6 billion such brains and their owners within the socio-technological culture of our planetary ecosystem.
The global scale of the research effort now put into the neurosciences, primarily in the United States, but closely followed by Europe and Japan, has turned them from classical ‘little sciences’ into a major industry engaging large teams of researchers, involving billions of dollars from government – including its military wing – and the pharmaceutical industry. Such growth cannot be understood in isolation from the social and economic forces driving our science forward.
The consequence is that what were once disparate fields – anatomy, physiology, molecular biology, genetics and behaviour – are now all embraced within ‘neurobiology’.
My theme is how ideas in neuroscience – laboratory work, theory and seminar room discussions – land in our communities via pharmaceutical promotions, the media, print journalism and litigation; and how there is a growing gulf between commonsense notions of responsibility and a medicalised model of criminal behaviour.
In 1994 I was commissioned by a newspaper to investigate a case that connected a suicidal killer, the drug Prozac and a civil action for liability. Over a period of twelve weeks I attended a trial, in the United States, that involved arguments about impulse control, free will, the action of brain chemistry on human behaviour and, in the way the arguments were presented, public misunderstanding of current science (Cornwell, 1996).
On 14 September 1989, Joe Wesbecker, a forty-seven-year-old pressman, returned to the printing factory Standard Gravure, his former place of work in Louisville, Kentucky, and shot twenty of his co-workers, killing eight, before committing suicide in front of the pressroom supervisor's office. It was discovered soon afterward that Wesbecker had been taking a course of the antidepressant Prozac. Thus Eli Lilly of Indianapolis, the manufacturer and distributor of the drug, became a prime target in a subsequent liability suit brought by the survivors and relatives of the dead. According to his psychiatrist, Dr Lee Coleman of Louisville, Wesbecker had been prescribed Prozac to alleviate depression related to workplace stress and his complaints of continuing unfair treatment by the management at Standard Gravure.
By
Merlin W. Donald, Professor and Head of the Department of Psychology Queen's University, Kingston, Ontario, Canada; Visiting Professor University of Toronto
Our definition of human nature gives us a conceptual foundation for our ideas about human rights, individual responsibility, and personal freedom. These ideas were originally derived from the liberal humanities, and are ultimately the secular modern descendants of the concept of a ‘natural law’ based on earlier religious and philosophical traditions. Defining human nature is therefore not a trivial exercise: it underpins our legal system, as well as our constitutional protections of human rights. Since the time of Charles Darwin there have been many attempts to define human nature in more scientific terms. In effect, this has amounted to an attempt to derive a new kind of natural law, based largely on scientific evidence, and especially on the theory of evolution. Here I am not speaking of Social Darwinism, an earlier intellectual movement that naively tried to extrapolate the laws of natural selection to human society, but of more recent empirical attempts to construct a culturally universal description of the human condition, and to explain it in terms of evolution and genetics.
In such attempts, human nature is usually defined as having been fixed when our species evolved in the Upper Palaeolithic, and this suggests that we have been genetically engineered to survive under the special conditions of late Stone Age civilisation. This raises the disconcerting possibility that human nature might prove to be maladaptive in today's high-tech, fast-moving, urbanised world. On the other hand, the logic leading to this conclusion is not compelling.
Reductionism comes in two phases. First, there is the monistic move where we explain a great range of things as only aspects of a single basic stuff. Thus, Thales says that all the four elements are really just water. Again, Nietzsche says that all motives are really just forms of the will to power, and Hobbes says that mind and matter are both really just matter. Second, there can follow the atomistic move – made by Democritus and the seventeenth-century physicists – which is slightly different. Here we explain this basic stuff itself as really just an assemblage of ultimate particles, treating the wholes that are formed out of them as secondary and relatively unreal. (I have discussed the various forms of reductionism more fully in Midgley (1995).)
Both these drastic moves can be useful when they are made as the first stage towards a fuller analysis. But both, if made on their own, can be absurdly misleading. It is pretty obvious that Nietzsche's psychology was oversimple. And, if we want to see the shortcomings of atomism, we need only consider a botanist who is asked (perhaps by an archaeologist) to identify a leaf. This botanist does not simply mince up her leaf, put it in the centrifuge and list the resulting molecules. Still less, of course, does she list their constituent atoms, protons and electrons.
In this chapter, I argue that it is inappropriate at present to pursue research into the genetic basis of ‘intelligence’ and of other behavioural traits in humans. I do not think that such research should be prohibited, nor that we should ignore research findings that emerge from other studies and that give us insight into these areas, but I doubt the wisdom of conducting research designed specifically to identify ‘genes’ or ‘genetic variation’ that contributes substantially to the normal variation in human cognitive abilities and behaviours. Set out below are the various arguments that have brought me to this judgement, probably as much from temperament as deliberation. These considerations can be arranged on a variety of different levels:
First, there are a number of contextual issues: what is intelligence? Why is it valued so highly? What motivates some scientists to invest so much effort in attempts to measure intelligence, and especially to assess and rank their fellow humans? What lessons can be learned from previous attempts to measure the (intellectual and moral) worth of individuals and of races or population groups?
Research aimed at identifying genetic variation associated with inter-individual differences in intelligence within the ‘normal range’ is relatively unlikely to yield important and replicable results and may consume much time, effort and resources.
Such research is unlikely to identify biological determinants of intelligence in ‘normal’ individuals or any clearly beneficial application in medicine or other social realms.
What is true of intelligence will be broadly true also of other personality characteristics, although the demarcation between ‘normal’ and ‘abnormal’ behaviours may be more difficult to define in some of these areas.
Two very important lines of work in the past twenty years have contributed substantially to our understanding of many factors involved in the aetiology and maintenance of generalized anxiety disorder (GAD). One of these lines of work was initiated by the pioneering studies of Andrew Mathews, Colin MacLeod and their colleagues in the mid-1980s on individuals with GAD. In numerous studies, such individuals have demonstrated prominent automatic attentional biases for threatening information, and interpretive biases for ambiguous information that could be threatening or non-threatening (see Mathews & MacLeod, 1994; Mineka, Rafaeli & Yovel, 2003; Williams et al., 1997). Such biases are now thought to serve as vulnerability factors for anxiety during periods of stress and to play a likely role in the maintenance of anxiety once it has developed (e.g. MacLeod et al., this volume; Mathews & MacLeod, 2002).
The other line of work contributing substantially to our understanding of GAD was that initiated by Borkovec and his colleagues in the mid-1980s on the nature, functions and consequences of the worry process, which is seen as so central to current formulations of GAD. Worry is often considered to be the primary cognitive component of anxiety and Borkovec's work has focused on understanding why worry is so excessive and persistent in individuals with GAD. Mathews (1990) published an important and widely cited paper linking these two lines of research by arguing that worry functions to maintain hypervigilance to threatening cues. Borkovec and colleagues (Borkovec, Alcaine & Behar, in press) have also linked these two lines of work by noting that the attentional and interpretive biases for threatening information that are shown by generally anxious individuals seem to provide further sources of input or triggers for the worry process.
An impressive body of empirical evidence, laid down over the past twenty-five or so years, has firmly established that emotional disorders such as anxiety and depression are accompanied by characteristic cognitive biases in the processing of emotional information. This chapter pays tribute to Andrew Mathews' significant contribution to this accumulated knowledge but also to his continued involvement in the new directions that are building on these solid foundations. On a personal note, Andrew has played a pivotal role in the respective lives and careers of both authors, acting in turn as a nurturing teacher, respected colleague and invaluable friend. Although we can never repay our debt of gratitude, nor match his eloquent style and incisive logic, we can, and do, attempt to highlight his recent work so that the importance of his ongoing contributions to this field is represented in this volume.
Assumptions and observations from clinical practice indicate that cognitive biases must be susceptible to some change since their modification forms an important basis of cognitive therapy. This chapter focuses on the development of experimental techniques to modify cognitive biases and on the assessment of the subsequent effects on mood states and vulnerability to anxiety. Research in this direction has the potential to provide a useful laboratory analogue to aid the investigation of naturally occurring biases, as well as allowing us to address questions of causality (see MacLeod et al., this volume) and explore new treatment possibilities. Before embarking on details of methodology, it is worth considering some of the questions that we have attempted to address on the way.
We feel honoured to contribute this chapter to a volume that is devoted to recognizing the unique place that Andrew Mathews has had in the study of anxiety and the anxiety disorders. From the beginning of his career, he set the course for research on anxiety, including psychophysiology, treatment and, most recently, cognitive mechanisms. This chapter will relate predominantly to Mathews' contribution to the cognitive psychopathology of anxiety (see Chapter 1 in this book), attempting to integrate Mathews and Mackintosh's (1998) concepts of anxiety with Foa and Kozak's (1985, 1986; see also Foa & McNally, 1996; Foa & Cahill, 2001) emotional processing theory to further our understanding of social anxiety disorder and its treatment.
Emotional processing theory
Emotional processing theory utilizes information processing concepts to explain the psychopathology and treatment of anxiety disorders. A basic concept in emotional processing theory is the presence of fear structures that serve as blueprints for responding to danger (Foa & Kozak, 1986; Lang, 1977). The theory proposes that three kinds of representations are contained in these structures:
Information about the stimuli,
Information about verbal, physiological and behavioural responses, and
The interpretive meaning of these stimuli and responses.
Thus, a fear structure comprises an intricate network of associations among these different elements. A normal fear structure contains associations that generally reflect reality accurately (e.g. a car veering towards me → fear (heart rate acceleration, scanning the road, veering my car off the road) → cars coming towards me are dangerous).
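The associative-network idea above can be illustrated with a toy data structure. This is a minimal sketch under stated assumptions: the class name FearStructure, the three element kinds, and the spreading-activation routine are illustrative inventions for this example, not part of Foa and Kozak's formal model.

```python
from collections import defaultdict

class FearStructure:
    """Toy associative network of stimulus, response and meaning elements."""

    def __init__(self):
        self.elements = {}                    # element name -> kind
        self.associations = defaultdict(set)  # element name -> linked names

    def add(self, name, kind):
        # kind is one of 'stimulus', 'response', 'meaning'
        self.elements[name] = kind

    def associate(self, a, b):
        # Associations are bidirectional links between elements.
        self.associations[a].add(b)
        self.associations[b].add(a)

    def activated(self, cue):
        """All elements reachable from a cue via spreading activation."""
        seen, frontier = set(), [cue]
        while frontier:
            node = frontier.pop()
            if node not in seen:
                seen.add(node)
                frontier.extend(self.associations[node])
        return seen

# Build the 'car veering towards me' example from the text.
fs = FearStructure()
fs.add("car veering towards me", "stimulus")
fs.add("heart rate acceleration", "response")
fs.add("oncoming cars are dangerous", "meaning")
fs.associate("car veering towards me", "heart rate acceleration")
fs.associate("heart rate acceleration", "oncoming cars are dangerous")

print(sorted(fs.activated("car veering towards me")))
```

On this sketch, presenting the stimulus activates the whole network, which is one way to picture the theory's claim that stimulus, response and meaning representations are retrieved together once a fear structure is triggered.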