The emergent capacity for creative abstraction allows humans to alter our concepts and behaviors quickly, and to generate new ones that are not easy extensions of current ideas. We can model some of the consequences of our actions in the dramatic workplace of our imagination, and generate new ideas. However, the principles of overproduction, error, and culling still apply in the realm of creative abstraction: many of our ideas are fatal, a larger proportion still is significantly out of touch with reality or insane, and those that are neither fatal nor insane are generally impractical. Indeed, the vast majority of new inventions never reach the marketplace, and most new businesses fail within 10 years (Shane, 2008). Perhaps it is for this fundamental reason that creativity and insanity appear linked by eternal intuition: in the hunt for a solution, more ideas are better, but most ideas are still bad. The conservative strategy may therefore be to avoid new ideas altogether, and that is a common approach. Many people appear neither intelligent nor open enough to be genuinely creative, and only a small proportion of those few who manifest creativity are also sufficiently disciplined and emotionally stable to see their ideas realized. The typical best strategy may therefore be to do whatever everyone else does. This is what animals generally do, and it generally works (Kaufman et al., 2011). Nonetheless, stasis presents its own dangers: because situations change, creativity remains necessary and valued for accommodating new circumstances, despite its risks.
So we may ask such questions as: Where do new ideas come from? Who generates them? How are they generated? Some answers may be forthcoming from the fields of artificial intelligence that attempt to model human creativity. Human beings are constrained and low-capacity processors. Our perceptions are necessarily low-resolution representations of an almost infinitely high-resolution reality. Likewise, our concepts are mere shadows of our perceptions. The informational array that constantly presents itself to us can be simplified, for pragmatic purposes, just as a low-resolution photograph can stand as a substitute for a high-resolution photograph (which can, in turn, stand as a substitute for the thing it represents). Thus, the same “thing-in-itself” can be perceived in many different ways, none of which is necessarily more or less accurate than any other (except insofar as it serves, or fails to serve, some motivated purpose).
As I was finishing up this project, a colleague asked me a question that stuck with me – Why exactly was I doing this book? I ask myself that question every time I am in the final stages of an edited project, as I track down the last chapters and begin the marketing questionnaires. But, of course, this question was a bit deeper. Why creativity and mental illness? Why study it?
Some would argue the topic has been done to death. There have been hundreds and hundreds of studies, most of them analyzing one small piece of the puzzle. Small effects or mixed results are overstated – or consistent patterns are minimized. The benchmark papers are flawed and, even if perfectly executed, would only shed light on one aspect of the question. The analogy of the blind men and the elephant (all touching a different part of the creature and reaching quite distinct assumptions about its essence) is overused but particularly apt in this field.
Part of me is ready for this question with two quotes. One is from my favorite play, Tom Stoppard’s Arcadia. The quote is spoken by an academic; her first sentence refers to her friends’ different research interests in mathematics, history, and English literature:
It’s all trivial – your grouse, my hermit, Bernard’s Byron. Comparing what we’re looking for misses the point. It’s wanting to know that makes us matter . . . If the answers are in the back of the book I can wait, but what a drag. Better to struggle on knowing that failure is final.
Many highly creative individuals have been noted to experience mental health problems. Since the 1960s, the cognitive-behavioral therapy (CBT) model has become pivotal in the understanding and treatment of psychological difficulties. However, it is rarely considered by creativity researchers who more usually focus upon psychoanalytic, personality, contextual, or biological factors. This chapter explores commonalities between the cognitive styles observed in both the creative and those who are psychologically vulnerable. In doing so, it offers new insights into the relationship between creativity and some forms of psychopathology. I begin by introducing CBT, before investigating similarities between the cognitive styles that are (a) considered to underlie mental health difficulties – which are more common in the highly creative – and (b) implicated in the creative process. The chapter ends by examining two strands of research that investigate whether these similarities exist within creative samples.
CBT has arguably become the foremost psychological treatment for many of those suffering from mental health difficulties. The approach has revolutionized mental health care. By way of illustration, the UK’s National Institute for Health and Care Excellence (NICE) has determined, by systematically reviewing the evidence base, that it should be a core component of treatment for those suffering from one of many disorders. These include depression (2009), panic disorder and generalized anxiety disorder (2007), posttraumatic stress disorder (2005a), obsessive–compulsive/body dysmorphic disorder (2005b), and also schizophrenia (2002). Consequently, as part of the largest scheme of its type to date, the government has committed to training and employing a further 3,600 new CBT therapists to practice within the English National Health Service in an attempt to improve access to psychological therapy in England (Clark et al., 2009).
Arguably, one of the most significant new developments in psychological research during the last 30 years is the exploration of the interface between affect and cognition (see De Houwer and Hermans, 2010; Lewis et al., 2008; Power and Dalgleish, 2008). A new perspective on the interface between emotion and cognition is emerging, in which the emphasis is on the interwoven and internal rather than the conceptually external relations between the two (e.g., Damasio, 1994). Another major emphasis has been on the constructive roles that affect can play in cognitive functions as opposed to previous conceptions, where affect was rather one-sidedly seen as detrimental to rational and effective thought (Forgas, 1995, 2008).
The meeting of affect and creativity
In the same period we have also witnessed a significant rise in research on creativity, seen as a scientifically respectable and empirically tractable phenomenon (e.g., Hennessey and Amabile, 2010; Kaufman and Sternberg, 2010). It is to be expected that these two new streams of research on affect and creativity, respectively, would interface. This has indeed happened, and we can now look back on a 25-year-long roster of research on the relationship between mood and creativity (Baas et al., 2008; Rank and Frese, 2008).
What happens in our brain when we make a decision? What triggers a neuron to send out a signal? What is the neural code? This textbook for advanced undergraduate and beginning graduate students provides a thorough and up-to-date introduction to the fields of computational and theoretical neuroscience. It covers classical topics, including the Hodgkin–Huxley equations and Hopfield model, as well as modern developments in the field such as generalized linear models and decision theory. Concepts are introduced using clear step-by-step explanations suitable for readers with only a basic knowledge of differential equations and probabilities, and are richly illustrated by figures and worked-out examples. End-of-chapter summaries and classroom-tested exercises make the book ideal for courses or for self-study. The authors also give pointers to the literature and an extensive bibliography, which will prove invaluable to readers interested in further study.
Are creative people more likely to be mentally ill? This basic question has been debated for thousands of years, with the 'mad genius' concept advanced by such luminaries as Aristotle. There are many studies that argue the answer is 'yes', and several prominent scholars who argue strongly for a connection. There are also those who argue equally strongly that the core studies and scholarship underlying the mad genius myth are fundamentally flawed. This book re-examines the common view that a high level of individual creativity often correlates with a heightened risk of mental illness. It reverses conventional wisdom that links creativity with mental illness, arguing that the two traits are not associated. With contributions from some of the most exciting voices in the fields of psychology, neuroscience, physics, psychiatry, and management, this is a dynamic and cutting-edge volume that will inspire new ideas and studies on this fascinating topic.
The neuron models discussed in the previous chapters are deterministic and generate, for most choices of parameters, spike trains that look regular when driven by a constant stimulus. In vivo recordings of neuronal activity, however, are characterized by a high degree of irregularity. The spike train of an individual neuron is far from being periodic, and correlations between the spike timings of neighboring neurons are weak. If the electrical activity picked up by an extracellular electrode is made audible by a loudspeaker then what we basically hear is noise. The question whether this is indeed just noise or rather a highly efficient way of coding information cannot easily be answered. Indeed, listening to a computer modem or a fax machine might also leave the impression that this is just noise. Being able to decide whether we are witnessing the neuronal activity that is underlying the composition of a poem (or the electronic transmission of a love letter) and not just meaningless flicker is one of the most burning problems in neuroscience.
Several experiments have been undertaken to tackle this problem. It seems that a neuron in vitro, once it is isolated from the network, can react in a very reliable and reproducible manner to a fluctuating input current, and so can neurons in the sensory cortex in vivo when driven by a strong time-dependent signal. On the other hand, neurons produce irregular spike trains in the absence of any temporally structured stimuli. Irregular spontaneous activity, i.e., activity that is not related in any obvious way to external stimulation, and trial-to-trial variations in neuronal responses are often considered as noise.
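The irregularity described above is commonly quantified by the coefficient of variation (CV) of the interspike intervals: it is 0 for a perfectly periodic spike train and close to 1 for a Poisson process. A minimal sketch of this measure (the function name and rates are my own illustrative choices, not the book’s):

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_isi(spike_times):
    """Coefficient of variation of interspike intervals:
    0 for a clock-like train, ~1 for a Poisson process."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

# Regular spiking: one spike every 10 ms over 1 second.
regular = np.arange(0.0, 1000.0, 10.0)

# Poisson spiking at the same mean rate (100 Hz): exponential intervals.
poisson = np.cumsum(rng.exponential(10.0, size=100))

print(cv_isi(regular))   # exactly 0.0: all intervals identical
print(cv_isi(poisson))   # close to 1 for a Poisson train
```

Cortical spike trains recorded in vivo typically show a CV close to 1, which is one reason they are often described as noise-like.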
In the network models discussed in Parts III and IV, each synapse has so far been characterized by a single constant parameter wij, called the synaptic weight, synaptic strength, or synaptic efficacy. If wij is constant, the amplitude of the response of a postsynaptic neuron i to the arrival of action potentials from a presynaptic neuron j should always be the same. Electrophysiological experiments, however, show that the response amplitude is not fixed but can change over time. In experimental neuroscience, changes of the synaptic strength are called synaptic plasticity.
Appropriate stimulation paradigms can induce changes of the postsynaptic response that last for hours or days. If the stimulation paradigm leads to a persistent increase of the synaptic efficacy, the effect is called long-term potentiation of synapses, or LTP for short. If the result is a decrease of the synaptic efficacy, it is called long-term depression (LTD). These persistent changes are thought to be the neuronal correlate of learning and memory. LTP and LTD are different from short-term synaptic plasticity such as synaptic facilitation or depression that we have encountered in Section 3.1. Facilitated or depressed synapses decay back to their normal strength within less than a few seconds, whereas, after an LTP or LTD protocol, synapses keep their new values for hours. The long-term storage of the new values is thought to be the basis of long-lasting memories.
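The idea that repeated pairing of pre- and postsynaptic activity persistently changes w_ij can be sketched with a generic rate-based Hebbian update; this is only an illustrative toy rule with hard weight bounds (names and parameters are my own), not the specific plasticity models developed later in the book:

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.01, w_max=1.0):
    """One Hebbian step: increase w_ij when pre- and postsynaptic
    activity coincide (LTP-like); clipping keeps weights in [0, w_max]."""
    dw = eta * np.outer(post, pre)       # Δw_ij ∝ (rate of i) · (rate of j)
    return np.clip(w + dw, 0.0, w_max)

w = np.zeros((2, 3))                     # 2 postsynaptic, 3 presynaptic neurons
pre = np.array([1.0, 0.0, 1.0])          # presynaptic rates
post = np.array([1.0, 0.5])              # postsynaptic rates

for _ in range(10):                      # repeated pairing, as in an LTP protocol
    w = hebbian_update(w, pre, post)

print(w)                                 # only co-active pairs are potentiated
```

Note that synapses from the silent presynaptic neuron (index 1) stay at zero: only pairings of active neurons are strengthened, which is the Hebbian signature of LTP induction.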
The firing of action potentials has been successfully described by the Hodgkin–Huxley model, originally for the spikes in the giant axon of the squid but also, with appropriate modifications of the model, for other neuron types. The Hodgkin–Huxley model is defined by four nonlinear differential equations. The behavior of high-dimensional systems of nonlinear differential equations is difficult to visualize – and even more difficult to analyze. For an understanding of the firing behavior of the Hodgkin–Huxley model, we therefore need to turn to numerical simulations of the model. In Section 4.1 we show, as an example, some simulation results in search of the firing threshold of the Hodgkin–Huxley model. However, it remains to show whether we can get some deeper insights into the observed behavior of the model.
Four equations are in fact just two more than two: in Section 4.2 we exploit the temporal properties of the gating variables of the Hodgkin–Huxley model so as to approximate the four-dimensional differential equation by a two-dimensional one. Two-dimensional differential equations can be studied in a transparent manner by means of a technique known as “phase plane analysis.” Section 4.3 is devoted to the phase plane analysis of generic neuron models consisting of two coupled differential equations, one for the membrane potential and the other for an auxiliary variable.
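As a concrete example of a two-dimensional neuron model amenable to phase plane analysis, the FitzHugh–Nagumo equations (a classic reduction in the spirit described here, though not the specific reduction derived in Section 4.2) have nullclines that can be written down directly; parameter values below are conventional illustrative choices:

```python
import numpy as np

# FitzHugh–Nagumo: du/dt = u - u^3/3 - w + I,  dw/dt = eps*(u + a - b*w)
eps, a, b, I = 0.08, 0.7, 0.8, 0.5

def f(u, w):
    return u - u**3 / 3 - w + I, eps * (u + a - b * w)

# Nullclines for phase plane analysis:
#   du/dt = 0  ->  w = u - u^3/3 + I   (cubic)
#   dw/dt = 0  ->  w = (u + a) / b     (straight line)
u = np.linspace(-2.5, 2.5, 11)
u_nullcline = u - u**3 / 3 + I
w_nullcline = (u + a) / b

# Forward-Euler integration of one trajectory in the (u, w) plane.
dt, state = 0.01, np.array([-1.0, 1.0])
for _ in range(20000):
    du, dw = f(*state)
    state = state + dt * np.array([du, dw])
print(state)   # final (u, w) point; trajectories remain bounded
```

Plotting the trajectory against the two nullclines makes the qualitative behavior (rest state versus limit-cycle firing) visible at a glance, which is exactly the transparency that motivates the reduction to two dimensions.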
From a biophysical point of view, action potentials are the result of currents that pass through ion channels in the cell membrane. In an extensive series of experiments on the giant axon of the squid, Hodgkin and Huxley succeeded in measuring these currents and described their dynamics in terms of differential equations. Their paper published in 1952, which presents beautiful experiments combined with an elegant mathematical theory (Hodgkin and Huxley, 1952), was rapidly recognized as groundbreaking work and eventually led to the Nobel Prize for Hodgkin and Huxley in 1963. In this chapter, the Hodgkin–Huxley model is reviewed and its behavior illustrated by several examples.
The Hodgkin–Huxley model in its original form describes only three types of ion channel. However, as we shall see in Section 2.3, it can be extended to include many other ion channel types. The Hodgkin–Huxley equations are the basis for detailed neuron models which account for different types of synapse, and for the spatial geometry of an individual neuron. Synaptic dynamics and the spatial structure of dendrites are the topics of Chapter 3. The Hodgkin–Huxley model is also the starting point for the derivation of simplified neuron models in Chapter 4 and will serve as a reference throughout the discussion of generalized integrate-and-fire models in Part II of the book.
Before we can turn to the Hodgkin–Huxley equations, we need to give some additional information on the equilibrium potential of ion channels.
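The equilibrium (reversal) potential of an ion channel is given by the Nernst equation, E = (RT/zF) ln(c_out/c_in). A quick numerical check, using illustrative textbook concentrations for mammalian neurons at body temperature (the concentration values are standard round numbers, not measurements from the book):

```python
import math

def nernst(c_out, c_in, z=1, T=310.0):
    """Nernst equilibrium potential in mV for an ion of valence z at
    temperature T (kelvin):  E = (R*T)/(z*F) * ln(c_out / c_in)."""
    R = 8.314      # gas constant, J/(mol K)
    F = 96485.0    # Faraday constant, C/mol
    return 1000.0 * R * T / (z * F) * math.log(c_out / c_in)

# Illustrative concentrations (mM): K+ 5 outside / 140 inside,
#                                   Na+ 145 outside / 12 inside.
print(round(nernst(5.0, 140.0), 1))    # E_K: strongly negative, near -89 mV
print(round(nernst(145.0, 12.0), 1))   # E_Na: positive, near +67 mV
```

The opposite signs of E_K and E_Na are what allow the interplay of potassium and sodium channels in the Hodgkin–Huxley equations to generate an action potential.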
Neurons in the brain receive input from thousands of presynaptic neurons, which emit action potentials and send their spikes to their postsynaptic targets. From the perspective of a postsynaptic neuron receiving a barrage of spikes, spike arrival times may look completely random, even under the assumption that presynaptic neurons generate their spikes by a deterministic process. Indeed, as we have seen in the preceding chapter, internal noise sources of a cell, such as the spontaneous opening of ion channels, do not account for all the variability of spike trains encountered in freely behaving animals in vivo. Rather, it is likely that a large fraction of the apparent variability is generated by the network. Modeling studies confirm that networks with fixed random connectivity can lead to chaos on the microscopic level, so that spike arrival times appear to be random even if generated by a deterministic network.
In this chapter, we discuss the consequences of stochastic spike arrivals for modeling. The “noise” generated by the network is often described by a noise term in the differential equation of the membrane voltage (Section 8.1). Such a noise term, typically modeled as white noise or colored noise, can be derived in a framework of stochastic spike arrival, as shown in Section 8.2. Stochastic spike arrival leads to fluctuations of the membrane potential which will be discussed in the case of a passive membrane (Section 8.2.1) – or, more generally, for neuron models in the subthreshold regime.
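A membrane equation with a white-noise term of the kind described here can be integrated numerically with the Euler–Maruyama scheme, in which the noise increment scales with the square root of the time step. A minimal sketch for a passive membrane (parameter values are illustrative, not those used in Section 8.2):

```python
import numpy as np

rng = np.random.default_rng(1)

# Passive membrane with white-noise drive:
#   tau * du/dt = -(u - u_rest) + mu + sigma * xi(t)
tau, u_rest, mu, sigma = 10.0, -65.0, 5.0, 2.0   # ms and mV
dt, steps = 0.1, 50000

u = u_rest
trace = np.empty(steps)
for t in range(steps):
    # Euler–Maruyama: deterministic drift plus sqrt(dt)-scaled noise.
    u += dt / tau * (-(u - u_rest) + mu) \
         + sigma * np.sqrt(dt / tau) * rng.standard_normal()
    trace[t] = u

# After a transient, the mean settles near u_rest + mu and the membrane
# potential fluctuates around it (an Ornstein-Uhlenbeck process).
print(trace[10000:].mean(), trace[10000:].std())
```

For this scaling of the noise term the stationary standard deviation is sigma/sqrt(2), which is one way to relate the fluctuation amplitude of the membrane potential to the strength of the stochastic input.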
Humans remember important events in their lives. You might be able to recall every detail of your first exam at college, or of your first public speech, or of your first day in kindergarten, or of the first time you went to a new school after your family moved to a new city. Human memory works with associations. If you hear the voice of an old friend on the phone, you may spontaneously recall stories that you had not thought of for years. If you are hungry and see a picture of a banana, you might vividly recall the taste and smell of a banana … and thereby realize that you are indeed hungry.
In this chapter, we present models of neural networks that describe the recall of previously stored items from memory. In Section 17.1 we start with a few examples of associative recall to prepare the stage for the modeling work later on. In Section 17.2 we introduce an abstract network model of memory recall, known as the Hopfield model. We take this network as a starting point and add, in subsequent sections, some biological realism to the model.
Associations and memory
A well-known demonstration of the strong associations which are deeply embedded in the human brain is given by the following task. The aim is to respond as quickly as possible to three questions.
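The associative recall just described is what the Hopfield model of Section 17.2 captures in its simplest form: patterns are stored in a symmetric weight matrix by a Hebbian rule, and a corrupted cue is cleaned up by iterating the sign dynamics. A minimal sketch under standard conventions (random ±1 patterns, zero diagonal), not the book’s full treatment:

```python
import numpy as np

def train(patterns):
    """Hebbian storage: W = (1/N) * sum_mu p^mu (p^mu)^T, zero diagonal."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, steps=10):
    """Synchronous sign dynamics: S_i <- sgn(sum_j w_ij S_j)."""
    for _ in range(steps):
        state = np.where(w @ state >= 0, 1, -1)
    return state

rng = np.random.default_rng(2)
patterns = rng.choice([-1, 1], size=(3, 100))   # 3 random patterns, N = 100

w = train(patterns)
cue = patterns[0].copy()
cue[:15] *= -1                                  # corrupt 15 of 100 bits
retrieved = recall(w, cue)
print((retrieved == patterns[0]).mean())        # overlap with the stored pattern
```

Because only 3 patterns are stored in 100 neurons, the corrupted cue lies well inside the basin of attraction of the stored pattern, so the dynamics restore nearly all of the flipped bits; this content-addressable cleanup is the network analogue of recalling a full memory from a partial cue.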
Detailed conductance-based neuron models can reproduce electrophysiological measurements to a high degree of accuracy, but because of their intrinsic complexity these models are difficult to analyze. For this reason, simple phenomenological spiking neuron models are highly popular for studies of neural coding, memory, and network dynamics. In this chapter we discuss formal threshold models of neuronal firing, also called integrate-and-fire models.
The shape of the action potential of a given neuron is rather stereotyped, with very little change between one spike and the next. Thus, the shape of the action potential which travels along the axon to a postsynaptic neuron cannot be used to transmit information; rather, from the point of view of the receiving neuron, action potentials are “events” which are fully characterized by the arrival time of the spike at the synapse. Note that spikes from different neuron types can have different shapes, and the duration and shape of the spike do influence neurotransmitter release; but the spikes that arrive at a given synapse all come from the same presynaptic neuron and – if we neglect effects of fatigue of ionic channels in the axon – we can assume that their time course is always the same. Therefore we make no effort to model the exact shape of an action potential. Rather, spikes are treated as events characterized by their firing time – and the task consists in finding a model that reliably predicts spike timings.
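The event-based view leads directly to the simplest threshold model, the leaky integrate-and-fire neuron: integrate the membrane potential and record an event whenever it crosses threshold, then reset. A minimal sketch (parameter values are illustrative choices, not the book’s):

```python
import numpy as np

def lif(I, dt=0.1, tau=10.0, u_rest=-65.0, theta=-50.0, u_reset=-70.0):
    """Leaky integrate-and-fire:  tau * du/dt = -(u - u_rest) + I(t).
    Returns spike times in ms; a spike is the *event* of u crossing theta."""
    u, spikes = u_rest, []
    for t in range(len(I)):
        u += dt / tau * (-(u - u_rest) + I[t])
        if u >= theta:              # threshold crossing: record event, reset
            spikes.append(t * dt)
            u = u_reset
    return spikes

# Constant suprathreshold drive of 20 mV for 200 ms.
I = np.full(2000, 20.0)
spikes = lif(I)
print(len(spikes), spikes[:3])      # regular firing under constant input
```

The model discards the spike shape entirely, exactly as argued above: its output is nothing but a list of firing times, which is all a downstream synapse is assumed to see.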