What happens in our brain when we make a decision? What triggers a neuron to send out a signal? What is the neural code? This textbook for advanced undergraduate and beginning graduate students provides a thorough and up-to-date introduction to the fields of computational and theoretical neuroscience. It covers classical topics, including the Hodgkin–Huxley equations and Hopfield model, as well as modern developments in the field such as generalized linear models and decision theory. Concepts are introduced using clear step-by-step explanations suitable for readers with only a basic knowledge of differential equations and probabilities, and are richly illustrated by figures and worked-out examples. End-of-chapter summaries and classroom-tested exercises make the book ideal for courses or for self-study. The authors also give pointers to the literature and an extensive bibliography, which will prove invaluable to readers interested in further study.
The neuron models discussed in the previous chapters are deterministic and generate, for most choices of parameters, spike trains that look regular when driven by a constant stimulus. In vivo recordings of neuronal activity, however, are characterized by a high degree of irregularity. The spike train of an individual neuron is far from being periodic, and correlations between the spike timings of neighboring neurons are weak. If the electrical activity picked up by an extracellular electrode is made audible by a loudspeaker, then what we basically hear is noise. The question of whether this is indeed just noise or rather a highly efficient way of coding information cannot easily be answered. Indeed, listening to a computer modem or a fax machine might also leave the impression that it is just noise. Deciding whether we are witnessing the neuronal activity underlying the composition of a poem (or the electronic transmission of a love letter), rather than meaningless flicker, is one of the most burning problems in neuroscience.
Several experiments have been undertaken to tackle this problem. It seems that a neuron in vitro, once it is isolated from the network, can react in a very reliable and reproducible manner to a fluctuating input current, and so can neurons in the sensory cortex in vivo when driven by a strong time-dependent signal. On the other hand, neurons produce irregular spike trains in the absence of any temporally structured stimuli. Irregular spontaneous activity, i.e., activity that is not related in any obvious way to external stimulation, and trial-to-trial variations in neuronal responses are often considered as noise.
In the network models discussed in Parts III and IV, each synapse has so far been characterized by a single constant parameter w_ij, called the synaptic weight, synaptic strength, or synaptic efficacy. If w_ij is constant, the amplitude of the response of a postsynaptic neuron i to the arrival of action potentials from a presynaptic neuron j should always be the same. Electrophysiological experiments, however, show that the response amplitude is not fixed but can change over time. In experimental neuroscience, changes of the synaptic strength are called synaptic plasticity.
Appropriate stimulation paradigms can induce changes of the postsynaptic response that last for hours or days. If the stimulation paradigm leads to a persistent increase of the synaptic efficacy, the effect is called long-term potentiation of synapses, or LTP for short. If the result is a decrease of the synaptic efficacy, it is called long-term depression (LTD). These persistent changes are thought to be the neuronal correlate of learning and memory. LTP and LTD are different from short-term synaptic plasticity such as synaptic facilitation or depression that we have encountered in Section 3.1. Facilitated or depressed synapses decay back to their normal strength within less than a few seconds, whereas, after an LTP or LTD protocol, synapses keep their new values for hours. The long-term storage of the new values is thought to be the basis of long-lasting memories.
The firing of action potentials has been successfully described by the Hodgkin–Huxley model, originally for the spikes in the giant axon of the squid but also, with appropriate modifications of the model, for other neuron types. The Hodgkin–Huxley model is defined by four nonlinear differential equations. The behavior of high-dimensional systems of nonlinear differential equations is difficult to visualize – and even more difficult to analyze. For an understanding of the firing behavior of the Hodgkin–Huxley model, we therefore need to turn to numerical simulations of the model. In Section 4.1 we show, as an example, some simulation results in search of the firing threshold of the Hodgkin–Huxley model. However, the question remains whether we can gain deeper insight into the observed behavior of the model.
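To make the idea of such a numerical exploration concrete, the following minimal sketch integrates the four Hodgkin–Huxley equations with a simple Euler scheme and probes the response to brief current pulses of different amplitudes. The rate functions and parameter values are the standard textbook set and are assumptions used here only for illustration, not quantities quoted from this chapter.

```python
# Minimal Hodgkin-Huxley simulation (forward Euler).
# Parameter values are the standard textbook set, assumed here for illustration.
import numpy as np

# Maximal conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4
C_m = 1.0

def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def simulate(I_amp, t_max=50.0, dt=0.01):
    """Integrate the four HH equations for a 1 ms current pulse of amplitude I_amp (uA/cm^2)."""
    steps = int(t_max / dt)
    V = -65.0                        # resting potential (mV)
    n, m, h = 0.317, 0.053, 0.596    # approximate steady-state gating values at rest
    V_trace = np.empty(steps)
    for i in range(steps):
        I_ext = I_amp if 5.0 <= i * dt <= 6.0 else 0.0   # 1 ms current pulse
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C_m
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        V_trace[i] = V
    return V_trace

# Probe the "firing threshold": compare a weak and a strong pulse.
for amp in (2.0, 20.0):
    V = simulate(amp)
    print(f"pulse {amp} uA/cm^2 -> peak voltage {V.max():.1f} mV")
```

A weak pulse produces only a small depolarization, whereas a sufficiently strong pulse triggers a full action potential; scanning the pulse amplitude in finer steps reproduces the kind of threshold search described in Section 4.1.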
Four equations are in fact just two more than two: in Section 4.2 we exploit the temporal properties of the gating variables of the Hodgkin–Huxley model so as to approximate the four-dimensional differential equation by a two-dimensional one. Two-dimensional differential equations can be studied in a transparent manner by means of a technique known as “phase plane analysis.” Section 4.3 is devoted to the phase plane analysis of generic neuron models consisting of two coupled differential equations, one for the membrane potential and the other for an auxiliary variable.
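As a small illustration of what phase plane analysis involves, the sketch below computes the two nullclines, the fixed point, and the linear stability of a generic two-dimensional neuron model. The FitzHugh–Nagumo equations and their parameter values are used here purely as an assumed example, not as the specific model treated in Section 4.3.

```python
# Phase plane sketch for a generic two-dimensional neuron model.
# The FitzHugh-Nagumo equations serve only as an illustrative example:
#   du/dt = u - u^3/3 - w + I
#   dw/dt = eps * (u + a - b*w)
import numpy as np

a, b, eps, I = 0.7, 0.8, 0.08, 0.5

u = np.linspace(-2.5, 2.5, 501)
u_nullcline = u - u**3 / 3.0 + I     # points where du/dt = 0
w_nullcline = (u + a) / b            # points where dw/dt = 0

# Fixed points lie at intersections of the two nullclines.
idx = np.argmin(np.abs(u_nullcline - w_nullcline))
print(f"approximate fixed point: u = {u[idx]:.2f}, w = {w_nullcline[idx]:.2f}")

# Stability follows from the Jacobian evaluated at the fixed point.
u0 = u[idx]
J = np.array([[1 - u0**2, -1.0],
              [eps,       -eps * b]])
print("eigenvalues of the Jacobian:", np.linalg.eigvals(J))
```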
From a biophysical point of view, action potentials are the result of currents that pass through ion channels in the cell membrane. In an extensive series of experiments on the giant axon of the squid, Hodgkin and Huxley succeeded in measuring these currents and described their dynamics in terms of differential equations. Their paper published in 1952, which presents beautiful experiments combined with an elegant mathematical theory (Hodgkin and Huxley, 1952), was rapidly recognized as groundbreaking work and eventually led to the Nobel Prize for Hodgkin and Huxley in 1963. In this chapter, the Hodgkin–Huxley model is reviewed and its behavior illustrated by several examples.
The Hodgkin–Huxley model in its original form describes only three types of ion channel. However, as we shall see in Section 2.3, it can be extended to include many other ion channel types. The Hodgkin–Huxley equations are the basis for detailed neuron models which account for different types of synapse and the spatial geometry of an individual neuron. Synaptic dynamics and the spatial structure of dendrites are the topics of Chapter 3. The Hodgkin–Huxley model is also the starting point for the derivation of simplified neuron models in Chapter 4 and will serve as a reference throughout the discussion of generalized integrate-and-fire models in Part II of the book.
Before we can turn to the Hodgkin–Huxley equations, we need to give some additional information on the equilibrium potential of ion channels.
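As a brief reminder of what the equilibrium potential is, the sketch below evaluates the Nernst equation E = (RT/zF) ln([ion]_out/[ion]_in) for a few ion species. The concentration values are typical textbook numbers and are assumptions used only for illustration.

```python
# Nernst equilibrium potential: E = (R*T)/(z*F) * ln([ion]_out / [ion]_in).
# Concentration values below are typical textbook numbers, assumed for illustration.
import math

R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol
T = 310.0      # temperature in kelvin (about 37 C)

def nernst(z, c_out, c_in):
    """Equilibrium potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Typical mammalian concentrations in mM (assumed values)
print(f"E_K  = {nernst(+1, 4.0, 140.0):6.1f} mV")   # potassium
print(f"E_Na = {nernst(+1, 145.0, 12.0):6.1f} mV")  # sodium
print(f"E_Ca = {nernst(+2, 1.5, 0.0001):6.1f} mV")  # calcium
```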
Neurons in the brain receive input from thousands of other, presynaptic neurons, which emit action potentials and send their spikes to their postsynaptic targets. From the perspective of a postsynaptic neuron receiving a barrage of spikes, spike arrival times may look completely random, even under the assumption that presynaptic neurons generate their spikes by a deterministic process. Indeed, as we have seen in the preceding chapter, internal noise sources of a cell, such as the spontaneous opening of ion channels, do not account for all the variability of spike trains encountered in freely behaving animals in vivo. Rather, it is likely that a large fraction of the apparent variability is generated by the network. Modeling studies confirm that networks with fixed random connectivity can lead to chaos on the microscopic level, so that spike arrival times appear to be random even if generated by a deterministic network.
In this chapter, we discuss the consequences of stochastic spike arrivals for modeling. The “noise” generated by the network is often described by a noise term in the differential equation of the membrane voltage (Section 8.1). Such a noise term, typically modeled as white noise or colored noise, can be derived in a framework of stochastic spike arrival, as shown in Section 8.2. Stochastic spike arrival leads to fluctuations of the membrane potential which will be discussed in the case of a passive membrane (Section 8.2.1) – or, more generally, for neuron models in the subthreshold regime.
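The following minimal sketch illustrates the basic effect: a passive membrane bombarded by stochastic (Poisson) spike arrivals shows a fluctuating membrane potential whose mean and variance depend on the input statistics. All parameter values are assumptions chosen only for illustration.

```python
# Passive membrane driven by stochastic spike arrival (illustrative parameter values).
# Each presynaptic spike causes a small jump of the membrane potential, which then
# decays back to rest with the membrane time constant.
import numpy as np

rng = np.random.default_rng(0)

tau_m = 10.0        # membrane time constant (ms)
u_rest = -65.0      # resting potential (mV)
w = 0.1             # voltage jump per presynaptic spike (mV)
rate = 5.0          # rate of each input (spikes/s)
n_inputs = 1000     # number of presynaptic neurons
dt = 0.1            # time step (ms)
T = 1000.0          # total simulated time (ms)

steps = int(T / dt)
u = np.full(steps, u_rest)
for i in range(1, steps):
    # Spikes arriving in this time bin, summed over all inputs (Poisson approximation)
    n_spikes = rng.poisson(n_inputs * rate * dt / 1000.0)
    du = -(u[i-1] - u_rest) / tau_m * dt + w * n_spikes
    u[i] = u[i-1] + du

print(f"mean membrane potential: {u.mean():.2f} mV")
print(f"std of fluctuations:     {u.std():.2f} mV")
```

With these assumed numbers the membrane potential hovers a few millivolts above rest and fluctuates around that mean, which is the subthreshold regime referred to above.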
Humans remember important events in their lives. You might be able to recall every detail of your first exam at college, or of your first public speech, or of your first day in kindergarten, or of the first time you went to a new school after your family moved to a new city. Human memory works with associations. If you hear the voice of an old friend on the phone, you may spontaneously recall stories that you had not thought of for years. If you are hungry and see a picture of a banana, you might vividly recall the taste and smell of a banana … and thereby realize that you are indeed hungry.
In this chapter, we present models of neural networks that describe the recall of previously stored items from memory. In Section 17.1 we start with a few examples of associative recall to prepare the stage for the modeling work later on. In Section 17.2 we introduce an abstract network model of memory recall, known as the Hopfield model. We take this network as a starting point and add, in subsequent sections, some biological realism to the model.
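To give a first impression of the kind of model discussed in Section 17.2, the sketch below implements a bare-bones Hopfield network: random patterns are stored with a Hebbian rule and recalled from a corrupted cue by asynchronous sign updates. Pattern size, number of patterns, and the noise level of the cue are arbitrary assumptions for illustration.

```python
# Minimal Hopfield network: Hebbian storage and associative recall.
# Network size and number of patterns are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3                                         # N neurons, P stored patterns

patterns = rng.choice([-1, 1], size=(P, N))           # random +/-1 patterns
W = (patterns.T @ patterns) / N                       # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                              # no self-coupling

def recall(state, n_updates=10 * N):
    """Asynchronous updates: revisit one randomly chosen neuron at a time."""
    state = state.copy()
    for _ in range(n_updates):
        i = rng.integers(N)
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Start from a noisy version of pattern 0 (20% of the entries flipped) and recall it.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] *= -1

result = recall(cue)
overlap = (result @ patterns[0]) / N
print(f"overlap with stored pattern after recall: {overlap:.2f}")   # close to 1.0
```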
Associations and memory
A well-known demonstration of the strong associations which are deeply embedded in the human brain is given by the following task. The aim is to respond as quickly as possible to three questions.
Detailed conductance-based neuron models can reproduce electrophysiological measurements to a high degree of accuracy, but because of their intrinsic complexity these models are difficult to analyze. For this reason, simple phenomenological spiking neuron models are highly popular for studies of neural coding, memory, and network dynamics. In this chapter we discuss formal threshold models of neuronal firing, also called integrate-and-fire models.
The shape of the action potential of a given neuron is rather stereotyped with very little change between one spike and the next. Thus, the shape of the action potential which travels along the axon to a postsynaptic neuron cannot be used to transmit information; rather, from the point of view of the receiving neuron, action potentials are “events” which are fully characterized by the arrival time of the spike at the synapse. Note that spikes from different neuron types can have different shapes and the duration and shape of the spike does influence neurotransmitter release; but the spikes that arrive at a given synapse all come from the same presynaptic neuron and – if we neglect effects of fatigue of ionic channels in the axon – we can assume that their time course is always the same. Therefore we make no effort to model the exact shape of an action potential. Rather, spikes are treated as events characterized by their firing time – and the task consists of finding a model that reliably predicts spike timings.
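The simplest formal model of this kind is the leaky integrate-and-fire neuron, sketched below: the membrane potential integrates the input, and a spike is registered as an event whenever the potential crosses a threshold, after which the potential is reset. The parameter values are illustrative assumptions.

```python
# Leaky integrate-and-fire neuron: spikes are events defined by a threshold crossing,
# followed by a reset; the spike shape itself is not modeled. Values are illustrative.
import numpy as np

tau_m = 10.0      # membrane time constant (ms)
R = 10.0          # membrane resistance (MOhm)
u_rest = -65.0    # resting potential (mV)
theta = -50.0     # firing threshold (mV)
u_reset = -70.0   # reset potential (mV)
dt = 0.1          # time step (ms)

def simulate_lif(I, T=200.0):
    """Simulate the LIF model for a constant input current I (nA); return spike times (ms)."""
    u = u_rest
    spike_times = []
    for step in range(int(T / dt)):
        u += dt / tau_m * (-(u - u_rest) + R * I)
        if u >= theta:               # threshold crossing = spike event
            spike_times.append(step * dt)
            u = u_reset              # reset after the spike
    return spike_times

spikes = simulate_lif(I=2.0)
print(f"{len(spikes)} spikes, first few spike times (ms): {spikes[:5]}")
```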
There are various ways to introduce noise in formal spiking neuron models. In the previous chapter we focused on input noise in the form of stochastic spike arrival. In this chapter we assume that the input is known or can be estimated. Stochasticity arises at the level of the neuronal spike generation, i.e., at the moment of the output. The noisy output can be interpreted as arising from a “soft” threshold that enables an “escape” of the membrane potential across the threshold even before the threshold is reached. Models with a noisy threshold or escape noise are the basis of Generalized Linear Models which will be used in Chapters 10 and 11 as a powerful statistical tool for modeling spike-train data.
In Section 9.1, the notion of escape noise is introduced. In Section 9.2 we determine the likelihood that a specific spike train is generated by a neuron model with escape noise. In Section 9.3 we apply the escape noise formalism to the Spike Response Model already encountered in Chapter 6 and show an interesting link to the renewal statistics encountered in Chapter 7. The escape rate formalism gives rise to an efficient description of noise processes, independently of their biophysical nature, be it channel noise or stochastic spike arrival. Indeed, as shown in Section 9.4, noisy input models and noisy output models can behave rather similarly.
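A minimal sketch of the escape noise idea is given below, assuming an exponential escape rate: instead of firing deterministically at a hard threshold, the neuron fires stochastically with a probability per time bin that grows with the momentary membrane potential. All parameter values are assumptions for illustration.

```python
# Escape noise: the neuron fires stochastically with an instantaneous "escape rate"
# that grows with the membrane potential. An exponential escape rate is assumed here,
# and all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(2)

tau_m, R = 10.0, 10.0          # membrane time constant (ms), resistance (MOhm)
u_rest, u_reset = -65.0, -70.0 # resting and reset potentials (mV)
theta, delta_u = -50.0, 2.0    # soft threshold and its width (mV)
rho_0 = 0.1                    # escape rate at threshold (spikes/ms)
dt = 0.1                       # time step (ms)

def escape_rate(u):
    return rho_0 * np.exp((u - theta) / delta_u)

def simulate_escape(I, T=500.0):
    """Leaky integration plus stochastic spiking; returns spike times (ms)."""
    u, spikes = u_rest, []
    for step in range(int(T / dt)):
        u += dt / tau_m * (-(u - u_rest) + R * I)
        p_spike = 1.0 - np.exp(-escape_rate(u) * dt)   # spike probability in this bin
        if rng.random() < p_spike:
            spikes.append(step * dt)
            u = u_reset
    return spikes

# Even a drive that stays below the soft threshold produces occasional spikes.
spikes = simulate_escape(I=1.2)
print(f"{len(spikes)} spikes in 500 ms with subthreshold drive")
```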
When an experimenter injects a strong step current into the soma of a neuron, the response consists of a series of spikes separated by long or short intervals. The stereotypical arrangement of short, long or very long interspike intervals defines the neuronal firing pattern. In Chapter 2 we have already encountered firing patterns such as tonic, adapting, or delayed spike firing. In addition to these, several variants of burst firing have also been observed in real neurons (see Fig. 6.1). This diversity of firing patterns can be explained, to a large extent, by adaptation mechanisms which in turn depend on the zoo of available ion channels (Chapter 2) and neuronal anatomy (Chapter 3).
In order to describe firing patterns, and in particular adaptation, in a transparent mathematical framework, we start in this chapter with the simplified model of spike initiation from Chapter 5 and include a phenomenological equation for subthreshold and spike-triggered adaptation. The resulting model is called the adaptive exponential integrate-and-fire model (AdEx; Section 6.1). We then use this simple model to explain the main firing patterns (Section 6.2). In Section 6.3, we describe how the parameters of the subthreshold and spike-triggered adaptation reflect the contribution of various ion channels and of dendritic morphology. Finally, we introduce the Spike Response Model (SRM; Section 6.4) as a transparent framework to describe neuronal dynamics. The Spike Response Model will serve as a starting point for the Generalized Linear Models which we will discuss later, in Chapter 9.
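The sketch below shows a bare-bones AdEx simulation in which a constant current step produces a spike train with lengthening interspike intervals, i.e., spike-frequency adaptation. The parameter values are illustrative assumptions; different choices of the adaptation parameters a, b, and tau_w yield the different firing patterns discussed in Section 6.2.

```python
# Adaptive exponential integrate-and-fire (AdEx), forward Euler.
# Parameter values are illustrative assumptions; varying a, b, tau_w changes the
# firing pattern (tonic, adapting, bursting, ...).
import numpy as np

C = 200.0       # capacitance (pF)
g_L = 10.0      # leak conductance (nS)
E_L = -70.0     # resting potential (mV)
V_T = -50.0     # threshold of the exponential term (mV)
D_T = 2.0       # slope factor (mV)
V_reset = -58.0 # reset potential (mV)
V_peak = 0.0    # numerical cutoff that stands in for the spike (mV)
tau_w = 100.0   # adaptation time constant (ms)
a = 2.0         # subthreshold adaptation (nS)
b = 60.0        # spike-triggered adaptation increment (pA)
dt = 0.1        # time step (ms)

def simulate_adex(I, T=500.0):
    """Return spike times (ms) for a constant current step I (pA)."""
    V, w, spikes = E_L, 0.0, []
    for step in range(int(T / dt)):
        dV = (-g_L * (V - E_L) + g_L * D_T * np.exp((V - V_T) / D_T) - w + I) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:              # spike: reset voltage, increment adaptation
            spikes.append(step * dt)
            V = V_reset
            w += b
    return spikes

spikes = simulate_adex(I=500.0)
isi = np.diff(spikes)
print("interspike intervals (ms):", np.round(isi, 1))   # intervals lengthen: adaptation
```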
The brain contains millions of neurons, which are organized into different brain areas, each brain area into different subregions, each subregion into different layers, and each layer into various cell types. The first two parts of this book focused on the mathematical description of an isolated neuron. Starting with this chapter, we shift our attention to the collective properties of groups of neurons, which we call "neuronal populations." Instead of modeling the spike times of a single neuron which belongs, for example, to the cell class "pyramidal" in layer 5 of subregion C4 in brain region S1 (the numbers here are completely arbitrary), we can ask the question: Suppose a human subject or animal receives a visual, auditory, or somatosensory stimulus – what is the activity of all the cells in this layer of this subregion that are of type "pyramidal" in response to the stimulus? What is the response of this subregion as a whole? What is the response of a brain area? In other words, at any of the scales of spatial resolution (Fig. 12.1), we may be interested in the response of the neuronal population as a whole, rather than in the spikes of individual neurons.
In the previous chapter, the notion of a homogeneous population of neurons was introduced. Neurons within the population can be independent, fully connected, or randomly connected, but they should all have identical, or at least similar, parameters and all neurons should receive similar input. For such a homogeneous population of neurons, it is possible to predict the population activity in the stationary state of asynchronous firing (Section 12.4). While the arguments we made in the previous chapter are general and do not rely on any specific neuron model, they are unfortunately restricted to the stationary state.
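As a concrete illustration of the quantity in question, the sketch below estimates the population activity A(t) as the number of spikes emitted by the population in a short time bin, divided by the number of neurons and the bin width. The spike trains are synthetic Poisson trains, used here only as stand-in data for illustration.

```python
# Empirical population activity A(t): fraction of neurons in the population that fire
# in a small time bin, divided by the bin width. Synthetic Poisson spike trains serve
# as stand-in data (an assumption for illustration).
import numpy as np

rng = np.random.default_rng(3)

N = 2000          # neurons in the (homogeneous) population
rate = 5.0        # stationary firing rate of each neuron (Hz)
T = 1.0           # observation window (s)
dt = 0.005        # bin width (s)

bins = int(T / dt)
# spike_counts[i, t] = number of spikes of neuron i in bin t (Poisson approximation)
spike_counts = rng.poisson(rate * dt, size=(N, bins))

A = spike_counts.sum(axis=0) / (N * dt)       # population activity in Hz
print(f"mean population activity: {A.mean():.2f} Hz (expected about {rate} Hz)")
print(f"fluctuation (std) across bins: {A.std():.2f} Hz")
```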
In a realistic situation, neurons in the brain receive time-dependent input. Humans change their direction of gaze spontaneously two or three times per second. After each gaze change, a new image impinges on the retina and is transmitted to the visual cortex. Auditory stimuli such as music or traffic noise have a rich intrinsic temporal structure. If humans explore the texture of a surface which by itself is static, they move their fingers so as to actively create temporal structure in the touch perception. If we think back to our last holiday, we recall sequences of events rather than static memory items. When we type a message on a keyboard, we move our fingers in a rapid pattern.
The world is continuous. Humans walk along corridors and streets, move their arms, turn their head, and orient the direction of gaze. All of these movements and gestures can be described by continuous variables such as position, head direction, gaze orientation, etc. These continuous variables need to be represented in the brain. Field models are designed to encode such continuous variables.
Objects such as houses, trees, cars, and pencils have a finite extension in three-dimensional space. Visual input arising from these and other objects is projected onto the retinal photoreceptors and gives rise to a two-dimensional image on the retina. This image is already preprocessed by nerve cells in the retina and undergoes some further processing stages before it arrives in the cortex. A large fraction of the primary visual cortex is devoted to processing information from the retinal area around the fovea. As a consequence, the activation pattern on the cortical surface resembles a coarse, deformed, and distorted image of the object (Fig. 18.1). Topology is largely preserved, so that neighboring neurons in the cortex process neighboring points of retinal space. In other words, neighboring neurons have similar receptive fields, which give rise to cortical maps; see Chapter 12.
In the ten preceding chapters, we have come a long way: starting from the biophysical basis of neuronal dynamics we arrived at a description of neurons that we called generalized integrate-and-fire models. We have seen that neurons contain multiple types of ion channels embedded in a capacitive membrane (Chapter 2). We have seen how basic principles regulate the dynamics of electrical current and membrane potential in synapses, dendrites and axons (Chapter 3). We have seen that sodium and potassium ion channels form an excitable system characterized by a threshold mechanism (Chapter 4) and that other ion channels shape the spike after-effects (Chapter 6). Finally, we have seen in Chapters 4, 5 and 6 how biophysical models can be reduced by successive approximations to other, simpler, models such as the LIF, EIF, AdEx, and SRM. Moreover, we have added noise to our neuron models (Chapters 7 and 9). At this point, it is natural to step back and check whether our assumptions were too stringent, whether the biophysical assumptions were well-founded, and whether the generalized models can explain neuronal data.
We laid out the mathematical groundwork in Chapter 10; we can now set out to apply these statistical methods to real data. We can test the performance of these, and other, models by using them as predictive models of encoding.