There are various ways to introduce noise in formal spiking neuron models. In the previous chapter we focused on input noise in the form of stochastic spike arrival. In this chapter we assume that the input is known or can be estimated. Stochasticity arises at the level of neuronal spike generation, i.e., at the moment of spike output. The noisy output can be interpreted as arising from a “soft” threshold that enables an “escape” of the membrane potential across the threshold even before the threshold is reached. Models with a noisy threshold or escape noise are the basis of Generalized Linear Models, which will be used in Chapters 10 and 11 as a powerful statistical tool for modeling spike-train data.
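A common way to formalize such a soft threshold is an instantaneous escape rate that grows with the momentary distance between the membrane potential and the threshold. One frequently used form (a sketch in notation we assume here, not necessarily the exact convention of the chapter) is the exponential escape rate
$$\rho(t) \;=\; f\bigl(u(t)-\vartheta\bigr) \;=\; \frac{1}{\tau_0}\,\exp\bigl[\beta\,\bigl(u(t)-\vartheta\bigr)\bigr],$$
where $u(t)$ is the membrane potential, $\vartheta$ the formal threshold, $\beta$ controls the sharpness of the soft threshold, and $\tau_0$ sets the overall escape time scale: firing is unlikely far below threshold, becomes probable as $u$ approaches $\vartheta$, and is nearly immediate above it.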
In Section 9.1, the notion of escape noise is introduced. In Section 9.2 we determine the likelihood that a specific spike train is generated by a neuron model with escape noise. In Section 9.3 we apply the escape noise formalism to the Spike Response Model already encountered in Chapter 6 and show an interesting link to the renewal statistics encountered in Chapter 7. The escape rate formalism gives rise to an efficient description of noise processes, independently of their biophysical nature, be it channel noise or stochastic spike arrival. Indeed, as shown in Section 9.4, noisy input models and noisy output models can behave rather similarly.
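To make the likelihood construction of Section 9.2 concrete, the short Python sketch below evaluates the discretized point-process log-likelihood of an observed spike train given a membrane potential trace and an escape rate. The exponential escape rate, the function names, and all parameter values are illustrative assumptions rather than the book's own code.

```python
import numpy as np

def escape_rate(u, theta=-50.0, beta=1.0, tau0=10.0):
    """Exponential escape rate (1/ms) as a function of membrane potential u (mV).
    Parameter values are placeholders chosen for illustration."""
    return np.exp(beta * (u - theta)) / tau0

def spike_train_log_likelihood(u, spike_bins, dt=0.1):
    """Approximate log-likelihood of a spike train under an escape-noise model.

    u          : membrane potential, one value per time bin (mV)
    spike_bins : boolean array, True in bins where a spike was observed
    dt         : bin width (ms)
    """
    rho = escape_rate(u)
    # sum of log firing probabilities in the bins that contain a spike ...
    ll = np.sum(np.log(rho[spike_bins] * dt))
    # ... minus the integrated escape rate, which accounts for silence in all other bins
    ll -= np.sum(rho * dt)
    return ll
```

Maximizing such a log-likelihood with respect to the model parameters is precisely the fitting strategy exploited by the Generalized Linear Models of Chapters 10 and 11.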
When an experimenter injects a strong step current into the soma of a neuron, the response consists of a series of spikes separated by long or short intervals. The stereotypical arrangement of short, long or very long interspike intervals defines the neuronal firing pattern. In Chapter 2 we have already encountered firing patterns such as tonic, adapting, or delayed spike firing. In addition to these, several variants of burst firing have also been observed in real neurons (see Fig. 6.1). This diversity of firing patterns can be explained, to a large extent, by adaptation mechanisms which in turn depend on the zoo of available ion channels (Chapter 2) and neuronal anatomy (Chapter 3).
In order to describe firing patterns, and in particular adaptation, in a transparent mathematical framework, we start in this chapter with the simplified model of spike initiation from Chapter 5 and include a phenomenological equation for subthreshold and spike-triggered adaptation. The resulting model is called the adaptive exponential integrate-and-fire model (AdEx; Section 6.1). We then use this simple model to explain the main firing patterns (Section 6.2). In Section 6.3, we describe how the parameters of the subthreshold and spike-triggered adaptation reflect the contribution of various ion channels and of dendritic morphology. Finally, we introduce the Spike Response Model (SRM; Section 6.4) as a transparent framework to describe neuronal dynamics. The Spike Response Model will serve as a starting point for the Generalized Linear Models which we will discuss later, in Chapter 9.
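For readers who want to experiment right away, a minimal forward-Euler sketch of the AdEx model is given below: a voltage equation with an exponential spike-initiation term, a second equation for the adaptation variable w, and a reset with a spike-triggered jump of w. The parameter values and the function name are illustrative choices, not a calibrated fit.

```python
import numpy as np

def simulate_adex(I, dt=0.1, C=200.0, gL=10.0, EL=-70.0, VT=-50.0, DeltaT=2.0,
                  a=2.0, tau_w=100.0, b=50.0, Vr=-58.0, Vpeak=0.0):
    """Forward-Euler integration of the adaptive exponential integrate-and-fire model.

    I  : input current (pA), one value per time step
    dt : time step (ms); other parameters in pF, nS, mV, pA (illustrative values).
    Returns the voltage trace (reset values at spike times) and the spike times (ms).
    """
    V, w = EL, 0.0
    V_trace, spikes = [], []
    for i, I_t in enumerate(I):
        dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT) - w + I_t) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:                 # spike: reset voltage, increment adaptation
            spikes.append(i * dt)
            V = Vr
            w += b
        V_trace.append(V)
    return np.array(V_trace), spikes

# Example: 500 ms of a 500 pA step current
# V, spk = simulate_adex(np.full(5000, 500.0))
```

Different firing patterns (tonic, adapting, bursting) are obtained by changing only the adaptation parameters a, b, tau_w and the reset Vr, which is the theme of Section 6.2.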
The brain contains millions of neurons, organized into different brain areas, each area into subregions, each subregion into layers, and each layer into various cell types. The first two parts of this book focused on the mathematical description of an isolated neuron. Starting with this chapter, we shift our attention to the collective properties of groups of neurons, which we call “neuronal populations.” Instead of modeling the spike times of a single neuron which belongs, for example, to the cell class “pyramidal” in layer 5 of subregion C4 in brain region S1 (the numbers here are completely arbitrary), we can ask the question: Suppose a human subject or animal receives a visual, auditory, or somatosensory stimulus – what is the activity of all the cells in this layer of this subregion that are of type “pyramidal” in response to the stimulus? What is the response of this subregion as a whole? What is the response of a brain area? In other words, at any of the scales of spatial resolution (Fig. 12.1), we may be interested in the response of the neuronal population as a whole, rather than in the spikes of individual neurons.
In the previous chapter, the notion of a homogeneous population of neurons was introduced. Neurons within the population can be independent, fully connected, or randomly connected, but they should all have identical, or at least similar, parameters and all neurons should receive similar input. For such a homogeneous population of neurons, it is possible to predict the population activity in the stationary state of asynchronous firing (Section 12.4). While the arguments we made in the previous chapter are general and do not rely on any specific neuron model, they are unfortunately restricted to the stationary state.
In a realistic situation, neurons in the brain receive time-dependent input. Humans change their direction of gaze spontaneously two or three times per second. After each gaze change, a new image impinges on the retina and is transmitted to the visual cortex. Auditory stimuli such as music or traffic noise have a rich intrinsic temporal structure. If humans explore the texture of a surface which by itself is static, they move their fingers so as to actively create temporal structure in the touch perception. If we think back to our last holiday, we recall sequences of events rather than static memory items. When we type a message on a keyboard, we move our fingers in a rapid pattern.
The world is continuous. Humans walk along corridors and streets, move their arms, turn their head, and orient the direction of gaze. All of these movements and gestures can be described by continuous variables such as position, head direction, gaze orientation, etc. These continuous variables need to be represented in the brain. Field models are designed to encode such continuous variables.
Objects such as houses, trees, cars, pencils have a finite extension in three-dimensional space. Visual input arising from these and other objects is projected onto the retinal photoreceptors and gives rise to a two-dimensional image in the retina. This image is already preprocessed by nerve cells in the retina and undergoes some further processing stages before it arrives in the cortex. A large fraction of the primary visual cortex is devoted to processing of information from the retinal area around the fovea. As a consequence, the activation pattern on the cortical surface resembles a coarse, deformed and distorted image of the object (Fig. 18.1). Topology is largely preserved so that neighboring neurons in the cortex process neighboring points of retinal space. In other words, neighboring neurons have similar receptive fields, which give rise to cortical maps; see Chapter 12.
In the ten preceding chapters, we have come a long way: starting from the biophysical basis of neuronal dynamics we arrived at a description of neurons that we called generalized integrate-and-fire models. We have seen that neurons contain multiple types of ion channels embedded in a capacitive membrane (Chapter 2). We have seen how basic principles regulate the dynamics of electrical current and membrane potential in synapses, dendrites and axons (Chapter 3). We have seen that sodium and potassium ion channels form an excitable system characterized by a threshold mechanism (Chapter 4) and that other ion channels shape the spike after-effects (Chapter 6). Finally, we have seen in Chapters 4, 5 and 6 how biophysical models can be reduced by successive approximations to other, simpler, models such as the LIF, EIF, AdEx, and SRM. Moreover, we have added noise to our neuron models (Chapters 7 and 9). At this point, it is natural to step back and check whether our assumptions were too stringent, whether the biophysical assumptions were well-founded, and whether the generalized models can explain neuronal data.
We laid out the mathematical groundwork in Chapter 10; we can now set out to apply these statistical methods to real data. We can test the performance of these, and other, models by using them as predictive models of encoding.
This textbook for advanced undergraduate and beginning graduate students provides a systematic introduction to the fields of neuron modeling, neuronal dynamics, neural coding, and neural networks. It can be used as a text for introductory courses on Computational and Theoretical Neuroscience or as the main text for a more focused course on Neural Dynamics and Neural Modeling at the graduate level. The book is also a useful resource for researchers and students who want to learn how different models of neurons and descriptions of neural activity are related to each other.
All mathematical concepts are introduced the pedestrian way: step by step. All chapters are richly illustrated by figures and worked examples. Each chapter closes with a short summary and a series of mathematical Exercises. On the authors' webpage Python source code is provided for numerical simulations that illustrate the main ideas and models of the chapter (http://lcn.epfl.ch/~gerstner/NeuronalDynamics.html).
The book is organized into four parts with a total of 20 chapters. Part I provides a general introduction to the foundations of computational neuroscience and its mathematical tools. It covers classic material such as the Hodgkin–Huxley model, ion channels and dendrites, or phase plane analysis of two-dimensional systems of differential equations. A special focus is put on the firing threshold for the generation of action potentials, both in the Hodgkin–Huxley model and in reduced two-dimensional neuron models such as the Morris–Lecar model.
Neurons have intricate morphologies: the central part of the cell is the soma, which contains the genetic information and a large fraction of the molecular machinery. At the soma originate long wire-like extensions which come in two different flavors. First, the dendrites form a multitude of smaller or larger branches on which synapses are located. The synapses are the contact points where information from other neurons (i.e., “presynaptic” cells) arrives. Second, also originating at the soma, is the axon, which the neuron uses to send action potentials to its target neurons. Traditionally, the transition region between soma and axon is thought to be the crucial region where the decision is taken whether a spike is sent out or not.
The Hodgkin–Huxley model, at least in the form presented in the previous chapter, disregards this spatial structure and reduces the neuron to a point-like spike generator – despite the fact that the precise spatial layout of a neuron could potentially be important for signal processing in the brain. In this chapter we will discuss how some of the spatial aspects can be taken into account by neuron models. In particular we focus on the properties of the synaptic contact points between neurons and on the electrical function of dendrites.
The primary aim of this chapter is to introduce several elementary notions of neuroscience, in particular the concepts of action potentials, postsynaptic potentials, firing thresholds, refractoriness, and adaptation. Based on these notions a preliminary model of neuronal dynamics is built and this simple model (the leaky integrate-and-fire model) will be used as a starting point and reference for the generalized integrate-and-fire models, which are the main topic of the book, to be discussed in Parts II and III. Since the mathematics used for the simple model is essentially that of a one-dimensional linear differential equation, we take this first chapter as an opportunity to introduce some of the mathematical notation that will be used throughout the rest of the book.
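In the notation we assume here (which may differ in detail from the book's own), that one-dimensional equation is the leaky integrate-and-fire voltage equation
$$\tau_m \,\frac{\mathrm{d}u}{\mathrm{d}t} \;=\; -\bigl(u(t)-u_{\mathrm{rest}}\bigr) \;+\; R\,I(t),$$
supplemented by a reset rule: whenever $u(t)$ reaches the threshold $\vartheta$, a spike is registered and the potential is reset to a value $u_r < \vartheta$.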
Owing to the limitations of space, we cannot – and do not want to – give a comprehensive introduction to such a complex field as neurobiology. The presentation of the biological background in this chapter is therefore highly selective and focuses on those aspects needed to appreciate the biological background of the theoretical work presented in this book. For an in-depth discussion of neurobiology we refer the reader to the literature mentioned at the end of this chapter.
After the review of neuronal properties in Sections 1.1 and 1.2 we will turn, in Section 1.3, to our first mathematical neuron model. The last two sections are devoted to a discussion of the strengths and limitations of simplified models.
It is helpful to break neural data analysis into two basic problems. The “encoding” problem concerns how information is encoded in neural spike trains: can we predict the spike trains of a neuron (or population of neurons), given an arbitrary synaptic input, current injection, or sensory stimulus? Conversely, the “decoding” problem concerns how much we can learn from the observation of a sequence of spikes: in particular, how well can we estimate the stimulus that gave rise to the spike train?
The problems of encoding and decoding are difficult both because neural responses are stochastic and because we want to identify these response properties given any possible stimulus in some very large set (e.g., all images that might occur in the world), and there are typically many more such stimuli than we can hope to sample by brute force. Thus the neural coding problem is fundamentally statistical: given a finite number of samples of noisy physiological data, how do we estimate, in a global sense, the neural codebook?
This basic question has taken on a new urgency as neurophysiological recordings allow us to peer into the brain with ever greater facility: with the development of fast computers, inexpensive memory, and large-scale multineuronal recording and high-resolution imaging techniques, it has become feasible to directly observe and analyze neural activity at a level of detail that was impossible in the twentieth century.
In this final chapter, we combine the dynamics of single neurons (Parts I and II) and networks (Part III) with synaptic plasticity (Chapter 19) and illustrate their interaction in a few applications.
In Section 20.1 on “reservoir computing” we show that the network dynamics in random networks of excitatory and inhibitory neurons is sufficiently rich to serve as a computing device that buffers past inputs and computes on present ones. In Section 20.2 we study oscillations that arise in networks of spiking neurons and outline how synaptic plasticity interacts with oscillations. Finally, in Section 20.3, we illustrate why the study of neuronal dynamics is not just an intellectual exercise, but might, one day, become useful for applications or, eventually, benefit human patients.
Reservoir computing
One of the reasons the dynamics of neuronal networks are rich is that networks have a nontrivial connectivity structure linking different neuron types in an intricate interaction pattern. Moreover, network dynamics are rich because they span many time scales. The fastest time scale is set by the duration of an action potential, i.e., a few milliseconds. Synaptic facilitation and depression (Chapter 3) or adaptation (Chapter 6) occur on time scales from a few hundred milliseconds to seconds. Finally, long-lasting changes of synapses can be induced in a few seconds, but last from hours to days (Chapter 19).
In the previous chapter it was shown that an approach based on membrane potential densities can be used to analyze the dynamics of networks of integrate-and-fire neurons. For neuron models that include biophysical phenomena such as refractoriness and adaptation on multiple time scales, however, the resulting system of partial differential equations lives in more than two dimensions and is therefore difficult to solve analytically; even the numerical integration of partial differential equations in high dimensions is slow. To cope with these difficulties, we now present an alternative approach to describing the population activity in networks of model neurons. The central concept is an integral equation for the population activity.
The advantage of the integral equation approach is four-fold. First, the approach works for a broad spectrum of neuron models, such as the Spike Response Model with escape noise and other Generalized Linear Models (see Chapter 9), for which parameters can be directly extracted from experiments (see Chapter 11). Second, it is easy to assign an intuitive interpretation to the quantities that appear in the integral equation; for example, the interspike interval distribution plays a central role. Third, an approximate mathematical treatment of adaptation is possible not only for the stationary population activity, but also for arbitrary time-dependent solutions. Fourth, the integral equations provide a natural basis for the transition to classical “rate equations,” which will be discussed in Chapter 15.
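For concreteness, the central relation can be sketched as follows (notation assumed here; refractory and adaptation corrections are developed in the chapter): the population activity $A(t)$ obeys
$$A(t) \;=\; \int_{-\infty}^{t} P_I\bigl(t \mid \hat t\,\bigr)\, A(\hat t)\, \mathrm{d}\hat t,$$
where $P_I(t\mid\hat t)$ is the interval distribution, i.e., the probability density that a neuron which fired its last spike at time $\hat t$ emits its next spike at time $t$. This makes explicit why the interspike interval distribution plays the central role mentioned above.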