Neurons have intricate morphologies: the central part of the cell is the soma, which contains the genetic information and a large fraction of the molecular machinery. From the soma originate long wire-like extensions, which come in two different flavors. First, the dendrites form a multitude of smaller or larger branches on which synapses are located. The synapses are the contact points where information from other neurons (i.e., “presynaptic” cells) arrives. Second, also originating at the soma, is the axon, which the neuron uses to send action potentials to its target neurons. Traditionally, the transition region between soma and axon is thought to be the crucial region where the decision is made whether or not a spike is sent out.
The Hodgkin–Huxley model, at least in the form presented in the previous chapter, disregards this spatial structure and reduces the neuron to a point-like spike generator – despite the fact that the precise spatial layout of a neuron could potentially be important for signal processing in the brain. In this chapter we will discuss how some of the spatial aspects can be taken into account by neuron models. In particular we focus on the properties of the synaptic contact points between neurons and on the electrical function of dendrites.
The primary aim of this chapter is to introduce several elementary notions of neuroscience, in particular the concepts of action potentials, postsynaptic potentials, firing thresholds, refractoriness, and adaptation. Based on these notions, a preliminary model of neuronal dynamics is built. This simple model, the leaky integrate-and-fire model, will serve as a starting point and reference for the generalized integrate-and-fire models that are the main topic of the book and are discussed in Parts II and III. Since the mathematics used for the simple model is essentially that of a one-dimensional linear differential equation, we take this first chapter as an opportunity to introduce some of the mathematical notation that will be used throughout the rest of the book.
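To preview the flavor of such a model, here is a minimal simulation sketch of a leaky integrate-and-fire neuron: the membrane potential follows a one-dimensional linear differential equation, and a spike is recorded (and the potential reset) whenever a threshold is crossed. All parameter values below are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) sketch. Parameters are
# illustrative assumptions, not taken from the chapter.
tau_m   = 10.0   # membrane time constant (ms)
R       = 10.0   # membrane resistance (MOhm)
u_rest  = -65.0  # resting potential (mV)
theta   = -50.0  # firing threshold (mV)
u_reset = -70.0  # reset potential after a spike (mV)
dt      = 0.1    # integration time step (ms)

T = 200.0                    # total simulated time (ms)
steps = int(T / dt)
I = np.full(steps, 2.0)      # constant input current (nA)

u = u_rest
spike_times = []
for k in range(steps):
    # one-dimensional linear ODE: tau_m * du/dt = -(u - u_rest) + R * I(t)
    u += dt / tau_m * (-(u - u_rest) + R * I[k])
    if u >= theta:                    # threshold crossing: emit a spike...
        spike_times.append(k * dt)
        u = u_reset                   # ...and reset the membrane potential

print(f"{len(spike_times)} spikes; first at t = {spike_times[0]:.1f} ms")
```

Everything beyond this one equation plus a reset rule, e.g., refractoriness and adaptation, is what the generalized models of Parts II and III add.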
Owing to the limitations of space, we cannot – and do not want to – give a comprehensive introduction to such a complex field as neurobiology. The presentation of the biological background in this chapter is therefore highly selective and focuses on those aspects needed to appreciate the biological background of the theoretical work presented in this book. For an in-depth discussion of neurobiology we refer the reader to the literature mentioned at the end of this chapter.
After the review of neuronal properties in Sections 1.1 and 1.2 we will turn, in Section 1.3, to our first mathematical neuron model. The last two sections are devoted to a discussion of the strengths and limitations of simplified models.
We have approached the idea of universal property from three different angles, producing three different formalisms: adjointness, representability, and limits. In this final chapter, we work out the connections between them.
In principle, anything that can be described in one of the three formalisms can also be described in the others. The situation is similar to that of cartesian and polar coordinates: anything that can be done in polar coordinates can in principle be done in cartesian coordinates, and vice versa, but some things are more gracefully done in one system than the other.
In comparing the three approaches, we will discover many of the fundamental results of category theory. Here are some highlights.
• Limits and colimits in functor categories work in the simplest possible way.
• The embedding of a category A into its presheaf category [A^op, Set] preserves limits (but not colimits).
• The representables are the prime numbers of presheaves: every presheaf can be expressed canonically as a colimit of representables.
• A functor with a left adjoint preserves limits. Under suitable hypotheses, the converse holds too (a symbolic sketch of the first statement follows this list).
• Categories of presheaves [A^op, Set] behave very much like the category of sets, the beginning of an incredible story that brings together the subjects of logic and geometry.
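As promised, here is the fourth highlight in symbols (a sketch in standard notation, not a quotation from the text):

```latex
% If F is left adjoint to G (written F \dashv G), then G preserves all
% limits that exist in its domain:
\[
  F \dashv G
  \quad\Longrightarrow\quad
  G\Bigl(\lim_{i} D_i\Bigr) \;\cong\; \lim_{i}\, G(D_i).
\]
% A familiar instance in Set: the functor (-) x A is left adjoint to
% Set(A, -), and indeed Set(A, B x C) is in bijection with
% Set(A, B) x Set(A, C).
```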
A category is a system of related objects. The objects do not live in isolation: there is some notion of map between objects, binding them together.
Typical examples of what ‘object’ might mean are ‘group’ and ‘topological space’, and typical examples of what ‘map’ might mean are ‘homomorphism’ and ‘continuous map’, respectively. We will see many examples, and we will also learn that some categories have a very different flavour from the two just mentioned. In fact, the ‘maps’ of category theory need not be anything like maps in the sense that you are most likely to be familiar with.
Categories are themselves mathematical objects, and with that in mind, it is unsurprising that there is a good notion of ‘map between categories’. Such maps are called functors. More surprising, perhaps, is the existence of a third level: we can talk about maps between functors, which are called natural transformations. These, then, are maps between maps between categories.
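To fix ideas, the standard definition can be stated compactly (usual textbook notation, not wording specific to this text): a natural transformation assigns to each object a map between the two functors' images, compatibly with all the maps of the source category.

```latex
% A natural transformation \alpha : F \Rightarrow G between functors
% F, G : \mathcal{A} \to \mathcal{B} consists of a map
%   \alpha_A : F(A) \to G(A)    for each object A of \mathcal{A},
% such that for every map f : A \to A' in \mathcal{A}:
\[
  G(f) \circ \alpha_A \;=\; \alpha_{A'} \circ F(f).
\]
```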
In fact, it was the desire to formalize the notion of natural transformation that led to the birth of category theory. By the early 1940s, researchers in algebraic topology had started to use the phrase ‘natural transformation’, but only in an informal way. Two mathematicians, Samuel Eilenberg and Saunders Mac Lane, saw that a precise definition was needed. But before they could define natural transformation, they had to define functor; and before they could define functor, they had to define category. And so the subject was born.
It is helpful to break neural data analysis into two basic problems. The “encoding” problem concerns how information is encoded in neural spike trains: can we predict the spike trains of a neuron (or population of neurons), given an arbitrary synaptic input, current injection, or sensory stimulus? Conversely, the “decoding” problem concerns how much we can learn from the observation of a sequence of spikes: in particular, how well can we estimate the stimulus that gave rise to the spike train?
The problems of encoding and decoding are difficult both because neural responses are stochastic and because we want to identify these response properties given any possible stimulus in some very large set (e.g., all images that might occur in the world), of which there are typically many more than we can hope to sample by brute force. Thus the neural coding problem is fundamentally statistical: given a finite number of samples of noisy physiological data, how do we estimate, in a global sense, the neural codebook?
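As a toy illustration of this statistical framing (a minimal sketch; the tuning-curve form and all numbers are invented for illustration, not taken from the text): fitting a Poisson firing-rate model from noisy samples is a miniature encoding problem, and maximum-likelihood estimation of the stimulus from an observed spike count is the matching decoding problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- "Encoding": fit a toy tuning curve from noisy samples ---
# Assumed ground truth (illustrative only): the spike count per trial is
# Poisson with rate(s) = exp(a + b*s) for a scalar stimulus s.
a_true, b_true = 1.0, 0.8
stimuli = rng.uniform(-2.0, 2.0, size=500)
counts = rng.poisson(np.exp(a_true + b_true * stimuli))

# Quick-and-dirty fit: regress log(count + 0.5) on the stimulus
# (a crude stand-in for a proper Poisson-GLM maximum-likelihood fit).
X = np.column_stack([np.ones_like(stimuli), stimuli])
a_hat, b_hat = np.linalg.lstsq(X, np.log(counts + 0.5), rcond=None)[0]

# --- "Decoding": maximum-likelihood stimulus estimate from one count ---
def decode(n, grid=np.linspace(-2.0, 2.0, 401)):
    log_rate = a_hat + b_hat * grid
    loglik = n * log_rate - np.exp(log_rate)   # Poisson log-likelihood
    return grid[np.argmax(loglik)]

s_true = 1.0
n_obs = rng.poisson(np.exp(a_true + b_true * s_true))
print(f"fitted a={a_hat:.2f}, b={b_hat:.2f}; "
      f"true s={s_true}, decoded s={decode(n_obs):.2f}")
```

Real data replace the simulated counts, the stimulus is high-dimensional rather than scalar, and the encoding model must be chosen with care; those differences are exactly what makes the problem hard.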
This basic question has taken on a new urgency as neurophysiological recordings allow us to peer into the brain with ever greater facility: with the development of fast computers, inexpensive memory, and large-scale multineuronal recording and high-resolution imaging techniques, it has become feasible to directly observe and analyze neural activity at a level of detail that was impossible in the twentieth century.
In this final chapter, we combine the dynamics of single neurons (Parts I and II) and networks (Part III) with synaptic plasticity (Chapter 19) and illustrate their interaction in a few applications.
In Section 20.1 on “reservoir computing” we show that the network dynamics in random networks of excitatory and inhibitory neurons is sufficiently rich to serve as a computing device that buffers past inputs and computes on present ones. In Section 20.2 we study oscillations that arise in networks of spiking neurons and outline how synaptic plasticity interacts with oscillations. Finally, in Section 20.3, we illustrate why the study of neuronal dynamics is not just an intellectual exercise, but might, one day, become useful for applications or, eventually, benefit human patients.
Reservoir computing
One of the reasons the dynamics of neuronal networks are rich is that networks have a nontrivial connectivity structure linking different neuron types in an intricate interaction pattern. Moreover, network dynamics are rich because they span many time scales. The fastest time scale is set by the duration of an action potential, i.e., a few milliseconds. Synaptic facilitation and depression (Chapter 3) or adaptation (Chapter 6) occur on time scales from a few hundred milliseconds to seconds. Finally, long-lasting changes of synapses can be induced in a few seconds, but last from hours to days (Chapter 19).
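The following echo-state-network sketch (a rate-based caricature; the network sizes, spectral radius, and task are illustrative assumptions, not taken from the text) shows the core idea of reservoir computing: a fixed random recurrent network buffers past inputs in its state, and only a linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reservoir computing sketch (echo state network). All constants are
# illustrative assumptions.
N, T, delay = 200, 2000, 5        # reservoir size, time steps, recall lag

W_in = rng.uniform(-0.5, 0.5, size=N)               # input weights (fixed)
W = rng.normal(0.0, 1.0, size=(N, N)) / np.sqrt(N)  # recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # spectral radius < 1

u = rng.uniform(-1.0, 1.0, size=T)                  # random input signal
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])                # reservoir update
    states[t] = x

# Train only a linear readout (ridge regression) to reproduce the input
# from `delay` steps in the past -- i.e., the reservoir acts as a buffer.
Y = u[:-delay]                    # target: past input
S = states[delay:]                # states aligned with those targets
lam = 1e-4
w_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ Y)

mse = np.mean((S @ w_out - Y) ** 2)
print(f"recall of input {delay} steps back: mse = {mse:.4f}")
```

The same fixed reservoir supports readouts for several lags at once, which is what is meant by buffering past inputs while computing on present ones.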
In the previous chapter it was shown that an approach based on membrane potential densities can be used to analyze the dynamics of networks of integrate-and-fire neurons. For neuron models that include biophysical phenomena such as refractoriness and adaptation on multiple time scales, however, the resulting system of partial differential equations has more than two dimensions and is therefore difficult to solve analytically; even the numerical integration of partial differential equations in high dimensions is slow. To cope with these difficulties, we now present an alternative approach to describing the population activity in networks of model neurons. The central concept is an integral equation for the population activity.
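In its simplest (renewal) form, this equation can be written as follows; the notation is the standard one in this literature rather than a quotation from the chapter:

```latex
% A(t): population activity; P_I(t | \hat{t}): interspike-interval
% distribution, i.e., the probability density that a neuron whose last
% spike occurred at \hat{t} fires its next spike at t. The fraction of
% neurons active at t is the fraction that fired last at some \hat{t} < t
% and whose interval ends exactly at t:
\[
  A(t) \;=\; \int_{-\infty}^{t} P_I\bigl(t \,\big|\, \hat{t}\,\bigr)\,
             A(\hat{t})\, \mathrm{d}\hat{t}.
\]
```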
The advantage of the integral equation approach is four-fold. First, the approach works for a broad spectrum of neuron models, such as the Spike Response Model with escape noise and other Generalized Linear Models (see Chapter 9) for which parameters can be directly extracted from experiments (see Chapter 11). Second, it is easy to assign an intuitive interpretation to the quantities that show up in the integral equation. For example, the interspike interval distribution plays a central role. Third, an approximate mathematical treatment of adaptation is possible not only for the stationary population activity, but also for the case of arbitrary time-dependent solutions. Fourth, the integral equations provide a natural basis for the transition to classical “rate equations,” which will be discussed in Chapter 15.
The goal of the work reported in this paper is to use automated, combinatorial synthesis to generate alternative solutions to be used as stimuli by designers for ideation. FuncSION, a computational synthesis tool that can automatically synthesize solution concepts for mechanical devices by combining building blocks from a library, is used for this purpose. The objectives of FuncSION are to help generate a variety of functional requirements for a given problem and a variety of concepts to fulfill these functions. A distinctive feature of FuncSION is its focus on automated generation of spatial configurations, an aspect rarely addressed by other computational synthesis programs. This paper provides an overview of FuncSION in terms of representation of design problems, representation of building blocks, and rules with which building blocks are combined to generate concepts at three levels of abstraction: topological, spatial, and physical. The paper then provides a detailed account of evaluating FuncSION for its effectiveness in providing stimuli for enhanced ideation.
We present and compare two evolutionary-algorithm-based methods for rectangular architectural layout generation: dense packing and subdivision algorithms. We analyze the characteristics of the two methods on the basis of three floor plan scenarios. Our analyses include the speed with which solutions are generated, the reliability with which optimal solutions can be found, and the number of different solutions that can be found overall. In a subsequent step, we discuss the methods with respect to their different user-interaction capabilities. In addition, we show that each method is capable of generating more complex L-shaped layouts. Finally, we conclude that neither method is superior overall; rather, each is suited to distinct application scenarios because of its particular properties.