Here is a list of resources related to computational neuroscience modelling. Most of these are resources that, at the time of writing, are available as open source software, but we cannot say for how long they will continue to be available in this way. Please refer to our web site, compneuroprinciples.org, for more up-to-date information.
Simulators
If the solution of a computational model is the evolution of a quantity, such as membrane potential or ionic concentration, over time and space, it constitutes a simulation of the system under study. Often simulated quantities change continuously and deterministically, but sometimes quantities can move between discrete values stochastically to represent, for example, the release of a synaptic vesicle or the opening and closing of an ion channel.
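A quantity that moves stochastically between discrete values, such as a single ion channel, can be simulated directly as a two-state Markov process. The following sketch is a minimal illustration (in Python; the function name, rates and time step are illustrative, not taken from the text): the channel flips between closed and open with a transition probability of rate × Δt per step.

```python
import random

def simulate_two_state_channel(alpha, beta, dt, n_steps, seed=1):
    """Simulate a single two-state ion channel (closed = 0, open = 1).

    alpha is the opening rate and beta the closing rate (per ms); at
    each time step of length dt a transition occurs with probability
    rate * dt. Returns the fraction of time spent open.
    """
    random.seed(seed)
    state = 0          # start in the closed state
    open_time = 0.0
    for _ in range(n_steps):
        if state == 0 and random.random() < alpha * dt:
            state = 1
        elif state == 1 and random.random() < beta * dt:
            state = 0
        open_time += state * dt
    return open_time / (n_steps * dt)

# With equal opening and closing rates, the channel should spend about
# half of its time open (steady state alpha / (alpha + beta) = 0.5).
p_open = simulate_two_state_channel(alpha=1.0, beta=1.0, dt=0.01, n_steps=200_000)
```

Averaged over many channels, or over a long time, such a simulation recovers the deterministic open fraction used in continuous models.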
The process of describing and implementing simulations of complex biophysical processes efficiently is an art in itself. Fortunately, for many of the models described in this book, in particular models of the electrical and chemical activity of neurons, and to an extent the models of networks, this problem has been solved. An abundance of mature computer simulation packages exists, and the problem becomes one of choosing a package and learning to use it.
This chapter covers a spectrum of models for both chemical and electrical synapses. Different levels of detail are delineated in terms of model complexity and suitability for different situations. These range from empirical models of voltage waveforms, through more detailed kinetic schemes, to complex stochastic models including vesicle recycling and release. Simple static models that produce the same postsynaptic response for every presynaptic action potential are compared with more realistic models incorporating short-term dynamics, which produce facilitation and depression of the postsynaptic response. Chemical synapses mediated by different postsynaptic receptors, both excitatory and inhibitory, are described. Electrical connections formed by gap junctions are also considered.
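As a concrete instance of a simple empirical waveform model, a dual-exponential conductance is a common choice for the postsynaptic response to a single presynaptic spike. The sketch below (Python; the function name and time constants are illustrative) normalises the waveform so that its peak equals a specified maximum conductance:

```python
import math

def dual_exp_conductance(t, g_max, tau_rise, tau_decay):
    """Dual-exponential synaptic conductance waveform.

    Returns the conductance at time t (ms) after a presynaptic spike,
    normalised so that the peak of the waveform equals g_max.
    Requires tau_decay > tau_rise.
    """
    if t < 0:
        return 0.0
    # Time at which the unnormalised waveform peaks
    t_peak = (tau_rise * tau_decay / (tau_decay - tau_rise)) \
        * math.log(tau_decay / tau_rise)
    norm = math.exp(-t_peak / tau_decay) - math.exp(-t_peak / tau_rise)
    return g_max * (math.exp(-t / tau_decay) - math.exp(-t / tau_rise)) / norm

# Conductance 1 ms after a spike, with a fast rise and slower decay.
g = dual_exp_conductance(t=1.0, g_max=1.0, tau_rise=0.5, tau_decay=5.0)
```

With separate rise and decay time constants, the same function can describe both fast AMPA-receptor-like and slower NMDA-receptor-like response shapes by changing only the parameters.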
Synaptic input
So far we have considered neuronal inputs in the form of electrical stimulation via an electrode, as in an electrophysiological experiment. Many neuronal modelling endeavours start by trying to reproduce the electrical activity seen in particular experiments. However, once a model is established on the basis of such experimental data, we often wish to explore it in settings that cannot be reproduced in an experiment. For example, how does the complex model neuron respond to patterns of synaptic input? How does a model network of neurons function? What sort of activity patterns can a network produce? These questions, and many others besides, require us to be able to model synaptic input. We discuss chemical synapses in most detail, as they are the principal mediators of targeted neuronal communication. Electrical synapses are discussed in Section 7.7.
Most of the mathematical models presented in this book involve differential equations describing the evolution in time and space of quantities such as membrane potential or calcium concentration. The differential equations are usually too complex to allow an analytical solution that would enable the explicit calculation of a value of, say, voltage at any particular time point or spatial position. The alternative is to derive algebraic expressions that approximate the differential equations and allow the calculation of quantities at specific, predefined points in time and space. This is known as numerical integration. Methods for defining temporal and spatial grid points and formulating algebraic expressions involving these grid points from the continuous (in time and space) differential equations are known as finite difference and finite element methods.
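The simplest finite difference scheme, the forward Euler method, replaces the time derivative with the difference quotient (V[n+1] − V[n])/Δt, turning the differential equation into an explicit update rule. A minimal sketch for a passive membrane (Python; parameter names and values are illustrative):

```python
def integrate_rc_membrane(i_inj, v0, c_m, g_leak, e_leak, dt, n_steps):
    """Forward Euler integration of a passive (RC) membrane:

        C_m dV/dt = -g_leak * (V - E_leak) + I_inj

    The derivative is approximated by (V[n+1] - V[n]) / dt, giving the
    explicit update V[n+1] = V[n] + dt * dV/dt.
    """
    v = v0
    trace = [v]
    for _ in range(n_steps):
        dvdt = (-g_leak * (v - e_leak) + i_inj) / c_m
        v = v + dt * dvdt
        trace.append(v)
    return trace

# With no injected current, the voltage decays exponentially towards
# the leak equilibrium potential E_leak.
trace = integrate_rc_membrane(i_inj=0.0, v0=-50.0, c_m=1.0,
                              g_leak=0.1, e_leak=-70.0, dt=0.1, n_steps=1000)
```

The scheme is only first-order accurate and becomes unstable if Δt is large compared with the membrane time constant C_m/g_leak, which motivates the more robust implicit methods discussed next.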
It is not our intention here to provide full details of these numerical integration methods. Instead, we will outline some of the simplest methods to illustrate how they work. This includes the Crank–Nicolson method (Crank and Nicolson, 1947), which is widely used as a basis for solving the cable equation. Further details on these methods as applied to neural models can be found in Carnevale and Hines (2006) and Mascagni and Sherman (1998).
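The Crank–Nicolson idea is to average the right-hand side of the equation between the current and the next time step, which gives second-order accuracy and much better stability than forward Euler. Because the passive membrane equation is linear, the implicit update can be rearranged into an explicit formula for the new voltage. The sketch below (Python, illustrative parameters) applies this to a single compartment only, not the full spatial cable equation:

```python
def crank_nicolson_membrane(i_inj, v0, c_m, g_leak, e_leak, dt, n_steps):
    """Crank-Nicolson update for the passive membrane equation

        C_m dV/dt = -g_leak * (V - E_leak) + I_inj

    Writing dV/dt = -a*V + b, the scheme averages the right-hand side
    over the step: (v_next - v)/dt = 0.5*((-a*v + b) + (-a*v_next + b)),
    which rearranges to an explicit formula for v_next.
    """
    a = g_leak / c_m                       # inverse membrane time constant
    b = (g_leak * e_leak + i_inj) / c_m    # constant drive term
    v = v0
    for _ in range(n_steps):
        v = ((1.0 - 0.5 * a * dt) * v + b * dt) / (1.0 + 0.5 * a * dt)
    return v

# Same passive decay test case as for forward Euler.
v_final = crank_nicolson_membrane(i_inj=0.0, v0=-50.0, c_m=1.0,
                                  g_leak=0.1, e_leak=-70.0, dt=0.1,
                                  n_steps=1000)
```

For a multi-compartment cable, the same averaging leads to a tridiagonal system of equations at each time step, which is what simulators solve efficiently.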
In this chapter a range of models with fewer details than in previous chapters is considered. These simplified neuron models are particularly useful for incorporating into networks since they are computationally more efficient, and sometimes they can be analysed mathematically. Reduced compartmental models can be derived from large compartmental models by lumping together compartments. Additionally, the number of gating variables can be reduced while retaining much of the dynamical flavour of a model. These approaches make it easier to analyse the function of the model using the mathematics of dynamical systems. In the yet simpler integrate-and-fire model, there are no gating variables, action potentials being produced when the membrane potential crosses a threshold. At the simplest end of the spectrum, rate-based models communicate via firing rates rather than individual spikes.
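The integrate-and-fire idea can be sketched in a few lines: integrate a leaky membrane equation, and whenever the voltage crosses a threshold, record a spike and reset. The example below is a minimal illustration (Python; parameter values are chosen for demonstration, not taken from the text):

```python
def simulate_lif(i_inj, tau_m, r_m, v_rest, v_thresh, v_reset, dt, t_max):
    """Leaky integrate-and-fire neuron.

    Integrates tau_m * dV/dt = -(V - v_rest) + r_m * i_inj using forward
    Euler. When V reaches v_thresh, a spike time is recorded and V is
    reset to v_reset; there are no gating variables.
    """
    v = v_rest
    spike_times = []
    n_steps = int(t_max / dt)
    for step in range(n_steps):
        v += dt / tau_m * (-(v - v_rest) + r_m * i_inj)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant suprathreshold current produces regular, periodic firing.
spikes = simulate_lif(i_inj=2.0, tau_m=10.0, r_m=10.0, v_rest=-70.0,
                      v_thresh=-54.0, v_reset=-80.0, dt=0.1, t_max=200.0)
```

Because each neuron reduces to a single state variable plus a threshold test, thousands of such units can be simulated at a tiny fraction of the cost of a compartmental model, which is what makes this class of model attractive for networks.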
Up until this point, the focus has been on adding details to our neuron models. In this chapter we take the apparently paradoxical step of throwing away a lot of what is known about neurons. Given all the painstaking work that goes into building detailed models of neurons, why do this? There are at least two reasons:
(1) We wish to explain how a complicated neural model works by stripping it down to its bare essentials. This gives an explanatory model in which the core mechanisms have been exposed and so are easier to understand.
An essential component of the art of modelling is to carry out appropriate simplifications. This is particularly important when modelling networks of neurons. Generally, it is not possible to represent each neuron of the real system in the model, and so many design questions have to be asked. The principal questions concern the number of neurons in the model network, how each neuron should be modelled and how the neurons should interact. To illustrate how these questions are addressed, different types of model are described. These range from a series of network models of associative memory, in which both neurons and synapses are represented as simple binary or multistate devices, through two different models of thalamocortical interactions, in which the neurons are represented either as multi-compartmental neurons or as spiking neurons, to multi-compartmental models of the basal ganglia and their use in understanding Parkinson's disease. The advantages and disadvantages of these different types of model are discussed.
In their implementation of Marr's influential theory of cerebellar cortex as a learning machine (Marr, 1969), Tyrrell and Willshaw (1992) constructed a simulation model of all the circuitry associated with a single Purkinje cell. With the limited computing resources available at the time, they did this by modelling each 3D layer of cells and connections in a 2D plane. To build the model they had to guess many of the geometrical parameter values, as these were not available. Their simulation results agreed broadly with the analysis carried out by Marr.
This book is about how to construct and use computational models of specific parts of the nervous system, such as a neuron, a part of a neuron or a network of neurons. It is designed to be read by people from a wide range of backgrounds from the biological, physical and computational sciences. The word ‘model’ can mean different things in different disciplines, and even researchers in the same field may disagree on the nuances of its meaning. For example, to biologists, the term ‘model’ can mean ‘animal model’; to physicists, the standard model is a step towards a complete theory of fundamental particles and interactions. We therefore start this chapter by attempting to clarify what we mean by computational models and modelling in the context of neuroscience. Before giving a brief chapter-by-chapter overview of the book, we also discuss what might be called the philosophy of modelling: general issues in computational modelling that recur throughout the book.
Mendel's Laws of Inheritance form a good example of a theory formulated on the basis of the interactions of elements whose existence was not known at the time. These elements are now known as genes.
Theories and mathematical models
In our attempts to understand the natural world, we all come up with theories. Theories are possible explanations for how the phenomena under investigation arise, and from theories we can derive predictions about the results of new experiments.
There are many types of active ion channel beyond the squid giant axon sodium and potassium voltage-gated ion channels studied in Chapter 3, including channels gated by ligands such as calcium. The aim of this chapter is to present methods for modelling the kinetics of voltage-gated and ligand-gated ion channels at a level suitable for inclusion in compartmental models. The chapter will show how the basic formulation used by Hodgkin and Huxley of independent gating particles can be extended to describe many types of ion channel. This formulation is the foundation for thermodynamic models, which provide functional forms for the rate coefficients derived from basic physical principles. To improve on the fits to data offered by models with independent gating particles, the more flexible Markov models are introduced. When and how to interpret kinetic schemes probabilistically to model the stochastic behaviour of single ion channels will be considered. Experimental techniques for characterising channels are outlined, and an overview of the biophysics of channels relevant to modelling is given.
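As a concrete illustration of the gating-particle formulation, the original Hodgkin–Huxley rate functions for the potassium gating particle n can be used to compute the steady-state open fraction and relaxation time constant at any voltage. The sketch below (Python) follows the original 1952 convention, with voltage measured as the depolarisation from rest in mV:

```python
import math

def alpha_n(v):
    """Opening rate (per ms) of the HH potassium gating particle n."""
    if v == 10.0:
        # The expression has a removable singularity at v = 10 mV;
        # its limit there is 0.1.
        return 0.1
    return 0.01 * (10.0 - v) / (math.exp((10.0 - v) / 10.0) - 1.0)

def beta_n(v):
    """Closing rate (per ms) of the n particle."""
    return 0.125 * math.exp(-v / 80.0)

def n_inf(v):
    """Steady-state open fraction of a single n gating particle."""
    return alpha_n(v) / (alpha_n(v) + beta_n(v))

def tau_n(v):
    """Time constant (ms) with which n relaxes towards n_inf."""
    return 1.0 / (alpha_n(v) + beta_n(v))
```

The potassium conductance is then proportional to n⁴; the same pattern of rate functions, steady states and time constants carries over to other channel types with different functional forms.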
Over 100 types of ion channel are known. Each type of channel has a distinct response to the membrane potential, intracellular ligands, such as calcium, and extracellular ligands, such as neurotransmitters. The membrane of a single neuron may contain a dozen or more different types, with the density of each type depending on its location in the membrane.
Intracellular ionic signalling plays a crucial role in channel dynamics and, ultimately, in the behaviour of the whole cell. In this chapter we investigate ways of modelling intracellular signalling systems. We focus on calcium, as it plays an extensive role in many cell functions. Included are models of intracellular buffering systems, ionic pumps and calcium-dependent processes. This leads us to outline other intracellular signalling pathways involving more complex enzymatic reactions and cascades. We introduce the well-mixed approach to modelling these pathways and explore its limitations. When small numbers of molecules are involved, stochastic approaches are necessary. Movement of molecules through diffusion must be considered in spatially inhomogeneous systems.
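The simplest well-mixed description lumps pumps and buffers into a single first-order decay towards a resting concentration, with influx proportional to the calcium current. The sketch below (Python; units and parameter values are illustrative) integrates such a calcium pool in response to a brief current pulse:

```python
def simulate_calcium_pool(i_ca, ca_rest, tau_decay, k, dt, n_steps):
    """Well-mixed calcium pool with first-order extrusion:

        d[Ca]/dt = -([Ca] - ca_rest) / tau_decay + k * i_ca(t)

    i_ca(t) is the calcium current as a function of time; pumps and
    buffers are lumped into the single decay time constant tau_decay.
    """
    ca = ca_rest
    trace = [ca]
    for step in range(n_steps):
        t = step * dt
        dca = -(ca - ca_rest) / tau_decay + k * i_ca(t)
        ca += dt * dca
        trace.append(ca)
    return trace

def pulse(t):
    """A 1 ms pulse of calcium current starting at t = 5 ms."""
    return 1.0 if 5.0 <= t < 6.0 else 0.0

# The transient rises during the pulse, then decays back to rest.
trace = simulate_calcium_pool(pulse, ca_rest=0.05, tau_decay=20.0,
                              k=0.1, dt=0.1, n_steps=2000)
```

This single-pool model is where the well-mixed assumption is easiest to see: the whole compartment has one concentration, which is exactly the assumption that breaks down for small molecule numbers or steep spatial gradients.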
Ionic concentrations and electrical response
Most work in computational neuroscience involves the construction and application of computational models for the electrical response of neurons in experimental and behavioural conditions. So far, we have presented the fundamental components and techniques of such models. Already we have seen that differences in particular ionic concentrations between the inside and outside of a cell are the basis of the electrical response. For many purposes our electrical models do not require knowledge of precise ionic concentrations. They appear only implicitly in the equilibrium potentials of ionic species such as sodium and potassium, calculated from the Nernst equation assuming fixed intra- and extracellular concentrations (Chapter 2). The relative ionic concentrations, and hence the equilibrium potentials, are assumed to remain constant during the course of the electrical activity our models seek to reproduce.
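For reference, the Nernst equation computes an equilibrium potential directly from the concentration ratio. A sketch (Python) using approximate squid axon potassium concentrations; the function name and the particular concentration values are illustrative:

```python
import math

R = 8.314      # gas constant, J / (mol K)
F = 96485.0    # Faraday constant, C / mol

def nernst_potential(conc_out, conc_in, z, temp_c=6.3):
    """Equilibrium (Nernst) potential in mV:

        E = (R * T / (z * F)) * ln([out] / [in])

    conc_out and conc_in are extra- and intracellular concentrations in
    the same units; z is the ionic valence; temp_c is in degrees
    Celsius (default: squid experimental temperature).
    """
    t_kelvin = temp_c + 273.15
    return 1000.0 * (R * t_kelvin / (z * F)) * math.log(conc_out / conc_in)

# Potassium in squid axon: roughly 20 mM outside, 400 mM inside,
# giving an equilibrium potential near -72 mV.
e_k = nernst_potential(conc_out=20.0, conc_in=400.0, z=1)
```

Keeping such potentials fixed is equivalent to the constant-concentration assumption described above; models that track concentration changes must instead recompute them as the concentrations evolve.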
Functional magnetic resonance imaging (fMRI) has become the most popular method for imaging brain function. Handbook of Functional MRI Data Analysis provides a comprehensive and practical introduction to the methods used for fMRI data analysis. Using minimal jargon, this book explains the concepts behind processing fMRI data, focusing on the techniques that are most commonly used in the field. This book provides background about the methods employed by common data analysis packages including FSL, SPM and AFNI. Some of the newest cutting-edge techniques, including pattern classification analysis, connectivity modeling and resting state network analysis, are also discussed. Readers of this book, whether newcomers to the field or experienced researchers, will obtain a deep and effective knowledge of how to employ fMRI analysis to ask scientific questions and become more sophisticated users of fMRI analysis software.
Brain damage can cause memory to break down in a number of different ways, the analysis of which can illuminate how the intact brain mediates memory processes. After first considering the problems involved in assessing memory, this book provisionally advances a taxonomy of elementary memory disorders and, for each in turn, reviews both the specific processes that are disrupted and the lesions responsible for the disruption. These disorders include short-term memory deficits, deficits in previously well-established memory, memory deficits caused by frontal lobe lesions, the organic amnesias, and the disorders of conditioning and skill acquisition. Particular attention is paid to the organic amnesias, about which we know the most, and to the contributions of animal models to our knowledge. Andrew Mayes argues that the memory deficits found in several neurological and psychiatric syndromes comprise co-occurring elementary memory disorders. Finally, he outlines the implications of his taxonomy for our understanding of normal memory. A wide audience of researchers and students will find Human Organic Memory Disorders a helpful guide to a complex problem area.
Understanding how the brain works is probably the greatest scientific and intellectual challenge of our generation. The cerebral cortex is the instrument by which we carry out the most complex mental functions. Fortunately, there exists an immense body of knowledge concerning both cortical structure and the properties of single neurons in the cortex. With the advent of the supercomputer, there has been increased interest in neural network modeling. What is needed is a new approach to an understanding of the mammalian cerebral cortex that will provide a link between the physiological description and the computer model. This book meets that need by combining anatomy, physiology, and modeling to achieve a quantitative description of cortical function. The material is presented didactically, starting with descriptive anatomy and comprehensively examining all aspects of modeling. The book gradually leads the reader from the macroscopic cortical anatomy and standard electrophysiological properties of single neurons to neural network models and synfire chains. The most modern trends in neural network modeling are explored.
The neuropeptide calcitonin gene-related peptide (CGRP) is known to play a pro-nociceptive role after peripheral nerve injury upon its release from primary afferent neurons in preclinical models of neuropathic pain. We previously demonstrated a critical role for spinal cord microglial CD40 in the development of spinal nerve L5 transection (L5Tx)-induced mechanical hypersensitivity. Herein, we investigated whether CGRP is involved in the CD40-mediated behavioral hypersensitivity. First, L5Tx was found to significantly induce CGRP expression in wild-type (WT) mice up to 14 days post-L5Tx. This increase in CGRP expression was reduced in CD40 knockout (KO) mice at day 14 post-L5Tx. Intrathecal injection of the CGRP antagonist CGRP8–37 significantly blocked L5Tx-induced mechanical hypersensitivity. In vitro, CGRP induced glial IL-6 and CCL2 production, and CD40 stimulation added to the effects of CGRP in neonatal glia. Further, there was decreased CCL2 production in CD40 KO mice compared to WT mice 21 days post-L5Tx. However, CGRP8–37 did not significantly affect spinal cord CCL2 production following L5Tx in WT mice. Altogether, these data suggest that CD40 contributes to the maintenance of behavioral hypersensitivity following peripheral nerve injury in part through two distinct pathways, the enhancement of CGRP expression and spinal cord CCL2 production.
Nitric oxide (NO) plays an important role in pathophysiology of the nervous system. Copper/zinc superoxide dismutase (SOD1) reacts with superoxide, which is also a substrate for NO, to provide antioxidative protection. NO production is greatly altered following nerve injury, therefore we hypothesised that SOD1 and NO may be involved in modulating axotomy responses in dorsal root ganglion (DRG)–spinal network. To investigate this interaction, adult Thy1.2 enhanced membrane-bound green fluorescent protein (eGFP) mice underwent sciatic nerve axotomy and received NG-nitro-L-arginine methyl ester (L-NAME) or vehicle 7–9 days later. L4–L6 spinal cord and DRG were harvested for immunohistochemical analyses. Effect of injury was confirmed by axotomy markers; small proline-rich repeat protein 1A (SPRR1A) was restricted to ipsilateral neuropathology, while Thy1.2 eGFP revealed also contralateral crossover effects. L-NAME, but not axotomy, increased neuronal NO synthase (nNOS) and SOD1 immunoreactive neurons, with no colocalisation, in a lamina-dependent manner in the dorsal horn of the spinal cord. Axotomy and/or L-NAME had no effect on total nNOS+ and SOD1+ neurons in DRG. However, L-NAME altered SOD1 expression in subsets of axotomised DRG neurons. These findings provide evidence for differential distribution of SOD1 and its modulation by NO, which may interact to regulate axotomy-induced changes in DRG–spinal network.