This is the first of two volumes dealing with theories of the dynamical behavior of neurons. It is intended to be useful to graduate students and research workers in both applied mathematics and neurobiology. It would be suitable for a one-quarter or one-semester course in quantitative methods in neurobiology.
The book essentially contains descriptions and analyses of the principal mathematical models that have been developed for neurons in the last 30 years. Chapter 1, however, contains a brief review of the basic neuroanatomical and neurophysiological facts that will form the focus of the mathematical development. A number of suggestions are made for further reading for the reader whose training has been primarily mathematical.
The remainder of the book is a mathematical treatment of nerve-cell properties and responses. From Chapter 2 onward, there is a steady increase in mathematical level. An attempt has been made to explain some of the essential mathematics as it is needed, although some familiarity with differential equations and linear algebra is desirable. It is hoped that physiologists will benefit from this method of presentation. Biophysicists, engineers, physicists, and psychologists who are interested in theoretical descriptions of neurons should also find this book useful.
From Chapter 2 onward, the theme is the systematic development of mathematical theories of the dynamical behavior of neurons. The fundamental observation is of the resting membrane potential. Hence Chapter 2 is mainly concerned with the passive properties of cells and is an exposition of the classical theory of membrane potentials (i.e., the Nernst–Planck and Poisson equations).
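The resting potential mentioned above can be illustrated with the simplest result of that classical theory, the Nernst equilibrium potential. The following is a minimal sketch, not taken from the text itself; the temperature and the potassium concentrations are assumed, illustrative values typical of mammalian cells.

```python
import math

# Physical constants
R = 8.314    # gas constant, J/(mol K)
F = 96485.0  # Faraday constant, C/mol
T = 310.0    # absolute temperature, K (about 37 C; an assumed value)

def nernst_potential(z, c_out, c_in):
    """Equilibrium (Nernst) potential in volts for an ion of valence z,
    given outside and inside concentrations (any consistent units)."""
    return (R * T) / (z * F) * math.log(c_out / c_in)

# Potassium with assumed typical concentrations (mM): 5 outside, 140 inside.
E_K = nernst_potential(z=1, c_out=5.0, c_in=140.0)
print(f"E_K = {E_K * 1000:.1f} mV")  # about -89 mV, near typical resting potentials
```

Because the inside concentration of K+ exceeds the outside, the logarithm is negative and the computed potential is inside-negative, consistent with the observed resting membrane potential.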
This book is about densities. In the history of science, the concept of densities emerged only recently as attempts were made to provide unifying descriptions of phenomena that appeared to be statistical in nature. Thus, for example, the introduction of the Maxwellian velocity distribution rapidly led to a unification of dilute gas theory; quantum mechanics developed from attempts to justify Planck's ad hoc derivation of the equation for the density of blackbody radiation; and the field of human demography grew rapidly after the introduction of the Gompertzian age distribution.
From these and many other examples, as well as the formal development of probability and statistics, we have come to associate the appearance of densities with the description of large systems containing inherent elements of uncertainty. Viewed from this perspective one might find it surprising to pose the question: “What is the smallest number of elements that a system must have, and how much uncertainty must exist, before a description in terms of densities becomes useful and/or necessary?” The answer is surprising, and runs counter to the intuition of many. A one-dimensional system containing only one object whose dynamics are completely deterministic (no uncertainty) can generate a density of states! This fact has only become apparent in the past half-century due to the pioneering work of E. Borel [1909], A. Renyi [1957], and S. Ulam and J. von Neumann. These results, however, are not generally known outside that small group of mathematicians working in ergodic theory.
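The claim that a single deterministic one-dimensional system can generate a density of states is easy to demonstrate numerically. The sketch below uses the logistic map S(x) = 4x(1 - x) as an illustrative example (the choice of map, the initial state, and the bin count are all assumptions, not taken from the text): histogramming the states visited by one orbit reveals a stable, reproducible density.

```python
# One deterministic object in one dimension: the logistic map S(x) = 4x(1-x).
# Iterating a single orbit and histogramming the visited states produces a
# well-defined density of states (for this map, 1/(pi*sqrt(x(1-x)))).
def logistic(x):
    return 4.0 * x * (1.0 - x)

x = 0.1234            # arbitrary initial state (assumed; almost any works)
n_iter = 100_000
counts = [0] * 10     # coarse histogram over [0, 1] with 10 bins
for _ in range(n_iter):
    x = logistic(x)
    counts[min(int(x * 10), 9)] += 1

fractions = [c / n_iter for c in counts]
print(fractions)      # mass concentrates near 0 and 1: a U-shaped density
```

No randomness enters anywhere in this computation, yet the histogram settles to the same shape for almost every starting point, which is exactly the phenomenon the ergodic-theory results of Borel, Rényi, Ulam, and von Neumann make precise.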
In previous chapters we concentrated on discrete time systems because they offer a convenient way of introducing many concepts and techniques of importance in the study of irregular behaviors in model systems. Now we turn to a study of continuous time systems.
Continuous and discrete systems differ in several important and interesting ways, which we will touch on throughout the remainder of this book. For example, in a continuous time system, complicated irregular behaviors are possible only if the dimension of the phase space of the system is three or greater. As we have seen, this is in sharp contrast to discrete time processes, which can have extremely complicated dynamics in only one dimension. Further, continuous time processes in a finite-dimensional phase space are in general invertible, which immediately implies that exactness cannot occur for these systems (recall that noninvertibility is a necessary condition for exactness). However, systems in an infinite-dimensional phase space, such as time-delay equations and some partial differential equations, are generally not invertible and thus may display exactness.
This chapter is devoted to an introduction of the concept of continuous time systems, an extension of many properties developed previously for discrete time systems, and the development of tools and techniques specifically designed for studying continuous time systems.
We have seen two ways in which uncertainty (and thus probability) may appear in the study of strictly deterministic systems. The first was the consequence of following a random distribution of initial states, which, in turn, led to a development of the notion of the Frobenius–Perron operator and an examination of its properties as a means of studying the asymptotic properties of flows of densities. The second resulted from the random application of a transformation S to a system and led naturally to our study of the linear Boltzmann equation.
In this chapter we consider yet another source of probabilistic distributions in deterministic systems. Specifically, we examine discrete time situations in which at each time the value x_{n+1} = S(x_n) is reached with some error. An extremely interesting situation occurs when this error is small and the system is ‘primarily’ governed by a deterministic transformation S. We consider two possible ways in which this error might be small: either the error occurs rather rarely and is thus small on the average, or the error occurs constantly but is small in magnitude. In both cases we consider the situation in which the error is independent of S(x_n) and are thus led to first recall the notion of independent random variables in the next section and to explore some of their properties in Sections 10.2 and 10.3.
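The two regimes can be sketched side by side. In the sketch below the transformation S, the logistic map, is an illustrative assumption (the text's S is generic), as are the error probability p and the noise amplitude eps; the error term is drawn independently of S(x_n) in both cases, as the text requires.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def S(x):
    """An illustrative deterministic transformation (assumed, not from the text)."""
    return 4.0 * x * (1.0 - x)

def iterate_rare_error(x, n, p=0.01):
    """Error occurs rarely (probability p) but may be large:
    with probability p the state is replaced by a random point of [0, 1]."""
    traj = []
    for _ in range(n):
        x = random.random() if random.random() < p else S(x)
        traj.append(x)
    return traj

def iterate_small_error(x, n, eps=1e-3):
    """Error occurs at every step but is small in magnitude:
    x_{n+1} = S(x_n) + xi_n with xi_n uniform on [-eps, eps],
    clipped so the state stays in [0, 1]."""
    traj = []
    for _ in range(n):
        x = min(1.0, max(0.0, S(x) + random.uniform(-eps, eps)))
        traj.append(x)
    return traj

t1 = iterate_rare_error(0.3, 1000)
t2 = iterate_small_error(0.3, 1000)
print(len(t1), len(t2))
```

In the first regime the perturbation is rare but arbitrary in size, so it is small only on the average; in the second it acts at every step but with magnitude at most eps. Both trajectories remain in [0, 1], and in both cases the noise is drawn without reference to the current value of S(x_n).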