The preceding chapter dealt with methods for calculating the probability of finding synaptic contacts between neurons. In this chapter we assume that such contacts exist and create a certain network. This chapter develops methods for computing the input-output relations of a network from its structure.
Chapters 2 and 3 provide the material essential for analyzing small cortical networks. It is assumed that one can experimentally observe the input-output relation of a cortical network. Then, with the insight gained from studying Chapter 3, one can surmise the structure of cortical networks that might generate the observed relation. Finally, using the methods described in Chapter 2, one can determine which of the possible networks is most likely to exist in the brain. Chapters 6 and 7 will illustrate an important example of the use of such considerations.
This chapter derives quantitative descriptions of the way in which a spike train arriving at a neural network is modified and converted into an output spike train. The reader should be aware that this assignment is very ambitious and can be only partially achieved. Specifically, we could arrive at formulas that would hold true for spike trains in which the spikes did not follow each other too closely (we would, of course, have to define what “too closely” means).
The execution of brain processes often requires hundreds of milliseconds. Even a simple reaction-time exercise (in which the subject is required to press a button each time a sound is heard) has a central delay of about 100 ms. As the task becomes more complex (e.g., by requesting the subject to respond only to one of several possible sounds), the central delay becomes longer. The long central delay is accounted for by assuming that information processing is done through neuronal activity that must traverse many stations in tandem. This type of processing is often visualized as being carried through a chain of serially connected neurons, such as that shown in Figure 6.1.1A. However, this arrangement is fragile, because if one of the neurons in the chain is damaged, the entire chain (composed of n cells) becomes inoperative. The cortex is subject to a process by which its neuronal population is continually thinned out. Comparisons of neuronal densities in the brains of people who died at different ages (from causes not associated with brain damage) indicate that about a third of the cortical cells die between the ages of twenty and eighty years [Gerald, Tomlinson, and Gibson, 1980]. Adults can no longer generate new neurons, and therefore the neurons that die are never replaced.
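The fragility argument can be made concrete with a short calculation. In the sketch below (the surviving-cell fraction and the chain lengths are illustrative assumptions, not figures from the text), a serial chain stays operative only if every one of its n cells survives, so its probability of remaining intact falls off as p^n:

```python
# Probability that a serial chain of n neurons remains intact when each
# neuron independently survives with probability p. One damaged neuron
# breaks the whole chain, so the chain works only if all n cells survive.
def chain_survival(n: int, p: float) -> float:
    return p ** n

# Illustrative numbers: if roughly a third of cortical cells die between
# ages twenty and eighty, a surviving fraction of about 2/3 is a crude bound.
p_survive = 2 / 3
for n in (1, 5, 10, 20):
    print(f"chain of {n:2d} cells: P(intact) = {chain_survival(n, p_survive):.4f}")
```

Even a modest chain of ten cells would survive with probability below 2 percent under these assumed numbers, which is why strictly serial chains are implausible as a substrate for lifelong processing.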
Comprehending how the brain functions is probably the greatest intellectual and scientific challenge we face. We have virtually no knowledge of the neural mechanisms of relatively simple brain processes, such as perception, motor planning, and retrieval of memories, and we are completely ignorant regarding more complex processes, such as cognition, thinking, and learning.
The cerebral cortex (and particularly its most recently developed part, the neocortex) is considered essential for carrying out these higher brain functions. Although the neocortex has been divided into many subareas, each subarea contains many small modules composed of the same building elements, which are interconnected in essentially the same manner. I subscribe to the belief that the same basic computations are carried out by each small region of the neocortex. Understanding the nature of these computations and their neurophysiological mechanisms is a major challenge for science today.
There are many strategies for attacking these questions. Some researchers believe that one must first prepare detailed descriptions of all the types of neurons and of their internal and external connections. Others believe that the most urgent need is to construct appropriate models to describe how the brain might work. Between these extreme bottom-up and top-down strategies there is room for a multitude of intermediate approaches. The work reported here exemplifies one such approach. It combines anatomical observations, electrophysiological measurements, and abstract modeling in order to obtain a quantitative description of cortical function.
This chapter examines techniques for evaluating the probability of finding a synaptic contact between neurons. A simple case for which one might want to estimate that probability is shown in Figure 2.0.1, where neurons from region A send their axons into region B and establish synaptic contacts there.
One of the principal reasons to evaluate the probability of such contact is to compare the evaluated probability with the experimental findings. Such comparisons can reveal the existence of rules that govern the connectivity between neurons. To illustrate this point, let us suppose that we are able to obtain an estimate of the probability of contact between a neuron from A and a neuron from B, assuming complete randomness (i.e., every neuron from A has the same probability of establishing contact with every neuron in B). We can then conduct experiments in which electrodes are thrust into A and B. With one electrode we stimulate a neuron from A, and with the other we record the response of a neuron from B. Such an experiment can help us decide whether the cell we stimulate indeed affects the recorded cell synaptically. By repeating the experiment many times, we can experimentally evaluate the probability that a neuron from A will make a functioning synaptic contact on a neuron from B.
If the probability of contact that we determined experimentally agrees with the probability evaluated theoretically, then the idea of completely random connectivity can be accepted.
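The logic of this comparison can be sketched in a few lines (a minimal sketch: the per-pair contact probability and the trial count are invented for illustration). We simulate the completely random model, estimate the contact probability from repeated stimulate-and-record trials, and compare the estimate with the value we assumed:

```python
import random

# Completely random connectivity: every neuron from A contacts every neuron
# in B independently with the same probability p. We mimic the experiment
# described above -- stimulate one cell in A, record one cell in B, repeat --
# and compare the estimated contact probability with the value assumed.
random.seed(1)

p_contact = 0.1      # assumed per-pair contact probability (illustrative)
n_trials = 10_000    # simulated stimulate-and-record experiments

hits = sum(random.random() < p_contact for _ in range(n_trials))
print(f"theoretical P(contact) = {p_contact:.3f}")
print(f"estimated   P(contact) = {hits / n_trials:.3f}")
# Agreement is consistent with completely random connectivity; a large
# discrepancy would point to rules governing the connections.
```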
In Chapter 3 we discussed the input-output relations of synaptic connections, as measured by spike-train analysis. In this chapter we attempt to relate these phenomenological transmission curves to intracellular synaptic mechanisms. There appears to be no uniquely accurate way of translating the membrane potential changes into firing rates, though two extreme cases can be understood fairly well. The first case is that of a neuron whose membrane potential hyperpolarizes strongly after each action potential, and then begins to depolarize gradually until it hits the threshold and fires again. Firing times of such neurons are quasi-periodic. The second case is that of a neuron whose membrane potential fluctuates strongly around a constant mean level and whose excitatory postsynaptic potentials (EPSPs) and inhibitory postsynaptic potentials (IPSPs) are small.
Both types of neurons can be found in the mammalian nervous system. We refer to them as the periodically firing neurons and the randomly firing neurons. Sections 4.1 and 4.2 deal with the analysis of firing times in these two types of neurons. Section 4.3 describes the autocorrelation function and shows how it can be used to distinguish between the two types of neurons. It also shows that cortical neurons behave like randomly firing neurons. Sections 4.4 and 4.5 describe the expected modulations of firing rates generated by a postsynaptic potential in these two types of neurons.
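A small simulation makes the distinction visible (a sketch only; the mean interval, jitter, and bin width are assumptions chosen for illustration): the autocorrelation histogram of a quasi-periodic train shows peaks at multiples of the mean interspike interval, whereas that of a randomly firing (Poisson-like) train is flat away from the origin.

```python
import random

random.seed(0)

def periodic_train(n, mean_isi=20.0, jitter=1.0):
    """Quasi-periodic firing: intervals cluster tightly around mean_isi (ms)."""
    t, spikes = 0.0, []
    for _ in range(n):
        t += random.gauss(mean_isi, jitter)
        spikes.append(t)
    return spikes

def poisson_train(n, mean_isi=20.0):
    """Random firing: exponentially distributed intervals (Poisson process)."""
    t, spikes = 0.0, []
    for _ in range(n):
        t += random.expovariate(1.0 / mean_isi)
        spikes.append(t)
    return spikes

def autocorr_histogram(spikes, max_lag=60.0, bin_width=2.0):
    """Count spike pairs at each positive lag (a renewal-density estimate)."""
    bins = [0] * int(max_lag / bin_width)
    for i, ti in enumerate(spikes):
        for tj in spikes[i + 1:]:
            lag = tj - ti
            if lag >= max_lag:
                break
            bins[int(lag / bin_width)] += 1
    return bins

for name, train in (("periodic", periodic_train(2000)),
                    ("random  ", poisson_train(2000))):
    print(name, autocorr_histogram(train))
# The periodic train piles counts near lags of 20, 40, and 60 ms;
# the random train spreads counts roughly evenly across all lags.
```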
The methods and data presented in Chapters 2, 3, and 4 are essential for assessing the feasibility of neuronal circuits composed of a small number of elements. However, the neural network models presented in recent years have demonstrated that computations can also be carried out by massive interactions among a multitude of neurons. This chapter offers an introduction to current trends in neural network modeling.
Modeling of neural networks has been carried out extensively in the past five years. Among the first attempts to build circuits that would compute were those of McCulloch and Pitts [1943], who showed how to compute logic predicates with neurons. They later constructed a neuronal circuit that recognized shapes, regardless of their position in the visual field [Pitts and McCulloch, 1947]. Subsequently there were many other attempts to construct computing circuits from neural-like elements [e.g., Wooldridge, 1979], but most of those attempts did not leave a lasting impression on the neurosciences. The original McCulloch and Pitts paper was difficult to follow, even for mathematicians [Palm, 1986b]; thus, adoption and development of their ideas by neurophysiologists did not follow. The image-recognizing circuit they developed had connectivities that, so far as we know, did not exist in the visual areas. That seems to have been the fate of most “neural computers” suggested in the past.
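The flavor of their result can be conveyed in a few lines (a minimal sketch; the weights and thresholds are the standard textbook choices, not taken from the 1943 paper): a McCulloch-Pitts unit computes a logic predicate by thresholding a weighted sum of binary inputs.

```python
# A McCulloch-Pitts unit: binary inputs, fixed weights, hard threshold.
def mp_unit(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# AND fires only when both inputs fire (threshold 2);
# OR fires when at least one input fires (threshold 1);
# NOT is realized with an inhibitory (negative) weight and threshold 0.
for x in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print(x,
          "AND:", mp_unit(x, (1, 1), 2),
          "OR:",  mp_unit(x, (1, 1), 1))
print("NOT 0:", mp_unit((0,), (-1,), 0), " NOT 1:", mp_unit((1,), (-1,), 0))
```

Networks of such units suffice, as McCulloch and Pitts showed, to compute any finite logical expression, which is what made the result so influential outside neurophysiology.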
Our discussion of the transmission of activity through synapses concentrated on asynchronous transmission. We defined the ASG as the number of spikes added to the postsynaptic stream after a presynaptic spike, but we were not concerned with the exact time at which these extra spikes were produced. We have shown that if several spikes converge on one postsynaptic target, their effects add up linearly as long as their EPSPs do not overlap in time.
In this chapter we deal with the properties of synchronous transmission of activity. We have already shown that if a cell receives convergent excitations from several sources, their combined effect will be greater than the sum of their isolated effects. Here we examine this property quantitatively. An examination of the properties of synchronous activation of neurons will lead us to the conclusion that diverging/converging chains are bound to transmit activity synchronously.
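A toy threshold model shows why synchronous inputs are more effective than the sum of their isolated effects (a sketch under assumed numbers: the EPSP size, threshold, and membrane noise are invented for illustration):

```python
import random

random.seed(2)

# A crude threshold neuron: on each trial the membrane potential is a noisy
# baseline plus the EPSPs that arrive together within that trial.
# Firing probability is P(baseline + summed EPSPs >= threshold).
def firing_prob(n_epsps, epsp=1.0, threshold=5.0, noise=1.5, trials=100_000):
    fires = sum(random.gauss(0.0, noise) + n_epsps * epsp >= threshold
                for _ in range(trials))
    return fires / trials

p0 = firing_prob(0)        # spontaneous firing
p1 = firing_prob(1) - p0   # extra firing caused by one input alone
p3 = firing_prob(3) - p0   # extra firing caused by three synchronous inputs

print(f"one input alone adds         {p1:.4f}")
print(f"three isolated inputs add ~  {3 * p1:.4f}")
print(f"three synchronous inputs add {p3:.4f}")
# Near threshold, p3 far exceeds 3 * p1: coincident EPSPs act supralinearly.
```

It is this supralinearity that favors chains in which many neurons diverge onto, and converge from, each node, so that activity is handed on as synchronous volleys.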
The last section deals with the question of how one can observe synchronous activity experimentally. In fact, this entire book came about because in studies in which we recorded the activity of several neurons simultaneously, we observed phenomena that could be explained only by assuming synchronous transmission through diverging/converging chains.
This chapter presents a brief overview of the macroscopic structure, the microscopic structure, and the embryological genesis of the cerebral cortex. In the last section of this chapter, the quantitative aspects of the cortical morphology are considered. It should be noted that a comprehensive survey of all the known facts about the cortex would be an impossible task. Any selection of facts necessarily reflects the personal opinions of the author regarding the relevance of particular facts and issues. The selection of topics and facts presented here is certainly biased by the general aims of the later chapters and by my view that the brain is essentially deterministic on a large scale, but probabilistic on a small scale.
In studying this chapter, the reader is expected to understand the “classical” views of the cortical anatomy and microscopy, to understand methods and concepts that are dealt with in current neuromorphological research, and to be able to interpret microscopic images in a manner suitable for calculating the probabilities of contact, as described in Chapter 2. Most of this chapter is intended for the reader with little background in neurobiology. The reader who is familiar with the cortical structure may skim this chapter, paying detailed attention only to the tables in Section 1.5.
One of the most remarkable achievements of the human visual system is the capacity to resolve fine detail in the retinal image and efficiently to detect contrast between neighbouring regions of the image. In the central visual field these perceptual abilities appear to be limited by the physical properties of the photoreceptors themselves. The development of spatial vision provides a fine example of the way in which the efficiency of coding in the visual system emerges through an interplay between innate (presumably genetically determined) organization and plasticity of synaptic organization at the level of the visual cortex. As Barlow (1972) pointed out, developmental plasticity might allow the visual cortex to discover, in the pattern of stimulation it receives, important associations and coincidences in the retinal image that relate to the nature of the visual world.
Efficiency of spatial vision in the adult
Factors that might limit spatial vision
The resolution of spatial detail and the detection of contrast in the retinal image might, in principle, be limited by one of a number of factors. Obviously, the optical quality of the image could determine spatial performance and certainly does so in states of refractive error. Even when the eye is accurately focused, chromatic and spherical aberration degrade the image, as does the effect of diffraction, which is dependent on the size of the pupil. Interestingly, under photopic conditions, the pupil of the human eye tends to adopt a diameter that optimizes visual acuity: a larger pupil size would augment the effects of aberrations and a smaller one would increase diffraction, as well as decreasing retinal illumination (Campbell & Gregory, 1960).
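The tradeoff can be sketched quantitatively (a toy model: only the diffraction term follows the standard formula, and the aberration constant is an invented parameter tuned so the optimum lands near the observed 2-3 mm). Diffraction blur shrinks as the pupil widens while aberration blur grows, so total blur has a minimum at an intermediate diameter.

```python
import math

# Toy model of total blur vs. pupil diameter d (in mm). The diffraction
# term follows the standard 1.22 * lambda / d formula; the aberration term
# is an assumed linear growth with d, with the constant chosen so that the
# optimum lands near the observed 2-3 mm (Campbell & Gregory, 1960).
LAMBDA_MM = 555e-6        # wavelength of peak photopic sensitivity, in mm
K_ABERRATION = 1.0e-4     # assumed aberration blur per mm of pupil (rad/mm)

def total_blur(d_mm):
    diffraction = 1.22 * LAMBDA_MM / d_mm      # radians
    aberration = K_ABERRATION * d_mm           # radians (toy assumption)
    return math.hypot(diffraction, aberration)

blur, d_best = min((total_blur(d / 10), d / 10) for d in range(5, 81))
print(f"optimal pupil diameter ~ {d_best:.1f} mm "
      f"(blur ~ {blur * 1e3:.2f} mrad)")
```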
Photoreceptors perform the first step in the analysis of a visual image, namely the conversion of light into an electrical signal. The cellular mechanism of this process of energy transformation or transduction has become much clearer in recent years as a result of advances in the study of the biochemistry and physiology of single photoreceptors, and it is on this aspect that this brief review mainly focuses. It should not be forgotten, though, that photoreceptors perform a more complex task than mere energy conversion and amplification. Photoreceptors adapt by altering the gain of transduction to accord with the prevailing level of illumination, and they thereby widen the range of light intensities over which they can respond. The first stage of temporal analysis occurs in the photoreceptors: time-dependent conductance mechanisms in the inner segment membrane ensure that the voltage signal which drives the transfer of information to second-order cells reaches a peak earlier than the current change across the outer membrane. Finally, the synaptic transfer itself seems to be highly nonlinear, so that even a small hyperpolarization caused by steady illumination greatly reduces the gain of signal transfer. All of these features must be considered by those studying higher levels of information processing in the visual system, and it should be borne in mind that the photoreceptors do not present a faithful spatial and temporal map of the external world to second-order neurons in the visual pathway any more than the visual system as a whole presents an unprocessed image to an imaginary homunculus sitting at the seat of consciousness deep within the brain.
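These three features can be caricatured in a few lines (a sketch only; the gain law, filter time constant, and synaptic nonlinearity below are assumptions for illustration, not measured photoreceptor parameters):

```python
import math

# Toy photoreceptor: (1) adaptation sets gain inversely with background
# intensity (a Weber-like assumption); (2) a high-pass stage makes the
# voltage peak earlier than the slow transduction current; (3) the synaptic
# output is a saturating nonlinearity, so steady hyperpolarization reduces
# the gain of signal transfer.
def receptor_response(stimulus, background, dt=1.0, tau=20.0):
    gain = 1.0 / (1.0 + background)          # (1) adaptation (assumed law)
    slow, out = 0.0, []
    for s in stimulus:
        current = gain * s
        slow += dt / tau * (current - slow)  # low-pass copy of the current
        voltage = current - 0.7 * slow       # (2) high-pass: early peak
        out.append(1.0 / (1.0 + math.exp(-4.0 * voltage)))  # (3) saturation
    return out

step = [0.0] * 10 + [1.0] * 40               # a step of light at t = 10
resp = receptor_response(step, background=0.5)
print("response peaks at t =", resp.index(max(resp)), "then sags")
```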
Over the last few years we have developed machine algorithms for detection and discrimination of liver disease from diagnostic ultrasound scans of the liver (Wagner, Insana & Brown, 1986; Insana et al., 1986a; Insana et al., 1986b; Wagner, Insana & Brown, 1987). Several diffuse disease conditions are manifested through very subtle changes in the texture of the image. In these cases the machine algorithms significantly outperform expert clinical readers of the images (Garra et al., 1989). The discrimination of textures by the machine depends principally on an analysis and partitioning of second-order statistical features such as the autocorrelation and power spectral estimators. This finding has prompted us to investigate whether the human visual system might be more limited in its performance of such second-order tasks than it is for the wide range of first-order tasks where it scores so well.
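For a concrete picture of such second-order features, the sketch below estimates the autocorrelation of two synthetic one-dimensional "textures" with closely matched first-order statistics (the textures and lags are invented for illustration):

```python
import random

random.seed(3)

def autocorr(x, lag):
    """Sample autocorrelation at a given lag -- a second-order statistic;
    by the Wiener-Khinchin theorem the power spectrum is its Fourier pair."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

# Two textures with similar first-order statistics (mean ~0, variance ~1)
# but different second-order structure: white noise vs. a smoothed copy.
white = [random.gauss(0, 1) for _ in range(4096)]
smooth = [0.5 * sum(white[max(0, i - 3):i + 1]) for i in range(4096)]

for name, tex in (("white ", white), ("smooth", smooth)):
    print(name, [round(autocorr(tex, lag), 2) for lag in (1, 2, 4, 8)])
# Means and variances barely differ, but the short-lag autocorrelation
# separates the textures sharply -- the kind of subtle second-order
# feature the machine algorithms exploit.
```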
At the beginning of this decade we learned how to use the words ‘well or good’ and ‘poorly or bad’ in the context of visual performance. We enjoyed a very fruitful collaboration with Horace Barlow through Arthur Burgess, who split his sabbatical at that time between Horace Barlow's lab and ours (Burgess, Wagner, Jennings & Barlow, 1981; Burgess, Jennings & Wagner, 1982). From this collaboration we learned how instructive it is to compare the performance of the human visual system with that of the ideal observer from statistical decision theory. The latter introduces no fluctuations or sources of error beyond those inherent in the data that convey the scene or image.
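A minimal sketch of that ideal observer, for the textbook case of a known signal in white Gaussian noise (the signal profile and noise level are assumptions): it cross-correlates the data with the known signal template, and its sensitivity is the well-known d' = sqrt(sum of s_i^2) / sigma.

```python
import math
import random

random.seed(4)

# Ideal observer for a known signal in white Gaussian noise: cross-correlate
# the data with the signal template and compare with a criterion. Its only
# limitation is the noise in the data itself; d' = sqrt(sum(s_i^2)) / sigma.
signal = [0.5 if 8 <= i < 16 else 0.0 for i in range(32)]  # assumed profile
sigma = 1.0

def template_match(x):
    return sum(s * xi for s, xi in zip(signal, x))

def trial(signal_present):
    return [(s if signal_present else 0.0) + random.gauss(0.0, sigma)
            for s in signal]

def mean(v):
    return sum(v) / len(v)

def sd(v):
    m = mean(v)
    return (sum((x - m) ** 2 for x in v) / len(v)) ** 0.5

present = [template_match(trial(True)) for _ in range(5000)]
absent = [template_match(trial(False)) for _ in range(5000)]

d_theory = math.sqrt(sum(s * s for s in signal)) / sigma
d_sim = (mean(present) - mean(absent)) / sd(absent)
print(f"d' theory = {d_theory:.2f}, simulated = {d_sim:.2f}")
# Human efficiency is the squared ratio of human d' to this ideal d'.
```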
Many striking visual illusions are caused by disturbances to the equilibrium of the visual system resulting from relatively short periods of intense activation; after-images fall into this category, as do motion and tilt after-effects. I am going to suggest a goal, or computational theory, for the equilibration mechanisms that are revealed by some of these illusions: I think they take account of the correlational structure of sensory messages, thereby making the system specially sensitive to new associations. The suspicious coincidences thus discovered are likely to signal new causal factors in the environment, so adaptation mechanisms of the kind suggested could provide the major advantageous feature of the sensory representations formed in the cortex.
Visual adaptation
The visual system changes its characteristics when the image it is handling alters. The simplest and best understood example is the change in sensitivity that occurs when the mean luminance increases or decreases, and it is now well recognised that this parallels the electronic engineer's automatic gain control. The idea was formulated by Craik (1938) and recordings from photoreceptors and bipolars in the retina show the system in operation (e.g. Werblin, 1973), though the exact parts played by the different elements are not yet clear. What is obvious, however, is that the retinal ganglion cells could not possibly be so sensitive to small increments and decrements of light if they had to signal the whole range of luminances the eye is exposed to without changing their response characteristics.
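In the spirit of that analogy (a toy sketch; the gain law and time constant are assumptions, not retinal measurements), an automatic gain control divides the signal by a slowly updated estimate of the mean luminance, so that a small contrast increment stays detectable across a huge range of backgrounds:

```python
# Toy automatic gain control: the output is the deviation from a running
# estimate of mean luminance, divided by that estimate, so a fixed
# *contrast* step produces roughly the same response on dim and bright
# backgrounds alike.
def agc(inputs, tau=50.0, dt=1.0):
    mean_est, out = inputs[0], []
    for x in inputs:
        mean_est += dt / tau * (x - mean_est)   # slow luminance estimate
        out.append((x - mean_est) / mean_est)   # gain ~ 1 / background
    return out

for background in (10.0, 1000.0):
    stim = [background] * 200 + [background * 1.1] * 5   # 10% increment
    response = agc(stim)
    print(f"background {background:6.0f}: peak response {max(response):.3f}")
# Both backgrounds give nearly the same peak response to a 10% increment,
# even though the absolute luminances differ a hundredfold.
```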
Perhaps the most fascinating and yet provocative aspect of vision is the manner in which eyes adapt to their environment. There is no single optimum eye design, as physicists might like, but rather a variety of different solutions, each dictated by the animal's lifestyle, e.g. the ‘Four-eyed fish’, Anableps anableps, with one pair of aerial pupils and a second pair of aquatic pupils (Walls, 1942). This unique fish, which patrols the water surface, dramatizes the extreme plasticity of optics in adapting to a particular subset of the environment. The animal's lifestyle within an environment obviously shapes many properties of the retina, such as the distribution and types of rods, cones and ganglion cells (Walls, 1942; Hughes, 1977; Lythgoe, 1979). Furthermore, it is probable that the grand strategy of early visual information processing is also an adaptation to the particular world in which we live, i.e. a world of objects rather than the infinitely unexpected. We develop this perspective after considering the optical design of eyes.
Our first objective here is to show that elementary ideas of physics and the information sciences can give insight into eye design. In doing so we stress the comparative approach. Only by studying diverse eyes, of both the simple and the compound variety, can we appreciate the common design principles needed to apply concepts from physics meaningfully to biology. Accordingly, we try to explain observations such as: (a) the optical image quality is often superior to the photoreceptor grain; (b) the resolving power of falconiforms and dragonflies is proportional to their head size; (c) the cone outer segment diameter of diverse hawks that differ enormously in head size is fixed at about 2 μm;
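Observations (a)-(c) all trace back to two numbers: the diffraction cutoff of the optics and the sampling limit of the receptor mosaic. A numerical sketch follows (the eye parameters are rough, textbook-scale assumptions, not data for any particular species):

```python
import math

# Two limits that shape eye design:
#  - diffraction: the optics pass angular detail down to about lambda / D
#    radians for an aperture of diameter D, so resolving power grows with
#    eye (hence head) size -- observation (b);
#  - receptor grain: the mosaic samples the image at the Nyquist limit set
#    by receptor spacing s and focal length f, while waveguide optics stop
#    receptor diameters shrinking much below ~2 micrometres -- observation (c).
LAMBDA = 0.5e-6            # wavelength, in metres

def diffraction_cutoff_rad(aperture_m):
    return LAMBDA / aperture_m         # angular period of finest detail

def sampling_cutoff_rad(spacing_m, focal_m):
    return 2 * spacing_m / focal_m     # Nyquist: two receptors per period

# Rough, assumed numbers for a large, falconiform-like eye:
D, f, s = 8e-3, 20e-3, 2e-6
print(f"diffraction limit: {math.degrees(diffraction_cutoff_rad(D)) * 60:.2f} arcmin")
print(f"sampling limit:    {math.degrees(sampling_cutoff_rad(s, f)) * 60:.2f} arcmin")
# The optical cutoff comes out finer than the mosaic's Nyquist limit --
# observation (a): image quality slightly exceeds the receptor grain.
```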