There are many lies in the world, and not a few liars, but there are no liars like our bodies, except it be the sensations of our bodies
Rudyard Kipling: Kim
Why do physiological changes accompany emotion?
The work of Cannon discussed in Chapter 1 centred on pain and fear as ‘great emotions’. It is also now obvious that ‘in the natural state and particularly in subhuman species, the aggression, attachment and sexual patterns are usually accompanied by autonomic discharge …. The same functional stimuli activate the behaviour pattern and the autonomic nervous system arousal’ (Mandler, 1975, p.136). Cannon suggested, essentially, that most such changes can be viewed as physiological preparations for the situation facing the animal. Even with the broader selection of emotions presented by Mandler, this view is unlikely to be contentious in either mechanistic or teleonomic terms.
Let us consider mechanism first. Particularly with examples like the let-down reflex or salivation (Section 2.4) to guide us, we can accept that certain stimuli, or the interpretation of those stimuli, could result in activity in either the autonomic nervous system or glands under nervous control. The release of compounds such as adrenaline, or the direct action of the autonomic nervous system, can then produce extensive changes in the animal's physiology, which can involve increased muscular power, decreased bleeding, etc.
‘Is there any other point to which you would wish to draw my attention?’
‘To the curious incident of the dog in the night-time.’
‘The dog did nothing in the night-time.’
‘That was the curious incident,’ remarked Sherlock Holmes
Arthur Conan Doyle: Silver Blaze
Teleonomy, physiological change and feelings
Chapter 5 discussed a variety of physiological changes which can accompany emotions and which, we argued, adjust the organism's bodily systems in preparation for classes of action frequently required when that emotion is present. Chapter 6 concluded that such changes are not merely of physiological utility but can also play a controlling role in the psychology of emotion. As was noted, the presence of some compound such as adrenaline, or the changes in particular organ systems induced by such compounds, could come, through further evolution, to act as controllers of psychological states. Why, teleonomically speaking, they should do so is, however, not at all clear – and is quite likely to involve rather different reasons for different internal changes and different emotions.
In the present chapter I will discuss a particular behavioural phenomenon, the partial reinforcement extinction effect, and its underlying control. I will present a provisional account of the teleonomy of this phenomenon which will, I hope, show that reasonable teleonomic accounts of the psychological role of physiological changes can be constructed.
Dialectical and non-dialectical interactions in emotion
The previous chapters have, as far as possible, treated the various components of emotion in isolation. One reason for doing this has been simplicity. However, a more important reason has been that, given the likely evolution of emotional systems (Chapters 2, 3), there is no guarantee that the individual ‘components of emotion’ do not have entirely separate control systems from each other. Such separation would not require us to give up emotion as a concept, since teleonomy alone could provide a conceptual link between different components. However, the normal use of the word emotion implies some direct connection between the different components. This chapter, therefore, considers interactions between those aspects of emotion which have been separated by the previous chapters. Throughout, it should be borne in mind that the usual co-occurrence of such components is no justification for treating them as linked. As a corollary to this it should also be remembered that mechanistic links between components of one emotion do not imply mechanistic links between the same components of some other emotion.
Computer simulation has become a valuable - even indispensable - tool in the search for viable models of the self-organizing and self-replicating systems of the biological world as well as the inanimate systems of conventional physics. In this paper we shall present selected results from a large number of computer experiments on model neural networks of a very simple type. In the spirit of McCulloch & Pitts (1943) and Caianiello (1961), the model involves binary threshold elements (which may crudely represent neurons); these elements operate synchronously in discrete time. The synaptic interactions between neurons are represented by a non-symmetric coupling matrix which determines the strength of the stimulus which an arbitrary neuronal element, in the ‘on’ configuration, can exert on a second neuron to which it sends an input connection line. Within this model, the classes of networks singled out for study are defined by one or another prescription for random connection of the nodal units, implying that the entries in the coupling matrix are chosen randomly subject to certain overall constraints governing the number of inputs per ‘neuron’, the fraction of inhibitory ‘neurons’ and the magnitudes of the non-zero couplings.
We are primarily concerned with the statistics of cycling activity in such model networks, as gleaned from computer runs which follow the autonomous dynamical evolution of sample nets. An aspect of considerable interest is the stability of cyclic modes under disturbance of a single neuron in a single state of the cycle.
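As a rough illustration (not the authors' actual simulation code), the synchronous dynamics of such a randomly connected binary threshold network, and the detection of its cyclic modes, can be sketched as follows; the network size, the number of inputs per neuron and the coupling strengths are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12          # number of binary threshold 'neurons' (arbitrary)
K = 4           # inputs per neuron (arbitrary constraint)

# Non-symmetric random coupling matrix: row i holds the strengths of the
# inputs that neuron i receives from K randomly chosen source neurons.
J = np.zeros((N, N))
for i in range(N):
    sources = rng.choice(N, size=K, replace=False)
    J[i, sources] = rng.choice([-1.0, 1.0], size=K)

def step(state):
    """Synchronous update in discrete time: a neuron switches 'on'
    when its summed input exceeds the threshold (here zero)."""
    return (J @ state > 0.0).astype(int)

def find_cycle(state, max_steps=5000):
    """Iterate until some network state recurs; since the state space is
    finite (2**N states) and the dynamics deterministic, every trajectory
    must end on a cycle.  Returns (transient length, cycle length)."""
    seen = {}
    for t in range(max_steps):
        key = tuple(state)
        if key in seen:
            return seen[key], t - seen[key]
        seen[key] = t
        state = step(state)
    raise RuntimeError("no cycle found")

transient, cycle = find_cycle(rng.integers(0, 2, size=N))
```

Gathering `(transient, cycle)` statistics over many random matrices `J` and initial states is then a matter of repeating the last line in a loop; perturbing one neuron in one state of a detected cycle and re-running `find_cycle` probes the stability question raised above.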
During the last decade, a conspicuous theme of experimental and theoretical efforts toward understanding the behavior of complex systems has been the identification and analysis of chaotic phenomena in a wide range of physical contexts where the underlying dynamical laws are considered to be deterministic (Schuster, 1984). Such chaotic activity has been examined in great detail in hydrodynamics, chemical reactions, Josephson junctions, semiconductors, and lasers, to mention just a few examples. Chaotic solutions of deterministic evolution equations are characterized by (i) irregular motion of the state variables, and (ii) extreme sensitivity to initial conditions. The latter feature implies that the future time development of the system is effectively unpredictable. An essential prerequisite for deterministic chaos is non-linear response; and although there are famous examples of chaos in relatively simple systems (e.g. Lorenz, 1963; Feigenbaum, 1978), we expect this kind of behavior to arise most naturally in systems of high complexity. Since biological nerve nets are notoriously non-linear and are perhaps the most complex of all known physical systems, it would be most surprising if the phenomena associated with deterministic chaos were irrelevant to neurobiology. Indeed, there has been a growing interest in the detection and verification of deterministic chaos in biological preparations consisting of few or many neurons. At one extreme we may point to the pioneering work of Guevara et al. (1981) on irregular dynamics observed in periodically stimulated cardiac cells; and, at the other, to the recent analysis by Babloyantz et al. (1985) of EEG data from the brains of human subjects during the sleep cycle, aimed at establishing the existence of chaotic attractors for sleep stages two and four.
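The two defining features of chaos listed above are easy to exhibit in the simplest non-linear setting. The sketch below (an illustration only, unrelated to the cardiac or EEG analyses just cited) iterates the logistic map, a standard one-dimensional chaotic system, from two initial conditions differing by 10^-10:

```python
# Two trajectories of the logistic map x -> r*x*(1-x) at r = 4,
# a parameter value well known to give chaotic motion on [0, 1].
r = 4.0
a, b = 0.3, 0.3 + 1e-10     # initial conditions a tiny distance apart
max_sep = 0.0
for _ in range(60):
    a = r * a * (1.0 - a)
    b = r * b * (1.0 - b)
    max_sep = max(max_sep, abs(a - b))
# The 1e-10 initial difference is amplified exponentially until the two
# trajectories bear no relation to one another: max_sep is of order one.
```

This is the sense in which the future development of a chaotic system is effectively unpredictable: any uncertainty in the initial state, however small, grows until it spans the whole attractor.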
The brain and the computer: a misleading metaphor in place of brain theory
Contrary to the usual philosophy of the natural sciences, the brain has always been understood in terms of the most complex man-made technology of the day, for the simple reason of human vanity. Before and after the computer era, the brain was paraded in the clothing of hydraulic systems (in Descartes' time), and in the modern era as radio command centers, telephone switchboards, learning matrices or feedback control amplifiers. Presently, it is fashionable to borrow the terminology of holograms, catastrophes or even spin glasses. Comparing brains to computers, however, has been by far the most important and most grossly misleading metaphor of all. Its importance has been twofold. First, the early post-war era was the first and last time in history that such an analogy paved the way both to a model of the single neuron, the flip–flop binary element (cf. McCulloch & Pitts, 1943), and to a grand mathematical theory of the function of the entire brain (i.e., information processing and control by networks implementing Boolean algebra, cf. Shannon, 1948; Wiener, 1948). Second, the classical computer, the so-called von Neumann machine, provided neuroscience not only with a metaphor, but at the same time with a powerful working tool. This made computer simulation and modeling flourish in the brain sciences as well (cf. Pellionisz, 1979).
The basic misunderstanding inherent in the metaphor has nevertheless left brain theory in eclipse, although the creator of the computer was the first to point out (von Neumann, 1958) that these living and non-living epitomes of complex organisms appear to operate on diametrically opposed structuro-functional principles.
The modeling of dendritic trees was carefully presented and discussed in earlier publications; only a few points will be summarized here. In Rall (1962) it was shown how the partial differential equation for a passive nerve cable can represent an entire dendritic tree, and how this can be generalized from cylindrical to tapered branches and trees; this paper also showed how to incorporate synaptic conductance input into the mathematical model, and presented several computed examples. In Rall (1964) it was shown how the same results can be obtained with compartmental modeling of dendritic trees; this paper also pointed out that such compartmental models are not restricted to the assumption of uniform membrane properties, or to the family of dendritic trees which transforms to an equivalent cylinder or an equivalent taper and, consequently, that such models can be used to represent any arbitrary amount of nonuniformity in branching pattern, in membrane properties, and in synaptic input that one chooses to specify. Recently, this compartmental approach has been applied to detailed dendritic anatomy represented as thousands of compartments (Bunow et al., 1985; Segev et al., 1985; Redman & Clements, personal communication).
Significant theoretical predictions and insights were obtained by means of computations with a simple ten-compartment model (Rall, 1964). One computation predicted different shapes for the voltage transients expected at the neuron soma when identical brief synaptic inputs are delivered to different dendritic locations; these predictions (and their elaboration in Rall, 1967) have been experimentally confirmed in many laboratories (see Jack et al., 1975; Redman, 1976; Rall, 1977).
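The flavour of such computations can be conveyed by a toy passive compartmental chain, loosely in the spirit of the ten-compartment model but with entirely hypothetical parameter values. A brief current pulse is injected either proximally or distally, and the voltage transient at the 'soma' compartment is recorded; the distal input should produce the slower-rising, more attenuated somatic transient:

```python
import numpy as np

def soma_response(input_comp, n=10, dt=0.01, steps=2000):
    """Passive chain of n compartments (parameters hypothetical):
    each compartment leaks toward rest and is resistively coupled to
    its neighbours.  A brief current pulse is injected into one
    compartment; the voltage at compartment 0 (the 'soma') is recorded."""
    g_leak, g_axial, cap = 1.0, 5.0, 1.0
    v = np.zeros(n)
    trace = []
    for t in range(steps):
        i_inj = np.zeros(n)
        if t * dt < 0.2:                       # brief input pulse
            i_inj[input_comp] = 10.0
        dv = -g_leak * v + i_inj               # leak + injected current
        dv[1:] += g_axial * (v[:-1] - v[1:])   # coupling to left neighbour
        dv[:-1] += g_axial * (v[1:] - v[:-1])  # coupling to right neighbour
        v = v + dt * dv / cap                  # forward-Euler step
        trace.append(v[0])
    return np.array(trace)

near = soma_response(1)   # input close to the soma
far = soma_response(8)    # input far out on the 'dendrite'
```

Comparing the two traces reproduces the qualitative prediction: the distal input yields a smaller somatic peak that occurs later, which is the shape difference that recordings at the soma can exploit to infer the dendritic location of a synapse.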
I am the set of neural firings taking place in your brain as you read the set of letters in this sentence and think of me.
(D. Hofstadter, Metamagical Themas)
Neurobiological systems embody solutions to many difficult problems such as associative memory, learning, pattern recognition, motor coordination, vision and language. It appears they do this via massive parallel processing within and between specialized structures. The mammalian brain is a marvel of coordinated specialization. There are separate areas for each sense modality, with massive intercommunication between areas. There are topographic maps, many specialized neuron types, and quasi-regular small-scale structure (columns and layers) which vary from area to area to accommodate local needs, and plasticity in connections between neurons. Feedback occurs on many levels. This complexity is apparently necessary for the kind of multi-mode processing that brains perform, but it's not clear how much of this structure is necessary to perform isolated tasks such as vision or speech recognition; nor do we know if nature's solutions are optimal. (See chapter 8 of Oster & Wilson (1978), for example, for an interesting discussion of optimization in biology.)
Regardless of whether the brain represents the optimal structure for cognitive processes, it is the only successful one we know of. By analyzing it and modeling it, we may learn the principles on which it operates, and presumably be able to apply these principles to computer technology.
A question of great interest in neural network theory is the way such a network modifies its synaptic connections. It is in the synapses that memory is believed to be stored: the progression from input to output somehow leads to cognitive behaviour. When our work began more than ten years ago, this point of view was shared by relatively few people. Certainly, Kohonen was one of those who not only shared the attitude, but probably preceded us in advocating it. There had been some early work done on distributed memories by Pribram, Grossberg, Longuet-Higgins and Anderson. If you consider a neural network, there are at least two things you can be concerned with. You can look at the instantaneous behaviour, at the individual spikes, and you can think of the neurons as adjusting themselves over short time periods to what is around them. This has recently led to much work related to Hopfield's model; many people are now working on such relaxation models of neural networks. But we are primarily concerned with the longer-term behaviour of neural networks. To a certain extent this too can be formulated as a relaxation process, although it is a relaxation process with a much longer lifetime.
We realized very early, as did many others, that if we could put the proper synaptic strengths at the different junctions, then we would have a machine which, although it might not talk and walk, would begin to do some rather interesting things.
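A minimal sketch of such a relaxation model, along the lines of Hopfield's prescription (with arbitrary sizes and random seed, not drawn from the authors' work), stores a few random patterns in Hebbian synaptic strengths and then relaxes a corrupted probe back toward a stored memory:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))   # three stored memories

# Hebbian outer-product rule: the synaptic strengths are set directly
# from the patterns to be stored; no iterative training is needed.
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)                      # no self-connections

def relax(state, sweeps=10):
    """Asynchronous relaxation: each unit repeatedly aligns itself
    with its local field until the network settles."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

probe = patterns[0].copy()
flipped = rng.choice(N, size=10, replace=False)
probe[flipped] *= -1                          # corrupt 10 of 100 bits
overlap = relax(probe) @ patterns[0] / N      # 1.0 means perfect recall
```

With the synaptic strengths set appropriately, the network completes the corrupted pattern on its own: exactly the sense in which 'putting the proper synaptic strengths at the different junctions' yields a machine that does rather interesting things.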
In the last few years we have learnt an enormous amount about how the immune system functions. We now have at least the outline of an immune system network theory that seems to account for much of the phenomenology (Hoffmann, 1980, 1982; Hoffmann et al., 1988). The many similarities between the immune system and the central nervous system suggested the possibility that the same kind of mathematical model could be applicable to both systems. We found that a neural network theory analogous to the immune system theory can indeed be formulated (Hoffmann, 1986). The basic variables in the immune system network theory are clone sizes; the corresponding variables in the neural network theory are the rates of firing of neurons. We need to postulate that neurons are slightly more complex than has been assumed in conventional neural network theories, namely that there can be hysteresis in the rate of firing of a neuron as the input level of the neuron is varied.
The added complexity of the hysteresis postulate is compensated by a new simplicity at the level of the network; the network can learn without any changes in the synaptic connection strengths (Hoffmann, Benson, Bree & Kinahan, 1986). Learned information is associated solely with a state vector; memory is a consequence of the fact that due to the hysteresis associated with each neuron, the system tends to stay in the region of an N-dimensional phase space to which its experiences have taken it. A network's stimulus–response behaviour is determined by its location in that space.
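The hysteresis postulate can be illustrated with a single two-threshold firing-rate unit (threshold and rate values hypothetical): for inputs between the two thresholds, the unit's output depends on its history rather than on any synaptic weight change.

```python
def make_hysteretic_neuron(th_up=0.7, th_down=0.3):
    """Firing rate jumps high when the input exceeds th_up, drops low
    when it falls below th_down, and for intermediate inputs retains
    whatever value its history left it with (hysteresis)."""
    state = {'rate': 0.0}

    def respond(inp):
        if inp > th_up:
            state['rate'] = 1.0
        elif inp < th_down:
            state['rate'] = 0.0
        return state['rate']

    return respond

neuron = make_hysteretic_neuron()
neuron(0.9)                  # drive the rate high
high_branch = neuron(0.5)    # intermediate input: rate stays high
neuron(0.1)                  # drive the rate low
low_branch = neuron(0.5)     # same intermediate input: rate stays low
```

The same intermediate input yields different outputs on the two branches, so a network of N such units can retain learned information purely in its position in the N-dimensional phase space, with fixed connection strengths.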
There has recently been a marked increase in research activity regarding the structure and function of the brain. Much of this has been generated by the more general advances in biology, particularly at the molecular and microscopic levels, but it is probably fair to say that the stimulation has been due at least as much to recent advances in computer simulation. To accept this view does not mean that one is equating the brain to an electronic computer, of course; far from it: those involved in brain research have long since come to appreciate the considerable differences between the cerebral cortex and traditional computational hardware. But the computer is nevertheless a useful device in brain science, because it permits one to simulate processes which are difficult to monitor experimentally, and perhaps impossible to handle by theoretical analysis.
The articles in this book are written records of talks presented at a meeting held at the Gentofte Hotel, Copenhagen, during the three days August 20–22, 1986. They have been arranged in an order that places more general aspects of the subject towards the beginning, preceding those applications to specific facets of brain science which make up the balance of the book. The final chapters are devoted to a number of ramifications, including the design of experiments, communication and control.
The meeting could not have been held without the financial support generously donated by the Augustinus Foundation, the Carlsberg Foundation, the Mads Clausen (Danfoss) Foundation, the Danish Natural Science Research Council, the Hartmann Foundation, IBM, the Otto Mønsted Foundation, NORDITA, the NOVO Foundation, and SAS.
A prominent feature of the brain is the apparent diversity of its structure: the distribution of neurons and the way in which their dendrites and axon fibers differ in various brain centers. The pattern of inputs and outputs of each neuron in the brain most probably differs from that of any other neuron in the system, and this clearly imposes constraints on any attempt at generalization. Yet, since its inception, microscopy of the central nervous system (CNS) has involved a sustained effort to define the laws of spatial arrangement and of connectivity distinguishing specific structures. The question which naturally arises is whether these structural features may reflect, and perhaps determine, fundamental differences in the mode of operation of distinct brain structures. Alternatively, such structural specializations may merely represent anatomical ‘accidents of development’, perhaps reflecting phylogenetic origin, but playing a functional role no more significant than, for example, that of the appendix or the coccygeal vertebrae in man.
It is difficult to answer this question from the presently available anatomical and physiological data. Although substantial neurohistological data, on the one hand, and neurophysiological information, on the other, are available, meaningful correlation of the two sets of data can be accomplished only in isolated instances. In general, unlike recording from invertebrates, where the simplicity and viability of the nervous system make it feasible to observe the elements recorded, physiological studies of the mammalian CNS are performed in a ‘blind’ fashion, and it is exceedingly difficult to correlate such studies with the microscopical anatomy of the tissue.