The serotonin transporter is a key brain protein that modulates the reuptake of the neurotransmitter serotonin from synaptic spaces back into the presynaptic neuron. This control over neuronal signalling makes it a prime area of neuroscientific study. In this book an international team of top experts introduce and explicate the role of serotonin and the serotonin transporter in both human and animal brains. They demonstrate the relevance of the transporter, and indeed the serotonergic system, to the substrates of neuropsychiatric disorders, and explain how this knowledge is translated into valid animal models that will help foster new discoveries in human neurobiology. Writing for graduate students and academic researchers, they provide comprehensive coverage of a wide spectrum of data, from animal experimentation to clinical psychiatry, creating the only book exclusively dedicated to this exciting new avenue of brain research.
Bird song is one of the most remarkable and impressive sounds in the natural world, and has inspired not only students of natural history, but also great writers, poets and composers. Extensively updated from the first edition, this book argues that the two main functions of song are attracting a mate and defending territory. It shows how this evolutionary pressure has led to the amazing variety and complexity we see in the songs of different species throughout the world. Writing primarily for students and researchers in animal behaviour, the authors review over 1000 scientific papers and reveal how scientists are beginning to unravel and understand how and why birds communicate with the elaborate vocalisations we call song. Highly illustrated throughout and written in straightforward language, Bird Song also holds appeal for amateur ornithologists with some knowledge of biology.
Functional Magnetic Resonance Imaging (fMRI) has become a standard tool for mapping the working brain's activation patterns, both in health and in disease. It is an interdisciplinary field and crosses the borders of neuroscience, psychology, psychiatry, radiology, mathematics, physics and engineering. Developments in techniques, procedures and our understanding of this field are expanding rapidly. In this second edition of Introduction to Functional Magnetic Resonance Imaging, Richard Buxton – a leading authority on fMRI – provides an invaluable guide to how fMRI works, from introducing the basic ideas and principles to the underlying physics and physiology. He covers the relationship between fMRI and other imaging techniques and includes a guide to the statistical analysis of fMRI data. This book will be useful both to the experienced radiographer, and the clinician or researcher with no previous knowledge of the technology.
Neural networks have been employed as research tools both for machine learning applications and for the simulation of artificial organisms. In recent years, much research has been undertaken on the evolution of neural networks, in which the architecture, the weights or both are allowed to be determined by an evolutionary process such as a genetic algorithm. Much of this research has been carried out with the machine learning and evolutionary computation communities in mind rather than the artificial life community. As a result, the latter has been slow to adopt innovative techniques which could lead to the development of complex, adaptive neural networks and, in addition, shorten experiment development and design times for researchers.
This chapter attempts to address this issue by reminding researchers of the wealth of techniques that have been made available for evolutionary neural network research. Many of these techniques have been refined into freely available and well-maintained code libraries which can easily be incorporated into artificial life projects hoping to evolve neural network controllers.
The first section of this chapter reviews the techniques employed to evolve neural network architectures, weights, or both architectures and weights simultaneously. The encoding schemes presented in this chapter describe the encoding of multi-layer feedforward and recurrent neural networks, but there are some encoding schemes which can be (and have been) employed to generate more complex neural networks, such as spiking networks (Floreano & Mattiussi, 2001; Di Paolo, 2002) and GasNets (Smith et al., 2002), which are beyond the scope of this chapter.
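To make the general approach concrete, the sketch below evolves only the weights of a small, fixed feedforward network with a very simple genetic algorithm. The XOR task, network size, fitness function and GA settings are arbitrary illustrative choices, not taken from this chapter or from any of the libraries it reviews.

```python
import numpy as np

# Hypothetical toy task: evolve the weights of a fixed 2-3-1 feedforward
# network to approximate XOR. All sizes and GA settings are illustrative only.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_HID = 3
N_W = 2 * N_HID + N_HID + N_HID + 1   # input->hidden, hidden biases, hidden->output, output bias

def forward(w, x):
    """Decode a flat genome into network weights and compute the output."""
    i = 0
    W1 = w[i:i + 2 * N_HID].reshape(2, N_HID); i += 2 * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    h = np.tanh(x @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output unit

def fitness(w):
    pred = np.array([forward(w, x) for x in X])
    return -np.mean((pred - y) ** 2)               # negative mean squared error

# Simple generational GA: truncation selection plus Gaussian mutation.
pop = rng.normal(0.0, 1.0, size=(60, N_W))
for gen in range(300):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                        # keep the 10 best genomes
    children = parents[rng.integers(0, 10, size=50)] \
        + rng.normal(0.0, 0.3, size=(50, N_W))                     # mutated copies
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print([round(float(forward(best, x)), 2) for x in X])              # should approach [0, 1, 1, 0]
```

Direct encoding of a flat weight vector, as here, is the simplest possible scheme; the encodings reviewed in this chapter generalise the idea to architectures as well as weights.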
This chapter has two main aims. The first is to give an introduction to some of the construction techniques – the ‘nuts-and-bolts’ as it were – of the neural networks deployed by the authors in this book. Our intention is to emphasise conceptual principles and their associated terminology, and to do this wherever possible without recourse to detailed mathematical descriptions. However, the term ‘neural network’ has taken on a multitude of meanings over the last couple of decades, depending on its methodological and scientific context. The second aim, therefore, given that the applications of the techniques described in this book may appear rather diverse, is to supply some meta-theoretical landmarks to help the reader understand the significance of the ensuing results.
In general terms, neural networks are tools for building models of systems that are characterised by data sets which are often (but not always) derived by sampling a system's input-output behaviour. While a neural network model is of some utility if it mimics the behaviour of the target system, it is far more useful if key mechanisms underlying the model's functionality can be unearthed and identified with those of the underlying system. That is, the modeller can ‘break into’ the model, viewed initially as an input-output ‘black box’, and find internal representations, variable relationships and structures which may correspond with those of the underlying target system. This target system may be entirely non-biological (e.g. stock market prices), or may be of biological origin but have nothing to do with brains (e.g. ecologically driven patterns of population dynamics).
This book represents a substantial update of a theme issue of the journal Philosophical Transactions of the Royal Society B, ‘The use of artificial neural networks to study perception in animals’ (Phil Trans R Soc B 2007 March 29; 362(1479)). Most of the 14 papers in that theme issue have been significantly updated, and we include a further five entirely new chapters, reflecting emerging themes in neural network research. Our reasons for undertaking the theme issue and this book were not entirely altruistic. Having a young but growing interest in the use of artificial neural networks, we hoped that the publications would be an excuse for us to learn about areas in neural network research that seemed interesting to us and of potential application to our own research. The people who will get most from the book are, therefore, ecologists and evolutionary biologists, perhaps with a notion of using neural network models of perception, but with little experience of their use. That said, the content of this book is extremely broad and we are confident that there is something in it for any scientist with an interest in animal (including human) perception and behaviour.
We organise the book into four fairly loose categories. The chapters by Kevin Gurney and Steve Phelps are broad reviews and introduce the two main themes of the book: neural networks as tools to explore the nature of perceptual processes, and neural networks as models of perception in ecology and evolutionary biology.
The brain has various functions, such as memory, learning, awareness and thinking. These functions are produced by the activity of neurons that are connected to each other in the brain. Many models have been proposed to reproduce the memory of the brain, and the Hopfield model is one of the most studied (Hopfield, 1982). The Hopfield model was proposed to reproduce associative memory, and it has been studied extensively by physicists because it is similar to the Ising model of spin glasses. The model has been studied in detail; for example, its storage capacity was analysed by the replica method (Amit, 1989; Hertz et al., 1991). However, in these studies the neural networks are completely connected, i.e. each neuron is connected to all other neurons. Until recently, it was not clear how the properties of the model depend on the pattern of connections between neurons (Tosh & Ruxton, 2006a, 2006b).
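As a concrete illustration of the fully connected Hopfield model described above, the sketch below stores a few random patterns with the standard Hebbian rule and recalls one of them from a corrupted cue. The network size, number of patterns and corruption level are arbitrary example values, kept well inside the model's known storage capacity.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 5                                    # illustrative sizes: 100 neurons, 5 stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage rule: W_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

# Corrupt one stored pattern by flipping 20% of its units.
state = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
state[flip] *= -1

# Asynchronous updates: each chosen neuron takes the sign of its local field.
for _ in range(5 * N):
    i = rng.integers(N)
    state[i] = 1 if W[i] @ state >= 0 else -1

overlap = (state @ patterns[0]) / N              # 1.0 means perfect recall
print(f"overlap with stored pattern: {overlap:.2f}")
```

Restricting the weight matrix to a sparser connection pattern is exactly the kind of modification whose consequences for recall this chapter goes on to examine.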
In recent years the study of complex networks has received much attention. A network consists of nodes and links. A node is a site or point on the network, such as a neuron; nodes are connected by links, such as the axons or synapses between neurons. Several characteristic network structures have been proposed, and small-world and scale-free networks in particular have been studied intensively in recent years. Small-world networks have the property that the characteristic path length is very short while, simultaneously, the clustering coefficient is large (Watts & Strogatz, 1998).
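The two measures that define the small-world regime can be computed directly. The sketch below uses the networkx library to compare a regular ring lattice, a small-world network and a near-random graph generated by the Watts–Strogatz rewiring procedure; the network size, degree and rewiring probabilities are arbitrary example values.

```python
import networkx as nx

# Compare characteristic path length and clustering coefficient for three
# Watts-Strogatz graphs: regular (p=0), small-world (p=0.1) and near-random (p=1).
# n, k and the rewiring probabilities are illustrative choices only.
n, k = 1000, 10
for p in (0.0, 0.1, 1.0):
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    L = nx.average_shortest_path_length(G)   # characteristic path length
    C = nx.average_clustering(G)             # clustering coefficient
    print(f"p={p:>4}: path length L={L:5.2f}, clustering C={C:.3f}")
# In the small-world regime, L is already close to the random-graph value
# while C remains close to that of the regular lattice.
```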
A key attribute of all but the simplest organisms is an ability to modify their actions in the light of experience – that is, to learn. This attribute allows individuals to adapt to rapidly changing environments. Learning is a fundamental aspect of animal behaviour (Barnard, 2003). One aspect of animal behaviour in which learning has been particularly extensively studied is food gathering (see recent reviews by Adams-Hunt & Jacobs, 2007; Sherry & Mitchell, 2007; Stephens, 2007), and it is this aspect that we will focus on. We use the term ecological learning to describe an organism learning about its environment.
Neural network models are being used increasingly as effective tools for the description and study of animal behaviour (see Enquist & Ghirlanda, 2005 for a review). There are many different techniques that can be used to model animal learning, Bayesian approaches being one example. However, because of their ability to generalise, neural networks have also been used to model stimulus learning in animals, and have even been used to examine the difference between neural network predators that evolve and those that learn (for example, see Kamo et al., 2002). In this chapter we focus solely on the use of neural networks to represent ecological learning (such as a predator learning and generalising over prey) and argue that there are fundamental differences between the way neural network models are generally trained and the way organisms learn.
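One way to make that contrast concrete is to compare batch training, in which the whole stimulus set is swept over repeatedly, with online trial-by-trial updating, in which each encounter adjusts the weights immediately and is then discarded. The sketch below does this for a single logistic unit on a hypothetical prey-discrimination task; the task, learning rates and stimulus statistics are illustrative assumptions, not the models analysed in this chapter.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy task: classify "prey" from two stimulus dimensions.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # arbitrary true boundary

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def accuracy(w, b):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

# Batch training: repeated sweeps over the full stimulus set (gradient descent).
w_b, b_b = np.zeros(2), 0.0
for epoch in range(200):
    err = sigmoid(X @ w_b + b_b) - y
    w_b -= 0.1 * X.T @ err / len(X)
    b_b -= 0.1 * err.mean()

# Online "trial-by-trial" training: one pass, each stimulus encountered once.
w_o, b_o = np.zeros(2), 0.0
for x, t in zip(X, y):
    err = sigmoid(x @ w_o + b_o) - t
    w_o -= 0.1 * err * x
    b_o -= 0.1 * err

print(f"batch accuracy:  {accuracy(w_b, b_b):.2f}")
print(f"online accuracy: {accuracy(w_o, b_o):.2f}")
```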
Consideration of the design and use of animal signals is of fundamental importance for our understanding of the social organisation and the perceptual and cognitive abilities of animals (e.g. Endler & Basolo, 1998). Movement-based visual signals have proven particularly difficult to understand because (in contrast to colour and auditory signals) perception, environmental conditions at the time of signalling and information content of motion signals cannot be easily modelled. Image motion has to be computed by the brain from the temporal and spatial correlations of photoreceptor signals. Although the computational structure of motion perception is well understood, in most situations it is still practically impossible to accurately quantify image motion signals under natural conditions from the animal's perspective. This undermines our ability to understand the perceptual constraints on movement-based signal design.
Extrapolating from other signalling systems, the diversity of movement-based signals between species is likely to be a function of the characteristics of competing, irrelevant sensory stimulation, or ‘noise’, and sensory system capabilities. The extent to which the spatiotemporal properties of signal and noise overlap remains unclear, however, and indeed, the motion characteristics that reliably lead to segmentation of the signal from noise are largely unresolved. It is therefore difficult to know the circumstances in which signal detection is compromised. In this chapter, I begin to generate the kind of data that will help explain movement-based signal evolution by modelling the changing perceptual task facing the Australian lizard Amphibolurus muricatus in detecting conspecific communicative displays.
In this chapter I will examine the use of artificial neural networks in the study of prey colouration as an adaptation against predation. Prey colouration provides numerous spectacular examples of adaptation (e.g. Cott, 1940; Edmunds, 1974; Ruxton et al., 2004). These include colour patterns that disguise prey and make their bearers difficult to detect, as well as brilliant colourations and patterns that prey may use to deter a predator. As a consequence, prey colouration has been a source of inspiration for biologists since the earliest days of evolutionary biology (e.g. Wallace, 1889).
The anti-predation function of prey colouration is evidently a consequence of natural selection imposed by predation. More specifically, it is the predators' way of processing visual information that determines the best possible appearance of the colouration of a prey for a given anti-predation function and under given conditions. Because predators' ability to process visual information has such a central role in the study of prey colouration, it follows that we need models that enable us to capture the essential features of such information processing.
An artificial neural network can be described as a data-processing system consisting of a large number of simple, highly interconnected processing elements (artificial neurons) in an architecture inspired by biological nervous systems (Tsoukalas & Uhrig, 1997). Artificial neural networks provide a technique that has been applied in various disciplines of science and engineering to tasks such as pattern recognition, categorisation and decision making, as well as serving as a modelling tool in neurobiology (e.g. Bishop, 1995; Haykin, 1999).
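For readers unfamiliar with these processing elements, the sketch below implements a single artificial neuron: a weighted sum of its inputs passed through a nonlinear (here sigmoid) activation function. The particular weights and inputs are arbitrary example values.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """One processing element: a weighted sum of the inputs followed by a
    sigmoid activation, as in most feedforward network models."""
    activation = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-activation))

# Arbitrary example values for illustration only.
print(artificial_neuron(np.array([0.2, 0.9, 0.4]),
                        np.array([1.5, -0.8, 0.3]),
                        bias=0.1))
```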
In motion vision, two distinct models have been proposed to account for direction selectivity: the Reichardt detector and the gradient detector (Figure 3.1). In the Reichardt detector (also called the ‘Hassenstein–Reichardt’ detector or correlation-type motion detector), the luminance levels of two neighbouring image locations are multiplied after being filtered asymmetrically (Figure 3.1, left). This operation is performed twice in a mirror-symmetrical fashion, before the outputs of both multipliers are subtracted from each other (Hassenstein & Reichardt, 1956; Reichardt, 1961, 1987; Borst & Egelhaaf, 1989). The spatial or temporal average of such local motion detector signals is proportional to the image velocity within a range set by the detector time constant (Egelhaaf & Reichardt, 1987). However, it is one of the hallmarks of this model that the output of the individual detectors depends, in addition to stimulus velocity, in a characteristic way on the spatial structure of the moving pattern. In response to drifting gratings, for example, the local Reichardt detector output consists of two components: a sustained (DC) component, which indicates by its sign the direction of the moving stimulus, and an AC component, which follows the local intensity modulation and thus carries no directional information at all. Since the local intensity modulations are phase-shifted with respect to each other, the AC components in the local signals cancel when the outputs of many adjacent detectors are spatially integrated. Unlike the AC component, the DC component survives spatial or temporal averaging. The global output signal, therefore, is purely directional.
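The cancellation argument above is easy to reproduce numerically. The sketch below implements a one-dimensional array of correlation-type detectors responding to a rightward-drifting sine grating, using a first-order low-pass filter as the delay stage. The grating parameters, filter time constant and array size are arbitrary example values, and this is a bare-bones illustration rather than any of the elaborated detector variants discussed in this chapter.

```python
import numpy as np

# Minimal 1-D Reichardt (correlation-type) detector sketch for a drifting grating.
dt, T = 0.001, 2.0
t = np.arange(0, T, dt)
n_receptors = 40
x = np.arange(n_receptors)                      # photoreceptor positions
wavelength, velocity = 8.0, 20.0                # pattern units, units per second
tau = 0.02                                      # delay (low-pass) time constant, seconds

# Luminance at each photoreceptor for a rightward-drifting sine grating.
lum = np.sin(2 * np.pi * (x[:, None] - velocity * t[None, :]) / wavelength)

def lowpass(signal, tau, dt):
    """First-order low-pass filter acting as the delay line of one arm."""
    out = np.zeros_like(signal)
    a = dt / tau
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + a * (signal[i] - out[i - 1])
    return out

def reichardt(left, right):
    """Delay-and-correlate in two mirror-symmetric arms, then subtract."""
    return lowpass(left, tau, dt) * right - lowpass(right, tau, dt) * left

# One local detector: a directional DC offset plus a large AC ripple.
local = reichardt(lum[0], lum[1])
# Spatial average over adjacent detectors: the AC components cancel, the DC survives.
spatial = np.mean([reichardt(lum[i], lum[i + 1]) for i in range(n_receptors - 1)], axis=0)

print(f"local detector:  mean {local[500:].mean():+.3f}, ripple {local[500:].std():.3f}")
print(f"spatial average: mean {spatial[500:].mean():+.3f}, ripple {spatial[500:].std():.3f}")
```

The single detector carries the directional mean buried under a large modulation, whereas the spatially averaged output retains essentially only the directional component, as described above.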
Artificial neural networks are increasingly being used by ecosystem, behavioural and evolutionary ecologists. A particularly popular model is the three-layer, feedforward network, trained with the back-propagation algorithm (e.g. Arak & Enquist, 1993; Ghirlanda & Enquist, 1998; Spitz & Lek, 1999; Manel et al., 1999; Holmgren & Getz, 2000; Kamo et al., 2002; Beauchard et al., 2003). The utility of this design (especially if, as is common, the output layer consists of a single node) is that for a given set of input data, the network can be trained to make decisions, and this decision apparatus can subsequently be applied to inputs that are novel to the network. For example, an ecosystem ecologist with a finite set of ecological, biochemical and bird-occurrence data for a river environment can train a network to produce a predictive tool that will determine the likelihood of bird occurrence through sampling of the environment (Manel et al., 1999). Or in behavioural and evolutionary ecology, a network can be trained to distinguish between a ‘resident animal’ signal and ‘background’ signals, and subsequently used to determine how stimulating a mutant animal signal is, and hence, how signals can evolve to exploit receiver training (Kamo et al., 2002). Reasons for the popularity of the back-propagation training method (Rumelhart et al., 1986) include its computational efficiency, robustness and flexibility with regard to network architecture (Haykin, 1999).
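The sketch below is a minimal version of the design described above: a three-layer feedforward network with a single output node, trained by back-propagation of a squared-error signal and then queried with inputs it has never seen. The toy discrimination task, layer sizes, learning rate and training length are arbitrary illustrative choices, not parameters from any of the studies cited.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical toy data: two input "stimulus" dimensions, binary response.
X = rng.normal(size=(300, 2))
y = ((X[:, 0] ** 2 + X[:, 1] ** 2) < 1.0).astype(float).reshape(-1, 1)

n_in, n_hid, n_out = 2, 6, 1            # three layers, single output node
W1 = rng.normal(0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)            # hidden-layer activations
    out = sigmoid(H @ W2 + b2)          # single output node
    # Backward pass: gradients of the mean squared error.
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_hid / len(X); b1 -= lr * d_hid.mean(axis=0)

# The trained network can now be queried with inputs it has never seen.
novel = np.array([[0.1, 0.2], [2.0, 2.0]])
print(sigmoid(sigmoid(novel @ W1 + b1) @ W2 + b2).ravel())   # should be roughly [high, low]
```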
From Part II - The use of artificial neural networks to elucidate the nature of perceptual processes in animals
By L. Douw, C.J. Stam, M. Klein, J.J. Heimans and J.C. Reijneveld, VU University Medical Centre
The human brain is by far the most complex network known to man. Neuroscience has for a long time favoured a reductionist approach when studying the brain, in part precisely because of its daunting complexity. Although highly important insights have been obtained using such a localisation-based approach, this type of research has failed to elucidate the elaborate mechanisms involved in higher brain functioning and perception. As a consequence, an increasing body of research on the brain's functional status is now founded on modern network theory. In this subdivision of mathematics and physics, emphasis is placed on the manner in which several parts of the brain interact, rather than on which specific part of the cortex is responsible for a certain task. The first studies using networks to investigate the brain made use of computational models and animal experiments. Thanks to the great research advances of recent years, network theory is now being applied directly to the human brain. Studies are being performed both in the healthy population and in several patient groups, in order to find out what constitutes a healthy versus a diseased brain (for an introduction to brain networks, see Watts & Strogatz, 1998; Bassett & Bullmore, 2006; Reijneveld et al., 2007).
Brain tumours almost invariably cause highly burdensome symptoms, such as cognitive deficits and epileptic seizures. The tumour has a significant impact on the brain, since it forces the non-tumoural tissue to adapt to the presence and constant expansion of a foreign entity.
Biological systems are inherently noisy and are typically composed of distributed, partially autonomous components. These features require that we understand evolutionary traits in terms of probabilistic design principles, rather than traditional deterministic engineering frameworks. This characterisation is particularly relevant for signalling systems. Signals, whether between cells or individuals, provide essential integrative mechanisms for building complex, collective structures. These signalling mechanisms need to integrate, or average, information from distributed sources in order to generate reliable responses. Thus there are two primary pressures operating on signals: the need to process information from multiple sources, and the need to ensure that this information is not corrupted or effaced. In this chapter we provide an information-theoretic framework for thinking about the probabilistic logic of animal communication in relation to robust, multi-modal signals.
Many types of signal have evolved to allow for animal communication. These signals can be classified according to five features: modality (the number of sensory systems involved in signal production), channels (the number of channels involved in each modality), components (the number of communicative units within modalities and channels), context (variation in signal meaning due to social or environmental factors) and combinatoriality (whether modalities, channels, components and/or contextual usage can be rearranged to create different meanings). In this chapter we focus on multi-channel and multi-modal signals, exploring how the capacity for multi-modality could have arisen and whether it is likely to have depended on selection for increased information flow or on selection for signalling-system robustness.
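The tension between information flow and robustness can be illustrated with a small simulation. The sketch below estimates, by Monte Carlo, the mutual information between a fair binary signal and a receiver's majority-vote decision when the same signal is duplicated across several independently noisy channels. The noise level and channel counts are arbitrary example values, and the binary-channel setting is a deliberate simplification of the framework developed in this chapter.

```python
import numpy as np

rng = np.random.default_rng(4)

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def info_per_signal(n_channels, p_flip, n_trials=200_000):
    """Mutual information (bits) between a fair binary signal and the
    receiver's majority-vote decision over n redundant noisy channels."""
    sent = rng.integers(0, 2, n_trials)
    noise = rng.random((n_trials, n_channels)) < p_flip              # independent channel noise
    received = sent[:, None] ^ noise                                 # flipped copies of the signal
    decoded = (received.sum(axis=1) * 2 > n_channels).astype(int)    # majority vote
    p_err = np.mean(decoded != sent)
    return 1.0 - binary_entropy(p_err)   # I(X;Y) for the resulting symmetric binary channel

# Redundant channels trade potential capacity for robustness to noise:
for n in (1, 3, 5):
    print(f"{n} redundant channel(s): {info_per_signal(n, p_flip=0.2):.2f} bits per signal")
```

Duplicating the signal buys robustness at the cost of the extra information the same channels could have carried had each been used independently.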
All animal behaviour can be considered a series of choices – at any given moment an animal must decide whether to mate, eat, sleep, fight or simply rest. Such decisions require estimates of the immediate environment, and despite the diversity of those estimates, they are all carried out by sensory systems and the neural functions contingent on them. This functional diversity is central to the concept of sensory drive (Endler, 1992; Figure 2.1), which notes that animal mating, foraging and other activities are evolutionarily coupled through their shared dependence on sensory systems and local environments. In light of the many demands made of a sensory system, what does it mean to design one well?
It is often useful to consider how an ideal receiver would perform on a given task. Aside from the potentially conflicting demands posed by different aspects of an animal's environment, there are additional reasons to think that such an approach may not be complete. The climb to a global optimum can be a tortuous one, complicated by genetic drift, allelic diversity and phylogenetic history. Analytic models often focus on defining the best possible performance and neglect the existence of alternative local optima, or the ability to arrive at such optima through evolutionary processes. In sexual selection, researchers have suggested that pleiotropy in sensory systems may be one key feature that shapes the direction of evolution (Kirkpatrick & Ryan, 1991).
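One standard way to formalise an ideal receiver is classical signal detection theory: when signal and noise generate Gaussian sensory evidence of equal variance, the ideal receiver compares likelihoods, its sensitivity is summarised by d', and with equal priors its optimal criterion sits midway between the two means. The sketch below computes proportion correct for that ideal criterion and for a shifted, suboptimal one; the distribution parameters are arbitrary example values and only loosely tied to the sensory-drive examples in this chapter.

```python
from scipy.stats import norm

# Gaussian signal-detection sketch (illustrative parameter values only).
mu_noise, mu_signal, sigma = 0.0, 1.5, 1.0
d_prime = (mu_signal - mu_noise) / sigma
# With equal priors and equal variances, the ideal (likelihood-ratio) receiver
# places its criterion midway between the two means.
ideal_criterion = (mu_signal + mu_noise) / 2

def proportion_correct(criterion):
    hit = 1 - norm.cdf(criterion, mu_signal, sigma)            # signal trials called "signal"
    correct_rejection = norm.cdf(criterion, mu_noise, sigma)   # noise trials called "noise"
    return 0.5 * (hit + correct_rejection)                     # equal priors

print(f"d' = {d_prime:.2f}")
print(f"ideal receiver:        {proportion_correct(ideal_criterion):.3f} proportion correct")
print(f"biased receiver (+1):  {proportion_correct(ideal_criterion + 1.0):.3f} proportion correct")
```

The shifted criterion stands in for a receiver stranded at a local optimum or constrained by pleiotropy, illustrating why best-possible performance alone is an incomplete description.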