In understanding visual processing, it is important to establish not only the local response properties for elements in the visual field, but also the scope of neural interactions when two or more elements are present at different locations in the field. Since the original report by Polat and Sagi (1993), the presence of interactions in the two-dimensional (2D) field has become well established by threshold measures (Polat and Tyler, 1999; Chen and Tyler, 1999, 2001, 2008; Levi et al., 2002). A large array of other studies has also looked at such interactions with suprathreshold paradigms (e.g., Field et al., 1993; Hess et al., 2003). The basic story from both kinds of studies is that there are facilitatory effects between oriented elements that are collinear with an oriented test target and inhibitory effects elsewhere in the 2D spatial domain of interaction (although the detectability of a contrast increment on a Gabor pedestal also reveals strong collinear masking effects).
The present work extends this question to the third dimension of visual space as specified by binocular disparity, asking both what interactions are present through the disparity dimension and how these interactions vary with the spatial location of the disparate targets. Answering these questions is basic to the understanding of the visual processing of the 3D environment in which we find ourselves.
It has been evolved alike in many unrelated groups of animals by the hunter and the hunted; in the sea and on land. It tones the canvas on which are painted the Leopard's spots, the Tiger's stripes, and the patterns of smaller Carnivora such as Serval and Ocelot, Civet, and Genet, Jackal and Hyaena. It is the dress almost universally worn by rodents, including the Vizcacha, Jerboas, Gerbils, Cavies, Agouties, Hares, and many others. It is the essential uniform adopted by Conies, Asses, Antelopes, Deer, and other groups of ungulates. It is repeated extensively among the marsupials, as seen in the coloration of the Tasmanian wolf, Opossums, Wallabies and others. It forms the background to reveal the beautiful subtle picture patterns worn by Wheatears, Warblers, Pipits, Woodcock, Bustards, and innumerable other birds. It provides a basic livery for the great majority of snakes, lizards, and amphibians. Among insects it reaches a fine state of perfection in different caterpillars and grasshoppers.
Hugh Cott (1940)
In 1896 American artist and naturalist Abbott Handerson Thayer published an article in The Auk entitled ‘The law which underlies protective coloration’. In this article he observed that ‘animals are painted by nature darkest on those parts which tend to be most lighted by the sky's light, and vice versa’. As an example, Thayer described the plumage of the ruffed grouse, whose feathers are dark brown on the back and blend gradually into white on the underside. Such a gradation in shading, Thayer hypothesised, made three-dimensional bodies appear less round and less solid by balancing and neutralising the effects of illumination by the sun. Thayer called this type of patterning obliterative shading, which today we term countershading.
Visual camouflage is used by animals as well as humans in order to conceal or obscure their visual signature. In the field of computer vision, work related to camouflage can be roughly divided into two areas: camouflage assessment and design (e.g. Copeland & Trivedi 1997; Gretzmacher et al. 1998), and camouflage breaking. Despite this ongoing research, relatively little has been said in the computer vision literature on visual camouflage breaking (Marouani et al. 1995; Guilan & Shunqing 1997; McKee et al. 1997; Ternovskiy & Jannson 1997; Huimin et al. 1999).
Most recent tests of the theory of disruptive coloration have focussed on the disguise of the body's outline (e.g. Merilaita 1998; Cuthill et al. 2005; Schaefer & Stobbe 2006; Stevens et al. 2006b; Fraser et al. 2007). When placed at the body's edge, the high-contrast colour boundaries that are characteristic of disruptive patterning create false contours of higher stimulus intensity than those of the real outline (Stevens & Cuthill 2006; Stevens et al. 2006a). In this way, the probability of object recognition through boundary shape is diminished. However, the pioneers of the theory of disruptive coloration, Abbott Thayer (1909) and Hugh Cott (1940), also emphasised the importance of concealing other characteristic, and thus potentially revealing, body parts, such as eyes and limbs. Cott (1940) devoted a whole chapter of his influential textbook to this topic, arguing that the successful disguise of such features could be achieved through what he termed ‘coincident disruptive coloration’ (Figure 3.1).
Seeing in 3D is a fundamental problem for any organism or device that has to operate in the real world. Answering questions such as “how far away is that?” or “can we fit through that opening?” requires perceiving and making judgments about the size of objects in three dimensions. So how do we see in three dimensions? Given a sufficiently accurate model of the world and its illumination, complex but accurate models exist for generating the pattern of illumination that will strike the retina or cameras of an active agent (see Foley et al., 1995). The inverse problem, how to build a three-dimensional representation from such two-dimensional patterns of light impinging on our retinas or the cameras of a robot, is considerably more complex.
In fact, the problem of perceiving 3D shape and layout is a classic example of an ill-posed and underconstrained inverse problem. It is an underconstrained problem because a unique solution is not obtainable from the visual input. Even when two views are present (with the slightly differing viewpoints of each eye), the images do not necessarily contain all the information required to reconstruct the three-dimensional structure of a viewed scene. It is an ill-posed problem because small changes in the input can lead to significant changes in the output: that is, reconstruction is very vulnerable to noise in the input signal. Constructing the three-dimensional structure of the viewed scene is therefore extremely difficult, and usually impossible to solve uniquely.
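The underconstrained nature of the problem can be made concrete with a toy pinhole-projection model (a hypothetical sketch, not a model from this chapter): scaling a scene's distance and size together leaves its image unchanged, so the image alone cannot determine the scene.

```python
import numpy as np

def project(point_3d, focal_length=1.0):
    """Pinhole projection of a 3D point (x, y, z) onto the image plane."""
    x, y, z = point_3d
    return np.array([focal_length * x / z, focal_length * y / z])

# A point, and the same point pushed to twice the distance with twice
# the lateral offset, land on identical image coordinates: the 2D
# image alone cannot tell the two scenes apart.
near = np.array([0.5, 0.3, 2.0])
far = 2.0 * near
print(project(near), project(far))  # both give [0.25 0.15]
```

The same ambiguity holds for any uniform scaling of the scene, which is one reason additional constraints (binocular disparity, prior knowledge of object sizes) are needed.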
When we enter the marine environment as divers, snorkellers or even as television viewers, two things are immediately notable. We are supported by the water (or possibly armchair) ‘flying’ through a three-dimensional world, and we can't see very far. The latter is an uncomfortable experience as we are afraid of what might be just beyond our visual range, brandishing lots of teeth. These two physical features also set real limits for the animals that have evolved in this habitat and have a significant influence on their camouflage strategies. Many marine inhabitants are also wary of lurking teeth and know, through evolution, that attack may come from any direction.
When an object in the world moves relative to the eye, the image of the object moves across the retina. Motion that occurs on the retina is referred to as retinal motion. When objects move within our visual field we tend to move our eyes, head, and body to track them in order to keep them sharply focused on the fovea, the region of the retina with the highest spatial resolution. When the eyes move to track the object, there is no retinal motion if the tracking is perfect (Figure 10.1), yet we still perceive object motion. Retinal motion is therefore not the only signal required for motion perception. In this chapter, we discuss the problem of how retinal motion and eye movements are integrated for motion perception. After introducing the problem of representing position and motion in three-dimensional space, we will concentrate specifically on the topic of how retinal and eye-movement signals contribute to the perception of motion in depth. To conclude, we discuss what we have learned about how the combination of eye movements and retinal motion differs between the perception of frontoparallel motion and the perception of motion in depth.
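To a first approximation, the integration the chapter describes can be stated as a simple bookkeeping identity (a deliberately simplified sketch; the full geometric account is more involved): headcentric object velocity is the retinal image velocity plus the eye's rotational velocity.

```python
def headcentric_velocity(retinal_velocity, eye_velocity):
    """First-order approximation: headcentric object velocity is the
    sum of retinal image velocity and eye rotational velocity (deg/s)."""
    return retinal_velocity + eye_velocity

# Perfect pursuit of a target moving at 5 deg/s: the eye rotates at
# 5 deg/s, retinal motion is zero, yet headcentric motion is still
# recovered as 5 deg/s.
print(headcentric_velocity(0.0, 5.0))  # -> 5.0
```

The identity makes explicit why an eye-movement signal is required: during perfect pursuit the retinal term is zero, and perceived motion must come entirely from the eye-velocity term.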
A headcentric framework for motion perception
Position (and motion) in the physical three-dimensional world can be described in a number of different ways. For example, it can be described in Cartesian coordinates (x, y, z) or in terms of angles and distances with respect to a certain origin.
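The two descriptions are interchangeable; as an illustration, a minimal conversion from Cartesian coordinates to the angular description (azimuth, elevation, distance) relative to an observer at the origin might look like the sketch below. The axis convention, with the observer looking along +z, is an assumption made for the example.

```python
import numpy as np

def cartesian_to_angular(x, y, z):
    """Convert Cartesian coordinates (observer at the origin, looking
    along +z) to azimuth, elevation, and distance."""
    distance = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(x, z)            # horizontal angle, radians
    elevation = np.arcsin(y / distance)   # vertical angle, radians
    return azimuth, elevation, distance

# A point 1 m to the right and 1 m straight ahead lies at roughly
# 45 degrees azimuth, zero elevation, at a distance of sqrt(2) m.
az, el, d = cartesian_to_angular(1.0, 0.0, 1.0)
```

Angular coordinates of this kind are a natural fit for headcentric descriptions, since the retina itself samples the world in terms of visual direction.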
Considering its widespread occurrence and importance in the animal kingdom, background matching is clearly one of the most under-studied means of concealment. Background matching means that, to decrease the risk of being detected by its predators or prey, an animal possesses body colours or patterns that resemble those in the surrounding environment (Figure 2.1). The principle has long been acknowledged (e.g. Darwin 1794), and because of the apparent obviousness of its function, it was used as an example to promote the idea of adaptation in many early evolutionary texts. For instance, Wallace (1889) presented numerous examples of what we today call background matching, and described various cases in which animals ‘blended into’ their backgrounds or had colours ‘assimilated’ to, or ‘harmonising’ with, them.
The visual sense is very useful to many animals. It allows the detection and identification of distant objects. The properties of visual systems vary considerably between different animals (e.g. Walls 1942; Autrum et al. 1973; Weckstrom & Laughlin 1995; Bowmaker & Hunt 2006), but the main issues concern the directional sensitivity (acuity) of the system; the light levels under which it operates; the field of view, including any areas of binocular overlap; the extent to which specific features such as spectral or motion information are extracted from the visual environment; and the spatial and temporal characteristics of sampling the environment.
We perceive the world as three-dimensional. The inputs to our visual system, however, are only a pair of two-dimensional projections on the two retinal surfaces. As emphasized by Marr and Poggio (1976), it is generally impossible to uniquely determine the three-dimensional world from its two-dimensional retinal projections. How, then, do we usually perceive a well-defined three-dimensional environment? It has long been recognized that, since the world we live in is not random, the visual system has evolved and developed to take advantage of the world's statistical regularities, which are reflected in the retinal images. Some of these image regularities, termed depth cues, are interpreted by the visual system as depth. Numerous depth cues have been discovered. Many of them, such as perspective, shading, texture, motion, and occlusion, are present in the retina of a single eye, and are thus called monocular depth cues. Other cues are called binocular, as they result from comparing the two retinal projections. In the following, we will review our physiologically based models for three binocular depth cues: horizontal disparity (Qian, 1994; Chen and Qian, 2004), vertical disparity (Matthews et al., 2003), and interocular time delay (Qian and Andersen, 1994; Qian and Freeman, 2009). We have also constructed a model for depth perception from monocularly occluded regions (Assee and Qian, 2007), another binocular depth cue, but have omitted it here owing to space limitations.
Binocular vision provides important information about depth to help us navigate in a three-dimensional environment and allow us to identify and manipulate 3D objects. The relative depth of any feature with respect to the fixation point can be determined by triangulating the horizontal shift, or disparity, between the images of that feature projected onto the left and right eyes. The computation is difficult because, in any given visual scene, there are many similar features, which create ambiguity in the matching of corresponding features registered by the two eyes. This is called the stereo correspondence problem. An extreme example of such ambiguity is demonstrated by Julesz's (1964) random-dot stereogram (RDS). In an RDS (Figure 7.1a), there are no distinct monocular patterns. Each dot in the left-eye image can be matched to several dots in the right-eye image. Yet when the images are fused between the two eyes, we readily perceive the hidden 3D structure.
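In the standard pinhole-stereo idealisation, the triangulation described above reduces to Z = fB/d (focal length times baseline, divided by disparity). The sketch below uses hypothetical values (a 6.5 cm interocular baseline, a focal length of 1000 pixels) and parallel viewing geometry; real eyes verge, so this is only the textbook form of the computation.

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Pinhole-stereo triangulation: Z = f * B / d. Assumes parallel
    viewing geometry, so this is the textbook idealisation rather
    than a model of verging eyes."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 6.5 cm baseline, 1000 px focal length,
# a feature with 10 px of horizontal disparity.
z = depth_from_disparity(10.0, 0.065, 1000.0)
print(z)  # approximately 6.5 metres
```

Note the inverse relation: disparity shrinks rapidly with distance, which is why stereo precision is best for near objects.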
In this chapter, we will review neurophysiological data that suggest how the brain might solve this stereo correspondence problem. Early studies took a mostly bottom-up approach. An extensive amount of detailed neurophysiological work has resulted in the disparity energy model (Ohzawa et al., 1990; Prince et al., 2002). Since the disparity energy model is insufficient for solving the stereo correspondence problem on its own, recent neurophysiological studies have taken a more top-down approach by testing hypotheses generated by computational models that can improve on the disparity energy model (Menz and Freeman, 2003; Samonds et al., 2009a; Tanabe and Cumming, 2009).
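The disparity energy model itself is compact enough to sketch. In the position-shift variant, a binocular simple cell sums linear responses from left- and right-eye Gabor receptive fields (the right-eye field displaced by the cell's preferred disparity), and a complex cell sums the squared outputs of a quadrature pair. The 1-D toy stimulus and parameter values below are illustrative assumptions, not taken from the studies cited.

```python
import numpy as np

def gabor(x, sigma=1.0, freq=0.5, phase=0.0):
    """1-D Gabor receptive-field profile."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def disparity_energy(left_img, right_img, x, pref_disparity):
    """Disparity-energy response: a quadrature pair of binocular
    simple cells (right-eye fields position-shifted by the preferred
    disparity), squared and summed."""
    energy = 0.0
    for phase in (0.0, np.pi / 2):                      # quadrature pair
        simple = (np.dot(left_img, gabor(x, phase=phase)) +
                  np.dot(right_img, gabor(x - pref_disparity, phase=phase)))
        energy += simple ** 2
    return energy

# Toy stimulus: a pattern in the left eye, and the same pattern
# shifted by 1.0 in the right eye (a stimulus disparity of 1.0).
x = np.linspace(-8, 8, 321)
left = gabor(x, sigma=2.0)
right = gabor(x - 1.0, sigma=2.0)
responses = {d: disparity_energy(left, right, x, d)
             for d in (-2.0, -1.0, 0.0, 1.0, 2.0)}
best = max(responses, key=responses.get)
print(best)  # -> 1.0: the cell tuned to the stimulus disparity wins
```

Even in this toy form, the model illustrates why energy responses alone leave the correspondence problem unsolved: cells tuned to wrong disparities still respond (false matches), which is what the top-down and recurrent refinements discussed above aim to suppress.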
Studies of the evolution of animal signals and sensory behaviour have recently shifted from considering 'extrinsic' (environmental) determinants to 'intrinsic' (physiological) ones. The drive behind this change has been the increasing availability of neural network models. With contributions from experts in the field, this book provides a comprehensive survey of artificial neural networks. The book opens with two broad, introductory-level reviews on the themes of the book: neural networks as tools to explore the nature of perceptual mechanisms, and neural networks as models of perception in ecology and evolutionary biology. Later chapters expand on these themes and address important methodological issues that arise when applying artificial neural networks to the study of perception. The final chapter provides perspective by introducing a neural processing system in a real animal. The book provides the foundations for implementing artificial neural networks, for those new to the field, along with identifying potential research areas for specialists.
Two jellynose fish captured in southern Brazilian waters were identified and compared with previous descriptions of ateleopodid species to improve our understanding of the diversity and distribution of these fish off Brazil. The body of each individual is elongate, tapering, and posteriorly compressed, with a large, robust head bearing relatively small eyes (~7% of head length), a short trunk and a long tail. Radiographs of the specimens show two undeveloped pelvic rays buried in the integument and five caudal rays originating from the last vertebra, characteristics exclusive to Ijimaia antillarum.
Syngnathus rostellatus is a nearshore pipefish species whose distributional range extends along the European Atlantic coast between Bergen (NO) and the Bay of Biscay (ES). Several recent articles suggest that this species has experienced a major range expansion of more than 4000 km into the eastern Mediterranean, but a critical review of these studies indicates that the majority of these reports are based on specimen misidentifications. Considering a reliable report of S. rostellatus from the Mediterranean coast near Gibraltar, it appears that the current distribution of this species is restricted to the north-eastern Atlantic Ocean and the southern Mediterranean coast of the Iberian Peninsula.
The rare ophichthid eel Asarcenchelys longimanus is reported for the first time from Bahia State, north-eastern Brazil. Until now, only two specimens of A. longimanus, the holotype and a paratype, were known. The new record extends its distribution about 2400 km southwards along the Brazilian coastline and establishes a new maximum size for the species. Comparisons of morphometric and meristic data between our specimens and those used in the original description are provided.
Here we report on intraspecific cleaning behaviour between two adult bluestreak cleaner wrasse Labroides dimidiatus of similar size on coral reefs surrounding Lizard Island, Great Barrier Reef, Australia. During a SCUBA dive, we observed these individuals posing and the resulting cleaning interactions. While aggression in this species is common between adults of similar size and social class, our observation suggests that these individuals may also cooperate and partially rely on conspecific individuals for cleaning.
A female great white shark, Carcharodon carcharias, was caught by a tuna hand-liner in the Bosphorus Strait in late March 1968. Its total length was estimated to be 551 cm and its precaudal length 433 cm. Carcharodon carcharias now appears to have disappeared from the Sea of Marmara owing to the decline of tuna populations in Marmaric waters.
The pan-tropical spotted dolphin Stenella attenuata is typically found in deep tropical and warm temperate waters and has previously been confirmed from the waters of most of Pakistan's neighbouring countries. Until now, however, there has been no record of this species from Pakistan. This paper reports the first confirmed occurrence of the species in Pakistani waters: a mass stranding event of 200–250 animals on 6 March 2009. The animals stranded alive, and all except two were rescued; these two possibly died as a result of being stranded for a long time on the beach in the hot, arid conditions that generally prevail along the coastline of Pakistan. All the animals appeared healthy, and the exact cause of the mass stranding could not be determined. As the first confirmed record of this species in Pakistan, this information is an important addition and consideration for the Pakistan Biodiversity Action Plan.
In August 2009 six specimens of the ovulid gastropod Xandarovula patula (Pennant, 1777) (formerly known as Simnia patula Pennant, 1777), were found in dredge samples from a locality west of Smögen in western Sweden (58°22′N 11°05′E). In June and November 2010 a total of three specimens of the same species were found in dredge samples from near Svelgen Bridge, Øygarden, Hordaland, western Norway (60°27′N 04°57′E). Several small colonies of the presumed prey species, Alcyonium digitatum Linnaeus, 1758 and Tubularia indivisa Linnaeus, 1758, were found in the same dredge hauls. Xandarovula patula has been recorded from the Atlantic coast of southern Spain to the western end of the English Channel, with scattered records from the west coasts of Ireland and Britain, as far north as the Orkneys. More recently it has been reported from most Irish coasts, several parts of the Scottish coast and also from some places in the North Sea. Until now there have been no confirmed records from Scandinavian waters. The specimens recorded here may indicate recent immigration of a southern species due to warmer water temperatures.