The term “saccadic chronostasis” refers to the subjective temporal lengthening of a visual stimulus perceived following a saccadic eye movement. In this chapter, we discuss our preferred account of the illusion, which posits that the onset of the postsaccadic stimulus is antedated to a moment just prior to movement initiation, and review supporting evidence that illustrates key characteristics of the illusion, including its dependency on saccade extent. We conclude with a brief discussion of other examples of biased time perception that have been linked to saccadic chronostasis.
Introduction
When people make a saccadic eye movement to fixate a new visual target, they overestimate the duration for which that target is perceived (Yarrow et al. 2001). This illusion, which we have called saccadic chronostasis, has been demonstrated using the following basic procedure. Subjects make a saccade to a target that changes form or color during the saccade. They judge the duration of the new target stimulus relative to subsequently presented reference stimuli, and these judgments are used to determine a point of subjective equality (PSE; the point at which the target and reference stimuli are perceived to have identical durations). This procedure is schematized in Fig. 10.1. The same task performed while fixating forms a control. Reduced PSEs in saccadic conditions compared to control fixation conditions are a gauge of the temporal overestimation of the postsaccadic stimulus.
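To make the PSE measure concrete, the following minimal sketch (hypothetical data and a standard cumulative-Gaussian fit, not the authors' analysis code) shows how a point of subjective equality can be estimated from duration-comparison judgments, assuming the duration of the postsaccadic target is varied across trials and compared against fixed reference durations.

```python
# Minimal sketch: estimating a point of subjective equality (PSE) from
# duration-comparison judgments. Hypothetical data, not the authors' code.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

target_ms = np.array([400, 600, 800, 1000, 1200, 1400, 1600])     # target durations tested
p_longer  = np.array([0.03, 0.10, 0.32, 0.55, 0.81, 0.93, 0.98])  # P("target seemed longer")

def psychometric(x, pse, sigma):
    # Probability of judging the target longer than the reference rises with
    # target duration; the PSE is the duration at which it equals 0.5.
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, target_ms, p_longer, p0=[1000.0, 200.0])
print(f"PSE = {pse:.0f} ms")
# A lower PSE in the saccade condition than in the fixation control means the
# postsaccadic target needs less physical time to match the reference,
# i.e., its perceived duration is lengthened (chronostasis).
```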
Due to neuromuscular delays and the inertial properties of the arm, people must consider where a moving object will be in the future if they want to intercept it. We previously proposed that people automatically aim ahead of moving objects they are trying to intercept because they pursue such objects with their eyes, and objects that are pursued with the eyes are mislocalized in their direction of motion. To test this hypothesis, we examined whether asking subjects to fixate a static point on a moving target's path, rather than allowing them to pursue the target with their eyes, makes them try to intercept the target at a point that the target has already passed. Subjects could not see their hand during the movement and received no feedback about their performance. They did tend to cross the target's path later – with respect to when the target passed that position – when not pursuing the target with their eyes, but the effect of fixation was much smaller than we predicted, even considering that the subjects could not completely refrain from pursuing the moving target as their hand approached it. Moreover, when subjects first started to move, their hands did not aim farther ahead when pursuing the target than when trying to fixate. We conclude that pursuing the target with one's eyes may be important for interception, but not because it gives rise to localization errors that predict the target's displacement during the neuromuscular delay.
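As a back-of-the-envelope illustration of why such delays force interceptive movements to be aimed ahead of a moving target (a generic sketch with illustrative numbers, not the authors' model):

```python
# Generic sketch of the lead required to intercept a moving target despite a
# visuomotor delay. Numbers are illustrative only, not the authors' data.
target_speed_m_per_s = 0.5    # target moving at 0.5 m/s
visuomotor_delay_s   = 0.10   # assumed ~100 ms of neural and muscular delay

# If the hand is guided by where the target was seen one delay ago, the aim
# point must lie this far ahead of that seen position to meet the target:
lead_m = target_speed_m_per_s * visuomotor_delay_s
print(f"required lead: {lead_m * 100:.1f} cm")    # 5.0 cm
```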
How does the visual system provide us with the perception of a continuous and stable world in the face of the spatial–temporal chaos that characterizes its input? In this chapter we summarize several programs of research that all point to a solution we refer to as object updating. We use this phrase because perceptual continuity seems to occur at an object level (as opposed to an image level or a higher conceptual level) and because our research suggests that the visual system makes a sharp distinction between the formation of new object representations and the updating of existing ones. We summarize the research that led us to this view in the areas of masking by object substitution, the flash-lag illusion, response priming, and an illusion of perceptual asynchrony.
Introduction
Biological vision is the marvelous ability of an organism to be informed about its surroundings at a distance and with a high degree of spatial and temporal resolution. This ability allows us to know where things are, what shape and color they are, and equally importantly, when they are there, so that we may interact with them appropriately. Yet, contrary to many people's implicit understanding of how biological vision is accomplished, it is not a process by which light, reflected from surfaces in the three-dimensional world, is recorded faithfully by the brain in order to reconstruct the nature of the surfaces that gave rise to the recorded pattern of light.
Quasi-periodic or “discrete” brain processes are, in theory, susceptible to a phenomenon known in engineering as “temporal aliasing.” When the rate of occurrence of events in the world is fast enough, the perceived direction of these events may be reversed. We have recently demonstrated that, because of a quasi-periodic attentional capture of motion information, continuously moving objects are sometimes perceived to move in the wrong direction (the “continuous Wagon Wheel Illusion”). Using a simple Fourier energy model of motion perception, we established that this type of attentional capture occurs at a rate of about 13 Hz. We verified with EEG recordings that the electrophysiological correlates of this illusion are restricted to a specific frequency band around 13 Hz, over right parietal regions – known for their involvement in directing attention to temporal events. We summarize these results and discuss their implications for visual attention and awareness.
Introduction
With respect to the temporal organization of visual perception – the topic of this book – one important issue that has puzzled scientists for more than a century (James 1890; Pitts & McCulloch 1947; Stroud 1956; White 1963; Shallice 1964; Harter 1967; Varela et al. 1981; Purves et al. 1996; Crick & Koch 2003; VanRullen & Koch 2003) is whether our experience relies on a continuous sampling or a discrete sequence of periodic “snapshots” or “perceptual frames” of the external world. Although it may seem that such radically different mechanisms should be easy to distinguish using elementary introspection, the realism of the cinema serves to remind us that these two alternatives can in fact lead to equivalent perceptual outcomes.
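A toy simulation makes the aliasing argument concrete (a sketch under simple assumptions, not the Fourier energy model used in the studies summarized above): if motion is registered in discrete snapshots at roughly 13 Hz, a rotation faster than half that rate appears to step backward from one snapshot to the next.

```python
# Toy illustration of temporal aliasing: a spot rotating clockwise is sampled
# in discrete snapshots; when the rotation rate exceeds half the snapshot
# rate, the smallest step between snapshots points the wrong way.
# Assumed rates are illustrative only.
import numpy as np

snapshot_rate_hz = 13.0    # assumed perceptual sampling rate
rotation_hz = 10.0         # physical (clockwise) rotation rate of the spot

dt = 1.0 / snapshot_rate_hz
angles = 2 * np.pi * rotation_hz * dt * np.arange(8)      # angle at each snapshot
steps = np.angle(np.exp(1j * np.diff(angles)))            # wrapped step per frame
print(f"apparent step per snapshot: {np.degrees(steps[0]):.1f} deg")
# The physical step is +277 deg per snapshot, but the wrapped step is about
# -83 deg, so the sampled sequence is most simply seen as rotating the wrong
# way (the continuous Wagon Wheel Illusion).
```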
Space and time are modes by which we think and not the conditions in which we live.
–Albert Einstein
Since the beginning of sentience, the fabric of reality has been the subject of intense curiosity, and the twin concepts of space and time have figured prominently in the thinking of individuals of various intellectual persuasions. Understanding in science has advanced significantly through the postulates that underpin coherence and precision in the representation and measurement of space and time. These advances have formed the bedrock of the development of many disciplines. However, until the latter half of the nineteenth century, many properties of space and time were assumed and therefore remained unquestioned. For example, the implicit acceptance of concepts such as absolute space (a coordinate system at rest, relative to which all inertial frames move at constant velocity) and absolute time (a universal time independent of any “clock” or mechanism) made most issues related to space and time impervious to empirical investigation and theoretical debate. This state of affairs was robustly challenged by scientists such as Ernst Mach, who, among others, imagined observers equipped with measuring devices (rulers and clocks) arriving at concepts at odds with notions of absolute space and absolute time.
Many well-known scientists whose work spanned the latter half of the nineteenth century (Mach included) crossed the disciplinary boundaries of physics, philosophy, and vision science. In Mach's thinking on space and time, the observer's sense perception played a critical role.
The locations of stationary objects appear invariant, although saccadic eye movements shift the images of physically stationary objects on the retina. Two features of this perceptual stability related to saccades are that postsaccadic locations of objects appear invariant relative to their appearance in the presaccadic view, and perception of postsaccadic stimulation is free from interference by remnants of presaccadic stimulation. To generate stability, quantitatively accurate cancellation between retinal input (RI) and extraretinal eye position information (EEPI) must occur, and persisting influences from the presaccadic view must be eliminated. We describe experiments with briefly flashed visual stimuli that have measured (1) the time course of perisaccadic spatial localization, (2) the interfering effects of persisting stimulation prior to the postsaccadic period, (3) the achievement of perceptual stability by removing visual persistence early, and (4) the influence of metacontrast utilizing the normal perisaccadic spatiotemporal distribution of retinal input to prevent interference from visual persistence.
For the steady eye, a generalized cancellation mechanism is analyzed through studying mislocalizations in perceptual orientation and visually guided manual behavior produced by (1) modifying EEPI in observers with experimental partial paralysis (curare) of the extraocular muscles and/or (2) modifying RI by varying visual field orientation (i.e., its pitch and/or roll). The influences of visual pitch and roll derive from the retinal orientations of individual straight lines and their combinations, with the identical lines influencing perceived verticality and elevation. […]
Perceived duration of interstimulus intervals is influenced by the spatial configuration of stimuli. When participants judge the two intervals defined by a sequence of three stimuli presented at different spatial separations, a greater distance between two stimuli makes the corresponding time interval appear longer (kappa effect, Experiment 1). By employing a choice-reaction time task, we demonstrate that this effect is at least partly due to a facilitating influence of the preceding stimulus on the timing of the subsequent one, whereas the timing of the first stimulus presented is not influenced by the subsequent one. Moreover, reaction times to the subsequent stimulus increased with spatial distance between successive stimuli, and this held for a three-stimulus condition (Experiment 2) as well as for a two-stimulus condition (Experiment 3). Thus, our results provide evidence for spatial priming in the temporal kappa effect.
Introduction
The perception of space and the perception of time are often considered to be independent. However, the interdependence of the two dimensions has been known for a long time and is most apparent in the perception of moving stimuli. For example, in 1862 Zöllner discovered a subjective spatial contraction of figures moved behind a vertical slit (anorthoscopic distorted pictures; see also Vierordt 1868; Parks 1965). Through the motion, the slit uncovered only small sections of the figure at any time, and apparently the perceptual integration of the temporally separated sections contracted the figure spatially. This phenomenon (and related phenomena, e.g., the Ansbacher effect, Ansbacher 1944, or the tandem effect, Müsseler & Neumann 1992) demonstrates that perceived space depends on the temporal characteristics of stimulus presentation, here as a consequence of stimulus motion.
In the “chopstick illusion” (Anstis 1990, 2003), a vertical and a horizontal line overlapped to form a cross and followed clockwise circular orbits in counterphase, with one line being at 6 o'clock when the other was at 12 o'clock. The intersection of the lines moved counterclockwise, but it was wrongly perceived as rotating clockwise. This chopstick illusion reveals how moving objects are parsed, based upon the intrinsic and extrinsic terminators of lines viewed through apertures. We conclude that intersections were not parsed as objects; instead, the motion of the terminators (tips) propagated along the lines and was blindly assigned to the intersection. In the similar “sliding rings illusion,” we found that observers could use their eyes to track intersections only when these appeared rigid and not when they appeared to slide. Thus, smooth pursuit eye movements are under top-down control and are compelled to rely upon the perceptual interpretation of objects.
In the “flash-lag” effect, a static object briefly flashed next to a moving object appears to lag behind the moving object (Nijhawan 2002). We superimposed a flashed spot on a chopstick intersection that appeared to be moving clockwise along a circular path but was actually moving counterclockwise. We found that the flash appeared displaced clockwise. This was appropriate to the physical, not the subjective, direction of rotation, indicating that the flash-lag and the chopstick illusions coexist without interacting. Similarly, the flash-lag effect was unaffected by reversed phi. […]
Anticipation is a hallmark of skilled movements. For example, when removing plates from a loaded tray, the upward force generated by the supporting hand is reduced in anticipation of the reduced load. An adjustment of the postural force occurs as a result of the predicted consequences of the self-initiated action. Although the effect of anticipatory processes is easily discerned in the actions themselves, it is unclear whether these processes also affect our perceptual experience. In this chapter we focus on the relationship between action and the perceptual experience. We begin by reviewing how actions provide reliable predictions of forthcoming sensory information. Following this, we discuss how the anticipation of the time of external events is an important component of action-linked expectations. Finally, we report two experiments that examine how temporal predictions are integrated with the incoming sensory information, evaluating whether this integration occurs in a statistically optimal manner. This predictive process provides the important advantage of compensating for lags in conduction time between peripheral input and the central integration of this information, thus overcoming the physical limitations of sensory channels.
Racing against sensory delays
An important problem for the brain to solve is how to compensate for the temporal gap between when a stimulus is registered by a sensory detector and when it is recognized, either consciously or subconsciously, in the cortex. In humans, such delays are on the order of hundreds of milliseconds (for review, see Welch & Warren 1986).
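One standard way to formalize the integration of an action-based temporal prediction with delayed, noisy sensory evidence is reliability-weighted (inverse-variance) averaging; the sketch below illustrates that textbook scheme with assumed numbers, not the specific model tested in the experiments described above.

```python
# Sketch of reliability-weighted (inverse-variance) integration of a predicted
# event time with a delayed sensory estimate. Textbook cue-combination rule;
# the times and variances are assumed values for illustration.
def integrate(t_pred, var_pred, t_sens, var_sens):
    """Return the statistically optimal combined estimate and its variance."""
    w_pred = (1.0 / var_pred) / (1.0 / var_pred + 1.0 / var_sens)
    t_hat = w_pred * t_pred + (1.0 - w_pred) * t_sens
    var_hat = 1.0 / (1.0 / var_pred + 1.0 / var_sens)
    return t_hat, var_hat

# Example: the action predicts an event at 500 ms (sd 20 ms); the delayed
# sensory signal, corrected for its latency, suggests 540 ms (sd 40 ms).
print(integrate(500.0, 20.0**2, 540.0, 40.0**2))   # -> (508.0, 320.0)
```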
“Real-time sensorimotor control requires the sampling and manipulation not only of parameters representing space but also of those representing time. In particular, when the system itself has inherent processing delays, it invites a situation in which sampled parameters from a peripheral sensor may no longer be valid at the time they are to be used, due to the change in state that took place during the processing delay” (Dominey et al. 1997). In this chapter, we focus on the situation in which a visual stimulus is flashed near the time of a saccade, and the subject's task is to orient the eyes toward the site where the stimulus had been. To perform this task in complete darkness, the subject's brain has to rely on only two signals: a retinal error signal and an internal eye position signal (iEPS). This is one of the most interesting situations in which the brain has to compute something in the face of specific physical odds (e.g., very long latencies), and we have some hints on how it proceeds. We analyze the time course of the iEPS – which appears quite distorted – using electrical stimulation of brain structures, instead of natural visual stimuli, to provide the goal to be localized. Different hypotheses are then discussed regarding the possible source and possible neural correlate of the iEPS.
Although vision is usually thought of as a continuous process – continuous in space and time – it is periodically interrupted by rapid eye movements called saccades.
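To see how a distorted iEPS time course would produce perisaccadic localization errors, consider the following sketch; the damped internal signal and all numbers are assumptions for illustration, not the measured time course discussed in the chapter.

```python
# Sketch: a flash is localized by adding its retinal error to the eye position
# believed to hold at flash time. If the internal eye position signal (iEPS)
# is distorted (here: starts early and evolves slowly), flashes near saccade
# onset are mislocalized. All numbers are illustrative assumptions.
import numpy as np

def eye_position(t_ms, amp=10.0, onset=0.0, dur=50.0):
    # Idealized saccade: the eye rotates `amp` deg between onset and onset+dur.
    return amp * float(np.clip((t_ms - onset) / dur, 0.0, 1.0))

def ieps(t_ms, amp=10.0, onset=-30.0, dur=150.0):
    # Assumed distortion: the internal signal leads the eye and changes slowly.
    return amp * float(np.clip((t_ms - onset) / dur, 0.0, 1.0))

flash_t = 0.0          # flash presented at saccade onset
retinal_error = 5.0    # flash imaged 5 deg from the fovea

believed_flash_position = ieps(flash_t) + retinal_error    # uses distorted signal
true_flash_position = eye_position(flash_t) + retinal_error
print("localization error:", believed_flash_position - true_flash_position, "deg")  # 2.0
```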
When rapidly successive objects or object replicas are presented as sensory streams, a stimulus within a stream is perceptually facilitated relative to an otherwise identical stimulus not within the stream. Experiments on perceptual latency priming and flash-lag have convincingly shown this. Unfortunately, no consensus exists on the mechanism(s) responsible for in-stream facilitation. Here, I discuss several alternative explanations: perceptual extrapolation of change in the specific properties of continuous stimulation; time saved in target processing because the early microgenetic (formation) stages of the target are completed on pretarget in-stream items; control of focused selective attention by the onsets of stimulus input; and preparation of the nonspecific perceptual retouch by the preceding nontarget in-stream input for the succeeding in-stream target. Revisions are outlined to overcome the explanatory difficulties that the retouch theory has encountered in the face of new phenomena of perceptual dissociation.
Introduction
Objects that do not occur in isolation are processed differently from objects that appear as separate entities. If we compare the visual latency of an object presented alone with the latency of its replica presented after another object (nearby in space and time), we see that the object primed by the preceding input reaches awareness faster (Neumann 1982; Bachmann 1989; Scharlau & Neumann 2003a, 2003b; Scharlau 2004). In a typical experiment, a visual prime stimulus is presented, followed by another stimulus that acts as a backward mask to the prime.
There is a delay before sensory information arising from a given event reaches the central nervous system. This delay may be different for information carried by different senses. It will also vary depending on how far the event is from the observer and on stimulus properties such as intensity. However, it seems that at least some of these processing time differences can be compensated for by a mechanism that resynchronizes asynchronous signals and enables us to perceive simultaneity correctly. This chapter explores how effectively simultaneity constancy can be achieved, both intramodally within the visual and tactile systems and cross-modally between combinations of auditory, visual, and tactile stimuli. We propose and provide support for a three-stage model of simultaneity constancy in which (1) signals within temporal and spatial windows are identified as corresponding to a single event, (2) a crude resynchronization is applied based on simple rules corresponding to the average processing speed differences between the individual sensory systems, and (3) fine-tuning adjustments are applied based on previous experience with particular combinations of stimuli.
Introduction
Although time is essential for the perception of the outside world, there is no energy that carries duration information, and consequently there can be no sensory system for time. Time needs to be constructed by the brain, and because this process itself takes time, it follows that the perception of when an event occurs must necessarily lag behind the occurrence of the event itself.
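The three stages enumerated in the abstract above can be caricatured in a few lines of code; the pairing window, modality latencies, and learned offsets below are assumed values for illustration only (and only the temporal window is modeled), not the authors' parameters.

```python
# Schematic sketch of a three-stage simultaneity-constancy scheme: (1) pair
# signals arriving within a window as one event, (2) subtract each modality's
# average processing latency, (3) apply a learned fine-tuning offset.
# All numbers are assumed values for illustration.
AVG_LATENCY_MS = {"auditory": 30.0, "visual": 70.0, "tactile": 50.0}

def perceived_asynchrony(events, window_ms=200.0, learned_offsets=None):
    """events: list of (modality, arrival_time_ms). Returns residual asynchrony."""
    times = [t for _, t in events]
    if max(times) - min(times) > window_ms:
        return None                                          # stage 1: not one event
    corrected = [t - AVG_LATENCY_MS[m] for m, t in events]   # stage 2: crude rule
    if learned_offsets is not None:
        corrected = [t + d for t, d in zip(corrected, learned_offsets)]  # stage 3
    return max(corrected) - min(corrected)

# A beep and a flash from one event arrive 40 ms apart at the senses, yet the
# crude latency correction alone already brings the asynchrony to zero here:
print(perceived_asynchrony([("visual", 140.0), ("auditory", 100.0)]))   # -> 0.0
```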
Information about eye position comes from two sources: efference copy, a record of the innervation to the extraocular muscles that move the eye, and proprioceptive signals from sensors in those muscles. Together they constitute extraretinal signals and indicate the position of the eye. Pressing on the eyelid of a viewing eye activates the extraocular muscles to maintain a steady gaze position without rotation of the eye. This procedure decouples efference copy from gaze position, making it possible to measure the gain of the efference copy signal. The gain is 0.61; the gain of oculomotor proprioception, measured by a similar eye-press technique, is 0.26. The two signals together sum to only 0.87, leading to the conclusion that humans underestimate the deviations of their own eyes and that extraretinal signals cannot be the mechanisms underlying space constancy (the perception that the world remains stable despite eye movements). The underregistration of eye deviation accounts quantitatively for a previously unexplained illusion of visual direction. Extraretinal signals are used in static conditions, especially for controlling motor behavior. The role of extraretinal signals during a saccade, if any, is not to compensate for the previous retinal position but to destroy it, so that perception can begin with a clean slate during the next fixation interval.
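The underregistration arithmetic in this abstract can be written out explicitly (taking the reported gains at face value and combining them by simple summation, as the abstract does):

```python
# The gain arithmetic reported above, written out explicitly.
efference_copy_gain = 0.61
proprioception_gain = 0.26
combined_gain = efference_copy_gain + proprioception_gain    # 0.87

actual_eye_deviation_deg = 10.0   # illustrative deviation
registered_deviation_deg = combined_gain * actual_eye_deviation_deg
print(registered_deviation_deg)   # 8.7 deg: a ~13% underestimate of the true
# deviation, too inaccurate for extraretinal signals alone to explain why the
# world appears stable across eye movements.
```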