Low-density-lipoprotein receptors (LDLRs) are an evolutionarily ancient surface protein family with the ability to transduce a diversity of extracellular signals across the cell membrane in the adult central nervous system (CNS). Their intimate roles in modulating synaptic plasticity, and their necessity for hippocampal-dependent learning and memory, have only recently come to light. Two known LDLR ligands, apolipoprotein E (apoE) and reelin, have been the most widely investigated in this regard. Most of our understanding of synaptic plasticity comes from investigation of both pre- and postsynaptic alterations. It is therefore interesting to note that neurons and glia that do not contribute to the synaptic junction in question can secrete signaling molecules that affect synaptic plasticity. Notably, reelin and apoE have been shown to modulate hippocampal long-term potentiation in general, and to affect NMDA receptor and AMPA receptor regulation specifically. Furthermore, these receptors and signaling molecules have significant roles in neurodegenerative diseases such as Alzheimer's disease. The recent production of recombinant proteins, the generation of knockout and transgenic mice for receptors and ligands, and the development of human apoE targeted replacement mice have significantly expanded our understanding of the roles LDLRs and their ligands play in certain disease states and the accompanying initiation of specific signaling pathways. This review describes the roles LDLRs, apoE and reelin have in the regulation of hippocampal synaptic plasticity.
Research on the molecular and cellular basis of learning and memory has focused on the mechanisms that underlie the induction and expression of synaptic plasticity. There is increasing evidence that structural changes at the synapse are associated with synaptic plasticity and that extracellular matrix (ECM) components and cell adhesion molecules are associated with these changes. The functions of both groups of molecules can be regulated by proteolysis. In this article we review the roles of selected proteases and protease inhibitors in perisynaptic proteolysis of the ECM and synaptic adhesion proteins and the impact of proteolysis on synaptic modification and cognitive function.
Adhesive and repellent molecular cues guide migrating cells and growing neurites during development. They also contribute to synaptic function, learning and memory in adulthood. Here, we review the roles of cell adhesion molecules of the immunoglobulin superfamily (Ig-CAMs) and semaphorins (some of which also contain Ig-like domains) in the regulation of synaptic transmission and plasticity. Interestingly, among the seven studied Ig-CAMs, the neural cell adhesion molecule proved to be important for all tested forms of hippocampal plasticity, while its associated unusual glycan, polysialic acid, is necessary and sufficient for synaptic plasticity only at CA3-CA1 synapses. In contrast, Thy-1 and L1 specifically regulate long-term potentiation (LTP) at synapses formed by entorhinal axons in the dentate gyrus and cornu ammonis, respectively. Contactin-1 is important for long-term depression but not for LTP at CA3-CA1 synapses. Analysis of CHL1-deficient mice illustrates that a deficit in a cell adhesion molecule may be compensated at intermediate stages of development but appear as impaired LTP during early and late postnatal development. The emerging mechanisms by which adhesive Ig-CAMs contribute to synaptic plasticity involve regulation of the activities of NMDA receptors and L-type Ca2+ channels, signaling via the mitogen-activated protein kinase p38, and changes in GABAergic inhibition and the motility of synaptic elements. Regarding repellent molecules, available data for semaphorins demonstrate their activity-dependent regulation in normal and pathological conditions, the synaptic localization of their receptors and their potential to elevate or inhibit synaptic transmission either directly or indirectly.
Many neurons and their synapses are enwrapped in a brain-specific form of the extracellular matrix (ECM), the so-called perineuronal net (PNN). It forms late in postnatal development, around the time when synaptic contacts are stabilized. It is made of glycoproteins and proteoglycans of glial as well as neuronal origin. The major organizing polysaccharide of the brain extracellular space is the polymeric carbohydrate hyaluronic acid (HA). It forms the backbone of a meshwork consisting of CNS proteoglycans such as the lectican family of chondroitin sulphate proteoglycans (CSPGs). This family comprises four abundant components of brain ECM: aggrecan and versican as broadly expressed CSPGs, and neurocan and brevican as nervous-system-specific family members. In this review, we intend to focus on the specific role of the HA-based ECM in synapse development and function.
One of the most important functions of the visual system is to be able to recognise an object under a variety of different viewing conditions. For this to be achieved, the stimulus features that make up that object must appear constant under these conditions. If stimulus parameters do not form a reliable ‘label’ for an object under different conditions, their usefulness to the visual system is considerably reduced. For example, if we perceive a square shape on a video screen and the area it covers increases or decreases, we experience a sense of movement. The square seems to get closer or further away. The visual system assumes that the size of the square will not change, so that changes in its apparent size will signal changes in its relative distance from us. This is called object constancy. It is a sensible assumption, as under normal conditions objects seldom change in size. Another example is lightness constancy. Over the course of a normal day, light levels change significantly, but the apparent lightness of an object will change very little. The visual system scales its measure of lightness to the rest of the environment, so that the apparent lightness of an object will appear constant relative to its surroundings. A similar problem exists with the perception of colour. Over the space of a day, the spectral content of daylight changes significantly (Figure 7.1).
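The geometric relationship behind this inference can be sketched in a few lines: the retinal (angular) size of an object depends on both its physical size and its distance, so a change in angular size is ambiguous until the assumption of constant object size resolves it. The function name and example values below are illustrative, not from the text.

```python
import math

def visual_angle_deg(object_size, distance):
    """Visual angle (in degrees) subtended by an object of a given
    physical size at a given viewing distance (same units for both)."""
    return math.degrees(2 * math.atan(object_size / (2 * distance)))

# A 1 m square viewed at 10 m versus 5 m: halving the distance roughly
# doubles the angular size. Under the constancy assumption, the visual
# system reads this as the object approaching, not growing.
far = visual_angle_deg(1.0, 10.0)
near = visual_angle_deg(1.0, 5.0)
print(round(far, 2), round(near, 2))  # ~5.72 and ~11.42 degrees
```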
As we have seen in previous chapters, visual information is broken down into its components and processed, in parallel, in specialised areas, so that cells in different areas show a preference for different combinations of, for example, colour, motion, orientation, texture, shape and depth. This is all carried out in a complex network of 32 visual areas linked by at least 305 connections (Van Essen, Anderson & Felleman, 1992). These connections can run in three ‘directions’. Firstly, from lower areas (such as V1) to higher areas (such as V2); these are called feed-forward connections. Secondly, all these feed-forward connections have reciprocal feedback connections running from higher to lower areas. Thirdly, there are also lateral connections running between areas of equivalent processing complexity. In addition to this anatomical complexity, there are differences in how fast different visual parameters are processed (Zeki, 2003). For example, location is perceived before colour, and colour is perceived before motion and orientation (Zeki & Moutoussis, 1997; Pisella et al., 1998). It seems a far from trivial task to re-integrate all of this information from this complex spatial and temporal network into the seamless, coherent perception of the world we all share.
There are two obvious problems. Firstly, we have to put all the different visual features of an object back together in the right spatial and temporal relationship to one another.
Vision is the primary sensory modality in primates such as ourselves, and this is reflected in the complexity of the visual system and the extent of the cerebral cortex used for the analysis of visual information. On the basis of anatomical, physiological and behavioural studies, it is believed that at least 32 separate cortical areas are involved in the processing of visual information in the macaque monkey (Van Essen et al., 1992). Twenty-five of these areas are primarily visual in function; the remaining seven are also implicated in other functions such as polysensory integration or visually guided motor control. These visual areas occupy about half of the 100cm2 area of each of the monkey's cortical hemispheres. Two of the areas, V1 and V2, each occupies more than 10cm2 of the cortical surface, but most visual areas occupy less than a tenth of this area. Comparatively little is known of the functional anatomy of the human visual cortex, but it seems to be at least as complex as that of the monkey (Kaas, 1992; Sereno et al., 1995). Fortunately, it is possible to simplify this picture by concentrating on the key visual areas and looking at their functional organisation.
As one moves up the visual system, from the retina to the lateral geniculate nucleus and then on to successive cortical areas, visual neurons become responsive to more and more complex stimuli.
In the primary stages of the visual system, such as V1, objects are coded in terms of retinotopic co-ordinates, and lesions of V1 cause defects in retinal space, which move with eye movements, maintaining a constant retinal location. Several stages later in the visual system, at the inferior temporal cortex (IT) in non-human primates, the receptive fields are relatively independent of retinal location, and neurons can be activated by a specific stimulus, such as a face, over a wide range of retinal locations. Deficits that result from lesions of IT are based on the co-ordinate system properties of the object, independent of retinal location. Thus, at some point in the visual system, the pattern of excitation that reaches the eye must be transposed from a retinotopic co-ordinate system to a co-ordinate system centred on the object itself (Marr, 1982). An outline of such a transformation can be seen in Table 8.1.
At the same time that co-ordinates become object centred, the system becomes independent of the precise metrics of the object within its own co-ordinate system; that is to say, the system remains responsive to an object despite changes in its size, orientation, texture and completeness. Single-cell recording studies in the macaque suggest that, for face processing, these transformations occur in the anterior IT. The response of the majority of cells in the superior temporal sulcus (STS) is view-selective, and their outputs could be combined in a hierarchical manner to produce view-independent cells in the inferior temporal cortex.
The aim of this book is to provide a concise but detailed account of how your visual system is organised and functions to produce visual perception. There has been a host of new advances in our understanding of how our visual system is organised. These new discoveries stretch from the structural basis of the visual pigments that capture light to the neural basis of higher visual function.
In the past few years, the application of the techniques of molecular genetics has allowed us to determine the genetic and structural basis of the molecules that make up the photopigments, and the faults that can arise and produce visual deficits such as colour blindness, night blindness and retinitis pigmentosa. Careful analysis has also allowed the changes in cell chemistry that convert the absorption of light by the photopigment into a neural signal to be understood. The use of functional imaging techniques, in concert with more traditional techniques such as micro-electrode recording, has made it possible to understand how visual information is processed in the brain. This processing seems to be both parallel and hierarchical. Visual information is split into its different component parts such as colour, motion, orientation, texture, shape and depth, and these are analysed in parallel in separate areas, each specialised for a particular visual feature. The processed information is then reassembled into a single coherent perception of our visual world in subsequent, higher visual areas.
The primary visual cortex (V1) or striate cortex is an important area in which partially processed information from the retina and LGN is separated and packaged up for more elaborate analysis in the specialised visual areas of the extrastriate cortex. But V1 is more than just a neural version of a post office sorting department. The response properties of most neurons in V1 are very different from those of neurons in the preceding area. New response features, such as sensitivity to lines and bars of different orientations and movements are created, along with a specialisation of some neurons to an existing visual feature such as colour. Moreover, the functional organisation of V1 into repeating columns and modules seems to be a standard pattern in all cortical visual areas, and this pattern of organisation is an efficient way of mapping a multi-dimensional stimulus, such as vision, on to an irregularly shaped piece of two-dimensional cortex.
Visual information passes to the cortex from the LGN through the optic radiation. In the monkey, the first cortical visual area (V1) consists of a folded plate of cells about 2mm thick, with a surface area of a few square inches. This is a much larger and more complex structure than the LGN: the LGN is composed of about 1.5 million cells, whereas V1 is composed of around 200 million. V1 lies posteriorly in the occipital lobe and can be recognised by its characteristic appearance.
In determining the nature of the movement of an object or scene across the retina, the visual system has to determine whether the eyes are moving, the head or body is moving, or the object itself is moving. To determine whether the eyes are moving, it seems that the cortical motor areas that control eye movement simultaneously send a signal to the visual system (the corollary discharge theory). For example, if the eye muscles of volunteers are temporarily paralysed, and they are asked to try to move their eyes, the volunteers report that the scene seems to jump to a new position, even though their eyes do not move and the scene does not change (Stevens et al., 1976; Matin et al., 1982).
It is important for the visual system to know about eye movements and to be able to compensate for their effects, as under normal circumstances our eyes are constantly moving. The reason for this constant movement can be found in the organisation of the retina. High-acuity colour vision is limited to the central 2 degrees of the visual field subserved by the fovea. Outside this small window, the spatial sampling of the retinal image declines sharply with increasing distance from the fovea (Perry & Cowey, 1985). Similarly, the packing density of colour-sensitive cones declines by a factor of about 30 as one moves from central vision to 10 degrees of eccentricity (Curcio et al., 1991).
In this chapter we will review the purpose of the eye and how the complex optical and neural machinery within it functions to perform this task. The basic function of the eye is to catch and focus light on to a thin layer of specially adapted sensory receptor cells that line the back of the eye. The eyeball is connected to an elaborate arrangement of muscles that allow it to move to follow target stimuli in the environment. The lens within the eye, which helps focus light, is also connected to muscles that can alter the lens shape and thus its focal length. This allows target stimuli at different distances to be focused on the back of the eye. At the back of the eye, light energy is transformed into a neural signal by specialised receptor cells. This signal is modified in the retina, to emphasise changes and discontinuities in illumination, before it travels on to the brain via the optic nerve. In the sections that follow we will examine these processes in detail.
Light
Light has a dual nature, being considered both an electromagnetic wave, which can vary in frequency and wavelength, and also a series of discrete packets of energy, called photons. Both forms of description are used in explaining how the visual system responds to light. In determining the sensitivity of the visual system to light, such as the minimum threshold of light detection, it is usual to refer to light in terms of photons.
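The photon description can be made concrete with the standard relation E = hc/λ, which links the two views of light: wavelength (the wave picture) determines the energy of each photon (the particle picture). The function name and example wavelengths in this sketch are my own, chosen to span the visible range.

```python
# Photon energy E = h*c / wavelength.
H = 6.626e-34  # Planck's constant (J.s)
C = 2.998e8    # speed of light (m/s)

def photon_energy_joules(wavelength_nm):
    """Energy of a single photon of the given wavelength (nm)."""
    return H * C / (wavelength_nm * 1e-9)

# A short-wavelength (violet, 400 nm) photon carries 700/400 = 1.75
# times the energy of a long-wavelength (red, 700 nm) photon, which is
# why threshold-sensitivity measurements count photons rather than
# simply summing energy.
print(photon_energy_joules(400) / photon_energy_joules(700))  # 1.75
```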
The recognition and interpretation of faces and facially conveyed information are complex, multi-stage processes. A face is capable of signalling a wide range of information. It not only identifies the individual, but also provides information about a person's gender, age, health, mood, feelings, intentions and attentiveness. This information, together with eye contact, facial expression and gestures, is important in the regulation of social interactions. It seems that the recognition of faces and facially conveyed information are separate from the interpretation of this information.
Face recognition
The accurate localisation in humans of the area, or areas, important in the recognition of faces, and the question of how this system is organised, have plagued psychologists and neuroscientists for some years. The loss of the ability to recognise faces (prosopagnosia) has been reported in subjects with damage in the region of the occipito-temporal cortex, but the damage, whether through stroke or head injury, is usually diffuse. The subjects suffer not only from prosopagnosia, but usually from other forms of agnosia too, and often from impaired colour perception (achromatopsia). However, functional imaging has allowed more accurate localisation (see Figure 9.1), and these studies have suggested that the human face recognition system in many ways mirrors that of the non-human primates discussed in the previous chapter. The superior temporal sulcus (STS) in humans (as in monkeys) seems sensitive to the direction of gaze and head angle (cues to the direction of attention) and to movement of the mouth (important for lip reading), as well as to movement of the hands and body (Allison, Puce & McCarthy, 2000).
The perception of depth is essential to the generation of a three-dimensional representation of the spatial relationships in our surroundings; a representation which is essential if we are to be able to interact with our environment in any meaningful way. The visual system has two sets of depth cues: oculomotor and visual (Figure 11.1). They are termed cues because they must be learnt through association with non-visual aspects of experience. Oculomotor cues are based on the degree of convergence (a measure of the angle of alignment) of the eyes and the degree of accommodation (change in shape) of the lens. The visual cues can be both monocular and binocular. The monocular cues include interposition, relative size, perspective and motion parallax. Binocular cues are based on the disparity between the different views of the world from the two eyes. From this disparity, a three-dimensional or stereoscopic representation can be generated. The information on depth, together with information about movement and velocity, seems to be integrated with information from other sensory modalities to produce a map of perceptual space which is common to all our senses. This integration seems to occur in the posterior parietal cortex. Damage to this area causes profound impairments in our perception of space, including that occupied by our own bodies.
Oculomotor cues
When you fixate an object, your eyes are accommodated and converged by an amount dependent on the distance between you and that object (Figure 11.2).
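The dependence of convergence on fixation distance is simple triangulation, and can be sketched as follows; the function name and the interpupillary separation of 6.3 cm are illustrative assumptions, not values from the text.

```python
import math

def convergence_angle_deg(distance_m, interpupillary_m=0.063):
    """Angle (degrees) between the two eyes' lines of sight when
    fixating a point at the given distance, assuming a typical
    interpupillary separation (an illustrative value)."""
    return math.degrees(2 * math.atan(interpupillary_m / (2 * distance_m)))

# The eyes converge strongly at reading distance but are nearly
# parallel for distant fixation, so convergence is only an
# informative depth cue at near distances.
near = convergence_angle_deg(0.25)   # ~14 degrees at 25 cm
far = convergence_angle_deg(10.0)    # well under 1 degree at 10 m
print(round(near, 2), round(far, 2))
```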
In the vertebrate eye, colour is detected by cone receptors. In the case of humans and other Old World primates, there are three cone classes (Figure 3.1): a blue or short-wavelength pigment absorbing maximally at 420 nm, a green or middle-wavelength pigment absorbing maximally at 530 nm and a red or long-wavelength pigment absorbing maximally at 565 nm (Dartnall et al., 1983). For an animal to be able to discriminate between colours, it must have two or more different classes of cones. This is because a single cone pigment cannot discriminate between changes in wavelength and changes in the intensity of a light. For example, a red cone will respond strongly to a 560-nm light, but weakly to a 500-nm light. However, the same weak response can be obtained by holding the wavelength fixed at 560 nm and reducing the intensity, as a single cone class can only signal the number of photons absorbed by its pigment. This property is called univariance. To make the crucial differentiation between wavelength and intensity, a comparison of signals from two or more cone classes is required. 540-nm and 640-nm lights will produce different patterns of firing in the red and green cones, as compared with two 540-nm lights of different intensity. As a general rule of thumb, the more cone classes in an eye, the better the wavelength discrimination.
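Univariance can be illustrated with a toy model. The Gaussian absorption curves and their width below are assumptions chosen only to make the point; the peak wavelengths are taken from the text. A single cone's response to one light can always be matched by another wavelength at a suitable intensity, whereas the ratio of two cone classes' responses depends on wavelength alone.

```python
import math

# Toy cone absorption curves: hypothetical Gaussians with peaks from
# the text (M = 530 nm, L = 565 nm); the width is an assumption.
def sensitivity(wavelength_nm, peak_nm, width=60.0):
    return math.exp(-((wavelength_nm - peak_nm) / width) ** 2)

def response(wavelength_nm, intensity, peak_nm):
    # Univariance: a cone signals only photons absorbed, i.e.
    # intensity scaled by its sensitivity at that wavelength.
    return intensity * sensitivity(wavelength_nm, peak_nm)

L_PEAK, M_PEAK = 565.0, 530.0

# A lone L cone cannot tell a 560-nm light from a 500-nm light of
# suitably chosen intensity: its responses match exactly.
r_560 = response(560, 1.0, L_PEAK)
matched = r_560 / sensitivity(500, L_PEAK)
assert abs(response(500, matched, L_PEAK) - r_560) < 1e-12

# Comparing two cone classes breaks the confound: the L/M response
# ratio varies with wavelength but is unchanged by intensity.
ratio_560 = response(560, 1.0, L_PEAK) / response(560, 1.0, M_PEAK)
ratio_500 = response(500, 5.0, L_PEAK) / response(500, 5.0, M_PEAK)
print(round(ratio_560, 3), round(ratio_500, 3))
```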