In the language ℒ{num str} we may perform calculations such as the doubling of a given expression, but we cannot express doubling as a concept in itself. To capture the general pattern of doubling, we abstract away from the particular number being doubled: a variable stands for a fixed but unspecified number, and the doubling of an arbitrary number is expressed in terms of that variable. Any particular instance of doubling may then be obtained by substituting a numeric expression for that variable. In general an expression may involve many distinct variables, so we must specify which of several possible variables is varying in a particular context, giving rise to a function of that variable.
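The idea of abstracting a pattern with a variable and recovering instances by substitution can be sketched concretely. The following is a minimal illustration, not the book's formal definitions: expressions are nested tuples, and the names `subst` and `evaluate` are this example's own.

```python
# A variable is given meaning by substitution: replacing it with an expression.
def subst(expr, var, value):
    """Replace every occurrence of variable `var` in `expr` by `value`."""
    if isinstance(expr, str):               # a variable
        return value if expr == var else expr
    if isinstance(expr, int):               # a numeric literal
        return expr
    op, *args = expr                        # an operator applied to arguments
    return (op, *(subst(a, var, value) for a in args))

def evaluate(expr):
    op, left, right = expr if isinstance(expr, tuple) else ("num", expr, 0)
    if op == "num":
        return left
    l, r = evaluate(left), evaluate(right)
    return l + r if op == "plus" else l * r

# The general pattern of doubling, with `x` standing for an unspecified number:
double_x = ("plus", "x", "x")

# A particular instance is obtained by substituting a numeric expression for x:
print(evaluate(subst(double_x, "x", 7)))    # doubling 7 gives 14
```

Substituting a different numeric expression for `x` yields a different instance of the same general pattern.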
In this chapter we consider two extensions of ℒ{num str} with functions. The first, and perhaps most obvious, extension is by adding function definitions to the language. A function is defined by binding a name to an abt with a bound variable that serves as the argument of that function. A function is applied by substituting a particular expression (of suitable type) for the bound variable, obtaining an expression.
The domain and range of defined functions are limited to the types nat and str, as these are the only types of expression. Such functions are called first-order functions, in contrast to higher-order functions, which permit functions as arguments and results of other functions. Because the domain and range of a function are types, this requires that we introduce function types whose elements are functions.
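The first-order/higher-order distinction can be illustrated informally as follows; the function names here are this sketch's assumptions, not the text's formal language.

```python
# In a first-order setting, a defined function such as `double` may only
# take and return values of base type (here, numbers).
def double(n: int) -> int:
    return n + n

# A higher-order function takes a function as an argument (or returns one).
# Its argument inhabits a function type, not a base type, so `twice` is not
# expressible with first-order function definitions alone.
def twice(f, x):
    return f(f(x))

print(twice(double, 3))   # double(double(3)) = 12
```

In the text's terms, `twice` has a function type as its domain, which is precisely what the first extension of ℒ{num str} rules out and the second permits.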
Modularity is the most important technique for controlling the complexity of programs. Programs are decomposed into separate components with precisely specified, and tightly controlled, interactions. The pathways for interaction among components determine dependencies that constrain the process by which the components are integrated, or linked, to form a complete system. Different systems may use the same components, and a single system may use multiple instances of a single component. Sharing of components amortizes the cost of their development across systems and helps limit errors by limiting coding effort.
Modularity is not limited to programming languages. In mathematics the proof of a theorem is broken down into a collection of definitions and lemmas. Cross references among lemmas determine a dependency structure that constrains their integration to form a complete proof of the main theorem. Of course, one person's theorem is another person's lemma; there is no intrinsic limit on the depth and complexity of the hierarchies of results in mathematics. Mathematical structures are themselves composed of separable parts, as, for example, a Lie group is a group structure on a manifold.
Modularity arises from the structural properties of the hypothetical and general judgments. Dependencies among components are expressed by free variables whose typing assumptions state the presumed properties of the component. Linking consists of substitution and discharge of the hypothesis.
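As a loose analogy in code (an assumption of this sketch, not the book's formalism): a component's dependency on another is a free variable, expressed below as a parameter, and linking substitutes a concrete implementation for it.

```python
# A component that depends on some `formatter` whose implementation is left
# open; the dependency plays the role of a free variable with a presumed type.
def make_reporter(formatter):
    def report(value):
        return "report: " + formatter(value)
    return report

# Two different components satisfying the same presumed interface:
hex_formatter = lambda n: hex(n)
dec_formatter = lambda n: str(n)

# Linking: discharge the hypothesis by substituting an implementation.
print(make_reporter(hex_formatter)(255))  # report: 0xff
print(make_reporter(dec_formatter)(255))  # report: 255
```

The same component is shared by two systems, and each system fixes the dependency differently at link time.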
At birth, infants can see only large objects of high contrast located in the central visual field. Over the next half year, basic visual sensitivity improves dramatically. The infant begins to perceive the direction of moving objects and stereoscopic depth, and to integrate the features of objects and faces. Nevertheless, it takes until about 7 years of age for acuity and contrast sensitivity to become as acute as those of adults and into adolescence for some aspects of motion and face processing to reach adult levels of expertise.
An important developmental question is whether, and to what extent, the improvements in vision during normal development depend on normal visual experience. To find out, we have taken advantage of a natural experiment: children born with dense, central cataracts in both eyes that block all patterned visual input to the retina. The children are treated by surgically removing the cataractous lenses and fitting the eyes with compensatory contact lenses that allow the first focused patterned visual input to reach the retina. In the studies summarized in this chapter, the duration of deprivation – from birth until the fitting of contact lenses after surgery – ranged from just a few weeks to most of the first year of life. In other cases, the child began with apparently normal eyes but developed dense bilateral cataracts postnatally that blocked visual input. As in the congenital cases, the cataractous lenses were removed and the eyes fitted with contact lenses.
A symbol is an atomic datum with no internal structure. Whereas a variable is given meaning by substitution, a symbol is given meaning by a family of operations indexed by symbols. A symbol is therefore just a name, or index, for an instance of a family of operations. Many different interpretations may be given to symbols according to the operations we choose to consider, giving rise to concepts such as fluid binding, dynamic classification, mutable storage, and communication channels. With each symbol is associated a type whose interpretation depends on the particular application. The type of a symbol influences the type of its associated operations under each interpretation. For example, in the case of mutable storage, the type of symbol constrains the contents of the cell named by that symbol to values of that type. It is important to bear in mind that a symbol is not a value of its associated type, but only a constraint on how that symbol may be interpreted by the operations associated with it.
In this chapter we consider two constructs for computing with symbols. The first is a means of declaring new symbols for use within a specified scope. The expression ν a:ρ in e introduces a “new” symbol a with associated type ρ for use within e. The declared symbol a is new in the sense that it is bound by the declaration within e, and so may be renamed at will to ensure that it differs from any finite set of active symbols.
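A rough sketch of symbol declaration follows; the representation, the `new_symbol` name, and the use of a Python type as the associated type ρ are all this example's assumptions, not the chapter's formal syntax. The mutable-storage reading is one of the interpretations the text mentions.

```python
import itertools

# A symbol is an atomic name; minting a fresh one guarantees it differs from
# every previously active symbol, mirroring the renameability of "new a : rho in e".
_counter = itertools.count()

def new_symbol(typ):
    """Mint a symbol, carrying its associated type, distinct from all others."""
    return (f"sym{next(_counter)}", typ)

# One interpretation: mutable storage. The symbol is not a value of its type;
# it names a cell, and its type constrains the cell's contents.
store = {}

def set_cell(sym, value):
    name, typ = sym
    assert isinstance(value, typ), "cell contents must match the symbol's type"
    store[name] = value

def get_cell(sym):
    return store[sym[0]]

a = new_symbol(int)         # ν a : int
set_cell(a, 42)
print(get_cell(a))          # 42
```

Under a different interpretation (say, dynamic classification or channels), the same fresh-name mechanism would index a different family of operations.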
The semantics of many control constructs (such as exceptions and coroutines) can be expressed in terms of reified control stacks, a representation of a control stack as an ordinary value. This is achieved by allowing a stack to be passed as a value within a program and to be restored at a later point, even if control has long since returned past the point of reification. Reified control stacks of this kind are called continuations; they are values that can be passed and returned at will in a computation. Continuations never “expire,” and it is always sensible to reinstate a continuation without compromising safety. Thus continuations support unlimited “time travel”—we can go back to a previous point in the computation and then return to some point in its future, at will.
Why are continuations useful? Fundamentally, they are representations of the control state of a computation at a given point in time. Using continuations we can “checkpoint” the control state of a program, save it in a data structure, and return to it later. In fact, this is precisely what is necessary to implement threads (concurrently executing programs)—the thread scheduler must be able to checkpoint a program and save it for later execution, perhaps after a pending event occurs or another thread yields the processor.
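The thread-scheduler use of checkpointing can be sketched in Python. Python lacks first-class continuations, so this sketch uses generators as a restricted stand-in (a generator, unlike a true continuation, can be resumed only once from each checkpoint); the names `thread` and `round_robin` are this example's own.

```python
from collections import deque

def thread(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"   # checkpoint the control state here

def round_robin(threads):
    """Resume each saved control state in turn until all threads finish."""
    queue = deque(threads)
    trace = []
    while queue:
        t = queue.popleft()
        try:
            trace.append(next(t))   # reinstate the saved state
            queue.append(t)         # save the new checkpoint for later
        except StopIteration:
            pass                    # thread finished; drop it
    return trace

print(round_robin([thread("A", 2), thread("B", 2)]))
# ['A:0', 'B:0', 'A:1', 'B:1']
```

Each `yield` saves the thread's control state in a data structure, and the scheduler returns to it later, exactly the checkpointing pattern described above.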
When we look at our environment, we immediately detect and recognize the objects, buildings, and people surrounding us. Our perception of fine detail, lines, edges, color, movement, and depth is important for building up representations of these objects, scenes, and people. This processing occurs rapidly and is achieved effortlessly by the visual system as we take in the world with both eyes. Imagine what it might be like to not have vision through two eyes – to be completely blind. We would have to use our remaining intact sensory systems to their fullest capacity in order to interact with the world. Our senses of touch, taste, smell, and hearing would become significantly more important to allow us to connect with and understand our world.
Now instead, consider what it might be like to lose vision in only one eye. With one completely nonfunctional eye and one intact eye, our visual system would still receive light input through the intact remaining eye. So, one might ask, how could having only one eye affect our ability to see? From a systems point of view, the physical light input to our visual system would be reduced by half compared to the intact binocular visual system.
The brain has long been considered as being hard wired in a predetermined manner shaped by evolution. This view has been challenged in the past decades by increasing evidence documenting the impressive capacity of the brain to be modulated through learning and experience, even well into adulthood. Pioneering studies of Hubel and Wiesel (1963; Hubel et al., 1977) on the development of ocular dominance columns have compellingly demonstrated that alterations in visual experience can influence the normal development of the visual cortex.
One of the most striking demonstrations of experience-dependent plasticity comes from studies in congenitally blind individuals (CB) showing dramatic cortical reorganizations as a consequence of visual deprivation. Experiments have documented that cortical sensory maps in the remaining senses of CB can expand with experience. For instance, finger representation in the somatosensory cortex is increased in blind individuals who are proficient Braille readers (Pascual-Leone et al., 1993; Sterr et al., 1999), and the tonotopic map in the auditory cortex is larger in visually deprived individuals (Elbert et al., 2002). Such cortical changes are thought to underlie enhanced reading abilities and auditory processing skills in the blind (Elbert et al., 2002; Sterr et al., 1998).
Aside from these examples of intramodal plasticity, massive crossmodal changes have been reported in the occipital cortex deprived of its natural visual inputs. In people born blind, occipital regions that would normally process visual stimuli are “hijacked” by the other senses as these regions become responsive to nonvisual input (Bavelier and Neville, 2002; Pascual-Leone et al., 2005).
When I was 48 years old, my vision improved in ways that most scientists and physicians considered impossible (Barry, 2009; Sacks, 2006, 2010). I had developed strabismus, or misaligned eyes, within the first months of life. When I looked at an object, I aimed or fixated one eye at the target and turned the other eye in. In contrast, most infants aim their eyes simultaneously at the same point in space and are able to fuse the two eyes' images into one view of the world. They develop stereopsis, or the ability to use the slightly different viewing perspectives of the two eyes to create the perception of stereoscopic depth. Because I aimed my eyes at different regions of space, I received uncorrelated images that could not be fused. How could I create a single worldview from the conflicting input from my two eyes? Like most children with strabismus, I learned to ignore or suppress the input from the turned eye. This provided me with a single view of the world but one that lacked stereoscopic depth. I did not see with stereopsis: I was stereoblind.
In an attempt to correct this condition, I underwent three eye muscle surgeries at the ages of 2, 3, and 7 years. The operations helped my eyes to look straight but did not change my viewing habits. I continued to fixate with one eye and turn in the other, rapidly alternating between the eye that I used for fixation and the eye that I turned in.
Lazy evaluation refers to a variety of concepts that seek to defer evaluation of an expression until it is definitely required and to share the results of any such evaluation among all instances of a single deferred computation. The net result is that a computation is performed at most once among all of its instances. Laziness manifests itself in a number of ways.
One form of laziness is the by-need evaluation strategy for function application. Recall from Chapter 8 that the by-name evaluation order passes the argument to a function in unevaluated form so that it is evaluated only if it is actually used. But because the argument is replicated by substitution, it may be evaluated more than once. By-need evaluation ensures that the argument to a function is evaluated at most once by ensuring that all copies of an argument share the result of evaluating any one copy.
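By-need evaluation can be modeled with a memoizing thunk; this is an informal sketch in Python, with names of its own, rather than the text's operational semantics.

```python
# A thunk defers a computation; memoizing it ensures the computation is
# performed at most once, and every use shares the result.
def make_thunk(compute):
    cell = {"done": False, "value": None}
    def force():
        if not cell["done"]:
            cell["value"] = compute()   # evaluated at most once
            cell["done"] = True
        return cell["value"]
    return force

calls = []

def expensive():
    calls.append("evaluated")
    return 21

def double(arg):                # by-need: `arg` is a shared thunk
    return arg() + arg()        # both uses share the one evaluation

print(double(make_thunk(expensive)))  # 42
print(calls)                          # ['evaluated']  -- forced only once
```

Under by-name evaluation the substituted argument would be evaluated twice here; the shared memo cell is what makes the strategy by-need.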
Another form of laziness is the concept of a lazy data structure. As we have seen in Chapters 11, 12, and 16, we may choose to defer evaluation of the components of a data structure until they are actually required rather than when the data structure is created. But if a component is required more than once, then the same computation will, without further provision, be repeated on each use. To avoid this, the deferred portions of a data structure are shared so an access to one will propagate its result to all occurrences of the same computation.
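A lazy data structure with shared deferred components can be sketched as a stream whose tail is a memoized thunk; the `Stream` class and helper names here are this example's assumptions.

```python
class Stream:
    """An infinite sequence whose tail is computed on demand and then shared."""
    def __init__(self, head, tail_thunk):
        self.head = head
        self._tail_thunk = tail_thunk
        self._tail = None

    @property
    def tail(self):
        if self._tail is None:
            self._tail = self._tail_thunk()   # computed on first access
        return self._tail                     # every later access shares it

def nats(n=0):
    return Stream(n, lambda: nats(n + 1))

def take(s, k):
    out = []
    for _ in range(k):
        out.append(s.head)
        s = s.tail
    return out

print(take(nats(), 5))   # [0, 1, 2, 3, 4]
```

Accessing `s.tail` twice returns the very same object: the deferred portion is shared, so the computation is not repeated on each use.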