Over the past ten or so years, brain plasticity has become an extremely hot scientific trend and a huge commercial enterprise. From the parent who wants to give his or her newborn an enriched environment to promote superior brain growth to the aging adult who wants to stave off Alzheimer's disease, exercising, enriching, and training the brain has become a multimillion-dollar industry. Hundreds of brain promotion companies have sprouted up, such as The Baby Einstein Company, LLC, and hundreds of new books are published each year on brain enrichment. “Brain health,” “brain training,” and “brain fitness” are terms that are bandied about in the advertising world, suggestive of the possibility of improving and prolonging intellectual health. However, this “brain improvement” commercialism, although occasionally overstated, is not without some foundation in hard science: the discovery of brain plasticity.
The roots of the concept of “brain plasticity” can be traced to William James's seminal work, The Principles of Psychology (1890), in which he clearly understood that behavior, habits, or instincts are governed by certain physiological limitations. He states, “Plasticity, … in the wide sense of the word, means the possession of a structure weak enough to yield to an influence, but strong enough not to yield all at once.… Organic matter, especially nervous tissue, seems endowed with a very extraordinary degree of plasticity of this sort; so that we may without hesitation lay down as our first proposition the following, that the phenomena of habit in living beings are due to the plasticity of the organic materials of which their bodies are composed” (p. 106).
We mostly take object vision for granted, simply because our brain makes it seem easy. As a consequence, most of what we learn about objects during both development and adulthood goes unnoticed. Once the input to the system is in order (so excluding retinal disorders), almost all people can recognize cars, Coca-Cola bottles, and Barbie dolls. We only get a glimpse of the complexity of the underlying processes when we go through the most challenging tasks that we are typically confronted with. For example, some people have below average skills in face recognition. In this respect, interindividual differences in the most challenging object recognition tasks, created either naturally or in the lab by manipulating experience, serve as a gold mine for trying to understand the brain's exceptional ability to recognize objects.
My favorite example of an idiosyncratic object recognition talent is Gudrun, my eight-year-old daughter. She has a favorite teddy bear, an affection that developed when she was only a few months old. When Gudrun was one year old, my wife and I bought a second identical bear (just in case the first one was lost). Obviously, she noticed the difference between the old bear (which she calls “pretty bear”) and the new one. It was also easy for us parents to differentiate between the old worn bear and the new exemplar. However, over the years these differences became very minor, and now no one can reliably differentiate “pretty bear” from “new bear.”
The plasticity of the mammalian brain – that is, its ability to adapt to environmental situations by changing its connectivity – is one of its most outstanding properties, distinguishing it from most other computational devices. This plasticity is perhaps most striking in the sensory systems that provide input to the brain. The plasticity of sensory systems in higher centers of the brain, such as the cerebral cortex, is the basis for its adaptability to the environment. During individual development, neural plasticity is greater than during adulthood, a difference necessitated by the growth of the organism and the need for the brain to be programmed. Although sensory plasticity tapers off in adulthood, it does not cease completely. This chapter deals with the behavioral, anatomical, and physiological plasticity in animals and humans that grow up blind. I discuss plasticity in the somatosensory and auditory systems of visually deprived cats, mice, and humans. Evidence for crossmodal plasticity was acquired using single-unit neurophysiology and neuroanatomy in animal models of early blindness and using imaging techniques in humans. The data support a concept of developmental plasticity whereby major sensory processing modules in the cortex are set up without the influence of sensory experience, but the sensory modality that drives them depends on sensory experience.
Expansion of the Whisker-Barrel System in Early-Blind Animals
In rodents, the facial vibrissae, or whiskers, provide one of the most important sources of information to the brain. This is underscored by the fact that rodents possess a special representation in their somatosensory cortex known as barrels that can be visualized with various anatomical and histochemical techniques (Van der Loos and Woolsey, 1973). The barrel cortex shows pronounced intramodal plasticity: when one of the whiskers is removed, the corresponding barrel shrinks. However, this plasticity of the whisker-barrel system is apparent even when sensory deprivation is exerted in a different sensory modality, such as vision: mice that are reared blind from birth by binocular enucleation develop significantly longer vibrissae and, correspondingly, an expanded barrel field (Rauschecker et al., 1992; Fig. 8.1). This may be interpreted as reflecting increased use of the whiskers, which leads not only to use-dependent expansion of their central representation but also to hypertrophy of the peripheral sense organ itself.
In Chapters 12 and 25 we investigated the use of sums for the classification of values of disparate type. Every value of a classified type is labeled with a symbol that determines the type of the instance data. A classified value is decomposed by pattern matching against a known class, which reveals the type of the instance data.
Under this representation the possible classes of an object are fully determined statically by its type. However, it is sometimes useful to allow the possible classes of a data value to be determined dynamically. There are many uses for such a capability, some less apparent than others. The most obvious is simply extensibility: we may wish to introduce new classes of data during execution (and, presumably, define how methods act on values of those new classes).
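To make the idea of run-time extensibility concrete, here is a brief, hedged sketch in OCaml (not the formalism developed in this book): OCaml's extensible variant types allow new constructors, that is, new classes of data, to be added after the type has been declared. The type msg and the constructors Ping and Text are invented for this illustration.

```ocaml
(* An extensible sum: new classes (constructors) can be added after the type
   is declared, possibly in other modules or in code loaded later. *)
type msg = ..

type msg += Ping
type msg += Text of string

(* A handler matches the classes it knows about; classes introduced elsewhere
   fall through to the catch-all case. *)
let describe (m : msg) : string =
  match m with
  | Ping -> "ping"
  | Text s -> "text: " ^ s
  | _ -> "unknown class of message"
```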
A less obvious application exploits the fact that the new class is guaranteed to be distinct from any other class that has already been introduced. The class itself is a kind of “secret” that can be disclosed only if the computation that creates the class discloses its existence to another computation. In particular, the class is opaque to any computation to which this disclosure has not been explicitly made. This capability has a number of practical applications.
One application is to use dynamic classification as a “perfect encryption” mechanism that guarantees that a value cannot be determined without access to the appropriate “keys.”
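As a hedged illustration of this use, the following OCaml sketch mints a fresh class at run time using a classic reference-cell encoding; the names new_class, seal, and unseal are invented here, and this is only a simulation of the idea, not the account given in the text. A value sealed with a class can be recovered only by the matching unseal function, so possession of that function plays the role of holding the “key.”

```ocaml
(* new_class () creates a brand-new class of payloads.  A sealed value is an
   opaque closure; only the matching unseal function can recover its payload. *)
type sealed = unit -> unit

let new_class (type a) () : (a -> sealed) * (sealed -> a option) =
  let slot : a option ref = ref None in
  let seal (x : a) : sealed = fun () -> slot := Some x in
  let unseal (v : sealed) : a option =
    slot := None;   (* clear our slot, then let v deposit its payload, if it is ours *)
    v ();
    !slot
  in
  (seal, unseal)

let () =
  let seal_int, unseal_int = new_class () in
  let seal_str, _ = new_class () in
  assert (unseal_int (seal_int 42) = Some 42);   (* same class: payload recovered *)
  assert (unseal_int (seal_str "spy") = None)    (* foreign class: value stays opaque *)
```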
Modernized Algol, or ℒ{nat cmd ⇀}, is an imperative, block-structured programming language based on the classic language Algol. ℒ{nat cmd ⇀} may be seen as an extension to ℒ{nat ⇀} with a new syntactic sort of commands that act on assignables by retrieving and altering their contents. Assignables are introduced by declaring them for use within a specified scope; this is the essence of block structure. Commands may be combined by sequencing and may be iterated by recursion.
ℒ{nat cmd ⇀} maintains a careful separation between pure expressions, whose meaning does not depend on any assignables, and impure commands, whose meaning is given in terms of assignables. This ensures that the evaluation order for expressions is not constrained by the presence of assignables in the language, and allows for expressions to be manipulated, much as in PCF. Commands, on the other hand, have a tightly constrained execution order, because the execution of one may affect the meaning of another.
A distinctive feature of ℒ{nat cmd ⇀} is that it adheres to the stack discipline, which means that assignables are allocated on entry to the scope of their declaration, and deallocated on exit, using a conventional stack discipline. This avoids the need for more complex forms of storage management, at the expense of reducing the expressiveness of the language.
Basic Commands
The syntax of the language ℒ{nat cmd ⇀} of Modernized Algol distinguishes pure expressions from impure commands.
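The grammar itself is developed in the chapter; as a rough, hedged sketch of the two syntactic sorts, one might render them as a pair of mutually recursive OCaml datatypes (the constructor names below are invented for this illustration and are not the book's official syntax).

```ocaml
(* Pure expressions and impure commands as two mutually recursive sorts.
   Assignables are represented here simply by their names. *)
type exp =
  | Var of string              (* variable *)
  | Num of int                 (* numeral *)
  | Plus of exp * exp          (* a pure operation on expressions *)
  | Cmd of cmd                 (* an encapsulated (unexecuted) command *)

and cmd =
  | Ret of exp                 (* return the value of an expression *)
  | Bnd of string * exp * cmd  (* execute an encapsulated command, bind its result, continue *)
  | Get of string              (* read the contents of an assignable *)
  | Set of string * exp        (* update the contents of an assignable *)
  | Dcl of string * exp * cmd  (* declare an assignable, scoped to the given command *)
```

Under the stack discipline described above, a Dcl node would allocate its assignable on entry to the enclosed command and deallocate it on exit.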
Data abstraction is perhaps the most important technique for structuring programs. The main idea is to introduce an interface that serves as a contract between the client and the implementor of an abstract type. The interface specifies what the client may rely on for its own work, and, simultaneously, what the implementor must provide to satisfy the contract. The interface serves to isolate the client from the implementor so that each may be developed in isolation from the other. In particular, one implementation may be replaced by another without affecting the behavior of the client, provided that the two implementations meet the same interface and are, in a sense to be made precise shortly, suitably related to one another. (Roughly, each simulates the other with respect to the operations in the interface.) This property is called representation independence for an abstract type.
Data abstraction may be formalized by extending the language ℒ{→ ∀} with existential types. Interfaces are modeled as existential types that provide a collection of operations acting on an unspecified, or abstract, type. Implementations are modeled as packages, the introductory form for existentials, and clients are modeled as uses of the corresponding elimination form. It is remarkable that the programming concept of data abstraction is modeled so naturally and directly by the logical concept of existential type quantification. Existential types are closely connected with universal types and hence are often treated together.
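As a hedged analogy in OCaml, an interface corresponds to a signature with an abstract type, and an implementation to a module matching that signature; the COUNTER example below is invented for illustration and is not drawn from the text.

```ocaml
(* The interface: the representation type t is held abstract, so a client can
   rely only on the operations listed here. *)
module type COUNTER = sig
  type t
  val zero : t
  val inc  : t -> t
  val read : t -> int
end

(* One implementation (a "package"): counters represented as integers. *)
module IntCounter : COUNTER = struct
  type t = int
  let zero = 0
  let inc n = n + 1
  let read n = n
end

(* Another implementation: counters as lists of unit, read off by length.
   A client written against COUNTER cannot distinguish the two. *)
module ListCounter : COUNTER = struct
  type t = unit list
  let zero = []
  let inc c = () :: c
  let read = List.length
end

(* A client parameterized over any implementation of the interface. *)
module Client (C : COUNTER) = struct
  let three = C.read (C.inc (C.inc (C.inc C.zero)))
end
```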
Types are the central organizing principle of the theory of programming languages. Language features are manifestations of type structure. The syntax of a language is governed by the constructs that define its types, and its semantics is determined by the interactions among those constructs. The soundness of a language design – the absence of ill-defined programs – follows naturally.
The purpose of this book is to explain this remark. A variety of programming language features are analyzed in the unifying framework of type theory. A language feature is defined by its statics, the rules governing the use of the feature in a program, and its dynamics, the rules defining how programs using this feature are to be executed. The concept of safety emerges as the coherence of the statics and the dynamics of a language.
In this way we establish a foundation for the study of programming languages. But why these particular methods? The main justification is provided by the book itself. The methods we use are both precise and intuitive, providing a uniform framework for explaining programming language concepts. Importantly, these methods scale to a wide range of programming language concepts, supporting rigorous analysis of their properties. Although it would require another book in itself to justify this assertion, these methods are also practical in that they are directly applicable to implementation and uniquely effective as a basis for mechanized reasoning. No other framework offers as much.
Severe visual impairments, varying in etiology and intensity, affect more than 280 million people worldwide (World Health Organization [WHO], 2011; Elkhayat, 2012). Although, as described in other chapters in this book, the brain of the blind undergoes massive plastic changes in an effort to compensate for the lack of vision, providing increased support for other senses and abilities, the blind and visually impaired remain significantly limited in their ability to perform tasks ranging from navigation and orientation to object recognition. Thus, the blind are prevented from fully taking part in modern society, and developing effective visual rehabilitation techniques for them constitutes a major clinical and scientific challenge. Many attempts have been made to help the blind using a wide variety of different approaches; unfortunately, until recent years most have borne discouraging results. This chapter discusses if and how the plasticity described in this book can be harnessed for visual rehabilitation in adulthood to enable the blind to use their own brain to process “raw” visual information, despite the discouraging outcome of past attempts. We describe several different approaches to visual rehabilitation and their practical real-world and clinical results, focusing on sensory substitution devices and their potential. We then show some examples of what using these devices has taught us about the brain, offering a theoretical basis for their empirical results, and finish with some practical conclusions and recommendations for future visual rehabilitation attempts.
The binary product of two types consists of ordered pairs of values, one from each type in the order specified. The associated eliminatory forms are projections, which select the first and second components of a pair. The nullary product, or unit, type consists solely of the unique “null tuple” of no values and has no associated eliminatory form. The product type admits both a lazy and an eager dynamics. According to the lazy dynamics, a pair is a value without regard to whether its components are values; they are not evaluated until (if ever) they are accessed and used in another computation. According to the eager dynamics, a pair is a value only if its components are values; they are evaluated when the pair is created.
More generally, we may consider the finite product ⟨τi⟩i∈I indexed by a finite set of indices I. The elements of the finite product type are I-indexed tuples whose ith component is an element of the type τi for each i ∈ I. The components are accessed by I-indexed projection operations, generalizing the binary case. Special cases of the finite product include n-tuples, indexed by sets of the form I = {0, …, n − 1}, and labeled tuples, or records, indexed by finite sets of symbols. Similar to binary products, finite products admit both an eager and a lazy interpretation.
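For concreteness, here is a small, hedged OCaml illustration of these constructs (the type point and the other names are invented for this example). Note that OCaml's native products are eager; a lazy pairing can be simulated with the Lazy module.

```ocaml
(* Binary products: ordered pairs with first and second projections. *)
let pair : int * string = (3, "three")
let first  (x, _) = x
let second (_, y) = y

(* The nullary product (unit) has exactly one value and no projections. *)
let null_tuple : unit = ()

(* A labeled tuple (record): a finite product indexed by symbols. *)
type point = { x : float; y : float }
let origin = { x = 0.0; y = 0.0 }
let x_of (p : point) = p.x        (* projection by label *)

(* Components of a native pair are evaluated when the pair is formed (eager);
   suspending them with lazy recovers the lazy interpretation. *)
let lazy_pair : int Lazy.t * string Lazy.t = (lazy (1 + 2), lazy ("th" ^ "ree"))
let force_first (p : int Lazy.t * string Lazy.t) = Lazy.force (fst p)
```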
The technique of structural dynamics is very useful for theoretical purposes, such as proving type safety, but is too high level to be directly usable in an implementation. One reason is that the use of “search rules” requires the traversal and reconstruction of an expression in order to simplify one small part of it. In an implementation we would prefer to use some mechanism to record “where we are” in the expression so that we may resume from that point after a simplification. This can be achieved by introducing an explicit mechanism, called a control stack, that keeps track of the context of an instruction step for just this purpose. By making the control stack explicit, the transition rules avoid the need for any premises—every rule is an axiom. This is the formal expression of the informal idea that no traversals or reconstructions are required to implement it. This chapter introduces an abstract machine K{nat ⇀} for the language ℒ{nat ⇀}. The purpose of this machine is to make control flow explicit by introducing a control stack that maintains a record of the pending subcomputations of a computation. We then prove the equivalence of K{nat ⇀} with the structural dynamics of ℒ{nat ⇀}.
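As a hedged miniature of this idea (a toy language of numerals and addition with invented constructor names, rather than the machine K{nat ⇀} itself), an evaluator with an explicit control stack can be written so that every transition is an axiom: no rule needs a premise, because the stack records the pending work.

```ocaml
(* A tiny stack machine for arithmetic expressions. *)
type exp = Num of int | Plus of exp * exp

type frame =
  | PlusL of exp                  (* evaluating the left argument; right still pending *)
  | PlusR of int                  (* left argument already evaluated to this number *)

type state =
  | Eval of frame list * exp      (* evaluate the expression on the given stack *)
  | Return of frame list * int    (* return a value to the stack *)

let step : state -> state option = function
  | Eval (k, Num n)            -> Some (Return (k, n))
  | Eval (k, Plus (e1, e2))    -> Some (Eval (PlusL e2 :: k, e1))
  | Return (PlusL e2 :: k, n1) -> Some (Eval (PlusR n1 :: k, e2))
  | Return (PlusR n1 :: k, n2) -> Some (Return (k, n1 + n2))
  | Return ([], _)             -> None    (* final state: empty stack *)

(* Drive the machine to a final state and read off the answer. *)
let rec run (s : state) : int =
  match s, step s with
  | Return ([], v), None -> v
  | _, Some s' -> run s'
  | _, None -> failwith "stuck"

let () = Printf.printf "%d\n" (run (Eval ([], Plus (Num 1, Plus (Num 2, Num 3)))))
```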
Constructive logic codifies the principles of mathematical reasoning as they are actually practiced. In mathematics a proposition may be judged to be true exactly when it has a proof and may be judged to be false exactly when it has a refutation. Because there are, and always will be, unsolved problems, we cannot expect in general that a proposition is either true or false, for in most cases we have neither a proof nor a refutation of it. Constructive logic may be described as logic as if people matter, as distinct from classical logic, which may be described as the logic of the mind of god. From a constructive viewpoint the judgment “ϕ true” means that “there is a proof of ϕ.”
What constitutes a proof is a social construct, an agreement among people as to what a valid argument is. The rules of logic codify a set of principles of reasoning that may be used in a valid proof. The valid forms of proof are determined by the outermost structure of the proposition whose truth is asserted. For example, a proof of a conjunction consists of a proof of each of its conjuncts, and a proof of an implication consists of a transformation of a proof of its antecedent to a proof of its consequent. When spelled out in full, the forms of proof are seen to correspond exactly to the forms of expression of a programming language.
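This correspondence can be illustrated with a short, hedged OCaml sketch (the function names are invented for this example): a proof of a conjunction is a pair of proofs, and a proof of an implication is a function transforming a proof of the antecedent into a proof of the consequent.

```ocaml
(* "A and B implies A": the proof projects the first component of the pair. *)
let and_elim_left : 'a * 'b -> 'a = fun (a, _) -> a

(* Commutativity of conjunction: swap the components of the pair. *)
let and_comm : 'a * 'b -> 'b * 'a = fun (a, b) -> (b, a)

(* "If A implies B and B implies C, then A implies C": compose the two
   transformations of proofs. *)
let imp_trans : ('a -> 'b) -> ('b -> 'c) -> ('a -> 'c) =
  fun f g a -> g (f a)
```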