Constructive logic codifies the principles of mathematical reasoning as they are actually practiced. In mathematics a proposition may be judged to be true exactly when it has a proof and may be judged to be false exactly when it has a refutation. Because there are, and always will be, unsolved problems, we cannot expect in general that a proposition is either true or false, for in most cases we have neither a proof nor a refutation of it. Constructive logic may be described as logic as if people matter, as distinct from classical logic, which may be described as the logic of the mind of god. From a constructive viewpoint the judgment "ϕ true" means that "there is a proof of ϕ."
What constitutes a proof is a social construct, an agreement among people as to what a valid argument is. The rules of logic codify a set of principles of reasoning that may be used in a valid proof. The valid forms of proof are determined by the outermost structure of the proposition whose truth is asserted. For example, a proof of a conjunction consists of a proof of each of its conjuncts, and a proof of an implication consists of a transformation of a proof of its antecedent to a proof of its consequent. When spelled out in full, the forms of proof are seen to correspond exactly to the forms of expression of a programming language.
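The correspondence between forms of proof and forms of expression can be sketched concretely. The following Python fragment is a hypothetical illustration (the function names are invented, and proofs are modeled as plain values): a proof of a conjunction is represented as a pair, and a proof of an implication as a function from proofs to proofs.

```python
# A sketch of the proofs-as-programs correspondence (hypothetical
# illustration, not the book's formal system). A proof of a
# conjunction A /\ B is a pair; a proof of an implication A -> B is a
# function taking a proof of A to a proof of B.

def conj_intro(proof_a, proof_b):
    """Introduce a conjunction: package the two proofs as a pair."""
    return (proof_a, proof_b)

def conj_elim_left(proof_ab):
    """Eliminate a conjunction: project out the proof of the left conjunct."""
    return proof_ab[0]

def impl_intro(transform):
    """Introduce an implication: the transformation itself is the proof."""
    return transform

def impl_elim(proof_impl, proof_a):
    """Modus ponens: apply the implication-proof to a proof of A."""
    return proof_impl(proof_a)

# From a proof of A /\ B, derive a proof of A (proofs here are just
# placeholder values standing in for evidence).
p = conj_intro("evidence for A", "evidence for B")
assert conj_elim_left(p) == "evidence for A"

# The implication (A /\ B) -> A is proved by the projection itself.
proj = impl_intro(conj_elim_left)
assert impl_elim(proj, p) == "evidence for A"
```

The pairing and application operations above are exactly the product and function types of a programming language, which is the sense in which forms of proof correspond to forms of expression.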
When I arrived in Cambridge in the late 1970s to work with the late Fergus Campbell, the notion of critical periods for visual development was already well established through the seminal works of Hubel and Wiesel (1967), Blakemore (1976), and others (see Daw, 1995). The clinical implication of this was also recognized, namely, that visual improvements from the patching treatment of amblyopia were likely to be strongly age dependent and ineffective after the age of 7 to 8 years. At that time, the wisdom of Sir Stewart Duke-Elder held sway: “for patching to work it needs to be total and complete, day and night.” Fergus always felt that the stronger the statement, the more likely it was to be wrong. In fact, at that time, I remember his method for choosing research projects for graduate students. He would go to his bookshelf and choose at random one of Duke-Elder's System of Ophthalmology volumes, open it at random and locate a definitive statement on some visual topic. “Let's show this is wrong,” he would say, and the die was cast for what usually turned out to be another fruitful piece of research.
Fergus felt that some form of active stimulation would be better than the passive viewing that resulted from patching. He was enamored with the idea of the visual cortex as a spatial frequency analyzer, and because my early results suggested amblyopes suffered from severe spatial distortions (Hess et al., 1978), his first suggestion for a new therapy was to show a single spatial frequency at sequential orientations to help their visual cortex "sort out" what we hypothesized were anomalous interactions between these spatial analyzers.
Amblyopia is a visual impairment of one eye caused by inadequate use during early childhood and cannot be corrected by optical means (American Academy of Ophthalmology, 2007). Clinically, it is usually defined as a visual acuity of 20/30 or worse without any apparent structural abnormality in the affected eye. Amblyopia is a significant public health issue because it is the number one cause of monocular visual loss worldwide, affecting 3 to 5 percent of the population in the Western world (Attebo et al., 1998; Hillis, 1986). Because of its prevalence, the financial burden of amblyopia is enormous. A major U.S. study estimated that untreated amblyopia causes a yearly loss of US$7.4 billion in earning power and a corresponding decrease in the gross domestic product. An estimated US$341 million is spent each year to prevent and treat amblyopia (Membreno et al., 2002). Unfortunately, approximately 50 percent of patients do not respond to therapies (Holmes, Beck, et al., 2003; Holmes, Kraker, et al., 2003; The Pediatric Eye Disease Investigator Group [PEDIG], 2003; Repka et al., 2004, 2008; Scheiman et al., 2005). The personal cost of amblyopia is also substantial. People with amblyopia (including those treated successfully and those whose treatment failed) often have limited career choices and reduced quality of life such as reduced social contact, distance and depth estimation deficits, visual disorientation, and fear of losing vision in the better eye (van de Graaf et al., 2004).
The central nervous system (CNS) integrates information from multiple sensory modalities, including visual and proprioceptive information, when planning a reaching movement (Jeannerod, 1988). Although visual and proprioceptive information regarding hand (or end point effector) position are not always consistent, performance is typically better under reaching conditions in which both sources of information are available. Under certain task conditions, visual signals tend to dominate such that one relies more on visual information than proprioception to guide movement. For example, individuals reaching to a target with misaligned visual feedback of the hand, as experienced when reaching in a virtual reality environment or while wearing prism displacement goggles, adjust their movements in order for the visual representation of the hand to achieve the desired end point even when their actual hand is elsewhere in the workspace (Krakauer et al., 1999, 2000; Redding and Wallace, 1996; Simani et al., 2007). This motor adaptation typically occurs rapidly, reaching baseline levels within twenty trials per target, and without participants' awareness (Krakauer et al., 2000). Furthermore, participants reach with these adapted movement patterns following removal of the distortion, and hence show aftereffects (Baraduc and Wolpert, 2002; Buch et al., 2003; Krakauer et al., 1999, 2000; Martin et al., 1996). These aftereffects provide a measure of motor learning referred to as visuomotor adaptation and result from the CNS learning a new visuomotor mapping to guide movement.
A distributed computation is one that takes place at many different sites, each of which controls some resources located at that site. For example, the sites might be nodes on a network, and a resource might be a device or sensor located at that site or a database controlled by that site. Only programs that execute at a particular site may access the resources situated at that site. Consequently, command execution always takes place at a particular site, called the locus of execution. Access to resources at a remote site from a local site is achieved by moving the locus of execution to the remote site, running code to access the local resource, and returning a value to the local site.
In this chapter we consider the language ℒ{nat cmd ⇀∥}, an extension of Concurrent Algol with a spatial type system that mediates access to located resources on a network. The type safety theorem ensures that all accesses to a resource controlled by a site are through a program executing at that site, even though references to local resources may be freely passed around to other sites on the network. The key idea is that channels and events are located at a particular site and that synchronization on an event may occur only at the site appropriate to that event. Issues of concurrency, which are to do with nondeterministic composition, are thereby cleanly separated from those of distribution, which are to do with the locality of resources on a network.
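The idea of moving the locus of execution to the site that controls a resource can be sketched in Python. This is a hypothetical model, not the semantics of ℒ{nat cmd ⇀∥}: the `Site` class and `run_at` method are invented names, and the "network" is simulated in a single process.

```python
# A hypothetical Python sketch of the "locus of execution" idea: each
# site owns its resources, and remote access is modeled by shipping a
# function to the owning site, running it there, and returning only a
# value. (Site and run_at are illustrative names, not from the chapter.)

class Site:
    def __init__(self, name, resources):
        self.name = name
        self._resources = resources  # only code running here may touch these

    def run_at(self, code):
        """Move the locus of execution to this site: run `code` with
        access to the local resources and hand back only the result."""
        return code(self._resources)

# Two sites, each controlling a local database.
a = Site("A", {"db": {"x": 1}})
b = Site("B", {"db": {"y": 2}})

# From site A's perspective, reading B's database means sending code to
# B and getting a value back, never touching B's resources directly.
value = b.run_at(lambda res: res["db"]["y"])
assert value == 2
```

Note that references to sites may be passed around freely; what is controlled is where the code that touches the resources actually runs, mirroring the safety property the spatial type system enforces.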
Programming languages are languages, a means of expressing computations in a form comprehensible to both people and machines. The syntax of a language specifies the means by which various sorts of phrases (expressions, commands, declarations, and so forth) may be combined to form programs. But what sort of thing are these phrases? What is a program made of?
The informal concept of syntax may be seen to involve several distinct concepts. The surface, or concrete, syntax is concerned with how phrases are entered and displayed on a computer. The surface syntax is usually thought of as given by strings of characters from some alphabet (say, ASCII or Unicode). The structural, or abstract, syntax is concerned with the structure of phrases, specifically how they are composed from other phrases. At this level a phrase is a tree, called an abstract syntax tree, whose nodes are operators that combine several phrases to form another phrase. The binding structure of syntax is concerned with the introduction and use of identifiers: how they are declared and how declared identifiers are to be used. At this level phrases are abstract binding trees, which enrich abstract syntax trees with the concepts of binding and scope.
We do not concern ourselves in this book with matters of concrete syntax, but instead work at the level of abstract syntax. To prepare the ground for the rest of the book, this chapter begins by defining abstract syntax trees and abstract binding trees and some functions and relations associated with them.
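The distinction between concrete and abstract syntax can be made concrete with a small sketch. In the hypothetical Python fragment below (the operator names are invented for illustration), the string "2 + 3 * x" is concrete syntax, while the tree built from `Plus`, `Times`, `Num`, and `Var` nodes is its abstract syntax tree: each node is an operator combining subphrases.

```python
# A minimal illustration (hypothetical, not the book's formalism) of
# abstract syntax trees: each node is an operator applied to subtrees.

from dataclasses import dataclass

@dataclass
class Num:          # leaf: a numeral
    value: int

@dataclass
class Var:          # leaf: a use of an identifier
    name: str

@dataclass
class Plus:         # operator combining two phrases into another phrase
    left: object
    right: object

@dataclass
class Times:
    left: object
    right: object

# The concrete syntax "2 + 3 * x" corresponds to this tree; operator
# precedence is resolved by the tree's structure, not by symbols.
tree = Plus(Num(2), Times(Num(3), Var("x")))

def evaluate(t, env):
    """Walk the tree structurally, one case per operator."""
    if isinstance(t, Num):
        return t.value
    if isinstance(t, Var):
        return env[t.name]
    if isinstance(t, Plus):
        return evaluate(t.left, env) + evaluate(t.right, env)
    if isinstance(t, Times):
        return evaluate(t.left, env) * evaluate(t.right, env)
    raise TypeError("unknown phrase")

assert evaluate(tree, {"x": 4}) == 14
```

Binding structure (abstract binding trees) would additionally record which nodes declare identifiers and over what scope; the sketch above stops at plain abstract syntax trees.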
It frequently arises that the values of a type are partitioned into a variety of classes, each classifying data with a distinct internal structure. A good example is provided by the type of points in the plane, which may be classified according to whether they are represented in Cartesian or polar form. Both are represented by a pair of real numbers, but in the Cartesian case these are the x and y coordinates of the point, whereas in the polar case these are its distance r from the origin and its angle θ with the polar axis. A classified value is said to be an object, or instance, of its class. The class determines the type of the classified data, which is called the instance type of the class. The classified data itself is called the instance data of the object.
Functions that act on classified values are sometimes called methods. The behavior of a method is determined by the class of its argument. The method is said to dispatch on the class of the argument. Because it happens at run time, this is called dynamic dispatch. For example, the squared distance of a point from the origin is calculated differently according to whether the point is represented in Cartesian or polar form. In the former case the required distance is x² + y², whereas in the latter it is simply r² itself.
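The point example can be rendered directly in Python, where method dispatch on the class of the receiver is the built-in mechanism. This is a sketch of the chapter's example (class and method names are invented):

```python
# A sketch of class-based dynamic dispatch for points in the plane.
# Cartesian instance data is (x, y); polar instance data is (r, theta).

import math

class Cartesian:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def squared_distance(self):
        # For Cartesian instance data: x^2 + y^2.
        return self.x ** 2 + self.y ** 2

class Polar:
    def __init__(self, r, theta):
        self.r, self.theta = r, theta

    def squared_distance(self):
        # For polar instance data: simply r^2.
        return self.r ** 2

# Dynamic dispatch: the same method call runs different code depending
# on the run-time class of the object.
points = [Cartesian(3, 4), Polar(5, math.pi / 2)]
assert [p.squared_distance() for p in points] == [25, 25]
```

Both points lie at distance 5 from the origin, so both calls return 25 even though entirely different code computes each result.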
The motivation for introducing polymorphism was to enable more programs to be written, namely those that are "generic" in the sense of Chapter 20. Then if a program does not depend on the choice of types, we can code it by using polymorphism. Moreover, if we wish to insist that a program cannot depend on a choice of types, we demand that it be polymorphic. Thus polymorphism can be used both to expand the collection of programs we may write and also to limit the collection of programs that are permissible in a given context.
The restrictions imposed by polymorphic typing give rise to the experience that in a polymorphic functional language, if the types are correct, then the program is correct. Roughly speaking, if a function has a polymorphic type, then the strictures of type genericity vastly cut down the set of programs with that type. Thus if you have written a program with this type, it is quite likely to be the one you intended!
The technical foundation for these remarks is called parametricity. The goal of this chapter is to give an account of parametricity for ℒ{→ ∀} under a call-by-name interpretation.
Overview
We begin with an informal discussion of parametricity based on a “seat of the pants” understanding of the set of well-formed programs of a type.
Suppose that a function value f has the type ∀(t .t → t).
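Parametricity implies that, ignoring nontermination, a function of type ∀(t . t → t) must behave as the identity: knowing nothing about t, it has no operation to perform on its argument other than returning it. Python does not enforce this, but the following hedged sketch (using the standard `typing` module) illustrates the idea:

```python
# Python annotations can express the polymorphic type ∀(t. t -> t),
# though Python does not enforce parametricity. A function honestly
# written at this type cannot examine its argument (nothing is known
# about t), so the only total behavior available is the identity.

from typing import TypeVar

T = TypeVar("T")

def f(x: T) -> T:
    # No operation on x is available at the abstract type t,
    # so return it unchanged.
    return x

# Whatever the type of the argument, f behaves as the identity.
assert f(3) == 3
assert f("hello") == "hello"
assert f([1, 2]) == [1, 2]
```

In a language like ℒ{→ ∀}, this is not an observation about one particular function but a theorem about every well-typed inhabitant of the type, which is the force of the parametricity result developed in the chapter.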
From Part III: PLASTICITY IN ADULTHOOD AND VISION REHABILITATION
By Carolin Gall and Bernhard A. Sabel, Otto-von-Guericke University of Magdeburg Medical Faculty
In this chapter, we discuss the recently proposed residual vision activation theory that is based on both human and animal studies (Sabel, Henrich-Noack, et al., 2011). The central point of the theory is that partially damaged brain systems have a particularly good potential for restoration of vision. Fortunately, in the clinical world, complete visual system lesions are extremely rare because complete damage is only found in total eye or optic nerve damage or some severe congenital defects. As a consequence, even in patients considered to be legally blind, there is almost always some degree of residual vision and hence restoration potential.
This chapter focuses on human studies and therapeutic applications that build on the "residual vision activation theory." We discuss work with patients who suffered postretinal lesions to the central visual pathway. Although visual fields may recover spontaneously to some extent, after the first few weeks or months following damage, this recovery no longer continues in most cases. Therefore, we focus on the time after this initial recovery and consider the effects of training procedures and noninvasive alternating current stimulation as an innovative means to kindle vision restoration long after the lesion has occurred. Both treatment approaches restore visual fields, mainly in areas that were not absolutely blind but have some residual capacities. The nature of residual vision, its measurement, and the role of activating residual vision as a means to promote recovery of vision are discussed.
In constructive logic a proposition is true exactly when it has a proof, a derivation of it from axioms and assumptions, and is false exactly when it has a refutation, a derivation of a contradiction from the assumption that it is true. Constructive logic is a logic of positive evidence. To affirm or deny a proposition requires a proof, either of the proposition itself or of a contradiction, under the assumption that it has a proof. We are not always in a position to affirm or deny a proposition. An open problem is one for which we have neither a proof nor a refutation, so that, constructively speaking, it is neither true nor false.
In contrast, classical logic (the one we learned in school) is a logic of perfect information in which every proposition is either true or false. We may say that classical logic corresponds to "god's view" of the world: there are no open problems; rather, all propositions are either true or false. Put another way, to assert that every proposition is either true or false is to weaken the notion of truth to encompass all that is not false, dually to the constructively (and classically) valid interpretation of falsity as all that is not true. The symmetry between truth and falsity is appealing, but there is a price to pay for this: the meanings of the logical connectives are weaker in the classical case than in the constructive.