It is mildly ironic that the title of this chapter is an unfulfilled (or improper) definite description because Russell really had two versions of the theory of definite descriptions. The two versions differ in primary goals, character and philosophical strength.
The first version of Russell's theory of definite descriptions was developed in his famous essay of 1905, ‘On Denoting’. Its primary goal was to ascertain the logical form of natural language statements containing denoting phrases. The class of such statements included statements with definite descriptions, a species of denoting phrase, such as ‘The Prime Minister of England in 1904 favored retaliation’ and ‘The gold mountain is gold’. So the theory of definite descriptions contained in what Russell himself regarded as his finest philosophical essay is a theory about how to paraphrase natural language statements containing definite descriptions into an incompletely specified formal language about propositional functions. Russell used this version of his theory to disarm arguments such as Meinong's arguments for beingless objects. Such reasoning, he said, is the product of a mistaken view about the logical form of statements containing definite descriptions.
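For the record, the now-standard schematic rendering of that logical form (stated here in modern notation rather than in the notation of 1905) treats 'The F is G' as asserting existence, uniqueness and predication all at once:

    \exists x\,\bigl(F(x) \land \forall y\,(F(y) \rightarrow y = x) \land G(x)\bigr)

On this reading 'The present King of France is bald' comes out false rather than truth-valueless, since its existential clause fails; the definite description contributes no constituent of its own to the proposition.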
The second and later version is presented in that epic work of 1910, Principia Mathematica (hereafter usually Principia). Its primary goal, in contrast to the first version, was to provide a foundation for mathematics, indeed, to reduce all of mathematics to logic. In chapter *14 Russell introduces a special symbol, the inverted iota, and uses it to make singular term-like expressions out of quasi-statements.
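In modern notation the key contextual definition of *14 can be stated, roughly, as follows (the Principia original also carries an explicit scope indicator, omitted here):

    \psi(\iota x\,\phi x) \;\leftrightarrow\; \exists b\,\bigl(\forall x\,(\phi x \leftrightarrow x = b) \land \psi b\bigr)

so that an iota term never survives into a fully analysed formula; it is eliminated contextually, just as the 1905 theory prescribes.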
Elementary microphysical statements can be neither true nor false without violating the classical codification of statement logic. The existence of such a possibility depends upon a revision in the standard explication of logical truth, a revision more harmonious with the idea of argument validity as merely truth-preserving. The revision in question, in turn, depends upon Bas van Fraassen's investigations into the semantical foundations of positive free logic, a species of logical system whose philosophical significance was first made plain in Henry Leonard's pioneering study of 1956, ‘The Logic of Existence’.
Students who steadfastly refuse to accept an argument as valid unless all of the component statements are in fact true frustrate teachers of logic. “Now look!” the teacher may heatedly emphasize, “the validity of an argument has to do with its form alone. So to say that an argument is valid is to say only that if its premises were true its conclusion would also be true!” But, then, not only is the argument from the pair of false statements
Jim Thorpe was Russian
and
If Jim Thorpe was Russian, he was a Bolshevik
to the false statement
Jim Thorpe was a Bolshevik
valid, but so is the argument from the pair of (allegedly) truth-valueless statements
The Queen of the United States dreamed she was being led down a bridal path by a gorilla
and
If the Queen of the United States dreamed she was being led down a bridal path by a gorilla, she desires to marry a man named ‘Harry’
to the (allegedly) truth-valueless statement
The Queen of the United States desires to marry a man named ‘Harry’.
On page 149 of the second edition of their book, Deductive Logic, Hugues Leblanc and William Wisdom say the following about the origins of free logic.
Presupposition-free logic (known for short as free logic) grew out of two papers published simultaneously: Hintikka's ‘Existential Presuppositions and Existential Commitments’, The Journal of Philosophy, Vol. 56, 1959, pp. 125–137, and Leblanc and (Theodore) Hailperin's ‘Nondesignating Singular Terms’, The Philosophical Review, Vol. 68, 1959, pp. 129–136. Both made use of the identity sign ‘=’. In a later paper Karel Lambert devised a free logic without ‘=’ … (See Lambert's ‘Existential Import Revisited’, Notre Dame Journal of Formal Logic, Vol. 4, 1963, pp. 288–292.)
This account is in one respect misleading, and in another inaccurate. It is misleading because Rolf Schock, independently both of Hintikka and of Leblanc and Hailperin, was developing a version of free logic in the early 1960s quite different in character from those mentioned by Leblanc and Wisdom. Moreover, Schock's ideas were in certain respects more fully developed, because he also supplied models for his own systematization; the writers mentioned by Leblanc and Wisdom had suggested the semantical bases for their formulations only informally, in print at any rate. (Apparently Schock's ideas were known to some European scholars in the early 1960s but oddly were not disseminated. One can find an account of Schock's pioneering efforts in his 1968 book, Logics Without Existence Assumptions.)
David Kaplan once suggested to me that the pair of self-contradictory statements:
(1) The round square both is and isn't a round square,
and
(2) The class of all classes not members of themselves both is and isn't a member of itself
“ought to have the same father”. But apparently they don't despite their family resemblance. Russell deduced (1) from a principle he presumed correctly to be a key ingredient of Meinong's theory of objects. That principle says:
MP The object that is so and so is (a) so and so.
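One standard reconstruction of the route from MP to (1) runs as follows (a sketch in modern dress; the chapter's own reconstruction may differ in detail). Instantiating MP with 'round square' yields

    R(\iota x\,(Rx \land Sx)) \land S(\iota x\,(Rx \land Sx)),

that is, the round square is both round and square. But since nothing round is square, the round square, being round, is not square, and hence is not a round square; together with MP's own claim that it is one, this gives (1).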
On the other hand, Russell deduced (2) from a seemingly unrelated but no less fundamental principle in Frege's version of set theory, the principle of set abstraction. That principle, a version of the principle of comprehension, (in effect) says:
FP Everything is such that it is a member of the class of so and sos if and only if it is (a) so and so.
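The derivation of (2) from FP is the classical one (sketched here in modern notation): let R be the class of all classes that are not members of themselves. FP then gives

    \forall y\,(y \in R \leftrightarrow y \notin y),

and instantiating y with R itself yields R \in R \leftrightarrow R \notin R, from which (2) follows immediately.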
The lack of common ancestry between MP and FP, and hence between their respective consequences (1) and (2), enabled Russell to treat the theory of objects and the theory of sets (or classes) very differently. He thought (1) “demolished” the theory of objects, but he didn't think (2) destroyed the theory of classes. Russell's attitude was wrong, because Kaplan's suspicion of the common kinship of (1) and (2) is justified, and the proof of this fact is the next order of business.
We present an axiomatic framework for Girard's Geometry of Interaction based on the notion of linear combinatory algebra. We give a general construction on traced monoidal categories with certain additional structure, sufficient to capture the exponentials of Linear Logic, which produces such algebras (and hence also ordinary combinatory algebras). We illustrate the construction on six standard examples, representing both the ‘particle-style’ and the ‘wave-style’ Geometry of Interaction.
Chapters 2–11 have described the fundamental components of a good compiler: a front end, which does lexical analysis, parsing, construction of abstract syntax, type-checking, and translation to intermediate code; and a back end, which does instruction selection, dataflow analysis, and register allocation.
What lessons have we learned? We hope that the reader has learned about the algorithms used in different components of a compiler and the interfaces used to connect the components. But the authors have also learned quite a bit from the exercise.
Our goal was to describe a good compiler that is, to use Einstein's phrase, “as simple as possible – but no simpler.” We will now discuss the thorny issues that arose in designing the MiniJava compiler.
Structured l-values. Java (and MiniJava) has no record or array variables, as C, C++, and Pascal do. Instead, all object and array values are really just pointers to heap-allocated data. Implementing structured l-values requires some care but not too many new insights.
Tree intermediate representation. The Tree language has a fundamental flaw: It does not describe procedure entry and exit. These are handled by opaque procedures inside the Frame module that generate Tree code. This means that a program translated to Trees using, for example, the PentiumFrame version of Frame will be different from the same program translated using SparcFrame – the Tree representation is not completely machine-independent.
This book is intended as a textbook for a one- or two-semester course in compilers. Students will see the theory behind different components of a compiler, the programming techniques used to put the theory into practice, and the interfaces used to modularize the compiler. To make the interfaces and programming examples clear and concrete, we have written them in Java. Another edition of this book is available that uses the ML language.
Implementation project. The “student project compiler” that we have outlined is reasonably simple, but is organized to demonstrate some important techniques that are now in common use: abstract syntax trees to avoid tangling syntax and semantics, separation of instruction selection from register allocation, copy propagation to give flexibility to earlier phases of the compiler, and containment of target-machine dependencies. Unlike many “student compilers” found in other textbooks, this one has a simple but sophisticated back end, allowing good register allocation to be done after instruction selection.
This second edition of the book has a redesigned project compiler: It uses a subset of Java, called MiniJava, as the source language for the compiler project, it explains the use of the parser generators JavaCC and SableCC, and it promotes programming with the Visitor pattern. Students using this edition can implement a compiler for a language they're familiar with, using standard tools, in a more object-oriented style.
func-tion: a mathematical correspondence that assigns exactly one element of one set to each element of the same or another set
Webster's Dictionary
The mathematical notion of function is that if f(x) = a “this time,” then f(x) = a “next time”; there is no other value equal to f(x). This allows the use of equational reasoning familiar from algebra: If a = f(x), then g(f(x), f(x)) is equivalent to g(a, a). Pure functional programming languages encourage a kind of programming in which equational reasoning works, as it does in mathematics.
Imperative programming languages have similar syntax: a ← f(x). But if we follow this by b ← f(x), there is no guarantee that a = b; the function f can have side effects on global variables that make it return a different value each time. Furthermore, a program might assign into variable x between calls to f(x), so f(x) really means a different thing each time.
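As a minimal illustration (hypothetical code, not taken from the project compiler), the following Java fragment shows why f(x) cannot be treated as a single value once side effects are allowed:

    class Impure {
        static int counter = 0;

        // Not a mathematical function: its result depends on hidden state.
        static int f(int x) {
            counter = counter + 1;          // side effect on a global variable
            return x + counter;
        }

        public static void main(String[] args) {
            int a = f(3);                   // a == 4
            int b = f(3);                   // b == 5, so a != b
            System.out.println(a == b);     // prints false: equational reasoning fails
        }
    }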
Higher-order functions. Functional programming languages also allow functions to be passed as arguments to other functions, or returned as results. Functions that take functional arguments are called higher-order functions.
Higher-order functions become particularly interesting if the language also supports nested functions with lexical scope (also called block structure). Lexical scope means that each function can refer to variables and parameters of any function in which it is nested. A higher-order functional language is one with nested scope and higher-order functions.
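Java is not a functional language, but its lambdas give a rough flavor of the idea. The following fragment (illustrative only, not from the project compiler) returns a function whose body mentions a variable of the enclosing definition:

    import java.util.function.IntUnaryOperator;

    class HigherOrder {
        // A higher-order function: it returns another function as its result.
        // The returned function refers to the parameter n of the enclosing
        // definition, which is the essence of lexical scope (a nested
        // function capturing its environment).
        static IntUnaryOperator adder(int n) {
            return x -> x + n;
        }

        public static void main(String[] args) {
            IntUnaryOperator addFive = adder(5);
            System.out.println(addFive.applyAsInt(37));   // prints 42
        }
    }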
trans-late: to turn into one's own or another language
Webster's Dictionary
The semantic analysis phase of a compiler must translate abstract syntax into abstract machine code. It can do this after type-checking, or at the same time.
Though it is possible to translate directly to real machine code, this hinders portability and modularity. Suppose we want compilers for N different source languages, targeted to M different machines. In principle this is N · M compilers (Figure 7.1a), a large implementation task.
An intermediate representation (IR) is a kind of abstract machine language that can express the target-machine operations without committing to too much machine-specific detail. But it is also independent of the details of the source language. The front end of the compiler does lexical analysis, parsing, semantic analysis, and translation to intermediate representation. The back end does optimization of the intermediate representation and translation to machine language.
A portable compiler translates the source language into IR and then translates the IR into machine language, as illustrated in Figure 7.1b. Now only N front ends and M back ends are required. Such an implementation task is more reasonable.
Even when only one front end and one back end are being built, a good IR can modularize the task, so that the front end is not complicated with machine-specific details, and the back end is not bothered with information specific to one source language. Many different kinds of IR are used in compilers; for this compiler we have chosen simple expression trees.
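As a taste of what such an IR can look like, here is a toy expression-tree sketch in Java; the class names echo the Tree language, but they are illustrative only, and the project compiler's actual Tree module is richer and differs in detail.

    // Toy expression-tree IR (illustrative; not the book's Tree module).
    abstract class Exp {}

    class CONST extends Exp {               // integer constant
        final int value;
        CONST(int value) { this.value = value; }
    }

    class TEMP extends Exp {                // abstract register (temporary)
        final String name;
        TEMP(String name) { this.name = name; }
    }

    class BINOP extends Exp {               // operator applied to two subtrees
        enum Op { PLUS, MINUS, MUL, DIV }
        final Op op;
        final Exp left, right;
        BINOP(Op op, Exp left, Exp right) {
            this.op = op; this.left = left; this.right = right;
        }
    }

    class MEM extends Exp {                 // contents of the word at an address
        final Exp addr;
        MEM(Exp addr) { this.addr = addr; }
    }

    // Example: the tree for  MEM(TEMP fp + CONST 8) * CONST 4  would be
    //   new BINOP(BINOP.Op.MUL,
    //             new MEM(new BINOP(BINOP.Op.PLUS, new TEMP("fp"), new CONST(8))),
    //             new CONST(4));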
We present a category of locally convex topological vector spaces that is a model of propositional classical linear logic and is based on the standard concept of Köthe sequence spaces. In this setting, the ‘of course’ connective of linear logic has a quite simple structure of a commutative Hopf algebra. The co-Kleisli category of this linear category is a cartesian closed category of entire mappings. This work provides a simple setting in which typed λ-calculus and differential calculus can be combined; we give a few examples of computations.
ab-stract: disassociated from any specific instance
Webster's Dictionary
A compiler must do more than recognize whether a sentence belongs to the language of a grammar – it must do something useful with that sentence. The semantic actions of a parser can do useful things with the phrases that are parsed.
In a recursive-descent parser, semantic action code is interspersed with the control flow of the parsing actions. In a parser specified in JavaCC, semantic actions are fragments of Java program code attached to grammar productions. SableCC, on the other hand, automatically generates syntax trees as it parses.
SEMANTIC ACTIONS
Each terminal and nonterminal may be associated with its own type of semantic value. For example, in a simple calculator using Grammar 3.37, the type associated with exp and INT might be int; the other tokens would not need to carry a value. The type associated with a token must, of course, match the type that the lexer returns with that token.
For a rule A → B C D, the semantic action must return a value whose type is the one associated with the nonterminal A. But it can build this value from the values associated with the matched terminals and nonterminals B, C, D.
RECURSIVE DESCENT
In a recursive-descent parser, the semantic actions are the values returned by parsing functions, or the side effects of those functions, or both.
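A hand-written fragment in Java makes this concrete (illustrative only, not Grammar 3.37 verbatim): each parsing function returns the semantic value of the phrase it parses, here an int.

    class CalcParser {
        private final java.util.List<String> tokens;
        private int pos = 0;

        CalcParser(java.util.List<String> tokens) { this.tokens = tokens; }

        private String peek()    { return pos < tokens.size() ? tokens.get(pos) : "EOF"; }
        private String advance() { return tokens.get(pos++); }

        // exp -> INT ( "+" INT )*
        int exp() {
            int value = Integer.parseInt(advance());      // semantic value of INT
            while (peek().equals("+")) {
                advance();                                // consume "+"
                value = value + Integer.parseInt(advance());
            }
            return value;                                 // semantic value of exp
        }

        public static void main(String[] args) {
            CalcParser p = new CalcParser(java.util.List.of("1", "+", "2", "+", "3"));
            System.out.println(p.exp());                  // prints 6
        }
    }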
We show a method for translating concurrent systems into partially ordered sets in a functorial way. This is done in a way resembling the construction of the fundamental groups in topology. Since the morphisms of concurrent systems have a flavour of the implementability of one system in another, the functor provides a tool for proving certain non-implementability results.
sched-ule: a procedural plan that indicates the time and sequence of each operation
Webster's Dictionary
A simple computer can process one instruction at a time. First it fetches the instruction, then decodes it into opcode and operand specifiers, then reads the operands from the register bank (or memory), then performs the arithmetic denoted by the opcode, then writes the result back to the register bank (or memory), and then fetches the next instruction.
Modern computers can execute parts of many different instructions at the same time. At the same time the processor is writing results of two instructions back to registers, it may be doing arithmetic for three other instructions, reading operands for two more instructions, decoding four others, and fetching yet another four. Meanwhile, there may be five instructions delayed, awaiting the results of memory-fetches.
Such a processor usually fetches instructions from a single flow of control; it's not that several programs are running in parallel, but the adjacent instructions of a single program are decoded and executed simultaneously. This is called instruction-level parallelism (ILP), and is the basis for much of the astounding advance in processor speed in the last decade of the twentieth century.
A pipelined machine performs the write-back of one instruction in the same cycle as the arithmetic “execute” of the next instruction and the operand-read of the one after that, and so on. A very-long-instruction-word (VLIW) machine issues several instructions in the same processor cycle; the compiler must ensure that they are not data-dependent on each other.
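A small hypothetical example shows the kind of data dependence a scheduler must respect; the source-level statements here stand in for machine instructions.

    class Dependences {
        static int compute(int b, int c, int f, int g) {
            int a = b + c;   // (1)
            int d = a * 2;   // (2) reads a, so it must wait for (1)
            int e = f + g;   // (3) independent of (1) and (2); a VLIW machine
                             //     could issue it in the same cycle as (1)
            return d + e;
        }

        public static void main(String[] args) {
            System.out.println(compute(1, 2, 3, 4));   // prints 13
        }
    }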
mem-o-ry: a device in which information can be inserted and stored and from which it may be extracted when wanted
hi-er-ar-chy: a graded or ranked series
Webster's Dictionary
An idealized random access memory (RAM) has N words indexed by integers such that any word can be fetched or stored – using its integer address – equally quickly. Hardware designers can make a big slow memory, or a small fast memory, but a big fast memory is prohibitively expensive. Also, one thing that speeds up access to memory is its nearness to the processor, and a big memory must have some parts far from the processor no matter how much money might be thrown at the problem.
Almost as good as a big fast memory is the combination of a small fast cache memory and a big slow main memory; the program keeps its frequently used data in cache and the rarely used data in main memory, and when it enters a phase in which datum x will be frequently used it may move x from the slow memory to the fast memory.
It's inconvenient for the programmer to manage multiple memories, so the hardware does it automatically. Whenever the processor wants the datum at address x, it looks first in the cache, and – we hope – usually finds it there. If there is a cache miss – x is not in the cache – then the processor fetches x from main memory and places a copy of x in the cache so that the next reference to x will be a cache hit.
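The following toy direct-mapped cache model in Java (hypothetical, for illustration only) captures the hit/miss behaviour just described: on a miss, the requested word is copied into the cache so that the next reference to the same address is a hit.

    import java.util.Arrays;

    class ToyCache {
        private final int[] tags;      // which address occupies each cache line (-1 = empty)
        private final int[] data;      // cached copy of the word at that address
        private final int[] memory;    // the big, slow main memory

        ToyCache(int lines, int[] memory) {
            this.tags = new int[lines];
            Arrays.fill(this.tags, -1);
            this.data = new int[lines];
            this.memory = memory;
        }

        int fetch(int addr) {
            int line = addr % tags.length;
            if (tags[line] == addr) {
                return data[line];         // cache hit: fast path
            }
            tags[line] = addr;             // cache miss: go to main memory
            data[line] = memory[addr];     // and install a copy
            return data[line];
        }

        public static void main(String[] args) {
            int[] mem = new int[64];
            mem[10] = 99;
            ToyCache c = new ToyCache(8, mem);
            c.fetch(10);                       // miss: loads from main memory
            System.out.println(c.fetch(10));   // hit: prints 99
        }
    }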