The programs in this book were run under the VM/CMS time-sharing system on a large IBM 370 mainframe, a 3090 processor. A virtual machine with 4 megabytes of storage was used.
The compiler for converting register machine programs into exponential diophantine equations is a 700-line REXX program. REXX is a very nice and easy-to-use pattern-matching string-processing language implemented by means of a very efficient interpreter.
There are three implementations of our version of pure LISP:
The first is in REXX, and is 350 lines of code. This is the simplest implementation of the LISP interpreter, and it serves as an “executable design document.”
The second is on a simulated register machine. This implementation consists of a 250-line REXX driver that converts M-expressions into S-expressions, remembers function definitions, and does most input and output formatting, and a 1000-line 370 Assembler H expression evaluator. The REXX driver wraps each expression in a lambda expression which binds all current definitions, and then hands it to the assembler expression evaluator; a small sketch of this wrapping step is given after this list of implementations. The 1000 lines of assembler code include the register machine simulator, many macro definitions, and the LISP interpreter in register machine language. This is the slowest of the three implementations; its goals are theoretical, but it is fast enough to test and debug.
The third LISP implementation, like the previous one, has a 250-line REXX driver; the real work is done by a 700-line 370 Assembler H expression evaluator. This is the high-performance evaluator, and it is amazingly small: less than 8K bytes of 370 machine language code, tables, and buffers, plus a megabyte of storage for the stack, and two megabytes for the heap, so that there is another megabyte left over for the REXX driver. It gets by without a garbage collector: since all information that must be preserved from one evaluation to another (mostly function definitions) is in the form of REXX character strings, the expression evaluator can be reinitialized after each evaluation. Another reason for the simplicity and speed of this interpreter is that our version of pure LISP is “permissive”; error checking and the production of diagnostic messages usually make up a substantial portion of an interpreter.
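As a rough illustration of the lambda-wrapping step performed by the driver of the second implementation, here is a hypothetical Python sketch. It is not the REXX driver: the helper name `wrap`, the concrete syntax, and the neglect of recursive definitions are all assumptions of the sketch.

```python
# Hypothetical sketch of the driver's lambda-wrapping step: the stored
# definitions live as plain strings, and each expression is bound inside a
# lambda expression before being handed to the evaluator, so the evaluator
# never has to remember definitions between evaluations.
def wrap(expr, definitions):
    """definitions: dict mapping each name to its definition text."""
    names = " ".join(definitions)
    bodies = " ".join(definitions.values())
    return f"((lambda ({names}) {expr}) {bodies})"

defs = {"twice": "(lambda (x) (cons x (cons x ())))"}
print(wrap("(twice a)", defs))
# -> ((lambda (twice) (twice a)) (lambda (x) (cons x (cons x ()))))
```

Because the wrapped expression carries its own bindings, the evaluator's heap can simply be thrown away after each evaluation, which is the design choice behind the third implementation's lack of a garbage collector.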
In this chapter we present a “permissive” simplified version of pure LISP designed especially for metamathematical applications. Aside from the rule that an S-expression must have balanced ()'s, the only way that an expression can fail to have a value is by looping forever. This is important because algorithms that simulate other algorithms chosen at random must be able to run garbage safely.
This version of LISP developed from one originally designed for teaching [CHAITIN (1976a)]. The language was reduced to its essence and made as easy to learn as possible, and was actually used in several university courses. Like APL, this version of LISP is so concise that one can write it as fast as one thinks. This LISP is so simple that an interpreter for it can be coded in three hundred and fifty lines of REXX.
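To give a feel for how small such an interpreter can be, here is a toy “permissive” S-expression evaluator in Python. It is only a sketch under assumed conventions (symbolic atoms rather than single characters, an illustrative set of primitives, car and cdr of an atom returning the atom itself), and it is not the book's LISP or any of the three implementations above.

```python
# A toy "permissive" S-expression evaluator: primitives never signal an error,
# they simply return some value. The primitive names, the treatment of atoms,
# and the use of Python lists for LISP lists are assumptions of this sketch.

def tokenize(text):
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Read one S-expression from the token list (consumes tokens)."""
    tok = tokens.pop(0)
    if tok != "(":
        return tok                         # an atom
    lst = []
    while tokens[0] != ")":
        lst.append(parse(tokens))
    tokens.pop(0)                          # drop the ")"
    return lst

def ev(x, env):
    if isinstance(x, str):                 # an atom: bound value, else itself
        return env.get(x, x)
    if not x:                              # the empty list is its own value
        return x
    op, *args = x
    if op == "quote":
        return args[0]
    if op == "if":                         # (if test then else)
        return ev(args[1] if ev(args[0], env) != [] else args[2], env)
    if op == "lambda":                     # (lambda (params) body)
        return ("closure", args[0], args[1], env)
    vals = [ev(a, env) for a in args]
    if op == "car":                        # permissive: car of an atom is the atom
        return vals[0][0] if isinstance(vals[0], list) and vals[0] else vals[0]
    if op == "cdr":                        # permissive: cdr of an atom is the atom
        return vals[0][1:] if isinstance(vals[0], list) and vals[0] else vals[0]
    if op == "cons":
        return [vals[0]] + (vals[1] if isinstance(vals[1], list) else [vals[1]])
    if op == "atom":
        return "t" if not isinstance(vals[0], list) or vals[0] == [] else []
    if op == "eq":
        return "t" if vals[0] == vals[1] else []
    f = ev(op, env)                        # apply a lambda expression
    if isinstance(f, tuple) and f[0] == "closure":
        _, params, body, closed = f
        return ev(body, {**closed, **dict(zip(params, vals))})
    return f                               # permissive: applying a non-function

print(ev(parse(tokenize("(car (quote abc))")), {}))   # -> abc, not an error
print(ev(parse(tokenize("((lambda (x) (cons x (quote (b)))) (quote a))")), {}))
# -> ['a', 'b']
```

Note how the permissive conventions keep the evaluator short: there is no error-reporting machinery at all, only the possibility of running forever.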
How to read this chapter: This chapter can be quite difficult to understand, especially if one has never programmed in LISP before. The correct approach is to read it several times, and to try to work through all the examples in detail. Initially the material will seem completely incomprehensible, but all of a sudden the pieces will snap together into a coherent whole. Alternatively, one can skim Chapters 3, 4, and 5, which depend heavily on the details of this LISP, and proceed directly to the more theoretical material in Chapter 6, which could be based on Turing machines or any other formalism for computation.
The purpose of Chapters 3 and 4 is to show how easy it is to implement an extremely powerful and theoretically attractive programming language on the abstract register machines that we presented in Chapter 2.
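To make the phrase “abstract register machine” concrete, here is a much simplified Python sketch of a register machine whose registers hold character strings. The instruction set (SET, RIGHT, JUMP_EMPTY, GOTO, HALT) is invented for the sketch and is not the instruction set presented in Chapter 2.

```python
# A hypothetical register machine simulator: registers hold character strings,
# the program is a list of instructions, and pc is the program counter.
# The instruction set is illustrative only.
def run(program, registers, max_steps=10_000):
    pc = 0
    for _ in range(max_steps):
        op, *args = program[pc]
        if op == "HALT":
            return registers
        elif op == "SET":              # SET reg, literal-string
            registers[args[0]] = args[1]
        elif op == "RIGHT":            # drop the first character of a register
            registers[args[0]] = registers[args[0]][1:]
        elif op == "JUMP_EMPTY":       # jump if the register is the empty string
            if registers[args[0]] == "":
                pc = args[1]
                continue
        elif op == "GOTO":             # unconditional jump
            pc = args[0]
            continue
        pc += 1
    raise RuntimeError("step limit exceeded (program may loop forever)")

# Example: erase register A one character at a time, then halt.
prog = [("JUMP_EMPTY", "A", 3),        # 0: done?
        ("RIGHT", "A"),                # 1: shorten A
        ("GOTO", 0),                   # 2: loop
        ("HALT",)]                     # 3
print(run(prog, {"A": "abc"}))         # -> {'A': ''}
```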
Cambridge LCF proofs are conducted in PPλ, a logic of domain theory. Leaving aside domains — discussed in later chapters — PPλ is a typical natural deduction formulation of first order logic. This chapter introduces formal proof, first order logic, natural deduction, and sequent calculi. The discussion of semantics is brief and informal; the emphasis is on how to construct formal proofs. See a logic textbook for a proper account of model theory.
If you seriously intend to construct proofs, memorizing the inference rules is not enough. You must learn the individual characteristics and usage of each rule. Many sample proofs are given; study every line. Work the exercises.
Fundamentals of formal logic
A formal logic or calculus is a game for producing symbolic objects according to given rules. Sometimes the motivation of the rules is vague; with the lambda calculus there ensued a protracted enquiry into the meaning of lambda expressions. But usually the rules are devised with respect to a well-understood meaning or semantics. Too many of us perform plausible derivations using notations that have no precise meaning. Most mathematical theories are interpreted in set theory: each term corresponds to a set; each rule corresponds to a fact about sets. Set theory is taken to be the foundation of everything else. Its axioms are justified by informal but widely accepted intuitions that sets exist, the union of two sets is a set, and so forth.
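To make “rules for producing symbolic objects” concrete, here is what inference rules look like in a natural deduction setting, written in sequent notation with Γ standing for a set of assumptions. These are the standard rules for conjunction, given purely as an illustration rather than as PPλ's official formulation:

```latex
% Standard natural-deduction rules for conjunction (illustrative only):
% the introduction rule combines two proofs; the eliminations recover a conjunct.
\[
  \frac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \wedge B}
  \;(\wedge\text{-intro})
  \qquad
  \frac{\Gamma \vdash A \wedge B}{\Gamma \vdash A}\;(\wedge\text{-elim}_1)
  \qquad
  \frac{\Gamma \vdash A \wedge B}{\Gamma \vdash B}\;(\wedge\text{-elim}_2)
\]
```

Read bottom-up, the introduction rule says: to prove A ∧ B from assumptions Γ, prove A and prove B separately from Γ.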
In this appendix we prove the results concerning the number of S-expressions of a given size that were used in Chapter 5 to show that there are few minimal LISP programs and other results. We have postponed the combinatorial and analytic arguments to here, in order not to interrupt our discussion of program size with material of a rather different mathematical nature. However, the estimates we obtain here of the number of syntactically correct LISP programs of a given size are absolutely fundamental to a discussion of the basic program-size characteristics of LISP. And if we were to discuss another programming language, estimates of the number of different possible programs and outputs of a given size would also be necessary. In fact, in my first paper on program-size complexity [CHAITIN (1966)], I go through an equivalent discussion of the number of different Turing machine programs with n states and m tape symbols, but using quite different methods.
Let us start by stating more precisely what we are studying, and by looking at some examples. Let α be the number of different characters in the alphabet used to form S-expressions, not including the left and right parentheses. In other words, α is the number of atoms, excluding the empty list. In fact α = 126, but let's proceed more generally. We shall study S_n, the number of different S-expressions n characters long that can be formed from these α atoms by grouping them together with parentheses. The only restriction that we need to take into account is that left and right parentheses must balance for the first time precisely at the end of the expression.
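The recurrence behind S_n can be made explicit with a short computation. The sketch below assumes the convention that an S-expression is either one of the α single-character atoms or a left parenthesis, followed by zero or more S-expressions, followed by a matching right parenthesis; whether this matches Chapter 5's conventions in every detail (for example, the treatment of the empty list) is an assumption of the sketch.

```python
# Count S-expressions of each length up to n_max, assuming an S-expression is
# either a single-character atom (alpha choices) or "(" + a sequence of
# S-expressions + ")".  S[n] counts S-expressions of length exactly n;
# L[n] counts sequences of S-expressions of total length exactly n.
def s_counts(n_max, alpha=126):
    S = [0] * (n_max + 1)
    L = [0] * (n_max + 1)
    L[0] = 1                              # the empty sequence
    for n in range(1, n_max + 1):
        S[n] = (alpha if n == 1 else 0) + (L[n - 2] if n >= 2 else 0)
        L[n] = sum(S[k] * L[n - k] for k in range(1, n + 1))
    return S

print(s_counts(4))
# S[1] = 126 (the atoms), S[2] = 1 (the empty list),
# S[3] = 126 (one atom in parentheses), S[4] = 126**2 + 1
```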