By
Leen Helmink, Philips Research Laboratories, P.O. Box 80.000, 5600 JA Eindhoven, the Netherlands,
René Ahn, Philips Research Laboratories, P.O. Box 80.000, 5600 JA Eindhoven, the Netherlands
Edited by
Gerard Huet, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt; G. Plotkin, University of Edinburgh
In this paper, a method is presented for proof construction in Generalised Type Systems, and an interactive system that implements the method has been developed. Generalised type systems (GTSs) provide a uniform way to describe and classify type-theoretical systems, e.g. systems in the families of AUTOMATH, the Calculus of Constructions, and LF. The method performs unification-based top-down proof construction for generalised type systems, thus offering a well-founded, elegant and powerful underlying formalism for a proof development system. It combines clause resolution with higher-order natural-deduction-style theorem proving. No theoretical contribution to generalised type systems is claimed.
A type theory presents a set of rules to derive the types of objects in a given context, i.e. under assumptions about the types of primitive objects. The objects and types are expressions in typed λ-calculus. The propositions-as-types paradigm provides a direct mapping between (higher-order) logic and type theory: in this interpretation, contexts correspond to theories, types correspond to propositions, and objects correspond to proofs of propositions. Type theory has successfully demonstrated its capability to formalise many parts of mathematics in a uniform and natural way. For many generalised type systems, such as the systems in the so-called λ-cube, the typing relation is decidable. This permits automatic proof checking, and such proof checkers have been developed for specific type systems.
The problem addressed in this paper is to construct an object in a given context, given its type.
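For illustration (our example, not one taken from the paper): in a context containing A : Prop, writing Prop for the sort of propositions, constructing an object of the type A → A amounts, under the propositions-as-types reading, to proving the implication A → A, and the term

    λx:A. x  :  A → A

is such an object, and hence such a proof. A top-down method works backwards from the required type towards a term of that type.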
Various languages have been proposed as specification languages for representing a wide variety of logics. The development of typed λ-calculi has been one approach toward this goal. The Logical Framework (LF), a λ-calculus with dependent types, is one example of such a language. A small subset of intuitionistic logic with quantification over the simply typed λ-calculus has also been proposed as a framework for specifying general logics. The logic of hereditary Harrop formulas with quantification at all non-predicate types, denoted here as hhω, is such a meta-logic. In this paper, we show how to translate specifications in LF into hhω specifications in a direct and natural way, so that correct typing in LF corresponds to intuitionistic provability in hhω. In addition, we demonstrate a direct correspondence between proofs in these two systems. The logic hhω can be implemented using such logic programming techniques as providing operational interpretations to the connectives and implementing unification on λ-terms. As a result, relating these two languages makes it possible to provide direct implementations of proof checkers and theorem provers for logics specified in LF.
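To give the flavour of such a translation (an illustrative sketch of ours, not the paper's exact encoding, with side conditions such as the well-formedness of A omitted): the LF typing rule for λ-abstraction might be mirrored by a hereditary Harrop formula in which the hypothetical typing of the bound variable becomes an embedded implication under a universal quantifier,

    ∀A ∀B ∀M. (∀x. hastype(x, A) ⊃ hastype(M x, B x)) ⊃ hastype(λx:A. M x, Πx:A. B x)

where hastype is a hypothetical predicate encoding the LF typing judgement. Goal-directed search over such formulas, together with unification on λ-terms, is what allows a logic programming implementation to act as a type checker for the encoded LF signature.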
Introduction
The design of languages that can express a wide variety of logics has been the focus of much recent work. Such languages attempt to provide a general theory of inference systems that captures uniformities across different logics, so that they can be exploited in implementing theorem provers and proof systems.
This book is a collection of papers presented at the first annual Workshop held under the auspices of the ESPRIT Basic Research Action 3245, “Logical Frameworks: Design, Implementation and Experiment”. It took place at Sophia-Antipolis, France from the 7th to the 11th of May, 1990. Seventy-four people attended the Workshop: one from Japan, six from the United States, and the rest from Europe.
We thank the European Community for the funding which made the Workshop possible. We also thank Gilles Kahn who, with the help of the Service des Relations Extérieures of INRIA, performed a most excellent job of organisation. Finally, we thank the following researchers who acted as referees: R. Constable, T. Coquand, N.G. de Bruijn, P. de Groote, V. Donzeau-Gouge, G. Dowek, P. Dybjer, A. Felty, L. Hallnäs, R. Harper, L. Helmink, F. Honsell, Z. Luo, N. Mendler, C. Paulin, L. Paulson, R. Pollack, D. Pym, F. Rouaix, P. Schröder-Heister, A. Smaill, and B. Werner.
We cannot resist saying a word or two about how these proceedings came into being. Immediately after the Workshop, participants were invited to contribute papers by electronic mail, as LaTeX sources. One of us (Huet) then collected the papers together, largely unedited, and the result was “published electronically” by making the collection available worldwide as a file by ftp (a remote file transfer protocol). This seems to have been somewhat of a success, at least in terms of the number of copies circulated, and perhaps had merit in terms of rapid and widespread availability of recent work.
By
Peter Aczel, Computer Science Department, Manchester University, Manchester M13 9PL,
David P. Carlisle, Computer Science Department, Manchester University, Manchester M13 9PL,
Nax Mendler, Computer Science Department, Manchester University, Manchester M13 9PL
Edited by
Gerard Huet, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt; G. Plotkin, University of Edinburgh
In this paper we describe a version of the LTC (Logical Theory of Constructions) framework, three Martin-Löf type theories, and interpretations of the type theories in the corresponding LTC theories. We then discuss the implementation of the above in the generic theorem prover Isabelle. An earlier version of the LTC framework was described by Aczel and Mendler.
Introduction
In earlier work, the notion of an open-ended framework of deductive interpreted languages is formulated, and in particular an example is given of a hierarchy of languages Lᵢ in the LTC framework. In the first part of this three-part paper, sections 2 to 4, we review this hierarchy of languages and then discuss some issues concerning the framework, which lead to another hierarchy of languages, LTC0, LTC1, LTCW. In the second part, sections 5 and 6, we give three type theories, TT0, TT1, and TTW, and their interpretations in the corresponding LTC language. In the final part, sections 7 to 9, we document the implementation of the LTC hierarchy in the generic theorem prover Isabelle, developed by Larry Paulson at Cambridge. We also describe a programme for verifying, in Isabelle, the interpretations of the type theories TT0, TT1 and TTW.
The basic LTC framework is one that runs parallel to the ITT framework, where ITT stands for “Intuitionistic Theory of Types”. It is a particular language from the latter framework that has been implemented in the Cornell Nuprl System.
We show how Natural Deduction extended with two replacement operators can provide a framework for defining programming languages, a framework which is more expressive than the usual Operational Semantics presentation in that it permits hypothetical premises. This allows us to do without an explicit environment and store. Instead we use the hypothetical premises to make assumptions about the values of variables. We define the extended Natural Deduction logic using the Edinburgh Logical Framework.
Introduction
The Edinburgh Logical Framework (ELF) provides a formalism for defining Natural Deduction style logics. Natural Deduction is rather more powerful than the notation which is commonly used to define programming languages in “inference-style” Operational Semantics, following Plotkin and others, for example Kahn. So one may ask
“Can a Natural Deduction style be used with advantage to define programming languages?”.
We show here that, with a slight extension, it can, and hence that ELF can be used as a formal meta-language for defining programming languages. However, ELF employs the “judgements as types” paradigm and takes the form of a typed lambda calculus with dependent types. We do not need all this power here, and in this paper we present a slight extension of Natural Deduction as a semantic notation for programming language definition. This extension can itself be defined in ELF.
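As a rough illustration of the idea (our sketch, not a rule from the paper), an evaluation rule for a local binding can carry a hypothetical premise recording the value of the bound variable, so that no explicit environment or store is threaded through the rules:

                             [x ⇒ v₁]
                                 ⋮
        e₁ ⇒ v₁              e₂ ⇒ v₂
        ──────────────────────────────
           (let x = e₁ in e₂) ⇒ v₂

Here the bracketed premise x ⇒ v₁ is an assumption that may be used, and is discharged, within the derivation of e₂ ⇒ v₂, exactly as hypotheses are handled in Natural Deduction.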
The inspiration for using a meta-logic for Natural Deduction proofs comes from Martin-Löf.
It is to be expected that logical frameworks will become more and more important in the near future, since they can set the stage for an integrated treatment of verification systems for large areas of the mathematical sciences (which may contain logic, mathematics, and mathematical constructions in general, such as computer software and even computer hardware). It seems that the moment has come to try to get to some kind of a unification of the various systems that have been proposed.
Over the years there has been a tendency to strengthen the frameworks by rules that enrich the notion of definitional equality, thus causing impurities in the backbones of those frameworks: the typed lambda calculi. In this paper a plea is made for the opposite direction: to expel those impurities from the framework, and to replace them by material in the books, where the role of definitional equality is taken over by (possibly strong) book equality.
Introduction
Verification systems
A verification system consists of
(i) a framework, to be called the frame, which defines how mathematical material (in the wide sense) can be written in the form of books, such that the correctness of those books is decidable by means of an algorithm (the checker),
(ii) a set of basic rules (axioms) that the user of the frame can proclaim in his books as a general basis for further work.
By
David Basin, Department of Artificial Intelligence, University of Edinburgh, Edinburgh, Scotland,
Matt Kaufmann, Computational Logic, Inc., Austin, Texas 78703, USA
Edited by
Gerard Huet, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt; G. Plotkin, University of Edinburgh
We use an example to compare the Boyer-Moore Theorem Prover and the Nuprl Proof Development System. The respective machine verifications of a version of Ramsey's theorem illustrate similarities and differences between the two systems. The proofs are compared using both quantitative and non-quantitative measures, and we examine difficulties in making such comparisons.
Introduction
Over the last 25 years, a large number of logics and systems have been devised for machine verified mathematical development. These systems vary significantly in many important ways, including: underlying philosophy, object-level logic, support for meta-level reasoning, support for automated proof construction, and user interface. A summary of some of these systems, along with a number of interesting comments about issues (such as differences in logics, proof power, theory construction, and styles of user interaction), may be found in Lindsay's article. The Kemmerer study compares the use of four software verification systems (all based on classical logic) on particular programs.
In this report we compare two interactive systems for proof development and checking: the Boyer-Moore Theorem Prover and the Nuprl Proof Development System. We have based our comparison on similar proofs of a specific theorem: the finite exponent two version of Ramsey's theorem (explained in Section 2). The Boyer-Moore Theorem Prover is a powerful (by current standards) heuristic theorem prover for a quantifier-free variant of first-order Peano arithmetic with additional data types.
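For reference, in one common formulation the finite, exponent-two version of Ramsey's theorem states that for every k there is an n such that, however the 2-element subsets of an n-element set are partitioned into two classes, there is a subset of size k all of whose 2-element subsets fall in the same class (“exponent two” referring to the fact that it is pairs, rather than larger subsets, that are partitioned); the precise form verified in the two systems is the one explained in Section 2 of the paper.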
In this chapter, we will be putting the results we proved in Chapter 6 to work. We will develop algorithms to solve a variety of optimization problems, all important in their own right.
The first is to minimize the cost of a network joining together several nodes. This can always be achieved by using what is called a ‘greedy algorithm’.
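As an illustration of the greedy idea (a minimal sketch in Python rather than the book's Modula-2, with a hypothetical list of edges and costs), repeatedly pick the cheapest remaining link that does not create a cycle:

    def greedy_network(nodes, edges):
        """Greedy minimum-cost spanning network (Kruskal-style sketch).
        edges is a list of (cost, u, v) triples; nodes is an iterable of node names."""
        parent = {n: n for n in nodes}          # each node starts in its own component

        def find(n):                            # follow parent links to the component root
            while parent[n] != n:
                n = parent[n]
            return n

        chosen = []
        for cost, u, v in sorted(edges):        # cheapest edges first: the greedy step
            ru, rv = find(u), find(v)
            if ru != rv:                        # keep the edge only if it joins two components
                parent[ru] = rv
                chosen.append((u, v, cost))
        return chosen

    # Example: joining four towns with the cheapest total length of road.
    print(greedy_network("ABCD", [(1, "A", "B"), (4, "A", "C"), (2, "B", "C"), (5, "C", "D"), (3, "B", "D")]))

The greedy step never needs to reconsider an earlier choice, which is what makes this problem easier than most of the others in the chapter.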
Another problem is to find the distance between any two nodes along a given network, say a road network. Two simple iterative algorithms exist for this problem. A related problem is to find the path of longest length between two vertices of an acyclic directed graph. This arises in certain types of sequencing problems, where the edges represent elapsed times. We shall see that one of our algorithms for shortest paths can easily be adapted to solve this problem.
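One well-known iterative scheme for the distance problem (again a Python sketch with a hypothetical distance matrix, not necessarily either of the two algorithms the book has in mind) relaxes every pair of nodes through each possible intermediate node in turn; on an acyclic network, replacing the minimisation by a maximisation gives the adaptation to longest paths mentioned above:

    INF = float("inf")

    def all_shortest_distances(dist):
        """Floyd-style all-pairs shortest distances.
        dist is a square matrix: dist[i][j] is the direct road length from i to j,
        INF if there is no direct road, and 0 on the diagonal. Modified in place."""
        n = len(dist)
        for k in range(n):                 # allow node k as an intermediate stop
            for i in range(n):
                for j in range(n):
                    via_k = dist[i][k] + dist[k][j]
                    if via_k < dist[i][j]:
                        dist[i][j] = via_k
        return dist

    # Example: four towns, with distances along direct roads only.
    roads = [[0, 3, INF, 7],
             [3, 0, 2, INF],
             [INF, 2, 0, 1],
             [7, INF, 1, 0]]
    print(all_shortest_distances(roads))   # the 0-to-3 entry becomes 6 (via towns 1 and 2)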
A different type of problem is exemplified by the construction of a timetable, given simple compatibility constraints. We can model this by colouring the vertices of a graph, but the best we can achieve is a heuristic algorithm, not optimal but just reasonably efficient.
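A typical heuristic of this kind (sketched here in Python with hypothetical input; the book's own treatment may differ in detail) colours the vertices one at a time, giving each the smallest colour not already used by a neighbour:

    def greedy_colouring(adj):
        """Sequential (greedy) vertex colouring.
        adj maps each vertex to the set of vertices it clashes with.
        Returns a dict vertex -> colour number; not guaranteed optimal."""
        colour = {}
        for v in adj:                                    # consider vertices in some fixed order
            used = {colour[u] for u in adj[v] if u in colour}
            c = 0
            while c in used:                             # smallest colour not used by a neighbour
                c += 1
            colour[v] = c
        return colour

    # Example: five classes; an edge means two classes share students, so they
    # must not be timetabled in the same slot (a colour plays the role of a time slot).
    clashes = {"maths": {"physics", "logic"},
               "physics": {"maths", "computing"},
               "logic": {"maths", "computing"},
               "computing": {"physics", "logic"},
               "art": set()}
    print(greedy_colouring(clashes))

The number of colours used depends on the order in which the vertices are considered, which is precisely why the method is a heuristic rather than an optimal algorithm.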
We should note that Warshall's algorithm can also be interpreted as a graphical algorithm to calculate strong components, but that was treated when we studied transitive closures.
Implementations of these algorithms in Modula-2 can be found in the appendices.
This chapter will introduce a far-reaching generalization of the concept of a function. Its definition will reflect the fact that it could be implemented on a computer by a list with two columns, one column with entries from one set, and the other column with entries from a second set. This is a simple example of a relational database.
The idea of a cartesian product of two sets introduced in Chapter 3 is a very powerful one, and will enable us to considerably extend the range of applications we can model.
Example 4.1 In a Modula-2 program, several procedures Proc_1, Proc_2, Proc_3, …, are defined. In the definition of Proc_1, there are calls to both Proc_2 and Proc_1 itself. In Proc_2, there are calls to Proc_1 and Proc_3. In Proc_3, there are calls to Proc_2, and Proc_5, and so on. It is required to find the exact dependency of each procedure on any others, both directly and indirectly.
If we can tabulate the direct dependencies, then we can find all of the indirect ones too, by chasing through chains of direct calls. To find the effect of a call of any procedure, we merely need to create a list or database of pairs of procedure names, listing (Proc_i, Proc_j) whenever Proc_i calls Proc_j. The list is as in Figure 4.1.
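Chasing the chains of direct calls amounts to forming the transitive closure of the “calls directly” relation. A small Python sketch in the style of Warshall's algorithm (the book works in Modula-2, and the hypothetical table below only echoes the example, since Figure 4.1 is not reproduced here) might look like this:

    def transitive_closure(calls):
        """calls maps each procedure to the set of procedures it calls directly.
        Returns, for every procedure, the set it depends on directly or indirectly."""
        procs = list(calls)
        reach = {p: set(calls[p]) for p in procs}
        for k in procs:                        # Warshall-style: allow k as an intermediate call
            for i in procs:
                if k in reach[i]:
                    reach[i] |= reach[k]
        return reach

    # Hypothetical direct-call table in the spirit of the example above.
    direct = {"Proc_1": {"Proc_2", "Proc_1"},
              "Proc_2": {"Proc_1", "Proc_3"},
              "Proc_3": {"Proc_2", "Proc_5"},
              "Proc_5": set()}
    print(transitive_closure(direct)["Proc_3"])   # every procedure a call of Proc_3 may reach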
Computing was historically taught as a branch of mathematics, usually of applied mathematics if a distinction was made. This family union came to an end when computer science diverged from mere numerical calculations towards more general objects such as data records, parse trees, and also the theory of how a computer works. The emphasis has recently become more mathematical, but with a different sort of mathematics. Also, computer science has itself spawned its own offspring, in the shape of software engineering and information technology.
Mathematics departments accustomed to teaching first year single honours mathematics undergraduates normally teach them a course in continuous mathematics and one on abstract algebra, or linear algebra, or both. Computer scientists have been pressing for their first year to be taught mathematics which is more relevant to the current needs of that discipline, and, in particular, some discrete mathematics. The content of discrete mathematics is broadly similar to what used to be known as combinatorics, but also includes topics from the foundations of mathematics, such as logic and set theory.
This book is designed as a course in discrete mathematics for first year computer scientists or software engineers in universities and colleges of further education. The book should form the basis of a full, one year option of two lectures a week, either as a subsidiary course to computer science, or as part of a mathematics first year option, say replacing part or all of the algebra normally taught there.
In this chapter, we shall investigate the important class of relations known as mappings or functions. These are relations between two sets such that every possible element of the first set appears in one and only one ordered pair. We can regard the relation as a list, with the first column listing the elements in the domain, and the second listing the values of the function. A mapping is usually specified either by giving a rule for which y appears in each pair (x, y), or, for finite sets, by listing the value of the mapping for each value of x.
Example 5.1 The relation R = {(n, n²): n ∈ ℤ} is a mapping, and the ‘rule’ is to form the pair containing n as first component, square n and take the result as second component.
Example 5.2 Consider the Modula-2 declaration
f: ARRAY [1..n] OF INTEGER.
This produces n integers f[1], f[2], …, f[n]. These can be regarded as the values of a function f(m) which is only defined when 1 ≤ m ≤ n, and this function is represented in the computer by a list f of its values. If n is the number of employees in a firm, m is a payroll number, and f(m) is the salary of employee m, this representation of the salary function would be necessary if there were no obvious formula relating m to f(m).
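A minimal sketch of the same idea in Python (the book's declaration is in Modula-2, and the salary figures below are invented): the function is stored as a table of its values rather than given by a formula:

    # Hypothetical salary table: salary[m - 1] is the salary of employee number m.
    salary = [23500, 41000, 37250]        # employees 1..n with n = 3

    def f(m):
        """The salary function f(m), defined only for 1 <= m <= len(salary)."""
        if not 1 <= m <= len(salary):
            raise ValueError("payroll number out of range")
        return salary[m - 1]

    print(f(2))    # the value of the function at m = 2, read off from the table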
Discrete mathematics is the study of those parts of mathematics which do not require any knowledge of limits, convergence, differentiation, and so on. It encompasses most of the foundations of mathematics, such as logic, set theory, relations, and also graph theory, formal language theory and an indeterminate chunk of abstract algebra.
The boundaries are necessarily vague, as they are in any subject, and we can never be sure, as our study progresses, that we will not need some result from another area. Anyone studying the time complexity of sorting algorithms would find it difficult not to use some ideas from the calculus; both logic and formal languages subsume the whole of mathematics; the study of finite error correcting codes leads into some sophisticated uses of matrices and polynomial algebras.
Mathematics provides us with a way of describing the so called ‘real world’ in an accurate, concise and unambiguous way. We extract the properties which we wish to describe, write down a few mathematical relations, and then work algebraically with those relations. As long as the mathematics and the object of our study have those common properties, any deduction we make in the mathematical model can be translated back to the real world.
It does not matter whether the mathematical description is close to the way a task is implemented. A common mistake is to think in terms of concrete realizations rather than properties.