Harper, Honsell, and Plotkin present LF (the Logical Framework) as a general framework for the definition of logics. LF provides a uniform way of encoding a logical language, its inference rules, and its proofs. Avron, Honsell, and Mason give a variety of examples of encoding logics in LF. In this paper we describe Elf, a meta-language intended for environments dealing with deductive systems represented in LF.
While this paper is intended to include a full description of the Elf core language, we only state, but do not prove, the most important theorems regarding the basic building blocks of Elf. These proofs are left to a future paper. A preliminary account of Elf has appeared previously. The range of applications of Elf includes theorem proving and proof transformation in various logics, definition and execution of structured operational and natural semantics for programming languages, type checking and type inference, etc. The basic idea behind Elf is to unify logic definition (in the style of LF) with logic programming (in the style of λProlog). It achieves this unification by giving types an operational interpretation, in much the same way that Prolog gives certain formulas (Horn clauses) an operational interpretation. An alternative approach to logic programming in LF has been developed independently by Pym.
Here are some of the salient characteristics of our unified approach to logic definition and meta-programming.
Martin-Löf's type theory is presented in several steps. The kernel is a dependently typed λ-calculus. Then there are schemata for inductive sets and families of sets and for primitive recursive functions and families of functions. Finally, there are set formers (generic polymorphism) and universes. At each step syntax, inference rules, and set-theoretic semantics are given.
Introduction
Usually Martin-Löf's type theory is presented as a closed system with rules for a fixed collection of set formers including Π, Σ, +, Eq, Nn, N, W, and Un. But it is often pointed out that the system is in principle open to extension: we may introduce new sets when there is a need for them. The principle is that a set is by definition inductively generated – it is defined by its introduction rules, which are rules for generating its elements. The elimination rule is determined by the introduction rules and expresses definition by primitive recursion on the way the elements of the set are generated. (Normally the term primitive recursive refers to number-theoretic functions. But it makes sense to use this term generally for the kind of recursion found in Martin-Löf's type theory, since it is recursion on the way the elements of a set are generated. This includes primitive recursive functionals and transfinite recursion on well-orderings. An alternative term would be structural recursion, in analogy with structural induction.)
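This notion of recursion on the way elements are generated can be made concrete with a small sketch (Python is used purely for illustration; the names natrec, add, and fact are our own). The recursor for the natural numbers, whose introduction rules are zero and successor, determines a function from a base case and a step case:

```python
# Primitive recursion over the natural numbers: the eliminator's shape
# mirrors the two introduction rules (zero and successor).
def natrec(n, base, step):
    # base : value for zero
    # step : takes the predecessor i and the recursive result for i
    acc = base
    for i in range(n):
        acc = step(i, acc)
    return acc

# Addition and factorial, defined by primitive recursion alone.
def add(m, n):
    return natrec(n, m, lambda _i, acc: acc + 1)

def fact(n):
    return natrec(n, 1, lambda i, acc: (i + 1) * acc)
```

Structural recursion on other inductive sets (lists, well-orderings) follows the same pattern: one clause per introduction rule.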
Backhouse et al. exhibited a schema for inductive sets that delimits a class of definitions admissible in Martin-Löf's type theory, including all the standard operations for forming small sets except the equality set.
We define an extended version of Nederpelt's calculus which can be used as a logical framework. The extensions have been introduced in order to support the notion of mathematical definition of constants and to internalize the notion of theory. The resulting calculus remains concise and simple, a basic requirement for logical frameworks. The calculus manipulates two kinds of objects: texts, which correspond to λ-expressions, and contexts, which are mainly sequences of variable declarations, constant definitions, or context abbreviations. Basic operations on texts and contexts are provided. It is argued that these operations allow one to structure large theories. An example is provided.
Introduction
This paper introduces the static kernel of a language called DEVA. This language, which has been developed in the framework of the ToolUse Esprit project, is intended to express software development mathematically. The general paradigm followed considers development methods as theories and developments as proofs. Therefore, the kernel of the language should provide a general treatment of formal theories and proofs.
The problem of defining a generic formal system is comparable to the one of defining a general computing language. While, according to Church's thesis, any algorithm can be expressed as a recursive function, one uses higher level languages for the actual programming of computers. Similarly, one could argue that any formal system can be expressed as Post productions, but to use such a formalism as a logical framework is, in practice, inadequate.
The goal of this note is to present a “modular” proof, for various type systems with η-conversion, of the completeness and correctness of an algorithm for testing the conversion of two terms. The proof of completeness is an application of the notion of logical relations (see Statman 1983, which also uses this notion for a proof of Church-Rosser for the simply typed λ-calculus).
An application of our result is the equivalence between two formulations of Type Theory: the one where conversion is a judgement, as in the present version of Martin-Löf set theory, and the one where conversion is defined at the level of raw terms, as in the standard version of LF (for a “conversion-as-judgement” presentation of LF, see Harper 1988). Even if we don't include η-conversion, the equivalence between the “conversion-as-judgement” and “conversion defined on raw terms” formulations appears to be a non-trivial property.
In order to simplify the presentation we limit ourselves to type theory with only Π and one universe. This calculus contains LF. After some motivation, we present the algorithm, the proof of its completeness and, as a corollary, its correctness. Our argument also yields normalisation, Church-Rosser, and the equivalence between the two possible formulations of Type Theory.
Informal motivation
The algorithm
The idea is to compute the weak head-normal forms of the two terms (in an untyped way) and, in order to take care of η-conversion, in the case where one weak head-normal form is an abstraction (λx:A)M and the other, N, is a variable or an application, to compare recursively apply(N,ξ) and M[ξ], where ξ is a fresh variable.
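As an informal illustration of this η-case (not the paper's typed formulation), the comparison can be sketched on untyped terms in Python. The tuple representation and the subst and whnf helpers are our own assumptions, and capture-avoidance is ignored by assuming bound names are globally unique:

```python
import itertools

_fresh = itertools.count()  # supplies fresh variables ξ

def subst(t, x, s):
    """Substitute s for the variable x in term t (no capture handling:
    this sketch assumes globally unique bound names)."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    # 'lam': stop if x is shadowed by the binder
    if t[1] == x:
        return t
    return ('lam', t[1], subst(t[2], x, s))

def whnf(t):
    """Weak head-normal form: reduce head β-redexes only."""
    while t[0] == 'app':
        f = whnf(t[1])
        if f[0] == 'lam':
            t = subst(f[2], f[1], t[2])
        else:
            return ('app', f, t[2])
    return t

def conv(m, n):
    """Test convertibility of m and n up to β and η."""
    m, n = whnf(m), whnf(n)
    if m[0] == 'lam' and n[0] == 'lam':
        xi = ('var', '#%d' % next(_fresh))
        return conv(subst(m[2], m[1], xi), subst(n[2], n[1], xi))
    if m[0] == 'lam':   # η-case: compare M[ξ] with apply(N, ξ)
        xi = ('var', '#%d' % next(_fresh))
        return conv(subst(m[2], m[1], xi), ('app', n, xi))
    if n[0] == 'lam':
        return conv(n, m)
    if m[0] == 'var' and n[0] == 'var':
        return m[1] == n[1]
    if m[0] == 'app' and n[0] == 'app':
        return conv(m[1], n[1]) and conv(m[2], n[2])
    return False
```

The key branch is the one marked η: an abstraction is compared against a variable or application N by recursively comparing its body at a fresh ξ with apply(N, ξ).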
We illustrate the effectiveness of proof transformations which expose the computational content of classical proofs even in cases where it is not apparent. We state without proof a theorem that these transformations apply to proofs in a fragment of type theory and discuss their implementation in Nuprl. We end with a discussion of the applications to Higman's lemma by the second author using the implemented system.
Introduction: Computational content
Informal practice
Sometimes we express computational ideas directly as when we say 2 + 2 reduces to 4 or when we specify an algorithm for solving a problem: “use Euclid's GCD (greatest common divisor) algorithm to reduce this fraction.” At other times we refer only indirectly to a method of computation, as in the following form of Euclid's proof that there are infinitely many primes:
For every natural number n there is a prime p greater than n. To prove this, notice first that every number m has a least prime factor; to find it, just try dividing it by 2, 3, …, m and take the first divisor. In particular n! + 1 has a least prime factor. Call it p. Clearly p cannot be any number between 2 and n since none of those divide n! + 1 evenly. Therefore p > n. QED
This proof implicitly provides an algorithm to find a prime greater than n.
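The implicit algorithm is easy to write out; a minimal sketch in Python (the function names are ours, not Euclid's):

```python
import math

def least_prime_factor(m):
    # Try dividing by 2, 3, ..., m and take the first divisor;
    # the least non-trivial divisor of m >= 2 is necessarily prime.
    for d in range(2, m + 1):
        if m % d == 0:
            return d

def prime_above(n):
    # Euclid's construction: the least prime factor of n! + 1
    # divides none of 2, ..., n, so it must exceed n.
    return least_prime_factor(math.factorial(n) + 1)
```

For example, prime_above(4) computes the least prime factor of 25, namely 5.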
This book contains a collection of papers concerned with logical frameworks. Such frameworks arise in a number of ways when considering the relationship between logic and computation, and indeed the general structure of logical formalisms. In particular, in Computer Science, there is interest in the representation and organization of mathematical knowledge on the computer, and in obtaining computational assistance with its derivation. One would especially like to implement program logics and prove programs correct. Again, there is direct computational content in various logical formalisms, particularly constructive ones. Finally, such issues provoke interest in re-examining purely logical questions.
Logical frameworks arise in two distinct but related senses. First, very many logics are of interest in Computer Science, and great repetition of effort is involved in implementing each. It would therefore be helpful to create a single framework, a kind of meta-logic, which is itself implementable and in which the logics of interest can be represented. Putting the two together yields an implementation of any represented logic.
In the second sense, one chooses a particular “universal” logic which is strong enough to do all that is required, and sticks to it. For example, one might choose a set theory, and do mathematics within that. Both approaches have much in common. Even within a fixed logic there is the need for a descriptive apparatus for particular mathematical theories, notations, derived rules and so on. Providing such is rather similar to providing a framework in the first sense.
By
Leen Helmink, Philips Research Laboratories, P.O. Box 80.000, 5600 JA Eindhoven, the Netherlands,
René Ahn, Philips Research Laboratories, P.O. Box 80.000, 5600 JA Eindhoven, the Netherlands
Edited by
Gerard Huet, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, and G. Plotkin, University of Edinburgh
In this paper, a method is presented for proof construction in Generalised Type Systems. An interactive system that implements the method has been developed. Generalised type systems (GTSs) provide a uniform way to describe and classify type theoretical systems, e.g. systems in the families of AUTOMATH, the Calculus of Constructions, and LF. The method performs unification-based top-down proof construction for generalised type systems, thus offering a well-founded, elegant and powerful underlying formalism for a proof development system. It combines clause resolution with higher-order natural deduction style theorem proving. No theoretical contribution to generalised type systems is claimed.
A type theory presents a set of rules to derive types of objects in a given context with assumptions about the types of primitive objects. The objects and types are expressions in a typed λ-calculus. The propositions-as-types paradigm provides a direct mapping between (higher-order) logic and type theory. In this interpretation, contexts correspond to theories, types correspond to propositions, and objects correspond to proofs of propositions. Type theory has successfully demonstrated its capability to formalise many parts of mathematics in a uniform and natural way. For many generalised type systems, like the systems in the so-called λ-cube, the typing relation is decidable. This permits automatic proof checking, and such proof checkers have been developed for specific type systems.
The problem addressed in this paper is to construct an object in a given context, given its type.
Various languages have been proposed as specification languages for representing a wide variety of logics. The development of typed λ-calculi has been one approach toward this goal. The logical framework (LF), a λ-calculus with dependent types, is one example of such a language. A small subset of intuitionistic logic with quantification over the simply typed λ-calculus has also been proposed as a framework for specifying general logics. The logic of hereditary Harrop formulas with quantification at all non-predicate types, denoted here as hhω, is such a meta-logic. In this paper, we show how to translate specifications in LF into hhω specifications in a direct and natural way, so that correct typing in LF corresponds to intuitionistic provability in hhω. In addition, we demonstrate a direct correspondence between proofs in these two systems. The logic hhω can be implemented using such logic programming techniques as providing operational interpretations to the connectives and implementing unification on λ-terms. As a result, relating these two languages makes it possible to provide direct implementations of proof checkers and theorem provers for logics specified in LF.
Introduction
The design of languages that can express a wide variety of logics has been the focus of much recent work. Such languages attempt to provide a general theory of inference systems that captures uniformities across different logics, so that they can be exploited in implementing theorem provers and proof systems.
This book is a collection of papers presented at the first annual Workshop held under the auspices of the ESPRIT Basic Research Action 3245, “Logical Frameworks: Design, Implementation and Experiment”. It took place at Sophia-Antipolis, France from the 7th to the 11th of May, 1990. Seventy-four people attended the Workshop: one from Japan, six from the United States, and the rest from Europe.
We thank the European Community for the funding which made the Workshop possible. We also thank Gilles Kahn who, with the help of the Service des Relations Extérieures of INRIA, performed a most excellent job of organisation. Finally, we thank the following researchers who acted as referees: R. Constable, T. Coquand, N.G. de Bruijn, P. de Groote, V. Donzeau-Gouge, G. Dowek, P. Dybjer, A. Felty, L. Hallnäs, R. Harper, L. Helmink, F. Honsell, Z. Luo, N. Mendler, C. Paulin, L. Paulson, R. Pollack, D. Pym, F. Rouaix, P. Schröder-Heister, A. Smaill, and B. Werner.
We cannot resist saying a word or two about how these proceedings came into being. Immediately after the Workshop, participants were invited to contribute papers by electronic mail, as LaTeX sources. One of us (Huet) then collected the papers together, largely unedited, and the result was “published electronically” by making the collection a file available worldwide by ftp (a remote file transfer protocol). This seems to have been somewhat of a success, at least in terms of numbers of copies circulated, and perhaps had merit in terms of rapid and widespread availability of recent work.
By
Peter Aczel, Computer Science Department, Manchester University, Manchester M13 9PL,
David P. Carlisle, Computer Science Department, Manchester University, Manchester M13 9PL,
Nax Mendler, Computer Science Department, Manchester University, Manchester M13 9PL
Edited by
Gerard Huet, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, and G. Plotkin, University of Edinburgh
In this paper we describe a version of the LTC (Logical Theory of Constructions) framework, three Martin-Löf type theories, and interpretations of the type theories in the corresponding LTC theories. Then we discuss the implementation of the above in the generic theorem prover Isabelle. An earlier version of the LTC framework was described by Aczel and Mendler.
Introduction
The notion of an open-ended framework of deductive interpreted languages has been formulated previously; in particular, an example was given of a hierarchy of languages Li in the LTC framework. In the first part of this three-part paper, sections 2 to 4, we review this hierarchy of languages and then discuss some issues concerning the framework, which lead to another hierarchy of languages, LTC0, LTC1, LTCW. In the second part, sections 5 and 6, we give three type theories, TT0, TT1, and TTW, and their interpretations in the corresponding LTC languages. In the final part, sections 7 to 9, we document the implementation of the LTC hierarchy in the generic theorem prover Isabelle, developed by Larry Paulson at Cambridge. We also describe a programme for verifying, in Isabelle, the interpretations of the type theories TT0, TT1, and TTW.
The basic LTC framework is one that runs parallel to the ITT framework, where ITT stands for “Intuitionistic Theory of Types”. It is a particular language from the latter framework that has been implemented in the Cornell Nuprl System.
We show how Natural Deduction extended with two replacement operators can provide a framework for defining programming languages, a framework which is more expressive than the usual Operational Semantics presentation in that it permits hypothetical premises. This allows us to do without an explicit environment and store. Instead we use the hypothetical premises to make assumptions about the values of variables. We define the extended Natural Deduction logic using the Edinburgh Logical Framework.
Introduction
The Edinburgh Logical Framework (ELF) provides a formalism for defining Natural Deduction style logics. Natural Deduction is rather more powerful than the notation which is commonly used to define programming languages in “inference-style” Operational Semantics, following Plotkin and others, for example Kahn. So one may ask
“Can a Natural Deduction style be used with advantage to define programming languages?”.
We show here that, with a slight extension, it can, and hence that ELF can be used as a formal meta-language for defining programming languages. However, ELF employs the “judgements as types” paradigm and takes the form of a typed lambda calculus with dependent types. We do not need all of this power here; in this paper we present a slight extension of Natural Deduction as a semantic notation for programming language definition. This extension can itself be defined in ELF.
The inspiration for using a meta-logic for Natural Deduction proofs comes from Martin-Löf.