We define and discuss higher-order unification with a built-in higher-order equational theory. It subsumes unification and matching of simply typed λ-terms at all orders, and we present the major forms and their complexities down to the first-order cases.
Other forms of unification, their properties, and their applications are discussed in surveys by Gallier [64] and Siekmann [175, 176], and in collections of papers [110, 111].
Higher-order equational unifiability uses a definition of higher-order rewriting which can also be used for proofs in higher-order equational logic. We give soundness and completeness results for higher-order equational unification procedures which enable us to define higher-order resolution for CTT formulas of all orders. These results link unification and general model semantics.
We show that pure third-order equational matching is undecidable by a reduction from Hilbert's Tenth Problem. We then discuss the open problem of the decidability of higher-order matching and present several approaches to its solution. We give a higher-order matching algorithm which is sound and terminating. We also present a class of decidable pure third-order matching problems based on the Schwichtenberg-Statman characterization of the λ-definable functions on the simply typed Church numerals, as well as a class of decidable matching problems of arbitrary order; we show that pure third-order matchability is NP-hard by a reduction from propositional satisfiability, discuss resolving the Plotkin-Statman Conjecture, and consider Zaionc's treatment of regular unification problems. All of these approaches suggest that the problem is decidable.
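To make the flavour of such problems concrete, here is a small matching example of our own (second-order, and thus simpler than the third-order problems above): the free variable F must be instantiated so that the two sides become βη-equal.

```latex
% Solve  F\,a =_{\beta\eta} g\,a\,a  for  F : \iota \to \iota,
% where  a : \iota  and  g : \iota \to \iota \to \iota  are constants.
% The problem has exactly four solutions:
F \mapsto \lambda x.\, g\,x\,x, \qquad
F \mapsto \lambda x.\, g\,x\,a, \qquad
F \mapsto \lambda x.\, g\,a\,x, \qquad
F \mapsto \lambda x.\, g\,a\,a.
```

Already at second order a single problem can have several incomparable solutions; at third order and beyond, the difficulty of bounding the search among such solutions is the heart of the decidability question.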
This paper is an amalgam of two introductory lecture courses given at the Summer School. As the title suggests, the aim is to present fundamental notions of Proof Theory in their simplest settings, thus: Completeness and Cut-Elimination in Pure Predicate Logic; the Curry-Howard Correspondence and Normalization in the core part of Natural Deduction; connections to Sequent Calculus and Linear Logic; and applications to the Σ1-Inductive fragment of arithmetic and the synthesis of primitive recursive bounding functions. The authors have tried to preserve a (readable) balance between rigour and informal lecture-note style.
Pure Predicate Logic—Completeness
Classical first order predicate calculus (PC) is formulated here essentially in “Schütte-Ackermann-Tait” style, but with multisets instead of sets of formulas for sequents. It is kept “pure” (i.e., no function symbols) merely for the sake of technical simplicity. The refinement to multiset sequents illuminates the rôle of the so-called structural inferences of contraction and weakening in proof-theoretic arguments.
ABSTRACT. We work in the context of abstract data types, modelled as classes of many-sorted algebras closed under isomorphism. We develop notions of computability over such classes, in particular notions of primitive recursiveness and μ-recursiveness, which generalize the corresponding classical notions over the natural numbers. We also develop classical and intuitionistic formal systems for theories about such data types, and prove (in the case of universal theories) that if an existential assertion is provable in either of these systems, then it has a primitive recursive selection function. It is a corollary that if a μ-recursive scheme is provably total, then it is extensionally equivalent to a primitive recursive scheme. The methods are proof-theoretical, involving cut elimination. These results generalize to an abstract setting previous results of C. Parsons and G. Mints over the natural numbers.
INTRODUCTION
We will examine the provability or verifiability in formal systems of program properties, such as termination or correctness, from the point of view of the general theory of computable functions over abstract data types. In this theory an abstract data type is modelled semantically by a class K of many-sorted algebras, closed under isomorphism, and many equivalent formalisms are used to define computable functions and relations on an algebra A, uniformly for all A ∈ K. Some of these formalisms are generalizations to A and K of sequential deterministic models of computation on the natural numbers.
The method of local predicativity as developed by Pohlers in [10], [11], [12] and extended to subsystems of set theory by Jäger in [4], [5], [6] is a very powerful tool for the ordinal analysis of strong impredicative theories. But up to now it has suffered considerably from the fact that it rests on a large amount of very special ordinal-theoretic prerequisites. This is true even for the most recent (very polished) presentation of local predicativity in Pohlers [15]. The purpose of the present paper is to expose a simplified and conceptually improved version of local predicativity which, besides some very elementary facts on ordinal addition, multiplication, and exponentiation, requires surprisingly little ordinal theory. (All necessary nonelementary ordinal-theoretic prerequisites can be developed from scratch in just two pages, as we show in Section 4.) The most important feature of our new approach, however, seems to be its conceptual clarity and flexibility, and in particular the fact that its basic concepts (i.e. the infinitary system RS∞ and the notion of an H-controlled RS∞-derivation) are in no way tied to any system of ordinal notations or collapsing functions. Our intention with this paper is to make the fascinating field of ‘admissible proof theory’ (created by Jäger and Pohlers) more easily accessible to non-proof-theorists, and to provide a technically and conceptually well developed basis for further research in this area.
This is a collection of ten refereed papers presented at an international Summer School and Conference on Proof Theory held at Bodington Hall, Leeds University between 24th July and 2nd August 1990. The meeting was held under the auspices of the “Logic for Information Technology” (Logfit) initiative of the UK Science and Engineering Research Council, in collaboration with the Leeds Centre for Theoretical Computer Science (CTCS). The principal funding came from SERC Logfit under contract SO/72/90 with additional contributions gratefully received from the British Logic Colloquium and the London Mathematical Society. There were 100 participants representing at least twelve different countries: Belgium, Canada, Estonia, France, Germany, Italy, Japan, Norway, Sweden, Switzerland, USA and UK.
The first three papers printed here represent short lecture courses given in the summer school and are intended to be of a more instructional nature, leading from basic to more advanced levels of ‘pure’ proof theory. The others are conference research papers reflecting a somewhat wider range of topics, and we see no better way of ordering them than alphabetically by author.
The programme of lectures given at the meeting is set out overleaf. Though not all of the invited speakers were able to contribute to this volume we believe that what remains provides rich flavours of a tasty subject.
Suppose a formal proof of ∀x∃y Spec(x, y) is given, where Spec(x, y) is an atomic formula expressing some specification for natural numbers x, y. For any particular number n we then obtain a formal proof of ∃y Spec(n, y). Now the proof-theoretic normalization procedure yields another proof of ∃y Spec(n, y) which is in normal form. In particular, it no longer uses induction axioms, and it also does not contain non-evaluated terms. Hence we can read off, linearly in the size of the normal proof, an instance m for y such that Spec(n, m) holds. In this way a formal proof can be seen as a program, and the central part of implementing this programming language consists in an implementation of the proof-theoretic normalization procedure.
There are many ways to implement normalization. As usual, a crucial point is a good choice of data structures. One possibility is to represent a term as a function (i.e. a SCHEME procedure) of its free variables, and similarly to represent a derivation (in a Gentzen-style system of natural deduction) as a function of its free assumption and object variables. Then substitution is realized as application, and normalization is realized as the built-in evaluation process of SCHEME (or any other language of the LISP family). We are presently experimenting with an implementation along these lines, and the results so far are rather promising. Some details are given in an appendix.
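The representation just described can be sketched in a few lines. The following is an illustrative reconstruction in Python rather than SCHEME; all names in it are our own and not taken from the paper's implementation.

```python
# Illustrative sketch: object-level lambda-terms are represented as
# host-language functions, so substitution becomes application and
# normalization becomes the host language's evaluation.

import itertools

_fresh = itertools.count()

def fresh():
    return f"x{next(_fresh)}"

def app(f, arg):
    """Apply a semantic value: beta-reduce if possible, otherwise build
    a neutral term (a variable applied to arguments)."""
    return f(arg) if callable(f) else ("app", f, arg)

def reify(v):
    """Read a semantic value back into first-order syntax: the normal form."""
    if callable(v):
        x = fresh()                       # invent a fresh bound variable
        return ("lam", x, reify(v(("var", x))))
    if v[0] == "app":
        return ("app", reify(v[1]), reify(v[2]))
    return v                              # a variable ("var", x)

# Example: the Church numeral 2 applied to the identity normalizes to
# the identity, lam x0. x0.
two = lambda f: lambda x: app(f, app(f, x))
identity = lambda x: x
normal_form = reify(app(two, identity))
print(normal_form)                        # -> ('lam', 'x0', ('var', 'x0'))
```

Note how no substitution function ever needs to be written: beta reduction is performed by the host-language call `f(arg)`, exactly the economy the paragraph above attributes to the SCHEME representation.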
This paper discusses proof theoretic characterisations of termination orderings for rewrite systems and compares them with the proof theoretic characterisations of fragments of first order arithmetic.
Rewrite systems arise naturally from systems of equations by orienting the equations into rules of replacement. In particular, when a number theoretic function is introduced by a set of defining equations, as is the case in first order systems of arithmetic, this set of equations can be viewed as a rewrite system which computes the function.
A termination ordering is a well-founded ordering on terms and is used to prove termination of a term rewriting system by showing that the rewrite relation is a subset of the ordering and hence is also well founded thus guaranteeing the termination of any sequence of rewrites.
The successful use of a specific termination ordering in proving termination of a given rewrite system, R, necessarily restricts the form of the rules in R, and, as we show here in specific cases, translates into a restriction on the proof-theoretic complexity of the function computed by R. We shall mainly discuss two termination orderings. The first, the so-called recursive path ordering (recently re-christened the multiset path ordering) of [Der79], is widely known and has been implemented in various theorem provers. The second ordering is a derivative of another well-known ordering, the lexicographic path ordering of [KL80]. This derivative we call the ramified lexicographic path ordering. We shall show that the recursive path ordering and the ramified lexicographic path ordering prove termination of different algorithms yet characterise the same class of number-theoretic functions, namely the primitive recursive functions.
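As an illustration of how such an ordering is applied in practice (a sketch of our own, not code from the paper), the multiset path ordering can be implemented directly from its recursive definition and used to orient the defining equations of addition:

```python
# Sketch of the recursive/multiset path ordering (MPO) on first-order
# terms. Terms are tuples ("f", sub1, ..., subk); variables are plain
# strings. `prec` maps function symbols to integers, larger = higher
# precedence. Names and representation are illustrative.

def mpo_gt(s, t, prec):
    """True iff s >_mpo t under the given precedence."""
    if isinstance(s, str):                      # a variable majorizes nothing
        return False
    f, s_args = s[0], list(s[1:])
    # Case 1: some immediate subterm of s equals or majorizes t.
    if any(si == t or mpo_gt(si, t, prec) for si in s_args):
        return True
    if isinstance(t, str):                      # s > variable only via case 1
        return False
    g, t_args = t[0], list(t[1:])
    # Case 2: head of s is strictly bigger; s must majorize every
    # argument of t.
    if prec[f] > prec[g]:
        return all(mpo_gt(s, tj, prec) for tj in t_args)
    # Case 3: equal heads; compare the argument multisets.
    if prec[f] == prec[g]:
        return multiset_gt(s_args, t_args, prec)
    return False

def multiset_gt(ms, ns, prec):
    """Multiset extension of >_mpo: cancel equal elements, then every
    remaining element of ns must be majorized by some remaining ms element."""
    ms, ns = list(ms), list(ns)
    for n in list(ns):
        if n in ms:
            ms.remove(n)
            ns.remove(n)
    return bool(ms) and all(any(mpo_gt(m, n, prec) for m in ms) for n in ns)

# Orient the usual defining equations of addition:
#   add(x, 0) -> x          add(x, s(y)) -> s(add(x, y))
prec = {"add": 1, "s": 0, "0": 0}
lhs = ("add", "x", ("s", "y"))
rhs = ("s", ("add", "x", "y"))
print(mpo_gt(lhs, rhs, prec))   # each rule decreases in the MPO
```

Since both rules of the addition system are decreasing in the ordering, every rewrite sequence is decreasing in a well-founded order and hence terminates; this is exactly the use of termination orderings described above.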
The Lacedæmonians [advanced] slowly and to the music of
many flute-players […], meant to make them advance evenly,
stepping in time, without breaking their order, as large
armies are apt to do in the moment of engaging.
—Thucydides
The routing algorithms in Chapter 5 can be converted into efficient asynchronous algorithms by replacing the global clock with a synchronization scheme based on message passing. We also demonstrate that asynchronous fsra has low sensitivity to variations in link and processor speeds.
Introduction
The assumption of synchronism often greatly simplifies the design of algorithms, be they sequential or parallel. Many computation models — for example, the RAM (“Random Access Machine”) [5] in the sequential setting and the PRAM in the parallel setting — assume the existence of a global clock. But this assumption becomes less desirable as the number of processors increases. For one thing, a global clock introduces a single point of failure. A global clock also restrains each processor's degree of autonomy and renders the machine unable to exploit differences in running speed [42, 192], limiting the overall speed, so to speak, to that of the “slowest” component instead of the “average” one, thus wasting cycles. Tight synchronization also limits the size of the parallel computer, since it takes time to distribute the clock signal to the whole system [316].
Our routing schemes in Chapter 5 proceed in epochs and assume synchronism. In fact, the very definition of ECS assumes a global clock to synchronize epochs. We show in this chapter that, with synchronization done via message passing, ECSs can be made asynchronous without loss of efficiency and without global control. Much work has been done in this area; see, for example, [31, 32, 33].
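To indicate how message passing can stand in for the global clock (a toy sketch of our own, not the chapter's algorithm), each processor can announce the end of its current epoch to its neighbours and wait for the matching announcements before proceeding:

```python
# Toy epoch synchronizer via message passing: a processor finishes its
# work for epoch e, notifies every neighbour, and starts epoch e+1 only
# after every neighbour has reported finishing epoch e. All names here
# are illustrative.

import threading, queue
from collections import Counter

def run_processor(pid, neighbours, inboxes, epochs, trace, lock):
    pending = Counter()                      # announcements seen, per epoch
    for epoch in range(epochs):
        with lock:
            trace.append((pid, epoch))       # stand-in for the epoch's work
        for n in neighbours[pid]:            # announce: finished this epoch
            inboxes[n].put((pid, epoch))
        while pending[epoch] < len(neighbours[pid]):
            sender, e = inboxes[pid].get()   # may belong to a later epoch
            pending[e] += 1                  # buffer it; never discard

neighbours = {0: [1], 1: [0, 2], 2: [1]}     # a three-node path
inboxes = {p: queue.Queue() for p in neighbours}
trace, lock = [], threading.Lock()
threads = [threading.Thread(target=run_processor,
                            args=(p, neighbours, inboxes, 3, trace, lock))
           for p in neighbours]
for t in threads:
    t.start()
for t in threads:
    t.join()
# In `trace`, no processor enters epoch e+1 before each of its
# neighbours has completed epoch e.
```

Because a neighbour cannot get more than one epoch ahead, buffering out-of-epoch announcements in `pending` suffices; no processor ever consults a clock, yet local epoch counters stay within one of each other across every link.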