This paper is an amalgam of two introductory lecture courses given at the Summer School. As the title suggests, the aim is to present fundamental notions of Proof Theory in their simplest settings, thus: Completeness and Cut-Elimination in Pure Predicate Logic; the Curry-Howard Correspondence and Normalization in the core part of Natural Deduction; connections to Sequent Calculus and Linear Logic; and applications to the Σ1-Inductive fragment of arithmetic and the synthesis of primitive recursive bounding functions. The authors have tried to preserve a (readable) balance between rigour and informal lecture-note style.
Pure Predicate Logic—Completeness
Classical first order predicate calculus (PC) is formulated here essentially in “Schütte-Ackermann-Tait” style, but with multisets instead of sets of formulas for sequents. It is kept “pure” (i.e., no function symbols) merely for the sake of technical simplicity. The refinement to multiset sequents illuminates the rôle of the so-called structural inferences of contraction and weakening in proof-theoretic arguments.
ABSTRACT. We work in the context of abstract data types, modelled as classes of many-sorted algebras closed under isomorphism. We develop notions of computability over such classes, in particular notions of primitive recursiveness and μ-recursiveness, which generalize the corresponding classical notions over the natural numbers. We also develop classical and intuitionistic formal systems for theories about such data types, and prove (in the case of universal theories) that if an existential assertion is provable in either of these systems, then it has a primitive recursive selection function. It is a corollary that if a μ-recursive scheme is provably total, then it is extensionally equivalent to a primitive recursive scheme. The methods are proof-theoretical, involving cut elimination. These results generalize to an abstract setting previous results of C. Parsons and G. Mints over the natural numbers.
INTRODUCTION
We will examine the provability or verifiability in formal systems of program properties, such as termination or correctness, from the point of view of the general theory of computable functions over abstract data types. In this theory an abstract data type is modelled semantically by a class K of many-sorted algebras, closed under isomorphism, and many equivalent formalisms are used to define computable functions and relations on an algebra A, uniformly for all A ∈ K. Some of these formalisms are generalizations to A and K of sequential deterministic models of computation on the natural numbers.
The method of local predicativity as developed by Pohlers in [10],[11],[12] and extended to subsystems of set theory by Jäger in [4],[5],[6] is a very powerful tool for the ordinal analysis of strong impredicative theories. Up to now, however, it has suffered considerably from the fact that it is based on a large amount of very special ordinal-theoretic prerequisites. This is true even for the most recent (very polished) presentation of local predicativity in (Pohlers [15]). The purpose of the present paper is to expose a simplified and conceptually improved version of local predicativity which – besides some very elementary facts on ordinal addition, multiplication, and exponentiation – requires only amazingly little ordinal theory. (All necessary nonelementary ordinal-theoretic prerequisites can be developed from scratch on just two pages, as we will show in section 4.) The most important feature of our new approach, however, seems to be its conceptual clarity and flexibility, and in particular the fact that its basic concepts (i.e. the infinitary system RS∞ and the notion of an H-controlled RS∞-derivation) are in no way related to any system of ordinal notations or collapsing functions. Our intention with this paper is to make the fascinating field of ‘admissible proof theory’ (created by Jäger and Pohlers) more easily accessible to non-proof-theorists, and to provide a technically and conceptually well-developed basis for further research in this area.
This is a collection of ten refereed papers presented at an international Summer School and Conference on Proof Theory held at Bodington Hall, Leeds University between 24th July and 2nd August 1990. The meeting was held under the auspices of the “Logic for Information Technology” (Logfit) initiative of the UK Science and Engineering Research Council, in collaboration with the Leeds Centre for Theoretical Computer Science (CTCS). The principal funding came from SERC Logfit under contract SO/72/90 with additional contributions gratefully received from the British Logic Colloquium and the London Mathematical Society. There were 100 participants representing at least twelve different countries: Belgium, Canada, Estonia, France, Germany, Italy, Japan, Norway, Sweden, Switzerland, USA and UK.
The first three papers printed here represent short lecture courses given in the summer school and are intended to be of a more instructional nature, leading from basic to more advanced levels of ‘pure’ proof theory. The others are conference research papers reflecting a somewhat wider range of topics, and we see no better way of ordering them than alphabetically by author.
The programme of lectures given at the meeting is set out overleaf. Though not all of the invited speakers were able to contribute to this volume we believe that what remains provides rich flavours of a tasty subject.
Suppose a formal proof of ∀x∃y Spec(x, y) is given, where Spec(x, y) is an atomic formula expressing some specification for natural numbers x, y. For any particular number n we then obtain a formal proof of ∃y Spec(n, y). Now the proof-theoretic normalization procedure yields another proof of ∃y Spec(n, y) which is in normal form. In particular, it does not use induction axioms any more, and it also does not contain non-evaluated terms. Hence we can read off, linearly in the size of the normal proof, an instance m for y such that Spec(n, m) holds. In this way a formal proof can be seen as a program, and the central part in implementing this programming language consists in an implementation of the proof-theoretic normalization procedure.
There are many ways to implement normalization. As usual, a crucial point is a good choice of the data structures. One possibility is to represent a term as a function (i.e. a SCHEME procedure) of its free variables, and similarly to represent a derivation (in a Gentzen-style system of natural deduction) as a function of its free assumption and object variables. Then substitution is realized as application, and normalization is realized as the built-in evaluation process of SCHEME (or any other language of the LISP family). We are presently experimenting with an implementation along these lines, and the results so far are rather promising. Some details are given in an appendix.
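To make the representation concrete, here is a minimal sketch of this normalization-by-evaluation idea for the untyped lambda calculus, written in Haskell rather than SCHEME; the names (Term, Value, eval, quote, normalize) are ours and are not taken from the implementation described above. Binders are represented by host-language functions, so substitution is just function application and normalization reuses the host evaluator.

    -- Minimal normalization-by-evaluation for the untyped lambda calculus.
    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    data Value   = VLam (Value -> Value) | VNeutral Neutral
    data Neutral = NVar String | NApp Neutral Value

    -- Evaluate a term in an environment; unknown variables stay neutral.
    eval :: [(String, Value)] -> Term -> Value
    eval env (Var x)   = maybe (VNeutral (NVar x)) id (lookup x env)
    eval env (Lam x b) = VLam (\v -> eval ((x, v) : env) b)
    eval env (App f a) = case eval env f of
      VLam g     -> g (eval env a)
      VNeutral n -> VNeutral (NApp n (eval env a))

    -- Read a value back into a normal-form term, inventing fresh names.
    quote :: Int -> Value -> Term
    quote i (VLam f)     = let x = "v" ++ show i
                           in Lam x (quote (i + 1) (f (VNeutral (NVar x))))
    quote i (VNeutral n) = quoteN i n

    quoteN :: Int -> Neutral -> Term
    quoteN _ (NVar x)   = Var x
    quoteN i (NApp n v) = App (quoteN i n) (quote i v)

    -- Normalization is evaluation followed by read-back.
    normalize :: Term -> Term
    normalize = quote 0 . eval []

For instance, normalize (App (Lam "x" (Var "x")) (Var "y")) reduces the redex inside the host language and reads back Var "y".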
This paper discusses proof theoretic characterisations of termination orderings for rewrite systems and compares them with the proof theoretic characterisations of fragments of first order arithmetic.
Rewrite systems arise naturally from systems of equations by orienting the equations into rules of replacement. In particular, when a number theoretic function is introduced by a set of defining equations, as is the case in first order systems of arithmetic, this set of equations can be viewed as a rewrite system which computes the function.
A termination ordering is a well-founded ordering on terms. It is used to prove termination of a term rewriting system by showing that the rewrite relation is a subset of the ordering and hence is itself well-founded, which guarantees the termination of any sequence of rewrites.
The successful use of a specific termination ordering in proving termination of a given rewrite system, R, necessarily restricts the form of the rules in R, and, as we show here in specific cases, this translates into a restriction on the proof-theoretic complexity of the function computed by R. We shall mainly discuss two termination orderings. The first, the so-called recursive path ordering (recently re-christened the multiset path ordering) of [Der79], is widely known and has been implemented in various theorem provers. The second ordering is a derivative of another well-known ordering, the lexicographic path ordering of [KL80]. This derivative we call the ramified lexicographic path ordering. We shall show that the recursive path ordering and the ramified lexicographic path ordering prove termination of different algorithms yet characterise the same class of number-theoretic functions, namely the primitive recursive functions.
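For illustration, the recursive (multiset) path ordering can be stated in a few lines of code. The following Haskell fragment is our own rendering of the standard textbook definition, not code from the paper:

    import Data.List (delete)

    data Term = Var String | Fun String [Term]
      deriving Eq

    -- A strict precedence on function symbols, e.g. "plus" above "s".
    type Prec = String -> String -> Bool

    -- gtRpo prec s t: s is greater than t in the multiset path ordering.
    gtRpo :: Prec -> Term -> Term -> Bool
    gtRpo prec s t = case (s, t) of
      (Var _, _)       -> False          -- a variable dominates nothing
      (Fun _ _, Var x) -> occurs x s     -- s > x iff x occurs in s
      (Fun f ss, Fun g ts)
        | any (\si -> si == t || gtRpo prec si t) ss -> True  -- an argument covers t
        | prec f g  -> all (gtRpo prec s) ts                  -- head symbol is bigger
        | f == g    -> gtMul (gtRpo prec) ss ts               -- compare argument multisets
        | otherwise -> False

    occurs :: String -> Term -> Bool
    occurs x (Var y)    = x == y
    occurs x (Fun _ ts) = any (occurs x) ts

    -- Multiset extension: after cancelling common elements, every remaining
    -- right-hand element must be dominated by some remaining left-hand element.
    gtMul :: (Term -> Term -> Bool) -> [Term] -> [Term] -> Bool
    gtMul gt ss ts = not (null ss') && all (\u -> any (`gt` u) ss') ts'
      where
        (ss', ts') = cancel ss ts
        cancel [] ys = ([], ys)
        cancel (x:xs) ys
          | x `elem` ys = cancel xs (delete x ys)
          | otherwise   = let (xs', ys') = cancel xs ys in (x : xs', ys')

With a precedence in which plus is greater than s, both rules of the usual addition system, plus(x, 0) → x and plus(x, s(y)) → s(plus(x, y)), have left-hand sides greater than their right-hand sides under gtRpo, so every sequence of rewrites terminates.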
In 1980, when the British Computer Society's Specialist Group on Expert Systems was established, it was remarked that the number of operational expert systems in the world could be counted on the fingers of one mutilated hand.
Expert Systems and its parent field Artificial Intelligence, which were then barely known outside a few specialist academic institutions, are now accepted parts of most degree courses in Computer Science.
Moreover, the history of expert systems in the last ten years is a highly successful example of technology transfer from the research laboratory to industry.
Today there are thousands, possibly tens of thousands of expert systems in use world-wide. They cover a very wide range of application areas, from archaeology, through munitions disposal to welfare benefits advice (see, for example, Bramer 1987, 1988, 1990).
Many of these systems are small-scale, developed in a few months (or even weeks) and often comprising just a few hundred rules. However, even relatively straightforward expert systems can still frequently be of great practical and commercial value.
The Department of Trade and Industry recently produced a series of 12 case studies of commercially successful expert system applications in the UK which included systems for tasks as diverse as product design at Lucas Engineering, corporate meetings planning at Rolls-Royce and personnel selection at Marks and Spencer (DTI, 1990). However, despite explosive growth in the last ten years, it seems clear that we are still only scratching the surface of possible applications.
By
G. A. Ringland, H. R. Chappel, S. C. Lambert, M. D. Wilson and G. J. Doe,
SERC Rutherford Appleton Laboratory, Chilton, Didcot, OXON OX11 0QX, United Kingdom.
The degree to which users understand and accept advice from Knowledge-Based Systems can be increased through explanation. However, different application tasks and different sets of users place diverse requirements on the explanation component of a Knowledge-Based System. Thus, the portability of explanation components between applications is reduced. This paper discusses the aspects of explanation that change between application tasks and those that are required for any satisfactory explanation. The requirements that explanatory capabilities place on Knowledge-Based Systems have implications for the structure and contents of the knowledge base and the visibility of the system. The discussion is illustrated by four Knowledge-Based System projects.
INTRODUCTION
An important feature of knowledge-based systems compared to other information-providing systems is that the knowledge on which they are based is represented explicitly in the system rather than hidden in the design of the system, or represented implicitly in an algorithm. The knowledge can therefore be used not only to solve the problem for which the knowledge-based system was built, but also to show the user what knowledge is used to solve the problem and hence go some way to explain the system's behaviour. However, whilst the explicitness of the knowledge makes it possible to provide some explanatory capability, it does not necessarily mean that the system is capable of producing every explanation required by its users. Some explanations require further reasoning and knowledge to retrieve and act on the knowledge already present in the knowledge-based system.
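As a toy illustration of this point, a forward-chaining interpreter can record which rule produced each derived fact and answer a simple "why" question from that trace. The sketch below is our own, in Haskell, and does not depict any of the projects discussed in this paper:

    -- A toy rule base: if all premises are on the fact list, add the conclusion.
    data Rule = Rule { ruleName :: String, premises :: [String], conclusion :: String }

    -- Forward-chain to a fixpoint, recording which rule produced each new fact.
    chain :: [Rule] -> [String] -> ([String], [(String, String)])
    chain rules facts =
      case [ r | r <- rules
               , all (`elem` facts) (premises r)
               , conclusion r `notElem` facts ] of
        []      -> (facts, [])
        (r : _) ->
          let (facts', trace) = chain rules (conclusion r : facts)
          in (facts', (conclusion r, ruleName r) : trace)

    -- Answer a simple "why" question from the recorded trace.
    explain :: [(String, String)] -> String -> String
    explain trace fact = case lookup fact trace of
      Just r  -> fact ++ ": concluded by rule " ++ r
      Nothing -> fact ++ ": given as input (or not derived)"

Because the knowledge is explicit in the rule base, the explanation falls out of the same structures used for problem solving; richer explanations, as the paper notes, need reasoning and knowledge beyond this.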
“Pruning is done to prevent overcrowding, for the health of the plant, to open up the lower branches to the light and to create space.” – Ashley Stephenson, The Garden Planner (1981).
Abstract: Discrimination or Classification Trees are a popular form of knowledge representation, and have even been used as the basis for expert systems. One reason for their popularity is that efficient algorithms exist for inducing such trees automatically from sample data (Breiman et al., 1984; Quinlan, 1986). However, it is widely recognized among machine-learning researchers that trees derived from noisy or inconclusive data sets tend to be over-complex. This unnecessary complexity renders them hard to interpret and typically degrades their performance on unseen test cases. The present paper introduces a measure of tree quality, and an associated tree-pruning technique, based on the minimum-message-length (MML) criterion (Wallace & Freeman, 1987; Wolff, 1991). Empirical trials with a variety of data sets indicate that it achieves a greater than 80% reduction in tree size, coupled with a slight improvement in accuracy in classifying unseen test cases, thus comparing favourably with alternative simplification strategies. Moreover, it is simpler than previously published pruning techniques, even those based on the MML principle, such as that of Quinlan & Rivest (1989).
Keywords: Machine Learning, Data Compression, Inductive Inference, Information Theory, Entropy Minimax, Classification Algorithms, Discrimination Trees.
INTRODUCTION
One reason for the popularity of discrimination trees (also known as decision trees) for representing knowledge is that they are relatively easy to understand.
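To make this concrete, a discrimination tree can be written down in a few lines. The following Haskell sketch is generic and illustrative only; it is not claimed to be the representation used by any particular induction program:

    -- A discrimination tree: internal nodes test an attribute and branch on
    -- its value; leaves carry a class label.
    data Tree = Leaf String
              | Node String [(String, Tree)]

    -- Classify an object (a list of attribute/value pairs) by walking the tree.
    classify :: [(String, String)] -> Tree -> Maybe String
    classify _   (Leaf c)             = Just c
    classify obj (Node attr branches) = do
      v       <- lookup attr obj
      subtree <- lookup v branches
      classify obj subtree

Each root-to-leaf path reads directly as a rule ("if outlook = sunny and humidity = high then ..."), which is precisely what makes the representation easy to understand, and what over-complex trees squander.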
By
C. J. Hinde and A. D. Bray,
Dept. of Computer Studies, University of Technology, Loughborough, Leics LE11 3TU.
The truth-maintained blackboard model of problem solving as used in the Loughborough University Manufacturing Planner had supported collaboration between experts that were closely linked to the management system. On realistic problems, the size of the assumption bases produced by the system and the overall size of the blackboard combined to impair the system's performance. This model of design supported the collaboration of experts around a central blackboard. Clearly, collaboration is a necessary condition for concurrent decision making, and so the basic framework for collaboration is preserved in this model.
The Design to Product management system within which the Planner had to operate had a central “Tool Manager” through which all communication was routed. In order to implement a model of simultaneous engineering, and also to support collaborative work using this model, a multiple-context design system is useful, if not essential. Our model extends this by distributing control between the various expert agents, where each agent treats the others as knowledge sources to its own private blackboard. All interaction between agents is done using a common communication protocol, which is capable of exchanging the contextual information necessary to separate contexts in the Assumption-based Truth Maintenance System (de Kleer 84) environment. The hierarchical model of control by a central tool manager has been replaced by a hierarchical model of distributed control. The agents are configured using a single-line inheritance scheme which endows each agent with its required knowledge and also allows it to declare its functionality to its colleagues.
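The flavour of such a protocol can be suggested by a small sketch. The Haskell types below are hypothetical and of our own devising (the actual system's protocol is richer); the point is only that each datum travels with the assumptions that support it, so a receiving agent can keep contexts separate on its private blackboard:

    -- Each datum carries the assumptions that justify it, so ATMS-style
    -- contexts can be kept apart on an agent's private blackboard.
    type Assumption = String
    data Datum = Datum { content :: String, context :: [Assumption] }

    data Message = Assert Datum          -- add a datum, with its context
                 | Retract [Assumption]  -- withdraw a set of assumptions

    data Agent = Agent { agentName :: String, blackboard :: [Datum] }

    -- Asserting adds to the private blackboard; retracting assumptions
    -- removes every datum whose context depends on any of them.
    receive :: Agent -> Message -> Agent
    receive a (Assert d)   = a { blackboard = d : blackboard a }
    receive a (Retract as) =
      a { blackboard = [ d | d <- blackboard a
                           , not (any (`elem` context d) as) ] }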
Abstract. In this paper, the problem of obtaining unbiased attribute selection in probabilistic induction is described. This problem is one which is at present only poorly appreciated by those working in the field and has still not been satisfactorily solved. It is shown that the method of binary splitting of attributes goes only part of the way towards removing bias and that some further compensation mechanism is required to remove it completely. Work which takes steps in the direction of finding such a compensation mechanism is described in detail.
Introduction
Automatic induction algorithms have a history which can be traced back to Hunt's concept learning systems (Hunt et al., 1966). Later developments include AQ11 (Michalski & Larson, 1978) and ID3 (Quinlan, 1979). The extension of this type of technique to the task of induction under uncertainty is characterised by algorithms such as AQ15 (Michalski et al., 1986) and C4 (Quinlan, 1986). Other programs, developed specifically to deal with noisy domains, include CART (Breiman et al., 1984) and early versions of Predictor (White, 1985, 1987; White & Liu, 1990). A recent review of inductive techniques may be found in Liu & White (1991). However, efforts to develop these systems have uncovered a problem which is at present only poorly appreciated by those working in the field and has still not been satisfactorily solved.