In discussing the translation from English into Ltype in Chapter 4, rules for generating and interpreting simple passives were omitted. Although it is possible to define the extension of a passive verb phrase like kicked by Jo as the characteristic function of the set of things that Jo kicks, it is not possible with the apparatus we currently have to link this function directly with the extension of the active verb kick, kicks, kicked. The appropriate relationship between the two voices is that, in the relation denoted by the passive, the entity denoted by the object of the preposition by corresponds to the entity denoted by the subject of the active, and the entity denoted by the passive subject corresponds to that denoted by the object in the active. This correspondence was handled in Chapter 2 directly in the translation for the passive rule by switching around the individual constants translating the two noun phrases in the passive rule to yield a translation identical to that of the active. So, for example, Jo kicked Chester and Chester was kicked by Jo are both translated into Lp as kick'(jo', chester'). Unfortunately, this simple expedient is no longer open to us because of the existence in G2 of a verb phrase constituent. This prevents subject and complement NPs from being ordered with respect to each other in a translation rule, because they are no longer introduced by the same syntactic rule.
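The argument-swapping correspondence between the two voices can be sketched in a few lines of Python. This is only an illustration, not part of the fragment's formal apparatus; the function names and string encoding of Lp formulae are invented for the example.

```python
# Illustrative sketch: translating active and passive clauses into
# Lp-style predicate-argument formulae. The passive rule swaps the
# by-phrase NP and the passive subject, so both voices yield the
# same formula.

def translate_active(subject, verb, obj):
    """Active: 'Jo kicked Chester' -> kick'(jo', chester')."""
    return f"{verb}'({subject}', {obj}')"

def translate_passive(subject, verb, by_obj):
    """Passive: 'Chester was kicked by Jo' -> kick'(jo', chester')."""
    # The by-phrase NP supplies the first argument,
    # the passive subject supplies the second.
    return translate_active(by_obj, verb, subject)

# Both voices receive an identical translation:
translate_active("jo", "kick", "chester")      # kick'(jo', chester')
translate_passive("chester", "kick", "jo")     # kick'(jo', chester')
```

Once the grammar introduces a verb phrase constituent, as in G2, no single rule sees both NPs at once, which is exactly why this simple swap is no longer available.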
The semantic theory developed up to Chapter 6 has concentrated mainly on the interpretation of sentences and phrases in isolation from each other, but one of the criteria for assessing the adequacy of a semantic theory set out in Chapter 1 is that it should account for the meaning relations that hold between different expressions in a language. This means, amongst other things, that the semantic theory proposed here ought to guarantee that, where reference and context are kept constant, the sentences in (1.b) and (1.c) are paraphrases of (1.a) while (1.d) and (1.e) are entailments of it and (1.f) and (1.g) are contradictions of it.
a. Jo stroked the cat and kicked the dog.
b. Jo kicked the dog and stroked the cat.
c. The cat was stroked by Jo and the dog was kicked by Jo.
d. Jo stroked the cat.
e. Someone kicked the dog.
f. The dog wasn't kicked.
g. No-one stroked anything.
The intuitively identified relations between the sentences in (1) derive from the interpretations of the conjunction and, the negative not and the quantifier pronouns no-one and someone. Such relations are generally referred to as logical entailments, paraphrases or contradictions. (Note that these terms are used ambiguously between the relation that holds amongst sentences, as here, and the product sentences themselves, as in the first paragraph above.)
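To make the intended relations concrete, the sentences of (1) can be evaluated in a toy extensional model, where entailment shows up as truth-preservation. The denotations below are invented purely for illustration.

```python
# A toy model: predicates denote sets of ordered pairs of entities.
entities = {"jo", "ethel", "the-cat", "the-dog"}
stroke = {("jo", "the-cat")}
kick = {("jo", "the-dog")}

# (1.a) Jo stroked the cat and kicked the dog
a = ("jo", "the-cat") in stroke and ("jo", "the-dog") in kick
# (1.d) Jo stroked the cat  -- a conjunct of (1.a)
d = ("jo", "the-cat") in stroke
# (1.e) Someone kicked the dog  -- an existential over the entities
e = any((x, "the-dog") in kick for x in entities)
# (1.f) The dog wasn't kicked  -- negation of that existential
f = not any((x, "the-dog") in kick for x in entities)

# In this model (1.a) is true, its entailments (1.d) and (1.e)
# hold, and (1.f), which contradicts (1.a), is false.
assert a and d and e and not f
```

A full demonstration of entailment would of course quantify over all models, not just one; the sketch merely shows how the interpretations of and, not and someone drive the pattern.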
One of the conditions of adequacy for a semantic theory set up in Chapter 1 is that it conform to the Principle of Compositionality. This principle requires the meaning of a sentence to be derived from the meaning of its parts and the way they are put together. The interpretation procedure for the grammar fragment set up in the last two chapters adheres to this principle insofar as the translations of sentences, and thereby their interpretations, are derived from the translations of their parts and the syntactic rules used to combine them. Thus, for example, the translation of the sentence Ethel kicked the student is derived from the translations of the two noun phrases Ethel and the student and the verb kicked. These are combined using the translation rule for transitive sentences to give kick'(ethel',the-student'). The truth or falsity of the resulting formula can then be directly ascertained by checking whether the ordered pair of entities denoted by the subject and object in that order is in the set of ordered pairs denoted by the predicate, kick'.
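The evaluation step just described, checking the ordered pair of subject and object denotations against the relation denoted by the predicate, can be sketched as a small Python function. The denotation table is an assumed toy model, not part of the fragment.

```python
# Sketch of the truth check for a transitive formula such as
# kick'(ethel', the-student'): look up each constant's denotation
# and test ordered-pair membership in the predicate's relation.

denotation = {
    "ethel'": "ethel",
    "the-student'": "student1",
    "kick'": {("ethel", "student1")},   # set of ordered pairs
}

def evaluate_transitive(pred, subj, obj):
    """True iff <[[subj]], [[obj]]> is in the relation [[pred]]."""
    return (denotation[subj], denotation[obj]) in denotation[pred]

evaluate_transitive("kick'", "ethel'", "the-student'")   # True in this model
```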
Unfortunately, in the theory of Chapters 2 and 3, compositionality is maintained only at the expense of the syntax. The ‘flat’ structure of the predicate-argument syntax of Lp and its interpretation requires a flat sentence structure in the English syntax in order to maintain a direct correspondence between syntax and translation, and thus a transparent relation between elements in the interpretation and constituents of the English sentence.
In this chapter and the next, we will lay the foundations on which a good deal of logical semantics is built. In accordance with the discussion in Chapter 1, we first define a logical language into which sentences of English are translated in order to circumvent the problems of ambiguity and underdeterminacy found in the object language. Having defined the translation language, and specified the procedure for translating simple English sentences into it, our attention will turn to the interpretation of these logical expressions in terms of their truth-conditions, thus providing an indirect interpretation of the corresponding English sentences.
The syntax of LP
Like all languages, natural or artificial, logical languages have a syntax, i.e. a set of rules for constructing composite expressions from simpler ones. The logical language described in this chapter, called LP, contains expressions that fall into one of four logical categories: individuals, predicates, formulae and operators (or connectives). Expressions in each of the first three categories can be further subdivided into two sorts: constants, which have a fixed interpretation, and variables, which do not. These two sorts of expression correspond, roughly, to content words (e.g. table, run, Ethel) and pronominal expressions (e.g. she, they) in natural languages, respectively. This chapter deals only with constants, but variables will become increasingly important in later chapters.
Sentences in natural languages translate into formulae in LP which have the logical category t (as sentences have the syntactic category S).
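The division of LP expressions into categories, and the constraint that a formula of category t arises from applying a predicate to the right number of individual terms, can be mirrored in code. The class names and the arity check below are a hypothetical encoding, not notation from the text.

```python
# Hypothetical encoding of LP's logical categories: individual and
# predicate constants, and formulae (category t) built by applying
# a predicate to individual terms.

from dataclasses import dataclass

@dataclass(frozen=True)
class Individual:
    name: str            # e.g. jo'

@dataclass(frozen=True)
class Predicate:
    name: str            # e.g. kick'
    arity: int

@dataclass(frozen=True)
class Formula:           # logical category t
    pred: Predicate
    args: tuple

def apply(pred, *args):
    """Build a formula, enforcing a simple well-formedness check."""
    if len(args) != pred.arity:
        raise ValueError("arity mismatch")
    if not all(isinstance(a, Individual) for a in args):
        raise TypeError("arguments must be individual terms")
    return Formula(pred, args)

kick = Predicate("kick'", 2)
f = apply(kick, Individual("jo'"), Individual("chester'"))
```

Variables, which later chapters introduce, would be a second sort alongside these constants, with an interpretation fixed only relative to an assignment.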
In its broadest sense, semantics is the study of meaning, and linguistic semantics is the study of meaning as expressed by the words, phrases and sentences of human languages. It is, however, more usual within linguistics to interpret the term more narrowly, as concerning the study of those aspects of meaning encoded in linguistic expressions that are independent of their use on particular occasions by particular individuals within a particular speech community. In other words, semantics is the study of meaning abstracted away from those aspects that are derived from the intentions of speakers, their psychological states and the socio-cultural aspects of the context in which their utterances are made. A further narrowing of the term is also commonly made in separating the study of semantics from that of pragmatics. Unfortunately, the nature of the object of inquiry of the discipline (what constitutes semantic meaning, as opposed to pragmatic meaning) and the domain of the inquiry (what aspects of meaning should be addressed by the discipline) remain difficult and controversial questions. There are, however, three central aspects of the meaning of linguistic expressions that are currently accepted by most semanticists as forming the core concern of linguistic semantics. These central concerns of semantic theory, adapted from Kempson (1977:4), are stated in (1); they may be adopted as criteria for ascertaining the adequacy of semantic theories, applying in addition to the general conditions of falsifiability and rigour imposed on all scientific theories.
In previous chapters (particularly Chapter 7), we looked at certain types of entailment relations that are guaranteed by the theory of interpretation set out in the earlier part of this book. Certain contexts exist, however, where expected entailments do not hold. Consider, for example, the inference pattern in (1).
a. The Morning Star is the planet Venus.
b. The Evening Star is the Morning Star.
c. Therefore, the Evening Star is the planet Venus.
The validity of this inference pattern illustrates a general rule that holds in the extensional semantic theory developed in Chapters 2 to 6 of this book. This rule is called Leibniz's Law or the Law of Substitution, and it allows the substitution of extensionally equivalent expressions for one another in a formula while maintaining the truth value of the original formula. Thus, in (1), since the Morning Star and the Evening Star denote the same entity, the latter expression may be substituted for the former in the first premiss to give the conclusion. Indeed, because all three terms in (1) have the same extension, all of them may be substituted for each other salva veritate (the Latin phrase used by Leibniz meaning ‘with truth unchanged’). The Law of Substitution can be formally defined as in (2), which, in words, says that if an expression a is extensionally equivalent to another expression b, then a formula φ is truth-conditionally equivalent to the formula formed from φ by substituting an instance of b for every instance of a.
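Although the formal statement in (2) is not reproduced here, the prose description corresponds to a schema along the following lines (a reconstruction; the substitution notation is an assumption):

```latex
(a \equiv b) \rightarrow \bigl(\phi \leftrightarrow \phi[b/a]\bigr)
```

where φ[b/a] denotes the result of replacing every instance of a in φ with b. Applied to (1), taking a as the Morning Star, b as the Evening Star, and φ as the first premiss yields the conclusion.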
This paper is an amalgam of two introductory lecture courses given at the Summer School. As the title suggests, the aim is to present fundamental notions of Proof Theory in their simplest settings, thus: Completeness and Cut-Elimination in Pure Predicate Logic; the Curry-Howard Correspondence and Normalization in the core part of Natural Deduction; connections to Sequent Calculus and Linear Logic; and applications to the Σ1-Inductive fragment of arithmetic and the synthesis of primitive recursive bounding functions. The authors have tried to preserve a (readable) balance between rigour and informal lecture-note style.
Pure Predicate Logic—Completeness
Classical first order predicate calculus (PC) is formulated here essentially in “Schütte-Ackermann-Tait” style, but with multisets instead of sets of formulas for sequents. It is kept “pure” (i.e., no function symbols) merely for the sake of technical simplicity. The refinement to multiset sequents illuminates the rôle of the so-called structural inferences of contraction and weakening in proof-theoretic arguments.
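As a loose illustration, not drawn from the paper, the difference multisets make can be modelled with Python's collections.Counter: with sets, contraction is invisible (duplicates collapse automatically), whereas with multisets both structural rules become explicit operations on occurrence counts.

```python
# Sequents as multisets of formulas, with weakening and contraction
# as explicit operations on multiplicities.

from collections import Counter

def weaken(sequent, formula):
    """Weakening: add one occurrence of a formula to the sequent."""
    new = Counter(sequent)
    new[formula] += 1
    return new

def contract(sequent, formula):
    """Contraction: merge two occurrences of a formula into one."""
    if sequent[formula] < 2:
        raise ValueError("contraction needs at least two occurrences")
    new = Counter(sequent)
    new[formula] -= 1
    return new

s = Counter({"A": 2, "B": 1})        # the multiset sequent  A, A, B
t = contract(s, "A")                 # A, B
u = weaken(t, "C")                   # A, B, C
```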
ABSTRACT. We work in the context of abstract data types, modelled as classes of many-sorted algebras closed under isomorphism. We develop notions of computability over such classes, in particular notions of primitive recursiveness and μ-recursiveness, which generalize the corresponding classical notions over the natural numbers. We also develop classical and intuitionistic formal systems for theories about such data types, and prove (in the case of universal theories) that if an existential assertion is provable in either of these systems, then it has a primitive recursive selection function. It is a corollary that if a μ-recursive scheme is provably total, then it is extensionally equivalent to a primitive recursive scheme. The methods are proof-theoretical, involving cut elimination. These results generalize to an abstract setting previous results of C. Parsons and G. Mints over the natural numbers.
INTRODUCTION
We will examine the provability or verifiability in formal systems of program properties, such as termination or correctness, from the point of view of the general theory of computable functions over abstract data types. In this theory an abstract data type is modelled semantically by a class K of many-sorted algebras, closed under isomorphism, and many equivalent formalisms are used to define computable functions and relations on an algebra A, uniformly for all A ∈ K. Some of these formalisms are generalizations to A and K of sequential deterministic models of computation on the natural numbers.
The method of local predicativity as developed by Pohlers in [10],[11],[12] and extended to subsystems of set theory by Jäger in [4],[5],[6] is a very powerful tool for the ordinal analysis of strong impredicative theories. But up to now it has suffered considerably from the fact that it is based on a large amount of very special ordinal-theoretic prerequisites. This is true even for the most recent (very polished) presentation of local predicativity in Pohlers [15]. The purpose of the present paper is to expose a simplified and conceptually improved version of local predicativity which, besides some very elementary facts on ordinal addition, multiplication, and exponentiation, requires only surprisingly little ordinal theory. (All necessary nonelementary ordinal-theoretic prerequisites can be developed from scratch in just two pages, as we will show in section 4.) The most important feature of our new approach, however, seems to be its conceptual clarity and flexibility, and in particular the fact that its basic concepts (i.e. the infinitary system RS∞ and the notion of an H-controlled RS∞-derivation) are in no way related to any system of ordinal notations or collapsing functions. Our intention with this paper is to make the fascinating field of ‘admissible proof theory’ (created by Jäger and Pohlers) more easily accessible for non-proof-theorists, and to provide a technically and conceptually well developed basis for further research in this area.
This is a collection of ten refereed papers presented at an international Summer School and Conference on Proof Theory held at Bodington Hall, Leeds University between 24th July and 2nd August 1990. The meeting was held under the auspices of the “Logic for Information Technology” (Logfit) initiative of the UK Science and Engineering Research Council, in collaboration with the Leeds Centre for Theoretical Computer Science (CTCS). The principal funding came from SERC Logfit under contract SO/72/90 with additional contributions gratefully received from the British Logic Colloquium and the London Mathematical Society. There were 100 participants representing at least twelve different countries: Belgium, Canada, Estonia, France, Germany, Italy, Japan, Norway, Sweden, Switzerland, USA and UK.
The first three papers printed here represent short lecture courses given in the summer school and are intended to be of a more instructional nature, leading from basic to more advanced levels of ‘pure’ proof theory. The others are conference research papers reflecting a somewhat wider range of topics, and we see no better way of ordering them than alphabetically by author.
The programme of lectures given at the meeting is set out overleaf. Though not all of the invited speakers were able to contribute to this volume we believe that what remains provides rich flavours of a tasty subject.