The main subject of Mathematical Logic is mathematical proof. In this introductory chapter we deal with the basics of formalizing such proofs and, via normalization, analysing their structure. The system we pick for the representation of proofs is Gentzen's natural deduction from [1935]. Our reasons for this choice are twofold. First, as the name says, this is a natural notion of formal proof: the way proofs are represented corresponds closely to the way a careful mathematician, writing out all the details of an argument, would proceed anyway. Second, formal proofs in natural deduction are closely related (via the so-called Curry–Howard correspondence) to terms in typed lambda calculus. This provides us not only with a compact notation for logical derivations (which otherwise tend to become somewhat unmanageable tree-like structures), but also opens up a route to applying (in part 3) the computational techniques which underpin lambda calculus.
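To make the Curry–Howard correspondence concrete, here is a minimal sketch in Haskell (an illustration of ours; the names are invented for the example): a closed, well-typed lambda term is exactly a natural deduction proof of the corresponding implicational formula, and normalizing the proof corresponds to beta-reducing the term.

```haskell
-- Under Curry-Howard, the function type a -> b plays the role of the
-- implication A -> B, and a closed, well-typed term is a proof.

-- A proof of A -> A: an assumption, discharged immediately.
identity :: a -> a
identity x = x

-- A proof of (A -> B) -> ((B -> C) -> (A -> C)):
-- chaining two implications corresponds to composing two functions.
chain :: (a -> b) -> (b -> c) -> (a -> c)
chain f g = g . f

-- A proof of A -> ((A -> B) -> B): modus ponens, as a term.
modusPonens :: a -> (a -> b) -> b
modusPonens x f = f x
```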
Apart from classical logic we will also deal with more constructive logics: minimal and intuitionistic logic. This will reveal some interesting aspects of proofs, e.g., that it is possible and useful to distinguish between existential proofs that actually construct witnessing objects, and others that don't.
An essential point for Mathematical Logic is to fix a formal language to be used. We take implication → and the universal quantifier ∀ as basic. Then the logic rules correspond precisely to lambda calculus.
In this chapter we develop the basics of recursive function theory, or, as it is more generally known, computability theory. Its history goes back to the seminal works of Turing, Kleene and others in the 1930s.
A computable function is one defined by a program whose operational semantics tell an idealized computer what to do to its storage locations as it proceeds deterministically from input to output, without any prior restrictions on storage space or computation time. We shall be concerned with various program styles and the relationships between them, but the emphasis throughout this chapter and in part 2 will be on one underlying data type, namely the natural numbers, since it is there that the most basic foundational connections between proof theory and computation are to be seen in their clearest light. This is not to say that computability over more general and abstract data types is less important. Quite the contrary. For example, from a logical point of view, Stoltenberg-Hansen and Tucker [1999], Tucker and Zucker [2000], [2006] and Moschovakis [1997] give excellent presentations of a more abstract approach, and our part 3 develops a theory in higher types from a completely general standpoint.
The two best-known models of machine computation are the Turing Machine and the (Unlimited) Register Machine of Shepherdson and Sturgis [1963]. We base our development on the latter since it affords the quickest route to the results we want to establish (see also Cutland [1980]).
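As a concrete point of reference, the following Haskell sketch interprets a small register machine in the Shepherdson–Sturgis style. The instruction set (zero, successor, copy, conditional jump) is standard, but the encoding and all names are our own simplification rather than the formal definition developed later.

```haskell
import Data.Map (Map)
import qualified Data.Map as Map

-- A simplified Shepherdson-Sturgis register machine: registers hold
-- natural numbers, and a program is a finite list of instructions.
data Instr
  = Zero Int          -- R(n) := 0
  | Succ Int          -- R(n) := R(n) + 1
  | Copy Int Int      -- Copy m n: R(n) := R(m)
  | Jump Int Int Int  -- Jump m n q: if R(m) == R(n) go to instruction q

type Regs = Map Int Integer

reg :: Int -> Regs -> Integer
reg n = Map.findWithDefault 0 n

-- Run a program from instruction 0; the machine halts when the
-- program counter leaves the program.  It may also diverge.
run :: [Instr] -> Regs -> Regs
run prog = go 0
  where
    go pc rs
      | pc < 0 || pc >= length prog = rs
      | otherwise = case prog !! pc of
          Zero n     -> go (pc + 1) (Map.insert n 0 rs)
          Succ n     -> go (pc + 1) (Map.insert n (reg n rs + 1) rs)
          Copy m n   -> go (pc + 1) (Map.insert n (reg m rs) rs)
          Jump m n q -> go (if reg m rs == reg n rs then q else pc + 1) rs

-- Example program: addition, R1 := R1 + R2, using R3 as a counter.
addProg :: [Instr]
addProg =
  [ Zero 3
  , Jump 2 3 5   -- once the counter reaches R2, halt
  , Succ 1
  , Succ 3
  , Jump 0 0 1   -- unconditional jump back (R0 == R0 always holds)
  ]
```

For example, run addProg (Map.fromList [(1, 3), (2, 4)]) halts with 7 in register 1; and since run may loop forever, it computes a partial function, exactly as intended.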
In this final chapter we focus much of the technical/logical work of previous chapters onto theories with limited (more feasible) computational strength. The initial motivation is the surprising result of Bellantoni and Cook [1992] characterizing the polynomial-time functions by the primitive recursion schemes, but with a judiciously placed semicolon, first used by Simmons [1988], separating the variables into two kinds (or sorts). The first “normal” kind controls the length of recursions, and the second “safe” kind marks the places where substitutions are allowed. Various alternative names have arisen for the two sorts of variables, which will play a fundamental role throughout this chapter: thus “normal”/“input” and “safe”/“output”; we shall use the input–output terminology. The important distinction here is that input and output variables will not just be of base type, but may be of arbitrary higher type.
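To see the discipline at work, here is a toy Haskell rendering (our own illustration, using unary numbers for brevity, whereas Bellantoni and Cook actually recurse on binary notation). We mimic the semicolon simply by argument order: normal arguments first, safe arguments second; the recursion variable must be normal, and a recursive call may only be substituted into a safe position.

```haskell
-- add(x ; y): recursion on the normal argument x; the recursive call
-- sits under a successor, i.e. in a safe position.
add :: Integer -> Integer -> Integer
add 0 y = y
add x y = succ (add (x - 1) y)

-- mul(x, y ;): y is fed into add's normal slot, while the recursive
-- call goes into add's safe slot -- both moves the scheme permits.
mul :: Integer -> Integer -> Integer
mul 0 _ = 0
mul x y = add y (mul (x - 1) y)

-- Exponentiation is NOT definable this way: e2 x = mul x (e2 (x - 1))
-- would put the recursive call into a *normal* position of mul, which
-- the separation forbids.  This is precisely what caps the growth rate.
```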
We begin by developing a basic version of arithmetic which incorporates this variable separation. This theory EA(;) will have elementary recursive strength (hence the prefix E) and sub-elementary (polynomially bounded) strength when restricted to its Σ1-inductive fragment. EA(;) is a first-order theory which we use as a means to illustrate the underlying principles available in such two-sorted situations. Our aim however is to extend the Bellantoni and Cook variable separation to also incorporate higher types. This produces a theory A(;) extending EA(;) with higher type variables and quantifiers, having as its term system a two-sorted version T(;) of Gödel's T. T(;) will thus give a functional interpretation for A(;), which has the same elementary computational strength, but is more expressive and applicable.
In this chapter we will develop a somewhat more general view of computability theory, in which not only numbers and functions but also functionals of any finite type may appear as arguments.
Abstract computability via information systems
There are two principles on which our notion of computability will be based: finite support and monotonicity, both of which have already been used (at the lowest type level) in section 2.4.
It is a fundamental property of computation that evaluation must be finite. So in any evaluation of Φ(ϕ), where Φ is a partial functional of type 2 applied to a partial function ϕ, the argument ϕ can be called upon only finitely many times, and hence the value—if defined—must be determined by some finite subfunction of ϕ. This is the principle of finite support (cf. section 2.4).
Let us carry this discussion somewhat further and look at the situation one type higher up. Let ℋ be a partial functional of type 3, mapping type-2 functionals Φ to natural numbers. Suppose Φ is given and ℋ(Φ) evaluates to a defined value. Again, evaluation must be finite. Hence the argument Φ can only be called on finitely many functions ϕ. Furthermore each such ϕ must be presented to Φ in a finite form (explicitly, say, as a finite set of ordered pairs). In other words, ℋ and also any type-2 argument Φ supplied to it must satisfy the finite support principle, and this must continue to apply as we move up through the types.
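The type-2 case is easy to visualize in code. In the following Haskell sketch (an illustration of ours, not a formal development), the functional phi can only ever query its argument at finitely many points, so its value is fixed by a finite subfunction:

```haskell
import Data.Maybe (fromMaybe)

-- A type-2 functional: it maps number-theoretic functions to numbers.
phi :: (Integer -> Integer) -> Integer
phi f = f 0 + f (f 1)

-- phi inspects its argument only at 0, 1 and f 1, so its value is
-- determined by a finite subfunction, recorded here as a table.
finiteSupport :: [(Integer, Integer)]
finiteSupport = [(0, 3), (1, 2), (2, 5)]

-- Extend the finite table to a total function (arbitrarily, by 0).
extend :: [(Integer, Integer)] -> (Integer -> Integer)
extend table x = fromMaybe 0 (lookup x table)
```

Here phi (extend finiteSupport) evaluates to 8, and so does phi g for every g agreeing with the table on 0, 1 and 2; that finite table is exactly the “finite support” the principle refers to.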
This chapter develops the classification theory of the provably recursive functions of arithmetic. The topic has a long history tracing back to Kreisel [1951], [1952] who, in setting out his “no-counter-example” interpretation, gave the first explicit characterization of the functions “computable in” arithmetic, as those definable by recursions over standard well-orderings of the natural numbers with order types less than ε0. Such a characterization seems now, perhaps, not so surprising in light of the groundbreaking work of Gentzen [1936], [1943], showing that these well orderings are just the ones over which one can prove transfinite induction in arithmetic, and hence prove the totality of functions defined by recursions over them. Subsequent work of the present authors [1970], [1971], [1972], extending previous results of Grzegorczyk [1953] and Robbin [1965], then provided other complexity characterizations in terms of natural, simply defined hierarchies of so-called “fast growing” bounding functions. What was surprising was the deep connection later discovered, first by Ketonen and Solovay [1981], between these bounding functions and a variety of combinatorial results related to the “modified” finite Ramsey theorem of Paris and Harrington [1977]. It is through this connection that one gains immediate access to a range of mathematically meaningful independence results for arithmetic and stronger theories. Thus, classifying the provably recursive functions of a theory not only gives a measure of its computational power; it also serves to delimit its mathematical power in providing natural examples of true mathematical statements it cannot prove.
Referencing. References are by chapter, section and subsection: i.j.k refers to subsection k of section j in chapter i. Theorems and the like are referred to, not by number, but by their names or the number of the subsection they appear in. Equations are numbered within a chapter where necessary; reference to equation n in section j is in the form “(j.n)”.
Mathematical notation. Definitional equivalence or equality (according to context) is written ≔. Application of terms is left associative, and lambda abstraction binds stronger than application. For example, MNK means (MN)K and not M(NK), and λxMN means (λxM)N, not λx(MN). We also sometimes save on parentheses by writing, e.g., Rxyz, Rt0t1t2 instead of R(x, y, z), R(t0, t1, t2), where R is some predicate symbol. Similarly for a unary function symbol with a (typographically) simple argument, we write fx for f(x), etc. In this case no confusion will arise. But readability requires that we write in full R(fx, gy, hz), instead of Rfxgyhz. Binary function and relation symbols are usually written in infix notation, e.g., x + y instead of +(x, y), and x < y instead of <(x, y). We write t ≠ s for ¬(t = s) and t ≮ s for ¬(t < s).
Logical formulas. We use the notation →, ∧, ∨, ⊥, ¬A, ∀xA, ∃xA, where ⊥ means logical falsity and negation is defined (most of the time) by ¬A ≔ A → ⊥.
More than one and a half centuries have passed since Charles Darwin presented his theory on the origin of species, asserting that all organisms are related to each other by common descent via a “tree of life”. Since then, biologists have been able to piece together a great deal of information concerning this tree, relying in particular, in more recent times, on ever cheaper and faster DNA sequencing technologies. Even so, there remain many fascinating open problems concerning the tree of life and the evolutionary processes underlying it, problems that often require sophisticated techniques from areas such as mathematics, computer science, and statistics.
Phylogenetic combinatorics can be regarded as a branch of discrete applied mathematics concerned with the combinatorial description and analysis of phylogenetic or evolutionary trees and related mathematical structures such as phylogenetic networks, complexes, and tight spans. In this book, we present a systematic approach to phylogenetic combinatorics based on a natural conceptual framework that, simultaneously, allows and forces us to encompass many classical as well as a good number of new pertinent results.
More specifically, this book concentrates on the interrelationship between the three principal ways commonly used for encoding phylogenetic trees: split systems, metrics, and quartet systems (see Figure 1). Informally, for X some finite set, a split system over X is a collection of bipartitions of X, a quartet system is a collection of two-versus-two bipartitions of subsets of X of size four, and a metric is a bivariate function assigning a “distance” to any pair of elements in X.
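For readers who like to see definitions executed, here is a small Haskell sketch of these objects over a four-element label set (our own toy encoding; it also shows one direction of the interrelationship, namely how a split system induces a metric by counting separating splits):

```haskell
import Data.Set (Set)
import qualified Data.Set as Set

type X = Char  -- the label set, here X = {'a','b','c','d'}

-- A split is a bipartition of X, written as an ordered pair of blocks.
type Split = (Set X, Set X)

splitSystem :: [Split]
splitSystem =
  [ (Set.fromList "ab", Set.fromList "cd")  -- induces the quartet ab|cd
  , (Set.fromList "a",  Set.fromList "bcd")
  ]

-- The split metric: the distance between x and y is the number of
-- splits whose two blocks separate them.
splitMetric :: X -> X -> Int
splitMetric x y = length (filter separates splitSystem)
  where separates (blockA, _) =
          (x `Set.member` blockA) /= (y `Set.member` blockA)

-- splitMetric 'a' 'c' == 2, splitMetric 'b' 'c' == 1, splitMetric 'a' 'a' == 0
```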
This is the point at which we bring proof and recursion together and begin to study connections between the computational complexity of recursive functions and the logical complexity of their formal termination or existence proofs. The rest of the book will largely be motivated by this theme, and will make repeated use of the basics laid out here and the proof-theoretic methods developed earlier. It should be stressed that by “computational complexity” we mean complexity “in the large” or “in theory”, not necessarily feasible or practical complexity. Feasibility is always desirable if one can achieve it, but the fact is that natural formal theories of even modest logical strength prove the termination of functions with enormous growth rate, way beyond the realm of practical computability. Since our aim is to unravel the computational constraints implicit in the logic of a given theory, we do not wish to have any prior bounds imposed on the levels of complexity allowed.
At the base of our hierarchy of theories lie ones with polynomially or at most exponentially bounded complexity, and these are studied in part 3 at the end of the book. The principal objects of study in this chapter are the elementary functions, which (i) will be characterized as those provably terminating in the theory IΔ0(exp) of bounded induction, and (ii) will be shown to be adequate for the arithmetization of syntax leading to Gödel's theorems, a fact which most logicians believe but which has rarely received a complete treatment elsewhere.
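For orientation, the elementary functions admit a simple machine-independent bound (a standard characterization, stated here as background rather than as the chapter's official definition via IΔ0(exp)): a function is elementary exactly when it is computable in time bounded by a fixed finite tower of exponentials. A Haskell rendering of that bound:

```haskell
-- tower k n is the height-k tower of twos above n: tower 0 n = n,
-- tower (k+1) n = 2 ^ tower k n.  A function is elementary iff it is
-- computable in time tower k n for some FIXED k, independent of n.
tower :: Int -> Integer -> Integer
tower 0 n = n
tower k n = 2 ^ tower (k - 1) n

-- tower 2 3 == 2 ^ (2 ^ 3) == 256; by contrast, letting the height
-- grow with the input (as in tower n n) already escapes the class.
```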
Using the concept of a generalised priority constraint satisfaction problem, we previously found a way to introduce priority queries into fuzzy relational databases. The results were PFSQL (Priority Fuzzy Structured Query Language) together with a database-independent interpreter for it. In an effort to improve the performance of the resolution of PFSQL queries, the aim of the current paper is to formalise PFSQL queries by obtaining their interpretation in an existing fuzzy logic. We have found that the ŁΠ logic provides sufficient elements. The SELECT line of a PFSQL query is semantically a formula of some fuzzy logic, and we show that such formulas can be naturally expressed in a conservative extension of the ŁΠ logic. Furthermore, we prove a theorem placing the problem of finding a model for a given ŁΠ logic formula in PSPACE.
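To indicate what an ŁΠ interpretation of a fuzzy SELECT condition looks like operationally, here is a hedged Haskell sketch of the basic ŁΠ truth functions over [0, 1] (the function names and the toy query are ours; the paper's actual translation is of course richer):

```haskell
-- Truth values of Lukasiewicz-product (LPi) logic live in [0, 1].
type Truth = Double

-- Lukasiewicz strong conjunction and implication.
lukAnd, lukImp :: Truth -> Truth -> Truth
lukAnd a b = max 0 (a + b - 1)
lukImp a b = min 1 (1 - a + b)

-- Product conjunction and its (Goguen) implication.
prodAnd, prodImp :: Truth -> Truth -> Truth
prodAnd a b = a * b
prodImp a b = if a <= b then 1 else b / a

-- A toy fuzzy WHERE clause "age IS young AND salary IS high" for a row
-- with hypothetical membership degrees 0.7 and 0.4: the row's overall
-- truth degree depends on which conjunction interprets AND.
rowDegrees :: (Truth, Truth)
rowDegrees = (lukAnd 0.7 0.4, prodAnd 0.7 0.4)  -- (0.1, 0.28)
```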
Nature has always inspired engineers. This research tries to understand the contribution of snake anatomy to its locomotion from an engineering point of view, so that it can be adopted in the design of snake robots. The rib design and muscular structure of snake robots have a great impact on their flexibility, weight, and actuator torque, and can help to eliminate wheels in snake robots during serpentine locomotion. The result of this research shows that snakes can establish the required peg points on smooth surfaces by deflecting the body and ribs. The results are verified by both field observations and simulation.