Abstract. We recognize Alan Turing's work in the foundations of numerical computation (in particular, his 1948 paper “Rounding-Off Errors in Matrix Processes”), its influence in modern complexity theory, and how it helps provide a unifying concept for the two major traditions of the theory of computation.
§1. Introduction. The two major traditions of the theory of computation, each staking claim to similar motivations and aspirations, have for the most part run a parallel non-intersecting course. On one hand, we have the tradition arising from logic and computer science addressing problems with more recent origins, using tools of combinatorics and discrete mathematics. On the other hand, we have numerical analysis and scientific computation emanating from the classical tradition of equation solving and the continuous mathematics of calculus. Both traditions are motivated by a desire to understand the essence of computation, of algorithm; both aspire to discover useful, even profound, consequences.
While the logic and computer science communities are keenly aware of Alan Turing's seminal role in the former (discrete) tradition of the theory of computation, most remain unaware of Alan Turing's role in the latter (continuous) tradition, this notwithstanding the many references to Turing in the modern numerical analysis/computational mathematics literature, e.g., [Bur10, Hig02, Kah66, TB97, Wil71]. These references are not to recursive/computable analysis (suggested in Turing's seminal 1936 paper), usually cited by logicians and computer scientists, but rather to the fundamental role that the notion of “condition” (introduced in Turing's seminal 1948 paper) plays in real computation and complexity.
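As a concrete aside (ours, not the paper's): for the linear system Ax = b that Turing's 1948 analysis concerns, the notion of condition is nowadays usually packaged in the condition number and its perturbation bound,

    \[
      \kappa(A) = \|A\|\,\|A^{-1}\|,
      \qquad
      \frac{\|\delta x\|}{\|x\|} \le \kappa(A)\,\frac{\|\delta b\|}{\|b\|},
    \]

so that κ(A) bounds how much a relative perturbation of the data b can be amplified in the computed solution x. This is the standard modern formulation rather than Turing's own 1948 notation, but it is the quantity the numerical analysis literature has in mind when citing him.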
§1. Introduction. The year 2012 was the centenary of the birth of one of the most brilliant mathematicians of the 20th century. There were many celebrations of this fact, and many conferences based around Turing's work and life during 2012. In particular, there was a half-year program (Syntax and Semantics) at the Newton Institute in Cambridge, and many “Turing 100/Centenary” conferences throughout the year. These events included truly major meetings featuring many of the world's best mathematicians and computer scientists (and even Garry Kasparov) around his actual birthday of June 23, including The Incomputable, ACM A. M. Turing Centenary Celebration, How the World Computes (CiE 2012), and The Turing Centenary Conference. There are also a number of publications devoted to Turing's life, work and legacy.
To the general public, Turing is probably best known for his part in the work at Bletchley Park and the war-winning efforts of the code-breakers at Hut 8. To biologists, Turing is best known for his work on morphogenesis, the paper “The Chemical Basis of Morphogenesis” being his most highly cited work.
To logicians and computer scientists, Alan Turing is best known for his work in computation, arguably leading to the development of the digital computer. This development has almost certainly caused the most profound change in human history in the last century. Turing's work in computation grew from philosophical questions in logic. Thus it seems fitting that the Association for Symbolic Logic sponsored this volume.
Abstract. Turing's beautiful capture of the concept of computability by the “Turing machine” linked computability to a device with explicit steps of operations and use of resources. This invention led, in a most natural way, to the foundations of computational complexity.
§1. Introduction. Computational complexity provides mechanisms for classifying combinatorial problems and measuring the computational resources necessary to solve them. The discipline provides explanations of why no practical solutions to certain problems have been found, and provides a way of anticipating difficulties involved in solving these problems. The classification is quantitative and is intended to investigate what resources are necessary (lower bounds) and what resources are sufficient (upper bounds) to solve various problems.
This classification should not depend on a particular computational model but rather should measure the intrinsic difficulty of a problem. Precisely for this reason, as we will explain, the basic model of computation for our study is the multitape Turing machine.
Computational complexity theory today addresses issues of contemporary concern, for example, parallel computation, circuit design, computations that depend on random number generators, and development of efficient algorithms. Above all, computational complexity is interested in distinguishing problems that are efficiently computable. Algorithms whose running times are n² in the size of their inputs can be implemented to execute efficiently even for fairly large values of n, but algorithms that require an exponential running time can be executed only for small values of n.
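To make the quantitative contrast concrete, here is a small sketch of ours (illustrative names, and an assumed machine executing a billion steps per second) that translates the two growth rates into running time:

    # Illustrative comparison of an n^2-step and a 2^n-step algorithm.
    STEPS_PER_SECOND = 10**9  # assumption: one billion steps per second

    def seconds_quadratic(n: int) -> float:
        """Time, in seconds, for an algorithm taking n**2 steps."""
        return n**2 / STEPS_PER_SECOND

    def seconds_exponential(n: int) -> float:
        """Time, in seconds, for an algorithm taking 2**n steps."""
        return 2**n / STEPS_PER_SECOND

    for n in (20, 50, 100):
        print(n, seconds_quadratic(n), seconds_exponential(n))
    # At n = 100 the quadratic algorithm finishes in about 10 microseconds,
    # while 2**100 steps would take on the order of 10**13 years.

Even generous improvements in hardware speed shift the exponential algorithm's feasible range by only a few values of n, which is why the polynomial/exponential divide is taken as the boundary of practical computability.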
In fact, the only evidence for the freedom from contradiction of Principia Mathematica is the empirical evidence arising from the fact that the system has been in use for some time, many of its consequences have been drawn, and no one has found a contradiction.
(Church in a letter to Gödel, July 27, 1932)
Abstract. Alonzo Church's mathematical work on computability and undecidability is well known indeed, and we seem to have an excellent understanding of the context in which it arose. The approach Church took to the underlying conceptual issues, by contrast, is less well understood. Why, for example, was “Church's Thesis” put forward publicly only in April 1935, when it had been formulated already in February/March 1934? Why did Church choose to formulate it then in terms of Gödel's general recursiveness, not his own λ-definability as he had done in 1934? A number of letters were exchanged between Church and Paul Bernays during the period from December 1934 to August 1937; they throw light on critical developments in Princeton during that period and reveal novel aspects of Church's distinctive contribution to the analysis of the informal notion of effective calculability. In particular, they allow me to give informed, though still tentative answers to the questions I raised; the character of my answers is reflected by an alternative title for this paper, Why Church needed Gödel's Recursiveness for his Thesis.
I take Turing's thesis (equivalently, Church's thesis) to assert that those functions on the integers which can be computed by a human being following any fixed algorithm on pencil and paper can also be computed by a Turing machine algorithm (or alternatively by a lambda calculus algorithm). This thesis can be formulated using any of the many definitions of algorithm developed in the past eighty years which compute the same functions of integers. This thesis has often been implicitly replaced by what I would call the physical version of Turing's thesis. This asserts that those functions on the integers which can be computed on any physical machine can be computed by a Turing algorithm. If the brain is regarded as a physical machine, this version subsumes the first version. But not everyone regards the brain as entirely physical (“Mathematics is a free creation of the human mind”—Brouwer). So we separate these formulations.
The meaning of Turing's thesis depends on determining what algorithms are possible, deciding whether algorithms should be defined to allow unbounded search using potentially infinite time and space, and what algorithms the brain can execute. The meaning of the physical Turing thesis depends in addition on determining what can be manufactured in the physical world. Neither the capabilities of the brain nor the capabilities of physical materials have been or are likely to be characterized by science. These questions have an intuitive, informal, and inexhaustibly open character.
Abstract. The problem of replicating the flexibility of human common-sense reasoning has captured the imagination of computer scientists since the early days of Alan Turing's foundational work on computation and the philosophy of artificial intelligence. In the intervening years, the idea of cognition as computation has emerged as a fundamental tenet of Artificial Intelligence (AI) and cognitive science. But what kind of computation is cognition?
We describe a computational formalism centered around a probabilistic Turing machine called QUERY, which captures the operation of probabilistic conditioning via conditional simulation. Through several examples and analyses, we demonstrate how the QUERY abstraction can be used to cast common-sense reasoning as probabilistic inference in a statistical model of our observations and the uncertain structure of the world that generated that experience. This formulation is a recent synthesis of several research programs in AI and cognitive science, but it also represents a surprising convergence of several of Turing's pioneering insights in AI, the foundations of computation, and statistics.
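A minimal sketch of the conditional-simulation idea (our own illustration in ordinary Python, not code from the paper): repeatedly run a generative program for the latent state of the world, keep only the runs whose simulated observations agree with the actual evidence, and answer queries from the surviving runs.

    import random

    def query(guess, observe, matches, trials=100_000):
        """Rejection-sampling reading of QUERY: sample a world state with guess,
        simulate data with observe, keep samples whose data satisfy matches."""
        accepted = []
        for _ in range(trials):
            world = guess()              # sample from the prior over world states
            if matches(observe(world)):  # condition on the observed evidence
                accepted.append(world)
        return accepted

    # Toy usage: infer a coin's bias after observing 8 heads in 10 flips.
    def prior():
        return random.random()           # uniform prior on the bias

    def heads_in_ten(bias):
        return sum(random.random() < bias for _ in range(10))  # simulated head count

    samples = query(prior, heads_in_ten, matches=lambda heads: heads == 8)
    print(sum(samples) / len(samples))   # posterior mean of the bias, roughly 0.75

The point of the abstraction is that the same three ingredients (a prior program, an observation program, and a matching predicate) suffice to express a wide range of common-sense inferences as probabilistic conditioning.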
§1. Introduction. In his landmark paper Computing Machinery and Intelligence [Tur50], Alan Turing predicted that by the end of the twentieth century, “general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” Even if Turing has not yet been proven right, the idea of cognition as computation has emerged as a fundamental tenet of Artificial Intelligence (AI) and cognitive science. But what kind of computation—what kind of computer program—is cognition?
Abstract. We trace the emergence of unsolvable problems in algebra and topology from the unsolvable halting problem for Turing machines.
§1. Introduction. Mathematicians have always been interested in being able to calculate with or about the things they study. For instance, early developers of number theory and the calculus apparently did extensive calculations. By the early 1900s a number of problems were introduced asking for general algorithms to do certain calculations. In particular the tenth problem on Hilbert's influential list asked for an algorithm to determine whether an integer polynomial in several variables has an integer solution.
The introduction by Poincaré of the fundamental group as an invariant of a topological space which can often be finitely described by generators and relations led to Dehn's formulation of the word and isomorphism problems for groups. To make use of such group invariants we naturally want to calculate them and determine their properties. It turns out that many of these problems do not have algorithmic solutions, and we will trace the history and some of the ideas involved in showing that these natural mathematical problems are unsolvable.
In the 1930s several definitions of computable functions emerged together with the formulation of the Church-Turing Thesis that these definitions captured intuitive notions of computability. Church and independently Turing showed that there is no algorithm to determine which formulas of first-order logic are valid, that is, the Entscheidungsproblem is unsolvable.
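The unsolvability results traced in this paper ultimately rest on the halting problem, so it may help to recall the diagonal argument in schematic form (our own illustration, phrased in Python syntax; the decider halts is hypothetical and cannot actually be implemented):

    def halts(program_source: str, input_data: str) -> bool:
        """Hypothetical total decision procedure for the halting problem."""
        raise NotImplementedError("assumed for contradiction; no such procedure exists")

    def diagonal(program_source: str) -> None:
        # Behave oppositely to what the supposed decider predicts.
        if halts(program_source, program_source):
            while True:      # loop forever exactly when we are predicted to halt
                pass
        # otherwise halt immediately

    # Running diagonal on its own source would halt iff halts says it does not,
    # a contradiction; hence no such total decider exists.

Reductions from this problem are what yield the unsolvable word problems and the other algebraic and topological examples discussed in this paper.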
Abstract. The “Turing Model”, in the form of “Classical Computability Theory”, was generalized in various ways. This paper deals with generalizations where computations may be infinite. We discuss the original motivations for generalizing computability theory and three directions such generalizations took. One direction is the computability theory of ordinals and of admissible structures. We discuss why Post's problem was considered the test case for generalizations of this kind and briefly how the problem was approached. This direction started with metarecursion theory, and so did the computability theory of normal functionals. We survey the key results of the computability theory of normal functionals of higher types, and how, and why, this theory led to the discovery and development of set recursion. The third direction we survey is the computability theory of partial functionals of higher types, and we discuss how the contributions by Platek on the one hand and Kleene on the other led to typed algorithms of interest in Theoretical Computer Science. Finally, we will discuss possible ways to axiomatize parts of higher computability theory.
Throughout, we will discuss to what extent concepts like “finite” and “computably enumerable” may be generalized in more than one way for some higher models of computability.
§1. Introduction. In this paper we will survey what we may call higher analogues of the Turing model. The Turing model, in the most restricted interpretation of the term, consists of the Turing machines as a basis for defining computable functions, decidable languages, semi-decidable languages and so forth.
Abstract. We revisit the notion of a quantum Turing-machine, whose design is based on the laws of quantum mechanics. It turns out that such a machine is not more powerful, in the sense of computability, than the machine originally constructed by Turing. Quantum Turing-machines do not violate the Church–Turing thesis. The benefit of quantum computing lies in efficiency. Quantum computers appear to be more efficient, in time, than classical Turing-machines; however, their exact additional computational power is unclear, as this question ties in with deep open problems in complexity theory. We will sketch where BQP, the quantum analogue of the complexity class P, resides in the realm of complexity classes.
§1. Introduction. A decade before Turing developed his theory of computing, physicists struggled with the advent of quantum mechanics. During the famous 5th Solvay Conference in 1927 it was clear that a new era of physics had surfaced. Its strange features, like superposition and entanglement, still lead to heated discussions and much confusion. However strange and counter-intuitive, the theory has never been refuted by the experiments that are performed daily and in great numbers in laboratories around the world. Time after time the predictions of quantum mechanics are in full agreement with experiment.
Shortly after the advent of quantum mechanics, Church, Turing and Post developed the notion of computability [Chu36, Tur36, Pos36]. Less than 10 years later these formal ideas would be put into practice, resulting in the ENIAC, the first general-purpose machine.
Abstract. This article looks at the applications of Turing's legacy in computation, particularly to the theory of algorithmic randomness, where classical mathematical concepts such as measure could be made computational. It also traces Turing's anticipation of this theory in an early manuscript.
§1. Introduction. Beginning with the work of Church, Kleene, Post and particularly Turing, especially in the magic year of 1936, we know what computation means. Turing's theory has substantially developed under the names of recursion theory and computability theory. Turing's work can be seen as perhaps the high point in the confluence of ideas in 1936. This paper, and Turing's 1939 paper [141] (based on his PhD thesis of the same name), laid solid foundations for the pure theory of computation. This article gives a brief history of some of the main lines of investigation in computability theory, a major part of Turing's legacy.
Computability theory and its tools for classifying computational tasks have seen applications in many areas such as analysis, algebra, logic, computer science and the like. Such applications will be discussed in articles in this volume. The theory even has applications in what is thought of as proof theory, through what is called reverse mathematics. Reverse mathematics attempts to calibrate the logical strength of theorems of mathematics according to calibrations of comprehension axioms in second-order arithmetic. Generally speaking, most separations, that is, proofs that a theorem is true in one system but not another, are performed in normal “ω” models rather than nonstandard ones.
§1. Introduction. For most of its history, mathematics was algorithmic in nature. The geometric claims in Euclid's Elements fall into two distinct categories: “problems,” which assert that a construction can be carried out to meet a given specification, and “theorems,” which assert that some property holds of a particular geometric configuration. For example, Proposition 10 of Book I reads “To bisect a given straight line.” Euclid's “proof” gives the construction, and ends with the (Greek equivalent of) Q.E.F., for quod erat faciendum, or “that which was to be done.” Proofs of theorems, in contrast, end with Q.E.D., for quod erat demonstrandum, or “that which was to be shown”; but even these typically involve the construction of auxiliary geometric objects in order to verify the claim.
Similarly, algebra was devoted to developing algorithms for solving equations. This outlook characterized the subject from its origins in ancient Egypt and Babylon, through the ninth century work of al-Khwarizmi, to the solutions to the quadratic and cubic equations in Cardano's Ars Magna of 1545, and to Lagrange's study of the quintic in his Réflexions sur la résolution algébrique des équations of 1770.
The theory of probability, which was born in an exchange of letters between Blaise Pascal and Pierre de Fermat in 1654 and developed further by Christiaan Huygens and Jakob Bernoulli, provided methods for calculating odds related to games of chance.
“And when it comes to mathematics, you must realize that this is the human mind at the extreme limit of its capacity.”
(H. Robbins)
“ … so reduce the use of the brain and calculate!”
(E. W. Dijkstra)
“The fact that a brain can do it seems to suggest that the difficulties [of trying with a machine] may not really be so bad as they now seem.”
(A. Turing)
§1. Computer calculation.
1.1. A panorama of the status quo. Where stands the mathematical endeavor?
In 2012, many mathematical utilities are reaching consolidation. It is an age of large aggregates and large repositories of mathematics: the arXiv, Math Reviews, and euDML, which promises to aggregate the many European archives such as Zentralblatt Math and Numdam. Sage aggregates dozens of mathematically oriented computer programs under a single Python-scripted front-end.
Book sales in the U.S. have been dropping for the past several years. Instead, online sources such as Wikipedia and Math Overflow are rapidly becoming students' preferred math references. The Polymath blog organizes massive mathematical collaborations. Other blogs organize previously isolated researchers into new fields of research. The slow, methodical deliberations of referees in the old school are giving way; now in a single stroke, Tao blogs, gets feedback, and publishes.
Machine Learning is in its ascendancy. LogAnswer and Wolfram Alpha answer our elementary questions about the quantitative world; Watson, our Jeopardy questions.
§1. Introduction. In recent years there has emerged the study of discrete computational models which are allowed to act transfinitely. By ‘discrete’ we mean that the machine models considered are not analogue machines, but compute by means of distinct stages or in units of time. The paradigm of such models is, of course, Turing's original machine model. If we concentrate on this for a moment, the machine is considered to be running a program P perhaps on some natural number input n ∈ ℕ and is calculating P(n). Normally we say this is a successful computation if the machine halts after a finite number of stages and we may read off some designated form of output: ‘P(n)↓’. However, if the machine fails to halt after a finite time it may be exhibiting a variety of behaviours on its tape. Mathematically we may ask what happens ‘in the limit’ as the number of stages approaches ω. The machine may of course go haywire, and simply be rewriting a particular cell infinitely often, or else the Read/Write head may go ‘off to infinity’ as it moves inexorably down the tape. These kinds of considerations are behind the notion of ‘computation in the limit’ which we consider below.
Or, it may only rewrite finitely often to any cell on the tape, and leave something meaningful behind: an infinite string of 0s and 1s, and thus an element of Cantor space 2ℕ. What kind of elements could be there?
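One way to make ‘computation in the limit’ concrete (a sketch of ours, not from the text): call a 0-1 valued function f limit computable if there is a computable approximation g(x, s) with f(x) = lim_s g(x, s), where each value changes only finitely often as the stage s grows. The classic example approximates the halting problem by step-bounded simulation; the helper runs_within below is hypothetical shorthand for such a (computable) bounded simulator.

    def runs_within(program, input_data, steps):
        """Hypothetical helper: simulate program on input_data for at most
        steps steps and report whether it has halted by then. Unlike the full
        halting problem, this step-bounded check is computable."""
        ...

    def halting_guess(program, input_data, stage):
        # g(x, s): the stage-s guess at whether the program halts on the input.
        # For fixed arguments the guess changes at most once (from 0 to 1) as
        # the stage grows, and its limit is the true answer; this is the sense
        # in which the halting problem is computable in the limit.
        return 1 if runs_within(program, input_data, stage) else 0

This mirrors the tape behaviour just described: when each cell is rewritten only finitely often, the limit contents are well defined even though no single finite stage computes them.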
Abstract. In §1 we give a short overview for a general audience of Gödel, Church, Turing, and the discovery of computability in the 1930s. In the later sections we mention a series of our previous papers where a more detailed analysis of computability, Turing's work, and extensive lists of references can be found. The sections §2–§9 challenge the conventional wisdom and traditional ideas found in many books and papers on computability theory. They are based on a half century of my study of the subject beginning with Church at Princeton in the 1960s, and on a careful rethinking of these traditional ideas.
The references in all my papers and books are given in the format, author [year], as in Turing [1936], in order that the references are easily identified without consulting the bibliography and are uniform over all papers. A complete bibliography of historical articles from all my books and papers on computability is given on the page as explained in §10.
§1. A very brief overview of computability.
1.1. Hilbert's programs. Around 1880 Georg Cantor, a German mathematician, invented naive set theory. A small fraction of this is sometimes taught to elementary school children. It was soon discovered that this naive set theory was inconsistent because it allowed unbounded set formation, such as the set of all sets. David Hilbert, the world's foremost mathematician from 1900 to 1930, defended Cantor's set theory but suggested a formal axiomatic approach to eliminate the inconsistencies. He proposed two programs.
Synopsis: To prove soundness of the Verifiable C separation logic, we first give a model of mpred as pred(rmap), that is, predicates on resource maps. We give a model for permission-shares using trees of booleans. We augment the C light operational semantics with juicy memories that keep track of resources as well as “dry” values. We give a semantic model of the Hoare judgment, using the continuation-passing notion of “guards.” We use this semantic model to prove all the Hoare rules. Our model and proofs have a modular structure, so that they can be ported to other programming languages (especially in the CompCert family).