This chapter explores a topic at the intersection of two fields to which Alan Turing made fundamental contributions: the theory of computing and cryptography.
A main goal in cryptography is to prove the security of cryptographic schemes. This means that one wants to prove that the computational problem of breaking the scheme is infeasible, i.e., its solution requires an amount of computation beyond the reach of current and even foreseeable future technology. As cryptography is a mathematical science, one needs a (mathematical) definition of computation and of the complexity of computation. In modern cryptography, and more generally in theoretical computer science, the complexity of a problem is defined via the number of steps it takes for the best program on a universal Turing machine to solve the problem.
Unfortunately, for this general model of computation, no proofs of useful lower bounds on the complexity of a computational problem are known. However, if one considers a more restricted model of computation, which captures reasonable restrictions on the power of an algorithm, then very strong lower bounds can be proved. For example, one can prove an exponential lower bound on the complexity of computing discrete logarithms in a finite cyclic group, a key problem in cryptography, if one considers only so-called generic algorithms that cannot exploit the specific properties of the representation (as bit-strings) of the group elements.
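To make the generic model concrete, consider the classic baby-step giant-step method (an illustration added here, not part of the original text). It is generic in exactly the stated sense: it uses only group multiplications and equality tests, never the bit-string representation of elements, and its roughly √n group operations match the proven lower bound, which is exponential in the bit length of the group order n. A minimal Python sketch with toy parameters:

    import math

    # Baby-step giant-step: a generic discrete-log algorithm.  Finds x with
    # g^x = h in a cyclic group of order n using only group operations,
    # in about 2*sqrt(n) multiplications.  Toy parameters for illustration.
    def bsgs(g, h, p, n):
        """Solve g^x = h (mod p), where g has order n modulo the prime p."""
        m = math.isqrt(n) + 1
        baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j
        g_inv_m = pow(g, -m, p)                      # g^(-m) mod p (Python >= 3.8)
        gamma = h
        for i in range(m):                           # giant steps: h * g^(-i*m)
            if gamma in baby:
                return i * m + baby[gamma]
            gamma = (gamma * g_inv_m) % p
        raise ValueError("no discrete logarithm found")

    # 2 generates the multiplicative group modulo the prime 101 (order 100).
    p, g = 101, 2
    print(bsgs(g, pow(g, 37, p), p, 100))            # -> 37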
Introduction
The task set for the authors of articles in this volume was to write about a topic of (general) scientific interest related to Alan Turing's work. Here we present a topic at the intersection of computing theory and cryptography, two fields to which Turing contributed significantly. The concrete technical goal of this chapter is to introduce the issue of provable security in cryptography. The article is partly based on Maurer (2005).
Computation and information are the two most fundamental concepts in computer science, much like mass, energy, time, and space are fundamental concepts in physics. Understanding these concepts continues to be a primary goal of research in theoretical computer science. As witnessed by Turing's work, many of the underlying questions are of comparable intellectual depth to the fundamental questions in physics and mathematics, and are still far from being well understood.
In his important 1939 paper, Alan Turing introduced novel notions such as ordinal logics and oracle machines. These could be interpreted as possible ingredients of an approach to model human mathematical understanding in a way that goes beyond the conventional ideas of formal systems of axioms and rules of procedure. A hope appears to have been that in this way one might circumvent the limitations to formal reasoning that are revealed by Gödel's incompleteness theorems. In line with such aims, an idea of a cautious oracle device is here introduced (differing, in intention, from related ideas put forward by others previously), which is supposed to give accurate answers to mathematical questions whenever it claims to have an answer, but which may sometimes confess to being unable to provide an answer and sometimes continues trying indefinitely without success. Despite such devices seeming to be somewhat closer to human mathematical capabilities than appears to be provided by a standard Turing machine, or Turing oracle machine, they are still limited by being subject to a Gödel-type diagonalization argument. Although leaving open the question of what actual physical processes might underlie human mathematical insight, these arguments appear to indicate a significant constraint on any such hypothetical process.
Turing's ordinal logics
In early September 1955, I attended a lecture given by Max Newman on the topic of ordinal logic. I found the lecture to be one of the most fascinating that I ever attended. Alan Turing had died only a little over a year earlier and this talk was dedicated to him, being essentially based on Turing's 1939 paper on this topic. Newman also started his lecture by providing, as Turing had done in his paper, an introduction to Church's λ-calculus. It has been said that the somewhat limited initial impact that Turing's 1939 paper had on the mathematical community at that time may have been partly due to his phrasing the paper in terms of the λ-calculus, which is hard to employ in an explicit way and makes the reading difficult. Nonetheless, one of the things that did strike me particularly about Newman's lecture was the extraordinary economy of concept exhibited by Church's calculus.
We offer here some historical notes on the conceptual routes taken in the development of recursion theory over the last 60 years, and their possible significance for computational practice. These illustrate, incidentally, the vagaries to which mathematical ideas may be susceptible on the one hand, and – once keyed into a research program – their endless exploitation on the other.
At the hands primarily of mathematical logicians, the subject of effective computability, or recursion theory as it has come to be called (for historical reasons to be explained in the next section), has developed along several interrelated but conceptually distinctive lines. While this began with what were offered as analyses of the absolute limits of effective computability, the immediate primary aim was to establish negative results of the effective unsolvability of various problems in logic and mathematics. From this the subject turned to refined classifications of unsolvability for which a myriad of techniques were developed. The germinal step, conceptually, was provided by Turing's notion of computability relative to an ‘oracle’. At the hands of Post, this provided the beginning of the subject of degrees of unsolvability, which became a massive research program of great technical difficulty and combinatorial complexity. Less directly provided by Turing's notion, but implicit in it, were notions of uniform relative computability, which led to various important theories of recursive functionals. Finally the idea of computability has been relativized by extension, in various ways, to more or less arbitrary structures, leading to what has come to be called generalized recursion theory. Marching in under the banner of degree theory, these strands were to some extent woven together by the recursion theorists, but the trend has been to pull the subject of effective computability even farther away from questions of actual computation. The rise in recent years of computation theory as a subject with that as its primary concern forces a reconsideration of notions of computability theory both in theory and practice. Following the historical sections, I shall make the case for the primary significance for practice of the various notions of relative (rather than absolute) computability, but not of most methods or results obtained thereby in recursion theory.
Alan Turing's exploits in code-breaking, philosophy, artificial intelligence and the foundations of computer science are by now well known to many. Less well known is that Turing was also interested in number theory, in particular the distribution of prime numbers and the Riemann hypothesis. These interests culminated in two programs that he implemented on the Manchester Mark 1 (see Figure 3.1), the first stored-program digital computer, during its 18 months of operation in 1949–1950. Turing's efforts in this area were modest, and one should be careful not to overstate their influence. However, one cannot help but see in these investigations the beginning of the field of computational number theory, bearing a close resemblance to active problems in the field today despite a gap of 60 years. We can also perceive, in hindsight, some striking connections to Turing's other areas of interests, in ways that might have seemed far-fetched in his day. This chapter will attempt to explain the two problems in detail, including their early history, Turing's contributions, some developments since the 1950s, and speculation for the future.
Prime numbers
People have been interested in prime numbers since at least the ancient Greeks. Around 300 BC, Euclid recorded a proof that there are infinitely many of them (Narkiewicz, 2000, §1.1.2). His proof, still one of the most elegant in all mathematics, can be expressed as an algorithm:
(1) Write down some prime numbers.
(2) Multiply them together and add 1; call the result n.
(3) Find a prime factor of n.
For instance, if we know that 2, 5 and 11 are all prime then, applying the algorithm with these numbers, we get n = 2 × 5 × 11 + 1 = 111, which is divisible by the prime 3. By an earlier theorem in Euclid's Elements, the number n computed in step (2) must have a prime factor (and in fact it can be factored uniquely into a product of primes by the Fundamental Theorem of Arithmetic), so step (3) is always possible. On the other hand, from the way that Euclid constructs the number n, the prime factor found in step (3) cannot be any prime written down in step (1). Thus, no list of primes can be complete, i.e. there are infinitely many of them.
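Euclid's argument really is executable. Here is a short Python sketch of the three steps, using trial division for step (3) (an illustration only; any factoring method would do):

    def euclid_step(primes):
        """Given a list of primes, return a prime not in the list."""
        n = 1
        for p in primes:          # step (2): multiply them together...
            n *= p
        n += 1                    # ...and add 1
        d = 2                     # step (3): find a prime factor of n
        while d * d <= n:
            if n % d == 0:
                return d          # the smallest divisor > 1 is prime
            d += 1
        return n                  # no divisor up to sqrt(n): n itself is prime

    print(euclid_step([2, 5, 11]))   # 2*5*11 + 1 = 111 = 3 * 37, so -> 3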
Part Three: The Reverse Engineering Road to Computing Life
By Philip K. Maini, Thomas E. Woolley, Eamonn A. Gaffney and Ruth E. Baker, Mathematical Institute, Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, UK
Elucidating the mechanisms underlying the formation of structure and form is one of the great challenges in developmental biology. From an initial, seemingly spatially uniform mass of cells emerge the spectacular patterns that characterise the animal kingdom – butterfly wing patterns, animal coat markings, skeletal structures, skin organs, horns, etc. (Figure 9.1). Although genes obviously play a key role, the study of genetics alone does not tell us why certain genes are switched on or off in specific places and how the properties they impart to cells result in the highly coordinated emergence of pattern and form. Modern genomics has revealed remarkable molecular similarity among different animal species. Specifically, biological diversity typically emerges from differences in regulatory DNA rather than in detailed protein-coding sequences. This implicit universality highlights that many aspects of animal development can be understood from studies of exemplar species such as fruit flies and zebrafish, while also motivating theoretical studies to explore and understand the underlying common mechanisms beyond a simply descriptive level.
However, when Alan Turing wrote his seminal paper, ‘The chemical basis of morphogenesis’ (Turing, 1952), such observations were many decades away. At that time biology was following a very traditional classification route of list-making activities. Indeed, there was very little theory regarding development other than D'Arcy Thompson's classic 1917 work (see Thompson, 1992, for the abridged version) exploring how biological forms arose, though even this was still very much at the descriptive rather than the mechanistic level.
Undeterred, Turing started exploring the question of how developmental systems might undertake symmetry-breaking and thus create and amplify structure from seeming uniformity. For example, if one looks at a cross-section of a tree trunk, it has circular symmetry which is broken when a branch starts to grow outwards. Turing proposed an underlying mechanism explaining how asymmetric structure could emerge dynamically, without innate hardwiring. In particular, he described how a symmetric pattern, for instance of a growth hormone, could break up so that more hormone was concentrated on one part of the circle, thus inducing extra growth there.
In order to achieve such behaviour Turing came up with a truly ingenious theory. He considered a system of chemicals reacting with each other and assumed that in the well-mixed case (no spatial heterogeneities) this system exhibited an equilibrium (steady) state which was stable.
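The mechanism is easy to demonstrate numerically. The sketch below (added for illustration; a standard Schnakenberg-type activator–inhibitor system with illustrative parameter values, not Turing's original equations) has a well-mixed steady state that is stable, yet a large difference in diffusion rates lets small random perturbations grow into a stationary spatial pattern:

    import numpy as np

    # A minimal 1D Schnakenberg-type reaction-diffusion system:
    #   du/dt = Du * u_xx + a - u + u^2 v
    #   dv/dt = Dv * v_xx + b - u^2 v
    # The well-mixed steady state (u*, v*) = (a+b, b/(a+b)^2) is stable, but
    # the large diffusion ratio Dv/Du destabilises it (a Turing instability).
    a, b = 0.1, 0.9
    Du, Dv = 1.0, 40.0
    N, dx, dt, steps = 100, 1.0, 0.01, 50_000

    rng = np.random.default_rng(0)
    u = (a + b) + 0.01 * rng.standard_normal(N)      # steady state + noise
    v = b / (a + b) ** 2 + 0.01 * rng.standard_normal(N)

    def lap(w):
        # discrete Laplacian with periodic boundaries
        return (np.roll(w, 1) - 2 * w + np.roll(w, -1)) / dx**2

    for _ in range(steps):
        uv2 = u * u * v
        u = u + dt * (Du * lap(u) + a - u + uv2)
        v = v + dt * (Dv * lap(v) + b - uv2)

    # u now alternates between peaks and troughs: the symmetry has broken.
    print(np.round(u[::10], 2))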
We study the persistence of network segregation in networks characterized by the co-evolution of vertex attributes and link structures, in particular where individual vertices form linkages on the basis of similarity with other network vertices (homophily), and where vertex attributes diffuse across linkages, making connected vertices more similar over time (influence). A general mathematical model of these processes is used to examine the relative roles of homophily and influence in the maintenance and decay of network segregation in self-organizing networks. While prior work has shown that homophily is capable of producing strong network segregation when attributes are fixed, we show that adding even minute levels of influence is sufficient to overcome the tendency towards segregation even in the presence of relatively strong homophily processes. This result is proven mathematically for all large networks and illustrated through a series of computational simulations that account for additional network evolution processes. This research contributes to a better theoretical understanding of the conditions under which network segregation and related phenomena—such as community structure—may emerge, which has implications for the design of interventions that may promote more efficient network structures.
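The qualitative effect is easy to reproduce in a toy simulation (a construction of this edition for illustration, not the paper's model): agents rewire links toward similar agents while attributes drift toward neighbours' values. With influence switched off the network sorts; with even a small influence rate, the attribute differences that homophily feeds on dissolve:

    import numpy as np

    # Toy sketch: each of n agents holds a scalar attribute and k outgoing
    # links.  Homophily: rewire a link when a random candidate is more similar
    # than the current neighbour.  Influence: drift toward a neighbour's value.
    def run(influence, n=100, k=5, steps=20_000, seed=1):
        rng = np.random.default_rng(seed)
        attr = rng.random(n)
        nbrs = rng.integers(0, n, size=(n, k))
        for _ in range(steps):
            i = rng.integers(n)
            slot, cand = rng.integers(k), rng.integers(n)
            if cand != i and abs(attr[i] - attr[cand]) < abs(attr[i] - attr[nbrs[i, slot]]):
                nbrs[i, slot] = cand                     # homophilous rewiring
            j = nbrs[i, rng.integers(k)]
            attr[i] += influence * (attr[j] - attr[i])   # social influence
        local = np.abs(attr[:, None] - attr[nbrs]).mean()      # to neighbours
        globl = np.abs(attr[:, None] - attr[None, :]).mean()   # to everyone
        return 1 - local / globl    # ~1: strongly sorted, ~0: unsorted

    print(f"homophily only:        segregation = {run(0.0):.2f}")
    print(f"homophily + influence: segregation = {run(0.02):.2f}")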
We outline an intuitionistic view of knowledge which maintains the original Brouwer–Heyting–Kolmogorov semantics for intuitionism and is consistent with the well-known approach that intuitionistic knowledge be regarded as the result of verification. We argue that on this view coreflection A → KA is valid and the factivity of knowledge holds in the form KA → ¬¬A: ‘known propositions cannot be false’.
We show that the traditional form of factivity KA → A is a distinctly classical principle which, like tertium non datur A ∨ ¬A, does not hold intuitionistically, but, along with the whole of classical epistemic logic, is intuitionistically valid in its double negation form ¬¬(KA → A).
Within the intuitionistic epistemic framework the knowability paradox is resolved in a constructive manner. We argue that this paradox is the result of an unwarranted classical reading of constructive principles and as such does not have the consequences for constructive foundations traditionally attributed to it.
We give a new combinatorial interpretation of the stationary distribution of the (partially) asymmetric exclusion process on a finite number of sites in terms of decorated alternative trees and coloured permutations. The corresponding expressions of the multivariate partition functions are then related to multivariate generalisations of Eulerian polynomials for coloured permutations considered recently by N. Williams and the third author, and others. We also discuss stability and negative dependence properties satisfied by the partition functions.
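For readers unfamiliar with the model, the object being interpreted can be computed directly for small systems. The following sketch (illustrative parameters; nothing here reproduces the paper's combinatorics) builds the generator of the open-boundary ASEP on three sites and extracts its stationary distribution numerically:

    import itertools
    import numpy as np

    # Open-boundary ASEP on n sites: particles enter on the left at rate
    # alpha, hop right at rate 1, hop left at rate q, exit at rate beta.
    n, alpha, beta, q = 3, 1.0, 1.0, 0.5

    states = list(itertools.product([0, 1], repeat=n))
    idx = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))

    def add(src, dst, rate):
        Q[idx[src], idx[dst]] += rate
        Q[idx[src], idx[src]] -= rate

    for s in states:
        if s[0] == 0:                      # entry at the left boundary
            add(s, (1,) + s[1:], alpha)
        if s[-1] == 1:                     # exit at the right boundary
            add(s, s[:-1] + (0,), beta)
        for i in range(n - 1):             # bulk hops
            if s[i] == 1 and s[i + 1] == 0:
                t = list(s); t[i], t[i + 1] = 0, 1
                add(s, tuple(t), 1.0)      # rightward, rate 1
            if s[i] == 0 and s[i + 1] == 1:
                t = list(s); t[i], t[i + 1] = 1, 0
                add(s, tuple(t), q)        # leftward, rate q

    # Stationary distribution: left null vector of Q, normalised to sum to 1.
    w, v = np.linalg.eig(Q.T)
    pi = np.real(v[:, np.argmin(np.abs(w))])
    pi /= pi.sum()
    for s, p in zip(states, pi):
        print(s, round(p, 4))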
We understand a socio-technical system (STS) as a cyber-physical system in which two or more autonomous parties interact via or about technical elements, including the parties’ resources and actions. As information technology begins to pervade every corner of human life, STSs are becoming ever more common, and the challenge of governing STSs is becoming increasingly important. We advocate a normative basis for governance, wherein norms represent the standards of correct behaviour that each party in an STS expects from others. A major benefit of focussing on norms is that they provide a socially realistic view of interaction among autonomous parties that abstracts low-level implementation details. Overlaid on norms is the notion of a sanction as a negative or positive reaction to potentially any violation of or compliance with an expectation. Although norms have been well studied as regards governance for STSs, sanctions have not. Our understanding and usage of norms is inadequate for the purposes of governance unless we incorporate a comprehensive representation of sanctions.
We address the aforementioned gap by proposing (i) a sanction typology that reflects the relevant features of sanctions, and (ii) a conceptual sanctioning process model providing a functional structure for sanctioning in STSs. We demonstrate our contributions via a motivating scenario from the domain of renewable energy trading.
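To give a flavour of what a sanction typology might look like in code, here is a purely hypothetical Python sketch; the dimensions and names below are illustrative stand-ins, not the typology proposed in the paper:

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical typology dimensions (illustration only).
    class Polarity(Enum):
        POSITIVE = "reward"          # reaction to compliance
        NEGATIVE = "punishment"      # reaction to violation

    class Source(Enum):
        FORMAL = "institutional"     # issued by the governing organisation
        INFORMAL = "peer"            # issued by other participants

    @dataclass
    class Sanction:
        target: str        # party being sanctioned
        norm: str          # the expectation violated or complied with
        polarity: Polarity
        source: Source
        magnitude: float   # strength of the reaction

    # Example: a peer-issued penalty for missing a delivery commitment.
    s = Sanction(target="prosumer-42", norm="deliver-energy-by-noon",
                 polarity=Polarity.NEGATIVE, source=Source.INFORMAL,
                 magnitude=0.3)
    print(s)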
This article presents the first study on using a parallel corpus to teach Cantonese, the variety of Chinese spoken in Hong Kong. We evaluated this approach with Mandarin-speaking undergraduate students at the beginner level. Exploiting their knowledge of Mandarin, a closely related language, the students studied Cantonese with authentic material in a Cantonese–Mandarin parallel corpus, transcribed from television programs. They were given a list of Mandarin words that yield a range of possible Cantonese translations, depending on the linguistic context. Leveraging sentence and word alignments in the parallel corpus, the students independently searched for example sentences to discover these translation equivalents. Experimental results showed that, in both the short and long term, this data-driven learning approach helped students improve their knowledge of Cantonese vocabulary. These results suggest the potential of applying parallel corpora even at the beginner level for other L1–L2 pairs of closely related languages.
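The core lookup the students performed is simple to sketch in Python, assuming the corpus is stored as sentence-aligned (Mandarin, Cantonese) pairs; the two example sentences below are invented for illustration, not drawn from the corpus used in the study:

    # Hypothetical sentence-aligned pairs; the real corpus is transcribed
    # television dialogue with word-level alignments as well.
    corpus = [
        ("他在看电视。", "佢喺度睇紧电视。"),
        ("请看这本书。", "请睇呢本书。"),
    ]

    def lookup(mandarin_word):
        """Return aligned pairs whose Mandarin side contains the query word."""
        return [(m, c) for (m, c) in corpus if mandarin_word in m]

    # A learner queries Mandarin 看 and compares its Cantonese renderings
    # (here 睇) in context across the retrieved examples.
    for mandarin, cantonese in lookup("看"):
        print(mandarin, "->", cantonese)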
Graphics Processing Units (GPUs) offer potential for very high performance; they are also rapidly evolving. Obsidian is an embedded language (in Haskell) for implementing high performance kernels to be run on GPUs. We would like to have our cake and eat it too; we want to raise the level of abstraction beyond CUDA code and still give the programmer control over the details relevant to kernel performance. To that end, Obsidian provides array representations that guarantee elimination of intermediate arrays while also using the type system to model the hierarchy of the GPU. Operations are compiled very differently depending on what level of the GPU they target, and as a result, the user is gently constrained to write code that matches the capabilities of the GPU. Thus, we implement not Nested Data Parallelism, but a more limited form that we call Hierarchical Data Parallelism. We walk through case-studies that demonstrate how to use Obsidian for rapid design exploration or auto-tuning, resulting in performance that compares well to the hand-tuned kernels used in Accelerate and NVIDIA Thrust.
Reliability is set to become a major concern on emergent large-scale architectures. While there are many parallel languages, and indeed many parallel functional languages, very few address reliability. The notable exception is the widely emulated Erlang distributed actor model that provides explicit supervision and recovery of actors with isolated state. We investigate scalable transparent fault tolerant functional computation with automatic supervision and recovery of tasks. We do so by developing HdpH-RS, a variant of the Haskell distributed parallel Haskell (HdpH) DSL with Reliable Scheduling. Extending the distributed work stealing protocol of HdpH for task supervision and recovery is challenging. To eliminate elusive concurrency bugs, we validate the HdpH-RS work stealing protocol using the SPIN model checker. HdpH-RS differs from the actor model in that its principal entities are tasks, i.e. independent stateless computations, rather than isolated stateful actors. Thanks to statelessness, fault recovery can be performed automatically and entirely hidden in the HdpH-RS runtime system. Statelessness is also key for proving a crucial property of the semantics of HdpH-RS: fault recovery does not change the result of the program, akin to deterministic parallelism. HdpH-RS provides a simple distributed fork/join-style programming model, with minimal exposure of fault tolerance at the language level, and a library of higher level abstractions such as algorithmic skeletons. In fact, the HdpH-RS DSL is exactly the same as the HdpH DSL, hence users can opt in or out of fault tolerant execution without any refactoring. Computations in HdpH-RS are always as reliable as the root node, no matter how many nodes and cores are actually used. We benchmark HdpH-RS on conventional clusters and a High Performance Computing (HPC) platform: all benchmarks survive Chaos Monkey random fault injection; the system scales well, e.g. up to 1,400 cores on the HPC platform; reliability and recovery overheads are consistently low even at scale.
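The key property, that re-executing a stateless task cannot change the program's result, is easy to illustrate outside Haskell. A toy Python sketch of supervised task execution (an illustration of the idea only, not HdpH-RS's API):

    import random
    from concurrent.futures import ThreadPoolExecutor

    # A stateless task that fails is simply re-submitted; because it carries
    # no state, the recovered result is identical to the lost attempt's.
    def supervised(pool, task, *args, retries=10):
        for _ in range(retries):
            try:
                return pool.submit(task, *args).result()
            except Exception:        # stand-in for a worker/node failure
                continue             # recovery = transparent re-submission
        raise RuntimeError("task failed on every attempt")

    def flaky_square(x):
        if random.random() < 0.3:    # simulate random fault injection
            raise ConnectionError("worker died")
        return x * x

    with ThreadPoolExecutor() as pool:
        print([supervised(pool, flaky_square, x) for x in range(5)])
        # deterministic output despite random faults: [0, 1, 4, 9, 16]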
The counting and (upper) mass dimensions of a set A ⊆ $\mathbb{R}^d$ are
$$D(A) = \limsup_{\|C\| \to \infty} \frac{\log | \lfloor A \rfloor \cap C |}{\log \|C\|}, \quad \smash{\overline{D}}\vphantom{D}(A) = \limsup_{\ell \to \infty} \frac{\log | \lfloor A \rfloor \cap [-\ell,\ell)^d |}{\log (2 \ell)},$$
where ⌊A⌋ denotes the set of elements of A rounded down in each coordinate and where the limit supremum in the counting dimension is taken over cubes C ⊆ $\mathbb{R}^d$ with side length ‖C‖ → ∞. We give a characterization of the counting dimension via coverings, in which the infimum is taken over cubic coverings {Cᵢ} of A ∩ C. Then we prove Marstrand-type theorems for both dimensions. For example, almost all images of A ⊆ $\mathbb{R}^d$ under orthogonal projections with range of dimension k have counting dimension at least min(k, D(A)); if we assume $D(A) = \overline{D}(A)$, then the mass dimension of the image of A under a typical orthogonal projection is equal to min(k, D(A)). This work extends recent work of Y. Lima and C. G. Moreira.
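As a quick sanity check on the definition (an example added here, not from the paper), the set of perfect squares has mass dimension 1/2 in ℝ¹, since |A ∩ [−ℓ, ℓ)| grows like √ℓ:

    import math

    # A = {n^2 : n >= 0}.  |A ∩ [-ℓ, ℓ)| = isqrt(ℓ-1) + 1, so the defining
    # ratio log|A ∩ [-ℓ, ℓ)| / log(2ℓ) should tend to 1/2 as ℓ grows.
    for exp in (3, 6, 9, 12):
        ell = 10 ** exp
        count = math.isqrt(ell - 1) + 1      # squares in [0, ℓ), including 0
        print(f"l = 10^{exp}: ratio = {math.log(count) / math.log(2 * ell):.3f}")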
In previous work, we proposed a logic-based framework in which computation is the execution of actions in an attempt to make reactive rules of the form if antecedent then consequent true in a canonical model of a logic program determined by an initial state, sequence of events, and the resulting sequence of subsequent states. In this model-theoretic semantics, reactive rules are the driving force, and logic programs play only a supporting role. In the canonical model, states, actions, and other events are represented with timestamps. But in the operational semantics (OS), for the sake of efficiency, timestamps are omitted and only the current state is maintained. State transitions are performed reactively by executing actions to make the consequents of rules true whenever the antecedents become true. This OS is sound, but incomplete. It cannot make reactive rules true by preventing their antecedents from becoming true, or by proactively making their consequents true before their antecedents become true. In this paper, we characterize the notion of reactive model, and prove that the OS can generate all and only such models. In order to focus on the main issues, we omit the logic programming component of the framework.
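A toy sketch of this operational style (a construction of this edition, not the authors' framework): only the current state is kept, and whenever a rule's antecedent becomes true, an action is executed to make its consequent true. All fact and rule names are hypothetical:

    # A state is the set of facts currently true.
    State = set

    # Reactive rules as (antecedent test, consequent-producing action) pairs.
    rules = [
        (lambda s: "order-received" in s,
         lambda s: s | {"invoice-sent"}),
        (lambda s: "invoice-sent" in s and "payment-made" in s,
         lambda s: s | {"goods-shipped"}),
    ]

    def step(state: State, event: str) -> State:
        """Assimilate an event, then react to make rule consequents true."""
        state = state | {event}
        for antecedent, action in rules:
            if antecedent(state):
                state = action(state)   # react: make the consequent true
        return state

    state: State = set()
    for event in ["order-received", "payment-made"]:
        state = step(state, event)
    print(sorted(state))   # all reactive rules hold in the final state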