This book is devoted to five main principles of algorithm design: divide and conquer, greedy algorithms, thinning, dynamic programming, and exhaustive search. These principles are presented using Haskell, a purely functional language, leading to simpler explanations and shorter programs than would be obtained with imperative languages. Carefully selected examples, both new and standard, reveal the commonalities and highlight the differences between algorithms. The algorithm developments use equational reasoning where applicable, clarifying the applicability conditions and correctness arguments. Every chapter concludes with exercises (nearly 300 in total), each with complete answers, allowing the reader to consolidate their understanding and apply the techniques to a range of problems. The book serves students (both undergraduate and postgraduate), researchers, teachers, and professionals who want to know more about what goes into a good algorithm and how such algorithms can be expressed in purely functional terms.
The last two decades have seen a wave of exciting new developments in the theory of algorithmic randomness and its applications to other areas of mathematics. This volume surveys much of the recent work that has not been included in published volumes until now. It contains a range of articles on algorithmic randomness and its interactions with closely related topics such as computability theory and computational complexity, as well as wider applications in areas of mathematics including analysis, probability, and ergodic theory. In addition to being an indispensable reference for researchers in algorithmic randomness, the unified view of the theory presented here makes this an excellent entry point for graduate students and other newcomers to the field.
We present an overview of higher randomness and its recent developments. After an introduction, the second section provides some background on higher computability, presenting in particular $\Pi^1_1$ and $\Sigma^1_1$ sets from the viewpoint of the computability theorist. In the third section we give an overview of the different higher randomness classes: $\Delta^1_1$-randomness, $\Pi^1_1$-Martin-Löf randomness, higher weak-2 randomness, higher difference randomness, and $\Pi^1_1$-randomness. We then study each of these classes, separating them and inspecting their respective lowness classes. We pay particular attention to $\Pi^1_1$-Martin-Löf randomness and $\Pi^1_1$-randomness: the former is the higher analogue of the best-known and most-studied class in classical algorithmic randomness, and we show in particular how the main classical randomness theorems lift to the higher setting by incorporating continuity into higher reductions and relativisations. The latter, as we will see, has many remarkable properties and no analogue in classical randomness. Finally, in the eighth section we study randomness with respect to a higher hierarchy of set complexity, motivated by the notion of higher weak-2 randomness, and we show that this hierarchy eventually collapses.
This chapter explains functions and modules. Functions perform some operation; they can operate directly on data or indirectly via other functions. The different kinds of functions, and their finer points, are all explained. The chapter also treats modules, pieces of code packaged so that you or other programmers can reuse them. It shows how to use existing modules in your own code and how to write your own modules for others. Finally, the chapter covers the proper way to document your functions and modules.
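A minimal sketch of these ideas (the module and function names here are illustrative, not the chapter's own examples): a small module is just a file of documented functions that other scripts can import.

    # functions.py -- an illustrative module (hypothetical name)

    def count_vowels(word, vowels="aeiou"):
        """Return how many vowel letters appear in word.

        A triple-quoted docstring like this one is the standard
        way to document a Python function.
        """
        return sum(1 for ch in word.lower() if ch in vowels)

    # Any other script in the same directory can reuse this code:
    #     import functions
    #     print(functions.count_vowels("linguistics"))   # -> 4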
In this introductory survey, we provide an overview of the major developments of algorithmic randomness with an eye towards the historical development of the discipline. First we give a brief introduction to computability theory and the underlying mathematical concepts that later appear in the survey. Next we selectively cover four broad periods in which the primary developments in algorithmic randomness occurred: (1) the mid-1960s to mid-1970s, in which the main definitions of algorithmic randomness were laid out and the basic properties of random sequences were established; (2) the 1980s through the 1990s, which featured intermittent but important work from a handful of researchers; (3) the 2000s, during which there was an explosion of results as the discipline matured into a fully fledged subbranch of computability theory; and (4) the early 2010s, in which ties between algorithmic randomness and other subfields of mathematics were discovered. The aim of this survey is to provide a point of entry for newcomers to the field and a useful reference for practitioners.
This chapter reviews the basic control structures of the Python language. These are used to lay out the logic of a program so that it can make infinite use of finite means. The structures include if, for, while, and the like.
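A short sketch (not drawn from the chapter itself) shows these structures working together:

    words = ["cat", "syllable", "phoneme", "a"]

    for word in words:             # iterate over a sequence
        if len(word) > 3:          # branch on a condition
            print(word, "is long")
        else:
            print(word, "is short")

    n = 0
    while n < len(words):          # repeat until a condition fails
        n += 1
    print("counted", n, "words")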
The halting probability of a Turing machine was introduced by Chaitin, who also proved that it is an algorithmically random real number and named it Omega. Since his seminal work, many popular expositions have appeared, mainly focusing on the metamathematical or philosophical significance of this number (or arguing against it). At the same time, a rich mathematical theory exploring the properties of Chaitin's Omega has been developing in various technical papers, quietly revealing the significance of this number for many aspects of contemporary algorithmic information theory. The purpose of this survey is to expose these developments and tell a story about Omega that outlines its multifaceted mathematical properties and its roles in algorithmic randomness.
This chapter outlines the book and lays out important preliminaries. First, it addresses why linguists need to know how to program. Second, it explains why Python is a good choice for a programming language. Third, it describes how this book is different from others. Finally, it outlines the text and explains how to get the most out of it.
This chapter covers how to get and use data from the internet. It covers methods for parsing webpages and for handling different character encodings. It also treats parallel processing to some extent, since it has real consequences when dealing with remote data.
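A minimal sketch of fetching a page while respecting its declared character encoding, using only the standard library (the URL is a placeholder, not an example from the chapter):

    from urllib.request import urlopen

    with urlopen("https://example.com/") as resp:
        # Servers may declare the character encoding in their headers;
        # fall back to UTF-8 when they do not.
        charset = resp.headers.get_content_charset() or "utf-8"
        html = resp.read().decode(charset)

    print(html[:200])   # the first 200 characters of the decoded page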
This appendix gives an overview of the Natural Language Toolkit (NLTK), a useful and widely used module for manipulating natural language data. The topics of corpora, tagging, and tokenizing are all covered.
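A brief sketch of the tokenizing and tagging basics mentioned above; the models must first be fetched with nltk.download(), and the exact resource names may vary between NLTK versions:

    import nltk

    nltk.download("punkt")                        # tokenizer models
    nltk.download("averaged_perceptron_tagger")   # part-of-speech tagger

    text = "Colorless green ideas sleep furiously."
    tokens = nltk.word_tokenize(text)   # tokenizing
    tagged = nltk.pos_tag(tokens)       # tagging
    print(tagged)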
This is a survey of constructive and computable measure theory with an emphasis on the close connections with algorithmic randomness. We give a brief history of constructive measure theory from Brouwer to the present, emphasizing how Schnorr randomness is the randomness notion implicit in the work of Brouwer, Bishop, Demuth, and others. We survey a number of recent results showing that classical almost everywhere convergence theorems can be used to characterize many of the common randomness notions including Schnorr randomness, computable randomness, and Martin-Löf randomness. Last, we go into more detail about computable measure theory, showing how all the major approaches are basically equivalent (even though the definitions can vary greatly).
In this survey, we lay out the central results in the study of algorithmic randomness with respect to biased probability measures. The first part of the survey covers biased randomness with respect to computable measures. The central technique in this area is the transformation of random sequences via certain randomness-preserving Turing functionals, which can be used to induce non-uniform probability measures. The second part of the survey covers biased randomness with respect to non-computable measures, with an emphasis on the work of Reimann and Slaman on the topic, as well as the contributions of Miller and Day in developing Levin's notion of a neutral measure. We also discuss blind randomness as well as van Lambalgen's theorem for both computable and non-computable measures. As there is no currently available source covering all of these topics, this survey fills a notable gap in the algorithmic randomness literature.
This chapter introduces object-oriented programming and explains how to make use of it in Python. It covers the basic syntax of defining and using objects. It also introduces the object inheritance system and closes with an extended example of object-oriented syllable structure.
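In the spirit of that closing example, a hypothetical sketch (the class and attribute names are illustrative, not the book's own code) of defining and inheriting from a class:

    class Syllable:
        def __init__(self, onset, nucleus, coda):
            self.onset = onset       # consonants before the vowel
            self.nucleus = nucleus   # the vowel
            self.coda = coda         # consonants after the vowel

        def __str__(self):
            return self.onset + self.nucleus + self.coda

    class OpenSyllable(Syllable):
        # Inheritance: an open syllable is a syllable with no coda.
        def __init__(self, onset, nucleus):
            super().__init__(onset, nucleus, "")

    print(OpenSyllable("b", "a"))   # prints: ba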
This chapter introduces input and output – how we get data into and out of our programs. The input modes include the command line, file input, pipes, and typed input. (A later chapter treats graphical user interface input.) The output modes include the command line, file output, and pipes.
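A compact sketch of these modes using only the standard library (the filenames are placeholders; sys.stdin serves both piped and typed input):

    import sys

    print("arguments:", sys.argv[1:])                    # command-line input

    with open("in.txt", encoding="utf-8") as f:          # file input
        text = f.read()
    with open("out.txt", "w", encoding="utf-8") as g:    # file output
        g.write(text)

    for line in sys.stdin:                               # piped or typed input
        sys.stdout.write(line.upper())                   # console or pipe output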