This appendix gives an overview of the Natural Language Toolkit (NLTK), a widely used Python library for manipulating natural language data. It covers corpora, tagging, and tokenizing.
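As a brief illustration of tokenizing and tagging with NLTK, here is a minimal sketch; the resource names passed to nltk.download are the commonly used ones but may vary across NLTK versions.

```python
import nltk

# Tokenizers and taggers rely on data packages downloaded separately from the
# nltk package itself; the resource names below are the usual ones.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

text = "NLTK makes it easy to tokenize and tag natural language text."
tokens = nltk.word_tokenize(text)   # split the sentence into word tokens
tagged = nltk.pos_tag(tokens)       # attach a part-of-speech tag to each token
print(tagged)
```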
This is a survey of constructive and computable measure theory with an emphasis on the close connections with algorithmic randomness. We give a brief history of constructive measure theory from Brouwer to the present, emphasizing how Schnorr randomness is the randomness notion implicit in the work of Brouwer, Bishop, Demuth, and others. We survey a number of recent results showing that classical almost everywhere convergence theorems can be used to characterize many of the common randomness notions, including Schnorr randomness, computable randomness, and Martin-Löf randomness. Finally, we go into more detail about computable measure theory, showing how all the major approaches are essentially equivalent (even though the definitions can vary greatly).
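For reference, the test-based definitions alluded to here can be stated as follows; this is the standard formulation, not one specific to the survey.

```latex
% Martin-Löf and Schnorr tests over Cantor space (standard definitions).
A \emph{Martin-L\"of test} is a uniformly c.e.\ sequence $(U_n)_{n \in \mathbb{N}}$
of open subsets of $2^{\mathbb{N}}$ with $\mu(U_n) \le 2^{-n}$ for all $n$.
A sequence $X$ is \emph{Martin-L\"of random} if $X \notin \bigcap_n U_n$ for every such test.
A \emph{Schnorr test} additionally requires the measures $\mu(U_n)$ to be uniformly computable,
and Schnorr randomness is defined by passing every Schnorr test.
```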
In this survey, we lay out the central results in the study of algorithmic randomness with respect to biased probability measures. The first part of the survey covers biased randomness with respect to computable measures. The central technique in this area is the transformation of random sequences via certain randomness-preserving Turing functionals, which can be used to induce non-uniform probability measures. The second part of the survey covers biased randomness with respect to non-computable measures, with an emphasis on the work of Reimann and Slaman on the topic, as well as the contributions of Miller and Day in developing Levin's notion of a neutral measure. We also discuss blind randomness as well as van Lambalgen's theorem for both computable and non-computable measures. As there is no currently available source covering all of these topics, this survey fills a notable gap in the algorithmic randomness literature.
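The induced-measure construction mentioned above can be stated in its standard form as follows (a generic statement, not taken verbatim from the survey).

```latex
% Pushforward of the uniform measure under a Turing functional (standard construction).
Let $\Phi$ be a Turing functional that is total on a set of uniform ($\lambda$) measure one.
The induced measure $\mu_\Phi$ on $2^{\mathbb{N}}$ is defined on cylinders by
\[
  \mu_\Phi(\llbracket \sigma \rrbracket) \;=\; \lambda\bigl(\{ X \in 2^{\mathbb{N}} : \sigma \preceq \Phi(X) \}\bigr),
\]
and preservation theorems of this kind show that $\Phi$ maps $\lambda$-random inputs
to $\mu_\Phi$-random outputs.
```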
This chapter introduces object-oriented programming and explains how to make use of it in Python. It covers the basic syntax of defining and using objects. It also introduces the object inheritance system and closes with an extended example that models syllable structure with objects.
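A minimal sketch of class definition and inheritance in Python; the syllable classes below are hypothetical illustrations, not the chapter's actual example.

```python
class Syllable:
    """A minimal syllable with an onset, nucleus, and optional coda."""

    def __init__(self, onset, nucleus, coda=""):
        self.onset = onset
        self.nucleus = nucleus
        self.coda = coda

    def spell(self):
        return self.onset + self.nucleus + self.coda


class OpenSyllable(Syllable):
    """A syllable with no coda; everything else is inherited from Syllable."""

    def __init__(self, onset, nucleus):
        super().__init__(onset, nucleus, coda="")


print(Syllable("k", "a", "t").spell())   # kat
print(OpenSyllable("b", "a").spell())    # ba
```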
This chapter introduces input and output – how we get data into and out of our programs. The input modes include the command line, file input, pipes, and typed input. (A later chapter treats graphical user interface input.) The output modes include the command line, file output, and pipes.
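A small sketch of the non-GUI input and output modes mentioned here; the file names are hypothetical.

```python
import sys

# Command-line argument: e.g.  python io_demo.py input.txt
infile = sys.argv[1] if len(sys.argv) > 1 else None

if infile:
    # File input
    with open(infile, encoding="utf-8") as f:
        data = f.read()
else:
    # Piped or typed input: read everything from standard input
    data = sys.stdin.read()

# File output
with open("copy.txt", "w", encoding="utf-8") as out:
    out.write(data)

print(f"Read {len(data)} characters")   # ordinary output to the command line
```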
This chapter covers graphical user interfaces (GUIs) using the tkinter module. The chapter starts with how to lay out a window and place user interface elements (widgets) in that window. It goes on to explain event-driven programming and how to attach basic functionality to widgets.
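A minimal tkinter sketch of laying out widgets and attaching an event handler, in the spirit of what the chapter describes.

```python
import tkinter as tk

def on_click():
    """Event handler: runs whenever the button is pressed."""
    label.config(text="Button clicked")

root = tk.Tk()
root.title("Demo")

label = tk.Label(root, text="Hello")
label.pack()

button = tk.Button(root, text="Click me", command=on_click)  # attach the handler
button.pack()

root.mainloop()   # enter the event loop and wait for user events
```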
This chapter introduces Python's data types: integers, strings, lists, dictionaries, tuples, and others. The chapter also treats the concept of mutability.
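To illustrate the mutability distinction in a few lines (a generic sketch, not drawn from the chapter):

```python
# Lists are mutable: the object itself can change in place.
nums = [1, 2, 3]
alias = nums
nums.append(4)
print(alias)        # [1, 2, 3, 4]; both names refer to the same list

# Tuples and strings are immutable: "changing" them creates a new object.
point = (1, 2)
try:
    point[0] = 5
except TypeError as err:
    print(err)      # tuples do not support item assignment

word = "cat"
new_word = word.upper()   # returns a new string; word is unchanged
print(word, new_word)     # cat CAT

# Dictionaries map keys to values and are mutable.
counts = {"cat": 1}
counts["dog"] = 2
print(counts)
```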
Ergodic theory is concerned with dynamical systems: collections of points together with a rule governing how the system changes over time. Much of the theory concerns the long-term behavior of typical points, that is, how points behave over time once we ignore anomalous behavior from a small number of exceptional points. Computability theory has a family of precise notions of randomness: a point is “algorithmically random” if no computable test can demonstrate that it is not random. These notions capture something essential about the informal notion of randomness: algorithmically random points are precisely the ones that have typical orbits in computable dynamical systems. For computable dynamical systems, with or without assumptions of ergodicity, the measure 0 sets of exceptional points for various theorems (such as Poincaré's Recurrence Theorem or the pointwise ergodic theorem) consist precisely of the points that fail to be Schnorr or Martin-Löf random.
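For concreteness, the pointwise ergodic theorem referred to here is the classical statement, given below for an ergodic, measure-preserving transformation.

```latex
% Birkhoff's pointwise ergodic theorem (classical statement).
Let $(X, \mu)$ be a probability space, $T : X \to X$ a measure-preserving,
ergodic transformation, and $f \in L^1(\mu)$. Then
\[
  \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} f(T^k x) \;=\; \int_X f \, d\mu
  \qquad \text{for $\mu$-almost every } x \in X,
\]
and the effective results surveyed here identify that almost-everywhere set with
the algorithmically random points when $T$ and $f$ are computable.
```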
Designing complex products involves working with uncertainties as the product, the requirements and the environment in which it is used co-evolve, and designers and external stakeholders make decisions that affect the evolving design. Rather than being held back by uncertainty, designers work, cooperate and communicate with each other notwithstanding these uncertainties by making assumptions to carry out their own tasks. To explain this, the paper proposes an adaptation of Kendall Walton’s make-believe theory to conceptualise designing as playing games of make-believe by inferring what is required and imagining what is possible given the current set of assumptions and decisions, while knowing these are subject to change. What one is allowed and encouraged to imagine, conclude or propose is governed by socially agreed rules and constraints. The paper uses jet engine component design as an example to illustrate how different design teams make assumptions at the beginning of design activities and negotiate what can and cannot be done with the design. This often involves iteration – repeating activities under revised sets of assumptions. As assumptions are collectively revised, they become part of a new game of make-believe in the sense that there is social agreement that the decisions constitute part of the constraints that govern what can legitimately be inferred about the design or added to it.
We discuss the different contexts in which relativization occurs in randomness and the effect that the relativization chosen has on the results we can obtain. We study several characterizations of the K-trivials in terms of concepts ranging from cuppability to density, and we consider a uniform relativization for randomness that gives us more natural results for computable randomness, Schnorr randomness, and Kurtz randomness than the classical relativization does (the relativization for Martin-Löf randomness is unaffected by this change). We then evaluate the relativizations we have considered and suggest some avenues for further work.
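For reference, the K-trivial sets mentioned here have the following standard definition in terms of prefix-free Kolmogorov complexity $K$ (the usual formulation, not one specific to this chapter).

```latex
% K-triviality (standard definition).
A set $A \subseteq \mathbb{N}$ is \emph{K-trivial} if there is a constant $c$ such that
\[
  K(A \restriction n) \;\le\; K(n) + c \qquad \text{for all } n,
\]
i.e.\ the initial segments of $A$ have the lowest possible prefix-free complexity
up to an additive constant.
```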