Chapter 10 considered functional programming with data involving names and name abstractions. The new language features introduced in that chapter were motivated by the theory of nominal sets introduced in Part One of the book. We encouraged the reader to think of the type and function expressions of the FreshML functional programming language as describing nominal sets and finitely supported functions between them. However, just as for conventional functional programming, the sets-and-functions viewpoint is too naive, because of the facilities these languages provide for making recursive definitions. Giving a compositional semantics for such language features requires solving fixed-point equations both at the level of types (for recursively defined data types) and at the level of expressions of some type (for recursively defined functions). As is well known, solutions to these fixed-point equations cannot always be found within the world of sets and totally defined functions; and this led the founders of denotational semantics to construct mathematical models of partially defined objects, functions, functionals, etc., based on a fascinating mixture of partial order, topology and computation theory that has come to be known as domain theory. For an introduction to domain theory we refer the reader to Abramsky and Jung (1994).
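Both kinds of fixed-point equation arise already in very small programs. The following OCaml fragment is an illustrative sketch (not code from the chapter); the names `loop` and `fact` are hypothetical:

```ocaml
(* A recursively defined datatype: its meaning is a solution of a
   fixed-point equation at the level of types. *)
type nat_stream = Cons of int * (unit -> nat_stream)

(* A recursively defined function that satisfies its defining equation
   but denotes no total function: it never returns.  Domain theory
   interprets it as the least ("bottom") element of a suitable domain. *)
let rec loop (n : int) : int = loop (n + 1)

(* A well-defined recursive function, obtained as a least fixed point
   of the functional underlying its defining equation. *)
let rec fact n = if n <= 0 then 1 else n * fact (n - 1)
```

The point is that `loop` is a perfectly legal definition, so any compositional semantics must assign it a meaning, and no total set-theoretic function will do.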
In this chapter we consider merging domain theory with the concepts from nominal sets – names, permutations, support and freshness. As a result we gain new forms of domain, in particular domains of name abstractions.
A very useful feature of functional programming languages such as OCaml (http://caml.inria.fr/ocaml) or Haskell (http://www.haskell.org) is the facility for programmers to declare their own algebraic data types and to specify functions on that data using pattern-matching. This makes them especially useful for metaprogramming, that is, writing programs that manipulate programs, or more generally, expressions in formal languages. In this context the functional programming language is often called the meta-level language, while the language whose expressions appear as data in the functional programs is called the object-level language. We already noted at the beginning of Chapter 8 that object-level languages often involve name binding operations. In this case we may well want meta-level programs to operate not on object-level parse trees, but on their α-equivalence classes. OCaml or Haskell programmers have to deal with this issue on a case-by-case basis, according to the nature of the object-level language being implemented, using a self-imposed discipline. For example, they might work out some ‘nameless’ representation of α-equivalence classes for their object-level language, in the style of de Bruijn (1972). When designing extensions of OCaml or Haskell that deal more systematically with this issue, three desirable properties come to mind:
• Expressivity. Informal algorithms for manipulating syntactic data very often make explicit use of the names of bound entities; when representing α-equivalence classes of object-level expressions as meta-level data, one would still like programmers to have access to object-level bound names.
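To make the de Bruijn-style ‘nameless’ representation concrete, here is an illustrative OCaml sketch (not from the text; the type and function names are hypothetical) for the untyped λ-calculus:

```ocaml
(* Object-level lambda-terms in de Bruijn's nameless style: a bound
   variable is represented by the number of binders lying between it
   and the binder that captures it. *)
type term =
  | Var of int           (* de Bruijn index *)
  | App of term * term
  | Lam of term          (* the binder itself is anonymous *)

(* With this representation, alpha-equivalence of object-level terms
   collapses to literal equality of meta-level values. *)
let alpha_equiv (t1 : term) (t2 : term) : bool = t1 = t2

(* \x. x  and  \y. y  are the same nameless term. *)
let id1 = Lam (Var 0)
let id2 = Lam (Var 0)
```

The price of this discipline, as the Expressivity point notes, is that the programmer loses direct access to object-level bound names.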
This book has its origins in my interest in semantics and logics for locality in programming languages. By locality, I mean the various mechanisms that exist for making local declarations, restricting a resource to a specific scope, or hiding information from the environment. Although mathematics and logic are involved in understanding these things, this is a distinctively computer science topic. I was introduced to it by Matthew Hennessy and Alley Stoughton when we all arrived at the University of Sussex in the second half of the 1980s. At the time I was interested in applying category theory and logic to computer science and they were interested in the properties of the mixture of local mutable state and higher-order functions that occurs in the ML family of languages (Milner et al., 1997).
Around that time Moggi introduced the use of category-theoretic monads to structure different notions of computational effect (Moggi, 1991). That is now an important technique in denotational semantics; and thanks to the work of Wadler (1992) and others, monads are the accepted way of ‘tackling the awkward squad’ (Peyton Jones, 2001) of side-effects within functional programming. One of Moggi's monads models the computational effect of dynamically allocating fresh names. It is less well known than some of the other monads he uses, because it needs categories of functors and is only mentioned in Moggi (1989), rather than in Moggi (1991).
Every graphon defines a random graph on any given number n of vertices. It was known that the graphon is random-free if and only if the entropy of this random graph is subquadratic. We prove that for random-free graphons, this entropy can grow as fast as any subquadratic function. However, if the graphon belongs to the closure of a random-free hereditary graph property, then the entropy is O(n log n). We also give a simple construction of a non-step-function random-free graphon for which this entropy is linear, refuting a conjecture of Janson.
We show that the expected number of maximal empty axis-parallel boxes amidst n random points in the unit hypercube $[0,1]^d$ in $\mathbb{R}^d$ is $(1 \pm o(1))\,\frac{(2d-2)!}{(d-1)!}\, n \ln^{d-1} n$, if d is fixed. This estimate is relevant to the analysis of the performance of exact algorithms for computing the largest empty axis-parallel box amidst n given points in an axis-parallel box R, especially the algorithms that proceed by examining all maximal empty boxes. Our method for bounding the expected number of maximal empty boxes also shows that the expected number of maximal empty orthants determined by n random points in $\mathbb{R}^d$ is $(1 \pm o(1))\, \ln^{d-1} n$, if d is fixed. This estimate is related to the expected number of maximal (or minimal) points amidst random points, and has applications to algorithms for coloured orthogonal range counting.
Let m, n and t be positive integers. Consider $[m]^n$ as the set of sequences of length n on an m-letter alphabet. We say that two subsets $A \subset [m]^n$ and $B \subset [m]^n$ cross t-intersect if any two sequences $a \in A$ and $b \in B$ match in at least t positions. In this case it is shown that if $m > (1-\frac{1}{\sqrt[t]{2}})^{-1}$ then $|A||B| \le (m^{n-t})^2$. We derive this result from a weighted version of the Erdős–Ko–Rado theorem concerning cross t-intersecting families of subsets, and we also include the corresponding stability statement. One of our main tools is the eigenvalue method for intersection matrices due to Friedgut [10].
Computer algebra systems are now ubiquitous in all areas of science and engineering. This highly successful textbook, widely regarded as the 'bible of computer algebra', gives a thorough introduction to the algorithmic basis of the mathematical engine in computer algebra systems. Designed to accompany one- or two-semester courses for advanced undergraduate or graduate students in computer science or mathematics, its comprehensiveness and reliability have also made it an essential reference for professionals in the area. Special features include: detailed study of algorithms including time analysis; implementation reports on several topics; complete proofs of the mathematical underpinnings; and a wide variety of applications (among others, in chemistry, coding theory, cryptography, computational logic, and the design of calendars and musical scales). A great deal of historical information and illustration enlivens the text. In this third edition, errors have been corrected and much of the Fast Euclidean Algorithm chapter has been renovated.
In this chapter, we introduce fast methods for multiplying integers and polynomials. We start with a simple method due to Karatsuba which reduces the cost from the classical $O(n^2)$ for polynomials of degree n to $O(n^{1.59})$. The Discrete Fourier Transform and its efficient implementation, the Fast Fourier Transform, are the backbone of the fastest algorithms. These work only when appropriate roots of unity are present, but Schönhage & Strassen (1971) showed how to create “virtual” roots that lead to a multiplication cost of only $O(n \log n \log\log n)$. In Chapter 9, Newton iteration will help us extend this to fast division with remainder.
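The idea behind Karatsuba's trick can be sketched in OCaml as follows (an illustrative program, not code from the book, shown here for integers rather than polynomials): three recursive products replace the four naive ones, which is where the $O(n^{\log_2 3}) = O(n^{1.59})$ bound comes from.

```ocaml
(* Karatsuba multiplication of non-negative integers, splitting each
   operand into high and low halves at a power of ten. *)
let rec karatsuba x y =
  if x < 100 || y < 100 then x * y        (* base case: schoolbook product *)
  else begin
    (* split at half the decimal length of the larger operand *)
    let n = String.length (string_of_int (max x y)) in
    let m = n / 2 in
    let b = int_of_float (10. ** float_of_int m) in
    let x1 = x / b and x0 = x mod b in    (* x = x1*b + x0 *)
    let y1 = y / b and y0 = y mod b in    (* y = y1*b + y0 *)
    let p2 = karatsuba x1 y1 in           (* product of high parts *)
    let p0 = karatsuba x0 y0 in           (* product of low parts  *)
    (* one more product recovers the cross terms x1*y0 + x0*y1 *)
    let p1 = karatsuba (x1 + x0) (y1 + y0) - p2 - p0 in
    (p2 * b * b) + (p1 * b) + p0
  end
```

The identity $x_1 y_0 + x_0 y_1 = (x_1+x_0)(y_1+y_0) - x_1 y_1 - x_0 y_0$ is the whole trick; everything else is additions and shifts, which cost only $O(n)$.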
General-purpose computer algebra systems typically only implement the classical method, and sometimes Karatsuba's. This is quite sufficient as long as one deals with fairly small numbers or polynomials, but for many high-performance tasks fast arithmetic is indispensable. Examples include factoring large polynomials (Section 15.7), finding primes and twin primes (Notes to Chapter 18), and computing billions of digits of π (Section 4.6) or billions of roots of Riemann's zeta function (Notes 18.4).
Asymptotically fast methods are standard tools in many areas of computer science, where, say, $O(n \log n)$ sorting algorithms like quicksort or mergesort are widely used; experiments show that they outperform the “classical” $O(n^2)$ sorting algorithms like bubble sort or insertion sort even for values of n below 100.