The calculus of looping sequences is a formalism for describing the evolution of biological systems by means of term rewriting rules. In this paper we enrich this calculus with a type discipline which preserves some biological properties depending on the minimum and the maximum number of elements of some type requested by the present elements. The type system enforces these properties and typed reductions guarantee that evolution preserves them. As an example, we model the hemoglobin structure and the equilibrium between cell death and division: typed reductions prevent undesirable behaviors.
In recent work we have proposed a novel approach to define idealized type systems for object-oriented languages, based on abstract compilation of programs into Horn formulas which are interpreted w.r.t. the coinductive (that is, the greatest) Herbrand model. In this paper we investigate how this approach can be applied also in the presence of imperative features. This is made possible by considering a natural translation of Static Single Assignment intermediate form programs into Horn formulas, where φ functions correspond to union types.
We extend the simply typed λ-calculus with unbind and rebind primitive constructs. That is, a value can be a fragment of open code, which in order to be used should be explicitly rebound. This mechanism nicely coexists with standard static binding. The motivation is to provide a unifying foundation for mechanisms of dynamic scoping, where the meaning of a name is determined at runtime; rebinding, such as dynamic updating of resources and exchange of mobile code; and delegation, where an alternative action is taken if a binding is missing. Depending on the application scenario, we consider two extensions which differ in the way type safety is guaranteed. The former relies on a combination of static and dynamic type checking. That is, rebind raises a dynamic error if for some variable there is no replacing term or it has the wrong type. In the latter, this error is prevented by a purely static type system, at the price of more sophisticated types.
For an increasing monotone graph property 𝒫, the local resilience of a graph G with respect to 𝒫 is the minimal r for which there exists a subgraph H ⊆ G with all degrees at most r, such that the removal of the edges of H from G creates a graph that does not possess 𝒫. This notion, which was implicitly studied for some ad hoc properties, was recently treated in a more systematic way in a paper by Sudakov and Vu. Most research conducted with respect to this distance notion focused on the binomial random graph model G(n, p) and some families of pseudo-random graphs, with respect to several graph properties such as containing a perfect matching and being Hamiltonian, to name a few. In this paper we continue to explore the local resilience notion, but turn our attention to random and pseudo-random regular graphs of constant degree. We investigate the local resilience of the typical random d-regular graph with respect to edge and vertex connectivity, containing a perfect matching, and being Hamiltonian. In particular, we prove that for every positive ϵ and large enough values of d, with high probability, the local resilience of the random d-regular graph G(n, d), with respect to being Hamiltonian, is at least (1−ϵ)d/6. We also prove that for the binomial random graph model G(n, p), for every positive ϵ > 0 and large enough values of K, if p > then, with high probability, the local resilience of G(n, p) with respect to being Hamiltonian is at least (1−ϵ)np/6. Finally, we apply similar techniques to positional games, and prove that if d is large enough then, with high probability, a typical random d-regular graph G is such that, in the unbiased Maker–Breaker game played on the edges of G, Maker has a winning strategy to create a Hamilton cycle.
Let A be a finite non-empty set of integers. An asymptotic estimate of the size of the sum of several dilates was obtained by Bukh. The only known exact bound concerns the sum |A + k⋅A|, where k is a prime and |A| is large. In its full generality, this bound is due to Cilleruelo, Serra and the first author.
Let k be an odd prime and assume that |A| > 8kᵏ. A corollary to our main result states that |2⋅A + k⋅A| ≥ (k+2)|A| − k² − k + 2. Notice that |2⋅P + k⋅P| = (k+2)|P| − 2k if P is an arithmetic progression, as illustrated by the small check below.
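As a quick sanity check of the arithmetic-progression identity above (not taken from the paper), one can verify it by brute force for small parameters. The sketch below uses P = {0, 1, …, n−1} with k = 3 and n = 20, both chosen arbitrarily for illustration.

```cpp
// Brute-force check (illustration only) that |2*P + k*P| = (k+2)|P| - 2k
// when P = {0, 1, ..., n-1} is an arithmetic progression; k and n are
// arbitrary small values chosen for the demonstration.
#include <cstdio>
#include <set>

int main() {
    const int k = 3, n = 20;
    std::set<int> sumset;
    for (int a = 0; a < n; ++a)
        for (int b = 0; b < n; ++b)
            sumset.insert(2 * a + k * b);   // an element of 2*P + k*P
    std::printf("|2*P + k*P| = %zu, (k+2)|P| - 2k = %d\n",
                sumset.size(), (k + 2) * n - 2 * k);
    return 0;
}
```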
Let 𝒯 be a set of terms over an arbitrary (but finite) number of Boolean variables. Let U(𝒯) be the set of truth assignments that satisfy exactly one term in 𝒯. Motivated by questions in computational complexity, Rudich conjectured that there exist ∊, δ > 0 such that, if 𝒯 is any set of terms for which U(𝒯) contains at least a (1−∊)-fraction of all truth assignments, then there exists a term t ∈ 𝒯 such that at least a δ-fraction of assignments satisfy some term of 𝒯 sharing a variable with t [8].
We prove a stronger version: for any independent assignment of the variables (not necessarily the uniform one), if the measure of U(𝒯) is at least 1 − ∊, there exists a t ∈ 𝒯 such that the measure of the set of assignments satisfying either t or some term incompatible with t (i.e., having no satisfying assignments in common with t) is at least . (A key part of the proof is a correlation-like inequality on events in a finite product probability space that is in some sense dual to Reimer's inequality [11], a.k.a. the BKR inequality [5], or the van den Berg–Kesten conjecture [3].)
Here we present a non-exhaustive list of software packages that (in most cases) the authors have tried, together with some other useful pointers. Of course, we cannot accept any responsibility for bugs/errors/omissions in any of the software or documentation mentioned here – caveat emptor!
Websites change. If any of the websites mentioned here disappear in the future, you may be able to find the new site using a search engine with appropriate keywords.
Software tools
CLN
CLN (Class Library for Numbers, http://www.ginac.de/CLN/) is a library for efficient computations with all kinds of numbers in arbitrary precision. It was written by Bruno Haible, and is currently maintained by Richard Kreckel. It is written in C++ and distributed under the GNU General Public License (GPL). CLN provides some elementary and special functions, and fast arithmetic on large numbers; in particular, it implements Schönhage–Strassen multiplication and the binary splitting algorithm. CLN can be configured to use GMP low-level MPN routines, which improves its performance.
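As a brief, hedged illustration (assuming CLN is installed and the program is linked with -lcln), the following sketch uses the cl_I arbitrary-precision integer type; the fast multiplication algorithms mentioned above are selected automatically once the operands are large enough.

```cpp
// Minimal CLN usage sketch (illustration only): exact arithmetic on large
// integers.  Compile with: g++ example.cpp -lcln
#include <cln/cln.h>
#include <iostream>

int main() {
    // Arbitrary-precision integers constructed from decimal strings.
    cln::cl_I a = "123456789012345678901234567890";
    cln::cl_I b = "987654321098765432109876543210";
    std::cout << a * b << std::endl;               // exact product, no overflow
    std::cout << cln::factorial(100) << std::endl; // 100! computed exactly
    return 0;
}
```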
GNU MP (GMP)
The GNU MP library is the main reference for arbitrary-precision arithmetic. It has been developed since 1991 by Torbjörn Granlund and several other contributors. GNU MP (GMP for short) implements several of the algorithms described in this book. In particular, we recommend reading the “Algorithms” chapter of the GMP reference manual.
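For concreteness, here is a minimal sketch of GMP's integer (mpz) layer, assuming GMP is installed and the program is linked with -lgmp; it is only an illustration, not code taken from the GMP manual.

```cpp
// Minimal GMP usage sketch (illustration only).  Compile with: g++ example.cpp -lgmp
#include <gmp.h>
#include <cstdio>

int main() {
    mpz_t a, b, c;
    mpz_inits(a, b, c, NULL);

    mpz_ui_pow_ui(a, 2, 1000);   // a = 2^1000, computed exactly
    mpz_set_str(b, "340282366920938463463374607431768211456", 10);  // b = 2^128 as a decimal string
    mpz_mul(c, a, b);            // c = a * b; GMP picks the multiplication algorithm

    gmp_printf("%Zd\n", c);      // print the exact 340-digit result
    mpz_clears(a, b, c, NULL);
    return 0;
}
```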
This is a book about algorithms for performing arithmetic, and their implementation on modern computers. We are concerned with software more than hardware – we do not cover computer architecture or the design of computer hardware since good books are already available on these topics. Instead, we focus on algorithms for efficiently performing arithmetic operations such as addition, multiplication, and division, and their connections to topics such as modular arithmetic, greatest common divisors, the fast Fourier transform (FFT), and the computation of special functions.
The algorithms that we present are mainly intended for arbitrary-precision arithmetic. That is, they are not limited by the computer wordsize of 32 or 64 bits, only by the memory and time available for the computation. We consider both integer and real (floating-point) computations.
The book is divided into four main chapters, plus one short chapter (essentially an appendix). Chapter 1 covers integer arithmetic. This has, of course, been considered in many other books and papers. However, there has been much recent progress, inspired in part by the application to public key cryptography, so most of the published books are now partly out of date or incomplete. Our aim is to present the latest developments in a concise manner. At the same time, we provide a self-contained introduction for the reader who is not an expert in the field.
Chapter 2 is concerned with modular arithmetic and the FFT, and their applications to computer arithmetic.
In this chapter our main topic is modular arithmetic, i.e. how to compute efficiently modulo a given integer N. In most applications, the modulus N is fixed, and special-purpose algorithms benefit from some precomputations, depending only on N, to speed up arithmetic modulo N.
There is an overlap between Chapter 1 and this chapter. For example, integer division and modular multiplication are closely related. In Chapter 1 we present algorithms where no (or only a few) precomputations with respect to the modulus N are performed. In this chapter, we consider algorithms which benefit from such precomputations.
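To make the role of precomputation concrete, here is a small simplified sketch, not one of this chapter's algorithms (which handle multi-word moduli): for a single-word modulus N, an approximate reciprocal of N is computed once and then reused for every reduction, in the spirit of Barrett-style reduction.

```cpp
// Simplified sketch (illustration only) of reduction modulo a fixed N with a
// one-time precomputation.  N is assumed to fit in 32 bits so that products
// of residues fit in 64 bits; the 128-bit product below relies on the
// unsigned __int128 extension of GCC/Clang.
#include <cstdint>
#include <cstdio>

struct FixedModulus {
    uint64_t N;    // the fixed modulus (assumed < 2^32 here)
    uint64_t mu;   // precomputed floor((2^64 - 1) / N), depends only on N

    explicit FixedModulus(uint64_t n) : N(n), mu(~uint64_t{0} / n) {}

    // Reduce x modulo N using the precomputed reciprocal: q approximates
    // floor(x / N) from below, so a few correction subtractions finish the job.
    uint64_t reduce(uint64_t x) const {
        uint64_t q = (uint64_t)(((unsigned __int128)x * mu) >> 64);
        uint64_t r = x - q * N;
        while (r >= N) r -= N;
        return r;
    }

    // Modular multiplication of residues a, b < N.
    uint64_t mulmod(uint64_t a, uint64_t b) const { return reduce(a * b); }
};

int main() {
    FixedModulus m(1000003);                    // mu is computed once, here
    uint64_t x = m.mulmod(123456, 999999);      // every later call reuses mu
    std::printf("%llu\n", (unsigned long long)x);
    return 0;
}
```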
Unless explicitly stated, we consider that the modulus N occupies n words in the word-base β, i.e. βⁿ⁻¹ ≤ N < βⁿ.
Representation
We consider in this section the different possible representations of residues modulo N. As in Chapter 1, we consider mainly dense representations.
Classical representation
The classical representation stores a residue (class) a as an integer 0 ≤ a < N. Residues are thus always fully reduced, i.e. in canonical form.
Another non-redundant form consists in choosing a symmetric representation, say −N/2 ≤ a < N/2. This form might save some reductions in additions or subtractions (see §2.2). Negative numbers might be stored either with a separate sign (sign-magnitude representation) or with a two's-complement representation.
Since N takes n words in base β, an alternative redundant representation chooses 0 ≤ a < βⁿ to represent a residue class.
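The following toy sketch (single-word modulus, illustration only) contrasts the classical and symmetric non-redundant representations of the same residue class: a residue close to N is stored as a small negative number in symmetric form, which is what may save a reduction after an addition or subtraction.

```cpp
// Toy sketch (illustration only) of the classical and symmetric
// representations of residues modulo a single-word N.
#include <cstdint>
#include <cstdio>

// Classical representative: canonical form 0 <= r < N.
int64_t classical(int64_t x, int64_t N) {
    int64_t r = x % N;
    return r < 0 ? r + N : r;
}

// Symmetric representative: -N/2 <= r < N/2 (N assumed small enough that
// 2*r below cannot overflow).
int64_t symmetric(int64_t x, int64_t N) {
    int64_t r = classical(x, N);
    return 2 * r >= N ? r - N : r;
}

int main() {
    const int64_t N = 1000003;
    const int64_t a = 999999;   // a residue close to N
    std::printf("classical: %lld\n", (long long)classical(a, N));  // 999999
    std::printf("symmetric: %lld\n", (long long)symmetric(a, N));  // -4
    return 0;
}
```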
Here we consider various applications of Newton's method, which can be used to compute reciprocals, square roots, and more generally algebraic and functional inverse functions. We then consider unrestricted algorithms for computing elementary and special functions. The algorithms of this chapter are presented at a higher level than in Chapter 3. A full and detailed analysis of one special function might be the subject of an entire chapter!
Introduction
This chapter is concerned with algorithms for computing elementary and special functions, although the methods apply more generally. First we consider Newton's method, which is useful for computing inverse functions. For example, if we have an algorithm for computing y = ln x, then Newton's method can be used to compute x = exp y (see §4.2.5). However, Newton's method has many other applications. In fact, we already mentioned Newton's method in Chapters 1–3, but here we consider it in more detail.
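As a down-to-earth illustration (at ordinary double precision rather than the arbitrary precision assumed in this chapter), the sketch below inverts the logarithm by Newton's method: solving ln x = y gives x = exp y via the iteration x ← x(1 + y − ln x), which roughly doubles the number of correct digits at each step.

```cpp
// Newton's method for f(x) = ln(x) - y, i.e. computing exp(y) given only a
// logarithm routine (illustration at double precision only; a real
// arbitrary-precision version needs argument reduction and working-precision
// management).
#include <cmath>
#include <cstdio>

double exp_via_newton(double y) {
    double x = 1.0 + y;                       // rough starting value, fine for small |y|
    for (int j = 0; j < 6; ++j)               // quadratic convergence: a few steps suffice
        x = x * (1.0 + y - std::log(x));      // x_{j+1} = x_j (1 + y - ln x_j)
    return x;
}

int main() {
    const double y = 0.5;
    std::printf("Newton: %.17g\n", exp_via_newton(y));
    std::printf("exp():  %.17g\n", std::exp(y));
    return 0;
}
```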
After considering Newton's method, we go on to consider various methods for computing elementary and special functions. These methods include power series (§4.4), asymptotic expansions (§4.5), continued fractions (§4.6), recurrence relations (§4.7), the arithmetic-geometric mean (§4.8), binary splitting (§4.9), and contour integration (§4.10). The methods that we consider are unrestricted in the sense that there is no restriction on the attainable precision – in particular, it is not limited to the precision of IEEE standard 32-bit or 64-bit floating-point arithmetic. Of course, this depends on the availability of a suitable software package for performing floating-point arithmetic on operands of arbitrary precision, as discussed in Chapter 3.