Although many imperative data structures are difficult or impossible to adapt to a functional setting, some can be adapted quite easily. In this chapter, we review three data structures that are commonly taught in an imperative setting. The first, leftist heaps, is quite simple in either setting, but the other two, binomial queues and red-black trees, have a reputation for being rather complicated because imperative implementations of these data structures often degenerate into nightmares of pointer manipulations. In contrast, functional implementations of these data structures abstract away from troublesome pointer manipulations and directly reflect the high-level ideas. A bonus of implementing these data structures functionally is that we get persistence for free.
Leftist Heaps
Sets and finite maps typically support efficient access to arbitrary elements. But sometimes we need efficient access only to the minimum element. A data structure supporting this kind of access is called a priority queue or a heap. To avoid confusion with FIFO queues, we use the latter name. Figure 3.1 presents a simple signature for heaps.
Remark In comparing the signature for heaps with the signature for sets (Figure 2.7), we see that in the former the ordering relation on elements is included in the signature while in the latter it is not.
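The signature in Figure 3.1 is given in Standard ML; the following OCaml transcription, together with a leftist-heap implementation, is only a sketch of how the interface and the structure fit together. The module and function names are choices made for this sketch, not the book's exact figure.

```ocaml
(* A sketch of a heap signature in the spirit of Figure 3.1, plus a leftist
   heap; names and details are illustrative, not the book's exact figure. *)

module type ORDERED = sig
  type t
  val leq : t -> t -> bool
end

module type HEAP = sig
  module Elem : ORDERED
  type heap
  val empty      : heap
  val is_empty   : heap -> bool
  val insert     : Elem.t -> heap -> heap
  val merge      : heap -> heap -> heap
  val find_min   : heap -> Elem.t   (* raises Failure on an empty heap *)
  val delete_min : heap -> heap     (* raises Failure on an empty heap *)
end

module LeftistHeap (Elt : ORDERED) : HEAP with module Elem = Elt = struct
  module Elem = Elt

  (* Each node caches its rank: the length of its rightmost spine. *)
  type heap = Empty | Node of int * Elem.t * heap * heap

  let empty = Empty
  let is_empty = function Empty -> true | _ -> false

  let rank = function Empty -> 0 | Node (r, _, _, _) -> r

  (* Rebuild a node, swapping children so the left child has the larger rank;
     this keeps every right spine logarithmically short. *)
  let make_node x a b =
    if rank a >= rank b then Node (rank b + 1, x, a, b)
    else Node (rank a + 1, x, b, a)

  (* Merge walks down the right spines only, hence runs in O(log n) time. *)
  let rec merge h1 h2 =
    match h1, h2 with
    | Empty, h | h, Empty -> h
    | Node (_, x, a1, b1), Node (_, y, a2, b2) ->
        if Elem.leq x y then make_node x a1 (merge b1 h2)
        else make_node y a2 (merge h1 b2)

  let insert x h = merge (Node (1, x, Empty, Empty)) h
  let find_min = function Empty -> failwith "empty heap" | Node (_, x, _, _) -> x
  let delete_min = function
    | Empty -> failwith "empty heap"
    | Node (_, _, a, b) -> merge a b
end
```

Instantiating the functor, e.g. `module IntHeap = LeftistHeap (struct type t = int let leq = (<=) end)`, yields a persistent heap of integers: insert and delete_min return new heaps and never modify old ones.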
Convexity is one of the oldest concepts in mathematics. It appears already in the works of Archimedes, in the third century B.C. It was not until the 1950s, however, that this theme developed widely in the works of modern mathematicians. Convexity is a fundamental notion for computational geometry, at the core of many computer engineering applications, for instance in robotics, computer graphics, or optimization.
A convex set has the basic property that it contains the segment joining any two of its points. This property guarantees that a convex object has no hole or bump, is not hollow, and always contains its center of gravity. Convexity is a purely affine notion: no norm or distance is needed to express the property of being convex. Any convex set can be expressed as the convex hull of a certain point set, that is, the smallest convex set that contains those points. It can also be expressed as the intersection of a set of half-spaces. In the following chapters, we will be interested in linear convex sets. These can be defined as convex hulls of a finite number of points, or intersections of a finite number of half-spaces. Traditionally, a bounded linear convex set is called a polytope. We follow the tradition here, but we understand the word polytope as a shorthand for bounded polytope. This lets us speak of an unbounded polytope for the unbounded intersection of a finite set of half-spaces.
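In symbols, with the standard formulation (not a verbatim quotation from the text), convexity and the convex hull can be written as:

```latex
% Standard definitions, stated here for reference (not verbatim from the book).
C \subseteq \mathbb{R}^d \text{ is convex}
  \iff \forall x, y \in C,\ \forall \lambda \in [0,1]:\ \lambda x + (1-\lambda)\, y \in C,
\qquad
\operatorname{conv}(P) = \bigcap \{\, C \supseteq P : C \text{ is convex} \,\}.
```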
Term rewriting is a branch of theoretical computer science which combines elements of logic, universal algebra, automated theorem proving and functional programming. Its foundation is equational logic. What distinguishes term rewriting from equational logic is that equations are used as directed replacement rules, i.e. the left-hand side can be replaced by the right-hand side, but not vice versa. This constitutes a Turing-complete computational model which is very close to functional programming. It has applications in algebra (e.g. Boolean algebra, group theory and ring theory), recursion theory (what is and is not computable with certain sets of rewrite rules), software engineering (reasoning about equationally defined data types such as numbers, lists, sets etc.), and programming languages (especially functional and logic programming). In general, term rewriting applies in any context where efficient methods for reasoning with equations are required.
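A standard small example, not taken from this passage: addition on natural numbers in successor notation is defined by two rules that may only be applied from left to right.

```latex
% Peano addition as a term rewriting system (a textbook example, hedged).
0 + y \;\to\; y, \qquad s(x) + y \;\to\; s(x + y).
% For instance, 2 + 1 = 3 is computed by the reduction
s(s(0)) + s(0) \;\to\; s\bigl(s(0) + s(0)\bigr) \;\to\; s\bigl(s(0 + s(0))\bigr) \;\to\; s(s(s(0))).
```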
To date, most of the term rewriting literature has been published in specialist conference proceedings (especially Rewriting Techniques and Applications and Automated Deduction in Springer's LNCS series) and journals (e.g. Journal of Symbolic Computation and Journal of Automated Reasoning). In addition, several overview articles provide introductions to the field and references to the relevant literature [141, 74, 204]. This is the first English book devoted to the theory and applications of term rewriting. It is ambitious in that it tries to serve two masters:
The researcher, who needs a unified theory that covers, in detail and in a single volume, material that has previously only been collected in overview articles, and whose technical details are spread over the literature.
The teacher or student, who needs a readable textbook in an area where there is hardly any literature for the non-specialist.
The first part of this book introduces the most popular tools in computational geometry. These tools will be put to use throughout the rest of the book.
The first chapter gives a framework for the analysis of algorithms. The concept of complexity of an algorithm is reviewed. The underlying model of computation is made clear and unambiguous.
The second chapter reviews the fundamentals of data structures: lists, heaps, queues, dictionaries, and priority queues. These structures are mostly implemented as balanced trees. To serve as an example, red–black trees are fully described and their performance is evaluated.
The third chapter illustrates the main algorithmic techniques used to solve geometric problems: the incremental method, the divide-and-conquer method, the sweep method, and the decomposition method which subdivides a complex object into elementary geometric objects.
Finally, chapters 4, 5, and 6 introduce the randomization methods which have recently made a distinguished appearance on the stage of computational geometry. Only the incremental randomized method is introduced and used in this book, as opposed to the randomized divide-and-conquer method.
To compute the convex hull of a finite set of points is a classical problem in computational geometry. In two dimensions, there are several algorithms that solve this problem in an optimal way. In three dimensions, the problem is considerably more difficult. As for the general case of any dimension, it was not until 1991 that a deterministic optimal algorithm was designed. In dimensions higher than 3, the method most commonly used is the incremental method. The algorithms described in this chapter are also incremental and work in any dimension. Methods specific to two or three dimensions will be given in the next chapter.
Before presenting the algorithms, section 8.1 details the representation of polytopes as data structures. Section 8.2 shows a lower bound of Ω(n log n + n^⌊d/2⌋) for computing the convex hull of n points in d dimensions. The basic operation used by an incremental algorithm is: given a polytope C and a point P, derive the representation of the polytope conv(C ∪ {P}) assuming the representation of C has already been computed. Section 8.3 studies the geometric part of this problem. Section 8.4 shows a deterministic algorithm to compute the convex hull of n points in d dimensions. This algorithm requires preliminary knowledge of all the points: it is an off-line algorithm. Its complexity is O(n log n + n^⌊(d+1)/2⌋), which is optimal only in even dimensions.
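To make the basic operation concrete, here is a hedged OCaml sketch of the planar case only: the hull is stored in counter-clockwise order and updated to conv(C ∪ {P}) by deleting the edges visible from P. The representation, the function names, and the assumption that the first three points are not collinear and given in counter-clockwise order are simplifications made for this sketch; the algorithms of this chapter work in arbitrary dimension and maintain the full incidence graph.

```ocaml
(* A minimal planar sketch of the incremental step conv(C ∪ {P}).  The hull is
   an array of vertices in counter-clockwise (CCW) order; the representation
   and names are assumptions of this sketch, not the book's data structure. *)

type point = float * float

(* Orientation test: > 0 if r lies to the left of the directed line p -> q. *)
let orient (px, py) (qx, qy) (rx, ry) =
  (qx -. px) *. (ry -. py) -. (qy -. py) *. (rx -. px)

(* Insert p into a CCW convex hull.  Degenerate start-up and collinear
   configurations are deliberately not handled with care here. *)
let add_point hull p =
  let n = Array.length hull in
  if n < 3 then Array.append hull [| p |]  (* assumes the first 3 points are CCW *)
  else
    (* Edge i goes from hull.(i) to hull.((i+1) mod n); it "sees" p when p is
       strictly to its right. *)
    let vis = Array.init n (fun i -> orient hull.(i) hull.((i + 1) mod n) p < 0.0) in
    if not (Array.exists (fun b -> b) vis) then hull  (* p inside: hull unchanged *)
    else begin
      (* The edges that see p form one contiguous chain; find the first edge of
         the complementary chain of edges that do NOT see p. *)
      let start = ref 0 in
      for i = 0 to n - 1 do
        if vis.((i + n - 1) mod n) && not vis.(i) then start := i
      done;
      (* Keep the endpoints of the non-visible chain, then append p. *)
      let kept = ref [] and i = ref !start in
      while not vis.(!i) do
        kept := hull.(!i) :: !kept;
        i := (!i + 1) mod n
      done;
      kept := hull.(!i) :: !kept;
      Array.of_list (List.rev (p :: !kept))
    end

(* The whole hull is built by inserting the points one after the other:
   let hull = List.fold_left add_point [||] points *)
```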
Several problems, geometric or of other kinds, use the notion of a polytope in d-dimensional space more or less implicitly. The preceding chapters show how to efficiently build the incidence graph which encodes the whole facial structure of a polytope given as the convex hull of a set of points. Using duality, the same algorithms allow one to build the incidence graph of a polytope defined as the intersection of a finite number of half-spaces. It is not always necessary, however, to explicitly enumerate all the faces of the polytope that underlies a problem. This is the case in linear programming problems, which are the topic of this chapter.
Section 10.1 defines what a linear programming problem is, and sets up the terminology commonly used in optimization. Section 10.2 gives a truly simple algorithm that solves this class of problems. Finally, section 10.3 shows how linear programming may be used as an auxiliary tool for other geometric problems. A linear programming problem may be seen as a shortcut to avoid computing the whole facial structure of some convex hull. Paradoxically, the application we give here is an algorithm that computes the convex hull of n points in dimension d. Besides its simplicity, the interest of the algorithm is mostly that its complexity depends on the output size as well as on the input size. Here, the output size is the number f of faces of all dimensions of the convex hull, and thus ranges widely from O(1) (the size of a simplex) to Θ(n^⌊d/2⌋) (the size of a maximal polytope).
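In the standard form (a generic statement, hedged, since the chapter's own notation may differ), a linear programming problem in dimension d optimizes a linear objective over an intersection of n half-spaces:

```latex
% Generic form of a linear program in dimension d (assumed notation).
\text{maximize } c \cdot x
\quad \text{subject to} \quad a_i \cdot x \le b_i, \quad i = 1, \dots, n, \quad x \in \mathbb{R}^d.
% Each constraint is a half-space; the feasible region is their intersection,
% a polytope in the extended sense above, possibly unbounded.
```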
We have seen in the previous chapter that in some cases completion generates a convergent term rewriting system for a given equational theory, and that such a system can then be used to decide the word problem for this equational theory. A very similar approach has independently been developed in the area of computer algebra, where Gröbner bases are used to decide the ideal congruence and the ideal membership problem in polynomial rings. The close connection to rewriting is given by the fact that Gröbner bases define convergent reduction relations on polynomials, and that the ideal congruence problem can be seen as a word problem. In addition, Buchberger's algorithm, which is very similar to the basic completion procedure presented above, can be used to compute Gröbner bases. In contrast to the situation for term rewriting, however, termination of the reduction relation can always be guaranteed, and Buchberger's algorithm always terminates with success. The purpose of this chapter is, on the one hand, to provide another example of the usefulness of the rewriting and completion approach introduced above. On the other hand, the basic definitions and results from the area of Gröbner bases are presented using the notations and results for abstract reduction systems introduced in Chapter 2.
The ideal membership problem
Let us first introduce the basic algorithmic problems that can be solved with the help of Gröbner bases.
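Stated in standard notation (the chapter's own formulation may differ in detail): for polynomials f_1, …, f_k and f over a field K, the problems are

```latex
% Standard formulations, given here for orientation (assumed notation).
I = \langle f_1, \dots, f_k \rangle
  = \Bigl\{ \textstyle\sum_{i=1}^{k} h_i f_i \;:\; h_i \in K[x_1, \dots, x_n] \Bigr\},
\qquad
\text{membership: } f \overset{?}{\in} I,
\qquad
\text{congruence: } g \equiv_I h \iff g - h \in I.
```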
In an arrangement of n lines in the plane, all the cells are convex and thus have complexity O(n). Moreover, given a point A, the cell in the arrangement that contains A can be computed in time Θ(n log n): indeed, the problem reduces to computing the intersection of n half-planes bounded by the lines and containing A (see theorem 7.1.10).
In this chapter, we study arrangements of line segments in the plane. Consider a set S of n line segments in the plane. The arrangement of S includes the cells, edges, and vertices of the subdivision of the plane induced by S, and their incidence relationships.
Computing the arrangement of S can be achieved in time O(n log n + k) where k is the number of intersection points (see sections 3.3 and 5.3.2, and theorem 5.2.5). All the pairs of segments may intersect, so in the worst case we have k = Ω(n²).
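The elementary primitive behind the count k is a test deciding whether two segments cross. A minimal OCaml sketch based on orientation tests (it deliberately ignores the degenerate collinear and endpoint-touching cases) could look like this:

```ocaml
(* Segment intersection test via orientation predicates.  A sketch only:
   collinear and endpoint-touching configurations are deliberately ignored. *)

type point = float * float

(* Orientation test: > 0 if r lies to the left of the directed line p -> q. *)
let orient (px, py) (qx, qy) (rx, ry) =
  (qx -. px) *. (ry -. py) -. (qy -. py) *. (rx -. px)

(* Segments (a, b) and (c, d) cross properly iff c and d lie on opposite sides
   of the line ab, and a and b lie on opposite sides of the line cd. *)
let segments_intersect (a, b) (c, d) =
  orient a b c *. orient a b d < 0.0 && orient c d a *. orient c d b < 0.0
```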
For some applications, only a single cell of this arrangement is needed. This is notably the case in robotics, for a polygonal robot moving amidst polygonal obstacles by translation (see exercise 15.6). The reachable positions are characterized by lying in a single cell of the arrangement of those line segments that correspond to the positions of the robot when a vertex of the robot slides along the edge of an obstacle, or when an edge of the robot maintains contact with an obstacle at a point.
This chapter studies the problem of determining whether a TRS is confluent. After a brief look at the (undecidable) decision problem, the rest of the chapter divides neatly into two parts:
The first part deals with terminating systems, for which confluence turns out to be decidable. This is a key result in our search for decidable equational theories: if E constitutes a terminating TRS, we can decide if it is also confluent, in which case we know by Theorem 4.1.1 that ≈E is decidable.
The second part deals with those systems not covered by the first part, namely (potentially) nonterminating ones. The emphasis here is not on deciding ≈E by rewriting, which requires termination, but on the computational content of a TRS. Viewing a TRS as a program, confluence simply means that the program is deterministic. We show that for the class of so-called orthogonal systems, where no two rules interfere with each other, confluence holds irrespective of termination. This result has immediate consequences for the theory and design of functional programming languages.
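A standard illustration, not drawn from this chapter's text: the two rules defining a conditional are left-linear and do not overlap, so they form an orthogonal system and are confluent whether or not the programs built on top of them terminate.

```latex
% An orthogonal (left-linear, non-overlapping) TRS; confluent regardless of termination.
\mathit{if}(\mathit{true}, x, y) \;\to\; x, \qquad \mathit{if}(\mathit{false}, x, y) \;\to\; y.
```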
The decision problem
Just as for termination and most other interesting properties (of term rewriting systems or otherwise), confluence is in general undecidable:
Theorem 6.1.1 The following problem is undecidable:
Instance: A finite TRS R.
Question: Is R confluent?
Proof Given a set of identities E such that, for all l ≈ r ∈ E, Var(l) = Var(r) and neither l nor r is a variable, we can reduce the ground word problem for E to the confluence problem of a related TRS as follows.