In constructing a large program, it is vital to be able to divide the program into parts, often called modules, and to specify these modules with sufficient precision that one can program each module knowing only the specification of the other modules.
One of the main benefits of modern type theory is the realization that the type of a module is in fact a formal specification of the module. Indeed (although we will not discuss them in this book) there are implemented type-based systems such as NuPrl and Coq which can handle specifications of functional programs that are as expressive and flexible as the specifications of imperative programs in Chapter 3. As with the specifications of Chapter 3, however, considerable human assistance is needed to ensure that a program meets its specification.
In this chapter, we will examine more limited systems for which the check that a module meets its specification can be performed efficiently and without human intervention. What is surprising is the extent to which these less expressive systems can still detect programming errors.
We begin with a discussion of type definitions, especially abstract type definitions that make it possible to separate the definition of an abstract type, including both its representation and the relevant primitive operations, from the part of the program that uses the abstract type but is independent of its representation and the implementation of the primitives. We then examine a more general approach based on existentially quantified types and polymorphic functions.
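The separation just described can be sketched concretely. The following illustration (in Python, with names of our own choosing; the book itself does not use this language) shows an abstract type whose representation is hidden behind a small interface, so that client code depends only on the specification of the primitives, not on the representation.

```python
# A minimal sketch of an abstract type: the sorted-list representation
# of IntSet is private, and the client below is correct for ANY
# representation satisfying the interface (insert, member).

class IntSet:
    """Abstract integer set; the representation is an implementation detail."""
    def __init__(self):
        self._elems = []          # representation: sorted list, kept private

    def insert(self, x):
        if x not in self._elems:
            self._elems.append(x)
            self._elems.sort()
        return self

    def member(self, x):
        return x in self._elems

# Client code: uses only the interface, never the representation.
def range_set(lo, hi):
    s = IntSet()
    for i in range(lo, hi):
        s.insert(i)
    return s

s = range_set(1, 4)
print(s.member(2), s.member(9))   # True False
```

Changing the representation (say, to a hash set) would leave the client untouched, which is precisely the independence the chapter is concerned with.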
A nondeterministic program is one that does not completely determine the behavior of a computer, so that different executions of the same program with the same initial state and input can give different results. Although concurrent programs are usually nondeterministic, the topics of nondeterminism and concurrency are distinct, and it is pedagogically sensible to consider nondeterminism by itself before plunging on to concurrency.
Moreover, even with purely sequential computation, nondeterministic programs are often desirable because they avoid unnecessary commitments. Such commitments can make programs harder to read and to reason about. Even more seriously, in programs that use abstract types, they can place unnecessary constraints on the choice of data representations.
(Just as one can have nondeterministic sequential programs, one can also have deterministic concurrent ones — often called parallel programs — as will be evident when we consider functional programming languages.)
In this chapter we will explore Dijkstra's guarded commands, which are the most widely accepted and studied mechanism for extending the simple imperative language to nondeterministic programming.
In contrast to the previous chapters, we will begin with operational semantics, extending the development of the previous chapter. Then, after introducing powerdomains, we will give a direct denotational semantics for our nondeterministic language (excluding intermediate input and output). Finally, we will extend the specification methods of Chapter 3 to nondeterminism (again excluding input and output) and also deal briefly with weakest preconditions.
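The flavor of guarded commands can be conveyed by a small executable sketch (ours, not Dijkstra's notation): each guarded command is a (guard, action) pair, and the repetitive construct do … od runs any one enabled command, chosen nondeterministically, until no guard holds.

```python
import random

# A sketch of Dijkstra's repetitive construct over states represented
# as dicts. Each guarded command is a (guard, action) pair of functions.

def do_od(guarded, state):
    """While some guard holds, run one enabled command, chosen
    nondeterministically; terminate when no guard holds."""
    while True:
        enabled = [act for g, act in guarded if g(state)]
        if not enabled:
            return state
        random.choice(enabled)(state)   # the nondeterministic choice

# Euclid's algorithm in guarded-command style:
#   do x > y -> x := x - y  []  y > x -> y := y - x  od
def gcd(a, b):
    s = do_od(
        [(lambda st: st["x"] > st["y"], lambda st: st.update(x=st["x"] - st["y"])),
         (lambda st: st["y"] > st["x"], lambda st: st.update(y=st["y"] - st["x"]))],
        {"x": a, "y": b})
    return s["x"]

print(gcd(12, 18))   # 6
```

Note that the program is nondeterministic but its final answer is not: whichever enabled command is chosen, the invariant gcd(x, y) = gcd(a, b) is preserved, illustrating how nondeterminism can avoid an arbitrary commitment without affecting the result.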
Before proceeding further, however, we must clear up a potential confusion. In automata theory, the result of a nondeterministic automaton is defined to be the union of the results of all possible executions.
Most serious programming languages combine imperative aspects, which describe computation in terms of state-transformation operations such as assignment, and functional or applicative aspects, which describe computation in terms of the definition and application of functions or procedures. To gain a solid understanding, however, it is best to begin by considering each of these aspects in isolation, and to postpone the complications that arise from their interactions.
Thus, beginning in this chapter and continuing through Chapter 7 (nondeterminism) and Chapters 8 and 9 (concurrency), we will limit ourselves to purely imperative languages. Then, beginning in Chapter 10, we will turn to purely functional languages. Languages that combine imperative and functional aspects will be considered in Chapter 13 (Iswim-like languages) and Chapter 19 (Algol-like languages).
In this chapter, we consider a simple imperative language that is built out of assignment commands, sequential composition, conditionals (i.e. if commands), while commands, and (in Section 2.5) variable declarations. We will use this language to illustrate the basic concept of a domain, to demonstrate the properties of binding in imperative languages, and, in the next chapter, to explore formalisms for specifying and proving imperative program behavior. In later chapters we will explore extensions to this language and other approaches to describing its semantics.
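As an informal preview of how such a language can be given meaning, the following sketch (our own illustration, with constructor names of our choosing) interprets the constructs just listed over states represented as finite maps from variables to integers.

```python
# A minimal interpreter for the simple imperative language: commands are
# tagged tuples, states are dicts from variable names to integers.
# Constructor names (assign, seq, if, while) are ours, for illustration.

def run(cmd, state):
    tag = cmd[0]
    if tag == "assign":                      # x := e
        _, var, expr = cmd
        state = dict(state)                  # states are values, not shared
        state[var] = expr(state)
        return state
    if tag == "seq":                         # c1 ; c2
        return run(cmd[2], run(cmd[1], state))
    if tag == "if":                          # if b then c1 else c2
        _, b, c1, c2 = cmd
        return run(c1 if b(state) else c2, state)
    if tag == "while":                       # while b do c
        _, b, body = cmd
        while b(state):
            state = run(body, state)
        return state
    raise ValueError(tag)

# factorial: f := 1 ; while n > 0 do (f := f * n ; n := n - 1)
prog = ("seq",
        ("assign", "f", lambda s: 1),
        ("while", lambda s: s["n"] > 0,
         ("seq",
          ("assign", "f", lambda s: s["f"] * s["n"]),
          ("assign", "n", lambda s: s["n"] - 1))))
print(run(prog, {"n": 5})["f"])   # 120
```

The state-to-state view of commands embodied by run is the intuition behind the denotational treatment developed in this chapter.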
At an intuitive level, the simple imperative language is so much a part of every programmer's background that it will hold few surprises for typical readers. As in Chapter 1, we have chosen for clarity's sake to introduce novel concepts such as domains in a familiar context. More surprising languages will come after we have sharpened our tools for specifying them.
Arrangements of lines (and planes) form the third important structure used in computational geometry, as important as convex hulls and Voronoi diagrams. And as we glimpsed at the end of the previous chapter, and will see more clearly in Section 6.6, all three structures are intimately related. An arrangement of lines is shown in Figure 6.1. It is a collection of (infinite) lines “arranged” in the plane. These lines induce a partition of the plane into convex regions (called cells, or faces), segments or edges (between line crossings), and vertices (where lines meet). The example in the figure has V = 45 vertices, E = 100 edges, and F = 56 faces; not all of these are visible within the limited window of the figure. It is this partition that is known as the arrangement. It is convenient to view the faces as open sets (not including their edges) and the edges as open segments (not including their bounding vertices), so that the dissection is a true partition: Its pieces cover the plane, but the pieces are disjoint from one another, “pairwise disjoint” in the idiom preferred by mathematicians.
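The counts quoted for the figure are no accident: for n lines in general position (no two parallel, no three concurrent), V = n(n−1)/2, E = n², and F = n(n−1)/2 + n + 1, and the figure's counts correspond to n = 10. A quick check of these standard formulas:

```python
# Counts for an arrangement of n lines in general position:
# each pair of lines meets in one vertex, each line is cut by its
# n-1 crossings into n edges, and the face count follows from
# V - E + F = 1 for this unbounded planar subdivision.

def arrangement_counts(n):
    v = n * (n - 1) // 2        # one vertex per pair of lines
    e = n * n                   # n edges per line, n lines
    f = v + n + 1               # so that V - E + F = 1
    return v, e, f

print(arrangement_counts(10))   # (45, 100, 56), as in Figure 6.1
```

Note 45 − 100 + 56 = 1, the Euler relation for a partition of the plane counting unbounded faces.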
Arrangements may seem too abstract to have much utility, but in fact they arise in a wide variety of contexts. Here are four; more will be discussed in Section 6.7.
The burgeoning field of robotics has inspired the exploration of a collection of problems in computational geometry loosely called “motion planning” problems, or more specifically “algorithmic motion planning” problems. As usual, details are abstracted away from a real-life application to produce mathematically “cleaner” versions of the problem. If the abstraction is performed intelligently, the theoretical explorations have practical import. This happily has been the case with motion planning, which applies not only to traditional robotics, but also to planning tool paths for numerically controlled machines, to wire routing on chips, to planning paths in geographic information systems (GIS), and to virtual navigation in computer graphics.
Problem Specification
The primary paradigm we examine in this chapter assumes a fixed environment of impenetrable obstacles, usually polygons and polyhedra in two and three dimensions respectively. Within this environment is the “robot” R, a movable object with some prespecified geometric characteristics: It may be a point, a line segment, a convex polygon, a hinged object, etc. The robot is at some initial position s (start), and the task is to plan motions that will move it to some specified final position t (terminus), such that throughout the motion, collision between the robot and the obstacles is avoided. A collision occurs when a point of the robot coincides with an interior point of an obstacle. Note that sliding contact with the boundary of the obstacles does not constitute a collision. A collision-avoiding path is called a free path. Often there are restrictions on the type of motions permitted.
Computational geometry broadly construed is the study of algorithms for solving geometric problems on a computer. The emphasis in this text is on the design of such algorithms, with somewhat less attention paid to analysis of performance. I have in several cases carried out the design to the level of working C programs, which are discussed in detail.
There are many brands of geometry, and what has become known as “computational geometry,” covered in this book, is primarily discrete and combinatorial geometry. Thus polygons play a much larger role in this book than do regions with curved boundaries. Much of the work on continuous curves and surfaces falls under the rubrics of “geometric modeling” or “solid modeling,” a field with its own conferences and texts, distinct from computational geometry. Of course there is substantial overlap, and there is no fundamental reason for the fields to be partitioned this way; indeed they seem to be merging to some extent.
The field of computational geometry is a mere twenty years old as of this writing, if one takes M. I. Shamos's thesis (Shamos 1978) as its inception. Now there are annual conferences, journals, texts, and a thriving community of researchers with common interests.
Topics Covered
I consider the “core” concerns of computational geometry to be polygon partitioning (including triangulation), convex hulls, Voronoi diagrams, arrangements of lines, geometric searching, and motion planning. These topics form the chapters of this book. The field is not so settled that this list can be considered a consensus; other researchers would define the core differently.
Much of computational geometry performs its computations on geometrical objects known as polygons. Polygons are a convenient representation for many real-world objects; convenient both in that an abstract polygon is often an accurate model of real objects and in that it is easily manipulated computationally. Examples of their use include representing shapes of individual letters for automatic character recognition, of an obstacle to be avoided in a robot's environment, or of a piece of a solid object to be displayed on a graphics screen. But polygons can be rather complicated objects, and often a need arises to view them as composed of simpler pieces. This leads to the topic of this and the next chapter: partitioning polygons.
Definition of a Polygon
A polygon is the region of a plane bounded by a finite collection of line segments forming a simple closed curve. Pinning down a precise meaning for the phrase “simple closed curve” is unfortunately a bit difficult. A topologist would say that it is the homeomorphic image of a circle, meaning that it is a certain deformation of a circle. We will avoid topology for now and approach a definition in a more pedestrian manner, as follows.
Let v0, v1, v2, …, vn−1 be n points in the plane. Here and throughout the book, all index arithmetic will be mod n, implying a cyclic ordering of the points, with v0 following vn−1, since (n − 1) + 1 = n ≡ 0 (mod n).
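In code, this cyclic indexing convention is simply reduction mod n, as a small illustration (our own, using a square for the points) makes concrete:

```python
# Index arithmetic mod n: the successor of vertex i is (i + 1) % n,
# so v0 follows v(n-1), since (n - 1) + 1 = n = 0 (mod n).

def succ(i, n):
    return (i + 1) % n

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # v0 .. v3
n = len(square)
edges = [(square[i], square[succ(i, n)]) for i in range(n)]
print(edges[-1])   # ((0, 1), (0, 0)) -- the last edge wraps back to v0
```

The wrap-around edge from vn−1 back to v0 is what closes the polygon's boundary.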
In this chapter we study the Voronoi diagram, a geometric structure second in importance only to the convex hull. In a sense a Voronoi diagram records everything one would ever want to know about proximity to a set of points (or more general objects). And often one does want to know such details about proximity: Who is closest to whom? Who is furthest? And so on. The concept is more than a century old, discussed in 1850 by Dirichlet and in a 1908 paper of Voronoi.
We will start with a series of examples to motivate the discussion and then plunge into the details of the rich structure of the Voronoi diagram (in Sections 5.2 and 5.3). It is necessary to become intimately familiar with these details before algorithms can be appreciated (in Section 5.4). Finally we will reveal the beautiful connection between Voronoi diagrams and convex hulls in Section 5.7. This chapter includes only two short pieces of code, to construct the dual of the Voronoi diagram (the Delaunay triangulation), in Section 5.7.4.
APPLICATIONS: PREVIEW
1. Fire Observation Towers
Imagine a vast forest containing a number of fire observation towers. Each ranger is responsible for extinguishing any fire closer to her tower than to any other tower. The set of all trees for which a particular ranger is responsible constitutes the “Voronoi polygon” associated with her tower. The Voronoi diagram maps out the lines between these areas of responsibility: the spots in the forest that are equidistant from two or more towers. (A look ahead to Figure 5.5 may aid intuition.)
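The ranger's rule can be stated computationally: assigning each tree to its nearest tower is exactly deciding which tower's Voronoi polygon contains it. A brute-force sketch (with made-up coordinates, purely for illustration):

```python
# Assign each "tree" to the nearest "tower": brute-force nearest
# neighbor, which is membership in that tower's Voronoi polygon.

def nearest_tower(tree, towers):
    def d2(p, q):               # squared distance; avoids a needless sqrt
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(range(len(towers)), key=lambda i: d2(tree, towers[i]))

towers = [(0, 0), (10, 0), (5, 8)]
print(nearest_tower((1, 2), towers))   # 0
print(nearest_tower((6, 7), towers))   # 2
```

The brute-force query costs time proportional to the number of towers; the point of the Voronoi diagram is to organize the plane so that such queries can be answered far faster.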
The most ubiquitous structure in computational geometry is the convex hull (sometimes shortened to just “the hull”). It is useful in its own right and useful as a tool for constructing other structures in a wide variety of circumstances. Finally, it is an austerely beautiful object playing a central role in pure mathematics.
It also represents something of a success story in computational geometry. One of the first papers identifiably in the area of computational geometry concerned the computation of the convex hull, as will be discussed in Section 3.5. Since then there has been an amazing variety of research on hulls, ultimately leading to optimal algorithms for most natural problems. We will necessarily select a small thread through this work for this chapter, partially compensating with a series of exercises on related topics (Section 3.9).
Before plunging into the geometry, we briefly mention a few applications.
Collision avoidance. If the convex hull of a robot avoids collision with obstacles, then so does the robot. Since the computation of paths that avoid collision is much easier with a convex robot than with a nonconvex one, this is often used to plan paths. This will be discussed in Chapter 8 (Section 8.4).
Fitting ranges with a line. Finding a straight line that fits between a collection of data ranges maps to finding the convex region common to a collection of half-planes (O'Rourke 1981).
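Since the hull recurs throughout this chapter, a compact working sketch may help fix ideas. The following is Andrew's monotone-chain algorithm, one of several optimal O(n log n) methods (not necessarily the particular algorithm developed in Section 3.5):

```python
# Andrew's monotone-chain convex hull: sort the points, then build the
# lower and upper hulls by discarding points that make a non-left turn.

def cross(o, a, b):
    """Twice the signed area of triangle oab; positive iff a left turn."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()                 # drop points inside the chain
            h.append(p)
        return h
    lower = half(pts)
    upper = half(reversed(pts))
    return lower[:-1] + upper[:-1]      # counterclockwise, no duplicates

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)] -- the interior point (1,1) is dropped
```

Each point is pushed and popped at most once, so the work after sorting is linear; the sort dominates, giving O(n log n) overall.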
In this chapter, we extend the simple imperative language and the methods for reasoning about its programs to include one-dimensional arrays with integer subscripts. Although more elaborate and varied forms of arrays are provided by many programming languages, such simple arrays are enough to demonstrate the basic semantical and logical properties of arrays.
There are two complementary ways to think about arrays. In the older view, which was first made explicit in early work on semantics by Christopher Strachey, an array variable is something that one can apply to an integer (called a subscript) to obtain an “array element” (in Strachey's terminology, an “L-value”), which in turn can be either evaluated, to obtain a value, or assigned, to alter the state of the computation. In the newer view, which is largely due to Hoare but has roots in the work of McCarthy, an array variable, like an ordinary variable, has a value — but this value is a function mapping subscripts into ordinary values. Strachey's view is essential for languages that are rich enough that arrays can share elements. But for the simple imperative language, and especially for the kind of reasoning about programs developed in the previous chapter, Hoare's view is much more straightforward.
Abstract Syntax
Clearly, array variables are a different type of variable than the integer variables used in previous chapters.
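Hoare's view described above can be made concrete: the value of an array variable is a function from subscripts to values, and an assignment a[i] := x denotes a new function agreeing with the old one everywhere except at i. A sketch using dicts as finite functions (our own illustration):

```python
# In Hoare's view, an array's value is a function from subscripts to
# ordinary values; array assignment yields a NEW function differing
# from the old only at the assigned subscript.

def update(a, i, x):
    """The function that is like a everywhere except at subscript i,
    where it gives x (often written [a | i: x])."""
    b = dict(a)
    b[i] = x
    return b

a = {0: 10, 1: 20, 2: 30}
b = update(a, 1, 99)
print(b[1], a[1])   # 99 20 -- the original array value is unchanged
```

Treating the whole array as a single value in this way is what makes Hoare-style reasoning about array assignment straightforward: it is an ordinary assignment whose right-hand side happens to be a function.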
Given a sequence of nonnegative real numbers λ0, λ1, … that sum to 1, we consider a random graph having approximately λi·n vertices of degree i. In [12] the authors essentially show that if ∑i i(i−2)λi > 0 then the graph a.s. has a giant component, while if ∑i i(i−2)λi < 0 then a.s. all components in the graph are small. In this paper we analyse the size of the giant component in the former case, and the structure of the graph formed by deleting that component. We determine ε, λ′0, λ′1, … such that a.s. the giant component, C, has εn + o(n) vertices, and the structure of the graph remaining after deleting C is basically that of a random graph with n′ = n − |C| vertices, and with λ′i·n′ of them of degree i.
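The criterion cited from [12] is a simple computation on the degree fractions, which the following sketch (ours, for illustration) evaluates for two small examples:

```python
# The threshold quantity from [12]: with lam[i] the limiting fraction
# of degree-i vertices, Q = sum_i i*(i-2)*lam[i] decides between a
# giant component (Q > 0) and all components small (Q < 0).

def molloy_reed_Q(lam):
    """lam[i] = limiting fraction of degree-i vertices; sum(lam) = 1."""
    return sum(i * (i - 2) * l for i, l in enumerate(lam))

# All vertices of degree 1 (a perfect matching): Q = -1 < 0, no giant.
print(molloy_reed_Q([0.0, 1.0]))
# Half degree 1, half degree 3: Q = -0.5 + 1.5 = 1.0 > 0, giant component.
print(molloy_reed_Q([0.0, 0.5, 0.0, 0.5]))
```

Degree-2 vertices contribute nothing to Q, reflecting that they merely subdivide edges without changing the component structure.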
Let k be a positive integer and G a finite abelian group of order n, where n ≥ k² − 4k + 8. Then every sequence of 2n − ¼k² + k − 2 elements in G assuming k distinct values has an n-subsequence with sum zero. This settles a conjecture of Bialostocki and Lotspeich.
Stacks which allow elements to be pushed into any of the top r positions and popped from any of the top s positions are studied. An asymptotic formula for the number un of permutations of length n sortable by such a stack is found in the cases r=1 or s=1. This formula is found from the generating function of un. The sortable permutations are characterized if r=1 or s=1 or r=s=2 by a forbidden subsequence condition.
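For the base case r = s = 1 this is the classical single stack, where sortability admits a direct test: the greedy simulation below succeeds exactly when the permutation avoids the pattern 231 (the forbidden-subsequence condition for this case). The code is our own illustration of that classical fact:

```python
# r = s = 1: a permutation of 1..n is sortable by a single stack iff the
# greedy strategy works -- push each input element, and pop whenever the
# top of the stack is the next value needed in the output.

def stack_sortable(perm):
    stack, expect = [], 1
    for x in perm:
        stack.append(x)
        while stack and stack[-1] == expect:
            stack.pop()
            expect += 1
    return not stack            # sortable iff everything was emitted

print(stack_sortable([2, 1, 4, 3]))   # True
print(stack_sortable([2, 3, 1]))      # False: the pattern 231 itself
```

The richer stacks studied in the paper, with access to the top r or s positions, enlarge the class of sortable permutations; the abstract's forbidden-subsequence characterizations generalize this 231 condition.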
The natural relations for sets are those definable in terms of the emptiness of the subsets corresponding to Boolean combinations of the sets. For pairs of sets, there are just five natural relations of interest, namely, strict inclusion in each direction, disjointness, intersection with the universe being covered, or not. Let N denote {1, 2, …, n} and N(2) denote {(i, j) | i, j ∈ N and i < j}. A function μ on N(2) specifies one of these relations for each pair of indices. Then μ is said to be consistent on M ⊆ N if and only if there exists a collection of sets corresponding to indices in M such that the relations specified by μ hold between each associated pair of the sets. Firstly, it is proved that if μ is consistent on all subsets of N of size three then μ is consistent on N. Secondly, explicit conditions that make μ consistent on a subset of size three are given as generalized transitivity laws. Finally, it is shown that the result concerning binary natural relations can be generalized to r-ary natural relations for arbitrary r ≥ 2.