Some 10 years ago, in an article in this journal, Harper illustrated the powerful method of proof-directed debugging for developing programs. Unfortunately, his example uses both higher-order functions and continuation-passing style, making it too difficult for students in an introductory programming course. In this pearl, we present a first-order version of Harper's example and demonstrate that it is easy to transform the final version into an efficient state machine. Our new version convinces students that the approach is useful, even essential, in developing both correct and efficient programs.
In Chapter 1 we combined two isometries g, h to produce a third by taking their compositions gh (do g, then h) and hg. There is another way to combine two isometries, of great practical use in the context of plane patterns, and which we will introduce in Section 2.3. We begin by highlighting two geometrical ways to find the composition (or product) of isometries. The first was already used in the proof of Theorem 1.18.
Method 1
(A) Determine the sense of the composition from those of its parts (Remark 1.17).
(B) Determine the effect of the composition on two convenient points P, Q.
(C) Find an isometry with the right sense and effect on P, Q. This must be the one required by Theorem 1.10.
Notice that (C) is now made easier by our knowledge of the four isometry types (Theorem 1.18). This method can be beautifully simple and effective for otherwise tricky compositions, but the second approach, given by Theorem 2.1 and Corollary 2.2, is perhaps more powerful for obtaining general results and insights. Together with Theorems 1.15 and 1.16, it says that every isometry can be decomposed into reflections, and it tells us how to combine reflections.
Method 2 Decompose the given isometries into reflections, using the available freedom of choice, so that certain reflections in the composition cancel each other out. See Examples 2.3 to 2.7. We note for later:
Method 3 Use Cartesian coordinates (see Chapter 7).
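To see Method 3 concretely, here is a minimal sketch in Python (the (matrix, vector) representation and the function names are ours, not the book's): a plane isometry is written in Cartesian coordinates as x ↦ Ax + t, with A a 2 × 2 orthogonal matrix and t a translation vector, so composing two isometries reduces to combining their matrices and vectors.

```python
# A minimal sketch of Method 3 (our own illustration): a plane isometry
# as the pair (A, t), acting by x -> A x + t.
import numpy as np

def compose(g, h):
    """Return 'do g, then h' as a (matrix, vector) pair."""
    A1, t1 = g
    A2, t2 = h
    # h(g(x)) = A2 (A1 x + t1) + t2 = (A2 A1) x + (A2 t1 + t2)
    return A2 @ A1, A2 @ t1 + t2

def rotation(theta):
    """Rotation about the origin through angle theta (determinant +1)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]), np.zeros(2)

def reflection_x_axis():
    """Reflection in the x-axis (determinant -1, so sense-reversing)."""
    return np.array([[1.0, 0.0], [0.0, -1.0]]), np.zeros(2)

# Reflection in the x-axis followed by a half-turn about the origin
# gives reflection in the y-axis, as Method 1 would also predict.
A, t = compose(reflection_x_axis(), rotation(np.pi))
print(np.round(A), t)   # [[-1. 0.] [0. 1.]], [0. 0.]
```

The sign of det A records the sense of the isometry used in step (A) of Method 1.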
This text is a successor to the 1992 Mathematics for Computer Graphics. It retains the original Part I on plane geometry and pattern-generating symmetries, along with much on 3D rotation and reflection matrices. On the other hand, the completely new pages outnumber the total pages of the older book.
In more detail, topology becomes a reference and is replaced by probability, leading to simulation, priors and Bayesian methods, and Shannon's information theory. Also, notably, the Fourier Transform appears in various incarnations, along with Artificial Neural Networks. As the book's title implies, all this is applied to digital images, their processing, compression, restoration and recognition.
Wavelets are used too, in compression (as are fractals), and in conjunction with B-splines and subdivision to achieve multiresolution and curve editing at varying scales. We conclude with the Fourier approach to tomography, the medically important reconstruction of an image from lower-dimensional projections.
As before, a high priority is given to examples and illustrations, and there are exercises, which the reader can use if desired, at strategic points in the text; these sometimes form part of the exercises placed at the end of each chapter. Exercises marked with a tick are partly, or more likely fully, solved on the website. Especially after Chapter 6, solutions are the rule, except for implementation exercises. In the latter regard there are a considerable number of pseudocode versions throughout the text, for example ALGO 11.9 of Chapter 11, simulating the d-dimensional Gaussian distribution, or ALGO 16.1, wavelet compression with limited percentage error.
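For readers who wish to see the flavour of such pseudocode in a concrete language, here is a rough Python analogue of simulating a d-dimensional Gaussian (our own sketch under standard assumptions, not a transcription of the book's ALGO 11.9):

```python
# Our illustrative sketch: sample a d-dimensional Gaussian with mean mu
# and covariance Sigma, via a Cholesky factor of Sigma.
import numpy as np

def gaussian_sample(mu, Sigma, rng=None):
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(len(mu))   # d independent N(0, 1) variates
    L = np.linalg.cholesky(Sigma)      # Sigma = L L^T
    return mu + L @ z                  # result has mean mu, covariance Sigma

mu = np.zeros(2)
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
print(gaussian_sample(mu, Sigma))
```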
In this chapter we introduce and exemplify the division of plane patterns into 17 types by symmetry group. This begins with the broad division into net types. The chapter concludes with a scheme for identifying pattern types, plus examples and exercises. It then remains to show that all the types are distinct and that there are no more; this will be done in Chapter 6.
Preliminaries
Here we recapitulate on some important ideas and results, then introduce the signature system which will label each type of plane pattern according to its symmetry group. For the basics of a plane pattern F and its group of symmetries G, see Review 4.1. We have introduced the subgroup T of G, consisting of all translation symmetries of F (Definition 4.2), and the representation of those translations by a net N of points relative to a chosen basepoint O (Definition 4.3). The points of N are the vertices of a tiling of the plane by parallelogram cells (Construction 4.5 – see especially Figure 4.3).
The division of patterns into five classes according to net type (determined by T) is motivated by reflection issues in Section 4.3.1. In Section 4.3.3 we described the five types, indicating case by case which of the feasible rotational symmetries for a plane pattern (Section 4.3.2) are permitted by net invariance, Theorem 4.14.
In this chapter we ease the transition from vectors in the plane to three dimensions and n-space. The angle between two vectors is often replaced by their scalar product, which is in many ways easier to work with and has special properties. Other kinds of vector product are useful too in geometry. An important issue for a set of vectors is whether it is dependent (i.e. whether one vector is a linear combination of the others). This apparently simple idea will have many ramifications in practical application.
We introduce the first properties of matrices, an invaluable handle on transformations in 2-, 3- and n-space. At this stage, besides identifying isometries with orthogonal matrices, we characterise the matrices of projection mappings, preparatory to the Singular Value Decomposition of Chapter 8 (itself leading to an optimal transform in Chapter 10).
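As a small concrete illustration of two of these ideas (our own sketch, not from the text): dependence of a set of vectors can be tested via matrix rank, and an orthogonal matrix preserves lengths, which is why isometries fixing the origin are identified with orthogonal matrices.

```python
# Our illustration: linear dependence via rank, and length preservation
# by an orthogonal matrix.
import numpy as np

vectors = np.array([[1.0, 2.0, 3.0],
                    [0.0, 1.0, 1.0],
                    [1.0, 3.0, 4.0]])   # third row = first + second
rank = np.linalg.matrix_rank(vectors)
print("dependent" if rank < len(vectors) else "independent")   # dependent

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal: Q^T Q = I
x = np.array([3.0, 4.0])
print(np.linalg.norm(x), np.linalg.norm(Q @ x))   # both 5.0
```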
Vectors and handedness
This section is something like an appendix. The reader may wish to scan quickly through or refer back to it later for various formulae and notations. We reviewed vectors in the plane in Section 1.2.1. Soon we will see how the vector properties of having direction and length are even more useful in 3-space. The results of Section 1.2.1 still hold, but vectors now have three components rather than two.
Recapitulation – vectors
A vector v consists of a magnitude |v|, also called the length of v, and a direction. Thus, as illustrated in Figure 7.1, v is representable by any directed line segment AB with the same length and direction.
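In coordinates this is easy to compute; a tiny sketch (ours) for a 3-component vector:

```python
# Magnitude |v| and direction (the unit vector v / |v|) of a 3-vector.
import numpy as np

v = np.array([1.0, 2.0, 2.0])
length = np.linalg.norm(v)   # sqrt(1 + 4 + 4) = 3
direction = v / length       # unit vector in the direction of v
print(length, direction)
```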
One practical aim in Part I is to equip the reader to build a pattern-generating computer engine. The patterns we have in mind come from two main streams. The first is the geometrical tradition, represented for example in the fine Moslem art in the Alhambra at Granada in Spain, but found very widely. (See Figure 1.1.)
Less abundant but still noteworthy are the patterns left by the ancient Romans (Field, 1988). The second type is that for which the Dutch artist M. C. Escher is famous, exemplified in Figure 1.2, in which (stylised) motifs of living forms are dovetailed together in remarkable ways. Useful references are Coxeter (1987), MacGillavry (1976), and especially Escher (1989). In Figure 1.2 we imitate a classic Escher-type pattern.
The magic is due partly to the designers' skill and partly to their discovery of certain rules and techniques. We describe the underlying mathematical theory and how it may be applied in practice by someone claiming no particular artistic skills.
The patterns to which we refer are true plane patterns, that is, there are translations in two non-parallel directions (opposite directions count as parallel) which move every submotif of the pattern onto a copy of itself elsewhere in the pattern. A translation is a movement of everything, in the same direction, by the same amount. Thus in Figure 1.2 piece A can be moved to piece B by the translation represented by arrow a, but no translation will transform it to piece C. A reflection would have to be incorporated.
Review 4.1 We recapitulate on some basic ideas. An isometry of the plane is a transformation of the plane which preserves distances, and is consequently a translation, rotation, reflection or glide (by Theorem 1.18). We may refer to any subset F of the plane as a pattern, but in doing so we normally imply that F has symmetry. That is, there is an isometry g which maps F onto itself. In this case g is called a symmetry or symmetry operation of F. Again, a motif M in (of) F is in principle any subset of F, but we generally have in mind a subset that is striking, attractive, and/or significant for our understanding of the structure of F.
Since the symmetry g has the two properties of preserving distance and sending every point of F to another point of F, it sends M to another motif M′ of F, which we may describe as being of the same size and shape as M, or congruent to M. By now we have many examples of this situation. An early case is that of the bird motifs of Figure 1.2, mapped onto other birds by translations and reflections. We observed that the composition of two symmetries of F, the result of applying one symmetry, then the other, qualifies also as a symmetry, and so the collection of all symmetries of F forms a group G (see Section 2.5). We call G the symmetry group of F.
In the previous chapter we introduced Shannon's concept of the amount of information (entropy) conveyed by an unknown symbol as being the degree of our uncertainty about it. This was applied to encoding a message, or sequence of symbols, in the minimum number of bits, including image compression. The theory was ‘noiseless’ in that no account was taken of loss through distortion as information is conveyed from one site to another. Now we consider some ways in which information theory handles the problem of distortion, and its solution. (For the historical development, see Slepian, 1974, Sloane and Wyner, 1993, or Verdú and McLaughlin, 2000.)
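As a one-function reminder of the quantity recapitulated here (our sketch, using the standard formula H = -Σ p_i log2 p_i):

```python
# Shannon entropy in bits: the average information per symbol of a source
# emitting symbol i with probability p_i.
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.25, 0.25]))   # 1.5 bits per symbol
```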
Physically, the journey can be anything from microns along a computer ‘bus’, to kilometres through our planet's atmosphere, to a link across the Universe reaching a space probe or distant galaxy. In Shannon's model of a communication system, Figure 12.1, we think of the symbols reaching their destination via a ‘channel’, which mathematically is a distribution of conditional probabilities for what is received, given what was sent.
The model incorporates our assumptions about ‘noise’, which could be due to equipment which is faulty or used outside its specifications, atmospheric conditions, interference from other messages, and so on. Some possibilities are shown in Table 13.1.
We prove Shannon's (‘noisy’) Channel Coding Theorem, then review progress in finding practical error-correcting codes that approach the possibilities predicted by that theorem for successful transmission in the face of corruption by a noisy channel.
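An illustrative sketch of these ideas (ours; the binary symmetric channel is the standard textbook example, and we have not reproduced the book's Table 13.1): each transmitted bit is flipped independently with probability p, and the Channel Coding Theorem predicts reliable transmission at any rate below the capacity C = 1 - H(p) bits per channel use.

```python
# Our sketch: simulate a binary symmetric channel and compute its capacity.
import math
import random

def bsc(bits, p, rng=None):
    """Flip each bit independently with probability p."""
    rng = rng or random.Random(0)
    return [b ^ (rng.random() < p) for b in bits]

def bsc_capacity(p):
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # binary entropy H(p)
    return 1 - h

sent = [1, 0, 1, 1, 0, 0, 1, 0]
print(bsc(sent, 0.1))        # a possibly corrupted copy of 'sent'
print(bsc_capacity(0.1))     # about 0.531 bits per channel use
```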
Existing approaches to object encapsulation either rely on ad hoc syntactic restrictions or require the use of specialised type systems. Syntactic restrictions are difficult to scale and to prove correct, while specialised type systems require extensive changes to programming languages. We demonstrate that confinement can be enforced cheaply in Featherweight Generic Java, with no essential change to the underlying language or type system. This result demonstrates that polymorphic type parameters can simultaneously act as ownership parameters and should facilitate the adoption of confinement and ownership type systems in general-purpose programming languages.
Here we extend all things Fourier to two dimensions. Shortly we will be able to model many effects on an image, such as motion or focus blur, by the 2D version of convolution, which is handled especially simply by the Fourier Transform. This enables us to restore an image from many kinds of noise and other corruption. We begin Section 15.1 by showing how the Fourier Transform, and others, may be converted from a 1- to a 2-dimensional transform of a type called separable, reducing computation and adding simplicity. In the Fourier case we may apply the FFT in each dimension individually, and hence speed calculation still further.
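Separability is easy to check numerically; a quick sketch (ours):

```python
# The 2D Fourier Transform of an array equals the 1D FFT applied along
# one axis and then the other -- the separability exploited in the text.
import numpy as np

image = np.random.default_rng(1).random((4, 4))
via_1d = np.fft.fft(np.fft.fft(image, axis=0), axis=1)
print(np.allclose(via_1d, np.fft.fft2(image)))   # True
```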
In Section 15.1.3 we prove that certain changes in an image result in predictable changes in its transform. We include the effect of both rotation and projection, which are germane to computerised tomography in Chapter 18. In Section 15.1.4 we present consequences of the 2D Convolution Theorem for the Fourier Transform, and offer a polynomial-based proof that aims to show ‘why’ the result holds. Section 15.1.5 establishes connections between correlation and the Fourier Transform, for later use.
We begin Section 15.2 by considering the low-level operation of changing pixels solely on the basis of their individual values, then move on to the possibilities of ‘filtering’ by changing Fourier coefficients. Next we see how the same effect may be accomplished by convolving the original with a matrix of coefficients. We introduce filters that achieve edge-detection in an image.
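As a minimal illustration of the last point (ours; the Sobel kernel below is one common edge-detecting choice, not necessarily the book's):

```python
# Edge detection by sliding a small matrix of coefficients over an image.
# (Strictly this is cross-correlation -- the kernel is not flipped -- which
# for edge maps makes no practical difference.)
import numpy as np

def filter2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: dark left half, bright right half; the vertical boundary
# appears as large values in the filtered output.
image = np.hstack([np.zeros((5, 3)), np.ones((5, 3))])
print(filter2d(image, sobel_x))
```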
In this chapter we recapitulate the beginnings of probability theory. The reader to whom this subject is completely new may wish first to consult a more leisurely introduction, such as McColl (1997).
Sample spaces
There are different schools on the meaning of probability. For example, it is argued that a statement such as ‘The Scottish National Party has a probability of 1/5 of winning the election’ is meaningless because the experiment ‘have an election’ cannot be repeated to order. The way out has proved to be an axiomatic approach, originated by Kolmogorov (see Figure 9.1) in 1933, in which all participants, though begging to differ on some matters of interpretation, can nevertheless agree on the consequences of the rules (see e.g. Kolmogorov, 1956b). His work included a rigorous definition of conditional expectation, a crucial and fruitful concept in current work in many areas and applications of probability.
Sample spaces and events
Model 9.1 We begin with the idea that, corresponding to an experiment E, there is a set S, the sample space, consisting of all possible outcomes. In the present context an event A is a set of outcomes, that is, A ⊆ S. Then it is a matter of definition that, if E is performed with outcome a, the event A occurs if and only if a ∈ A.
Often, but not always, the outcomes are conveniently represented by numbers, as illustrated in examples below.
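A tiny concrete instance of Model 9.1 (our illustration): one roll of a die, with the event ‘the outcome is even’.

```python
# Sample space S, event A (a subset of S), and the test 'a in A'.
S = {1, 2, 3, 4, 5, 6}              # all possible outcomes of the roll
A = {a for a in S if a % 2 == 0}    # the event "even outcome"
outcome = 4                         # suppose the experiment yields 4
print(outcome in A)                 # True: the event A occurs
```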