The existence of functions such as Nest and NestList, together with extensive visualization tools, ensures that Mathematica is a natural system for investigating the iteration of mappings. One favourite is the logistic map that was considered in Chapter 5. Complex numbers play a very natural role, from two quite different points of view. First, we wish to understand why maps such as the logistic map behave the way they do, without relying merely on numerical simulation. Second, we wish to extend the use of numerical simulation to mappings of the complex plane.
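The iteration itself is easy to mimic outside Mathematica. Here is a minimal Python sketch of a NestList-style helper applied to the logistic map; the name nest_list is my own choice, echoing NestList, and is not from the book.

```python
# A minimal sketch (in Python) of Mathematica's NestList,
# applied to the logistic map x -> r x (1 - x).

def nest_list(f, x, n):
    """Return [x, f(x), f(f(x)), ...] with n applications, like NestList."""
    out = [x]
    for _ in range(n):
        x = f(x)
        out.append(x)
    return out

r = 2.5
orbit = nest_list(lambda x: r * x * (1 - x), 0.1, 50)
print(orbit[-1])  # converges to the stable fixed point 1 - 1/r = 0.6
```

For this sub-critical value of r the orbit settles on the fixed point; pushing r towards 4 reproduces the period-doubling cascade and chaos discussed in Chapter 5.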
You explored the first point in Chapter 5, and the second in Chapters 4 and 6, for simple polynomial maps. Now you will consider some other mappings of the complex plane into itself. These mappings are non-holomorphic (in the sense that will be defined properly in Chapter 10 – for now it suffices to realize that this means the functions involve complex conjugation in an essential way). Normally, when considering the theory of complex numbers, one's interest is quite rightly focused on the analytical properties of holomorphic or meromorphic functions. I hope the constructions described here will suggest that there is much that is both beautiful and interesting in complex structures that are not holomorphic. My approach is based on that of Field and Golubitsky (1992), and the use of Mathematica on this topic was first given by the author (Shaw, 1995).
In this chapter we shall explore the notion of a ‘transform’ of a function, where an integral mapping is used to construct a ‘transformed’ function out of an original function. The continuous Fourier transform is one of a family of such mappings, which also includes the Laplace transform and the discrete Fourier transform. The Laplace transform will be discussed in Chapter 18. Numerical methods for the discrete Fourier transform and for the inversion of Laplace transforms will be given in Chapter 21.
What is the point of such transforms? Perhaps the most important lies in the solution of linear differential equations. Here the operation of a transform can convert differential equations into algebraic equations. In the case of an ordinary differential equation (ODE), one such transform can produce a single algebraic condition that can be solved for the transform by elementary means, leaving one with the problem of inversion – the means by which the transformed solution is turned into the function that is desired. In the case of a partial differential equation (PDE), for example in two variables, one transform can be used to reduce the PDE to an ODE, which may be solved by standard methods, or, perhaps, by the application of a further transform to an algebraic condition. Again one proceeds to a solution of the transformed problem. One or more inversions are then required to obtain the solution.
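A standard illustration of this reduction to algebra (not tied to any particular equation in this book): applying the Laplace transform $\mathcal{L}\{y\}(s)=\int_0^\infty e^{-st}y(t)\,dt$ to the ODE $y'+y=0$ with $y(0)=1$, and using $\mathcal{L}\{y'\}(s)=s\bar{y}(s)-y(0)$, gives the algebraic condition
\[ s\,\bar{y}(s) - 1 + \bar{y}(s) = 0 \quad\Longrightarrow\quad \bar{y}(s) = \frac{1}{s+1}, \]
and inversion recovers the familiar solution $y(t) = e^{-t}$.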
In real-world applications it is often the case that a purely analytical approach to transform calculus is insufficient. This chapter explores two techniques that extend the utility of transform methods by allowing a purely numerical treatment. We can explore numerical methods for both Fourier and Laplace transforms – in principle any method could be applied to either type of transform, since one is a rotation of the other. However, in practice, two types of problem appear to be of most importance:
(1) An essentially numerical approach to forward and backward Fourier transforms;
(2) The inversion of a complicated Laplace transform given in analytical form, for which there is no known analytical inverse.
An entire book could be written about the first topic, which is at the heart of many problems in applied mathematics, physics and engineering. It is of particular importance for signal and image processing. We shall briefly explore Mathematica's Fourier and InverseFourier functions. It should also be clear that any Fourier transform could be worked out numerically by the use of NIntegrate. In this chapter this approach will be explored in detail only for Laplace transforms.
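As a sketch of what Fourier and InverseFourier compute, here is a direct $O(n^2)$ discrete Fourier transform in Python, written to match what I believe is Mathematica's default convention ($1/\sqrt{n}$ normalization, positive exponent in the forward transform – worth checking against the documentation, and an FFT would of course be used in practice).

```python
# Direct O(n^2) DFT and inverse, mirroring Mathematica's default
# Fourier / InverseFourier convention: 1/Sqrt[n] normalisation,
# positive exponent in the forward direction.
import cmath

def dft(data):
    n = len(data)
    return [sum(data[r] * cmath.exp(2j * cmath.pi * r * s / n)
                for r in range(n)) / n ** 0.5
            for s in range(n)]

def inverse_dft(data):
    n = len(data)
    return [sum(data[s] * cmath.exp(-2j * cmath.pi * r * s / n)
                for s in range(n)) / n ** 0.5
            for r in range(n)]

signal = [1.0, 2.0, 3.0, 4.0]
roundtrip = inverse_dft(dft(signal))
assert all(abs(a - b) < 1e-12 for a, b in zip(roundtrip, signal))
```

The round-trip assertion is the basic sanity check: the inverse transform undoes the forward one up to floating-point error.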
The second topic is particularly important for generating solutions to partial differential equations. As we have seen in Chapters 18 and 19, certain partial differential equations may be simplified by transforming with respect to one or more of the independent variables, leading to a solvable algebraic or ordinary differential equation.
In this chapter we introduce the concept of differentiation of a complex function of a complex variable. There are several ways of approaching this topic, and we shall consider at least two. Given that a complex number may be regarded as a pair of real numbers, we need to make very clear the distinction between differentiability of a function that has two real arguments and differentiability of a function with a single complex argument, and shall take as our starting point a review of the differentiation of functions of two real variables. This approach has the merit of generalizing in a straightforward way to functions of many real or complex variables. We shall also consider another simple approach to differentiation based on the limit of a ratio. This latter approach is perhaps more familiar if you have taken a course in one-variable real calculus, but does not generalize to functions of several real or complex variables. A key result that we will establish is that a complex function is differentiable in the complex sense if (a) it is differentiable when considered as two real functions of two real variables and also (b) the Cauchy–Riemann equations (partial differential equations) apply. These equations link the real and imaginary parts of the function. After proving some basic results about complex differentiability, e.g. the product, quotient and chain rules, we then derive one of the principal results of basic complex analysis: that power series are differentiable within their radius of convergence.
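For reference, writing $f(z) = u(x,y) + \mathrm{i}\,v(x,y)$ with $z = x + \mathrm{i}y$, the Cauchy–Riemann equations referred to above are
\[ \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}. \]
A quick check with $f(z) = z^2$, so that $u = x^2 - y^2$ and $v = 2xy$, gives $u_x = 2x = v_y$ and $u_y = -2y = -v_x$, as required.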
The goal of this chapter is to provide you with a very basic understanding of how Laplace's equation (in three space variables) and the scalar wave equation (in three space and one time dimension) can be solved using holomorphic methods. This chapter will build on the methods developed in Chapter 23 in a very direct way, and you are recommended to read that chapter now before proceeding further here.
The material on dimension three is, in part, a much simplified version of part of a series of lectures given by Nigel Hitchin (now Professor, F.R.S.) in Oxford in the early 1980s. The picture presented here will not give you anything like the full geometrical picture underlying the results, which we shall develop by elementary methods. Part of the theory of the intrinsic three-dimensional approach is developed in a paper published by Hitchin (1982). The four-dimensional picture is covered most comprehensively by its principal architect, Professor Sir Roger Penrose, F.R.S., in Penrose and Rindler (1984a, b).
Laplace's equation in dimension three
Perhaps the most natural place to start is with the solution of Laplace's equation in three variables. In Chapter 23 we found, in Eq. (23.85), a natural holomorphic representation of a point in (possibly complex) three-dimensional space, arising as a degenerate case of a holomorphic null curve. This representation is in terms of a special type of quadratic holomorphic function.
The solution of general quadratic equations becomes possible, in terms of simple square roots, once one has access to the machinery of complex numbers. The question naturally arises as to whether it is possible to solve higher-order equations in the same way. In fact, we must be careful to pose this question properly. We might be interested in whether we need to extend the number system still further. For example, if we write down a cubic equation with coefficients that are complex numbers, can we find all the roots in terms of complex numbers? We can ask similar questions for higher-order polynomial equations. The investigation of the solution of cubic and quartic equations is a topic that used to be popular in basic courses on complex numbers, but has become less fashionable recently, probably because of the extensive manipulations that are required. Armed with Mathematica, however, such manipulations become routine, and we can revisit some of the classic developments in algebra quite straightforwardly. These topics have become so unfashionable, in fact, that the author received some suggestions from readers of early drafts of this book that this material should be, if not removed altogether, relocated to an appendix! I have left this material here quite deliberately, having found numerous applications for the solutions of cubics, at least, in applied mathematics. You may feel free to skip this part of the material if you have no interest in cubics and higher-order systems.
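As one illustration of how routine such manipulations have become, here is a Python sketch of Cardano's classical method for the cubic $z^3 + az^2 + bz + c = 0$; the helper cubic_roots and its tolerances are my own choices for illustration, not taken from the book.

```python
# Cardano's method: depress the cubic, then build the three roots
# from a cube root u and the primitive cube root of unity.
import cmath

def cubic_roots(a, b, c):
    """Roots of z^3 + a z^2 + b z + c = 0 by Cardano's method."""
    p = b - a * a / 3
    q = 2 * a ** 3 / 27 - a * b / 3 + c
    # substituting z = t - a/3 gives the depressed cubic t^3 + p t + q = 0
    disc = cmath.sqrt(q * q / 4 + p ** 3 / 27)
    u3 = -q / 2 + disc
    if abs(u3) < 1e-14:          # avoid taking the cube root of (nearly) zero
        u3 = -q / 2 - disc
    u = u3 ** (1 / 3)            # principal cube root
    w = cmath.exp(2j * cmath.pi / 3)
    roots = []
    for k in range(3):
        uk = u * w ** k
        t = uk - p / (3 * uk) if abs(uk) > 1e-14 else 0.0
        roots.append(t - a / 3)
    return roots

print(cubic_roots(-6, 11, -6))   # z^3 - 6z^2 + 11z - 6 has roots 1, 2, 3
```

The same substitution-and-radicals strategy extends to the quartic (via the resolvent cubic) but, famously, not to the general quintic.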
In Chapter 16 we explored some of the geometrical properties of holomorphic functions, and in particular looked at the behaviour of the Möbius transformation. The key geometrical feature was that the mapping is conformal (where the derivative is non-zero) in the sense that it is locally angle-preserving. In Chapter 19 we highlighted the importance of conformal mapping to the solution of Laplace's equation in two dimensions. We produced several types of solution to Laplace's equation, including several examples where the region of interest was bounded by a circle or a line in the complex plane.
A question that arises naturally is how to manage matters when the region is not a half-plane or the interior/exterior of a circle. Here we must draw a sharp distinction between issues of general principle and practicalities of implementation. We shall begin by stating without proof an important, but non-constructive, theorem that addresses the general principle. Then we shall introduce the Schwarz–Christoffel (SC) mapping, which gives an explicit construction for polygonal regions.
There are very few examples of the SC mapping that can be worked out in closed form in terms of ‘simple’ functions. A novel use of Mathematica is to use its advanced built-in special-function capabilities, and their linkage to the symbolic integrator, to give evaluations of several expressions usually left as complicated integrals in more traditional treatments. We can use such evaluations to facilitate the visualization of the mappings, and hence to confirm the correctness of the answer.
When we come to explore real dimensions greater than two, matters become considerably more interesting. Indeed, in my own undergraduate studies, the question as to how to solve Laplace's equation in three or more dimensions using methods analogous to those presented here went unanswered, and remains unanswered in most undergraduate curricula the world over. I did not see the answer until my postgraduate studies, studying twistor methods in Oxford, and did not fully understand many of the geometrical aspects until my own post-doctoral work on twistor descriptions of minimal surfaces and strings. However, this author at least is convinced that many of the concepts are easily understood using only the elementary complex analysis already presented here, and this chapter is in part an attempt to get the message across in such a fashion. Another goal of this chapter and the subsequent one is to persuade some of you that, as well as being a basis for research in fundamental theoretical physics, there are also some interesting problems in basic and very applied mathematics that might well be solved with such methods, if only more people worked on them!
In some ways the presentation is easier if we make the jump straight to four dimensions, and treat the relativistic case. Results for three dimensions can then be obtained by constraining matters to a hyperplane t = 0.
There is a transform closely related to a special case of the Fourier transform, known as the Laplace transform. While the Laplace transform is very similar, historically it has come to have a separate identity, and one can often find separate tables of the two sets of transforms. Furthermore, it is very appropriate to make a separate assessment of both its inversion and its applications to differential equations. In the latter context, Laplace transforms are particularly useful when dealing with ODEs and PDEs defined on a half-space – in this setting their differential properties are slightly different from those of the Fourier transform due to the influence of the boundary.
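The boundary influence is visible in the standard derivative rule, obtained by integrating by parts (assuming $f$ grows slowly enough that $e^{-st}f(t) \to 0$ as $t \to \infty$):
\[ \mathcal{L}\{f'\}(s) = \int_0^\infty e^{-st} f'(t)\,dt = s\,\mathcal{L}\{f\}(s) - f(0), \]
so that, unlike the Fourier transform on the whole real line, differentiation introduces the boundary value $f(0)$ explicitly.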
The goal of this chapter is to define the Laplace transform and explain the basic results and links to complex variable theory. It should be appreciated that there is an extensive knowledge base of known transforms and their inverses. Sadly, many of the excellent books of tables of transforms are old and hard to find, if not actually out of print. You might like to check whether your library has copies of the old works by Erdelyi. One notable exception is the extraordinarily comprehensive series of books by Prudnikov, Brychkov and Marichev, in which volumes 4 and 5 (Prudnikov et al., 1998, 2002) give tables of transforms and their inverses.
Markov chain decomposition is a tool for analysing the convergence rate of a complicated Markov chain by studying its behaviour on smaller, more manageable pieces of the state space. Roughly speaking, if a Markov chain converges quickly to equilibrium when restricted to subsets of the state space, and if there is sufficient ergodic flow between the pieces, then the original Markov chain also must converge rapidly to equilibrium. We present a new version of the decomposition theorem where the pieces partition the state space, rather than forming a cover where pieces overlap, as was previously required. This new formulation is more natural and better suited to many applications. We apply this disjoint decomposition method to demonstrate the efficiency of simple Markov chains designed to uniformly sample circuits of a given length on certain Cayley graphs. The proofs further indicate that a Markov chain for sampling adsorbing staircase walks, a problem arising in statistical physics, is also rapidly mixing.
We show how to generate labelled and unlabelled outerplanar graphs with $n$ vertices uniformly at random in polynomial time in $n$. To generate labelled outerplanar graphs, we present a counting technique using the decomposition of a graph according to its block structure, and compute the exact number of labelled outerplanar graphs. This allows us to make the correct probabilistic choices in a recursive generation of uniformly distributed outerplanar graphs.
Next we modify our formulas to also count rooted unlabelled graphs, and finally show how to use these formulas in a Las Vegas algorithm to generate unlabelled outerplanar graphs uniformly at random in expected polynomial time.
The two-type Richardson model describes the growth of two competing infections on $\mathbb{Z}^d$. At time 0 two disjoint finite sets $\xi_1,\xi_2\subset \mathbb{Z}^d$ are infected with type 1 and type 2 infection respectively. An uninfected site then becomes type 1 (2) infected at a rate proportional to the number of type 1 (2) infected nearest neighbours and once infected it remains so forever. The main result in this paper is, loosely speaking, that the choice of the initial sets $\xi_1$ and $\xi_2$ is irrelevant in deciding whether or not the event of mutual unbounded growth for the two infection types has positive probability.
Let ${P_s(d)}$ be the probability that a random 0/1-matrix of size $d \times d$ is singular, and let ${E(d)}$ be the expected number of 0/1-vectors in the linear subspace spanned by $d-1$ random independent 0/1-vectors. (So ${E(d)}$ is the expected number of cube vertices on a random affine hyperplane spanned by vertices of the cube.)
We prove that bounds on ${P_s(d)}$ are equivalent to bounds on ${E(d)}$: \[{P_s(d)} = \bigg(2^{-d} {E(d)} + \frac{d^2}{2^{d+1}} \bigg) (1 + o(1)). \] We also report on computational experiments pertaining to these numbers.