The title of this book promises a discussion of topics in geometry and topology applied to grid or mesh generation. To generate meshes we need algorithms, the subject that provides the glue for our various investigations. However, I make no attempt to cover the breadth of computational geometry. Quite to the contrary, I seek out the subarea relevant to mesh generation, and I enrich that material with concepts from combinatorial topology and a modest amount of numerical analysis. To preserve the focus, I limit attention to meshes composed of triangles and tetrahedra. The economy in breadth permits a coherent and locally self-contained treatment of all topics. My choices are guided by stylistic concerns aimed at exposing ideas and limiting the amount of technical detail.
This book is based on notes I developed while teaching graduate courses at the University of Illinois at Urbana-Champaign and Duke University. The organization into chapters, sections, exercises, and open problems reflects the teaching style I practiced in these courses. Each chapter but the last develops a major topic and is worth about 2 weeks of teaching. Some of the topics are closely related and others are independent. The chapters are divided into sections; each section corresponds to a lecture of about 75 minutes. I believe in an approach to research that complements knowing what is known with knowing what is not known. I therefore recommend spending time in each lecture to discuss one of the open problems collected in the last chapter.
This chapter studies the problem of constructing meshes of tetrahedra in ℝ³. Such meshes are three-dimensional simplicial complexes, the same as what we called tetrahedrizations in Chapter 5. The new aspects are the attention to boundary conditions and the focus on the shape of the tetrahedra. The primary purpose of meshes is to provide a discrete representation of continuous space. The tetrahedra themselves and their arrangement within the mesh are not as important as how well they represent space. Unfortunately, there is no universal measure that distinguishes good from bad space representations. As a general guideline, we avoid very small and very large angles because of their usually negative influence on the performance of numerical methods based on meshes. Section 6.1 studies the problem of tetrahedrizing possibly nonconvex polyhedra. Section 6.2 measures tetrahedral shape and introduces the ratio property for Delaunay tetrahedrizations. Section 6.3 extends the Delaunay refinement algorithm from two to three dimensions. Section 6.4 studies a particularly annoying type of tetrahedron and ways to remove it from Delaunay meshes.
Meshing polyhedra
In this book, meshing a spatial domain means decomposing a polyhedron into tetrahedra that form a simplicial complex. This section introduces polyhedra and studies the problem of how many tetrahedra are needed to mesh them.
Polyhedra and faces
A polyhedron is the union of convex polyhedra, P = ⋃_{i∈I} ⋂ Hᵢ, where I is a finite index set, each Hᵢ is a finite set of closed half-spaces, and ⋂ Hᵢ denotes the common intersection of the half-spaces in Hᵢ. For example, the polyhedron in Figure 6.1 can be specified as the union of four convex polyhedra.
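This representation suggests a direct point-membership test: a point lies in P if and only if it satisfies every half-space inequality of at least one convex piece. Below is a minimal sketch of this test (the encoding of a half-space as a pair (a, b) with a·x ≤ b, and all function names, are illustrative choices, not taken from the book):

    import numpy as np

    # A half-space is encoded as (a, b), meaning all points x with a . x <= b.
    # A convex piece is a list of half-spaces; a polyhedron is a list of pieces.

    def in_convex_piece(x, half_spaces):
        # x lies in the intersection iff it satisfies every inequality.
        return all(np.dot(a, x) <= b for a, b in half_spaces)

    def in_polyhedron(x, pieces):
        # x lies in the union iff it lies in at least one convex piece.
        return any(in_convex_piece(x, piece) for piece in pieces)

    # Example: the unit cube as a single convex piece of six half-spaces.
    e = np.eye(3)
    cube = [[(e[i], 1.0) for i in range(3)] + [(-e[i], 0.0) for i in range(3)]]
    print(in_polyhedron(np.array([0.5, 0.5, 0.5]), cube))   # True
    print(in_polyhedron(np.array([1.5, 0.5, 0.5]), cube))   # False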
This paper is devoted to the complexity analysis of certain uniformity properties shared by all known symbolic methods of parametric polynomial equation solving (geometric elimination). It is shown that any parametric elimination procedure which is parsimonious with respect to branchings and divisions must necessarily have a non-polynomial sequential time complexity, even if highly efficient data structures (such as the arithmetic circuit encoding of polynomials) are used.
Introduction
The origins, development, and interaction of modern algebraic geometry and commutative algebra may be considered one of the most illustrative examples of historical dialectics in mathematics. Still today, and more than ever before, timeless idealism (in the form of modern commutative algebra) is bravely struggling with secular materialism (in the form of complexity issues in computational algebraic geometry).
Kronecker was doubtless the creator of this eternal battlefield and its first warlord. In a similar way as Gauss did for computational number theory, Kronecker intuitively laid the mathematical foundations of modern computer algebra. In 1882, in [30], he introduced his famous “elimination method” for polynomial equation systems and his “parametric representation” of (equidimensional) algebraic varieties. Incidentally, this parametric representation was rediscovered again and again until about ten years ago; it entered modern computer algebra as the “Shape Lemma” (see e.g. [38, 8, 14, 27]). Using his elimination method in a highly skillful, but unfortunately inimitable, way, Kronecker was able to state and prove a series of fundamental results on arbitrary algebraic varieties.
We analyze the properties that optimization algorithms must possess in order to prevent convergence to non-stationary points of the merit function. We show that demanding the exact satisfaction of constraint linearizations causes difficulties in a wide range of optimization algorithms. Feasibility control is a mechanism that prevents convergence to spurious solutions by ensuring that sufficient progress toward feasibility is made, even in the presence of certain rank deficiencies. The concept of feasibility control is studied in this paper in the context of Newton methods for nonlinear systems of equations and equality constrained optimization, as well as in interior methods for nonlinear programming.
Introduction
We survey some recent developments in nonlinear optimization, paying particular attention to global convergence properties. A common thread in our review is the concept of “feasibility control”, which is a name we give to mechanisms that regulate progress toward feasibility.
An example of lack of feasibility control occurs in line search Newton methods for solving systems of nonlinear equations. It has been known since the 1970s (see Powell [24]) that these methods can converge to undesirable points. The difficulties are caused by the requirement that each step satisfy a linearization of the equations, and cannot be overcome simply by performing a line search. The need for more robust algorithms has been one of the main driving forces behind the development of trust region methods. Feasibility control is provided in trust region methods by reformulating the step computation as an optimization problem with a restriction on the length of the step.
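As a concrete illustration of the last point (a standard formulation, sketched here from the general literature rather than quoted from this paper): rather than forcing the Newton step to satisfy the linearization J(x_k)s = −F(x_k) exactly, a trust region method computes the step from

\[
  \min_{s} \; \|F(x_k) + J(x_k)\,s\|_2^2
  \quad \text{subject to} \quad \|s\|_2 \le \Delta_k ,
\]

so that even when J(x_k) is (nearly) rank deficient the step remains well defined and makes measurable progress on the feasibility measure ‖F‖.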
We present a general approach to error control and mesh adaptation for computing viscous flows by the Galerkin finite element method. A posteriori error estimates are derived for quantities of physical interest by duality arguments. In these estimates, local cell residuals are multiplied by influence factors which are obtained from the numerical solution of a global dual problem. This provides the basis of a feedback algorithm by which economical meshes, tailored to the particular needs of the computation, can be constructed. The performance of this method is illustrated by several flow examples.
Introduction
Approximating partial differential equations by discretization, as in the finite element method, may be considered a model reduction in which a conceptually infinite dimensional model is approximated by a finite dimensional one. As the result of the computation, we obtain an approximation to the desired output quantity of the simulation and, in addition, certain accuracy indicators such as cell residuals. Controlling the error in such an approximation of a continuous model requires determining the influence factors of the local error indicators on the target quantity. Such a sensitivity analysis with respect to local perturbations of the model is common in optimal control theory and introduces the concept of a dual (or adjoint) problem.
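Schematically (the generic form of such a duality-based estimate; the precise residuals and weights depend on the equation and discretization at hand), for a target quantity J one obtains

\[
  |J(u) - J(u_h)| \;\lesssim\; \sum_{K} \rho_K(u_h)\, \omega_K(z),
\]

where the sum runs over the mesh cells K, ρ_K(u_h) are the local cell residuals, and the influence factors ω_K(z) are computed from the solution z of the global dual problem associated with J.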
There has been increasing interest in studying computational aspects of high dimensional problems. Such problems are defined on spaces of functions of d variables and occur in many applications, where d can be in the hundreds or even thousands. Examples include:
• High dimensional integrals or path integrals with respect to the Wiener measure. These are important for many applications, in particular in physics and in finance. High dimensional integrals also occur when we want to compute certain parameters of stochastic processes. Moreover, path integrals arise as solutions of partial differential equations given, for example, by the Feynman–Kac formula. See [25, 40, 66, 82, 85, 91]; a small numerical sketch follows this list.
• Global optimization where we need to compute the (global) minimum of a function of d variables. This occurs in many applications, for example, in pattern recognition and in image processing, see [97], or in the modelling and prediction of the geometry of proteins, see [45]. Simulated annealing strategies and genetic algorithms are often used, as well as smoothing techniques and other stochastic algorithms, see [10] and [74]. Some error bounds for deterministic and stochastic algorithms can be found in [42, 43, 44, 48, 53].
• The Schrödinger equation for m > 1 particles in ℝ³ is a d = 3m-dimensional problem.
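As a small numerical sketch for the first item (illustrative only, not taken from the survey): plain Monte Carlo integration over the unit cube [0, 1]^d has root-mean-square error of order n^{−1/2} regardless of d, which is why sampling methods are a natural baseline for high dimensional integration.

    import numpy as np

    def mc_integrate(f, d, n, seed=0):
        # Estimate the integral of f over [0, 1]^d by averaging n uniform samples.
        rng = np.random.default_rng(seed)
        x = rng.random((n, d))
        return f(x).mean()

    # Example integrand in d = 100 dimensions; each factor integrates to 1,
    # so the exact value of the integral is 1.
    d = 100
    f = lambda x: np.prod(1.0 + (x - 0.5) / np.arange(1, d + 1), axis=1)
    print(mc_integrate(f, d, n=100_000))   # approximately 1.0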
The Society for the Foundations of Computational Mathematics supports fundamental research in a wide spectrum of computational mathematics and its application areas. As part of its endeavor to promote research in computational mathematics, it regularly organises conferences and workshops which bring together leading researchers in the diverse fields impinging on all aspects of computation. Major conferences have been held in Park City (1995), Rio de Janeiro (1997), and Oxford (1999).
The next major FoCM conference will take place at the Institute for Mathematics and its Applications (IMA) in Minneapolis in the summer of 2002. More information about FoCM can be obtained from its website at www.focm.net.
The conference in Oxford, England, on July 18–28, 1999, was attended by over 300 scientists. Workshops were held on fourteen subjects dealing with diverse research topics from computational mathematics. In addition, eighteen plenary lectures, concerned with various computational issues, were given by some of the world's foremost researchers. This volume presents thirteen papers from these plenary speakers. Some of these papers survey the state of the art in an important area of computational mathematics; others present new material. The range of topics, from complexity theory to the computation of partial differential equations, from optimization to computational geometry to stochastic systems, illustrates the wide sweep of contemporary computational mathematics and the intricate web of its interaction with pure mathematics and application areas.
Various issues are addressed related to the computation of minimizers for variational problems. Special attention is paid (i) to problems with singular minimizers, which natural numerical schemes may fail to detect, and the role of the choice of function space for such problems, and (ii) to problems for which there is no minimizer, which lead to difficult numerical questions such as the computation of microstructure for elastic materials that undergo phase transformations involving a change of shape.
Introduction
In this article I give a brief tour of some topics related to the computation of minimizers for integrals of the calculus of variations. In this I take the point of view not of a numerical analyst, which I am not, but of an applied mathematician for whom questions of computation have arisen not just because of the need to understand phenomena inaccessible to contemporary analysis, but also because they are naturally motivated by attempts to apply analysis to variational problems.
I will concentrate on two specific issues. The first is that minimizers of variational problems may have singularities, but natural numerical schemes may fail to detect them. Connected with this is the surprising Lavrentiev phenomenon, according to which minimizers in different function spaces may be different. The second is that minimizers may not exist, in which case the question naturally arises as to what the behaviour of numerical schemes designed to compute such minimizers will be.
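A classical illustration of the Lavrentiev phenomenon, recalled here for concreteness (Manià's example; it is not necessarily the example treated in this article), is the problem of minimizing

\[
  I(u) = \int_0^1 \bigl(u(x)^3 - x\bigr)^2 \, u'(x)^6 \, dx ,
  \qquad u(0) = 0, \quad u(1) = 1 .
\]

The minimizer over the absolutely continuous functions is u(x) = x^{1/3}, with I(u) = 0, yet the infimum of I over Lipschitz functions satisfying the same boundary conditions is strictly positive: minimization in different function spaces genuinely gives different answers, and a scheme working with Lipschitz trial functions cannot see the true minimizer.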
This paper surveys the new, algorithmic theory of moving frames developed by the author and M. Fels. Applications in geometry, computer vision, classical invariant theory, and numerical analysis are indicated.
Introduction
The method of moving frames (“repères mobiles”) was forged by Élie Cartan, [13, 14], into a powerful and algorithmic tool for studying the geometric properties of submanifolds and their invariants under the action of a transformation group. However, Cartan's methods remained incompletely understood and the applications were exclusively concentrated in classical differential geometry; see [22, 23, 26]. Three years ago, [20, 21], Mark Fels and I formulated a new approach to the moving frame theory that can be systematically applied to general transformation groups. The key idea is to formulate a moving frame as an equivariant map to the transformation group. All classical moving frames can be reinterpreted in this manner, but the new approach applies in far wider generality. Cartan's normalization procedure for the explicit construction of a moving frame relies on the choice of a cross-section to the group orbits. Building on these two simple ideas, one may algorithmically construct moving frames and complete systems of invariants for completely general group actions.
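A toy instance of the normalization construction may help fix ideas (this example is mine, chosen for its simplicity, and does not appear in this form in the text). Let SO(2) act on ℝ² \ {0} by rotations; the orbits are circles centered at the origin, and the positive x-axis is a cross-section. The moving frame sends each point to the rotation that moves it onto the cross-section:

\[
  \rho(x, y) = R_{-\theta}, \qquad \theta = \operatorname{atan2}(y, x),
  \qquad R_{-\theta} \cdot (x, y) = \bigl(\sqrt{x^2 + y^2},\; 0\bigr).
\]

The map ρ is equivariant in the sense ρ(g · z) = ρ(z) g^{−1}, and the normalized coordinate r = √(x² + y²) is the fundamental invariant of the action.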
In a number of applications in image processing, computer vision, and computer graphics, the data of interest is defined on non-flat manifolds and maps onto non-flat manifolds. A classical and important example is directional data, including gradient directions, optical flow directions, surface normals, principal directions, and chroma. Frequently, this data is available in a noisy fashion, and there is a need for noise removal. In addition, it is often desired to obtain a multiscale-type representation of the directional data, similar to those representations obtained for gray-level images, [2, 31, 36, 37, 55]. Addressing the processing of non-flat data is the goal of this chapter. We will illustrate the basic ideas with directional data and probability distributions. In the first case, the data maps onto a hypersphere, while in the second it maps onto a semi-hyperplane.
Image data, as well as directions and other sources of information, are not always defined on the ℝ² plane or in ℝ³ space. They can be, for example, defined over a surface embedded in ℝ³. It is important then to define basic image processing operations for general data defined on general (not necessarily flat) manifolds. In other words, we want to deal with maps between two general manifolds, and be able, for example, to diffuse them isotropically and anisotropically with the goal of noise removal. This will make it possible, for example, to denoise data defined on 3D surfaces.
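As a naive baseline for intuition (a minimal sketch, and not the scheme developed in this chapter), a field of unit direction vectors can be diffused by repeatedly applying a discrete Laplacian and projecting back onto the sphere so the data stays directional:

    import numpy as np

    def diffuse_directions(v, steps=10, dt=0.2):
        # v: array of shape (h, w, 3) holding one unit direction per pixel.
        for _ in range(steps):
            # 4-neighbor discrete Laplacian via periodic shifts
            # (periodic boundaries are an illustrative simplification).
            lap = (np.roll(v, 1, axis=0) + np.roll(v, -1, axis=0) +
                   np.roll(v, 1, axis=1) + np.roll(v, -1, axis=1) - 4.0 * v)
            v = v + dt * lap
            # Project back onto the unit sphere after each diffusion step.
            v = v / np.linalg.norm(v, axis=2, keepdims=True)
        return v

Replacing the uniform neighbor averaging by edge-stopping weights would give an anisotropic variant that smooths within regions while preserving sharp transitions in direction.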
The study of numerical methods for initial value problems by considering their approximation properties from a dynamical systems viewpoint is now a well-established field; a substantial body of knowledge, developed over the past two decades, can be found in the literature. Nonetheless many open questions remain concerning the meaning of long-time simulations performed by approximating dynamical systems. In recent years various attempts to analyse the statistical content of these long-time simulations have emerged, and the purpose of this article is to review some of that work. The subject area is far from complete; nonetheless a certain unity can be seen in what has been achieved to date and it is therefore of value to give an overview of the field.
Some mathematical background concerning the propagation of probability measures by discrete and continuous time dynamical systems or Markov chains will be given. In particular the Frobenius-Perron and Fokker-Planck operators will be described. Using the notion of ergodicity two different approaches, direct and indirect, will be outlined. The majority of the review is concerned with indirect methods, where the initial value problem is simulated from a single initial condition and the statistical content of this trajectory studied. Three classes of problems will be studied: deterministic problems in fixed finite dimension, stochastic problems in fixed finite dimension, and deterministic problems with random data in dimension n → ∞; in the latter case ideas from statistical mechanics can be exploited to analyse or interpret numerical schemes.
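The indirect approach rests on ergodic time averages: for an ergodic map with invariant measure μ, a single long trajectory recovers expectations, schematically (a standard identity, recalled for orientation)

\[
  \lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} \varphi(x_n)
  = \int \varphi \, d\mu
  \qquad \text{for } \mu\text{-almost every } x_0 ,
\]

and the question for numerical analysis is to what extent the analogous average along the approximating trajectory, governed by the invariant measure of the numerical method, reproduces the right-hand side.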
As described in the previous chapter, the term reactive flow applies to a very broad range of physical phenomena. In some cases the equations are not even rigorously known. In this chapter, we first consider the equations of gas-phase reactive flows, which are generally accepted as valid in the continuum regime. This set of time-dependent, coupled, partial differential equations governs the conservation of mass and species density, momentum, and energy. The equations describe the convective motion of the fluid, reactions among the constituent species that may change the molecular composition, and other transport processes such as thermal conduction, molecular diffusion, and radiation transport. Many different situations are described by these equations when they are combined with various initial and boundary conditions. In a later section of this chapter, we discuss interactions among these processes and generalizations of this set of equations to describe multiphase reactive flows.
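For orientation (this is the generic conservation-law form shared by the equations just listed; the full system, with species, momentum, and energy, is written out in the chapter), each conserved density q evolves as

\[
  \frac{\partial q}{\partial t} + \nabla \cdot \mathbf{F}(q) = S(q),
\]

where F collects the convective and transport fluxes and S the local sources, such as chemical production terms. Mass conservation is the simplest instance, with q = ρ, F = ρv, and S = 0.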
The material presented in this chapter is somewhat condensed, and is not meant to give an in-depth explanation to those unfamiliar with the individual topics. The purpose is to present the reactive-flow equations, to establish the notation used throughout this book, and then to relate each term in the equations to physical processes important in reactive flows. The chapter can then be used as a reference for the more detailed discussions of numerical methods in subsequent chapters. It would be reasonable to skim this chapter the first time through the book, and then to refer back to it as needed.