We analyze the properties that optimization algorithms must possess in order to prevent convergence to non-stationary points for the merit function. We show that demanding the exact satisfaction of constraint linearizations results in difficulties in a wide range of optimization algorithms. Feasibility control is a mechanism that prevents convergence to spurious solutions by ensuring that sufficient progress towards feasibility is made, even in the presence of certain rank deficiencies. The concept of feasibility control is studied in this paper in the context of Newton methods for nonlinear systems of equations and equality constrained optimization, as well as in interior methods for nonlinear programming.
Introduction
We survey some recent developments in nonlinear optimization, paying particular attention to global convergence properties. A common thread in our review is the concept of “feasibility control”, which is a name we give to mechanisms that regulate progress toward feasibility.
An example of lack of feasibility control occurs in line search Newton methods for solving systems of nonlinear equations. It has been known since the 1970s (see Powell [24]) that these methods can converge to undesirable points. The difficulties are caused by the requirement that each step satisfy a linearization of the equations, and cannot be overcome simply by performing a line search. The need for more robust algorithms has been one of the main driving forces behind the development of trust region methods. Feasibility control is provided in trust region methods by reformulating the step computation as an optimization problem with a restriction on the length of the step.
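To make the contrast concrete, here is a minimal sketch of a trust-region step for a nonlinear system F(x) = 0, in the Levenberg-Marquardt parameterization (all names and constants are illustrative; this is not the algorithm of any particular paper discussed here):

```python
import numpy as np

def trust_region_newton(F, J, x, delta=1.0, tol=1e-10, max_iter=100):
    """Trust-region method for F(x) = 0, sketched.

    Instead of demanding exact satisfaction of the linearization
    J(x) s = -F(x), each step minimizes ||J(x) s + F(x)|| subject to
    ||s|| <= delta; the bound on the step is what supplies feasibility
    control when J(x) is (nearly) rank deficient.
    """
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        if np.linalg.norm(Fx) < tol:
            break
        # Levenberg-Marquardt: increase lam until the regularized step
        # fits inside the trust region.
        lam = 0.0
        while True:
            s = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ Fx)
            if np.linalg.norm(s) <= delta:
                break
            lam = max(2.0 * lam, 1e-4)
        # Accept or reject by the merit function ||F||, adjusting delta.
        if np.linalg.norm(F(x + s)) < np.linalg.norm(Fx):
            x, delta = x + s, 2.0 * delta
        else:
            delta = 0.25 * delta
    return x

# Example: intersect the unit circle with the line x0 = x1.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(trust_region_newton(F, J, np.array([2.0, 0.5])))
```

A line search along the exact Newton direction offers no analogous control: when J(x) is nearly singular the Newton direction itself is already poor, and shortening the step along it does not help.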
This chapter gives a short, up-to-date survey of some recent developments in research on radial basis functions. Among other recent achievements, we consider results on the convergence rates of interpolation with radial basis functions, as well as contributions on approximation on spheres and on the computation of interpolants with Krylov space methods.
Introduction
Research into radial basis functions is an immensely active and fruitful field at present, and it is important and worthwhile to stand back and summarize the newest developments from time to time. In brief, this is the goal of this chapter, although we will by necessity be far from comprehensive. One of the most important aspects from the perspective of approximation theorists is the accuracy of approximation with radial basis functions when the centers are scattered. This is a suitable subject with which to begin this review, as the whole development of radial basis functions was initiated by Duchon's contributions (1976, 1978, 1979) on exactly this question in a special context, in particular for thin-plate spline approximation in ℝ².
Before we begin, we recall what is understood by approximation and interpolation by radial basis functions. We always start with a univariate continuous function – the radial function – φ that is radialized by composition with the Euclidean norm on ℝⁿ, or a suitable replacement thereof when we are working on an (n − 1)-sphere in n-dimensional Euclidean space.
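As a concrete illustration of this setup (a minimal sketch: the Gaussian radial function, the centers, and the data below are invented for the example; conditionally positive definite choices such as the thin-plate spline would additionally require a low-degree polynomial term):

```python
import numpy as np

def rbf_interpolant(centers, values, phi):
    """Build s(x) = sum_j lam_j * phi(||x - x_j||), with the coefficients
    lam fixed by the interpolation conditions s(x_i) = f_i."""
    diffs = centers[:, None, :] - centers[None, :, :]
    A = phi(np.linalg.norm(diffs, axis=-1))   # A_ij = phi(||x_i - x_j||)
    lam = np.linalg.solve(A, values)
    def s(x):
        r = np.linalg.norm(centers - x, axis=-1)
        return lam @ phi(r)
    return s

phi = lambda r: np.exp(-r**2)                   # Gaussian radial function
rng = np.random.default_rng(0)
centers = rng.random((50, 2))                   # scattered centers in [0,1]^2
values = np.sin(centers[:, 0]) * centers[:, 1]  # toy data to interpolate
s = rbf_interpolant(centers, values, phi)
print(s(np.array([0.5, 0.5])))
```

For the Gaussian, the interpolation matrix is positive definite whenever the centers are distinct, so the linear system is uniquely solvable.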
Geophysical or meteorological data collected over the surface of the earth via satellites or ground stations will invariably come from scattered sites. There are two extremes in the problems one faces in handling such data. The first is representing sparse data by fitting a surface to it. This arises in geodesy in conjunction with measurements of the gravitational field from satellites, or meteorological measurements – temperature, for example – made at ground stations. The second is analyzing dense data to extract features of interest. For example, one may wish to process satellite images for mapping purposes. Between these two extremes there are many other problems. We will review various aspects of fitting surfaces to scattered data, addressing problems involving interpolation, order of approximation, and quadrature. Analyzing data is a more recent problem that is currently being addressed via various spherical wavelet schemes, which we will review, along with multilevel schemes. We close by discussing quadrature methods, which arise in many of the wavelet schemes as well as in some interpolation methods.
Introduction
Overview
In this survey, we discuss recent progress in the representation and analysis of scattered data on spheres. As is the case with ℝˢ, many practical problems have stimulated interest in this direction. More and more data is taken from satellites each year. This, in turn, requires, for example, improved image processing techniques for fault detection and for the generation of maps.
We present a general approach to error control and mesh adaptation for computing viscous flows by the Galerkin finite element method. A posteriori error estimates are derived for quantities of physical interest by duality arguments. In these estimates, local cell residuals are multiplied by influence factors obtained from the numerical solution of a global dual problem. This provides the basis of a feed-back algorithm for constructing economical meshes tailored to the particular needs of the computation. The performance of this method is illustrated by several flow examples.
Introduction
Approximating partial differential equations by discretization, as in the finite element method, may be considered as a model reduction in which a conceptually infinite-dimensional model is approximated by a finite-dimensional one. As the result of the computation, we obtain an approximation to the desired output quantity of the simulation and, in addition, certain accuracy indicators such as cell residuals. Controlling the error in such an approximation of a continuous model requires determining the influence factors of the local error indicators on the target quantity. Such a sensitivity analysis with respect to local perturbations of the model is common in optimal control theory and introduces the concept of a dual (or adjoint) problem.
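Schematically, the estimate described above takes the following dual-weighted form (generic notation, not tied to a particular discretization):

```latex
% Error in the target quantity J: cell residuals rho_K of the computed
% solution u_h, weighted by influence factors omega_K obtained from an
% approximate solution z_h of a global dual (adjoint) problem driven by J.
\[
  J(u) - J(u_h) \;\approx\; \sum_{K \in \mathcal{T}_h}
      \rho_K(u_h)\,\omega_K(z_h)
\]
% Cells where the product rho_K * omega_K is large are refined, which
% yields the feed-back mesh adaptation loop described above.
```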
There has been increasing interest in studying computational aspects of high-dimensional problems. Such problems are defined on spaces of functions of d variables and occur in many applications, where d can be in the hundreds or even thousands. Examples include:
• High dimensional integrals or path integrals with respect to the Wiener measure. These are important for many applications, in particular in physics and in finance. High dimensional integrals also occur when we want to compute certain parameters of stochastic processes. Moreover, path integrals arise as solutions of partial differential equations given, for example, by the Feynman–Kac formula. See [25, 40, 66, 82, 85, 91]. (A minimal Monte Carlo sketch follows this list.)
• Global optimization where we need to compute the (global) minimum of a function of d variables. This occurs in many applications, for example, in pattern recognition and in image processing, see [97], or in the modelling and prediction of the geometry of proteins, see [45]. Simulated annealing strategies and genetic algorithms are often used, as well as smoothing techniques and other stochastic algorithms, see [10] and [74]. Some error bounds for deterministic and stochastic algorithms can be found in [42, 43, 44, 48, 53].
• The Schrödinger equation for m > 1 particles in ℝ³ is a d = 3m-dimensional problem.
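As a minimal illustration of the first item above, plain Monte Carlo integrates over [0,1]^d with statistical error of order n^{-1/2} independent of d; the integrand below is a toy example with a known integral:

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_integral(f, d, n=10_000):
    """Estimate the integral of f over the unit cube [0,1]^d by plain
    Monte Carlo; the error decays like n**-0.5 for every d, the baseline
    that tractability studies try to improve on."""
    x = rng.random((n, d))        # n i.i.d. uniform points in [0,1]^d
    return f(x).mean()

# The mean of the coordinates has integral 1/2 over the cube in any
# dimension, so the estimate should be close to 0.5 even for d = 1000.
print(monte_carlo_integral(lambda x: x.mean(axis=1), 1000))
```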
We investigate mutually orthogonal spline wavelet spaces on nonuniform partitions of a bounded interval, addressing the existence, uniqueness and construction of bases of minimally supported spline wavelets. The relevant algorithms for decomposition and reconstruction are considered as well as some stability-related questions. In addition, we briefly review the bivariate case for tensor products and arbitrary triangulations. We conclude this chapter with a discussion of some special cases.
Introduction
Splines have become the standard mathematical tool for representing smooth shapes in computer graphics and geometric modeling. Wavelets have been introduced more recently, but are by now well established both in mathematics and in applied sciences like signal processing and numerical analysis. The two concepts are closely related, as splines provide some of the most important examples of wavelets. Although there is an extensive literature on cardinal spline wavelets (spline wavelets with uniform knot spacing), see Chui (1992), relatively little has been published about spline wavelets on arbitrary, nonuniform knots, which form the subject of this chapter. These kinds of wavelets, however, are needed for performing operations like decomposition, reconstruction and thresholding on splines given on nonuniform knot vectors, which typically occur in practical applications.
The flexibility of splines in modeling is due to their good approximation properties, useful geometric interpretations of the B-spline coefficients, and simple algorithms for adding and removing knots. Full advantage of these capabilities can only be taken on general nonuniform knot vectors, where multiple knots are also allowed.
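For concreteness, the following sketch (illustrative knots and coefficients, not code from the chapter) evaluates a cubic spline on a nonuniform knot vector with a repeated interior knot, using de Boor's algorithm:

```python
import numpy as np

def de_boor(x, t, c, p):
    """Evaluate a degree-p spline with knot vector t and B-spline
    coefficients c at the point x, by de Boor's algorithm.  Repeated
    knots are allowed, which is exactly the generality needed above."""
    k = int(np.searchsorted(t, x, side='right')) - 1  # span: t[k] <= x < t[k+1]
    d = [c[j + k - p] for j in range(p + 1)]          # local coefficients
    for r in range(1, p + 1):                         # triangular scheme of
        for j in range(p, r - 1, -1):                 # convex combinations
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Cubic spline (p = 3) on a nonuniform knot vector with a double
# interior knot at 1; len(c) must equal len(t) - p - 1.
t = np.array([0, 0, 0, 0, 1, 1, 2.5, 4, 4, 4, 4], dtype=float)
c = np.array([0.0, 1.0, 2.0, 1.5, 0.5, 1.0, 0.0])
print(de_boor(1.7, t, c, p=3))
```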
The Society for the Foundations of Computational Mathematics supports fundamental research in a wide spectrum of computational mathematics and its application areas. As part of its endeavor to promote research in computational mathematics, it regularly organizes conferences and workshops which bring together leading researchers in the diverse fields impinging on all aspects of computation. Major conferences have been held in Park City (1995), Rio de Janeiro (1997), and Oxford (1999).
The next major FoCM conference will take place at the Institute for Mathematics and its Applications (IMA) in Minneapolis in the summer of 2002. More information about FoCM can be obtained from its website at www.focm.net.
The conference in Oxford, England, on July 18–28, 1999, was attended by over 300 scientists. Workshops were held on fourteen subjects covering diverse research topics in computational mathematics. In addition, eighteen plenary lectures, concerned with various computational issues, were given by some of the world's foremost researchers. This volume presents thirteen papers by these plenary speakers. Some of the papers survey the state of the art in an important area of computational mathematics; others present new material. The range of topics, from complexity theory to the computation of partial differential equations, and from optimization to computational geometry to stochastic systems, illustrates the wide sweep of contemporary computational mathematics and the intricate web of its interaction with pure mathematics and application areas.
Various issues related to the computation of minimizers for variational problems are addressed. Special attention is paid (i) to problems with singular minimizers, which natural numerical schemes may fail to detect, and to the role of the choice of function space for such problems, and (ii) to problems for which there is no minimizer, which lead to difficult numerical questions such as the computation of microstructure for elastic materials that undergo phase transformations involving a change of shape.
Introduction
In this article I give a brief tour of some topics related to the computation of minimizers for integrals of the calculus of variations. In this I take the point of view not of a numerical analyst, which I am not, but of an applied mathematician for whom questions of computation have arisen not just because of the need to understand phenomena inaccessible to contemporary analysis, but also because they are naturally motivated by attempts to apply analysis to variational problems.
I will concentrate on two specific issues. The first is that minimizers of variational problems may have singularities, but natural numerical schemes may fail to detect them. Connected with this is the surprising Lavrentiev phenomenon, according to which minimizers in different function spaces may be different. The second is that minimizers may not exist, in which case the question naturally arises as to what the behaviour of numerical schemes designed to compute such minimizers will be.
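The standard illustration of the Lavrentiev phenomenon, usually attributed to Manià, exhibits both effects at once (recalled here for orientation; the article's own examples may differ):

```latex
% Mania's functional, with boundary conditions u(0) = 0 and u(1) = 1:
\[
  I(u) \;=\; \int_0^1 \bigl(u(x)^3 - x\bigr)^2 \, u'(x)^6 \, dx .
\]
% The minimizer over W^{1,1}(0,1) is the singular function
% u(x) = x^{1/3}, with I(u) = 0, yet the infimum over the smaller
% space W^{1,\infty}(0,1) is strictly positive:
\[
  \min_{u \in W^{1,1}} I(u) \;=\; 0 \;<\; \inf_{u \in W^{1,\infty}} I(u).
\]
% A conforming scheme whose trial functions are Lipschitz (for
% instance, piecewise linear finite elements) therefore cannot
% detect the singular minimizer.
```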
This paper surveys the new, algorithmic theory of moving frames developed by the author and M. Fels. Applications in geometry, computer vision, classical invariant theory, and numerical analysis are indicated.
Introduction
The method of moving frames (“repères mobiles”) was forged by Élie Cartan, [13, 14], into a powerful and algorithmic tool for studying the geometric properties of submanifolds and their invariants under the action of a transformation group. However, Cartan's methods remained incompletely understood and the applications were exclusively concentrated in classical differential geometry; see [22, 23, 26]. Three years ago, [20, 21], Mark Fels and I formulated a new approach to the moving frame theory that can be systematically applied to general transformation groups. The key idea is to formulate a moving frame as an equivariant map to the transformation group. All classical moving frames can be reinterpreted in this manner, but the new approach applies in far wider generality. Cartan's normalization procedure for the explicit construction of a moving frame relies on the choice of a cross-section to the group orbits. Building on these two simple ideas, one may algorithmically construct moving frames and complete systems of invariants for completely general group actions.
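To fix ideas, here is the classical instance of the normalization procedure (a standard textbook example, not a construction specific to this paper):

```latex
% The special Euclidean group SE(2) acting on plane curves y = u(x).
% Choosing the cross-section to the group orbits given by
\[
  x = 0, \qquad u = 0, \qquad u_x = 0,
\]
% normalizes the two translation parameters and the rotation angle,
% which defines the moving frame; invariantization of u_{xx} then
% produces the familiar Euclidean curvature
\[
  \kappa \;=\; \frac{u_{xx}}{\bigl(1 + u_x^2\bigr)^{3/2}},
\]
% the generating differential invariant of this action.
```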
In a number of applications in image processing, computer vision, and computer graphics, the data of interest is defined on non-flat manifolds and maps onto non-flat manifolds. A classical and important example is directional data, including gradient directions, optical flow directions, surface normals, principal directions, and chroma. Frequently, this data is available in a noisy fashion, and there is a need for noise removal. In addition, it is often desired to obtain a multiscale-type representation of the directional data, similar to those representations obtained for gray-level images, [2, 31, 36, 37, 55]. Addressing the processing of non-flat data is the goal of this chapter. We will illustrate the basic ideas with directional data and probability distributions. In the first case, the data maps onto a hypersphere, while in the second it maps onto a semi-hyperplane.
Image data, as well as directions and other sources of information, are not always defined on the plane ℝ² or in the space ℝ³. They can be, for example, defined over a surface embedded in ℝ³. It is important then to define basic image processing operations for general data defined on general (not necessarily flat) manifolds. In other words, we want to deal with maps between two general manifolds and be able, for example, to diffuse them isotropically and anisotropically with the goal of noise removal. This will make it possible, for example, to denoise data defined on 3D surfaces.
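As a toy illustration of the idea (a crude explicit sketch, not the chapter's scheme: it alternates an unconstrained heat-flow update with reprojection onto the sphere, and uses periodic boundaries for brevity):

```python
import numpy as np

def diffuse_directions(V, steps=50, dt=0.2):
    """Isotropic smoothing of a field V of unit vectors, shape (H, W, 3).

    Each step applies a 5-point Laplacian (periodic boundaries for
    simplicity) and renormalizes, so the data stays on the sphere S^2;
    this mimics a harmonic-map heat flow for sphere-valued data.
    """
    for _ in range(steps):
        lap = (np.roll(V, 1, axis=0) + np.roll(V, -1, axis=0)
               + np.roll(V, 1, axis=1) + np.roll(V, -1, axis=1) - 4.0 * V)
        V = V + dt * lap
        V = V / np.linalg.norm(V, axis=-1, keepdims=True)  # back onto S^2
    return V

# Noisy unit normals: a constant direction plus Gaussian noise,
# renormalized to unit length, then smoothed on the manifold.
rng = np.random.default_rng(1)
V = np.tile([0.0, 0.0, 1.0], (64, 64, 1)) + 0.3 * rng.standard_normal((64, 64, 3))
V /= np.linalg.norm(V, axis=-1, keepdims=True)
V_smooth = diffuse_directions(V)
```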
The study of numerical methods for initial value problems by considering their approximation properties from a dynamical systems viewpoint is now a well-established field; a substantial body of knowledge, developed over the past two decades, can be found in the literature. Nonetheless many open questions remain concerning the meaning of long-time simulations performed by approximating dynamical systems. In recent years various attempts to analyse the statistical content of these long-time simulations have emerged, and the purpose of this article is to review some of that work. The subject area is far from complete; nonetheless a certain unity can be seen in what has been achieved to date and it is therefore of value to give an overview of the field.
Some mathematical background concerning the propagation of probability measures by discrete and continuous time dynamical systems or Markov chains will be given. In particular the Frobenius-Perron and Fokker-Planck operators will be described. Using the notion of ergodicity two different approaches, direct and indirect, will be outlined. The majority of the review is concerned with indirect methods, where the initial value problem is simulated from a single initial condition and the statistical content of this trajectory studied. Three classes of problems will be studied: deterministic problems in fixed finite dimension, stochastic problems in fixed finite dimension, and deterministic problems with random data in dimension n → ∞; in the latter case ideas from statistical mechanics can be exploited to analyse or interpret numerical schemes.
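A minimal instance of the indirect approach described above, for an Ornstein-Uhlenbeck toy problem whose invariant measure is known (the problem and all parameters are chosen for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def time_average(drift, sigma, phi, x0, dt=1e-2, n_steps=500_000):
    """Integrate one long Euler-Maruyama trajectory of
    dX = drift(X) dt + sigma dW and return the time average of the
    observable phi along it; for an ergodic problem this approximates
    the average of phi under the invariant measure, up to
    discretization and finite-time errors."""
    x, total, sqrt_dt = x0, 0.0, np.sqrt(dt)
    for _ in range(n_steps):
        x = x + drift(x) * dt + sigma * sqrt_dt * rng.standard_normal()
        total += phi(x)
    return total / n_steps

# Ornstein-Uhlenbeck: dX = -X dt + dW has invariant measure N(0, 1/2),
# so the ergodic average of x^2 is 0.5 (up to discretization bias).
print(time_average(drift=lambda x: -x, sigma=1.0,
                   phi=lambda x: x**2, x0=0.0))
```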