The numerical treatment of ordinary differential equations has continued to be a lively area of numerical analysis for more than a century, with interesting applications in various fields and rich theory. There are three main developments in the design of numerical techniques and in the analysis of the algorithms:
Non-stiff differential equations. In the 19th century (Adams, Bashforth, and later Runge, Heun and Kutta), numerical integrators were designed that are efficient (high order) and easy to apply (explicit) in practical situations.
Stiff differential equations. In the middle of the 20th century it became apparent that the methods developed earlier are impractical for a certain class of differential equations (stiff problems) due to stability restrictions. New integrators (typically implicit) were needed, as well as new theories for a better understanding of the algorithms.
Geometric numerical integration. In long-time simulations of Hamiltonian systems (molecular dynamics, astronomy) neither classical explicit methods nor implicit integrators for stiff problems give satisfactory results. In the last few decades, special numerical methods have been designed that preserve the geometric structure of the exact flow and thus have an improved long-time behaviour.
The basic developments (algorithmic and theoretical) of these epochs are documented in the monographs [HNW93], [HW96], and [HLW06]. Within geometric numerical integration we can also distinguish between non-stiff and stiff situations. Since the main emphasis here is on conservative Hamiltonian systems, the term “stiff” has to be interpreted as “highly oscillatory”.
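To make the contrast between the classical and the geometric viewpoint concrete, the following minimal sketch (in Python, not taken from the cited monographs) integrates the pendulum Hamiltonian H(p, q) = p^2/2 - cos q with the explicit Euler method and with the symplectic Euler method; over long times the first drifts in energy while the second keeps the energy error bounded.

# Minimal illustration (not from the cited monographs): explicit Euler versus
# symplectic Euler for the pendulum Hamiltonian H(p, q) = p**2/2 - cos(q).
import math

def explicit_euler(p, q, h):
    # Both updates use the old values; the energy drifts over long times.
    return p - h * math.sin(q), q + h * p

def symplectic_euler(p, q, h):
    # Update p first, then use the new p for q; the map is symplectic and
    # the energy error stays bounded over very long times.
    p_new = p - h * math.sin(q)
    return p_new, q + h * p_new

def energy(p, q):
    return 0.5 * p * p - math.cos(q)

h, steps = 0.05, 20000
p0, q0 = 0.0, 1.5
pe, qe = p0, q0
ps, qs = p0, q0
for _ in range(steps):
    pe, qe = explicit_euler(pe, qe, h)
    ps, qs = symplectic_euler(ps, qs, h)

print("energy drift, explicit Euler  :", energy(pe, qe) - energy(p0, q0))
print("energy drift, symplectic Euler:", energy(ps, qs) - energy(p0, q0))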
In this paper I present a history of tractability of continuous problems, which has its beginning in the successful numerical tests for high-dimensional integration of finance problems. Tractability results will be illustrated for two multivariate problems, integration and linear tensor product problems, in the worst case setting. My talk at FoCM'08 in Hong Kong and this paper are based on the book Tractability of Multivariate Problems, written jointly with Erich Novak. The first volume of our book has recently been published by the European Mathematical Society.
Introduction
Many people have recently become interested in studying the tractability of continuous problems. This area of research addresses the computational complexity of multivariate problems defined on spaces of functions of d variables, where d can be in the hundreds or thousands; in fact, d can even be arbitrarily large. Such problems occur in numerous applications including physics, chemistry, finance, economics, and the computational sciences.
As with all problems arising in information-based complexity, we want to solve multivariate problems to within ε, using algorithms that use finitely many function values or values of some linear functionals. Let n(ε, d) be the minimal number of function values or linear functionals needed to compute the solution of the d-variate problem to within ε.
For many multivariate problems defined over standard spaces of functions, n(ε, d) is exponentially large in d.
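In rough terms (the book's precise definitions distinguish absolute and normalized error criteria, which are suppressed here), the two extremes can be stated as follows: a problem is polynomially tractable if n(ε, d) is bounded polynomially in d and 1/ε, and it suffers from the curse of dimensionality if n(ε, d) grows exponentially in d.

% Rough statements; the precise definitions in the book distinguish
% absolute and normalized error criteria.
\[
  \text{polynomial tractability:}\qquad
  n(\varepsilon, d) \;\le\; C\, d^{\,q}\, \varepsilon^{-p}
  \quad\text{for all } d \in \mathbb{N},\ \varepsilon \in (0,1),
\]
\[
  \text{curse of dimensionality:}\qquad
  n(\varepsilon, d) \;\ge\; c\,(1+\gamma)^{d}
  \quad\text{for some } c, \gamma > 0 \text{ and infinitely many } d .
\]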
A new method for solving boundary value problems has recently been introduced by the first author. Although this method was first developed for non-linear integrable PDEs (using the crucial notion of a Lax pair), it has also given rise to new analytical and numerical techniques for linear PDEs. Here we review the application of the new method to linear elliptic PDEs, using the modified Helmholtz equation as an illustrative example.
Introduction
Almost forty years ago an ingenious new method was discovered for the solution of the initial value problem of the Korteweg–de Vries (KdV) equation [GGKM67]. This new method, which was later called the inverse scattering transform (IST) method, was based on the mysterious fact that the KdV equation is equivalent to two linear eigenvalue equations called a Lax pair (in honor of Peter Lax [Lax68], who first understood that the IST method was a consequence of this remarkable property). The KdV equation belongs to a large class of nonlinear equations which are called integrable. Although there exist several types of integrable equations, including PDEs, ODEs, singular integro-differential equations, difference equations and cellular automata, the existence of an associated Lax pair is a common feature of all these equations.
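For orientation, in one common normalization (signs and scalings vary between references) the KdV equation and its Lax pair take the following form; the nonlinear equation is exactly the condition that the two linear problems are compatible, with the eigenvalue λ independent of time.

% One common normalization; signs and scalings differ between references.
\[
  u_t - 6\,u\,u_x + u_{xxx} = 0
  \qquad\Longleftrightarrow\qquad
  L_t = [B, L],
\]
\[
  L\psi = -\psi_{xx} + u\,\psi = \lambda\,\psi,
  \qquad
  \psi_t = B\psi,
\]
% where B is a third-order differential operator whose coefficients depend
% on u and u_x, and the eigenvalue \lambda is constant in time.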
After several attempts to extend the inverse scattering transform method from initial value problems to boundary value problems, a unified method for solving boundary value problems for linear and integrable nonlinear PDEs was introduced by the first author in [Fok97] and reviewed in [Fok08].
In this article we review recent progress on the design, analysis and implementation of numerical-asymptotic boundary integral methods for the computation of frequency-domain acoustic scattering in a homogeneous unbounded medium by a bounded obstacle. The main aim of the methods is to allow computation of scattering at arbitrarily high frequency with finite computational resources.
Introduction
There is huge mathematical and engineering interest in acoustic and electromagnetic wave scattering problems, driven by many applications such as modelling radar, sonar, acoustic noise barriers, atmospheric particle scattering, ultrasound and VLSI. For time-harmonic problems in infinite domains and media which are predominantly homogeneous, the boundary element method is a very popular solver, used in a number of large commercial codes, see e.g. [CSCVHH04]. In many practical applications the characteristic length scale L of the domain is large compared to the wavelength λ. Then the small dimensionless wavelength λ/L induces oscillatory solutions, and the application of conventional (piecewise polynomial) boundary elements to this multiscale problem yields full matrices of dimension at least N = (L/λ)^(d−1) (in ℝ^d). (Domain finite elements lead to sparse matrices but require even larger N.) Since this “loss of robustness” as L/λ → ∞ puts high-frequency problems outside the reach of many standard algorithms, much recent research has been devoted to finding more robust methods.
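The underlying model problem, stated here in a generic form rather than the article's exact formulation, is the exterior Helmholtz problem with the Sommerfeld radiation condition:

% Generic statement of the exterior scattering problem (not the article's
% exact formulation): \Omega is the bounded obstacle and k = 2\pi/\lambda.
\[
  \Delta u + k^2 u = 0
  \quad\text{in } \mathbb{R}^d \setminus \overline{\Omega},
  \qquad
  \frac{\partial u}{\partial r} - \mathrm{i} k u
  = o\!\left(r^{-(d-1)/2}\right)
  \quad\text{as } r = |x| \to \infty,
\]
% together with a Dirichlet or Neumann condition on the boundary of the
% obstacle; the high-frequency regime L \gg \lambda corresponds to kL large.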
The running time of many iterative numerical algorithms is dominated by the condition number of the input, a quantity measuring the sensitivity of the solution with regard to small perturbations of the input. Examples are iterative methods of linear algebra, interior-point methods of linear and convex optimization, as well as homotopy methods for solving systems of polynomial equations. Thus a probabilistic analysis of these algorithms can be reduced to the analysis of the distribution of the condition number for a random input. This approach was elaborated upon for average-case complexity by many researchers.
The goal of this survey is to explain how average-case analysis can be naturally refined in the sense of smoothed analysis. The latter concept, introduced by Spielman and Teng in 2001, aims at showing that for every real input (even an ill-posed one), it is unlikely that a slight random perturbation of that input will have a large running time. A recent general result of Bürgisser, Cucker and Lotz (2008) gives smoothed analysis estimates for a variety of applications. Its proof boils down to local bounds on the volume of tubes around a real algebraic hypersurface in a sphere. This is achieved by bounding the integrals of absolute curvature of smooth hypersurfaces in terms of their degree, via the principal kinematic formula of integral geometry and Bézout's theorem.
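Schematically (following Spielman and Teng; the scaling of the perturbation varies between references), the three modes of analysis of a cost measure T can be contrasted as follows, with smoothed analysis taking the worst case over inputs of the expected cost over small Gaussian perturbations of that input:

% Schematic comparison; normalizations of the Gaussian perturbation vary.
\[
  \max_{\bar a}\, T(\bar a),
  \qquad
  \mathbb{E}_{a \sim \mathcal{D}}\, T(a),
  \qquad
  \max_{\bar a}\ \mathbb{E}_{a \sim N(\bar a,\, \sigma^2 \|\bar a\|^2 \mathrm{Id})}\, T(a)
\]
% (worst case, average case, and smoothed analysis, respectively).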
Introduction
In computer science, the most common theoretical approach to understanding the behaviour of algorithms is worst-case analysis.
Subdivision schemes are efficient computational methods for the design, representation and approximation of 2D and 3D curves, and of surfaces of arbitrary topology in 3D. Such schemes generate curves/surfaces from discrete data by repeated refinements. While these methods are simple to implement, their analysis is rather complicated.
The first part of the paper presents the “classical” case of linear, stationary subdivision schemes refining control points. It reviews univariate schemes generating curves and their analysis. Several well-known schemes are discussed.
The second part of the paper presents three types of nonlinear subdivision schemes, which depend on the geometry of the data, and which are extensions of univariate linear schemes. The first two types are schemes refining control points and generating curves. The last is a scheme refining curves in a geometry-dependent way, and generating surfaces.
Introduction
Subdivision schemes are efficient computational tools for the generation of functions/curves/surfaces from discrete data by repeated refinements. They are used in geometric modeling for the design, representation and approximation of curves and of surfaces of arbitrary topology. A linear stationary scheme uses the same linear refinement rules at each location and at each refinement level. The refinement rules depend on a finite number of mask coefficients. Therefore, such schemes are easy to implement, but their analysis is rather complicated.
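As a concrete example of such a linear, stationary scheme, the following minimal sketch (in Python; Chaikin's corner-cutting rule, one of the well-known schemes alluded to above) refines a control polygon by replacing each edge with two points at parameters 1/4 and 3/4; the limit is the C^1 quadratic B-spline curve of the control points.

# Minimal sketch of a linear, stationary subdivision scheme: Chaikin's
# corner-cutting rule. Each refinement replaces the edge (P[i], P[i+1]) by
# the two points 3/4*P[i] + 1/4*P[i+1] and 1/4*P[i] + 3/4*P[i+1].

def chaikin_step(points):
    """One refinement of an open control polygon given as (x, y) pairs."""
    refined = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return refined

def subdivide(points, levels):
    for _ in range(levels):
        points = chaikin_step(points)
    return points

control = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(len(subdivide(control, 4)), "points after 4 refinement levels")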
The first subdivision schemes were devised by G. de Rham (1956) for the generation of functions with a first derivative everywhere and a second derivative nowhere.
In this paper we shall discuss plane wave methods for approximating the time-harmonic wave equation paying particular attention to the Ultra Weak Variational Formulation (UWVF). This method is essentially a Discontinuous Galerkin (DG) method in which the approximating functions are special traces of solutions of the underlying Helmholtz equation. We summarize the known error analysis for this method, as well as recent attempts to improve the conditioning of the resulting linear system. There are several refinement strategies that can be used to improve the accuracy of the computed solution: h-refinement in which the mesh is refined with a fixed number of basis functions per element, the p-version in which the number of approximating functions per element is increased with a fixed mesh, and a combined hp strategy. We shall provide some numerical results on h and p convergence showing how methods of this type can sometimes provide an efficient solver.
Introduction
Traditional methods for discretizing the Helmholtz equation based on using the equation directly suffer from the problem that they become rapidly more expensive as the wave number k (see Eq. (1.1)) increases. For example, finite element, finite difference, finite volume and discontinuous Galerkin methods all suffer from “pollution error” due to the fact that discrete waves have a slightly different wavelength than their exact counterparts (since this error in the wavelength depends on the wave number k, this leads to the “dispersion” of a wave).
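The distinguishing feature of plane wave methods such as the UWVF is that, on each element, the discrete space is spanned by functions that satisfy the homogeneous Helmholtz equation exactly. Schematically (a generic description, not the paper's notation), with p directions d_1, ..., d_p on the unit circle or sphere, the local basis is

% Generic plane wave basis on an element (not the paper's exact notation).
\[
  \phi_j(x) = \exp\!\bigl(\mathrm{i}\, k\, d_j \cdot x\bigr),
  \qquad |d_j| = 1, \quad j = 1, \dots, p.
\]
% Each \phi_j satisfies \Delta\phi_j + k^2\phi_j = 0 exactly, so the
% discretization error comes from coupling such waves across element
% interfaces rather than from resolving the oscillation itself; increasing
% p with a fixed mesh is the p-version mentioned above.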
The numerical approximation of high frequency wave propagation is important in many applications including seismic, acoustic, optical waves and microwaves. For these problems the solution becomes highly oscillatory relative to the overall size of the domain. Direct simulations using the standard wave equations are therefore very expensive, since a large number of grid points is required to resolve the wave oscillations. There are, however, computationally much less costly models that are good approximations of many wave equations at high frequencies. In this paper we review such models and related numerical methods that are used for simulations in applications. We focus on the infinite frequency approximation of geometrical optics and the finite frequency corrections given by the geometrical theory of diffraction. We also briefly discuss Gaussian beams.
Introduction
Simulation of high-frequency waves is a problem encountered in a great many engineering and science fields. Currently the interest is driven by new applications in wireless communication (cell phones, Bluetooth, WiFi) and photonics (optical fibers, filters, switches). Simulation is also used increasingly in more classical applications. Some examples in electromagnetism are antenna design, radar signature computation and base station coverage for cell phones. In acoustics, simulation is used for noise reduction, underwater communication and medical ultrasonography. Finding the location of an earthquake and exploring for oil are some applications of seismic wave simulation in geophysics. Nondestructive testing is another example where both electromagnetic and acoustic waves are simulated.
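The geometrical optics approximation mentioned above can be made concrete by the standard WKB ansatz; in a generic form (not a formulation specific to this paper), substituting u(x) ≈ a(x) e^{iωφ(x)} into the Helmholtz equation Δu + ω² c(x)⁻² u = 0 and collecting powers of ω gives the eikonal and transport equations:

% Standard geometrical optics equations (generic form), obtained from the
% WKB ansatz u \approx a\, e^{\mathrm{i}\omega\varphi}.
\[
  |\nabla \varphi(x)| = \frac{1}{c(x)}
  \quad\text{(eikonal equation)},
  \qquad
  2\,\nabla\varphi \cdot \nabla a + a\,\Delta\varphi = 0
  \quad\text{(transport equation)}.
\]
% The phase \varphi and amplitude a are non-oscillatory, so they can be
% computed on grids that do not resolve the wavelength.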
The Society for the Foundations of Computational Mathematics supports and promotes fundamental research in computational mathematics and its applications, interpreted in the broadest sense. It fosters interaction among mathematics, computer science and other areas of computational science through its conferences, workshops and publications. As part of this endeavour to promote research across a wide spectrum of subjects concerned with computation, the Society brings together leading researchers working in diverse fields. Major conferences of the Society have been held in Park City (1995), Rio de Janeiro (1997), Oxford (1999), Minneapolis (2002), Santander (2005), and Hong Kong (2008). The next conference is expected to be held in 2011. More information about FoCM is available at its website http://www.focm.net.
The conference in Hong Kong on June 16–26, 2008, was attended by several hundred scientists. FoCM conferences follow a set pattern: mornings are devoted to plenary talks, while in the afternoon the conference divides into a number of workshops, each devoted to a different theme within the broad theme of foundations of computational mathematics. This structure allows for a very high standard of presentation, while affording endless opportunities for cross-fertilization and communication across subject boundaries. Workshops at the Hong Kong conference were held in the following nineteen fields:
– Approximation theory
– Asymptotic analysis
– Computational algebraic geometry
– Computational dynamics
– Computational number theory
– Foundations of numerical PDEs
– Geometric integration and computational mechanics
When studying a dynamical system from a global viewpoint, there are only a few theoretical tools at our disposal. This is especially true if we want to describe all aspects of the dynamics with a reasonable amount of detail. A combination of analytic, symbolic and numerical tools, together with qualitative and topological considerations, can give a reasonably good description. Furthermore, it is possible to derive paradigmatic models which can be analysed theoretically and allow us to study pieces of the dynamics. It is also important to know the relevance of different phenomena. Are they confined to a narrow domain of the phase space or to a tiny region of the parameter space, or do they really play a significant role? Several theoretical/numerical tools are presented and applied to problems in celestial mechanics, to unfoldings of singularities, and to other problems. This is part of a project aimed towards understanding finite-dimensional systems in a global way. To avoid technicalities we shall assume that all maps and flows considered in this paper are analytic.
Introduction
Many properties are known for low-dimensional conservative systems, such as area-preserving or measure-preserving maps (APMs, MPMs), or systems which can be reduced to them, such as 2-degree-of-freedom Hamiltonian systems and volume-preserving 3D flows. Most of these properties have a local character, either around a fixed point, around a given orbit (like a periodic orbit or a homoclinic orbit), or around an invariant curve or torus.
We consider multiscale differential equations in which the operator varies rapidly over fine scales. Direct numerical simulation methods need to resolve the small scales and they therefore become very expensive for such problems when the computational domain is large. Inspired by classical homogenization theory, we describe a numerical procedure for homogenization, which starts from a fine discretization of a multiscale differential equation, and computes a discrete coarse grid operator which incorporates the influence of finer scales. In this procedure the discrete operator is represented in a wavelet space, projected onto a coarser subspace and approximated by a banded or block-banded matrix. This wavelet homogenization applies to a wider class of problems than classical homogenization. The projection procedure is general and we give a presentation of a framework in Hilbert spaces, which also applies to the differential equation directly. We show numerical results when the wavelet based homogenization technique is applied to discretizations of elliptic and hyperbolic equations, using different approximation strategies for the coarse grid operator.
Introduction
In the numerical simulation of partial differential equations, the existence of subgrid scale phenomena poses considerable difficulties. By subgrid scale phenomena we mean those processes which could influence the solution on the computational grid but which have length scales shorter than the grid size. Highly oscillatory initial data may, for example, interact with fine scales in the material properties and produce coarse scale contributions to the solution.
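Schematically (a generic description of the projection step, assuming the fine-scale block is invertible), the fine discretization splits in the wavelet basis into coarse-scale and fine-scale components, and the coarse operator is obtained from the Schur complement:

% Schematic form of the wavelet projection step; u_f and u_c denote the
% fine-scale and coarse-scale parts of the solution in the wavelet basis.
\[
  \begin{pmatrix} A & B \\ C & D \end{pmatrix}
  \begin{pmatrix} u_f \\ u_c \end{pmatrix}
  =
  \begin{pmatrix} f_f \\ f_c \end{pmatrix}
  \qquad\Longrightarrow\qquad
  \bigl(D - C A^{-1} B\bigr)\, u_c = f_c - C A^{-1} f_f ,
\]
% assuming A is invertible. The homogenized coarse-grid operator is the
% Schur complement D - C A^{-1} B, truncated to a banded or block-banded
% matrix as described in the abstract above.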
Asymptotic methods include asymptotic evaluation of integrals, asymptotic expansion of solutions to differential equations, singular perturbation techniques, discrete asymptotics, etc. In this survey, we present some of the most significant developments in these areas in the second half of the 20th century. Also mentioned will be a new method known as the Riemann-Hilbert approach, which has had a significant impact in the field in recent years.
Introduction
What is asymptotics? It is the branch of analysis that deals with problems concerning the determination of the behavior of a function as one of its parameters tends to a specific value, or of a sequence as its index tends to infinity. Thus, it includes, for example, Stirling's formula, the asymptotic expansion of the Lebesgue constant in Fourier series, and even the prime number theorem. But, in general, it refers to just the two main areas: (i) asymptotic evaluation of integrals, and (ii) asymptotic solutions to differential equations. The second area sometimes also includes the subject of singular perturbation theory, but the results in this subarea are mostly formal (i.e., not mathematically rigorous). Although occasionally one may also include the methods of asymptotic enumeration in the general area of asymptotics, the development of this area is far behind that of the two areas mentioned above. For instance, a turning point theory for difference equations was not introduced until around the turn of this century, while the corresponding theory for differential equations was developed in the 1930s.
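For reference, the prototypical example quoted above, Stirling's formula, is the asymptotic expansion

\[
  n! \;\sim\; \sqrt{2\pi n}\,\Bigl(\frac{n}{e}\Bigr)^{n}
  \Bigl(1 + \frac{1}{12 n} + \frac{1}{288 n^{2}} + \cdots\Bigr)
  \qquad\text{as } n \to \infty ,
\]

a statement about the behavior of the sequence n! as its index tends to infinity, in exactly the sense just described.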
It's been over 40 years since Abraham Sinkov wrote this wonderful little book. As he mentions in his introduction (Preface to the first edition, above) this is a book concerning elementary cryptographic systems. Much has happened in the cryptographic world since this book first appeared. Notably, public-key systems have changed the landscape entirely. In bringing this book up to date, I've included the RSA method (Chapter 6), reasoning that understanding its underpinnings requires relatively elementary number theory and so would be a useful addition. The difficulty in breaking RSA leads to the question of what is a perfectly secure system, and so I've also added a chapter on one-time pads.
Otherwise, I've tried to change very little in the original text. Some terminology I've brought up to date: “direct standard” alphabets and ciphers are now “additive”, and “decimated” is now “multiplicative”. Sinkov's original exercises I've left unaltered. Their subjects are rather dated, reflecting the cold war era (there are references to nuclear testing and communists), but leaving them, I think, does no harm to the material being studied, and they might now be thought of as “quaint”. In any case, decrypting them presents the same challenge as if more modern messages were used.
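To make the renamed terminology concrete, here is a minimal sketch (in Python, not Sinkov's notation) of the two families just mentioned: an additive cipher shifts each letter by a fixed key, and a multiplicative cipher multiplies the letter's position by a key that must be coprime to 26.

# Minimal sketch (not Sinkov's notation) of additive and multiplicative ciphers.
import math

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def additive_encrypt(plaintext, key):
    # "Direct standard" / additive cipher: c = (p + key) mod 26.
    return "".join(ALPHABET[(ALPHABET.index(ch) + key) % 26]
                   for ch in plaintext.upper() if ch in ALPHABET)

def multiplicative_encrypt(plaintext, key):
    # "Decimated" / multiplicative cipher: c = (p * key) mod 26.
    # The key must be coprime to 26 so that the mapping is invertible.
    if math.gcd(key, 26) != 1:
        raise ValueError("key must be coprime to 26")
    return "".join(ALPHABET[(ALPHABET.index(ch) * key) % 26]
                   for ch in plaintext.upper() if ch in ALPHABET)

print(additive_encrypt("ATTACK AT DAWN", 3))   # -> DWWDFNDWGDZQ
print(multiplicative_encrypt("ATTACK AT DAWN", 5))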
It's hard to find discussion of some of these topics anymore; they are methods you can think of as paper-and-pencil methods. I feel Sinkov still presents the best discussion available of how to break columnar ciphers with columns of unequal length.