It is inspiring to view data assimilation from that epochal moment two hundred years ago when the youthful Carl Friedrich Gauss experienced an epiphany and developed the method of least squares under constraint. In light of the great difficulty that stalwarts such as Laplace and Euler experienced in orbit determination, Gauss certainly experienced the joie de vivre of this creative work. Nevertheless, we suspect that even Gauss could not have foreseen the pervasiveness of his discovery.
And, indeed, it is difficult to view data assimilation aside from dynamical systems – that mathematical exploration commenced by Henri Poincaré in the pre-computer age. Gauss had the luxury of performing least squares on a most stable and forgiving system, the two-body problem of celestial mechanics. Poincaré and his successors, notably G. D. Birkhoff and Edward Lorenz, made it clear that the three-body problem was not so forgiving of slight inaccuracies in the initial state – evident through their attack on the special three-body problem discussed earlier. Further, the failure of deterministic laws to explain Brownian motion and the intricacies of thermodynamics led to a stochastic–dynamic approach in which variables were considered to be random rather than deterministic. In this milieu, probability melded with dynamical law and data assimilation expanded beyond the Gaussian scope.
In this chapter our goal is to describe the classical method of linear least squares estimation as a deterministic process wherein the estimation problem is recast as an optimization (minimization) problem. This approach is quite fundamental to data assimilation and was originally developed by Gauss in the nineteenth century (refer to Part I). The primary advantage of this approach is that it requires no knowledge of the properties of the observational errors, which are an integral part of any measurement system. A statistical approach to estimation, on the other hand, relies on a probabilistic model for the observational errors. One of the important facets of the statistical approach is that, under an appropriate choice of the probabilistic model for the observational errors, we can indeed reproduce the classical deterministic least squares solution described in this chapter. Statistical methods for estimation are reviewed in Part IV.
The opening Section 5.1 introduces the basic “trails of thought” leading to the first formulation of linear least squares estimation using a very simple problem called the straight line problem (see Chapter 3 for details). This problem involves estimating two parameters – the intercept and the slope of the straight line that is being “fitted” to a swarm of m points (that align themselves very nearly along a line) in a two-dimensional plane. An extension to the general case of linear models – m points in n dimensions (n ≥ 2) – is pursued in Section 5.2. Thanks to the beauty and the power of vector–matrix notation, the derivation of this extension is no more complex than the simple two-dimensional example discussed in Section 5.1.
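As a concrete illustration of the straight line problem of Section 5.1, the following short sketch fits the intercept and slope to m noisy points by minimizing the sum of squared residuals. It is not taken from the text: the data, variable names, and the use of NumPy's lstsq routine are illustrative assumptions.

```python
# A minimal sketch of the straight line problem: estimate intercept a and
# slope b from m points (x_i, y_i) that lie very nearly along a line, by
# minimizing the sum of squared residuals.  Data and values are illustrative.
import numpy as np

m = 50
rng = np.random.default_rng(seed=1)
x = np.linspace(0.0, 10.0, m)
y = 2.0 + 0.5 * x + 0.1 * rng.standard_normal(m)   # noisy points near a line

A = np.column_stack([np.ones(m), x])    # design matrix: columns [1, x_i]
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)      # solves min ||A z - y||^2
a_hat, b_hat = coeffs
print(f"estimated intercept = {a_hat:.3f}, estimated slope = {b_hat:.3f}")
```

The same estimates follow from solving the 2 × 2 normal equations AᵀA z = Aᵀy, which is the form in which the classical least squares solution is usually derived.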
A simplified stochastic Hookean dumbbells model arising from viscoelastic flows is considered, the convective terms being disregarded. A finite element discretization in space is proposed. Existence of the numerical solution is proved for small data, as are a priori error estimates, using an implicit function theorem and regularity results obtained in [Bonito et al., J. Evol. Equ. 6 (2006) 381–398] for the solution of the continuous problem. A posteriori error estimates are also derived. Numerical results with small time steps and a large number of realizations confirm the convergence rate with respect to the mesh size.
This work deals with the study of some stratigraphic models for the formation of geological basins under a maximal erosion rate constraint. It leads us to introduce differential inclusions of degenerate hyperbolic–parabolic type $0\in \partial_{t}u-\mathrm{div}\{H(\partial_{t}u+E)\nabla u\}$, where H is the maximal monotone graph of the Heaviside function and E is a given non-negative function. Firstly, we present the new and realistic models and an original mathematical formulation, taking into account the weather-limited rate constraint in the conservation law, with a unilateral constraint on the outflow boundary. Then, we give a study of the 1-D case with numerical illustrations.
In this paper, we propose a new diffuse interface model for the study of three-component immiscible incompressible viscous flows. The model is based on the Cahn–Hilliard free energy approach. The originality of our study lies in particular in the choice of the bulk free energy. We show that care must be taken with this choice in order for the model to give physically relevant results. More precisely, we give conditions for the model to be well-posed and to satisfy algebraic and dynamic consistency properties with respect to the two-component models. Notice that our model is also able to cope with some total spreading situations. We propose to take into account the hydrodynamics of the mixture by coupling our ternary Cahn–Hilliard system and the Navier–Stokes equations supplemented by capillary force terms accounting for surface tension effects between the components. Finally, we present some numerical results which illustrate our analysis and which confirm that our model behaves better than other possible similar models.
This paper is devoted to some elliptic boundary value problems in a self-similar ramified domain of ${\mathbb R}^2$ with a fractal boundary. Both the Laplace and Helmholtz equations are studied. A generalized Neumann boundary condition is imposed on the fractal boundary. Sobolev spaces on this domain are studied. In particular, extension and trace results are obtained. These results enable the investigation of the variational formulation of the above-mentioned boundary value problems. Next, for homogeneous Neumann conditions, the emphasis is placed on transparent boundary conditions, which allow the computation of the solutions in the subdomains obtained by stopping the geometric construction after a finite number of steps. The proposed methods and algorithms will be used numerically in forthcoming papers.
We consider mathematical models describing the dynamics of an elastic beam which is clamped at its left end to a vibrating support and which can move freely at its right end between two rigid obstacles. We model the contact with Signorini's complementarity conditions between the displacement and the shear stress. For this infinite dimensional contact problem, we propose a family of fully discretized approximations and prove their convergence. Moreover, some examples of implementation are presented. The results obtained here are also valid in the case of a beam oscillating between two longitudinal rigid obstacles.
We construct a Roe-type numerical scheme for approximating the solutions of a drift-flux two-phase flow model. The model incorporates a set of highly complex closure laws, and the fluxes are generally not algebraic functions of the conserved variables. Hence, the classical approach of constructing a Roe solver by means of parameter vectors is unfeasible. Alternative approaches for analytically constructing the Roe solver are discussed, and a formulation of the Roe solver valid for general closure laws is derived. In particular, a fully analytical Roe matrix is obtained for the special case of the Zuber–Findlay law describing bubbly flows. First- and second-order accurate versions of the scheme are demonstrated by numerical examples.
In this work we design a new domain decomposition method for the Euler equations in 2 dimensions. The starting point is the equivalence with a third-order scalar equation to which we can apply an algorithm inspired by the Robin–Robin preconditioner for the convection–diffusion equation [Achdou and Nataf, C. R. Acad. Sci. Paris Sér. I 325 (1997) 1211–1216]. Afterwards we translate it into an algorithm for the initial system and prove that, at the continuous level and for a decomposition into 2 sub-domains, it converges in 2 iterations. This property cannot be conserved strictly at the discrete level and for arbitrary domain decompositions, but we still have numerical results which confirm very good stability with respect to the various parameters of the problem (mesh size, Mach number, ...).
The efficient numerical treatment of high-dimensional problems is hampered by the curse of dimensionality. We review approximation techniques which overcome this problem to some extent. Here, we focus on methods stemming from Kolmogorov's theorem, the ANOVA decomposition and the sparse grid approach and discuss their prerequisites and properties. Moreover, we present energy-norm based sparse grids and demonstrate that, for functions with bounded mixed derivatives on the unit hypercube, the associated approximation rate in terms of the involved degrees of freedom shows no dependence on the dimension at all, neither in the approximation order nor in the order constant.
Introduction
The discretization of PDEs by conventional methods is limited to problems with up to three or four dimensions due to storage requirements and computational complexity. The reason is the so-called curse of dimensionality, a term coined in (Bellman 1961). Here, the cost to compute and represent an approximation with a prescribed accuracy ε depends exponentially on the dimensionality d of the problem considered. We encounter complexities of the order O(ε^(−d/r)) with r > 0 depending on the respective approach, the smoothness of the function under consideration, the polynomial degree of the ansatz functions and the details of the implementation. If we consider simple uniform grids with piecewise d-polynomial functions over a bounded domain in a finite element or finite difference approach, this complexity estimate translates to O(N^d) grid points or degrees of freedom, for which approximation accuracies of the order O(N^(−r)) are achieved.
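To make the exponential growth concrete, the following small sketch (the values of N and d are chosen for illustration only, not taken from the text) tabulates N^d, the number of points of a uniform grid with N points per coordinate direction, as the dimension d increases.

```python
# Curse of dimensionality: a uniform grid with N points per coordinate
# direction has N**d points in d dimensions, so storage and work grow
# exponentially with d.  N and the range of d below are illustrative.
N = 100  # points per direction; accuracy roughly O(N**(-r)) for some r > 0

for d in (1, 2, 3, 4, 6, 10):
    print(f"d = {d:2d}: {N**d:.3e} grid points")
```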
By
Liviu I. Ignat, Departamento de Matemáticas, Universidad Autónoma de Madrid, Madrid, Spain,
Enrique Zuazua, Departamento de Matemáticas, Universidad Autónoma de Madrid, Madrid, Spain
By
Daniel A. Spielman, Applied Mathematics and Computer Science, Yale University, New Haven, Connecticut, USA,
Shang-Hua Teng, Computer Science, Boston University, and Akamai Technologies Inc, Boston, Massachusetts, USA
In this paper, we survey some recent progress in the smoothed analysis of algorithms and heuristics in mathematical programming, combinatorial optimization, computational geometry, and scientific computing. Our focus will be more on problems and results rather than on proofs. We discuss several perturbation models used in smoothed analysis for both continuous and discrete inputs. Perhaps more importantly, we present a collection of emerging open questions as food for thought in this field.
Preliminaries
The quality of an algorithm is often measured by its time complexity (Aho, Hopcroft & Ullman (1983) and Cormen, Leiserson, Rivest & Stein (2001)). There are other performance parameters that might be important as well, such as the amount of space used in computation, the number of bits needed to achieve a given precision (Wilkinson (1961)), the number of cache misses in a system with a memory hierarchy (Aggarwal et al. (1987), Frigo et al. (1999), and Sen et al. (2002)), the error probability of a decision algorithm (Spielman & Teng (2003a)), the number of random bits needed in a randomized algorithm (Motwani & Raghavan (1995)), the number of calls to a particular “oracle” program, and the number of iterations of an iterative algorithm (Wright (1997), Ye (1997), Nesterov & Nemirovskii (1994), and Golub & Van Loan (1989)). The quality of an approximation algorithm could be its approximation ratio (Vazirani (2001)) and the quality of an online algorithm could be its competitive ratio (Sleator & Tarjan (1985) and Borodin & El-Yaniv (1998)).
The Society for the Foundations of Computational Mathematics supports and promotes fundamental research in computational mathematics and its applications, interpreted in the broadest sense. It fosters interaction among mathematics, computer science and other areas of computational science through its conferences, workshops and publications. As part of this endeavour to promote research across a wide spectrum of subjects concerned with computation, the Society brings together leading researchers working in diverse fields. Major conferences of the Society have been held in Park City (1995), Rio de Janeiro (1997), Oxford (1999), Minneapolis (2002), and Santander (2005). The next conference is expected to be held in 2008. More information about FoCM is available at its website http://www.focm.net.
The conference in Santander on June 30 – July 9, 2005, was attended by several hundred scientists. FoCM conferences follow a set pattern: mornings are devoted to plenary talks, while in the afternoon the conference divides into a number of workshops, each devoted to a different theme within the broad theme of foundations of computational mathematics. This structure allows for a very high standard of presentation, while affording endless opportunities for cross-fertilization and communication across subject boundaries.