The swelling and shrinkage of biological tissues are modelled by a four-component mixture theory in which a deformable and charged porous medium is saturated with a fluid containing dissolved ions. Four components are defined: solid, liquid, cations and anions. The aim of this paper is to construct a Lagrangian model of the four-component system. It is shown that, by choosing a Lagrangian description of the solid skeleton, the motion of the other components can also be described in terms of the initial (Lagrangian) configuration of the solid skeleton. Such an approach has a particularly important bearing on computer-aided calculations. Balance laws are derived for each component and for the whole mixture. In combination with the second law of thermodynamics, the constitutive equations are given. This theory results in a coupled system of nonlinear parabolic differential equations together with an algebraic constraint enforcing electroneutrality. In this model it is desirable to obtain an accurate approximation of the fluid flow and the ion flows; such an approximation can be determined by a mixed finite element method. Part II is devoted to this task.
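As a hedged illustration (generic notation, not taken from the paper), the algebraic electroneutrality constraint in such a four-component (quadriphasic) model is typically of the form
$$ z^{+} c^{+} + z^{-} c^{-} + z^{fc} c^{fc} = 0, $$
where $c^{\pm}$ are the molar concentrations of cations and anions, $z^{\pm}$ their valences, and $c^{fc}$, $z^{fc}$ the concentration and valence of the fixed charges attached to the solid skeleton.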
The discretisation of the Oseen problem by finite element methods may, in general, suffer from two shortcomings. First, the discrete inf-sup (Babuška-Brezzi) condition can be violated. Second, spurious oscillations occur due to the dominating convection. One way to overcome both difficulties is the use of local projection techniques. Studying the local projection method in an abstract setting, we show that the fulfilment of a local inf-sup condition between approximation and projection spaces allows us to construct an interpolation operator with additional orthogonality properties. Based on this special interpolation, optimal a priori error estimates are shown with error constants independent of the Reynolds number. Applying the general theory, we extend the results of Braack and Burman for the standard two-level version of the local projection stabilisation to discretisations of arbitrary order on simplices, quadrilaterals, and hexahedra. Moreover, our general theory allows us to derive a novel class of local projection stabilisations by enrichment of the approximation spaces. This class of stabilised schemes uses approximation and projection spaces defined on the same mesh and leads to much more compact stencils than the two-level approach. Finally, on simplices, the spectral equivalence of the stabilising terms of the local projection method and the subgrid modelling introduced by Guermond is shown. This clarifies the relation of the local projection stabilisation to the variational multiscale approach.
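As a hedged sketch (in generic notation, not that of the paper), a local projection stabilisation term for a convective field $b$ typically has the form
$$ s_h(u_h, v_h) \;=\; \sum_{M} \tau_M \big( \kappa_M (b\cdot\nabla u_h),\, \kappa_M (b\cdot\nabla v_h) \big)_M, \qquad \kappa_M := \mathrm{id} - \pi_M, $$
where the sum runs over the macro elements (or cells) $M$, $\pi_M$ is the local $L^2$ projection onto the projection space on $M$, $\kappa_M$ is the associated fluctuation operator, and $\tau_M$ are user-chosen stabilisation parameters.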
The primary objective of this work is to develop coarse-graining schemes for stochastic many-body microscopic models and to quantify their effectiveness in terms of a priori and a posteriori error analysis. In this paper we focus on stochastic lattice systems of interacting particles at equilibrium. The proposed algorithms are derived from an initial coarse-grained approximation that is directly computable by Monte Carlo simulations, and the corresponding numerical error is calculated using the specific relative entropy between the exact and approximate coarse-grained equilibrium measures. Subsequently we carry out a cluster expansion around this first – and often inadequate – approximation and obtain more accurate coarse-graining schemes. The cluster expansions also yield sharp a posteriori error estimates for the coarse-grained approximations that can be used for the construction of adaptive coarse-graining methods. We present a number of numerical examples demonstrating that the coarse-graining schemes developed here allow for accurate predictions of critical behavior and hysteresis in systems with intermediate- and long-range interactions. We also present examples where they substantially improve predictions of earlier coarse-graining schemes for short-range interactions.
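For illustration (generic notation, not necessarily that used by the authors), the relative entropy between the exact coarse-grained equilibrium measure $\mu$ and its approximation $\bar\mu$ on the coarse configuration space is
$$ \mathcal{R}(\mu \,|\, \bar\mu) = \sum_{\eta} \mu(\eta)\,\log\frac{\mu(\eta)}{\bar\mu(\eta)}, $$
and the specific relative entropy is this quantity normalised by the number of coarse lattice sites, so that the error measure remains meaningful as the system size grows.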
In this paper, we extend the reduced-basis approximations developed earlier for linear elliptic and parabolic partial differential equations with affine parameter dependence to problems involving (a) nonaffine dependence on the parameter, and (b) nonlinear dependence on the field variable. The method replaces the nonaffine and nonlinear terms with a coefficient function approximation which then permits an efficient offline-online computational decomposition. We first review the coefficient function approximation procedure: the essential ingredients are (i) a good collateral reduced-basis approximation space, and (ii) a stable and inexpensive interpolation procedure. We then apply this approach to linear nonaffine and nonlinear elliptic and parabolic equations; in each instance, we discuss the reduced-basis approximation and the associated offline-online computational procedures. Numerical results are presented to assess our approach.
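As a hedged illustration of one such interpolation procedure, the following sketch implements a greedy, empirical-interpolation-style construction of a collateral basis and interpolation points ("magic points") from snapshots of a nonaffine or nonlinear coefficient function; the function name and the snapshot-matrix interface are assumptions made here for illustration, not the paper's API.

    import numpy as np

    def greedy_interpolation_basis(snapshots, tol=1e-8, max_terms=20):
        # snapshots: (n_points, n_samples) array of the nonaffine/nonlinear
        # coefficient function evaluated at sample parameter values
        basis, points = [], []
        for _ in range(max_terms):
            if basis:
                Q = np.column_stack(basis)                  # current collateral basis
                B = Q[points, :]                            # interpolation matrix B[i, k] = q_k(x_i)
                coeffs = np.linalg.solve(B, snapshots[points, :])
                residual = snapshots - Q @ coeffs           # interpolation error of every snapshot
            else:
                residual = snapshots.copy()
            i, j = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
            if abs(residual[i, j]) < tol:
                break                                       # all snapshots interpolated to tolerance
            basis.append(residual[:, j] / residual[i, j])   # new basis function equals 1 at the new point
            points.append(i)
        return np.column_stack(basis), points

In the online stage one would then solve the small system at the selected points for each new parameter value and use the resulting interpolant in place of the nonaffine term, which is what enables the offline-online decomposition.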
The aim of this article is to propose a new method for the grey-level image classification problem. We first present the classical variational approach without and with a regularization term in order to smooth the contours of the classified image. Then we present the general topological asymptotic analysis, and we finally introduce its application to the grey-level image classification problem.
Electro-muscular disruption (EMD) devices such as the TASER M26 and X26 have been used as less-than-lethal weapons. Such EMD devices shoot a pair of darts toward an intended target to generate an incapacitating electrical shock. In the use of the EMD device, there have been controversial questions about its safety and effectiveness. To address these questions, we need to investigate the distribution of the current density J inside the target produced by the EMD device. One approach is to develop a computational model providing a quantitative and reliable analysis of the distribution of J. In this paper, we set up a mathematical model of a typical EMD shock, bearing in mind that we are aiming to compute the current density distribution inside the human body with a pair of inserted darts. The safety issue of the TASER is directly related to the magnitude of |J| in the region of the darts where the current density J is highly concentrated. Hence, fine computation of J near the darts is essential. For such numerical simulations, serious computational difficulties are encountered in dealing with the darts, which have two different very sharp corners: the tip of the needle and the tip of the barb. The boundary of a small fishhook-shaped dart inside a large computational domain and the presence of corner singularities require a very fine mesh, leading to a formidable amount of numerical computation. To circumvent these difficulties, we developed a multiple point source method (MPSM) of computing J. It has the potential to provide effective analysis and a more accurate estimate of J near fishhook-shaped darts. Numerical experiments show that the MPSM is well suited to the study of EMD shocks.
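As a hedged sketch (generic notation assumed here rather than taken from the paper), a multiple point source approximation represents the electric potential near a dart as a superposition of point sources in a conductor of conductivity $\sigma$,
$$ u(x) \;\approx\; \sum_{k=1}^{K} \frac{\alpha_k}{4\pi\sigma\,|x - x_k|}, \qquad J = -\sigma \nabla u, $$
where the source locations $x_k$ are placed along the dart and the strengths $\alpha_k$ are fitted to the injected current and the boundary conditions; the singular behaviour near the sharp needle and barb tips is then carried analytically by the point sources rather than resolved by a very fine mesh.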
This paper is concerned with optimal design problems with a special assumption on the coefficients of the state equation. Namely, we assume that the variations of these coefficients have a small amplitude. Then, making an asymptotic expansion up to second order with respect to the aspect ratio of the coefficients allows us to greatly simplify the optimal design problem. By using the notion of H-measures we are able to prove general existence theorems for small amplitude optimal design and to provide simple and efficient numerical algorithms for their computation. A key feature of this type of problem is that the optimal microstructures are always simple laminates.
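Schematically (generic notation assumed here for illustration), writing the coefficients as a small-amplitude perturbation $a_\eta(x) = a_0\,(1 + \eta\,\theta(x))$ with design variable $\theta$ and small parameter $\eta$, the state and the objective are expanded as
$$ u_\eta = u_0 + \eta\,u_1 + \eta^2 u_2 + o(\eta^2), \qquad J(\eta) = J_0 + \eta\,J_1 + \eta^2 J_2 + o(\eta^2), $$
and the simplified optimal design problem acts on the second-order term, which is where the dependence on the microstructure, and hence the use of H-measures, enters.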
Motivated by the pricing of American options for baskets we consider a parabolic variational inequality in a bounded polyhedral domain $\Omega\subset\mathbb{R}^d$ with a continuous piecewise smooth obstacle. We formulate a fully discrete method by using piecewise linear finite elements in space and the backward Euler method in time. We define an a posteriori error estimator and show that it gives an upper bound for the error in $L^2(0,T;H^1(\Omega))$. The error estimator is localized in the sense that the size of the elliptic residual is only relevant in the approximate non-contact region, and the approximability of the obstacle is only relevant in the approximate contact region. We also obtain lower bound results for the space error indicators in the non-contact region, and for the time error estimator. Numerical results for $d=1,2$ show that the error estimator decays with the same rate as the actual error when the space mesh size $h$ and the time step $\tau$ tend to zero. Also, the error indicators capture the correct behavior of the errors in both the contact and the non-contact regions.
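As a hedged sketch of such a fully discrete scheme (generic notation, assumed for illustration): with discrete obstacle $\chi_h$, admissible set $K_h = \{ v_h \in V_h : v_h \ge \chi_h \}$ of piecewise linear functions, and time step $\tau$, one seeks $U^n \in K_h$ such that
$$ \Big( \frac{U^n - U^{n-1}}{\tau},\, v_h - U^n \Big) + a\big(U^n,\, v_h - U^n\big) \;\ge\; \big(f^n,\, v_h - U^n\big) \qquad \text{for all } v_h \in K_h, $$
where $a(\cdot,\cdot)$ is the bilinear form of the spatial operator; an a posteriori estimator is then assembled from element residuals, obstacle terms, and time discretisation terms, localized as described above.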
In this work we deal with the numerical solution of a Hamilton-Jacobi-Bellman (HJB) equation with infinitely many solutions. To compute the maximal solution – the optimal cost of the original optimal control problem – we present a complete discrete method based on the use of some finite elements and penalization techniques.
Starting from the principle of conservation of mass and the fundamental principle of dynamics, we recover the Euler equation, which allows us to describe asymptotic models of wave propagation in shallow water in dimension 1. To describe wave propagation in dimension 2, Kadomtsev and Petviashvili [15 (1970) 539] use a linear perturbation of the KdV equation. However, this does not settle whether the equations obtained in this way can be derived from the Euler equation; this is what Ablowitz and Segur show in [J. Fluid Mech. 92 (1979) 691–715]. In the same spirit, we emphasise that the KP-BBM equations can also be obtained from the Euler equation, and we examine to what extent they describe the physical model. In a second step, we take up the method introduced by Bona et al. [Lect. Appl. Math. 20 (1983) 235–267], in which they compare the long-wave solutions in dimension 1, namely the solutions of the KdV and BBM equations, in order to show here that the solutions of the KP-II and KP-BBM-II equations remain close over a time interval inversely proportional to the wave amplitude. From the modelling point of view it will be clear, from the first part, that only the model described by KP-BBM-II is well posed; and since, from the physical point of view, KP-II and KP-BBM-II describe long waves of small amplitude when surface tension is negligible, it is of interest to compare them. Moreover, we will see that the method used here remains valid for periodic problems.
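For reference (up to the choice of normalisation of the coefficients, which is assumed here), the two-dimensional models being compared are of the form
$$ \text{KP-II:}\quad \big(u_t + u\,u_x + u_{xxx}\big)_x + u_{yy} = 0, \qquad \text{KP-BBM-II:}\quad \big(u_t + u_x + u\,u_x - u_{xxt}\big)_x + u_{yy} = 0, $$
the "+" sign in front of $u_{yy}$ corresponding to the case of negligible surface tension.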
We are concerned here with processing discontinuous functions from their spectral information. We focus on two main aspects of processing such piecewise smooth data: detecting the edges of a piecewise smooth f, namely, the location and amplitudes of its discontinuities; and recovering with high accuracy the underlying function in between those edges. If f is a smooth function, say analytic, then classical Fourier projections recover f with exponential accuracy. However, if f contains one or more discontinuities, its global Fourier projections produce spurious Gibbs oscillations which spread throughout the smooth regions, causing a local loss of resolution and a global loss of accuracy. Our aim, in the presence of the Gibbs phenomenon, is to detect edges and to reconstruct piecewise smooth functions while regaining the high accuracy encoded in the spectral data.
To detect edges, we utilize a general family of edge detectors based on concentration kernels. Each kernel forms an approximate derivative of the delta function, which detects edges by separation of scales. We show how such kernels can be adapted to detect edges with one- and two-dimensional discrete data, with noisy data, and with incomplete spectral information. The main feature is concentration kernels which enable us to convert global spectral moments into local information in physical space. To reconstruct f with high accuracy we discuss novel families of mollifiers and filters. The main feature here is making these mollifiers and filters adapted to the local region of smoothness while increasing their accuracy together with the dimension of the data. These mollifiers and filters form approximate delta functions which are properly parametrized to recover f with (root-) exponential accuracy.
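As a hedged illustration (schematic normalisation, not the authors' exact formula), given the Fourier coefficients $\hat f_k$, $|k|\le N$, an edge detector built from an admissible concentration factor $\sigma$ takes the form
$$ K_N^{\sigma}[f](x) \;=\; i\pi \sum_{0<|k|\le N} \operatorname{sgn}(k)\, \sigma\!\Big(\frac{|k|}{N}\Big)\, \hat f_k\, e^{ikx}, $$
which concentrates near the jumps: $K_N^{\sigma}[f](x)$ tends to the jump value $[f](x)$ at points of discontinuity and to zero where $f$ is smooth, thereby converting global spectral moments into local edge information.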
This paper describes methods that are important for the numerical evaluation of certain functions that frequently occur in applied mathematics, physics and mathematical statistics. This includes what we consider to be the basic methods, such as recurrence relations, series expansions (both convergent and asymptotic), and numerical quadrature. Several other methods are available and some of these will be discussed in less detail. Examples will be given on the use of special functions in certain problems from mathematical physics and mathematical statistics (integrals and series with special functions).
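As a hedged example of the recurrence-relation approach (a minimal sketch, not taken from the paper; the function name and the start-order heuristic are assumptions), the Bessel functions $J_0(x),\dots,J_n(x)$ can be computed stably by a Miller-type backward recurrence, rescaled with the identity $J_0(x) + 2\sum_{k\ge1} J_{2k}(x) = 1$:

    def bessel_j_backward(n, x, extra=20):
        # Backward recurrence J_{k-1}(x) = (2k/x) J_k(x) - J_{k+1}(x),
        # started with arbitrary small seeds well above the largest order needed,
        # then rescaled using J_0 + 2*(J_2 + J_4 + ...) = 1.
        m = n + extra + int(x)          # heuristic starting order (assumption)
        j_next, j_cur = 0.0, 1e-30      # seeds standing in for J_{m+1} and J_m
        out = [0.0] * (n + 1)
        norm = 0.0
        for k in range(m, 0, -1):
            j_prev = (2.0 * k / x) * j_cur - j_next   # J_{k-1}
            j_next, j_cur = j_cur, j_prev
            order = k - 1
            if order <= n:
                out[order] = j_cur
            if order % 2 == 0:
                norm += j_cur if order == 0 else 2.0 * j_cur
        return [v / norm for v in out]

    # usage: first few Bessel functions at x = 1.0
    # bessel_j_backward(3, 1.0)  ->  approximately [0.7652, 0.4401, 0.1149, 0.0196]

Forward recurrence would be unstable here because $J_k(x)$ is the minimal solution of the three-term recurrence; running the recurrence backwards and normalising afterwards is the standard remedy.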
Finite volume methods apply directly to the conservation law form of a differential equation system; and they commonly yield cell average approximations to the unknowns rather than point values. The discrete equations that they generate on a regular mesh look rather like finite difference equations; but they are really much closer to finite element methods, sharing with them a natural formulation on unstructured meshes. The typical projection onto a piecewise constant trial space leads naturally into the theory of optimal recovery to achieve higher than first-order accuracy. They have dominated aerodynamics computation for over forty years, but they have never before been the subject of an Acta Numerica article. We shall therefore survey their early formulations before describing powerful developments in both their theory and practice that have taken place in the last few years.
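As a hedged, minimal sketch of a finite volume update of cell averages (the model problem and names are assumptions made here, not part of the article), a first-order scheme for the linear conservation law $q_t + (a q)_x = 0$ with periodic boundaries reads:

    import numpy as np

    def upwind_fv(q, a, dx, dt, steps):
        # Conservative update of cell averages q_i:
        #   q_i <- q_i - (dt/dx) * (F_{i+1/2} - F_{i-1/2}),
        # with the upwind numerical flux F_{i+1/2} = a * q_i for a > 0.
        # Stable under the CFL condition a*dt/dx <= 1.
        nu = a * dt / dx
        for _ in range(steps):
            q = q - nu * (q - np.roll(q, 1))
        return q

    # usage: advect a smooth bump once around a periodic unit interval
    nx = 200
    x = (np.arange(nx) + 0.5) / nx
    q0 = np.exp(-100.0 * (x - 0.5) ** 2)
    q1 = upwind_fv(q0, a=1.0, dx=1.0 / nx, dt=0.4 / nx, steps=500)   # final time t = 1.0

The piecewise constant reconstruction implicit in this update is only first-order accurate; the optimal recovery mentioned above replaces it by a higher-order reconstruction in each cell built from neighbouring cell averages.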
This article demonstrates how numerical methods for atmospheric models can be validated by showing that they give the theoretically predicted rate of convergence to relevant asymptotic limit solutions. This procedure is necessary because the exact solution of the Navier–Stokes equations cannot be resolved by production models. The limit solutions chosen are those most important for weather and climate prediction. While the best numerical algorithms for this purpose largely reflect current practice, some important limit solutions cannot be captured by existing methods. The use of Lagrangian rather than Eulerian averaging may be required in these cases.
V. L. Rvachev called R-functions ‘logically charged functions’ because they encode complete logical information within the standard setting of real analysis. He invented them in the 1960s as a means for unifying logic, geometry, and analysis within a common computational framework – in an effort to develop a new computationally effective language for modelling and solving boundary value problems. Over the last forty years, R-functions have been accepted as a valuable tool in computer graphics, geometric modelling, computational physics, and in many areas of engineering design, analysis, and optimization. Yet, many elements of the theory of R-functions continue to be rediscovered in different application areas and special situations. The purpose of this survey is to expose the key ideas and concepts behind the theory of R-functions, explain the utility of R-functions in a broad range of applications, and to discuss selected algorithmic issues arising in connection with their use.
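For illustration (one common choice of R-functions is assumed here; other systems exist), real-valued analogues of Boolean conjunction and disjunction are
$$ x \wedge_0 y \;=\; x + y - \sqrt{x^2 + y^2}, \qquad x \vee_0 y \;=\; x + y + \sqrt{x^2 + y^2}, $$
whose signs reproduce the truth table of $\wedge$ and $\vee$ applied to the signs of $x$ and $y$. This is the sense in which R-functions are "logically charged": implicit functions of primitive regions can be combined by such formulas into a single real-valued function describing a set-theoretically defined geometry.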
Molecular dynamics is discussed from a mathematical perspective. The recent history of method development is briefly surveyed with an emphasis on the use of geometric integration as a guiding principle. The recovery of statistical mechanical averages from molecular dynamics is then introduced, and the use of backward error analysis as a technique for analysing the accuracy of numerical averages is described. This article gives the first rigorous estimates for the error in statistical averages computed from molecular dynamics simulation based on backward error analysis. It is shown that molecular dynamics introduces an appreciable bias at stepsizes which are below the stability threshold. Simulations performed in such a regime can be corrected by use of a stepsize-dependent reweighting factor. Numerical experiments illustrate the efficacy of this approach. In the final section, several open problems in dynamics-based molecular sampling are considered.
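Schematically (generic notation, assumed here for illustration), backward error analysis provides a modified (shadow) Hamiltonian $\widetilde H$ that the numerical trajectory samples more faithfully than $H$; an average with respect to the target Gibbs density can then be recovered from the computed samples $x_n$ by a stepsize-dependent reweighting of the form
$$ \langle A \rangle \;\approx\; \frac{\sum_n w_n\, A(x_n)}{\sum_n w_n}, \qquad w_n = \exp\!\big(-\beta\,[\,H(x_n) - \widetilde H(x_n)\,]\big), $$
where the stepsize dependence enters through $\widetilde H$, which differs from $H$ by terms proportional to a power of the stepsize.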