Time parallelization, also known as PinT (parallel-in-time), is a new research direction for the development of algorithms used for solving very large-scale evolution problems on highly parallel computing architectures. Despite the fact that interesting theoretical work on PinT appeared as early as 1964, it was not until 2004, when processor clock speeds reached their physical limit, that research in PinT took off. A distinctive characteristic of parallelization in time is that information flow only goes forward in time, meaning that time evolution processes seem necessarily to be sequential. Nevertheless, many algorithms have been developed for PinT computations over the past two decades, and they are often grouped into four basic classes according to how the techniques work and are used: shooting-type methods; waveform relaxation methods based on domain decomposition; multigrid methods in space–time; and direct time parallel methods. However, over the past few years, it has been recognized that highly successful PinT algorithms for parabolic problems struggle when applied to hyperbolic problems. We will therefore focus on this important aspect, first by providing a summary of the fundamental differences between parabolic and hyperbolic problems for time parallelization. We then group PinT algorithms into two basic groups. The first group contains four effective PinT techniques for hyperbolic problems: Schwarz waveform relaxation (SWR) with its relation to tent pitching; parallel integral deferred correction; ParaExp; and ParaDiag. While the methods in the first group also work well for parabolic problems, we then present PinT methods specifically designed for parabolic problems in the second group: Parareal; the parallel full approximation scheme in space–time (PFASST); multigrid reduction in time (MGRiT); and space–time multigrid (STMG). We complement our analysis with numerical illustrations using four time-dependent PDEs: the heat equation; the advection–diffusion equation; Burgers’ equation; and the second-order wave equation.
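Since Parareal is the prototype for several of the methods surveyed above, a minimal serial sketch of its coarse/fine correction iteration may help fix ideas. The propagators and step sizes below are illustrative choices for a toy problem, not taken from the survey.

```python
import numpy as np

def parareal(u0, t0, T, N, K, coarse, fine):
    """Minimal serial emulation of the Parareal iteration.

    coarse(u, ta, tb) -- cheap propagator G over [ta, tb]
    fine(u, ta, tb)   -- accurate propagator F over [ta, tb]
    In a real PinT code the fine solves run in parallel, one per time slice.
    """
    t = np.linspace(t0, T, N + 1)
    # Initial guess: sequential coarse sweep.
    U = [u0]
    for n in range(N):
        U.append(coarse(U[n], t[n], t[n + 1]))
    for _ in range(K):
        F_old = [fine(U[n], t[n], t[n + 1]) for n in range(N)]    # parallelizable
        G_old = [coarse(U[n], t[n], t[n + 1]) for n in range(N)]
        # Sequential coarse correction sweep:
        # U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k).
        U_new = [u0]
        for n in range(N):
            U_new.append(coarse(U_new[n], t[n], t[n + 1]) + F_old[n] - G_old[n])
        U = U_new
    return t, U

# Toy example: du/dt = -u with explicit Euler propagators of different resolution.
def euler(h):
    def step(u, ta, tb):
        m = max(1, round((tb - ta) / h))
        dt = (tb - ta) / m
        for _ in range(m):
            u = u + dt * (-u)
        return u
    return step

t, U = parareal(np.array([1.0]), 0.0, 2.0, N=10, K=3,
                coarse=euler(0.2), fine=euler(1e-3))
print(U[-1], np.exp(-2.0))  # Parareal approximation vs. exact value at T = 2
```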
Cut finite element methods (CutFEM) extend the standard finite element method to unfitted meshes, enabling the accurate resolution of domain boundaries and interfaces without requiring the mesh to conform to them. This approach preserves the key properties and accuracy of the standard method while addressing challenges posed by complex geometries and moving interfaces.
In recent years, CutFEM has gained significant attention for its ability to discretize partial differential equations in domains with intricate geometries. This paper provides a comprehensive review of the core concepts and key developments in CutFEM, beginning with its formulation for common model problems and the presentation of fundamental analytical results, including error estimates and condition number estimates for the resulting algebraic systems. Stabilization techniques for cut elements, which ensure numerical robustness, are also explored. Finally, extensions to methods involving Lagrange multipliers and applications to time-dependent problems are discussed.
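As a concrete illustration (a standard low-order formulation, not the notation of the paper), a cut finite element discretization of the Poisson problem with Dirichlet data imposed à la Nitsche on an unfitted boundary $\Gamma$, stabilized by a ghost penalty over the faces $\mathcal{F}_G$ of cut elements, uses the bilinear form

$$
a_h(u,v) = \int_{\Omega}\nabla u\cdot\nabla v\,dx
 - \int_{\Gamma}\bigl(\partial_n u\,v + u\,\partial_n v\bigr)\,ds
 + \frac{\gamma_D}{h}\int_{\Gamma} u\,v\,ds
 + \gamma_g \sum_{F\in\mathcal{F}_G} h\int_{F} [\![\partial_n u]\!]\,[\![\partial_n v]\!]\,ds .
$$

The last (ghost penalty) term is what renders coercivity and the conditioning of the stiffness matrix independent of how the boundary cuts the mesh; $\gamma_D$ and $\gamma_g$ are user-chosen penalty parameters.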
The discontinuous Petrov–Galerkin (DPG) method is a Petrov–Galerkin finite element method with test functions designed for obtaining stability. These test functions are computable locally, element by element, and are motivated by optimal test functions which attain the supremum in an inf-sup condition. A profound consequence of the use of nearly optimal test functions is that the DPG method can inherit the stability of the (undiscretized) variational formulation, be it coercive or not. This paper combines a presentation of the fundamentals of the DPG ideas with a review of the ongoing research on theory and applications of the DPG methodology. The scope of the presented theory is restricted to linear problems on Hilbert spaces, but pointers to extensions are provided. Multiple viewpoints to the basic theory are provided. They show that the DPG method is equivalent to a method which minimizes a residual in a dual norm, as well as to a mixed method where one solution component is an approximate error representation function. Being a residual minimization method, the DPG method yields Hermitian positive definite stiffness matrix systems even for non-self-adjoint boundary value problems. Having a built-in error representation, the method has the out-of-the-box feature that it can immediately be used in automatic adaptive algorithms. Contrary to standard Galerkin methods, which are uninformed about test and trial norms, the DPG method must be equipped with a concrete test norm which enters the computations. Of particular interest are variational formulations in which one can tailor the norm to obtain robust stability. Key techniques to rigorously prove convergence of DPG schemes, including construction of Fortin operators, which in the DPG case can be done element by element, are discussed in detail. Pointers to open frontiers are presented.
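In the simplest abstract setting (a sketch in generic notation, not tied to a particular application in the paper), with trial space $U_h$, a finite-dimensional enriched test space $V_r$ equipped with the chosen test norm, and bilinear form $b(\cdot,\cdot)$, the practical DPG method can be written as the mixed problem: find the error representation $\varepsilon\in V_r$ and $u_h\in U_h$ such that

$$
\begin{aligned}
(\varepsilon, v)_V + b(u_h, v) &= \ell(v) && \forall v\in V_r,\\
b(w_h, \varepsilon) &= 0 && \forall w_h\in U_h,
\end{aligned}
$$

which is equivalent to minimizing the residual $\|\ell - B u_h\|_{V_r'}$ over $U_h$; the locally computable $\varepsilon$ is the built-in error representation that drives adaptivity.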
Complicated option pricing models attract much attention in the financial industry, as they produce more accurate values by taking into account more realistic assumptions such as market liquidity and uncertain volatility. We propose a new hybrid method to accurately explore the behaviour of the nonlinear pricing model in illiquid markets, which is important in financial risk management. Our method is based on the Newton iteration technique and the Fréchet derivative to linearize the model. The linearized equation is then discretized by a differential quadrature method in space and a quadratic trapezoid rule in time. It is observed through computations that accurate solutions for the model emerge using very few grid points and time elements, compared with the finite difference method in the literature. Furthermore, this method avoids the convergence issues of the Newton approach applied to the nonlinear algebraic system with many unknowns at each time step that would arise if an implicit method were used for the time discretization. It is important to note that the Fréchet derivative helps to enhance the convergence order of the proposed iterative scheme.
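The linearization step described above follows the generic Newton–Kantorovich pattern (written here schematically; the specific illiquid-market operator is not reproduced): given a nonlinear operator equation $\mathcal{N}(u)=0$, each iteration solves the linear problem

$$
\mathcal{N}'(u_k)[\,\delta_k\,] = -\mathcal{N}(u_k), \qquad u_{k+1} = u_k + \delta_k,
$$

where $\mathcal{N}'(u_k)$ denotes the Fréchet derivative at the current iterate, and the resulting linear equation is the one discretized in space and time.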
This article studies the dynamical behaviour of classical solutions of a hyperbolic system of balance laws, derived from a chemotaxis model with logarithmic sensitivity, with time-dependent boundary conditions. It is shown that under suitable assumptions on the boundary data, solutions starting in the $H^2$-space exist globally in time and the differences between the solutions and their corresponding boundary data converge to zero as time goes to infinity. There is no smallness restriction on the magnitude of the initial perturbations. Moreover, numerical simulations show that the assumptions on the boundary data are necessary for the above-mentioned results to hold true. In addition, numerical results indicate that the solutions converge asymptotically to time-periodic states if the boundary data are time-periodic.
We derive and analyse well-posed boundary conditions for the linear shallow water wave equation. The analysis is based on the energy method and it identifies the number, location and form of the boundary conditions so that the initial boundary value problem is well-posed. A finite-volume method is developed based on the summation-by-parts framework with the boundary conditions implemented weakly using penalties. Stability is proven by deriving a discrete energy estimate analogous to the continuous estimate. The continuous and discrete analysis covers all flow regimes. Numerical experiments are presented verifying the analysis.
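For readers unfamiliar with the summation-by-parts (SBP) framework mentioned above, the defining property (stated in generic one-dimensional notation, not the paper's) is that the difference operator $D = H^{-1}Q$ approximating $\partial_x$ has a symmetric positive definite norm matrix $H$ and a matrix $Q$ satisfying

$$
Q + Q^{\mathsf T} = B = \operatorname{diag}(-1,0,\dots,0,1),
\qquad\text{so that}\qquad
u^{\mathsf T} H D v + (D u)^{\mathsf T} H v = u_N v_N - u_0 v_0 ,
$$

a discrete analogue of integration by parts. Combining this with weak (penalty) enforcement of the well-posed boundary conditions is what yields a discrete energy estimate mirroring the continuous one.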
We propose a monotone approximation scheme for a class of fully nonlinear degenerate partial integro-differential equations which characterize nonlinear $\alpha$-stable Lévy processes under a sublinear expectation space with $\alpha\in(1,2)$. We further establish the error bounds for the monotone approximation scheme. This in turn yields an explicit Berry–Esseen bound and convergence rate for the $\alpha$-stable central limit theorem under sublinear expectation.
Physics-informed neural networks (PINNs) and their variants have been very popular in recent years as algorithms for the numerical simulation of both forward and inverse problems for partial differential equations. This article aims to provide a comprehensive review of currently available results on the numerical analysis of PINNs and related models that constitute the backbone of physics-informed machine learning. We provide a unified framework in which analysis of the various components of the error incurred by PINNs in approximating PDEs can be effectively carried out. We present a detailed review of available results on approximation, generalization and training errors and their behaviour with respect to the type of the PDE and the dimension of the underlying domain. In particular, we elucidate the role of the regularity of the solutions and their stability to perturbations in the error analysis. Numerical results are also presented to illustrate the theory. We identify training errors as a key bottleneck which can adversely affect the overall performance of various models in physics-informed machine learning.
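As a point of reference for the error decomposition discussed above, the basic PINN training objective for an abstract PDE $\mathcal{R}[u]=0$ in $\Omega$ with data $u=g$ on $\partial\Omega$ (generic notation, not tied to a specific result in the article) is the collocation least-squares loss

$$
L(\theta) = \frac{1}{N_r}\sum_{i=1}^{N_r}\bigl|\mathcal{R}[u_\theta](x_i)\bigr|^2
 + \frac{\lambda}{N_b}\sum_{j=1}^{N_b}\bigl|u_\theta(y_j) - g(y_j)\bigr|^2 ,
$$

minimized over the network parameters $\theta$. Roughly speaking, the approximation, generalization and training errors reviewed in the article measure how well $u_\theta$ can represent the solution, how well the sampled loss represents its continuous counterpart, and how well the optimizer minimizes it.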
Questions of ‘how best to acquire data’ are essential to modelling and prediction in the natural and social sciences, engineering applications, and beyond. Optimal experimental design (OED) formalizes these questions and creates computational methods to answer them. This article presents a systematic survey of modern OED, from its foundations in classical design theory to current research involving OED for complex models. We begin by reviewing criteria used to formulate an OED problem and thus to encode the goal of performing an experiment. We emphasize the flexibility of the Bayesian and decision-theoretic approach, which encompasses information-based criteria that are well-suited to nonlinear and non-Gaussian statistical models. We then discuss methods for estimating or bounding the values of these design criteria; this endeavour can be quite challenging due to strong nonlinearities, high parameter dimension, large per-sample costs, or settings where the model is implicit. A complementary set of computational issues involves optimization methods used to find a design; we discuss such methods in the discrete (combinatorial) setting of observation selection and in settings where an exact design can be continuously parametrized. Finally we present emerging methods for sequential OED that build non-myopic design policies, rather than explicit designs; these methods naturally adapt to the outcomes of past experiments in proposing new experiments, while seeking coordination among all experiments to be performed. Throughout, we highlight important open questions and challenges.
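A central example of the information-based criteria mentioned above is the expected information gain of a design $d$ (standard Bayesian OED notation, not specific to this survey),

$$
U(d) = \mathbb{E}_{y\mid d}\bigl[ D_{\mathrm{KL}}\bigl(p(\theta\mid y,d)\,\|\,p(\theta)\bigr)\bigr]
 = \iint p(y\mid\theta,d)\,p(\theta)\,
   \log\frac{p(y\mid\theta,d)}{p(y\mid d)}\;d\theta\,dy ,
$$

whose estimation (for instance by nested Monte Carlo, since the evidence $p(y\mid d)$ is itself an integral) is one source of the computational challenges the article discusses.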
This overview is devoted to splitting methods, a class of numerical integrators intended for differential equations that can be subdivided into different problems easier to solve than the original system. Closely connected with this class of integrators are composition methods, in which one or several low-order schemes are composed to construct higher-order numerical approximations to the exact solution. We analyse in detail the order conditions that have to be satisfied by these classes of methods to achieve a given order, and provide some insight about their qualitative properties in connection with geometric numerical integration and the treatment of highly oscillatory problems. Since splitting methods have received considerable attention in the realm of partial differential equations, we also cover this subject in the present survey, with special attention to parabolic equations and the difficulties arising in their treatment. An exhaustive list of methods of different orders is collected and tested on simple examples. Finally, some applications of splitting methods in different areas, ranging from celestial mechanics to statistics, are also provided.
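The following small script illustrates the basic idea on a linear toy problem $u' = (A+B)u$: the Lie and Strang compositions of the subflows $e^{hA}$ and $e^{hB}$ exhibit first- and second-order one-step accuracy, respectively. The matrices are arbitrary illustrative choices, not an example from the survey.

```python
import numpy as np
from scipy.linalg import expm

# Toy splitting test for u' = (A + B) u: compare one exact step with the
# Lie composition exp(hA) exp(hB) and the Strang composition
# exp(hA/2) exp(hB) exp(hA/2).
rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 4, 4))
u0 = rng.standard_normal(4)

for h in (0.1, 0.05, 0.025):
    exact = expm(h * (A + B)) @ u0
    lie = expm(h * A) @ expm(h * B) @ u0                               # local error O(h^2)
    strang = expm(0.5 * h * A) @ expm(h * B) @ expm(0.5 * h * A) @ u0  # local error O(h^3)
    print(f"h={h:.3f}  Lie err={np.linalg.norm(lie - exact):.2e}"
          f"  Strang err={np.linalg.norm(strang - exact):.2e}")
```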
We study a system of nonlocal aggregation cross-diffusion PDEs that describe the evolution of opinion densities on a network. The PDEs are coupled with a system of ODEs that describe the time evolution of the agents on the network. Firstly, we apply the Deterministic Particle Approximation (DPA) method to the aforementioned system in order to prove the existence of solutions under suitable assumptions on the interactions between agents. Later on, we present an explicit model for opinion formation on an evolving network. The opinions evolve based on both the distance between the agents on the network and the ‘attitude areas’, which depend on the distance between the agents’ opinions. The position of the agents on the network evolves based on the distance between the agents’ opinions. The goal is to study radicalisation, polarisation and fragmentation of the population while changing its open-mindedness and the radius of interaction.
Opinion dynamics is an important and very active area of research that delves into the complex processes through which individuals form and modify their opinions within a social context. The ability to comprehend and unravel the mechanisms that drive opinion formation is of great significance for predicting a wide range of social phenomena such as political polarisation, the diffusion of misinformation, the formation of public consensus and the emergence of collective behaviours. In this paper, we aim to contribute to that field by introducing a novel mathematical model that specifically accounts for the influence of social media networks on opinion dynamics. With the rise of platforms such as Twitter, Facebook, Instagram and many others, social networks have become significant arenas where opinions are shared, discussed and potentially altered. To this aim, after an analytical construction of our new model and through the incorporation of real-life data from Twitter, we calibrate the model parameters to accurately reflect the dynamics that unfold in social media, showing in particular the role played by the so-called influencers in driving individual opinions towards predetermined directions.
In this work, we carry out an analytical and numerical investigation of travelling waves representing arced vegetation patterns on sloped terrains. These patterns are reported to appear also in ecosystems which are not water deprived; therefore, we study the hypothesis that their appearance is due to plant–soil negative feedback, namely due to biomass-(auto)toxicity interactions.
To this aim, we introduce a reaction-diffusion-advection model describing the dynamics of vegetation biomass and toxicity which includes the effect of sloped terrains on the spatial distribution of these variables. Our analytical investigation shows the absence of Turing patterns, whereas travelling waves (moving uphill in the slope direction) emerge. Investigating the corresponding dispersion relation, we provide an analytic expression for the asymptotic speed of the wave. Numerical simulations not only confirm this analytical quantity but also reveal the impact of toxicity on the structure of the emerging travelling pattern.
Our analysis represents a further step in understanding the mechanisms behind relevant plants’ spatial distributions observed in real life. In particular, since vegetation patterns (both stationary and transient) are known to play a crucial role in determining the underlying ecosystems’ resilience, the framework presented here allows us to better understand the emergence of such structures in a larger variety of ecological scenarios and hence to improve the related strategies to ensure the ecosystems’ resilience.
This article surveys research on the application of compatible finite element methods to large-scale atmosphere and ocean simulation. Compatible finite element methods extend Arakawa’s C-grid finite difference scheme to the finite element world. They are constructed from a discrete de Rham complex, which is a sequence of finite element spaces linked by the operators of differential calculus. The use of discrete de Rham complexes to solve partial differential equations is well established, but in this article we focus on the specifics of dynamical cores for simulating weather, oceans and climate. The most important consequence of the discrete de Rham complex is the Hodge–Helmholtz decomposition, which has been used to exclude the possibility of several types of spurious oscillations from linear equations of geophysical flow. This means that compatible finite element spaces provide a useful framework for building dynamical cores. In this article we introduce the main concepts of compatible finite element spaces, and discuss their wave propagation properties. We survey some methods for discretizing the transport terms that arise in dynamical core equation systems, and provide some example discretizations, briefly discussing their iterative solution. Then we focus on the recent use of compatible finite element spaces in designing structure preserving methods, surveying variational discretizations, Poisson bracket discretizations and consistent vorticity transport.
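For orientation, the continuous de Rham complex in three dimensions referred to above is (standard notation, not specific to this survey)

$$
H^1(\Omega)\ \xrightarrow{\ \nabla\ }\ H(\mathrm{curl};\Omega)\ \xrightarrow{\ \nabla\times\ }\ H(\mathrm{div};\Omega)\ \xrightarrow{\ \nabla\cdot\ }\ L^2(\Omega),
$$

and a compatible finite element discretization chooses a subcomplex of finite element spaces (for example, continuous Lagrange, Nédélec, Raviart–Thomas and discontinuous Lagrange elements) linked by the same operators, so that discrete analogues of identities such as $\nabla\times\nabla = 0$ and $\nabla\cdot\nabla\times = 0$, and hence the Hodge–Helmholtz decomposition, hold exactly.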
Low-rank tensor representations can provide highly compressed approximations of functions. These concepts, which essentially amount to generalizations of classical techniques of separation of variables, have proved to be particularly fruitful for functions of many variables. We focus here on problems where the target function is given only implicitly as the solution of a partial differential equation. A first natural question is under which conditions we should expect such solutions to be efficiently approximated in low-rank form. Due to the highly nonlinear nature of the resulting low-rank approximations, a crucial second question is at what expense such approximations can be computed in practice. This article surveys basic construction principles of numerical methods based on low-rank representations as well as the analysis of their convergence and computational complexity.
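As one concrete instance of such representations (a standard format, used here only for illustration), a tensor train/matrix product representation of a $d$-variate coefficient tensor takes the form

$$
u(i_1,\dots,i_d) \;=\; G_1(i_1)\,G_2(i_2)\cdots G_d(i_d),
\qquad G_k(i_k)\in\mathbb{R}^{r_{k-1}\times r_k},\quad r_0=r_d=1,
$$

so that storage scales like $\mathcal{O}(d\,n\,r^2)$ with the mode size $n$ and the maximal rank $r$, instead of $n^d$. The questions addressed in the article are then when the ranks required for a prescribed accuracy remain moderate for PDE solutions, and at what cost such representations can be computed.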
In the current work, we study a stochastic parabolic problem. The presented problem is motivated by the study of an idealised electrically actuated MEMS (Micro-Electro-Mechanical System) device in the case of random fluctuations of the potential difference, a parameter that controls the operation of the MEMS device. We first present the construction of the mathematical model, and then we deduce some local existence results. Next, for some particular versions of the model, corresponding to various boundary conditions, we derive quenching results as well as estimates of the probability for such a singularity to occur. An additional numerical study of the problem in one dimension follows, which also allows further investigation of the problem with respect to its quenching behaviour.
We introduce an approach and a software tool for solving coupled energy networks composed of gas and electric power networks. The networks are subject to stochastic fluctuations that model uncertain demands and supplies. Through computational results, the presented approach is tested on networks of realistic size.
We study the optimal investment strategy to minimize the probability of lifetime ruin under a general mortality hazard rate. We explore the error between the minimum probability of lifetime ruin and the achieved probability of lifetime ruin if one follows a simple investment strategy inspired by earlier work in this area. We also include numerical examples to illustrate the estimation. We show that the nearly optimal probability of lifetime ruin under the simplified investment strategy is quite close to the original minimum probability of lifetime ruin under reasonable parameter values.
We introduce a numerical framework for dispersive equations embedding their underlying resonance structure into the discretisation. This will allow us to resolve the nonlinear oscillations of the partial differential equation (PDE) and to approximate with high-order accuracy a large class of equations under lower regularity assumptions than classical techniques require. The key idea to control the nonlinear frequency interactions in the system up to arbitrary high order thereby lies in a tailored decorated tree formalism. Our algebraic structures are close to the ones developed for singular stochastic PDEs (SPDEs) with regularity structures. We adapt them to the context of dispersive PDEs by using a novel class of decorations which encode the dominant frequencies. The structure proposed in this article is new and gives a variant of the Butcher–Connes–Kreimer Hopf algebra on decorated trees. We observe a similar Birkhoff type factorisation as in SPDEs and perturbative quantum field theory. This factorisation allows us to single out oscillations and to optimise the local error by mapping it to the particular regularity of the solution. This use of the Birkhoff factorisation seems new in comparison to the literature. The field of singular SPDEs took advantage of numerical methods and renormalisation in perturbative quantum field theory by extending their structures via the adjunction of decorations and Taylor expansions. Now, through this work, numerical analysis is taking advantage of these extended structures and provides a new perspective on them.