Motivated by problems from compressed sensing, we determine the threshold behaviour of a random $n\times d \pm 1$ matrix $M_{n,d}$ with respect to the property ‘every $s$ columns are linearly independent’. In particular, we show that for every $0\lt \delta \lt 1$ and $s=(1-\delta )n$, if $d\leq n^{1+1/2(1-\delta )-o(1)}$ then with high probability every $s$ columns of $M_{n,d}$ are linearly independent, and if $d\geq n^{1+1/2(1-\delta )+o(1)}$ then with high probability there are some $s$ linearly dependent columns.
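The threshold phenomenon described above can be probed empirically. The sketch below is a hypothetical Monte Carlo illustration (not the paper's method): it draws a random $n\times d$ $\pm 1$ matrix and tests random $s$-subsets of its columns for linear dependence via the numerical rank.

```python
import numpy as np

def has_dependent_columns(n, d, s, trials=200, rng=None):
    """Monte Carlo check: sample random s-subsets of the columns of a
    random n x d +/-1 matrix and test them for linear dependence via the
    numerical rank (adequate for the small sizes used here)."""
    rng = np.random.default_rng(rng)
    M = rng.choice([-1, 1], size=(n, d))
    for _ in range(trials):
        cols = rng.choice(d, size=s, replace=False)
        if np.linalg.matrix_rank(M[:, cols]) < s:
            return True   # found s linearly dependent columns
    return False

# For d comparable to n (far below the threshold), every sampled
# s-subset should be independent with overwhelming probability.
print(has_dependent_columns(n=20, d=20, s=10))
```

For $d$ well below the threshold $n^{1+1/2(1-\delta)-o(1)}$ the check should essentially always return `False`, matching the first regime of the theorem.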
We consider an $(R, Q)$ inventory model with two types of orders, normal orders and emergency orders, which are issued at different inventory levels. These orders are delivered after exponentially distributed lead times. In between deliveries, the inventory level decreases in a state-dependent way, according to a release rate function $\alpha(\cdot)$. This function represents the fluid demand rate; it could be controlled by a system manager via price adaptations. We determine the mean number of downcrossings $\theta(x)$ of any level $x$ in one regenerative cycle, and use it to obtain the steady-state density $f(x)$ of the inventory level. We also derive the rates of occurrence of normal deliveries and of emergency deliveries, and the steady-state probability of having zero inventory.
Strengthening Hadwiger’s conjecture, Gerards and Seymour conjectured in 1995 that every graph with no odd $K_t$-minor is properly $(t-1)$-colourable. This is known as the Odd Hadwiger’s conjecture. We prove a relaxation of the above conjecture, namely we show that every graph with no odd $K_t$-minor admits a vertex $(2t-2)$-colouring such that all monochromatic components have size at most $\lceil \frac{1}{2}(t-2) \rceil$. The bound on the number of colours is optimal up to a factor of $2$, improves previous bounds for the same problem by Kawarabayashi (2008, Combin. Probab. Comput. 17 815–821), Kang and Oum (2019, Combin. Probab. Comput. 28 740–754), Liu and Wood (2021, arXiv preprint, arXiv:1905.09495), and strengthens a result by van den Heuvel and Wood (2018, J. Lond. Math. Soc. 98 129–148), who showed that the above conclusion holds under the more restrictive assumption that the graph is $K_t$-minor-free. In addition, the bound on the component size in our result is much smaller than those of previous results, in which the dependency on $t$ was given by a function arising from the graph minor structure theorem of Robertson and Seymour. Our short proof combines the method by van den Heuvel and Wood for $K_t$-minor-free graphs with some additional ideas, which make the extension to odd $K_t$-minor-free graphs possible.
This article derives quantitative limit theorems for multivariate Poisson and Poisson process approximations. Employing the solution of the Stein equation for Poisson random variables, we obtain an explicit bound for the multivariate Poisson approximation of random vectors in the Wasserstein distance. The bound is then utilized in the context of point processes to provide a Poisson process approximation result in terms of a new metric called $d_\pi$, stronger than the total variation distance, defined as the supremum over all Wasserstein distances between random vectors obtained by evaluating the point processes on arbitrary collections of disjoint sets. As applications, the multivariate Poisson approximation of the sum of $m$-dependent Bernoulli random vectors, the Poisson process approximation of point processes of $U$-statistic structure, and the Poisson process approximation of point processes with Papangelou intensity are considered. Our bounds in $d_\pi$ are as good as those already available in the literature.
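As a scalar illustration of the kind of bound involved (a simplified univariate sketch, not the paper's multivariate Wasserstein setting), the snippet below computes the exact total variation distance between a sum of independent Bernoulli variables and the Poisson law with matched mean; Le Cam's classical inequality bounds this distance by $\sum_i p_i^2$.

```python
import math

def poisson_binomial_pmf(ps):
    """Exact pmf of a sum of independent Bernoulli(p_i), by convolution."""
    pmf = [1.0]
    for p in ps:
        pmf = [(pmf[k] if k < len(pmf) else 0.0) * (1 - p)
               + (pmf[k - 1] * p if k > 0 else 0.0)
               for k in range(len(pmf) + 1)]
    return pmf

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def tv_distance(ps, kmax=50):
    """Total variation distance between sum of Bernoulli(p_i) and
    Poisson(sum p_i), truncated at kmax (negligible tail here)."""
    lam = sum(ps)
    pb = poisson_binomial_pmf(ps)
    return 0.5 * sum(abs((pb[k] if k < len(pb) else 0.0)
                         - poisson_pmf(lam, k)) for k in range(kmax))

ps = [0.05] * 20            # 20 Bernoulli(0.05) summands, lambda = 1
print(tv_distance(ps))       # small; Le Cam's bound gives <= sum p_i^2 = 0.05
```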
This paper provides a stochastic model, consistent with Solvency II and the Delegated Regulation, to quantify the capital requirement for demographic risk. In particular, we present a framework that models idiosyncratic and trend risks using a risk theory approach in which results are obtained analytically. We apply the model to non-participating policies and quantify the Solvency Capital Requirement for the aforementioned risks over different time horizons.
In certain data assimilation and optimization problems, gradient information is essential. For this purpose, the adjoint model (ADJM) is often employed. The ADJM is the transpose of the tangent linear model (TLM); thus, both are based on the tangent linear approximation of the corresponding nonlinear model. Derivations, formulations, and correctness checks of the TLM and ADJM are described in detail, along with the construction of practical TLM/ADJM codes. Practical methods of deriving the ADJM are introduced, using the adjoint operator, Lagrange multipliers, and the chain rule. Uncertainty and validity of the TLM/ADJM are also discussed in terms of nonlinearity and discontinuous physical processes in numerical models. An example of deriving the ADJM is given for Burgers’ equation, comparing the adjoint operator method and the Lagrange multiplier method.
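A standard correctness check for a TLM/ADJM pair is the dot-product (adjoint) test, $\langle \mathbf{L}\,\delta u, w\rangle = \langle \delta u, \mathbf{L}^{\mathrm{T}} w\rangle$. The sketch below applies it to one upwind step of the inviscid Burgers equation on a periodic grid; this is an illustrative discretization, not the book's code.

```python
import numpy as np

def tlm(u, du, c=0.1):
    """Tangent linear of one upwind Burgers step
    u_i <- u_i - c * u_i * (u_i - u_{i-1}), periodic, c = dt/dx."""
    a = 1.0 - c * (2.0 * u - np.roll(u, 1))   # d(step_i)/du_i
    b = c * u                                  # d(step_i)/du_{i-1}
    return a * du + b * np.roll(du, 1)

def adm(u, w, c=0.1):
    """Adjoint (transpose) of the tangent linear step above."""
    a = 1.0 - c * (2.0 * u - np.roll(u, 1))
    b = c * u
    return a * w + np.roll(b * w, -1)          # (A^T w)_i = a_i w_i + b_{i+1} w_{i+1}

rng = np.random.default_rng(0)
u, du, w = rng.standard_normal((3, 16))
lhs = np.dot(tlm(u, du), w)    # <L du, w>
rhs = np.dot(du, adm(u, w))    # <du, L^T w>
print(abs(lhs - rhs))           # agrees to machine precision: test passes
```

The identity must hold to machine precision for arbitrary `u`, `du`, and `w`; a larger discrepancy indicates a coding error in the TLM or the ADJM.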
The theoretical background on sensitivity analysis, especially the deterministic approach, is described, with definitions of the forward sensitivity coefficient, adjoint sensitivity coefficient, and relative sensitivity coefficient, and examples of their practical applications. Concepts, strategies, and applications of adaptive (targeted) observations are discussed, using adjoint sensitivity analysis, singular vectors, the ensemble transform Kalman filter, and conditional nonlinear optimal perturbations. Forecast sensitivity to observations is also discussed as a tool for assessing the impact of observations. In addition, various targeting field programs are introduced.
An overview of discretizing partial differential equations with selected finite difference methods is provided; such discretization is required for constructing discrete tangent linear and adjoint models.
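A minimal illustration of such a discretization (assuming, for brevity, a periodic grid and a first-order upwind scheme, not a scheme prescribed by the text):

```python
import numpy as np

# Upwind finite-difference step for the linear advection equation
# u_t + a u_x = 0 on a periodic grid: u_i^{n+1} = u_i - C (u_i - u_{i-1}),
# with Courant number C = a*dt/dx (stable for 0 <= C <= 1).
def upwind_step(u, C=0.5):
    return u - C * (u - np.roll(u, 1))

n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)   # Gaussian initial condition
u0_mass = u.sum()
for _ in range(n):
    u = upwind_step(u)
print(np.isclose(u.sum(), u0_mass))    # the scheme conserves total mass
```

The periodic upwind step conserves the grid sum exactly, which gives a quick sanity check on the implementation.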
In this chapter the data assimilation problem is introduced as a control theory problem for partial differential equations, with initial conditions, model error, and empirical model parameters as optional control variables. An alternative interpretation of data assimilation, as the processing of information in a dynamic-stochastic system, is also introduced. Both approaches are addressed in more detail throughout this book. The historical development of data assimilation is documented, starting from the early nineteenth-century works of Legendre, Gauss, and Laplace, through optimal interpolation and Kalman filtering, to modern data assimilation based on variational and ensemble methods, and finally to emerging methods such as particle filters. This shows that data assimilation is not a new concept: it has been of scientific and practical interest for a long time. Part of the chapter introduces the common terminology and notation used in data assimilation, with special emphasis on the observation equation, observation errors, and observation operators. Finally, a basic linear estimation problem based on least squares is presented.
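The basic least-squares estimation problem can be sketched in scalar form; the numbers below are illustrative assumptions, not values from the text.

```python
# Least-squares estimate combining a background x_b (error variance b)
# with an observation y = H x + noise (error variance r): minimize
# J(x) = (x - x_b)^2 / b + (y - H x)^2 / r.
x_b, b = 10.0, 4.0           # background and its error variance (assumed)
y, r, H = 12.0, 1.0, 1.0     # observation, its error variance, operator
K = b * H / (H * b * H + r)           # optimal (Kalman) gain
x_a = x_b + K * (y - H * x_b)         # analysis
print(x_a)  # 11.6: the analysis is pulled toward the more accurate observation
```

Because the observation error variance (1.0) is smaller than the background error variance (4.0), the gain is large (0.8) and the analysis lies close to the observation.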
Coupled data assimilation is presented in detail. Starting from a coupled modeling system, a classification of coupled data assimilation based on coupling strength is defined. This includes uncoupled, weakly coupled, and strongly coupled data assimilation, with the coupling strength quantified using mutual information. The most interesting aspects of coupled data assimilation relate to a strongly coupled system, in which the information exchange is maximized. The challenges of strongly coupled data assimilation include accounting for the complex control variable and error covariance. These challenges can increase considerably in realistic high-dimensional applications. Additional issues that can hamper strongly coupled data assimilation include non-Gaussian errors and potentially different spatiotemporal scales of the coupled system components. To improve understanding of strongly coupled data assimilation, a simple two-component system is introduced and analyzed. The theoretical assessment is followed by real-world examples of strongly coupled forecast error covariance. Finally, coupled covariance localization is analyzed and a practical method to address it is described.
Variational data assimilation (VAR) is described in its various forms, and their mathematical formulations are explained, including three-dimensional/four-dimensional VAR, first guess at appropriate time (FGAT), the Physical-space Statistical Analysis System (PSAS), and incremental approaches. A historical overview of, and the differences between, the calculus of variations and optimal control theory, the theories at the root of VAR, are also discussed; these are represented by the Euler–Lagrange equations and Pontryagin’s maximum (minimum) principle, respectively. Furthermore, major elements of VAR are reviewed with an emphasis on various formalisms of the cost function, including Tikhonov regularization, strong- versus weak-constraint and incremental formulations, and on the specification and diagnosis of error covariances, including observation error covariance, background error covariance, and model error covariance. Issues in minimizing the VAR cost function, including the gradient, preconditioning, and the assimilation period, are also addressed.
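The standard 3D-Var cost function and its gradient can be sketched in a few lines; the dimensions and identity covariances below are toy assumptions for illustration only.

```python
import numpy as np

# 3D-Var cost J(x) = 0.5 (x-xb)^T B^{-1} (x-xb) + 0.5 (y-Hx)^T R^{-1} (y-Hx)
# with gradient  grad J = B^{-1} (x-xb) - H^T R^{-1} (y-Hx).
def cost_and_grad(x, xb, Binv, y, H, Rinv):
    dxb = x - xb
    innov = y - H @ x
    J = 0.5 * dxb @ Binv @ dxb + 0.5 * innov @ Rinv @ innov
    gradJ = Binv @ dxb - H.T @ Rinv @ innov
    return J, gradJ

xb = np.array([1.0, 2.0])          # background state (assumed)
Binv = np.eye(2)                   # identity background error covariance
H = np.array([[1.0, 0.0]])         # observe only the first component
y = np.array([1.5])                # single observation (assumed)
Rinv = np.eye(1)
# The gradient vanishes at the analysis, which solves the normal equations
# (B^{-1} + H^T R^{-1} H) x = B^{-1} xb + H^T R^{-1} y.
A = Binv + H.T @ Rinv @ H
xa = np.linalg.solve(A, Binv @ xb + H.T @ Rinv @ y)
J, g = cost_and_grad(xa, xb, Binv, y, H, Rinv)
print(np.allclose(g, 0.0))  # True: the analysis minimizes the cost
```

With these numbers the analysis is $(1.25, 2.0)$: the observed component moves halfway toward the observation (equal error variances), while the unobserved component stays at the background value.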
The mathematical background and formulation of the numerical minimization process are described in terms of gradient-based methods, whose ingredients include the gradient, Hessian, directional derivatives, optimality conditions for minimization, the Hessian eigensystem, the condition number of the Hessian, and conjugate vectors. Various minimization algorithms, such as the steepest descent method, Newton’s method, the conjugate gradient method, and quasi-Newton methods, are introduced along with practical examples.
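The role of the Hessian's condition number can be illustrated by comparing steepest descent (with exact line search) and Newton's method on a quadratic; this is a toy example, not one from the text.

```python
import numpy as np

# Quadratic cost J(x) = 0.5 x^T A x - b^T x, gradient g = A x - b.
# Newton's method reaches the minimizer A^{-1} b in one step; steepest
# descent converges at a linear rate governed by the condition number of
# the Hessian A (here 3, so convergence is fast).
A = np.array([[3.0, 0.0], [0.0, 1.0]])   # Hessian, condition number 3
b = np.array([1.0, 1.0])
x = np.zeros(2)
for _ in range(50):
    g = A @ x - b
    alpha = (g @ g) / (g @ A @ g)   # exact line-search step length
    x = x - alpha * g
x_newton = np.linalg.solve(A, b)     # Newton: one step from any start
print(np.allclose(x, x_newton))       # True
```

Increasing the condition number of `A` slows the steepest descent iteration markedly, which is why preconditioning figures prominently in practical minimization.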