We study the computation of a 3-stage service policy driven by a 2-stage game-theoretic problem, the design of a convolutional neural network (CNN) based algorithm, and simulation for a blockchained buffering system with federated learning. More precisely, based on a game-theoretic problem consisting of both “win-lose” and “win-win” 2-stage competitions, we derive a 3-stage dynamical service policy via a saddle point of a zero-sum game problem and a Nash equilibrium point of a non-zero-sum game problem. The policy governs user selection, dynamic pricing, and online rate-resource allocation via a stable digital currency for the system. The main focus is on the design and analysis of the joint 3-stage service policy for given queue/environment-state-dependent pricing and utility functions. The asymptotic optimality and fairness of this dynamic service policy are justified by diffusion modeling with approximation theory. A general CNN-based policy-computing algorithm flow chart, along the lines of the so-called big model framework, is presented. Simulation case studies are conducted for a system with three users, where at any time point only two of the three users can be selected into service by a zero-sum dual-cost game competition policy. The selected two users then enter service and share the system's rate-service resource through a non-zero-sum dual-cost game competition policy. Applications of our policy to the future blockchain-based Internet (e.g., the metaverse and web3.0) and to supply chain finance are also briefly illustrated.
We develop a novel Monte Carlo algorithm for the vector consisting of the supremum, the time at which the supremum is attained, and the position at a given (constant) time of an exponentially tempered Lévy process. The algorithm, based on the increments of the process without tempering, converges geometrically fast (as a function of the computational cost) for discontinuous and locally Lipschitz functions of the vector. We prove that the corresponding multilevel Monte Carlo estimator has optimal computational complexity (i.e. of order $\varepsilon^{-2}$ if the mean squared error is at most $\varepsilon^2$) and provide its central limit theorem (CLT). Using the CLT we construct confidence intervals for barrier option prices and various risk measures based on drawdown under the tempered stable (CGMY) model calibrated/estimated on real-world data. We provide non-asymptotic and asymptotic comparisons of our algorithm with existing approximations, leading to rule-of-thumb principles guiding users to the best method for a given set of parameters. We illustrate the performance of the algorithm with numerical examples.
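To make the multilevel mechanism behind the complexity claim concrete, here is a minimal sketch of a multilevel Monte Carlo estimator, written for geometric Brownian motion under an Euler scheme rather than the tempered Lévy setting of the paper; all parameter values are illustrative. Fine and coarse paths at each level share Brownian increments, so the level corrections have small variance:

```python
import numpy as np

def mlmc_terminal_value(rng, s0=1.0, r=0.1, sig=0.2, T=1.0, L=5, N=20000):
    """Toy multilevel Monte Carlo estimate of E[S_T] for GBM via Euler schemes.

    Level l uses 2**l time steps; the fine and coarse paths at each level
    are coupled by sharing Brownian increments."""
    est = 0.0
    for l in range(L + 1):
        n = 2 ** l
        dt = T / n
        dW = rng.normal(0.0, np.sqrt(dt), size=(N, n))
        Sf = np.full(N, s0)                      # fine Euler path
        for k in range(n):
            Sf = Sf * (1.0 + r * dt + sig * dW[:, k])
        if l == 0:
            est += Sf.mean()                     # base level
        else:
            dWc = dW[:, 0::2] + dW[:, 1::2]      # coarse increments from fine ones
            Sc = np.full(N, s0)
            dtc = 2.0 * dt
            for k in range(n // 2):
                Sc = Sc * (1.0 + r * dtc + sig * dWc[:, k])
            est += (Sf - Sc).mean()              # level correction
    return est
```

Summing the base estimate and the coupled corrections telescopes to the finest-level expectation, which for this toy model is close to $s_0e^{rT}$.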
We propose a discrete-time discrete-space Markov chain approximation with a Brownian bridge correction for computing curvilinear boundary crossing probabilities of a general diffusion process on a finite time interval. For broad classes of curvilinear boundaries and diffusion processes, we prove the convergence of the constructed approximations, in the form of products of the respective substochastic matrices, to the boundary crossing probabilities for the process as the time grid used to construct the Markov chains becomes finer. Numerical results indicate that the convergence rate for the proposed approximation with the Brownian bridge correction is $O(n^{-2})$ in the case of $C^2$ boundaries and a uniform time grid with n steps.
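The effect of a bridge correction is easiest to see in the simplest setting: standard Brownian motion and a constant upper boundary (rather than the general diffusions and curvilinear boundaries treated here). Conditionally on a simulated skeleton, the probability that the path stays below the boundary on each subinterval is an explicit Brownian bridge formula, and multiplying these across subintervals removes the discretization bias of the naive skeleton estimate:

```python
import numpy as np

def crossing_prob_bridge(rng, b=1.0, T=1.0, n=16, N=200000):
    """Estimate P(sup_{0<=t<=T} W_t >= b) for standard Brownian motion from
    a discrete skeleton with a Brownian bridge correction per subinterval."""
    dt = T / n
    W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(N, n)), axis=1)
    W = np.concatenate([np.zeros((N, 1)), W], axis=1)   # prepend W_0 = 0
    a0, a1 = b - W[:, :-1], b - W[:, 1:]                # gaps to the boundary
    below = (a0 > 0) & (a1 > 0)
    # P(bridge stays below b | endpoints) = 1 - exp(-2*a0*a1/dt); if either
    # skeleton endpoint already reached b, the crossing is certain.
    surv = np.where(below, -np.expm1(-2.0 * a0 * a1 / dt), 0.0)
    return 1.0 - np.prod(surv, axis=1).mean()
```

For a constant boundary the corrected estimator is unbiased, matching the reflection-principle value $2(1-\Phi(b/\sqrt{T}))$ up to Monte Carlo error.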
We construct a class of non-reversible Metropolis kernels as a multivariate extension of the guided-walk kernel proposed by Gustafson (Statist. Comput. 8, 1998). The main idea of our method is to introduce a projection that maps a state space to a totally ordered group. By using the Haar measure, we construct a novel Markov kernel termed the Haar mixture kernel, which is of interest in its own right. This is achieved by inducing a topological structure on the totally ordered group. Our proposed method, the $\Delta$-guided Metropolis–Haar kernel, is constructed by using the Haar mixture kernel as a proposal kernel. The proposed non-reversible kernel is at least 10 times better than the random-walk Metropolis kernel and Hamiltonian Monte Carlo kernel for the logistic regression and a discretely observed stochastic process in terms of effective sample size per second.
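Gustafson's one-dimensional guided walk, which the kernel above extends, can be sketched as follows (a standard normal target and an illustrative step size, not the multivariate construction of the paper): proposals always move in the current direction, and the direction flips only on rejection, which breaks reversibility while preserving the target:

```python
import numpy as np

def guided_walk(rng, logpi, x0=0.0, sigma=1.0, iters=50000):
    """One-dimensional guided-walk Metropolis (after Gustafson, 1998).

    The state is augmented with a direction p in {-1, +1}; each proposal
    moves by p * |Normal(0, sigma^2)|, and p flips on rejection."""
    x, p = x0, 1.0
    out = np.empty(iters)
    for i in range(iters):
        y = x + p * abs(rng.normal(0.0, sigma))
        if np.log(rng.uniform()) < logpi(y) - logpi(x):
            x = y            # accept: keep moving in the same direction
        else:
            p = -p           # reject: reverse direction
        out[i] = x
    return out
```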
The principle of maximum entropy is a well-known approach to produce a model for data-generating distributions. In this approach, if partial knowledge about the distribution is available in terms of a set of information constraints, then the model that maximizes entropy under these constraints is used for the inference. In this paper, we propose a new three-parameter lifetime distribution using the maximum entropy principle under the constraints on the mean and a general index. We then present some statistical properties of the new distribution, including hazard rate function, quantile function, moments, characterization, and stochastic ordering. We use the maximum likelihood estimation technique to estimate the model parameters. A Monte Carlo study is carried out to evaluate the performance of the estimation method. In order to illustrate the usefulness of the proposed model, we fit the model to three real data sets and compare its relative performance with respect to the beta generalized Weibull family.
The problem of optimally scaling the proposal distribution in a Markov chain Monte Carlo algorithm is critical to the quality of the generated samples. Much work has gone into obtaining such results for various Metropolis–Hastings (MH) algorithms. Recently, acceptance probabilities other than MH have been employed in problems with intractable target distributions. There are few resources available on tuning the Gaussian proposal distributions for this situation. We obtain optimal scaling results for a general class of acceptance functions, which includes Barker’s and lazy MH. In particular, optimal values for Barker’s algorithm are derived and found to be significantly different from those obtained for the MH algorithm. Our theoretical conclusions are supported by numerical simulations indicating that when the optimal proposal variance is unknown, tuning to the optimal acceptance probability remains an effective strategy.
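Barker's acceptance function replaces the Metropolis ratio $\min(1, r)$ with $r/(1+r)$, where $r = \pi(y)/\pi(x)$. A minimal random-walk sketch with a standard normal target (the target and step size are illustrative, not the optimal tuning derived in the paper):

```python
import numpy as np

def barker_chain(rng, logpi, x0=0.0, sigma=2.4, iters=50000):
    """Random-walk chain with Barker's acceptance function
    a(x, y) = pi(y) / (pi(x) + pi(y)), evaluated in log space."""
    x = x0
    out = np.empty(iters)
    acc = 0
    for i in range(iters):
        y = x + sigma * rng.normal()
        d = logpi(y) - logpi(x)
        # Barker acceptance: 1 / (1 + exp(-d)), the logistic function of d
        if rng.uniform() < 1.0 / (1.0 + np.exp(-d)):
            x = y
            acc += 1
        out[i] = x
    return out, acc / iters
```

Note that Barker's rule accepts uphill moves with probability strictly less than one, so its empirical acceptance rate sits below that of an MH chain with the same proposal.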
We prove existence and uniqueness for the solution of a class of mixed fractional stochastic differential equations with discontinuous drift driven by both standard and fractional Brownian motion. Additionally, we establish a generalized Itô rule valid for functions with an absolutely continuous derivative and applicable to solutions of mixed fractional stochastic differential equations with Lipschitz coefficients, which plays a key role in our proof of existence and uniqueness. The proof of such a formula is new and relies on showing the existence of a density of the law under mild assumptions on the diffusion coefficient.
Matryoshka dolls, the traditional Russian nesting figurines, are known worldwide for each doll’s encapsulation of a sequence of smaller dolls. In this paper, we exploit the structure of a new sequence of nested matrices we call matryoshkan matrices in order to compute the moments of the one-dimensional polynomial processes, a large class of Markov processes. We characterize the salient properties of matryoshkan matrices that allow us to compute these moments in closed form at a specific time without computing the entire path of the process. This simplifies the computation of the polynomial process moments significantly. Through our method, we derive explicit expressions for both transient and steady-state moments of this class of Markov processes. We demonstrate the applicability of this method through explicit examples such as shot noise processes, growth–collapse processes, ephemerally self-exciting processes, and affine stochastic differential equations from the finance literature. We also show that we can derive explicit expressions for the self-exciting Hawkes process, for which finding closed-form moment expressions has been an open problem since their introduction in 1971. In general, our techniques can be used for any Markov process for which the infinitesimal generator of an arbitrary polynomial is itself a polynomial of equal or lower order.
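The nesting idea can be illustrated on the simplest polynomial process, the Ornstein–Uhlenbeck diffusion $\mathrm{d}X = -\theta X\,\mathrm{d}t + \sigma\,\mathrm{d}W$: its moments evolve under a lower-triangular (nested) generator matrix, so a matrix exponential gives any moment at a fixed time without simulating paths. This sketch illustrates that general principle only, not the authors' matryoshkan construction:

```python
import numpy as np

def ou_moment_matrix(theta, sigma, K):
    """Lower-triangular generator acting on the moment vector
    (1, E[X], ..., E[X^K]) of dX = -theta*X dt + sigma dW:
    d/dt E[X^k] = -k*theta*E[X^k] + 0.5*sigma^2*k*(k-1)*E[X^(k-2)]."""
    A = np.zeros((K + 1, K + 1))
    for k in range(1, K + 1):
        A[k, k] = -k * theta
        if k >= 2:
            A[k, k - 2] = 0.5 * sigma ** 2 * k * (k - 1)
    return A

def expm_taylor(A, terms=30):
    """Matrix exponential via scaling-and-squaring with a Taylor series."""
    norm = np.abs(A).sum(axis=1).max()
    s = max(0, int(np.ceil(np.log2(norm)))) if norm > 0 else 0
    B = A / 2.0 ** s
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ B / k
        out = out + term
    for _ in range(s):
        out = out @ out
    return out

def ou_moments(x0, theta, sigma, t, K=4):
    """Moments (1, E[X_t], ..., E[X_t^K]) at a single time t, no path needed."""
    m0 = np.array([x0 ** k for k in range(K + 1)], dtype=float)
    return expm_taylor(ou_moment_matrix(theta, sigma, K) * t) @ m0
```

For example, with $x_0=1$, $\theta=\sigma=1$ the result matches the closed forms $\mathbb{E}[X_t]=e^{-t}$ and $\mathbb{E}[X_t^2]=e^{-2t}+\tfrac12(1-e^{-2t})$.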
In this article we consider the estimation of the log-normalization constant associated to a class of continuous-time filtering models. In particular, we consider ensemble Kalman–Bucy filter estimates based upon several nonlinear Kalman–Bucy diffusions. Using new conditional bias results for the mean of the aforementioned methods, we analyze the empirical log-scale normalization constants in terms of their $\mathbb{L}_n$-errors and $\mathbb{L}_n$-conditional bias. Depending on the type of nonlinear Kalman–Bucy diffusion, we show that these are bounded above by terms such as $\mathsf{C}(n)\left[t^{1/2}/N^{1/2} + t/N\right]$ or $\mathsf{C}(n)/N^{1/2}$ ($\mathbb{L}_n$-errors) and $\mathsf{C}(n)\left[t+t^{1/2}\right]/N$ or $\mathsf{C}(n)/N$ ($\mathbb{L}_n$-conditional bias), where t is the time horizon, N is the ensemble size, and $\mathsf{C}(n)$ is a constant that depends only on n, not on N or t. Finally, we use these results for online static parameter estimation for the above filtering models and implement the methodology for both linear and nonlinear models.
We prove polynomial ergodicity for the one-dimensional Zig-Zag process on heavy-tailed targets and identify the exact order of polynomial convergence of the process when targeting Student distributions.
In this article we consider a Monte-Carlo-based method to filter partially observed diffusions observed at regular and discrete times. Given access only to Euler discretizations of the diffusion process, we present a new procedure which can return online estimates of the filtering distribution with no time-discretization bias and finite variance. Our approach is based upon a novel double application of the randomization methods of Rhee and Glynn (Operat. Res. 63, 2015) along with the multilevel particle filter (MLPF) approach of Jasra et al. (SIAM J. Numer. Anal. 55, 2017). A numerical comparison of our new approach with the MLPF, on a single processor, shows that similar errors are possible for a mild increase in computational cost. However, the new method exhibits strong scaling to arbitrarily many processors.
We study an intertemporal consumption and portfolio choice problem under Knightian uncertainty in which the agent's preferences exhibit local intertemporal substitution. We also allow for market frictions in the sense that the pricing functional is nonlinear. We prove existence and uniqueness of the optimal consumption plan, and we derive a set of sufficient first-order conditions for optimality. With the help of a backward equation, we are able to determine the structure of optimal consumption plans. We obtain explicit solutions in a stationary setting in which the financial market has different risk premia for short and long positions.
The systematic development of coarse-grained (CG) models via the Mori–Zwanzig projector operator formalism requires the explicit description of a deterministic drift term, a dissipative memory term, and a random fluctuation term. The memory and fluctuating terms are related by the fluctuation–dissipation relation and are more challenging to sample and describe than the drift term due to their complex dependence on space and time. This work proposes a rational basis for a Markovian data-driven approach to approximating the memory and fluctuating terms. We assume a functional form for the memory kernel and, under broad regularity hypotheses, derive bounds for the error committed in replacing the original term with an approximation obtained from its asymptotic expansions. These error bounds depend on the characteristic time scale of the atomistic model, representing the decay of the autocorrelation function of the fluctuating force, and the characteristic time scale of the CG model, representing the decay of the autocorrelation function of the momenta of the beads. Using appropriate parameters to describe these time scales, we provide a quantitative meaning to the observation that the Markovian approximation improves as they separate. We then proceed to show how the leading-order term of such an expansion can be identified with the Markovian approximation usually considered in CG theory. We also show that, while the error of the approximation involving time can be controlled, the Markovian term usually considered in CG simulations may exhibit significant spatial variation. It follows that assuming a spatially constant memory term is an uncontrolled approximation which should be carefully checked. We complement our analysis with an application to the estimation of the memory in the CG model of a one-dimensional Lennard–Jones chain with different masses and interactions, showing that even for such a simple case, a non-negligible spatial dependence for the memory term exists.
We introduce an approach and a software tool for solving coupled energy networks composed of gas and electric power networks. These networks are subject to stochastic fluctuations that model uncertain demands and supplies. The presented approach is tested through computational results on networks of realistic size.
We present a new and straightforward algorithm that simulates exact sample paths for a generalized stress-release process. The computation of the exact law of the joint inter-arrival times is detailed and used to derive this algorithm. Furthermore, the martingale generator of the process is derived, yielding theoretical moments which generalize some results of [3] and are used to demonstrate the validity of our simulation algorithm.
We consider a continuous Gaussian random field living on a compact set $T\subset \mathbb{R}^{d}$. We are interested in designing an asymptotically efficient estimator of the probability that the integral of the exponential of the Gaussian process over T exceeds a large threshold u. We propose an Asmussen–Kroese conditional Monte Carlo type estimator and discuss its asymptotic properties according to the assumptions on the first and second moments of the Gaussian random field. We also provide a simulation study to illustrate its effectiveness and compare its performance with the importance sampling type estimator of Liu and Xu (2014a).
We study weighted ensemble, an interacting particle method for sampling distributions of Markov chains that has been used in computational chemistry since the 1990s. Many important applications of weighted ensemble require the computation of long time averages. We establish the consistency of weighted ensemble in this setting by proving an ergodic theorem for time averages. As part of the proof, we derive explicit variance formulas that could be useful for optimizing the method.
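The time-averaging setting of the ergodic theorem can be sketched with a toy weighted ensemble, where an AR(1) chain with N(0,1) stationary law stands in for the molecular dynamics applications; the bin edges, particle counts, and run lengths below are illustrative. Particles carry weights summing to one; at each step every occupied bin is resampled back to a fixed number of copies with the bin's total weight preserved, which keeps the weighted averages unbiased:

```python
import numpy as np

def weighted_ensemble_timeavg(rng, f, a=0.9, per_bin=10, steps=3000, burn=500):
    """Weighted ensemble long-time average of f under the AR(1) chain
    X' = a*X + sqrt(1 - a^2)*N(0,1), whose stationary law is N(0,1)."""
    edges = np.linspace(-2.5, 2.5, 6)           # interior bin edges
    x = rng.normal(size=per_bin)                # initial particles
    w = np.full(per_bin, 1.0 / per_bin)         # weights sum to one
    total, count = 0.0, 0
    for step in range(steps):
        # evolve every particle one step of the Markov kernel
        x = a * x + np.sqrt(1.0 - a * a) * rng.normal(size=x.size)
        if step >= burn:
            total += np.sum(w * f(x))           # weighted observable
            count += 1
        # resample within each occupied bin, preserving the bin's weight
        idx = np.digitize(x, edges)
        new_x, new_w = [], []
        for b in np.unique(idx):
            sel = idx == b
            wb = w[sel].sum()
            pick = rng.choice(np.flatnonzero(sel), size=per_bin, p=w[sel] / wb)
            new_x.append(x[pick])
            new_w.append(np.full(per_bin, wb / per_bin))
        x, w = np.concatenate(new_x), np.concatenate(new_w)
    return total / count
```

For the observable $f(x)=x^2$ the time average converges to the stationary second moment, here equal to 1.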
We investigate properties of random mappings whose core is composed of derangements as opposed to permutations. Such mappings arise as the natural framework for studying the Screaming Toes game described, for example, by Peter Cameron. This mapping differs from the classical case primarily in the behaviour of the small components, and a number of explicit results are provided to illustrate these differences.
In this paper an exact rejection algorithm for simulating paths of the coupled Wright–Fisher diffusion is introduced. The coupled Wright–Fisher diffusion is a family of multivariate Wright–Fisher diffusions that have drifts depending on each other through a coupling term and that find applications in the study of networks of interacting genes. The proposed rejection algorithm uses independent neutral Wright–Fisher diffusions as candidate proposals, which are only needed at a finite number of points. Once a candidate is accepted, the remainder of the path can be recovered by sampling from neutral multivariate Wright–Fisher bridges, for which an exact sampling strategy is also provided. Finally, the algorithm’s complexity is derived and its performance demonstrated in a simulation study.
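The logic of exact rejection sampling, which the algorithm above instantiates for coupled Wright–Fisher paths, can be sketched generically; the Beta(2,2) target, uniform proposal, and bound $M=1.5$ (the density's maximum) below are illustrative stand-ins:

```python
import numpy as np

def rejection_sample(rng, n, logf, sample_g, logg, logM):
    """Exact rejection sampler: draw x ~ g and accept with probability
    f(x) / (M * g(x)); accepted draws have law exactly f."""
    out = []
    while len(out) < n:
        x = sample_g(rng)
        if np.log(rng.uniform()) < logf(x) - logg(x) - logM:
            out.append(x)
    return np.array(out)

def beta22_logpdf(x):
    """Beta(2,2) log-density 6x(1-x) on (0,1)."""
    return np.log(6.0 * x * (1.0 - x)) if 0.0 < x < 1.0 else -np.inf
```

A uniform proposal dominates the target with $M = 1.5$, e.g. `rejection_sample(rng, 1000, beta22_logpdf, lambda r: r.uniform(), lambda x: 0.0, np.log(1.5))`; the expected cost per accepted draw is $M$ proposals.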