Explicit bounds are given for the Kolmogorov and Wasserstein distances between a mixture of normal distributions, by which we mean that the conditional distribution given some $\sigma$-algebra is normal, and a normal distribution with properly chosen parameter values. The bounds depend only on the first two moments of the first two conditional moments given the $\sigma$-algebra. The proof is based on Stein’s method. As an application, we consider the Yule–Ornstein–Uhlenbeck model, used in the field of phylogenetic comparative methods. We obtain bounds for both distances between the distribution of the average value of a phenotypic trait over $n$ related species, and a normal distribution. The bounds imply and extend earlier limit theorems by Bartoszek and Sagitov.
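As a rough numerical illustration of the quantity being bounded (a sketch under our own toy assumptions, not the paper's construction): draw samples from a simple mixture whose conditional law given the mixing variable is normal, fit a normal distribution by matching moments, and estimate the Kolmogorov distance by comparing the empirical CDF with the fitted normal CDF.

```python
# Hypothetical illustration: Kolmogorov distance between a normal mixture
# and a moment-matched normal, estimated from samples. The mixture here
# (conditional law N(M, 1) with M = +/-0.3 equally likely) is our own choice.
import random, math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

random.seed(0)
samples = sorted(random.gauss(random.choice([-0.3, 0.3]), 1.0)
                 for _ in range(20000))
# fit a normal by matching the first two moments
mu = sum(samples) / len(samples)
sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / len(samples))
# Kolmogorov (sup-norm) distance between empirical and fitted normal CDFs
n = len(samples)
dK = max(max(abs((i + 1) / n - normal_cdf(x, mu, sigma)),
             abs(i / n - normal_cdf(x, mu, sigma)))
         for i, x in enumerate(samples))
print(dK)  # small: this mixture is already close to normal
```

Because the conditional mean varies only slightly, the mixture is nearly normal and the estimated distance is small, which is the regime the paper's moment-based bounds quantify.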
We study the distributions of component and system lifetimes under the time-homogeneous load-sharing model, where the multivariate conditional hazard rates of working components depend only on the set of failed components, and not on their failure moments or the time elapsed from the start of system operation. Then we analyze its time-heterogeneous extension, in which the distributions of consecutive failure times, single component lifetimes, and system lifetimes coincide with mixtures of distributions of generalized order statistics. Finally we focus on some specific forms of the time-nonhomogeneous load-sharing model.
We study ergodic properties of a class of Markov-modulated general birth–death processes under fast regime switching. The first set of results concerns the ergodic properties of the properly scaled joint Markov process with a parameter that is taken to be large. Under very weak hypotheses, we show that if the averaged process is exponentially ergodic for large values of the parameter, then the same applies to the original joint Markov process. The second set of results concerns steady-state diffusion approximations, under the assumption that the ‘averaged’ fluid limit exists. Here, we establish convergence rates for the moments of the approximating diffusion process to those of the Markov-modulated birth–death process. This is accomplished by comparing the generator of the approximating diffusion and that of the joint Markov process. We also provide several examples which demonstrate how the theory can be applied.
This study presents functional limit theorems for the Euler characteristic of Vietoris–Rips complexes. The points are drawn from a nonhomogeneous Poisson process on $\mathbb{R}^d$, and the connectivity radius governing the formation of simplices is taken as a function of the time parameter $t$, which allows us to treat the Euler characteristic as a stochastic process. The setting in which this takes place is that of the critical regime, in which the simplicial complexes are highly connected and have nontrivial topology. We establish two ‘functional-level’ limit theorems, a strong law of large numbers and a central limit theorem, for the appropriately normalized Euler characteristic process.
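To make the central object concrete, here is a minimal sketch (our own toy example, not code from the study) computing the Euler characteristic of a Vietoris–Rips complex on a small planar point set: a k-simplex is a set of k+1 points that are pairwise within the connectivity radius, and the Euler characteristic is the alternating sum of simplex counts.

```python
# Illustrative sketch: Euler characteristic of a Vietoris-Rips complex,
# chi(t) = sum_k (-1)^k * (number of k-simplices at radius t).
from itertools import combinations
from math import dist

def rips_euler_characteristic(points, t, max_dim=3):
    """Euler characteristic of the Vietoris-Rips complex on `points`
    with connectivity radius t, counting simplices up to max_dim."""
    n = len(points)
    # an edge (i, j) is present iff the two points lie within distance t
    close = {(i, j) for i, j in combinations(range(n), 2)
             if dist(points[i], points[j]) <= t}
    chi = n  # 0-simplices: the vertices themselves
    for k in range(1, max_dim + 1):
        # a k-simplex is a (k+1)-subset whose points are pairwise close
        count = sum(1 for s in combinations(range(n), k + 1)
                    if all(p in close for p in combinations(s, 2)))
        chi += (-1) ** k * count
    return chi

# three mutually close points (a filled triangle) plus one isolated point
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8), (3.0, 3.0)]
print(rips_euler_characteristic(pts, t=1.2))  # → 2 (two contractible pieces)
```

Sweeping t and recording chi(t) yields exactly the kind of sample path of the Euler characteristic process that the functional limit theorems describe.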
We establish a pathwise large deviation principle for affine stochastic volatility models introduced by Keller-Ressel (2011), and present an application to variance reduction for Monte Carlo computation of prices of path-dependent options in these models, extending the method developed by Genin and Tankov (2020) for exponential Lévy models. To this end, we apply an exponentially affine change of measure and use Varadhan’s lemma, in the fashion of Guasoni and Robertson (2008) and Robertson (2010), to approximate the problem of finding the measure that minimizes the variance of the Monte Carlo estimator. We test the method on the Heston model with and without jumps to demonstrate its numerical efficiency.
We present closed-form solutions to some discounted optimal stopping problems for the running maximum of a geometric Brownian motion with payoffs switching according to the dynamics of a continuous-time Markov chain with two states. The proof is based on the reduction of the original problems to the equivalent free-boundary problems and the solution of the latter problems by means of the smooth-fit and normal-reflection conditions. We show that the optimal stopping boundaries are determined as the maximal solutions of the associated two-dimensional systems of first-order nonlinear ordinary differential equations. The obtained results are related to the valuation of real switching lookback options with fixed and floating sunk costs in the Black–Merton–Scholes model.
In testing for correlation of the errors in regression models, the power of tests can be very low for strongly correlated errors. This counterintuitive phenomenon has become known as the “zero-power trap.” Despite a considerable amount of literature devoted to this problem, mainly focusing on its detection, a convincing solution has not yet been found. In this article, we first discuss theoretical results concerning the occurrence of the zero-power trap phenomenon. Then, we suggest and compare three ways to avoid it. Given an initial test that suffers from the zero-power trap, the method we recommend for practice leads to a modified test whose power converges to $1$ as the correlation gets very strong. Furthermore, the modified test has approximately the same power function as the initial test and thus approximately preserves all of its optimality properties. We also provide some numerical illustrations in the context of testing for network generated correlation.
This article investigates the optimal hedging problem for European contingent claims written on non-tradable assets. We assume that the risky assets follow jump-diffusion models with a common jump process that reflects correlated jump risk. The non-tradable asset and the jump risk lead to an incomplete financial market. Hence, the cross-hedging method is used to reduce the potential risk of the contingent claim seller. First, we obtain an explicit closed-form solution for the locally risk-minimizing hedging strategies of the European contingent claims by using the Föllmer–Schweizer decomposition. Then, we consider the hedging of a European call option as a special case. The value of the European call option under the minimal martingale measure is derived by the Fourier transform method. Next, some semi-closed formulae for the locally risk-minimizing hedging strategies for the European call option are obtained. Finally, some numerical examples are provided to illustrate the sensitivities of the optimal hedging strategies. By comparing the optimal hedging strategies when the underlying asset is non-tradable with those when it is tradable, we find that liquidity risk has a significant impact on the optimal hedging strategies.
The role of fire in the management of degraded areas remains strongly debated. Here we experimentally compare removal and infestation of popcorn kernels (Zea mays L. – Poaceae) and açaí fruits (Euterpe oleracea Mart. – Arecaceae) in one burned and two unburned savanna habitats in the eastern Brazilian Amazon. In each habitat, a total of ten experimental units (five per seed type) were installed, each with three treatments: (1) open access, (2) vertebrate access, and (3) invertebrate access. Generalized linear models showed significant differences in both seed removal (P < 0.0001) and infestation (P < 0.0001) among seed types, habitats, and access treatments. Burned savanna had the highest overall seed infestation rate (24.3%), and invertebrate access increased açaí seed infestation to 100% in the burned savanna. Increased levels of invertebrate seed infestation in burned savanna suggest that preparation burning may be of limited use for the management and restoration of such habitats in tropical regions.
In this paper, we introduce a family of processes with values in the nonnegative integers that describes the dynamics of populations where individuals are allowed to have different types of interactions. The types of interactions that we consider include pairwise interactions, such as competition, annihilation, and cooperation; and interactions among several individuals that can be viewed as catastrophes. We call such families of processes branching processes with interactions. Our aim is to study their long-term behaviour under a specific regime of the pairwise interaction parameters that we introduce as the subcritical cooperative regime. Under such a regime, we prove that a process in this class comes down from infinity and has a moment dual which turns out to be a jump-diffusion that can be thought of as the evolution of the frequency of a trait or phenotype, and whose parameters have a classical interpretation in terms of population genetics. The moment dual is an important tool for characterizing the stationary distribution of branching processes with interactions whenever such a distribution exists; it is also an interesting object in its own right.
This study considered the role of adult children in the core networks of U.S. older adults with varying levels of functional health. Taking a multidimensional perspective of the ego network system, we considered (a) presence of child(ren) in the network, (b) contact with child network members, and (c) embeddedness of children within the network. We observed older parents from three waves of the National Social Life, Health, and Aging Project (NSHAP). The common ‘important matters’ name generator was used to construct egocentric network variables, while self-reported difficulty with activities of daily living was used to measure disablement transitions. Parameters were estimated with Generalized Estimating Equations (GEE). Though child turnover was common in parents’ core networks, there was no evidence linking disablement transitions to systematic forms of child reshuffling. Children who remained in parents’ networks, however, showed increased contact with parents and with other members of the network when the parent underwent disability progression. Disability onset was not significantly linked to either outcome. There was limited evidence of gender variation in these patterns. Overall, results strengthen the view that children are distinctive members of older adults’ core networks. Further, the role of adult children shifts most noticeably at advanced stages of the disablement process.
This paper investigates the distributions of triangle counts per vertex and edge, as a means for network description, analysis, model building, and other tasks. The main interest is in estimating these distributions through sampling, especially for large networks. A novel sampling method tailored for the estimation analysis is proposed, with three sampling designs motivated by several network access scenarios. An estimation method based on inversion and an asymptotic method are developed to recover the entire distribution. A single method to estimate the distribution using multiple samples is also considered. Algorithms are presented to sample the network under the various access scenarios. Finally, the estimation methods are evaluated on synthetic and real-world networks in a data study.
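As a concrete (hypothetical, brute-force) illustration of the statistic whose distribution is being estimated: on a graph small enough to enumerate, we can count, for each vertex, the number of triangles containing it, and then tabulate the per-vertex distribution; the paper's sampling designs target this distribution when full enumeration is infeasible.

```python
# Toy example (our own, not from the paper): exact per-vertex triangle
# counts on a small undirected graph, and their distribution.
from collections import Counter

def triangle_counts_per_vertex(edges):
    """Return {vertex: number of triangles containing it}."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    counts = {v: 0 for v in adj}
    for u, v in edges:
        # every common neighbour w of u and v closes one triangle {u, v, w};
        # crediting the apex w once per edge gives each triangle vertex +1
        for w in adj[u] & adj[v]:
            counts[w] += 1
    return counts

# a 4-cycle with one chord: triangles {0,1,2} and {0,2,3}
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 0)]
counts = triangle_counts_per_vertex(edges)
distribution = Counter(counts.values())
print(counts)        # per-vertex triangle counts
print(distribution)  # the distribution the paper estimates by sampling
```

On large networks this enumeration is exactly what becomes infeasible, motivating the sampling-and-inversion estimators the abstract describes.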
In passive seismic and microseismic monitoring, identifying and characterizing events against a strong noise background is a challenging task. Most of the established methods for geophysical inversion are likely to yield many false event detections. The most advanced of these schemes require thousands of computationally demanding forward elastic-wave propagation simulations. Here we train and use an ensemble of Gaussian process surrogate meta-models, or proxy emulators, to accelerate the generation of accurate template seismograms from random microseismic event locations. In the presence of multiple microseismic events occurring at different spatial locations with arbitrary amplitude and origin time, and in the presence of noise, an inference algorithm needs to navigate an objective function or likelihood landscape of highly complex shape, perhaps with multiple modes and narrow curving degeneracies. This is a challenging computational task even for state-of-the-art Bayesian sampling algorithms. In this paper, we propose a novel method for detecting multiple microseismic events in a strong noise background using Bayesian inference, in particular, the Multimodal Nested Sampling (MultiNest) algorithm. The method not only provides the posterior samples for the 5D spatio-temporal-amplitude inference for the real microseismic events, by inverting the seismic traces in multiple surface receivers, but also computes the Bayesian evidence or the marginal likelihood that permits hypothesis testing for discriminating true vs. false event detection.
Varicella infection during pregnancy has serious implications and, in some cases, a lethal outcome. Although epidemiological studies in developing countries reveal that a significant proportion of patients may remain susceptible during pregnancy, such an estimate of susceptible women is not known for India. We designed this study to estimate the prevalence of, and factors associated with, susceptibility to varicella among rural and urban pregnant women in South India. We prospectively recruited 430 pregnant women and analysed their serum varicella IgG antibodies as surrogates for protection. We estimated seroprevalence, the validity of self-reported history of chickenpox, and factors associated with varicella susceptibility. We found that 23% (95% CI 19.1–27.3%) of women were susceptible. Nearly a quarter (22.2%) of the susceptible women had a history of exposure to chickenpox at some time in the past or during the current pregnancy. Self-reported history of varicella had a positive predictive value of 82.4%. A negative history of chickenpox (adjusted prevalence ratio (PR) 1.85, 95% CI 1.15–3.0) and receiving antenatal care from a rural secondary hospital (adjusted PR 4.08, 95% CI 2.1–7.65) were significantly associated with susceptibility. We conclude that varicella susceptibility during pregnancy is high and that self-reported history of varicella may not be a reliable surrogate for protection.
This is a practical guide to P-splines, a simple, flexible and powerful tool for smoothing. P-splines combine regression on B-splines with simple, discrete, roughness penalties. They were introduced by the authors in 1996 and have been used in many diverse applications. The regression basis makes it straightforward to handle non-normal data, as in generalized linear models. The authors demonstrate optimal smoothing, using mixed model technology and Bayesian estimation, in addition to classical tools like cross-validation and AIC, covering theory and applications with code in R. Going far beyond simple smoothing, they also show how to use P-splines for regression on signals, varying-coefficient models, quantile and expectile smoothing, and composite links for grouped data. Penalties are the crucial elements of P-splines; with proper modifications they can handle periodic and circular data as well as shape constraints. Combining penalties with tensor products of B-splines extends these attractive properties to multiple dimensions. An appendix offers a systematic comparison to other smoothers.
Spatial random graphs capture several important properties of real-world networks. We prove quenched results for the continuous-space version of scale-free percolation introduced in [14]. This is an undirected inhomogeneous random graph whose vertices are given by a Poisson point process in $\mathbb{R}^d$. Each vertex is equipped with a random weight, and the probability that two vertices are connected by an edge depends on their weights and on their distance. Under suitable conditions on the parameters of the model, we show that, for almost all realizations of the point process, the degree distributions of all the nodes of the graph follow a power law with the same tail at infinity. We also show that the averaged clustering coefficient of the graph is self-averaging. In particular, it is almost surely equal to the annealed clustering coefficient of one point, which is a strictly positive quantity.
We consider a space-time random field on $\mathbb{R}^d \times \mathbb{R}$ given as an integral of a kernel function with respect to a Lévy basis with a convolution equivalent Lévy measure. The field obeys causality in time and is thereby not continuous along the time axis. For a large class of such random fields we study the tail behaviour of certain functionals of the field. It turns out that the tail is asymptotically equivalent to the right tail of the underlying Lévy measure. Particular examples are the asymptotic probability that there is a time point and a rotation of a spatial object with fixed radius in which the field exceeds the level $x$, and that there is a time interval and a rotation of a spatial object with fixed radius in which the average of the field exceeds the level $x$.
This paper investigates a financial market where stock returns depend on an unobservable Gaussian mean reverting drift process. Information on the drift is obtained from returns and randomly arriving discrete-time expert opinions. Drift estimates are based on Kalman filter techniques. We study the asymptotic behavior of the filter for high-frequency experts with variances that grow linearly with the arrival intensity. The derived limit theorems state that the information provided by discrete-time expert opinions is asymptotically the same as that from observing a certain diffusion process. These diffusion approximations are extremely helpful for deriving simplified approximate solutions of utility maximization problems.
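To illustrate the filtering mechanism in the simplest possible setting (a scalar sketch under our own assumptions, with a constant drift and Gaussian noise, rather than the paper's mean-reverting model): a conjugate Kalman-type update combines the current drift estimate with a noisy observation, whether that observation comes from returns or from an expert opinion; only the assumed noise variance differs between the two sources.

```python
# Hypothetical scalar sketch: Bayesian/Kalman update of a drift estimate
# (mean m, variance v) from one noisy observation. Numbers are invented.
def kalman_update(m, v, obs, obs_var):
    """Conjugate Gaussian update: posterior mean and variance of the drift
    after observing `obs` with noise variance `obs_var`."""
    gain = v / (v + obs_var)          # weight given to the new observation
    m_new = m + gain * (obs - m)      # shrink towards the observation
    v_new = (1.0 - gain) * v          # uncertainty always decreases
    return m_new, v_new

# prior belief about the drift
m, v = 0.0, 1.0
# a return-based observation (very noisy), then an expert opinion (sharper)
m, v = kalman_update(m, v, obs=0.8, obs_var=4.0)
m, v = kalman_update(m, v, obs=0.5, obs_var=0.25)
print(round(m, 4), round(v, 4))
```

The paper's high-frequency regime corresponds to letting such expert updates arrive ever faster with proportionally growing variance, so that their aggregate information matches that of observing a diffusion.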