In this paper, we introduce a non-homogeneous version of the generalized counting process (GCP). We time-change this process by an independent inverse stable subordinator and derive the system of governing differential–integral equations for the marginal distributions of its increments. We then consider the GCP time-changed by a multistable subordinator and obtain its Lévy measure and the distribution of its first passage times. We discuss an application of a time-changed GCP, namely the time-changed generalized counting process-I (TCGCP-I) in ruin theory. A fractional version of the TCGCP-I is studied, and its long-range dependence property is established.
We consider a single-server queue with a threshold that changes its arrival process and service speed depending on its queue length, which is referred to as a two-level GI/G/1 queue. This model is motivated by an energy-saving problem for a single-server queue whose arrival process and service speed are controlled. To obtain its performance in tractable form, we study the limit of the stationary distribution of the queue length in this two-level queue under scaling in heavy traffic. Except for a special case, this limit corresponds to its diffusion approximation. It is shown that this limiting distribution is truncated exponential (or uniform if the drift is null) below the threshold level and exponential above it, under suitably chosen system parameters and generally distributed interarrival times and workloads brought by customers. This result is proved under a mild limitation on arrival parameters using the so-called basic adjoint relationship (BAR) approach studied in Braverman, Dai, and Miyazawa (2017, 2024) and Miyazawa (2017, 2024). We also informally discuss a diffusion process corresponding to the limit of the stationary distribution under scaling.
The Wright–Fisher model, originating in Wright (1931), is one of the canonical probabilistic models used in mathematical population genetics to study how genetic type frequencies evolve in time. In this paper we bound the rate of convergence of the stationary distribution for a finite population Wright–Fisher Markov chain with parent-independent mutation to the Dirichlet distribution. Our result improves the rate of convergence established in Gan et al. (2017) from $\mathrm{O}(1/\sqrt{N})$ to $\mathrm{O}(1/N)$. The results are derived using Stein’s method, in particular, the prelimit generator comparison method.
This paper investigates structural changes in the parameters of first-order autoregressive (AR) models by analyzing the edge eigenvalues of the precision matrices. Specifically, edge eigenvalues in the precision matrix are observed if and only if there is a structural change in the AR coefficients. We show that these edge eigenvalues correspond to the zeros of a determinantal equation. Additionally, we propose a consistent estimator for detecting outliers within the panel time series framework, supported by numerical experiments.
This paper uses a two-step approach to modelling the probability of a policyholder making an auto insurance claim. We perform clustering via Gaussian mixture models and cluster-specific binary regression models. We use telematics information along with traditional auto insurance information and find that the best model incorporates telematics, without the need for dimension reduction via principal components. We also utilise the probabilistic estimates from the mixture model to account for the uncertainty in the cluster assignments. The clustering process allows for the creation of driving profiles and offers a fairer method for policyholder segmentation than when clustering is not used. By fitting separate regression models to the observations from the respective clusters, we are able to offer differential pricing, which recognises that policyholders have different exposures to risk despite having similar covariate information, such as total miles driven. The approach outlined in this paper offers an explainable and interpretable model that can compete with black box models. Our comparisons are based on a synthesised telematics data set that was emulated from a real insurance data set.
In this paper, we study asymptotic behaviors of a subcritical branching Brownian motion with drift $-\rho$, killed upon exiting $(0, \infty)$, and offspring distribution $\{p_k{:}\; k\ge 0\}$. Let $\widetilde{\zeta}^{-\rho}$ be the extinction time of this subcritical branching killed Brownian motion, $\widetilde{M}_t^{-\rho}$ the maximal position of all the particles alive at time t and $\widetilde{M}^{-\rho}:\!=\max_{t\ge 0}\widetilde{M}_t^{-\rho}$ the all-time maximal position. Let $\mathbb{P}_x$ be the law of this subcritical branching killed Brownian motion when the initial particle is located at $x\in (0,\infty)$. Under the assumption $\sum_{k=1}^\infty k ({\log}\; k) p_k <\infty$, we establish the decay rates of $\mathbb{P}_x(\widetilde{\zeta}^{-\rho}>t)$ and $\mathbb{P}_x(\widetilde{M}^{-\rho}>y)$ as t and y respectively tend to $\infty$. We also establish the decay rate of $\mathbb{P}_x(\widetilde{M}_t^{-\rho}> z(t,\rho))$ as $t\to\infty$, where $z(t,\rho)=\sqrt{t}z-\rho t$ for $\rho\leq 0$ and $z(t,\rho)=z$ for $\rho>0$. As a consequence, we obtain a Yaglom-type limit theorem.
In this paper, we study the asymptotic behavior of the generalized Zagreb indices of the classical Erdős–Rényi (ER) random graph G(n, p), as $n\to\infty$. For any integer $k\ge1$, we first give an expression for the kth-order generalized Zagreb index in terms of the number of star graphs of various sizes in any simple graph. The explicit formulas for the first two moments of the generalized Zagreb indices of an ER random graph are then obtained from this expression. Based on the asymptotic normality of the numbers of star graphs of various sizes, several joint limit laws are established for a finite number of generalized Zagreb indices with a phase transition for p in different regimes. Finally, we provide a necessary and sufficient condition for any single generalized Zagreb index of G(n, p) to be asymptotically normal.
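A minimal sketch of the quantity studied above, assuming the common convention that the kth-order generalized Zagreb index of a graph is the sum of the kth powers of its vertex degrees (the abstract's star-count expression is not reproduced here); the sample size n = 200 and edge probability p = 0.05 are arbitrary illustration choices:

```python
import random

def erdos_renyi_degrees(n, p, rng):
    """Sample G(n, p) by independent coin flips on each vertex pair
    and return the resulting degree sequence."""
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    return deg

def generalized_zagreb(deg, k):
    """kth-order generalized Zagreb index: sum of deg(v)**k over vertices."""
    return sum(d ** k for d in deg)

rng = random.Random(0)
deg = erdos_renyi_degrees(200, 0.05, rng)
z1 = generalized_zagreb(deg, 1)  # equals twice the number of edges
z2 = generalized_zagreb(deg, 2)  # the classical first Zagreb index
```

For k = 1 the index degenerates to twice the edge count, which is why the interesting phase transitions in the abstract concern k ≥ 2.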
We use the framework of multivariate regular variation to analyse the extremal behaviour of preferential attachment models. To this end, we follow a directed linear preferential attachment model for a random, heavy-tailed number of steps in time and treat the incoming edge count of all existing nodes as a random vector of random length. By combining martingale properties, moment bounds and a Breiman type theorem we show that the resulting quantity is multivariate regularly varying, both as a vector of fixed length formed by the edge counts of a finite number of oldest nodes, and also as a vector of random length viewed in sequence space. A Pólya urn representation allows us to explicitly describe the extremal dependence between the degrees with the help of Dirichlet distributions. As a by-product of our analysis we establish new results for almost sure convergence of the edge counts in sequence space as the number of nodes goes to infinity.
The systemic nature of climate risk is well established, but the extent may be more severe than previously understood, particularly with regard to cyber risk and economic security. Cyber security relies on the availability of insurance capital to mitigate economic security sector risks and support the reversibility of attacks. However, the cyber insurance industry is still in its infancy. Pressure on insurance capital from increasing natural disaster activity could consume the resources necessary for economic security in the cyber domain in the near term and create long-term conditions that increase the scarcity of capital to support cyber security risks. This article makes an original contribution by exploring the under-researched connection between the nexus of cyber and economic security and the climate change threat. Although the immediate pressure on economic resources for cyber security is limited, recent natural disaster activity has clearly shown that access to capital for cyber risks could come under significant pressure in the future.
Ideological and relational polarization are two increasingly salient political divisions in Western societies. We integrate the study of these phenomena by describing society as a multilevel network of social ties between people and attitudinal ties between people and political topics. We then define and propose a set of metrics to measure ‘network polarization’: the extent to which a community is ideologically and socially divided. Using longitudinal network modelling, we examine whether observed levels of network polarization can be explained by three processes: social selection, social influence, and latent-cause reinforcement. Applied to new longitudinal friendship and political attitude network data from two Swiss university cohorts, our metrics show mild polarization. The models explain this outcome and suggest that friendships and political attitudes are reciprocally formed and sustained. We find robust evidence for friend selection based on attitude similarity and weaker evidence for social influence. The results further point to latent-cause reinforcement processes: (dis)similar attitudes are more likely to be formed or maintained between individuals whose attitudes are already (dis)similar on a range of political issues. Applied across different cultural and political contexts, our approach may help to understand the degree and mechanisms of divisions in society.
Covering formulation, algorithms, and structural results, and linking theory to real-world applications in controlled sensing (including social learning, adaptive radars and sequential detection), this book focuses on the conceptual foundations of partially observed Markov decision processes (POMDPs). It emphasizes structural results in stochastic dynamic programming, enabling graduate students and researchers in engineering, operations research, and economics to understand the underlying unifying themes without getting weighed down by mathematical technicalities. In light of major advances in machine learning over the past decade, this edition includes a new Part V on inverse reinforcement learning as well as a new chapter on non-parametric Bayesian inference (for Dirichlet processes and Gaussian processes), variational Bayes and conformal prediction.
A graduate-level introduction to advanced topics in Markov chain Monte Carlo (MCMC), as applied broadly in the Bayesian computational context. The topics covered have emerged as recently as the last decade and include stochastic gradient MCMC, non-reversible MCMC, continuous time MCMC, and new techniques for convergence assessment. A particular focus is on cutting-edge methods that are scalable with respect to either the amount of data, or the data dimension, motivated by the emerging high-priority application areas in machine learning and AI. Examples are woven throughout the text to demonstrate how scalable Bayesian learning methods can be implemented. This text could form the basis for a course and is sure to be an invaluable resource for researchers in the field.
The payoff in the Chow–Robbins coin-tossing game is the proportion of heads when you stop. Stopping to maximize expectation was addressed by Chow and Robbins (1965), who proved there exist integers $k_n$ such that it is optimal to stop at n tosses when heads minus tails is $k_n$. Finding $k_n$ was unsolved except for finitely many cases by computer. We prove an $o(n^{-1/4})$ estimate of the stopping boundary of Dvoretzky (1967), which then proves $k_n = \left\lceil \alpha\sqrt{n} - 1/2 + \frac{(-2\zeta(-1/2))\sqrt{\alpha}}{\sqrt{\pi}}\, n^{-1/4} \right\rceil$ except for n in a set of density asymptotic to 0, at a power-law rate. Here, $\alpha$ is the Shepp–Walker constant from the Brownian motion analog, and $\zeta$ is Riemann’s zeta function. An $n^{-1/4}$ dependence was conjectured by Christensen and Fischer (2022). Our proof uses moments involving Catalan and Shapiro Catalan triangle numbers which appear in a tree resulting from backward induction, and a generalized backward induction principle. It was motivated by an idea of Häggström and Wästlund (2013) to use backward induction of upper and lower Value bounds from a horizon, which they used numerically to settle a few cases. Christensen and Fischer, with much better bounds, settled many more cases. We use Skorohod’s embedding to get simple upper and lower bounds from the Brownian analog; our upper bound is the one found by Christensen and Fischer in another way. We use them first for yet many more examples and a conjecture, then algebraically in the tree, with feedback to get much sharper Value bounds near the border, and analytic results. Also, we give a formula that gives the exact optimal stop rule for all n up to about a third of a billion; it uses the analytic result plus terms arrived at empirically.
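The boundary formula above is explicit enough to evaluate directly. A minimal sketch, with the standard numerical values $\alpha \approx 0.8399$ (Shepp–Walker constant) and $\zeta(-1/2) \approx -0.2079$ hardcoded as assumptions; recall the formula is claimed only outside a set of n of asymptotic density 0:

```python
import math

# Standard cited numerical values, hardcoded for illustration.
ALPHA = 0.839923675692373        # Shepp-Walker constant (Brownian analog)
ZETA_NEG_HALF = -0.207886224977  # Riemann zeta function at -1/2

def k_n(n):
    """Candidate optimal boundary: stop at n tosses once
    heads minus tails reaches k_n, per the ceiling formula."""
    correction = (-2 * ZETA_NEG_HALF) * math.sqrt(ALPHA) / math.sqrt(math.pi)
    return math.ceil(ALPHA * math.sqrt(n) - 0.5 + correction * n ** -0.25)
```

The $n^{-1/4}$ correction term is tiny even for moderate n, which is consistent with the conjectured $n^{-1/4}$ dependence of Christensen and Fischer (2022).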
We present a short and simple proof of the celebrated hypergraph container theorem of Balogh–Morris–Samotij and Saxton–Thomason. On a high level, our argument utilises the idea of iteratively taking vertices of largest degree from an independent set and constructing a hypergraph of lower uniformity which preserves independent sets and inherits edge distribution. The original algorithms for constructing containers also remove in each step vertices of high degree, which are not in the independent set. Our modified algorithm postpones this until the end, which surprisingly results in a significantly simplified analysis.
Healthcare costs tend to increase with age. In particular, in the case of illness, the last year before death can be an exceptionally costly period as the need for healthcare increases. Using a novel private insurance dataset containing over one million records of claims submitted by individuals to their health insurance providers during the last year of life, our research seeks to shed light on the costs before death in Switzerland. Our work documents how spending patterns change with proximity to dying. We use machine learning algorithms to identify and quantify the key effects that drive a person’s spending during this critical period. Our findings provide a more profound understanding of the costs associated with hospitalization before death, the role of age, and the variation in costs based on the services, including care services, which individuals require.
This paper focuses on the comparison of networks on the basis of statistical inference. For that purpose, we rely on smooth graphon models as a nonparametric modeling strategy that is able to capture complex structural patterns. The graphon itself can be viewed more broadly as a local density or intensity function on networks, making the model a natural choice for comparison purposes. More precisely, to gain information about the (dis-)similarity between networks, we extend graphon estimation towards modeling multiple networks simultaneously. In particular, fitting a single model implies aligning different networks with respect to the same graphon estimate. To do so, we employ an EM-type algorithm. Drawing on this network alignment consequently allows a comparison of the edge density at a local level. Based on that, we construct a chi-squared-type test of the equivalence of network structures. Simulation studies and real-world examples support the applicability of our network comparison strategy.
Structural health monitoring (SHM) is increasingly applied in civil engineering. One of its primary purposes is detecting and assessing changes in structure conditions to increase safety and reduce potential maintenance downtime. Recent advancements, especially in sensor technology, facilitate data measurement, collection, and process automation, leading to large data streams. We propose a function-on-function regression framework for (nonlinear) modeling of the sensor data and adjusting for covariate-induced variation. Our approach is particularly suited for long-term monitoring when several months or years of training data are available. It combines highly flexible yet interpretable semi-parametric modeling with functional principal component analysis and uses the corresponding out-of-sample Phase-II scores for monitoring. The method proposed can also be described as a combination of an “input–output” and an “output-only” method.
We interrogate efforts to legislate artificial intelligence (AI) through Canada’s Artificial Intelligence and Data Act (AIDA) and argue it represents a series of missed opportunities that so delayed the Act that it died. We note how much of this bill was explicitly tied to economic development and implicitly tied to a narrow jurisdictional form of shared prosperity. Instead, we contend that the benefits of AI are not shared but disproportionately favour specific groups, in this case, the AI industry. This trend appears typical of many countries’ AI and data regulations, which tend to privilege the few, despite promises to favour the many. We discuss the origins of AIDA, drafted by Canada’s federal Department for Innovation, Science and Economic Development (ISED). We then consider four problems: (1) AIDA relied on public trust in a digital and data economy; (2) ISED tried to both regulate and promote AI and data; (3) Public consultation was insufficient for AIDA; and (4) Workers’ rights in Canada and worldwide were excluded in AIDA. Without strong checks and balances built into regulation like AIDA, innovation will fail to deliver on its claims. We recommend the Canadian government and, by extension, other governments invest in an AI act that prioritises: (1) Accountability mechanisms and tools for the public and private sectors; (2) Robust workers’ rights in terms of data handling; and (3) Meaningful public participation in all stages of legislation. These policies are essential to countering wealth concentration in the industry, which would stifle progress and widespread economic growth.
We study continuous-time Markov chains on the nonnegative integers under mild regularity conditions (in particular, the set of jump vectors is finite and both forward and backward jumps are possible). Based on the so-called flux balance equation, we derive an iterative formula for calculating stationary measures. Specifically, a stationary measure $\pi(x)$ evaluated at $x\in\mathbb{N}_0$ is represented as a linear combination of a few generating terms, similarly to the characterization of a stationary measure of a birth–death process, where there is only one generating term, $\pi(0)$. The coefficients of the linear combination are recursively determined in terms of the transition rates of the Markov chain. For the class of Markov chains we consider, there is always at least one stationary measure (up to a scaling constant). We give various results pertaining to uniqueness and nonuniqueness of stationary measures, and show that the dimension of the linear space of signed invariant measures is at most the number of generating terms. A minimization problem is constructed in order to compute stationary measures numerically. Moreover, a heuristic linear approximation scheme is suggested for the same purpose by first approximating the generating terms. The correctness of the linear approximation scheme is justified in some special cases. Furthermore, a decomposition of the state space into different types of states (open and closed irreducible classes, and trapping, escaping and neutral states) is presented. Applications to stochastic reaction networks are illustrated throughout.
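The birth–death special case mentioned above, where the single generating term $\pi(0)$ determines the whole measure via flux balance $\pi(x)\lambda_x = \pi(x+1)\mu_{x+1}$, can be sketched minimally as follows; the constant M/M/1-type rates (arrivals at rate 1, service at rate 2) are chosen purely for illustration:

```python
def birth_death_measure(birth, death, n_states):
    """Build a stationary measure of a birth-death chain from the single
    generating term pi(0) = 1 (fixed up to a scaling constant), using
    flux balance: pi(x) * birth(x) = pi(x + 1) * death(x + 1)."""
    pi = [1.0]  # the generating term pi(0)
    for x in range(n_states - 1):
        pi.append(pi[x] * birth(x) / death(x + 1))
    return pi

# Hypothetical M/M/1-type rates: constant birth rate 1, death rate 2,
# giving the geometric measure pi(x) = (1/2)**x up to scaling.
measure = birth_death_measure(lambda x: 1.0, lambda x: 2.0, 10)
```

For general chains with finitely many jump vectors, the abstract's point is that the same flux-balance idea still works, except that several generating terms (rather than $\pi(0)$ alone) must be carried through the recursion.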