This paper introduces a novel expectation-maximization (EM) algorithm for estimating general phase-type (PH) distributions from left-truncated and right-censored (LTRC) data, a common challenge in survival analysis. The proposed algorithm is highly efficient with computational complexity that scales with the number of nonzero elements in the generator matrix. This feature makes the estimation of high-dimensional, sparse PH models computationally tractable and enables the practical use of the computationally intensive extended information criterion for model selection. Numerical experiments demonstrate its significant speed advantage over a modern benchmark and the applicability of PH models to complex lifetime data.
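As a quick illustration of the phase-type family the abstract concerns, the sketch below simulates absorption times of the underlying Markov jump process from a given initial distribution and sub-intensity matrix. This is a minimal Monte Carlo check, not the paper's EM algorithm; the Erlang(2) example matrix is chosen purely for illustration.

```python
import numpy as np

def sample_phase_type(alpha, T, rng):
    """Draw one phase-type variate: run the Markov jump process with initial
    distribution alpha and sub-intensity matrix T until absorption."""
    p = len(alpha)
    exit_rates = -T.sum(axis=1)                # rates into the absorbing state
    state = rng.choice(p, p=alpha)
    t = 0.0
    while True:
        rate = -T[state, state]                # total jump rate out of state
        t += rng.exponential(1.0 / rate)
        # next state: off-diagonal entries of T, plus the absorbing state
        probs = np.append(T[state].clip(min=0), exit_rates[state]) / rate
        nxt = rng.choice(p + 1, p=probs / probs.sum())
        if nxt == p:                           # absorbed: return the lifetime
            return t
        state = nxt

rng = np.random.default_rng(0)
alpha = np.array([1.0, 0.0])                   # Erlang(2) with rate 1
T = np.array([[-1.0, 1.0], [0.0, -1.0]])
times = [sample_phase_type(alpha, T, rng) for _ in range(20000)]
print(round(np.mean(times), 2))                # close to the true mean, 2.0
```

Note that the sparsity feature highlighted in the abstract is visible even here: each jump only consults the nonzero entries of one row of `T`.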
In many insurance contexts, dependence between risks of a portfolio may arise from their frequencies. We investigate a dependent risk model in which we assume the vector of count variables to be a tree-structured Markov random field with Poisson marginals. The tree structure translates into a wide variety of dependence schemes. We study the global risk of the portfolio and the risk allocation to all its constituents. We provide asymptotic results for portfolios defined on infinitely growing trees. To illustrate its flexibility and computational scalability to higher dimensions, we calibrate the risk model on real-world extreme rainfall data and perform a risk analysis.
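One standard way to build frequency dependence on a tree while keeping Poisson marginals is binomial thinning along edges; the sketch below uses this construction for intuition only, and is not necessarily the exact Markov random field calibrated in the paper (the tree, `lam`, and `alpha` are made up).

```python
import numpy as np

def sample_poisson_tree(children, lam, alpha, rng):
    """Sample counts on a rooted tree: the root is Poisson(lam); each child is
    a binomial thinning of its parent plus an independent Poisson innovation,
    so every node keeps a Poisson(lam) marginal while being positively
    correlated with its parent."""
    x = {0: rng.poisson(lam)}
    stack = [0]
    while stack:
        u = stack.pop()
        for v in children.get(u, []):
            x[v] = rng.binomial(x[u], alpha) + rng.poisson((1 - alpha) * lam)
            stack.append(v)
    return x

rng = np.random.default_rng(1)
tree = {0: [1, 2], 1: [3]}                     # hypothetical 4-node tree
draws = np.array([list(sample_poisson_tree(tree, 5.0, 0.6, rng).values())
                  for _ in range(20000)])
print(np.round(draws.mean(axis=0), 1))         # each marginal mean ≈ lam = 5
```

The thinning parameter `alpha` is the correlation between a parent and its child, so varying it along the edges already yields a wide variety of dependence schemes.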
When undertaking a community intervention, interventionists frequently recruit the help of community members who serve as key opinion leaders (KOLs). However, selecting a team of KOLs can be challenging because the evaluation of potential teams must balance considerations of members’ availability and diversity, as well as the team’s breadth of network coverage and cost of recruitment. This paper has two goals: to review the practical challenges that arise in the selection of KOLs for community interventions, and to facilitate the selection of KOLs when some of these practical challenges are present by introducing and demonstrating the KOLaide R package. We conclude by discussing future directions for facilitating the selection of KOLs in community intervention contexts.
We propose a novel approach to numerically approximate McKean–Vlasov stochastic differential equations (MV-SDEs) using stochastic gradient descent (SGD) while avoiding the use of interacting particle systems (IPSs) and the associated simulation costs required to achieve the ‘propagation of chaos’ limit. The SGD technique is deployed to solve a Euclidean minimization problem, obtained by first representing the MV-SDE as a minimization problem over the set of continuous functions of time, and then approximating the domain with a finite-dimensional sub-space. Convergence is established by proving certain intermediate stability and moment estimates of the relevant stochastic processes, including the tangent processes. Numerical experiments illustrate the competitive performance of our SGD-based method compared with the IPS benchmarks. This work offers a theoretical foundation for using the SGD method in the context of numerical approximation of MV-SDEs, and provides analytical tools to study its stability and convergence.
We consider a Lévy process reflected at the origin with additional independent and identically distributed collapses that occur at Poisson epochs, where a collapse is a jump downward to a state which is a random fraction of the state just before the jump. We first study the general case, then specialize to the case where the Lévy process is spectrally positive, and, finally, we specialize further to the two cases where the Lévy process is a Brownian motion and a compound Poisson process with exponential jumps minus a linear slope.
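A minimal Euler-scheme sketch of the simplest special case above (Brownian motion with drift reflected at the origin, with uniform collapse fractions) may help fix ideas. The reflection is approximated by truncation at 0, and all parameters are illustrative.

```python
import numpy as np

def reflected_bm_with_collapses(T, dt, mu, sigma, rate, rng):
    """Euler scheme for a drifted Brownian motion reflected at 0, with
    collapses at Poisson(rate) epochs: at a collapse the state jumps down
    to U * X for an independent Uniform(0,1) fraction U."""
    n = round(T / dt)
    x = np.empty(n + 1)
    x[0] = 0.0
    for k in range(n):
        y = x[k] + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if rng.random() < rate * dt:           # a collapse epoch in (t, t+dt]
            y *= rng.random()                  # jump to a random fraction
        x[k + 1] = max(y, 0.0)                 # reflection at the origin
    return x

rng = np.random.default_rng(2)
path = reflected_bm_with_collapses(T=10.0, dt=1e-3, mu=0.5,
                                   sigma=1.0, rate=1.0, rng=rng)
print(path.min() >= 0.0)                       # True: never below the origin
```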
Point clouds derived from UAV photogrammetry are a cost-effective alternative to LiDAR for infrastructure inspections, but they often include both structural and non-structural elements that complicate analysis. Traditional denoising filters remove outliers indiscriminately and frequently erode edges, making it difficult to preserve the curved tunnel lining while distinguishing bolts, access gates, or pipelines. In contrast, segmentation-based approaches leverage geometric context to explicitly separate lining surfaces from ancillary components, thereby enabling more accurate deformation analysis and structural assessment. To that end, this paper presents a novel approach for denoising image point clouds that uses a synthetic training dataset to address the scarcity of labeled public data for enhancing point cloud quality. Unlike other denoising approaches that rely on projections or assume points lie on a predefined surface shape, this segmentation-based denoising method retains only meaningful points in their original locations, allowing for more accurate analysis of deformation. Applied to a road tunnel image point cloud and a subway tunnel terrestrial laser scanning point cloud, the synthetically trained method demonstrates its potential to enhance point cloud quality in tunnels with diverse geometries and point cloud data sources, even when data are limited. The method achieves an 80% mean intersection over union against manual annotations for both the road tunnel and the subway tunnel, enabling improved structural deformation analysis at the millimetre level.
This paper develops a unified framework for catastrophe (CAT) bond pricing that integrates distortion operator theory with recurrent neural network (RNN) estimation. A novel peer-adjusted distortion factor is introduced, constructed from both the Wang transform and the jump-diffusion (JD) distortion operator, and calibrated using the market-weighted spread of comparable CAT bonds together with the target bond’s expected loss. This factor embeds prevailing investor sentiment, reinsurance capacity, and market liquidity into the distortion measure, enabling consistent pricing inference even when the bond’s own spread is unobserved. Empirically, the JD distortion model systematically outperforms both the canonical Wang transform and the raw expected loss in in-sample and out-of-sample tests, capturing discontinuous repricing and tail-risk compensation with greater precision. Extending the framework to a multifactor specification that combines actuarial fundamentals with financial-market covariates further enhances explanatory and predictive performance. From a methodological perspective, the RNN serves as a structural estimator for the parameters of the distortion operators, achieving higher accuracy, stability, and computational efficiency than conventional approaches such as MLE, GMM, or ensemble regressors. By unifying distortion operators with neural estimation, this study advances both the methodological and empirical foundations of CAT bond pricing within actuarial science.
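Since the framework builds on the Wang transform, here is a self-contained sketch of Wang-transform risk loading applied to a toy discrete loss distribution (the layer losses and probabilities are invented); the jump-diffusion distortion, the peer adjustment, and the RNN estimation of the paper are not reproduced.

```python
import numpy as np
from statistics import NormalDist

_N = NormalDist()

def wang_transform(s, lam):
    """Wang distortion of a survival probability s: Phi(Phi^{-1}(s) + lam)."""
    s = min(max(s, 1e-12), 1.0 - 1e-12)
    return _N.cdf(_N.inv_cdf(s) + lam)

def distorted_expected_loss(losses, probs, lam):
    """Risk-loaded expected loss of a discrete loss distribution, computed by
    integrating the Wang-distorted survival function over loss layers."""
    order = np.argsort(losses)
    losses = np.asarray(losses, float)[order]
    probs = np.asarray(probs, float)[order]
    surv = 1.0 - np.cumsum(probs)                   # S(x_i) = P(L > x_i)
    surv_left = np.concatenate(([1.0], surv[:-1]))  # S just below x_i
    incr = np.diff(np.concatenate(([0.0], losses))) # loss-layer widths
    return float(sum(wang_transform(s, lam) * w
                     for s, w in zip(surv_left, incr)))

losses = [0.0, 10.0, 50.0]                          # hypothetical CAT-loss layers
probs = [0.90, 0.07, 0.03]
print(round(distorted_expected_loss(losses, probs, lam=0.0), 4))  # 2.2: plain E[L]
print(distorted_expected_loss(losses, probs, lam=0.5) > 2.2)      # True: positive risk load
```

With `lam = 0` the distortion is the identity and the plain expected loss is recovered; a positive `lam` inflates tail survival probabilities, producing the risk load that the spread calibration in the paper targets.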
A trace of a sequence is generated by deleting each bit of the sequence independently with a fixed probability. The well-studied trace reconstruction problem asks how many traces are required to reconstruct an unknown binary sequence with high probability. In this paper, we study the multidimensional version of this problem for matrices and hypermatrices, where a trace is generated by deleting each row/column of the matrix or each slice of the hypermatrix independently with a constant probability. Previously, Krishnamurthy, Mazumdar, McGregor and Pal showed that $\exp (\widetilde {O}(n^{d/(d+2)}))$ traces suffice to reconstruct any unknown $n\times n$ matrix (for $d=2$) and any unknown $n^{\times d}$ hypermatrix. By developing a dimension reduction procedure and establishing a multivariate version of the Littlewood-type result that lower bounds sparse complex polynomials around $1$, we improve this upper bound by showing that $\exp (\widetilde {O}(n^{3/7}))$ traces suffice to reconstruct any unknown $n\times n$ matrix, and $\exp (\widetilde {O}(n^{3/5}))$ traces suffice to reconstruct any unknown $n^{\times d}$ hypermatrix. In contrast to the earlier bound, our new exponent is bounded away from $1$ even as $d$ becomes very large.
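For concreteness, generating a trace in the sense above is a one-liner, and the matrix case deletes whole rows and columns; this is just the deletion channel itself (with hypothetical inputs), not a reconstruction procedure.

```python
import random

def trace(bits, q, rng):
    """A trace of a sequence: delete each bit independently with prob. q."""
    return [b for b in bits if rng.random() >= q]

def matrix_trace(M, q, rng):
    """Matrix analogue (the d = 2 case): delete each row and each column
    independently with probability q."""
    keep_rows = [i for i in range(len(M)) if rng.random() >= q]
    keep_cols = [j for j in range(len(M[0])) if rng.random() >= q]
    return [[M[i][j] for j in keep_cols] for i in keep_rows]

rng = random.Random(0)
print(trace([1, 0, 1, 1, 0, 0, 1], 0.3, rng))      # a random subsequence
print(matrix_trace([[1, 2], [3, 4]], 0.3, rng))    # a random row/column minor
```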
In this paper, we derive the exact formula for the probability that three randomly and uniformly selected points from the interior of the unit cube form vertices of an obtuse triangle.
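The probability in question is easy to estimate by Monte Carlo (the exact formula is the paper's contribution): by the law of cosines, a triangle is obtuse iff its largest squared side length exceeds the sum of the other two.

```python
import numpy as np

def obtuse_fraction(n_trials, rng):
    """Monte Carlo estimate of the probability that three uniform points in
    the unit cube form an obtuse triangle."""
    a, b, c = (rng.random((n_trials, 3)) for _ in range(3))
    # squared side lengths, sorted per trial so s[2] is the largest
    s = np.sort(np.stack([((a - b) ** 2).sum(1),
                          ((b - c) ** 2).sum(1),
                          ((c - a) ** 2).sum(1)]), axis=0)
    return np.mean(s[2] > s[0] + s[1])          # obtuse iff law of cosines fails

rng = np.random.default_rng(3)
p_hat = obtuse_fraction(200000, rng)
print(round(p_hat, 3))
```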
This article critically examines how Web3 decentralization policy trends impact global digital governance, questioning whether they genuinely distribute power or merely shift influence to a new, tech-savvy elite. Based on fieldwork in Silicon Valley since August 2022 and engagement with scholars and practitioners up to December 2025, the article provides a conceptual analysis, with emerging empirical insights, of the nascent global Web3 movement. While Web3 advocates challenge centralized data monopolies and traditional state structures, this analysis critiques the assumption that Web3 democratizes power, highlighting both its potential for inclusion and its risks of exclusion, insofar as it may reinforce hierarchies rooted in technical expertise and digital access. While acknowledging the broader landscape of Web3 governance (including hybrid and federated models) and situating the discussion in both Global North and Global South contexts through global adoption cases, the article focuses on three post-Westphalian paradigms: (i) Network States, (ii) Network Sovereignties, and (iii) Algorithmic Nations. While Network States advocate for crypto-libertarian governance, Network Sovereignties and Algorithmic Nations emphasize cooperative governance aimed at empowering minority communities, such as indigenous groups, stateless nations, and e-diasporas, through decentralized, data-driven systems. By engaging with the promises, prospects, and pitfalls of Web3 as well as its limitations, this article questions whether Web3 can create a more inclusive global order or whether influence is increasingly concentrated among a new elite. It contributes to debates on sovereignty, governance, and citizenship by advocating hybrid policy frameworks that balance global and local dynamics, emphasizing solidarity, digital justice, and international cooperation for equitable Web3 governance.
We consider general discrete-time multitype branching processes on a countable set X. In these processes, a particle of type $x\in X$ generates a random number of children and chooses their types in X, not necessarily independently nor with the same law for different parent types. We introduce a new type of stochastic ordering of multitype branching processes, generalising the germ order introduced by Hutchcroft, which relies on the generating function of the process. We prove that, given two multitype branching processes with laws ${\boldsymbol{\mu}}$ and ${\boldsymbol{\nu}}$ respectively, with ${\boldsymbol{\mu}}\ge{\boldsymbol{\nu}}$, in every set where there is survival according to ${\boldsymbol{\nu}}$ there is also survival according to ${\boldsymbol{\mu}}$. Moreover, in every set where there is strong survival according to ${\boldsymbol{\nu}}$ there is also strong survival according to ${\boldsymbol{\mu}}$, provided that the supremum of the global extinction probabilities for the ${\boldsymbol{\nu}}$ process, taken over all starting points x, is strictly smaller than 1. New conditions for survival and strong survival for inhomogeneous multitype branching processes are provided. We also extend a result of Moyal which states that, under some conditions, the global extinction probability for a multitype branching process is the only fixed point of its generating function whose supremum over all starting coordinates may be smaller than 1.
The purpose of this paper is to analyze the degree index and the clustering index in dense random graphs. The degree index in our setup is a measure of degree irregularity whose basic properties are well studied in the literature, and the corresponding theoretical analysis in a random graph setting turns out to be tractable. The clustering index, based on similar reasoning, is introduced for the first time in this paper. Computing exact expressions for the expected clustering index turns out to be more challenging even in the case of Erdős–Rényi graphs, and our results provide relevant upper bounds. These are complemented with observations based on Monte Carlo simulations. In addition to the Erdős–Rényi case, we also present a simulation-based analysis for random regular graphs, the Barabási–Albert model, and the Watts–Strogatz model.
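Since the abstract leaves the indices' definitions to the paper, the sketch below uses the classical degree variance as a hypothetical stand-in irregularity measure and estimates its mean on simulated Erdős–Rényi graphs; `n`, `p`, and the number of replications are arbitrary.

```python
import random

def erdos_renyi(n, p, rng):
    """Adjacency sets of a G(n, p) random graph."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def degree_variance(adj):
    """Degree variance, a classic irregularity measure (a stand-in here for
    the degree index, whose exact definition is given in the paper)."""
    degs = [len(a) for a in adj]
    m = sum(degs) / len(degs)
    return sum((d - m) ** 2 for d in degs) / len(degs)

rng = random.Random(4)
vals = [degree_variance(erdos_renyi(60, 0.3, rng)) for _ in range(200)]
print(round(sum(vals) / len(vals), 1))   # ≈ 12 for n = 60, p = 0.3
```

Since degrees in G(n, p) are Binomial(n-1, p), the Monte Carlo mean lands near (n-1)p(1-p), illustrating why exact moments are tractable for degree-based indices.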
Given a sequence of graphs $G_n$ and a fixed graph H, denote by $T(H, G_n)$ the number of monochromatic copies of the graph H in a uniformly random c-coloring of the vertices of $G_n$. In this paper we study the joint distribution of a finite collection of monochromatic graph counts in networks with multiple layers (multiplex networks). Specifically, given a finite collection of graphs $H_1, H_2, \ldots, H_d$ we derive the joint distribution of $(T(H_1, G_n^{(1)}), T(H_2, G_n^{(2)}), \ldots, T(H_d, G_n^{(d)}))$, where $\mathbf{G}_n = (G_n^{(1)}, G_n^{(2)}, \ldots, G_n^{(d)})$ is a collection of dense graphs on the same vertex set converging in the multiplex cut-metric. The limiting distribution is the sum of two independent components: a multivariate Gaussian and a sum of independent bivariate stochastic integrals. This extends previous results on the marginal convergence of monochromatic subgraphs in a sequence of graphs to the joint convergence of a finite collection of monochromatic subgraphs in a sequence of multiplex networks. Several applications and examples are discussed.
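To make the central object concrete, the following toy computation takes H = K_2 (so T(H, G) counts monochromatic edges) and checks the elementary fact that its expectation under a uniform c-coloring is |E(G)|/c; the 5-cycle and c = 2 are arbitrary choices.

```python
import random

def monochromatic_edges(edges, n, c, rng):
    """T(K_2, G): the number of monochromatic edges under a uniformly random
    c-coloring of the n vertices (H = K_2 is the simplest monochromatic
    subgraph count)."""
    color = [rng.randrange(c) for _ in range(n)]
    return sum(1 for u, v in edges if color[u] == color[v])

# Hypothetical toy graph: a 5-cycle, colored with c = 2 colors.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
rng = random.Random(5)
counts = [monochromatic_edges(edges, 5, 2, rng) for _ in range(50000)]
print(round(sum(counts) / len(counts), 2))   # ≈ |E| / c = 2.5
```

In the multiplex setting of the abstract, each $H_i$ is counted in its own layer $G_n^{(i)}$ under the same vertex coloring, which is what couples the coordinates of the limiting distribution.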
We identify the size of the largest connected component in a subcritical inhomogeneous random graph with a kernel of preferential attachment type. The component is polynomial in the graph size with an explicitly given exponent, which is strictly larger than the exponent for the largest degree in the graph. This is in stark contrast to the behaviour of inhomogeneous random graphs with a kernel of rank one. Our proof uses local approximation by branching random walks going well beyond the weak local limit and novel results on subcritical killed branching random walks.
The goal of the Paris Agreement is to prevent global temperatures from rising by more than 2°C above pre-industrial levels and pursue efforts to limit them to 1.5°C above pre-industrial levels. This requires a significant reduction in global greenhouse gas emissions and achieving net zero emissions by 2050. Portfolio alignment metrics are forward-looking metrics intended to help investors understand whether their investment portfolios are on track to meet the Paris Agreement goals. They also aim to encourage capital flows towards activities needed for a net zero transition. Since 2020, several metrics have been put forward by industry groups and explored in technical papers. Companies and actuaries have been exploring the practicalities of these metrics and starting to incorporate them into investment reporting and design. But this has not been without key challenges. The Net Zero and Implications for Investment Portfolios working party aims to help actuaries improve their understanding of what net zero means for an investment portfolio and what the key mechanisms are to achieve this, as well as key challenges to date and the outlook for development.
In this paper we propose a refracted skew Brownian motion as a risk model with endogenous regime switching, which generalizes the refracted diffusion risk process introduced by Gerber and Shiu. We consider an optimal dividend problem for the refracted skew Brownian risk model and identify sufficient conditions for the barrier strategy, the band strategy, and their variants, respectively, to be optimal.
Extropy-based divergence measures offer distinct advantages over their entropy-based counterparts, owing to their mathematical simplicity and enhanced interpretability. Relative extropy, introduced by Lad et al. [5], is a symmetric divergence measure between two probability distributions, and Mohammadi et al. [8] introduced an asymmetric divergence between two distributions based on extropy. In this article, we further study these measures, their properties, and their interrelationships. To address the divergence between truncated lifetime distributions, we define dynamic relative extropy for residual and past lifetime scenarios. Exploring the interrelationships among the dynamic cases of relative extropy, extropy divergence, and extropy inaccuracy, we derive some unique properties and characterizations of the exponential distribution. A nonparametric estimator for relative extropy is developed, and its performance is assessed through numerical simulation studies. The practical applicability of relative extropy is demonstrated by analyzing the divergence in lifetime patterns of mice in a lifetime feeding experiment and in the shopping patterns of customers across age and income groups. Relative extropy is also applied to measure the dissimilarity between two images.
Past research has indicated that the covariance of the stochastic gradient descent (SGD) noise induced by minibatching plays a critical role in determining its regularization effect and its escape from low-potential points. Motivated by recent research in this area, we prove universality results showing that noise classes with the same mean and covariance structure as minibatch SGD have similar properties. We mainly consider the SGD algorithm with multiplicative noise introduced in previous work (Wu et al (2016) Int. Conf. on Machine Learning, PMLR, pp. 10367–10376), which admits a much more general noise class than minibatch SGD. We establish non-asymptotic bounds for the multiplicative SGD algorithm in the Wasserstein distance. We also show that the error term of the algorithm is approximately a scaled Gaussian distribution with mean 0 at any fixed point.
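The minibatch-noise object the abstract refers to can be probed empirically: for a least-squares loss, the sketch below estimates the mean and covariance of the minibatch gradient noise at a fixed point in parameter space (the model, batch size, and data are all invented, and this is plain minibatch SGD noise, not the multiplicative noise class of the paper).

```python
import numpy as np

def minibatch_grad(w, X, y, idx):
    """Least-squares gradient evaluated on the minibatch indexed by idx."""
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)

rng = np.random.default_rng(6)
n, d, B = 2000, 3, 32
X = rng.standard_normal((n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)
w = np.zeros(d)                                   # fixed evaluation point

full = minibatch_grad(w, X, y, np.arange(n))      # full-batch gradient
noise = np.array([minibatch_grad(w, X, y, rng.choice(n, B, replace=False)) - full
                  for _ in range(5000)])
cov = noise.T @ noise / len(noise)                # empirical noise covariance
print(np.round(np.diag(cov), 1))                  # per-coordinate noise variance
```

The noise is mean-zero by construction, and its covariance shrinks as the batch size `B` grows; universality results of the kind described above say that any noise matching this mean and covariance structure behaves similarly.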