Fine-grained mortality forecasting has gained momentum in actuarial research due to its ability to capture localized, short-term fluctuations in death rates. This paper introduces MortFCNet, a deep-learning method that predicts weekly death rates using region-specific weather inputs. Unlike traditional Serfling-based methods and gradient-boosting models, which rely on predefined Fourier terms and manual feature engineering, MortFCNet learns patterns directly from raw time-series data. Extensive experiments across over 200 NUTS-3 regions in France, Italy, and Switzerland demonstrate that MortFCNet consistently outperforms both a standard Serfling-type baseline and XGBoost in predictive accuracy. Our ablation studies further confirm its ability to uncover complex relationships in the data without feature engineering. More broadly, this work offers a new perspective on deep learning for advancing fine-grained mortality forecasting.
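The abstract does not describe MortFCNet's architecture, so no faithful implementation can be given here; purely as an illustration of the general approach — learning weekly death rates directly from raw mortality and weather series, with no Fourier terms or hand-crafted features — a minimal PyTorch sketch might look as follows (layer types, sizes, and input conventions are all assumptions, not the authors' design):

```python
# Minimal sketch of a weekly death-rate forecaster. This is NOT the
# authors' MortFCNet; the architecture below is an assumption chosen
# only to illustrate learning from raw time series without Fourier
# terms or manual feature engineering.
import torch
import torch.nn as nn

class WeeklyMortalityNet(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, weeks, n_features) raw weekly death rates + weather
        _, (h, _) = self.encoder(x)
        return self.head(h[-1]).squeeze(-1)   # next week's death rate

model = WeeklyMortalityNet(n_features=5)
dummy = torch.randn(8, 52, 5)   # 8 regions, 52 weeks, 5 raw covariates
print(model(dummy).shape)       # torch.Size([8])
```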
We study a continuous-time mutually catalytic branching model on the lattice $\mathbb{Z}^{d}$. The model describes the behavior of two populations of particles performing random walks on the lattice in the presence of branching; that is, each particle dies at a certain rate and is replaced by a random number of offspring. The branching rate of a particle in one population is proportional to the number of particles of the other population at the same site. We study the long-time behavior of this model, in particular coexistence and noncoexistence of the two populations in the long run. Finally, we construct a sequence of renormalized processes and use duality techniques to investigate its limiting behavior.
This article examines the governance challenges of human genomic data sharing. The analysis builds upon the unique characteristics that distinguish genomic data from other forms of personal data, particularly its dual nature as both uniquely identifiable to individuals and inherently collective, reflecting familial and ethnic group characteristics. This duality informs a tripartite risk taxonomy: individual privacy violations, group-level harms, and bioterrorism threats. Examining regulatory frameworks in the European Union (EU) and China, the article demonstrates how current data protection mechanisms—primarily anonymisation and informed consent—prove inadequate for genomic data governance due to the impossibility of true anonymisation and the limitations of consent-based models in addressing the risks of such sharing. Drawing on the concept of “genomic contextualism,” the article proposes a nuanced framework that incorporates interest balancing, comprehensive data lifecycle management, and tailored technical safeguards. The objective is to protect individuals and underrepresented groups while maximising the scientific and clinical value of genomic data.
We introduce a family of parsimonious network models intended to generalize the configuration model to temporal settings. We present consistent estimators for the model parameters and perform numerical simulations to illustrate their finite-sample properties. We also derive analytical solutions for the basic and effective reproduction numbers in the early stage of the discrete-time SIR spreading process on our temporal configuration model (TCM). We apply three distinct TCMs to empirical student proximity networks and compare their performance.
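The paper's temporal reproduction-number formulas are not reproduced here, but they have a well-known static benchmark: for an SIR process on the standard (static) configuration model in which each infectious contact transmits with probability $T$, the basic reproduction number is

$$R_0 \;=\; T\,\frac{\langle k(k-1)\rangle}{\langle k\rangle},$$

i.e., $T$ times the mean excess degree, where $\langle\cdot\rangle$ denotes moments of the degree distribution. The TCM expressions derived in the paper modify this picture to account for edge dynamics.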
We study how COVID-19 affected the ownership co-location network of French multinationals over 2012–2022. Using INSEE's LiFi data, we build annual country-industry co-location networks and assess their robustness via topology (density, centralization, assortativity, and clustering) and edge survival (weighted Jaccard similarity). We then test for post-shock shifts in the determinants of dyadic co-location with a multiple regression quadratic assignment procedure (MRQAP). Three results emerge. First, the network's core is robust: topology shows no discontinuity and centrality persists. Second, adaptation is continuous at the margin: around one-third of edges rewire, concentrated in the periphery, while core ties endure. Third, after 2020 the determinants of tie weights change, with a reduced role for gravity-like factors and greater cross-sector rebalancing. The system is thus structurally robust with active peripheral adjustment. Rather than strict resilience in the sense of a return to the pre-COVID configuration, we observe durable strategic reweighting.
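As an illustration of the edge-survival measure, the weighted Jaccard similarity between two annual networks compares, dyad by dyad, the smaller to the larger edge weight; a minimal sketch follows (data and variable names are invented for illustration, not taken from the paper):

```python
# Weighted Jaccard similarity between two edge-weight dictionaries keyed
# by country-industry dyads: sum of element-wise minima over sum of
# element-wise maxima. Illustrative sketch only.
def weighted_jaccard(w1: dict, w2: dict) -> float:
    keys = set(w1) | set(w2)
    num = sum(min(w1.get(k, 0.0), w2.get(k, 0.0)) for k in keys)
    den = sum(max(w1.get(k, 0.0), w2.get(k, 0.0)) for k in keys)
    return num / den if den else 1.0

net_2019 = {("FR-auto", "DE-auto"): 3.0, ("FR-auto", "IT-chem"): 1.0}
net_2020 = {("FR-auto", "DE-auto"): 2.0, ("FR-auto", "US-tech"): 1.0}
print(weighted_jaccard(net_2019, net_2020))  # 0.4: partial edge survival
```

Values near 1 indicate that edges and their weights survive from one year to the next; the paper's finding of roughly one-third edge rewiring corresponds to intermediate values, concentrated in the periphery.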
Many empirical systems contain complex interactions of arbitrary size, representing, for example, chemical reactions, social groups, co-authorship relationships, and ecological dependencies. These interactions are known as higher-order interactions, and the collection of these interactions comprises a higher-order network, or hypergraph. Hypergraphs have established themselves as a popular and versatile mathematical representation of such systems, and a number of software packages written in various programming languages have been designed to analyze these networks. However, the ecosystem of higher-order network analysis software is fragmented, as each package has its own specialized programming interface and compatible data representations. To enable seamless data exchange between higher-order network analysis packages, we introduce the Hypergraph Interchange Format (HIF), a standardized format for storing higher-order network data. HIF supports multiple types of higher-order networks, including undirected hypergraphs, directed hypergraphs, and abstract simplicial complexes, while extensions for multiplex, temporal, and ordered hypergraphs are being actively explored. To accommodate the wide variety of metadata used in different contexts, HIF also supports attributes associated with nodes, edges, and incidences. This initiative is a collaborative effort involving authors, maintainers, and contributors from prominent hypergraph software packages. The project provides a JSON schema with corresponding documentation and unit tests, example HIF-compliant datasets, and tutorials demonstrating the use of HIF with several popular higher-order network analysis packages.
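A minimal example of what an HIF document might look like for a small undirected hypergraph is sketched below; the key names follow the structure described in the abstract (network type, nodes, edges, incidences, and attached metadata), but the authoritative field names and constraints live in the published HIF JSON schema, so treat the details here as assumptions:

```python
# Illustrative HIF-style document for a 3-node, 2-edge undirected
# hypergraph. Key names are assumptions based on the abstract's
# description; consult the HIF JSON schema for the exact format.
import json

hif = {
    "network-type": "undirected",
    "metadata": {"name": "toy-hypergraph"},
    "nodes": [{"node": n} for n in ("a", "b", "c")],
    "edges": [{"edge": "e1"}, {"edge": "e2"}],
    "incidences": [  # one record per (edge, node) membership
        {"edge": "e1", "node": "a"},
        {"edge": "e1", "node": "b"},
        {"edge": "e2", "node": "b"},
        {"edge": "e2", "node": "c"},
    ],
}
print(json.dumps(hif, indent=2))
```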
Fix integers $r \ge 2$ and $1\le s_1\le \cdots \le s_{r-1}\le t$ and set $s=\prod_{i=1}^{r-1}s_i$. Let $K=K(s_1, \ldots, s_{r-1}, t)$ denote the complete $r$-partite $r$-uniform hypergraph with parts of size $s_1, \ldots, s_{r-1}, t$. We prove that the Zarankiewicz number satisfies $z(n, K)= n^{r-1/s-o(1)}$ provided $t > 3^{s+o(s)}$. Previously this was known only for $t > ((r-1)(s-1))!$, due to Pohoata and Zakharov. Our novel approach, which uses Behrend's construction of sets with no 3-term arithmetic progression, also applies for small values of $s_i$; for example, it gives $z(n, K(2,2,7))=n^{11/4-o(1)}$, where the exponent $11/4$ is optimal, whereas previously this was known only with 7 replaced by 721.
We study stationary distributions in the context of stochastic reaction networks. In particular, we are interested in complex balanced reaction networks and the reduction of such networks by assuming that a set of species (called non-interacting species) are degraded fast (and therefore essentially absent from the network), implying that some reaction rates are large relative to others. Technically, we assume that these reaction rates are scaled by a common parameter $N$ and let $N\to\infty$. The limiting stationary distribution as $N\to\infty$ is compared with the stationary distribution of the reduced reaction network obtained by elimination of the non-interacting species. In general, the limiting stationary distribution can differ from the stationary distribution of the reduced reaction network. We identify various sufficient conditions under which the two distributions coincide, including when the reaction network is detailed balanced and when the set of non-interacting species consists of intermediate species. In the latter case, the limiting stationary distribution essentially retains the form of the complex balanced distribution. This finding is particularly surprising given that the reduced reaction network can be non-weakly reversible and may exhibit unconventional kinetics.
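For context, the complex balanced form referenced in the last two sentences is the classical product-form Poisson stationary distribution of complex balanced stochastic mass-action systems: on an irreducible component of the state space,

$$\pi(x) \;\propto\; \prod_{i} \frac{c_i^{x_i}}{x_i!},$$

where $c$ is a complex balanced equilibrium of the associated deterministic system. The surprising point of the abstract is that this product form essentially survives the fast-degradation limit even when the reduced network is not weakly reversible.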
The money exchange model is a type of agent-based model used to study how wealth distribution and inequality evolve through monetary exchanges between individuals. The primary focus of such models is to identify the limiting wealth distributions that emerge at the macroscopic level, given the microscopic rules governing the exchanges among agents. In this paper, we formulate generalized versions of the immediate exchange model, the uniform reshuffling model, and the uniform saving model, all of which are money exchange models, as discrete-time interacting particle systems, and we characterize their stationary distributions. Furthermore, we prove that, under appropriate scaling, the asymptotic wealth distribution converges to an exponential distribution for the uniform reshuffling model, and, for the immediate exchange model and the random saving model (which generalizes the uniform saving model), to either an exponential or a gamma distribution, depending on the tail behavior of the number of coins given or saved. In particular, our results provide a mathematically rigorous formulation and generalization of assertions previously made in studies based on numerical simulations and heuristic arguments.
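Purely as an illustration of the exponential limit for the uniform reshuffling model (the paper's models are discrete-coin interacting particle systems; the continuous-money simulation below is a heuristic stand-in, not the paper's construction):

```python
# Uniform reshuffling: two random agents pool their money and split it
# uniformly at random. The empirical wealth distribution approaches an
# exponential law, illustrating (not proving) the limit in the paper.
import random

n, steps = 5_000, 500_000
money = [1.0] * n                       # everyone starts with 1 unit
for _ in range(steps):
    i, j = random.randrange(n), random.randrange(n)
    if i != j:
        total = money[i] + money[j]
        u = random.random()
        money[i], money[j] = u * total, (1 - u) * total

mean = sum(money) / n                   # stays 1: money is conserved
below = sum(m < mean for m in money) / n
print(f"P(wealth < mean) ~ {below:.3f}; Exp(1) predicts 1 - 1/e ~ 0.632")
```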
This paper investigates a continuous-time multidimensional risk model with stochastic returns driven by a geometric Lévy process, where each main claim is accompanied by a random number of delayed claims. By employing a framework of multivariate regular variation for claim sizes and allowing for arbitrarily dependent claim-number processes, we conduct asymptotic analyses for two types of ruin probabilities. Numerical examples are used to demonstrate the accuracy of our asymptotic estimates.
A novel family of statistical distributions, called the enriched truncated exponentiated generalized family, is developed to model heavy-tailed data. One of the three-parameter sub-models of this family, derived from the log-logistic distribution, is comprehensively studied. Its statistical properties are explored, including moments and the Fisher information matrix. In addition, tail-heaviness is studied using the tail-index approach. The method of maximum likelihood is used for parameter estimation, and the existence and uniqueness of these estimators are established. The flexibility of the new family is further validated by applying it to the Norwegian fire insurance claim dataset. Goodness-of-fit measures are used to illustrate the adequacy of the proposed family of distributions. Furthermore, a backtesting procedure is conducted for well-known risk measures to assess the accuracy of the right-tail fit.
Sidorenko's famous conjecture asserts that for every bipartite graph $H$, the number of homomorphisms from $H$ to a graph $G$ of given edge density is minimised when $G$ is pseudorandom. We prove that for any graph $H$, the graph obtained by replacing each edge of $H$ with a generalised theta graph consisting of even paths satisfies Sidorenko's conjecture, provided a certain divisibility condition on the number of paths holds. To achieve this, we prove unconditionally that bipartite graphs obtained by replacing each edge of a complete graph with a generalised theta graph satisfy Sidorenko's conjecture, extending a result of Conlon, Kim, Lee and Lee [J. Lond. Math. Soc., 2018].
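In homomorphism-density form, the conjecture asserts that every bipartite graph $H$ satisfies

$$t(H, G) \;\ge\; t(K_2, G)^{e(H)} \qquad \text{for every graph } G,$$

where $t(\cdot, G)$ denotes homomorphism density and $e(H)$ the number of edges of $H$; that is, among graphs of a fixed edge density, pseudorandom graphs asymptotically minimise the density of copies of $H$.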
This cross-sectional study investigated how care home size influences COVID-19 transmission dynamics, focusing on outbreaks in England during the second wave of COVID-19 (Wave 2; December 2020 to March 2021) and the Omicron wave (December 2021 to February 2022). Using data from the UK Health Security Agency and the Care Quality Commission, positive SARS-CoV-2 test results were matched to care home registration and occupancy data, examining outbreak trajectories in homes of varying sizes and resident age groups. The study included over 90,000 positive cases across the two waves. Small care homes (SCHs, with 10 or fewer beds), predominantly housing younger adults, showed significantly higher early positivity rates: 42% of residents were positive at outbreak detection, rising to 61% by day 7. In contrast, larger homes had early positivity rates of only 3–6%. These findings suggest that SCHs, often designed for communal living, facilitate rapid within-home transmission similar to household settings. The study concludes that outbreak control strategies in SCHs should differ from those in larger care homes, emphasizing proportionate, individualized approaches that consider resident vulnerability and minimize disruption to social support systems. These results have broader implications for managing future infectious disease outbreaks and support the development of tailored guidance based on care home size and resident demographics.
In this paper, we consider a bidimensional risk model with stochastic returns and dependent subexponential claims, in which every main claim may be accompanied by a delayed claim occurring after an uncertain period of time. The surplus of each business line is allowed to be invested in a portfolio of risk-free assets, and the price process of the investment is modeled by a geometric Lévy process. Moreover, we employ a time-claim-dependent structure to describe the dependence among the claims and their interarrival times. Some uniform asymptotic formulas for the finite-time ruin probabilities are derived under this structure. Finally, a simulation study is conducted to evaluate the accuracy of the derived results.
We study Langevin-type algorithms for sampling from Gibbs distributions whose potentials are dissipative and whose weak gradients have finite moduli of continuity, not necessarily convergent to zero. Our main result is a non-asymptotic upper bound on the 2-Wasserstein distance between a Gibbs distribution and the law of a general Langevin-type algorithm, based on a Liptser–Shiryaev-type condition for changes of measure and Poincaré inequalities. We apply this bound to show that the Langevin Monte Carlo algorithm can approximate Gibbs distributions with arbitrary accuracy if the potentials are dissipative and their gradients are uniformly continuous. We also propose Langevin-type algorithms with spherical smoothing for distributions whose potentials need not be convex or continuously differentiable, and establish their polynomial complexity.
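The Langevin Monte Carlo algorithm referenced here is the Euler–Maruyama discretisation of the overdamped Langevin diffusion $\mathrm{d}X_t = -\nabla U(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t$. A minimal sketch for a toy potential follows (the step size and the Gaussian test target are placeholder choices; the paper treats far weaker regularity on $U$ than this example suggests):

```python
# Unadjusted Langevin Monte Carlo:
#   x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * N(0, I).
# Toy example with U(x) = |x|^2 / 2, whose Gibbs law is N(0, I).
import numpy as np

def lmc(grad_U, x0, step=0.01, n_iter=50_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iter):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_U(x) + np.sqrt(2 * step) * noise
        samples.append(x.copy())
    return np.array(samples)

samples = lmc(lambda x: x, x0=np.zeros(2))   # grad of |x|^2/2 is x
print(samples[10_000:].mean(axis=0))         # ~ (0, 0)
print(samples[10_000:].var(axis=0))          # ~ (1, 1), up to step bias
```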
The effectiveness of nirsevimab against respiratory syncytial virus (RSV) hospitalization during the 2024/2025 season in Spain was estimated using a test-negative design (TND) and hospital-based respiratory infection surveillance data. Children born between 1 April 2024 and 31 March 2025 and hospitalized with severe respiratory infection between the start of the 2024 immunization campaign (regionally variable, between 16 September and 1 October 2024) and 31 March 2025 were systematically tested for RSV by RT-PCR within 10 days of symptom onset and classified as cases if positive or as controls if negative. Nirsevimab effectiveness, computed as (1 − odds ratio) × 100, was estimated using logistic regression, adjusted for admission week, age, sex, high-risk factors, and regional RSV hospitalization rate. We included 199 cases (68.8% immunized) and 360 controls (86.4% immunized). Overall effectiveness was 65.5% (95% confidence interval: 45.2 to 78.3). Effectiveness was similar among infants born before and after the campaign start (63.6% vs. 70.4%, respectively). We found an unexpected early decrease in effectiveness with increasing time since immunization and age, albeit with wide confidence intervals for some groups. Strong age–period–cohort effects and potential sources of bias were identified, highlighting the need to further explore the methodological challenges of implementing the TND in the dynamic population of newborns.
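With individual-level data, the stated estimator — effectiveness = (1 − odds ratio) × 100 — is typically obtained from a logistic regression of case status on immunization plus confounders. A minimal sketch on synthetic data follows (variable names and the simulated effect size are invented, not the study's dataset):

```python
# Test-negative design sketch: regress RSV-positive status on
# immunization and a confounder, then read effectiveness off the
# immunization odds ratio. Synthetic data for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2_000
df = pd.DataFrame({
    "immunized": rng.binomial(1, 0.8, n),
    "age_months": rng.uniform(0, 12, n),
})
# Simulated protective effect: log-odds of being a case drop by 1.1
logit = -1.0 - 1.1 * df["immunized"] + 0.05 * df["age_months"]
df["case"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = smf.logit("case ~ immunized + age_months", data=df).fit(disp=False)
odds_ratio = np.exp(fit.params["immunized"])
print(f"effectiveness ~ {(1 - odds_ratio) * 100:.1f}%")  # ~67% by design
```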
General additive functionals of patricia tries are studied asymptotically in a probabilistic model with independent, identically distributed letters from a finite alphabet. Asymptotic normality after normalization is shown, together with asymptotic expansions of the moments. There are two regimes, depending on the algebraic structure of the letter probabilities, with and without oscillations in the expansion of the moments. As a first application, the proportion of fringe trees of patricia tries with $k$ keys is studied, which oscillates around $(1-\rho(k))/(2Hk(k-1))$, where $H$ denotes the source entropy and $\rho(k)$ is exponentially decreasing; the oscillations are identified explicitly. As a second application, the independence number of patricia tries and of tries is considered; the general results for additive functionals apply, and a leading constant is numerically approximated. The results extend work of Janson on tries by relating additive functionals on patricia tries to additive functionals on tries.