A graph $G$ is $q$-Ramsey for another graph $H$ if in any $q$-edge-colouring of $G$ there is a monochromatic copy of $H$, and the classic Ramsey problem asks for the minimum number of vertices in such a graph. This was broadened in the seminal work of Burr, Erdős, and Lovász to the investigation of other extremal parameters of Ramsey graphs, including the minimum degree.
It is not hard to see that if $G$ is minimally $q$-Ramsey for $H$ we must have $\delta (G) \ge q(\delta (H) - 1) + 1$, and we say that a graph $H$ is $q$-Ramsey simple if this bound can be attained. Grinshpun showed that this is typical of rather sparse graphs, proving that the random graph $G(n,p)$ is almost surely $2$-Ramsey simple when $\frac{\log n}{n} \ll p \ll n^{-2/3}$. In this paper, we explore this question further, asking for which pairs $p = p(n)$ and $q = q(n,p)$ we can expect $G(n,p)$ to be $q$-Ramsey simple.
We first extend Grinshpun’s result by showing that $G(n,p)$ is not just $2$-Ramsey simple, but is in fact $q$-Ramsey simple for any $q = q(n)$, provided $p \ll n^{-1}$ or $\frac{\log n}{n} \ll p \ll n^{-2/3}$. Next, when $p \gg \left ( \frac{\log n}{n} \right )^{1/2}$, we find that $G(n,p)$ is not $q$-Ramsey simple for any $q \ge 2$. Finally, we uncover some interesting behaviour for intermediate edge probabilities. When $n^{-2/3} \ll p \ll n^{-1/2}$, we find that there is some finite threshold $\tilde{q} = \tilde{q}(H)$, depending on the structure of the instance $H \sim G(n,p)$ of the random graph, such that $H$ is $q$-Ramsey simple if and only if $q \le \tilde{q}$. Aside from a couple of logarithmic factors, this resolves the qualitative nature of the Ramsey simplicity of the random graph over the full spectrum of edge probabilities.
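For orientation, the degree bound mentioned above admits a short standard argument, sketched here for the reader's convenience rather than quoted from the paper. Suppose $G$ is minimally $q$-Ramsey for $H$ and some vertex $v$ satisfies $d_G(v) \le q(\delta(H) - 1)$. By minimality, $G - v$ admits a $q$-colouring with no monochromatic copy of $H$. Distributing the edges at $v$ among the $q$ colours in classes of size at most $\delta(H) - 1$ extends this colouring to $G$: any monochromatic copy of $H$ would have to use $v$, yet $v$ has degree at most $\delta(H) - 1 < \delta(H)$ within its colour class, so no such copy exists. This contradicts $G$ being $q$-Ramsey, and hence $\delta(G) \ge q(\delta(H) - 1) + 1$.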
We consider the constrained-degree percolation model in a random environment (CDPRE) on the square lattice. In this model, each vertex $v$ has an independent random constraint $\kappa_v$ which takes the value $j\in \{0,1,2,3\}$ with probability $\rho_j$. The dynamics is as follows: at time $t=0$ all edges are closed; each edge $e$ attempts to open at a random time $U(e)\sim \mathrm{U}(0,1]$, independently of all the other edges. It succeeds if at time $U(e)$ both its end vertices have degrees strictly smaller than their respective constraints. We obtain exponential decay of the radius of the open cluster of the origin at all times when its expected size is finite. Since CDPRE is dominated by Bernoulli percolation, this result is meaningful only if the supremum of all values of $t$ for which the expected size of the open cluster of the origin is finite is larger than $\frac12$. We prove this last fact by showing a sharp phase transition for an intermediate model.
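To make these dynamics concrete, here is a minimal Python sketch (an illustration on a finite box, not code from the paper; the box side length n, the time t, and the constraint distribution rho are free parameters of the sketch). Edges attempt to open in the order of their uniform clocks, and an attempt succeeds only if both endpoints are still strictly below their constraints.

```python
import random

def cdpre_open_edges(n, t, rho, seed=0):
    """Simulate CDPRE on an n x n box of the square lattice up to time t.

    rho: probabilities (rho_0, rho_1, rho_2, rho_3) for the constraint values 0..3.
    Returns the set of open edges, each given as a pair of vertices.
    """
    rng = random.Random(seed)
    vertices = [(x, y) for x in range(n) for y in range(n)]
    # Independent random constraint kappa_v in {0,1,2,3} for every vertex.
    kappa = {v: rng.choices([0, 1, 2, 3], weights=rho)[0] for v in vertices}

    # Nearest-neighbour edges of the box.
    edges = []
    for x, y in vertices:
        if x + 1 < n:
            edges.append(((x, y), (x + 1, y)))
        if y + 1 < n:
            edges.append(((x, y), (x, y + 1)))

    # Each edge attempts to open at an independent uniform time U(e).
    times = {e: rng.random() for e in edges}

    degree = {v: 0 for v in vertices}   # current open degree of each vertex
    open_edges = set()
    # Process attempts in chronological order, keeping only those made before time t.
    for e in sorted(edges, key=times.get):
        if times[e] > t:
            break
        u, v = e
        # The attempt succeeds iff both endpoints are strictly below their constraints.
        if degree[u] < kappa[u] and degree[v] < kappa[v]:
            open_edges.add(e)
            degree[u] += 1
            degree[v] += 1
    return open_edges

# Example: number of open edges at time t = 0.6 with constraints mostly equal to 3.
opened = cdpre_open_edges(n=50, t=0.6, rho=(0.05, 0.05, 0.1, 0.8))
print(len(opened))
```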
Paragraph 53(a) of the new insurance accounting standard IFRS 17 suggests there is a relationship between the liability for remaining coverage (“LFRC”) calculated under the general measurement model (“GMM”) and the premium allocation approach (“PAA”), although it is not immediately obvious how the two are related or how they could result in a similar estimate for the LFRC. This paper explores the underlying relationship between the GMM and PAA through the equivalence principle and presents a set of sufficient mathematical conditions that result in an identical LFRC when calculated under the GMM and PAA. An illustrative example is included to demonstrate how the sufficient conditions can be applied in practice and the optimisation opportunities they offer to actuaries and accountants when conducting PAA eligibility testing.
This paper studies a novel Brownian functional defined as the supremum of a weighted average of the running Brownian range and its running reversal from extrema on the unit interval. We derive the Laplace transform for the squared reciprocal of this functional, which leads to explicit moment expressions that are new to the literature. We show that the proposed Brownian functional can be used to estimate the spot volatility of financial returns based on high-frequency price observations.
We propose an individual claims reserving model based on the conditional Aalen–Johansen estimator, as developed in Bladt and Furrer (2023a, arXiv:2303.02119). In our approach, we formulate a multi-state problem where the underlying variable is the individual claim size, rather than time. The states in this model represent development periods, and we estimate the cumulative distribution function of individual claim sizes using the conditional Aalen–Johansen estimates of the transition probabilities to an absorbing state. Our methodology reinterprets the concept of multi-state models and offers a strategy for modeling the complete curve of individual claim sizes. To illustrate our approach, we apply our model to both simulated and real datasets. Having access to the entire dataset enables us to support the use of our approach by comparing the predicted total final cost with the actual amount, as well as evaluating it in terms of the continuous ranked probability score.
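For intuition, the following Python sketch implements the plain (unconditional) Aalen–Johansen product-integral estimator with the claim size playing the role of time, as in the multi-state formulation described above; the paper itself relies on the conditional version of Bladt and Furrer, which additionally conditions on covariates, and the input format below is purely hypothetical. The estimated distribution function of the claim size can then be read off as the transition probability from the initial development state to the absorbing "settled" state.

```python
import numpy as np

def aalen_johansen(paths, n_states, grid):
    """Unconditional Aalen-Johansen estimator with claim size as the 'time' axis.

    paths: one trajectory per claim, given as [(0.0, initial_state), (u1, s1), ...],
           meaning the claim jumps into state s_i when its cumulative size reaches u_i.
    n_states: number of states; the last one is taken as the absorbing 'settled' state.
    grid: increasing claim sizes at which the matrix of transition probabilities is returned.
    """
    # All observed jump sizes together with the transitions occurring at them.
    jumps = {}
    for path in paths:
        for (_, h), (u, j) in zip(path, path[1:]):
            jumps.setdefault(u, []).append((h, j))

    def state_before(path, u):
        """State occupied just before cumulative size u."""
        state = path[0][1]
        for size, s in path[1:]:
            if size < u:
                state = s
            else:
                break
        return state

    P = np.eye(n_states)          # running product-integral P(0, u]
    out, g = [], 0
    for u in sorted(jumps):
        # Report grid points that lie strictly before the next jump size.
        while g < len(grid) and grid[g] < u:
            out.append(P.copy())
            g += 1
        # Nelson-Aalen increment matrix at size u.
        Y = np.zeros(n_states)    # risk set: claims occupying each state just before u
        for path in paths:
            Y[state_before(path, u)] += 1
        dA = np.zeros((n_states, n_states))
        for h, j in jumps[u]:
            if Y[h] > 0:
                dA[h, j] += 1.0 / Y[h]
        np.fill_diagonal(dA, -dA.sum(axis=1))
        P = P @ (np.eye(n_states) + dA)
    while g < len(grid):
        out.append(P.copy())
        g += 1
    return out   # out[i][h, j] estimates P(in state j at size grid[i] | in state h at size 0)
```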
As the global population continues to age, effective management of longevity risk becomes increasingly critical for various stakeholders. Accurate mortality forecasting serves as a cornerstone for addressing this challenge. This study proposes to leverage Kernel Principal Component Analysis (KPCA) to enhance mortality rate predictions. By extending the traditional Lee-Carter model with KPCA, we capture nonlinear patterns and complex relationships in mortality data. The newly proposed KPCA Lee-Carter algorithm is empirically tested and demonstrates superior forecasting performance. Furthermore, the model’s robustness is tested on data from the COVID-19 pandemic, showing that the KPCA Lee-Carter algorithm effectively captures increased uncertainty during extreme events while maintaining narrower prediction intervals. This makes it a valuable tool for mortality forecasting and risk management. Our findings contribute to the growing body of literature where actuarial science intersects with statistical learning, offering practical solutions to the challenges posed by an aging world population.
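The abstract does not spell out the algorithm, but the core idea of replacing the linear principal component step of Lee-Carter with kernel PCA can be sketched as follows. This is an illustrative reading under assumed choices (an RBF kernel and a random-walk-with-drift forecast of the extracted index), not the authors' implementation, and mapping the forecast index back to mortality rates would require an additional pre-image or regression step that is omitted here.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_lee_carter(log_m, horizon=10, kernel="rbf", gamma=None):
    """Illustrative KPCA variant of Lee-Carter.

    log_m: array of shape (n_ages, n_years) with log central mortality rates.
    Returns the age effect a_x, the extracted time index k_t, and its forecast.
    """
    a_x = log_m.mean(axis=1, keepdims=True)          # Lee-Carter age profile a_x
    centered = (log_m - a_x).T                        # rows = years, columns = ages

    # Nonlinear time index: first kernel principal component of the yearly profiles.
    kpca = KernelPCA(n_components=1, kernel=kernel, gamma=gamma)
    k_t = kpca.fit_transform(centered).ravel()

    # Forecast k_t with a random walk with drift, as in the classical model.
    drift = np.mean(np.diff(k_t))
    k_forecast = k_t[-1] + drift * np.arange(1, horizon + 1)
    return a_x.ravel(), k_t, k_forecast

# Example with synthetic data: 40 ages, 60 years of noisy, declining log rates.
rng = np.random.default_rng(1)
ages, years = np.arange(40), np.arange(60)
log_m = -2 - 0.05 * ages[:, None] - 0.01 * years[None, :] + 0.02 * rng.standard_normal((40, 60))
a_x, k_t, k_fc = kpca_lee_carter(log_m)
print(k_fc[:3])
```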
West Nile virus (WNV) is a mosquito-borne pathogen that can infect humans, equids, and many bird species, posing a threat to their health. It consists of eight lineages, with Lineage 1 (L1) and Lineage 2 (L2) being the most prevalent and pathogenic. Italy is one of the hardest-hit European nations, with 330 neurological cases and 37 fatalities in humans in the 2021–2022 season, in which L1 re-emerged after several years of low circulation. We assembled a database comprising all publicly available WNV genomes, along with 31 new Italian strains of WNV L1 sequenced in this study, to trace their evolutionary history using phylodynamics and phylogeography. Our analysis suggests that WNV L1 may have initially entered Italy from Northern Africa around 1985 and indicates a connection between European and Western Mediterranean countries, with two distinct strains circulating within Italy. Furthermore, we identified new genetic mutations that are typical of the Italian strains and that can be tested in future studies to assess their pathogenicity. Our research clarifies the dynamics of WNV L1 in Italy, provides a comprehensive dataset of genome sequences for future reference, and underscores the critical need for continuous and coordinated surveillance efforts between Europe and Africa.
We derive some key extremal features for stationary $k$th-order Markov chains that can be used to understand how the process moves between an extreme state and the body of the process. The chains are studied given that there is an exceedance of a threshold, as the threshold tends to the upper endpoint of the distribution. Unlike previous studies with $k>1$, we consider processes where standard limit theory describes each extreme event as a single observation without any information about the transition to and from the body of the distribution. Our work uses different asymptotic theory which results in non-degenerate limit laws for such processes. We study the extremal properties of the initial distribution and the transition probability kernel of the Markov chain under weak assumptions for broad classes of extremal dependence structures that cover both asymptotically dependent and asymptotically independent Markov chains. For chains with $k>1$, the transition of the chain away from the exceedance involves novel functions of the $k$ previous states, in comparison to just the single value when $k=1$. This leads to an increase in the complexity of determining the form of this class of functions, their properties, and the method of their derivation in applications. We find that it is possible to derive an affine normalization, dependent on the threshold excess, such that non-degenerate limiting behaviour of the process, in the neighbourhood of the threshold excess, is assured for all lags. We find that these normalization functions have an attractive structure that has parallels to the Yule–Walker equations. Furthermore, the limiting process is always linear in the innovations. We illustrate the results with the study of $k$th-order stationary Markov chains with exponential margins based on widely studied families of copula dependence structures.
Consider a branching random walk on the real line with a random environment in time (BRWRE). A necessary and sufficient condition for the non-triviality of the limit of the derivative martingale is formulated. To this end, we investigate the random walk in a time-inhomogeneous random environment (RWRE), which is related to the BRWRE by the many-to-one formula. The key step is to establish Tanaka’s decomposition for the RWRE conditioned to stay non-negative (or above a line), which is of independent interest.
We introduce a modification of the generalized Pólya urn model containing two urns, and we study the number of balls $B_j(n)$ of a given color $j\in\{1,\ldots,J\}$ added to the urns after $n$ draws, where $J\in\mathbb{N}$. We provide sufficient conditions under which the random variables $(B_j(n))_{n\in\mathbb{N}}$, properly normalized and centered, converge weakly to a limiting random variable. The result reveals a trichotomy similar to that in the classical case with one urn, one of the main differences being that in the scaling we encounter 1-periodic continuous functions. Another difference in our results compared with classical urn models is that the phase transition of the second-order behavior occurs at $\sqrt{\rho}$ and not at $\rho/2$, where $\rho$ is the dominant eigenvalue of the mean replacement matrix.
We study a discrete-time life cycle retirement planning problem for individual workers with four distinct investment options: self-management with dynamic investment (S), self-management with benchmark investment (B), hire-management with flexible allocation ($\text{H}_{1}$), and hire-management with alpha focus ($\text{H}_{2}$). We examine the investment strategies and consumption patterns during the defined contribution fund accumulation period, ending with a life annuity purchase at retirement to finance post-retirement consumption. Based on a model calibrated to US data, we employ numerical dynamic programming techniques to optimize the worker’s financial decisions. Our analysis reveals that, despite the agency risk, delegated investments can add value to a worker’s lifetime utility, with the $\text{H}_2$ option yielding the best lifetime utility outcome. However, after taking the fund management fee into consideration, we find that both the $\text{H}_1$ and $\text{H}_2$ options may not offer additional value compared to the S option, yet they still surpass the B option in performance.
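As a generic illustration of the kind of numerical dynamic programming involved (a toy sketch with made-up parameters and a single risky asset, not the paper's calibrated model with its four investment options or the annuity purchase at retirement), backward induction over a wealth grid might look as follows.

```python
import numpy as np

def lifecycle_dp(T=40, gamma=3.0, beta=0.96, r_f=0.02, mu=0.06, sigma=0.18, y=1.0):
    """Toy backward induction for a consumption/portfolio life cycle problem.

    Each period the worker receives income y, chooses consumption and a risky-asset
    share, and carries the remaining wealth forward; CRRA utility with parameter gamma.
    """
    u = lambda c: c ** (1.0 - gamma) / (1.0 - gamma)
    w_grid = np.linspace(0.1, 30.0, 60)                      # wealth grid
    nodes, weights = np.polynomial.hermite_e.hermegauss(7)   # Gauss-Hermite quadrature
    shocks = mu + sigma * nodes                              # risky returns ~ N(mu, sigma^2)
    probs = weights / weights.sum()

    V = u(w_grid)                                            # terminal period: consume everything
    c_shares = np.linspace(0.05, 0.95, 19)                   # consumption as a share of cash on hand
    alphas = np.linspace(0.0, 1.0, 11)                       # risky-asset share
    policies = []
    for t in range(T - 1, -1, -1):
        V_new = np.empty_like(w_grid)
        policy = np.empty((len(w_grid), 2))                  # optimal (consumption, alpha) per wealth level
        for i, w in enumerate(w_grid):
            cash = w + y
            best, best_c, best_a = -np.inf, None, None
            for cf in c_shares:
                c = cf * cash
                savings = cash - c
                for a in alphas:
                    # Next-period wealth under each return shock, kept on the grid.
                    w_next = savings * (1.0 + r_f + a * (shocks - r_f))
                    w_next = np.clip(w_next, w_grid[0], w_grid[-1])
                    value = u(c) + beta * probs @ np.interp(w_next, w_grid, V)
                    if value > best:
                        best, best_c, best_a = value, c, a
            V_new[i] = best
            policy[i] = (best_c, best_a)
        V = V_new
        policies.append(policy)
    return w_grid, policies[::-1]        # policies[t] gives the decision rules for period t

w_grid, policies = lifecycle_dp(T=10)
print(policies[0][:3])                   # optimal consumption and risky share at low wealth, period 0
```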
Most infections with pandemic Vibrio cholerae are thought to result in subclinical disease and are not captured by surveillance. Previous estimates of the ratio of infections to clinical cases have varied widely (2 to 100 infections per case). Understanding cholera epidemiology and immunity relies on the ability to translate between numbers of clinical cases and the underlying number of infections in the population. We estimated the infection incidence during the first months of an outbreak in a cholera-naive population using a Bayesian vibriocidal antibody titer decay model combining measurements from a representative serosurvey and clinical surveillance data. A total of 3,880 suspected cases were reported in Grande Saline, Haiti, between 20 October 2010 and 6 April 2011 (clinical attack rate 18.4%). We found that more than 52.6% (95% Credible Interval (CrI) 49.4–55.7) of the population ≥2 years old showed serologic evidence of infection, with a lower infection rate among children aged 2–4 years (35.5%; 95% CrI 24.2–51.6) compared with people ≥5 years old (53.1%; 95% CrI 49.4–56.4). This estimated infection rate, nearly three times the clinical attack rate, with underdetection mainly seen in those ≥5 years, has likely impacted subsequent outbreak dynamics. Our findings show how seroincidence estimates improve understanding of the links between cholera burden, transmission dynamics, and immunity.
Discussions of the development and governance of data-driven systems have, of late, come to revolve around questions of trust and trustworthiness. However, the connections between the two, and more importantly the conditions under which trustworthiness might reliably lead to the placing of ‘well-directed’ trust, remain relatively understudied. In this paper, we argue that this challenge for the creation of ‘rich’ trustworthiness, which we term the Trustworthiness Recognition Problem (TRP), can usefully be approached as a problem of effective signalling, and we suggest that its resolution can be informed by a multidisciplinary approach that draws on insights from economics and behavioural ecology. We suggest, overall, that the domain specificity inherent to the signalling theory paradigm offers an effective solution to the TRP, which we believe will be foundational to whether and how rapidly improving technologies are integrated into the healthcare space. Solving the TRP will not be possible without such an interdisciplinary approach, and we suggest further avenues of inquiry that we believe will be fruitful.
Generative artificial intelligence (GenAI) has gained significant popularity in recent years. It is being integrated into a variety of sectors for its abilities in content creation, design, research, and many other functionalities. The capacity of GenAI to create new content—ranging from realistic images and videos to text and even computer code—has caught the attention of both the industry and the general public. The rise of publicly available platforms that offer these services has also made GenAI systems widely accessible, contributing to their mainstream appeal and dissemination. This article delves into the transformative potential and inherent challenges of incorporating GenAI into the domain of judicial decision-making. The article provides a critical examination of the legal and ethical implications that arise when GenAI is used in judicial rulings and their underlying rationale. While the adoption of this technology holds the promise of increased efficiency in the courtroom and expanded access to justice, it also introduces concerns regarding bias, interpretability, and accountability, thereby potentially undermining judicial discretion, the rule of law, and the safeguarding of rights. Around the world, judiciaries in different jurisdictions are taking different approaches to the use of GenAI in the courtroom. Through case studies of GenAI use by judges in jurisdictions including Colombia, Mexico, Peru, and India, this article maps out the challenges presented by integrating the technology in judicial determinations, and the risks of embracing it without proper guidelines for mitigating potential harms. Finally, this article develops a framework that promotes a more responsible and equitable use of GenAI in the judiciary, ensuring that the technology serves as a tool to protect rights, reduce risks, and ultimately, augment judicial reasoning and access to justice.
In this article, we give explicit bounds on the Wasserstein and Kolmogorov distances between random variables lying in the first chaos of the Poisson space and the standard normal distribution, using the results of Last et al. (Prob. Theory Relat. Fields 165, 2016). Relying on the theory developed by Saulis and Statulevicius in Limit Theorems for Large Deviations (Kluwer, 1991) and on a fine control of the cumulants of the first chaoses, we also derive moderate deviation principles, Bernstein-type concentration inequalities, and normal approximation bounds with Cramér correction terms for the same variables. The aforementioned results are then applied to Poisson shot noise processes and, in particular, to the generalized compound Hawkes point processes (a class of stochastic models, introduced in this paper, which generalizes classical Hawkes processes). This extends the recent results of Hillairet et al. (ALEA 19, 2022) and Khabou et al. (J. Theoret. Prob. 37, 2024) regarding the normal approximation and those of Zhu (Statist. Prob. Lett. 83, 2013) for moderate deviations.
Viruses exhibit remarkable genetic variability. An ensemble of infecting viruses, also called a viral quasispecies, is a cloud of mutants centered around a specific genotype. The simplest model of evolution, whose equilibrium state is described by the quasispecies equation, is the Moran–Kingman model. For the sharp-peak landscape, we perform several exact computations and derive exact formulas. We also obtain an exact formula for the quasispecies distribution, involving a series and the mean fitness. A very simple formula for the mean Hamming distance is derived, which is exact and does not require a specific asymptotic expansion (such as sending the length of the macromolecules to $\infty$ or the mutation probability to 0). With the help of these formulas, we present an original proof for the well-known phenomenon of the error threshold. We recover the limiting quasispecies distribution in the long-chain regime. We also attempt to extend these formulas to a general fitness landscape. We obtain an equation involving the covariance of the fitness and the Hamming class number in the quasispecies distribution. Going beyond the sharp-peak landscape, we consider fitness landscapes having finitely many peaks and a plateau-type landscape. Finally, within this framework, we prove rigorously the possible occurrence of the survival of the flattest, a phenomenon which was previously discovered by Wilke et al. (Nature 412, 2001) and which has been investigated in several works (see e.g. Codoñer et al. (PLOS Pathogens 2, 2006), Franklin et al. (Artificial Life 25, 2019), Sardanyés et al. (J. Theoret. Biol. 250, 2008), and Tejero et al. (BMC Evolutionary Biol. 11, 2011)).
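For readers less familiar with the terminology, the quasispecies equation referred to here is the standard equilibrium condition (stated for background; the notation is generic and not taken from the paper): the equilibrium frequencies $(x_k)$ of the genotype classes satisfy
$$\bar f\, x_k \;=\; \sum_{j} x_j\, f_j\, M(j,k), \qquad \bar f \;=\; \sum_j f_j\, x_j,$$
where $f_j$ is the fitness of class $j$ and $M(j,k)$ is the probability that reproduction of a type-$j$ individual yields a type-$k$ offspring. In the sharp-peak landscape the master sequence has fitness $\sigma > 1$ and every other sequence has fitness $1$.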
In this article, I consider the moral issues that might arise from the possibility of creating more complex and sophisticated autonomous intelligent machines, or simply artificial intelligence (AI), that would have the human capacity for moral reasoning, judgment, and decision-making, and from the possibility of humans enhancing their moral capacities beyond what is considered normal for humanity. These two possibilities create an urgent need for ethical principles that could be used to analyze the moral consequences of the intersection of AI and transhumanism. I deploy personhood-based relational ethics grounded in Afro-communitarianism as an African ethical framework to evaluate some of the moral problems at the intersection of AI and transhumanism. In doing so, I propose some Afro-ethical principles for research and policy development in AI and transhumanism.
Anthrax is a bacterial zoonotic disease caused by Bacillus anthracis. We qualitatively examined facilitators of and barriers to responding to a potential anthrax outbreak using the Capability, Opportunity, Motivation, Behaviour (COM-B) model in the high-risk rural district of Namisindwa, in Eastern Uganda. We chose the COM-B model because it provides a systematic approach for selecting evidence-based techniques and approaches for promoting a prompt behavioural response to anthrax outbreaks. Unpacking these facilitators and barriers enables leaders and community members to understand existing resources and gaps so that they can leverage them in future anthrax outbreaks.
This was a qualitative cross-sectional study that was part of a larger anthrax outbreak simulation study conducted in September 2023. We conducted 10 key informant interviews with key stakeholders. The interviews were audio-recorded on Android-enabled phones and later transcribed verbatim. The transcripts were analyzed using a deductive thematic content approach in NVivo 12.
The facilitators were: respondents' knowledge of anthrax disease and anthrax outbreak response, experience and the presence of surveillance guidelines, availability of resources, and the presence of communication channels. The identified barriers were: porous borders that facilitate unregulated cross-border animal trade, lack of essential personal protective equipment, and lack of funds for surveillance and response activities.
Generally, the district was partially ready for the next anthrax outbreak. It was well staffed, and its technical staff had the knowledge required to respond to an anthrax outbreak, but it lacked adequate funds for animal, environmental and human surveillance and the related response activities. We think that our study findings are generalizable to similar settings and therefore call for the implementation of such periodic evaluations to help leverage the strong areas and improve other aspects. Anthrax is a growing threat in the region, and there should be proactive prevention efforts; specifically, we recommend vaccination of livestock and further research towards human vaccines.
When implementing Markov chain Monte Carlo (MCMC) algorithms, perturbation caused by numerical errors is sometimes inevitable. This paper studies how the perturbation of MCMC affects the convergence speed and approximation accuracy. Our results show that when the original Markov chain converges to stationarity fast enough and the perturbed transition kernel is a good approximation to the original transition kernel, the corresponding perturbed sampler also has fast convergence speed and high approximation accuracy. Our convergence analysis is conducted under either the Wasserstein metric or the $\chi^2$ metric, both of which are widely used in the literature. The results can be extended to obtain non-asymptotic error bounds for MCMC estimators. We demonstrate how to apply our convergence and approximation results to the analysis of specific sampling algorithms, including random walk Metropolis, the Metropolis-adjusted Langevin algorithm with perturbed target densities, and parallel tempering Monte Carlo with perturbed densities. Finally, we present some simple numerical examples to verify our theoretical claims.
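As a small illustration of the setting (a generic sketch, not the paper's analysis), the following compares random walk Metropolis chains targeting a density and a perturbed version of it; the perturbation here is an arbitrary small smooth distortion of the log-density chosen only for the example, with its size controlled by eps.

```python
import numpy as np

def rwm(log_density, n_iter=50_000, step=1.0, x0=0.0, seed=0):
    """Random walk Metropolis with Gaussian proposals for a one-dimensional target."""
    rng = np.random.default_rng(seed)
    x, logp = x0, log_density(x0)
    samples = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()
        logp_prop = log_density(prop)
        if np.log(rng.random()) < logp_prop - logp:   # Metropolis accept/reject
            x, logp = prop, logp_prop
        samples[i] = x
    return samples

# Exact target: standard normal (log-density up to an additive constant).
log_pi = lambda x: -0.5 * x**2
# Perturbed target: the same log-density plus a small smooth distortion.
eps = 0.05
log_pi_tilde = lambda x: -0.5 * x**2 + eps * np.cos(x)

exact = rwm(log_pi)
perturbed = rwm(log_pi_tilde, seed=1)
# Compare estimates of E[X^2] under the two chains; the gap shrinks as eps -> 0.
print(np.mean(exact**2), np.mean(perturbed**2))
```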
The analysis of insurance and annuity products issued on multiple lives requires statistical models that account for lifetime dependence. This paper presents a Dirichlet process mixture-based approach for modeling dependent lifetimes within a group, such as a married couple, accounting for individual as well as group-specific covariates. The model is analyzed in a fully Bayesian setting and illustrated by jointly modeling the lifetimes of male–female couples in a portfolio of joint and last survivor annuities of a Canadian life insurer. The inferential approach accounts for right censoring and left truncation, which are common features of survival data. The model shows improved in-sample and out-of-sample performance compared with traditional approaches assuming independent lifetimes, and it offers additional insights into the determinants of the dependence between lifetimes and their impact on joint and last survivor annuity prices.