Despite efforts by the Volta River Authority (VRA) to provide services for schistosomiasis control in communities along Ghana’s Volta Basin, high rates of transmission and re-infection persist in the region. To strengthen intervention effectiveness, the VRA partnered with the University of Health and Allied Sciences to conduct implementation research aimed at developing context-specific, evidence-based quality improvement strategies. This mixed-methods study evaluates the reach, effectiveness, adoption, implementation, and maintenance of the VRA’s quality improvement intervention for its mass drug administration (MDA) for schistosomiasis. Baseline and endline surveys were analysed using Stata, and qualitative data from in-depth interviews (IDIs) and focus group discussions (FGDs) were coded and analysed thematically using Taguette. Urogenital schistosomiasis prevalence decreased by 87.83% in Shai Osudoku, 88.98% in South Tongu, and 90.96% in Asuogyaman after the intervention. The findings revealed high training levels among district health management staff and community drug distributors, high health worker satisfaction with the training, and positive community reception of the intervention. However, praziquantel side effects and related opportunity costs may have posed a barrier to drug uptake. Moreover, re-infection remains a challenge, which could be attributed to high domestic and economic reliance on the Volta River.
This study assessed the impact and cost-effectiveness of pre-exposure prophylaxis (PrEP) in reducing HIV infections and HIV-related deaths among four key populations in China: men who have sex with men (MSM), female sex workers (FSW), people who inject drugs (PWID), and HIV-negative partners of serodiscordant couples (SDC). Decision-analytic Markov models simulated HIV transmission and progression in cohorts of 100,000 adults over 40 years under three strategies (no PrEP, daily oral PrEP, and on-demand oral PrEP), evaluated both nationally and in high-incidence provinces. Cost-effectiveness was measured using a willingness-to-pay threshold of US$37,653 per QALY. Across all populations, on-demand PrEP was the most cost-effective strategy. Among MSM, it was cost-effective both nationwide (ICER: $4,554/QALY) and in high-incidence provinces (ICER: $1,045-2,129/QALY), reducing new infections by 24.7%. Daily PrEP was also cost-effective for MSM nationally and prevented 19.9% of infections. For FSW, on-demand PrEP was cost-effective in high-incidence provinces (ICER: $25,399-37,045/QALY), reducing infections by 21.8%-22.5%. For PWID, it was cost-effective in high-incidence provinces (ICER: $10,361-29,560/QALY), reducing infections by 15.5%-17.9%. For HIV-negative partners of SDC, on-demand PrEP was cost-effective both nationally and in high-incidence provinces, reducing infections by 24.0%. Overall, on-demand PrEP offers substantial health and economic benefits, particularly for HIV-negative partners of SDC and in high-incidence regions.
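For context, a strategy is deemed cost-effective when its incremental cost-effectiveness ratio (ICER), the extra cost divided by the extra quality-adjusted life years (QALYs) gained relative to a comparator, falls below the willingness-to-pay threshold. The minimal Python sketch below illustrates the decision rule; the cohort totals are hypothetical and are not taken from the study.

```python
# Illustration of the ICER decision rule; all cohort numbers are hypothetical.
WTP = 37_653  # US$ per QALY: the willingness-to-pay threshold in the abstract

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost divided by incremental QALYs."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical 40-year discounted totals for a 100,000-person cohort
no_prep   = dict(cost=1.00e8, qaly=1.50e6)
on_demand = dict(cost=1.18e8, qaly=1.54e6)

ratio = icer(on_demand["cost"], on_demand["qaly"],
             no_prep["cost"], no_prep["qaly"])
print(f"ICER = ${ratio:,.0f}/QALY -> cost-effective: {ratio <= WTP}")
```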
Hybrid stochastic differential equations (SDEs) are a useful tool for modeling continuously varying stochastic systems modulated by a random environment, which may depend on the system state itself. In this paper we establish the pathwise convergence of solutions to hybrid SDEs using space-grid discretizations. Though time-grid discretizations are a classical approach for simulation purposes, our space-grid discretization provides a link with multi-regime Markov-modulated Brownian motions. This connection allows us to explore aspects that have been largely unexplored in the hybrid SDE literature. Specifically, we exploit our convergence result to obtain efficient and computationally tractable approximations for first-passage probabilities and expected occupation times of the solutions to hybrid SDEs. Lastly, we illustrate the effectiveness of the resulting approximations through numerical examples.
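To make the object of study concrete, the sketch below simulates a two-regime hybrid SDE in which the environment's switching rate depends on the current state, and estimates a first-passage probability by Monte Carlo. It uses a classical time-grid (Euler-Maruyama) scheme purely for illustration; the paper's approximations are instead built on a space-grid discretization. All dynamics below are hypothetical.

```python
# Monte Carlo estimate of a first-passage probability for a two-regime hybrid
# SDE whose switching rate depends on the current state (hypothetical model).
import numpy as np

rng = np.random.default_rng(0)

mu = {0: 0.5, 1: -0.3}      # regime-dependent drift
sigma = {0: 0.4, 1: 0.8}    # regime-dependent volatility

def switch_rate(regime, x):
    # the random environment may depend on the system state itself
    return (1.0 if regime == 0 else 2.0) + abs(x)

def first_passage_prob(x0=0.0, level=1.0, T=5.0, dt=0.01, n_paths=2000):
    """Estimate P(sup_{t<=T} X_t >= level) over n_paths simulated paths."""
    hits = 0
    for _ in range(n_paths):
        x, r, t = x0, 0, 0.0
        while t < T:
            if rng.random() < switch_rate(r, x) * dt:  # first-order switching
                r = 1 - r
            x += mu[r] * dt + sigma[r] * np.sqrt(dt) * rng.standard_normal()
            t += dt
            if x >= level:
                hits += 1
                break
    return hits / n_paths

print(first_passage_prob())
```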
In this paper we are concerned with susceptible–infected–removed (SIR) epidemics with vertex-dependent recovery and infection rates on complete graphs. We show that the hydrodynamic limit of our model is driven by a nonlinear function-valued ordinary differential equation consistent with a mean-field analysis. We further show that the fluctuation of our process is driven by a generalized Ornstein–Uhlenbeck process. A key step in the proofs of the main results is to show that states of different vertices are approximately independent as the population $N\rightarrow+\infty$.
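For intuition, in the homogeneous special case where all vertices share the same infection rate $\beta$ and recovery rate $\gamma$, the mean-field analysis yields the classical SIR system

$$\frac{ds}{dt} = -\beta\,s\,i, \qquad \frac{di}{dt} = \beta\,s\,i - \gamma\,i, \qquad \frac{dr}{dt} = \gamma\,i,$$

where $s$, $i$, and $r$ denote the susceptible, infected, and removed fractions; vertex-dependent rates lift this finite-dimensional system to the function-valued equation described above.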
We study the bilateral preference graphs $\mathit{LK}(n, k)$ of La and Kabkab, obtained as follows. Put independent and uniform [0, 1] weights on the edges of the complete graph $K_n$. Then, each edge (i, j) is included in $\mathit{LK}(n,k)$ if it is bilaterally preferred, in the sense that it is among the k edges of lowest weight incident to vertex i, and among the k edges of lowest weight incident to vertex j. We show that $k = \log(n)$ is the connectivity threshold, solving a conjecture of La and Kabkab, and obtaining finer results about the window. We also investigate the asymptotic behavior of the average degree of vertices in $\mathit{LK}(n, k)$ as $n\rightarrow\infty$.
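The construction is simple to implement directly from the definition; a short sketch follows, with illustrative parameters $n = 200$ and $k = 6$ (close to $\log(200) \approx 5.3$, i.e. near the connectivity threshold).

```python
# Sketch of the LK(n, k) construction: i.i.d. uniform weights on the edges of
# K_n, keeping edge (i, j) iff it is among the k lightest edges at *both*
# endpoints ("bilaterally preferred"). Parameters are illustrative.
import numpy as np

def lk_graph(n, k, seed=None):
    rng = np.random.default_rng(seed)
    w = np.triu(rng.random((n, n)), 1)
    w = w + w.T                            # symmetric weight matrix on K_n
    np.fill_diagonal(w, np.inf)            # exclude self-loops
    # for each vertex, the endpoints of its k lightest incident edges
    pref = [set(map(int, np.argsort(w[i])[:k])) for i in range(n)]
    return {(i, j) for i in range(n) for j in pref[i] if i < j and i in pref[j]}

edges = lk_graph(200, 6, seed=0)
print(f"{len(edges)} edges, average degree {2 * len(edges) / 200:.2f}")
```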
In the 1980s, Erdős and Sós initiated the study of Turán problems with a uniformity condition on the distribution of edges: the uniform Turán density of a hypergraph $H$ is the infimum over all $d$ for which any sufficiently large hypergraph with the property that all its linear-size subhypergraphs have density at least $d$ contains $H$. In particular, they asked to determine the uniform Turán densities of $K_4^{(3)-}$ and $K_4^{(3)}$. After more than 30 years, the former was solved in [Israel J. Math. 211 (2016), 349–366] and [J. Eur. Math. Soc. 20 (2018), 1139–1159], while the latter remains open. To date, the only values known to be attained as uniform Turán densities of $3$-uniform hypergraphs are $0$, $1/27$, $4/27$, and $1/4$. We extend this list by a fifth value: we prove an easy-to-verify sufficient condition for the uniform Turán density to be equal to $8/27$ and identify hypergraphs satisfying this condition.
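One common way to make the prose definition precise (exact conventions vary slightly across the cited papers) is the following: call a $3$-uniform hypergraph $G$ on $n$ vertices $(d, \varepsilon)$-uniformly dense if every vertex set $S$ with $|S| \ge \varepsilon n$ induces at least $d\binom{|S|}{3}$ edges. Then

$$\pi_{u}(H) = \inf\bigl\{\, d \in [0,1] : \text{for every } \varepsilon > 0, \text{ every sufficiently large } (d,\varepsilon)\text{-uniformly dense } G \text{ contains } H \,\bigr\}.$$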
Blockchain technology has attracted attention from public sector agencies, mainly for its perceived potential to improve transparency, data integrity, and administrative processes. However, its concrete value and applicability within government settings remain contested, and real-world adoption has been limited and uneven. This raises questions regarding the conditions that promote or impede adoption at the institutional level. Fuzzy-set qualitative comparative analysis is employed in this research to explore how the combined effects of national-level regulatory clarity, financial provision, digital readiness, and ecosystem engagement shape patterns of blockchain adoption in the European public sector. Rather than identifying any single factor as decisive, our findings reveal a plurality of institutional paths leading to high adoption intensity, with regulatory certainty and European Union funding appearing most frequently on high-consistency paths. In contrast, digital readiness indicators and national research and development budgets are substitutable, challenging resource-based perceptions of technology adoption and supporting a configurational understanding that accounts for institutional interdependence and contextuality. We argue that policy strategies should not aim at overall readiness in general but should instead leverage key institutional strengths relative to local conditions and public value objectives.
This paper considers option valuation under finite mixture models in a discrete-time economy. Specifically, the Esscher transform is employed to select a pricing kernel. Novel finite mixture models with negative-shifted Gamma and negative-shifted inverse Gaussian distributions are developed. A hybrid finite mixture model that allows different parametric forms for component distributions is introduced to incorporate model uncertainty. An empirical characteristic function estimation method is employed to estimate the finite mixture models. Closed-form pricing formulas for a European call option are obtained for some finite mixture models. Empirical examples using data on the Bitcoin-USD prices are provided to illustrate an application of the proposed models to value Bitcoin options.
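For reference, the Esscher transform replaces the one-period log-return density $f$ by an exponentially tilted density, and the tilt parameter is pinned down by a martingale condition. In the standard discrete-time setup,

$$f_\theta(x) = \frac{e^{\theta x} f(x)}{\mathbb{E}\left[e^{\theta X}\right]}, \qquad \mathbb{E}^{\theta^*}\!\left[e^{X}\right] = e^{r},$$

where $X$ is the one-period log return, $r$ the one-period risk-free rate, and $\theta^*$ the Esscher parameter defining the pricing kernel. When $f$ is a finite mixture, the normalizing expectation is simply the weighted sum of the component moment generating functions, which is what makes closed-form pricing tractable in this setting.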
Credibility theory provides a fundamental framework in actuarial science for estimating policyholder premiums by blending individual claims experience with overall portfolio data. Bühlmann and Bühlmann–Straub credibility models are widely used because, in the Bayesian hierarchical setting, they are the best linear Bayes estimators, minimizing the Bayes risk (expected squared error loss) within the class of linear estimators given the experience data for a particular risk class. To improve estimation accuracy, quadratic credibility models incorporate higher-order terms, capturing more information about the underlying risk structure. This study develops a robust quadratic credibility (RQC) framework that integrates second-order polynomial adjustments of robustly transformed ground-up loss data, such as winsorized moments, to improve stability in the presence of extreme claims or heavy-tailed distributions. Extending semi-linear credibility, RQC maintains interpretability while enhancing statistical efficiency. We establish its asymptotic properties, derive closed-form expressions for the RQC premium, and demonstrate its superior performance in reducing mean square error (MSE). We additionally derive semi-linear credibility structural parameters using winsorized data, further strengthening the robustness of credibility estimation. Analytical comparisons and empirical applications highlight RQC’s ability to capture claim heterogeneity, offering a more reliable and equitable approach to premium estimation. This research advances credibility theory by introducing a refined methodology that balances efficiency, robustness, and practical applicability across diverse insurance settings.
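For reference, the classical Bühlmann premium that RQC extends is the linear credibility estimator

$$P = Z\,\bar{X} + (1-Z)\,\mu, \qquad Z = \frac{n}{n+k}, \qquad k = \frac{\mathbb{E}\left[\operatorname{Var}(X \mid \Theta)\right]}{\operatorname{Var}\left(\mathbb{E}[X \mid \Theta]\right)},$$

where $\bar{X}$ is the risk class’s mean experience over $n$ periods, $\mu$ the collective mean, and $\Theta$ the latent risk parameter. RQC augments this linear form with second-order terms in robustly transformed (for example, winsorized) losses.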
This article provides a general asymptotic theory for mildly explosive autoregression. We confirm that Cauchy limit theory remains invariant across a broad range of error processes, including general linear processes with martingale difference innovations, stationary causal processes, and nonlinear autoregressive time series, such as threshold autoregressive and bilinear models. Our results unify the Cauchy limit theory for long memory, short memory, and anti-persistent innovations within a single framework. Notably, we demonstrate that in the presence of anti-persistent innovations, the Cauchy limit theory may be violated when the regression coefficient approaches the local-to-unity range. Additionally, we explore extensions to models with varying drift, which are of significant interest in their own right.
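Concretely, the mildly explosive setting follows Phillips and Magdalinos (2007):

$$y_t = \rho_n\,y_{t-1} + u_t, \qquad \rho_n = 1 + \frac{c}{k_n}, \quad c > 0,$$

with $k_n \rightarrow \infty$ and $k_n/n \rightarrow 0$, so that $\rho_n$ approaches unity more slowly than the local-to-unity rate. In the baseline case with independent innovations, the centred and normalized least-squares estimator satisfies $\frac{k_n \rho_n^{\,n}}{2c}\,(\hat{\rho}_n - \rho_n) \Rightarrow \mathcal{C}$, where $\mathcal{C}$ is a standard Cauchy variate.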
Designing efficient and rigorous numerical methods for sequential decision-making under uncertainty is a difficult problem that arises in many application areas. In this paper we focus on the numerical solution of a subclass of impulse control problems for the piecewise deterministic Markov process (PDMP) when the jump times are hidden. We first state the problem as a partially observed Markov decision process (POMDP) on a continuous state space and with controlled transition kernels corresponding to some specific skeleton chains of the PDMP. We then proceed to build a numerically tractable approximation of the POMDP by tailor-made discretizations of the state spaces. The main difficulty in evaluating the discretization error comes from the possible random jumps of the PDMP between consecutive epochs of the POMDP and requires special care. Finally, we discuss the practical construction of discretization grids and illustrate our method on simulations.
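As a generic illustration of the discretization idea (not the paper's tailor-made grids), the sketch below projects a continuous state onto a finite grid via nearest-neighbour lookup and runs approximate value iteration on that grid; the dynamics, rewards, and actions are entirely hypothetical stand-ins.

```python
# Generic state-space discretization for dynamic programming: project a
# continuous state onto its nearest grid point and iterate a Bellman backup
# on the finite grid. Everything below is a simplified, hypothetical stand-in.
import numpy as np

grid = np.linspace(0.0, 1.0, 51)   # finite grid over a 1-D continuous state
actions = [0.0, 0.1]               # e.g. "wait" vs. a costly impulse
gamma = 0.95                       # discount factor
rng = np.random.default_rng(1)

def project(x):
    """Nearest-neighbour projection of a continuous state onto the grid."""
    return int(np.argmin(np.abs(grid - x)))

def step(x, a):
    # hypothetical controlled dynamics; Gaussian noise stands in for the
    # random jumps occurring between consecutive decision epochs
    x_next = float(np.clip(x + a - 0.05 + 0.1 * rng.standard_normal(), 0.0, 1.0))
    reward = -abs(x_next - 0.5) - (0.02 if a > 0 else 0.0)
    return x_next, reward

V = np.zeros(len(grid))
for _ in range(100):               # value-iteration sweeps with MC backups
    V_new = np.empty_like(V)
    for i, x in enumerate(grid):
        q = []
        for a in actions:
            samples = [step(x, a) for _ in range(20)]
            q.append(np.mean([r + gamma * V[project(xn)] for xn, r in samples]))
        V_new[i] = max(q)
    V = V_new
print(round(float(V[project(0.5)]), 3))
```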
Build a firm foundation for studying statistical modelling, data science, and machine learning with this practical introduction to statistics, written with chemical engineers in mind. It introduces a data–model–decision approach to applying statistical methods to real-world chemical engineering challenges, establishes links between statistics, probability, linear algebra, calculus, and optimization, and covers classical and modern topics such as uncertainty quantification, risk modelling, and decision-making under uncertainty. Over 100 worked examples using MATLAB and Python demonstrate how to apply theory to practice, over 70 end-of-chapter problems reinforce student learning, and key topics are introduced using a modular structure that supports learning at a range of paces and levels. Requiring only a basic understanding of calculus and linear algebra, this textbook is the ideal introduction for undergraduate students in chemical engineering, and a valuable preparatory text for advanced courses in data science and machine learning with chemical engineering applications.
In recent years, the manufacturing sector has seen an influx of artificial intelligence applications seeking to harness its capabilities to improve productivity. However, manufacturing organizations have a limited understanding of the risks posed by the use of artificial intelligence, especially those related to trust, responsibility, and ethics. While significant effort has been put into developing various general frameworks and definitions to capture these risks, manufacturing and supply chain practitioners face difficulties in implementing these frameworks and understanding their impact. These issues can have a significant effect on manufacturing companies, not only at an organizational level but also on their employees, clients, and suppliers. This paper aims to increase understanding of trustworthy, responsible, and ethical artificial intelligence challenges as they apply to manufacturing and supply chains. We first conduct a systematic mapping study on concepts relevant to trust, responsibility, and ethics and their interrelationships. We then use a broadened view of a machine learning lifecycle as a basis to understand how risks and challenges related to these concepts emanate from each phase of the lifecycle. We follow a case-study-driven approach, providing several illustrative examples of how these challenges manifest in actual manufacturing practice. Finally, we propose a series of research questions as a roadmap for future research on trustworthy, responsible, and ethical artificial intelligence applications in manufacturing, to ensure that the envisioned economic and societal benefits are delivered safely and responsibly.
In many contexts, an individual’s beliefs and behavior are affected by the choices of their social or geographic neighbors. This influence results in local correlation in people’s actions, which in turn affects how information and behaviors spread. Previously developed frameworks capture local social influence using network games, but discard local correlation in players’ strategies. This paper develops a network games framework that allows for local correlation in players’ strategies by incorporating a richer partial information structure than previous models. Using this framework we also examine the dependence of equilibrium outcomes on network clustering—the probability that two individuals with a mutual neighbor are connected to each other. We find that clustering reduces the number of players needed to provide a public good and allows for market sharing in technology standards competitions.
When overdispersion and correlation co-occur in longitudinal count data, as is often the case, an analysis method that can handle both phenomena simultaneously is needed. The correlated Poisson distribution (CPD) proposed by Drezner and Farnum (Communications in Statistics – Theory and Methods, 22(11), 3051–3063, 1994) is a generalization of the classical Poisson distribution that incorporates an additional parameter allowing dependence between successive observations of the phenomenon under study. This parameter both measures the correlation and reflects the degree of dispersion. The classical Poisson distribution is obtained as a special case when the correlation is zero. We present an in-depth review of the CPD and discuss some methods for estimating the distribution parameters. Regression components are incorporated into the distribution by allowing one of its parameters to depend on available information concerning, in this case, automobile insurance policyholders. The proposed distribution can be viewed as an alternative to the Poisson, negative binomial, and Poisson-inverse Gaussian approaches. We then describe applications of the distribution, suggest it is appropriate for modeling the number of claims in an automobile insurance portfolio, and establish some new distribution properties.
The practice of actuarial science has always been rooted in computation. From the early days of hand-constructed tables and commutation functions to today’s large-scale stochastic simulations and machine learning models, actuaries have continuously adapted their analytical tools to the technology of their time. The rapid growth of high-performance computing, open-source software, and data-driven methodologies now offers new possibilities for actuarial modeling – transforming not only how we calculate, but also how we think about risk, uncertainty, and decision-making. This editorial introduces a thematic collection on Actuarial Software, which showcases recent advances at the intersection of actuarial modeling and computational science.