We introduce a novel preferential attachment model using the draw variables of a modified Pólya urn with an expanding number of colors, notably capable of modeling influential opinions (in terms of vertices of high degree) as the graph evolves. As in the Barabási–Albert model, the generated graph grows by one vertex at each time instance; in contrast, however, each vertex of the graph is uniquely characterized by a color, which is represented by a ball color in the Pólya urn. More specifically, at each time step we draw a ball from the urn and return it to the urn along with a number of reinforcing balls of the same color; we also add another ball of a new color to the urn. We then construct an edge between the new vertex (corresponding to the new color) and the existing vertex whose color ball is drawn. Using color-coded vertices in conjunction with the time-varying reinforcing parameter allows vertices added (born) later in the process to potentially attain a high degree in a way that is not captured in the Barabási–Albert model. We study the degree count of the vertices by analyzing the draw vectors of the underlying stochastic process. In particular, we establish the probability distribution of the random variable counting the number of draws of a given color, which determines the degree of the vertex corresponding to that color in the graph. We further provide simulation results comparing our model with the Barabási–Albert network.
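As a concrete illustration, the urn dynamics described above can be sketched in a few lines of Python. Drawing a ball uniformly from the list of all balls implements the urn draw (a color is drawn with probability proportional to its ball count); the constant reinforcement count and the seed are illustrative assumptions, whereas the paper works with a time-varying reinforcing parameter.

```python
import random

def urn_preferential_attachment(steps, reinforce=2, seed=0):
    """Sketch of the color-coded urn graph: each color is a vertex;
    drawing a color's ball attaches the newly born vertex to it."""
    rng = random.Random(seed)
    balls = [0]          # urn contents: one ball of color 0 (vertex 0)
    degree = {0: 0}      # vertex degrees
    edges = []
    for t in range(1, steps + 1):
        drawn = rng.choice(balls)            # draw: prob. proportional to ball count
        balls.extend([drawn] * reinforce)    # reinforce the drawn color
        balls.append(t)                      # add one ball of the new color
        degree[t] = 1
        degree[drawn] += 1
        edges.append((t, drawn))             # edge: new vertex -- drawn vertex
    return degree, edges

deg, edges = urn_preferential_attachment(1000)
```

Because each step adds exactly one vertex and one edge, the resulting graph is a tree on `steps + 1` vertices, as in the tree version of the Barabási–Albert model.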
A version of the classical Buffon problem in the plane naturally extends to the setting of any Riemannian surface with constant Gaussian curvature. The Buffon probability determines a Buffon deficit. The relationship between Gaussian curvature and the Buffon deficit is similar to the relationship that the Bertrand–Diguet–Puiseux theorem establishes between Gaussian curvature and both circumference and area deficits.
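For readers who want to experiment, the classical planar (zero-curvature) case is easy to simulate: for a needle of length $L \le d$ dropped on lines spaced $d$ apart, the crossing probability is $2L/(\pi d)$. The Monte Carlo sketch below covers only this flat case; the constant-curvature surfaces studied in the paper are not implemented.

```python
import math
import random

def buffon_crossing_probability(needle_len, spacing, trials, seed=0):
    """Monte Carlo estimate of the planar Buffon probability.
    The needle crosses a line iff the distance from its center to the
    nearest line is below (needle_len / 2) * sin(theta)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        center = rng.uniform(0.0, spacing / 2.0)   # distance to nearest line
        theta = rng.uniform(0.0, math.pi / 2.0)    # needle angle
        if center <= (needle_len / 2.0) * math.sin(theta):
            hits += 1
    return hits / trials

p_hat = buffon_crossing_probability(1.0, 1.0, 200_000)
# theoretical value for L = d = 1: 2 / pi
```

The Buffon deficit of the paper is the gap between such a flat-case probability and its curved-surface counterpart.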
Previous studies suggest that influenza virus infection may provide temporary non-specific immunity and hence lower the risk of non-influenza respiratory virus infection. In a randomized controlled trial of influenza vaccination, 1,330 children were followed up in 2009–2011. Respiratory swabs were collected when they reported acute respiratory illness and tested for influenza and other respiratory viruses. We used Poisson regression to compare the incidence of non-influenza respiratory virus infection before and after influenza virus infection. Based on 52 children with influenza B virus infection, the incidence rate ratio (IRR) of non-influenza respiratory virus infection after influenza virus infection was 0.47 (95% confidence interval: 0.27–0.82) compared with before infection. Simulation suggested that this IRR would have been 0.87 if the temporary protection did not exist. We identified a decreased risk of non-influenza respiratory virus infection after influenza B virus infection in children. Further investigation is needed to determine if this decreased risk could be attributed to temporary non-specific immunity acquired from influenza virus infection.
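For intuition about the headline number, a crude incidence rate ratio with a Wald interval on the log scale can be computed directly from event counts and person-time (the study itself used Poisson regression, which this simplified two-rate calculation only approximates). The counts below are hypothetical, chosen only so the point estimate matches the reported 0.47; they are not the study's data.

```python
import math

def irr_with_ci(cases_after, time_after, cases_before, time_before, z=1.96):
    """Crude incidence rate ratio (after vs. before) with a Wald
    confidence interval on the log scale, se = sqrt(1/a + 1/b)."""
    irr = (cases_after / time_after) / (cases_before / time_before)
    se = math.sqrt(1.0 / cases_after + 1.0 / cases_before)
    return irr, irr * math.exp(-z * se), irr * math.exp(z * se)

# hypothetical counts and person-years, purely illustrative:
irr, lo, hi = irr_with_ci(10, 100.0, 20, 94.0)
```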
We give algorithms for approximating the partition function of the ferromagnetic $q$-color Potts model on graphs of maximum degree $d$. Our primary contribution is a fully polynomial-time approximation scheme for $d$-regular graphs with an expansion condition at low temperatures (that is, bounded away from the order-disorder threshold). The expansion condition is much weaker than in previous works; for example, the expansion exhibited by the hypercube suffices. The main improvements come from a significantly sharper analysis of standard polymer models; we use extremal graph theory and applications of Karger’s algorithm to count cuts that may be of independent interest. It is #BIS-hard to approximate the partition function at low temperatures on bounded-degree graphs, so our algorithm can be seen as evidence that hard instances of #BIS are rare. We also obtain efficient algorithms in the Gibbs uniqueness region for bounded-degree graphs. While our high-temperature proof follows more standard polymer model analysis, our result holds in the largest-known range of parameters $d$ and $q$.
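To make the object being approximated concrete, the brute-force sketch below evaluates the ferromagnetic Potts partition function exactly on a tiny graph, using the common convention $Z=\sum_{\sigma} \exp(\beta \cdot \#\{\text{monochromatic edges}\})$. This is feasible only for a handful of vertices; the paper's contribution is approximating $Z$ efficiently on bounded-degree graphs.

```python
import math
from itertools import product

def potts_partition(edges, n_vertices, q, beta):
    """Exact ferromagnetic Potts partition function by enumeration:
    Z = sum over q-colorings of exp(beta * #monochromatic edges).
    Exponential in n_vertices -- toy sizes only."""
    z = 0.0
    for coloring in product(range(q), repeat=n_vertices):
        mono = sum(1 for u, v in edges if coloring[u] == coloring[v])
        z += math.exp(beta * mono)
    return z

# triangle graph, q = 2: at beta = 0 every coloring has weight 1, so Z = q**3
z_free = potts_partition([(0, 1), (1, 2), (0, 2)], 3, 2, 0.0)
```

For the triangle with $q=2$ one can also check by hand that $Z = 2e^{3\beta} + 6e^{\beta}$ (two monochromatic colorings, six with one dissenting vertex).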
We consider the super-replication problem for a class of exotic options known as life-contingent options within the framework of the Black–Scholes market model. The option is allowed to be exercised if the death of the option holder occurs before the expiry date, otherwise there is a compensation payoff at the expiry date. We show that there exists a minimal super-replication portfolio and determine the associated initial investment. We then give a characterisation of when replication of the option is possible. Finally, we give an example of an explicit super-replicating hedge for a simple life-contingent option.
Let $\mathcal{C}$ denote the family of all coherent distributions on the unit square $[0,1]^2$, i.e. all those probability measures $\mu$ for which there exists a random vector $(X,Y)\sim \mu$, a pair $(\mathcal{G},\mathcal{H})$ of $\sigma$-fields, and an event $E$ such that $X=\mathbb{P}(E\mid\mathcal{G})$, $Y=\mathbb{P}(E\mid\mathcal{H})$ almost surely. We examine the set $\mathrm{ext}(\mathcal{C})$ of extreme points of $\mathcal{C}$ and provide its general characterisation. Moreover, we establish several structural properties of finitely supported elements of $\mathrm{ext}(\mathcal{C})$. We apply these results to obtain the asymptotic sharp bound $\lim_{\alpha \to \infty}\alpha\cdot(\sup_{(X,Y)\in \mathcal{C}}\mathbb{E}|X-Y|^{\alpha}) = 2/\mathrm{e}$.
Motivated by insurance applications, we propose a new approach for the validation of real-world economic scenarios. This approach is based on the statistical test developed by Chevyrev and Oberhauser ((2022) Journal of Machine Learning Research, 23(176), 1–42.) and relies on the notions of signature and maximum mean discrepancy. This test makes it possible to check whether two samples of stochastic process paths come from the same distribution. Our contribution is to apply this test to a variety of stochastic processes exhibiting different pathwise properties (Hölder regularity, autocorrelation, and regime switches) that are relevant for the modelling of stock prices and stock volatility, as well as of inflation, in view of actuarial applications.
Expectiles have received increasing attention as a risk measure in risk management because of their coherency and elicitability at levels $\alpha\geq1/2$. With a view to practical risk assessments, this paper delves into the worst-case expectile, where only partial information on the underlying distribution is available and there is no closed-form representation. We explore the asymptotic behavior of the worst-case expectile on two specified ambiguity sets: one defined through the Wasserstein distance from a reference distribution, which transforms the problem into a convex optimization problem via the well-known Kusuoka representation, and the other induced by higher moment constraints. We obtain precise results in some special cases; nevertheless, there are no unified closed-form solutions. To fully characterize the extreme behavior, we pursue an approximate solution as the level $\alpha$ tends to 1. As an application of our technique, we investigate the ambiguity set induced by higher moment conditions. Finally, we compare our worst-case expectile approach with a more conservative method based on stochastic order, which is referred to as ‘model aggregation’.
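For intuition, the defining first-order condition of the expectile, $\alpha\,\mathbb{E}(X-e)_+=(1-\alpha)\,\mathbb{E}(e-X)_+$, can be solved numerically on a sample. The bisection sketch below is my own illustration of the base quantity, not the paper's worst-case analysis.

```python
def sample_expectile(xs, alpha, tol=1e-10):
    """Sample expectile via bisection on the first-order condition
    alpha * mean((x - e)_+) = (1 - alpha) * mean((e - x)_+).
    The left side minus the right side is decreasing in e."""
    lo, hi = min(xs), max(xs)
    while hi - lo > tol:
        e = (lo + hi) / 2.0
        pos = sum(max(x - e, 0.0) for x in xs) / len(xs)
        neg = sum(max(e - x, 0.0) for x in xs) / len(xs)
        if alpha * pos > (1 - alpha) * neg:
            lo = e      # e is below the expectile
        else:
            hi = e
    return (lo + hi) / 2.0
```

At $\alpha=1/2$ the expectile reduces to the mean, and it increases with $\alpha$, approaching the essential supremum as $\alpha\to 1$ (the regime studied asymptotically in the paper).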
We show joint convergence of the Łukasiewicz path and height process for slightly supercritical Galton–Watson forests. This shows that the height processes for supercritical continuous-state branching processes, as constructed by Lambert (2002), are the limits under rescaling of their discrete counterparts. Unlike in the (sub-)critical case, the height process does not encode the entire metric structure of a supercritical Galton–Watson forest. We demonstrate that this result is nonetheless useful by applying it to the configuration model with an independent and identically distributed power-law degree sequence in the critical window, for which we obtain the metric space scaling limit in the product Gromov–Hausdorff–Prokhorov topology; this limit is of independent interest.
In this paper we extend results on the reconstruction of the support of independent and identically distributed random variables to supports of dependent stationary ${\mathbb R}^d$-valued random variables. All supports are assumed to be compact sets of positive reach in Euclidean space. Our main results concern the convergence, in the Hausdorff sense, of a cloud of stationary dependent random vectors to their common support. A novel topological reconstruction result is stated, and a number of illustrative examples are presented. The example of the Möbius Markov chain on the circle is treated at the end, with simulations.
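A minimal simulation in the spirit of the paper's final example: sample a stationary Markov chain on the unit circle and watch the (grid-approximated) Hausdorff distance between the point cloud and the support shrink as the sample grows. The random-walk chain below is an illustrative stand-in for the Möbius Markov chain, which is specific to the paper.

```python
import math
import random

def circle_cloud(n, step=0.5, seed=0):
    """A simple stationary Markov chain on the unit circle:
    theta_{k+1} = theta_k + Uniform(-step, step) (mod 2*pi)."""
    rng = random.Random(seed)
    theta = rng.uniform(0.0, 2 * math.pi)
    pts = []
    for _ in range(n):
        theta = (theta + rng.uniform(-step, step)) % (2 * math.pi)
        pts.append((math.cos(theta), math.sin(theta)))
    return pts

def hausdorff_to_circle(pts, grid=500):
    """Hausdorff distance between the cloud and the unit circle.
    The points lie on the circle, so only the circle-to-cloud
    direction is nonzero; it is approximated on an angular grid."""
    worst = 0.0
    for i in range(grid):
        a = 2 * math.pi * i / grid
        cx, cy = math.cos(a), math.sin(a)
        d = min(math.hypot(cx - x, cy - y) for x, y in pts)
        worst = max(worst, d)
    return worst

pts = circle_cloud(1000)
d_small = hausdorff_to_circle(pts[:50])   # short run: larger gaps
d_large = hausdorff_to_circle(pts)        # long run: cloud fills the circle
```

Because the second cloud is a superset of the first, its Hausdorff distance to the support can only be smaller or equal.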
This paper presents an asymptotic theory for recurrent jump diffusion models with well-defined scale functions. The class of such models is broad, including general nonstationary as well as stationary jump diffusions with state-dependent jump sizes and intensities. The asymptotics for recurrent jump diffusion models with scale functions are largely comparable to the asymptotics for the corresponding diffusion models without jumps. For stationary jump diffusions, our asymptotics yield the usual law of large numbers and the standard central limit theory with normal limit distributions. The asymptotics for nonstationary jump diffusions, on the other hand, are nonstandard and the limit distributions are given as generalized diffusion processes.
Precision healthcare is an emerging field of science that utilizes an individual’s health information, context, and genetics to provide more personalized diagnostics and treatments. In this manuscript, we leverage that concept and present a group of machine learning models for precision gaming. These predictive models guide adolescents through best practices related to their health. The use case targets adolescent girls in India through a mobile application released in three Indian states. To evaluate the usability of the models, experiments are designed and data (demographic, behavioral, and health-related) are collected. The experimental results are presented and discussed.
Hand hygiene (HH) is the paramount measure used to prevent healthcare-associated infections. A repeated cross-sectional study was undertaken with direct observation of healthcare personnel’s degree of HH compliance during the SARS-CoV-2 pandemic. In 2018–2019, 9,083 HH opportunities were considered, and 5,821 in 2020–2022. Chi-squared tests were used to identify associations. Crude and adjusted odds ratios were used along with a logistic regression model for statistical analyses. HH compliance increased significantly (p < 0.001) from 54.5% (95% CI: 53.5, 55.5) to 70.1% (95% CI: 68.9, 71.2) during the COVID-19 pandemic. This increase was observed in four of the five key moments of HH established by the World Health Organization (WHO) (p < 0.05), the exception being moment 4. The factors significantly and independently associated with compliance were the time period considered, type of healthcare personnel, attendance at training sessions, knowledge of HH and WHO guidelines, and availability of hand-disinfectant alcoholic solution in pocket format. The highest HH compliance occurred during the COVID-19 pandemic, reflecting a positive change in healthcare personnel’s behaviour regarding HH recommendations.
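As a sanity check on the reported intervals, the pre-pandemic figure is consistent with a standard normal-approximation confidence interval for a proportion, computed from the reported compliance rate and number of opportunities:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion."""
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# pre-pandemic figure from the abstract: 54.5% of 9,083 opportunities
lo, hi = proportion_ci(0.545, 9083)   # close to the reported (53.5, 55.5)
```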
We consider local level and local linear estimators for estimation and inference in time-varying parameter (TVP) regressions with general stationary covariates. The latter estimator also yields estimates of the parameter derivatives, which are utilized to develop tests of time invariance of the regression coefficients. Our theoretical framework is general enough to allow for a wide range of stationary regressors, including those with stationary long memory. We demonstrate that neglecting time variation in the regression parameters has a range of adverse effects on inference, in particular when regressors exhibit long-range dependence. For instance, parametric tests diverge under the null hypothesis when the memory order is strictly positive. The finite-sample performance of the methods developed is investigated with the aid of a simulation experiment. The proposed methods are employed to explore the predictability of S&P 500 returns by realized variance. We find evidence of time variability in the intercept, as well as episodic predictability, when realized variance is utilized as a predictor in TVP specifications.
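A minimal sketch of the local level idea: the time-varying coefficient at rescaled time $t_0 \in (0,1)$ is estimated by kernel-weighted least squares over observations near $t_0$. The Gaussian kernel, bandwidth, and synthetic data below are illustrative assumptions, not the paper's exact specification.

```python
import math

def local_level_tvp(y, x, t0, bandwidth):
    """Local level (kernel-weighted least squares) estimate of the
    time-varying slope in y_t = beta(t/n) * x_t + error, evaluated
    at rescaled time t0. Gaussian kernel, single regressor."""
    n = len(y)
    sxx = sxy = 0.0
    for t in range(n):
        u = (t / n - t0) / bandwidth
        w = math.exp(-0.5 * u * u)        # kernel weight
        sxx += w * x[t] * x[t]
        sxy += w * x[t] * y[t]
    return sxy / sxx

# synthetic check: beta(s) = s with x_t = 1 (a pure time-varying intercept)
n = 400
y = [t / n for t in range(n)]
x = [1.0] * n
b_mid = local_level_tvp(y, x, 0.5, 0.1)   # should be close to beta(0.5) = 0.5
```

The local linear version adds a slope term in $(t/n - t_0)$ to the weighted regression, which is what delivers the derivative estimates used by the paper's time-invariance tests.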
Deep neural networks have become an important tool for use in actuarial tasks, due to the significant gains in accuracy provided by these techniques compared to traditional methods, but also due to the close connection of these models to the generalized linear models (GLMs) currently used in industry. Although constraining GLM parameters relating to insurance risk factors to be smooth or exhibit monotonicity is trivial, methods to incorporate such constraints into deep neural networks have not yet been developed. This is a barrier for the adoption of neural networks in insurance practice since actuaries often impose these constraints for commercial or statistical reasons. In this work, we present a novel method for enforcing constraints within deep neural network models, and we show how these models can be trained. Moreover, we provide example applications using real-world datasets. We call our proposed method ICEnet to emphasize the close link of our proposal to the individual conditional expectation model interpretability technique.
The deployment of digital technologies in African cities, beyond improving service delivery, raises issues of digital inclusion, digital rights, and increasing spatial and social inequalities. As part of the African Cities Lab Summit 2023, we conducted a workshop with 20 multidisciplinary participants to explore issues related to the deployment of digital technologies in African cities. This research is a policy paper that addresses these issues and provides policy recommendations for local governments. It emphasizes the importance of inclusive digital infrastructure, regulations safeguarding vulnerable sectors, and governance ensuring citizens’ rights in the digital transformation. Focusing on transparency, equity, and collaboration with communities, local governments play a vital role in fostering inclusive digital transformation, essential for equitable and rights-centric smart cities in Africa.
Real-time evaluation (RTE) supports populations who may otherwise be overlooked (e.g., persons experiencing homelessness (PEH)) in engaging with the evaluation of health interventions. The aim of this RTE was to explore the understanding of tuberculosis (TB) amongst PEH, who have high and complex health needs, and to identify barriers and facilitators to attending screening, alongside suggestions for improving TB-screening events targeting PEH. The RTE consisted of free-text structured one-to-one interviews performed immediately after screening at a single TB-screening event. Handwritten forms were transcribed for thematic analysis, with codes ascribed to answers and developed into core themes. All RTE participants (n=15) learned about the screening event on the day it was held. Key concerns amongst screening attendees included: stigma around drug use, not understanding the purpose of TB screening, lack of trusted individuals/services present, too many partner organizations involved, and language barriers. Facilitators to screening included a positive welcome to the event, a satisfactory explanation of screening tests, and sharing of results. A need for improved event promotion, alongside communication of the purpose of TB screening amongst PEH, was also identified. The lack of trust identified by some participants suggests that the range of services present should be reconsidered for future screening events.
In order to clarify and visualize the real state of the structural performance of ships in operation and to establish a more optimal, data-driven framework for ship design, construction, and operation, an industry–academia joint R&D project on the digital twin for ship structures (DTSS) was conducted in Japan. This paper presents the major achievements of the project. The DTSS aims to grasp the stress responses over the whole ship structure in waves by data assimilation that merges hull monitoring with numerical simulation. Three data assimilation methods, namely the wave spectrum method, the Kalman filter method, and the inverse finite element method, were used, and their effectiveness was examined through model-scale and full-scale ship measurements. Methods for predicting short-term extreme responses and long-term cumulative fatigue damage were developed for navigation and maintenance support using statistical approaches. Compared with conventional approaches, response predictions were significantly improved by DTSS through the use of real response data in encountered waves. Utilization scenarios for DTSS in the maritime industry are presented from the viewpoints of navigation support, maintenance support, rule improvement, and product value improvement, together with future research needs for implementation in the maritime industry.
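Of the three data-assimilation methods named, the Kalman filter is the easiest to illustrate in isolation. The scalar toy version below (random-walk state, direct noisy observation, with assumed noise variances) shows the core idea of merging a model forecast with measurements weighted by their uncertainties; the DTSS applies this idea to stress fields over an entire hull.

```python
def kalman_filter_1d(zs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state with process-noise
    variance q, direct observation with measurement-noise variance r.
    Returns the filtered state estimate after each measurement."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q                    # predict: model uncertainty grows
        k = p / (p + r)              # Kalman gain: trust in the measurement
        x = x + k * (z - x)          # update toward the measurement
        p = (1.0 - k) * p            # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# a constant true signal observed 50 times: the estimate locks on
est = kalman_filter_1d([5.0] * 50, q=0.01, r=1.0)
```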
Despite the growing availability of sensing and data in general, we remain unable to fully characterize many in-service engineering systems and structures from a purely data-driven approach. The vast data and resources available to capture human activity are unmatched in our engineered world, and, even in cases where data could be referred to as “big,” they will rarely hold information across operational windows or life spans. This paper pursues the combination of machine learning technology and physics-based reasoning to enhance our ability to make predictive models with limited data. By explicitly linking the physics-based view of stochastic processes with a data-based regression approach, a derivation path for a spectrum of possible Gaussian process models is introduced and used to highlight how and where different levels of expert knowledge of a system is likely best exploited. Each of the models highlighted in the spectrum have been explored in different ways across communities; novel examples in a structural assessment context here demonstrate how these approaches can significantly reduce reliance on expensive data collection. The increased interpretability of the models shown is another important consideration and benefit in this context.
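The simplest point on the spectrum just described, a physics-based mean function with a data-driven Gaussian-process residual, can be sketched in pure Python. The RBF kernel, noise level, and linear "physics" mean below are illustrative assumptions, not any particular model from the paper; when the physics mean is exact, the data contribute nothing and the GP returns the physics prediction.

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) covariance between scalar inputs."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= factor * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, mean_fn, ell=1.0, noise=1e-6):
    """GP posterior mean at x_star with a physics-based prior mean:
    the GP models only the residual ys - mean_fn(xs)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], ell) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    resid = [ys[i] - mean_fn(xs[i]) for i in range(n)]
    alpha = solve(K, resid)
    k_star = [rbf(x_star, xs[i], ell) for i in range(n)]
    return mean_fn(x_star) + sum(k_star[i] * alpha[i] for i in range(n))
```

With more expert knowledge, the physics can instead be pushed into the kernel itself, which is where the richer models on the spectrum live.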
We investigate an optimal stopping problem for the expected value of a discounted payoff on a regime-switching geometric Brownian motion, under two constraints on the admissible stopping times: stopping is allowed only at exogenous random times, and only during a specific regime. The main objectives are to show that an optimal stopping time of threshold type exists and to derive expressions for the value functions and the optimal threshold. To this end, we solve the corresponding variational inequality and show that its solution coincides with the value functions. Some numerical results are also presented. Furthermore, we investigate some asymptotic behaviors.