The monitoring of infrastructure assets using sensor networks is becoming increasingly prevalent. A digital twin in the form of a finite element (FE) model, as commonly used in design and construction, can help make sense of the copious amount of collected sensor data. This paper demonstrates the application of the statistical finite element method (statFEM), which provides a principled means of synthesizing data and physics-based models, in developing a digital twin of a self-sensing structure. As a case study, an instrumented steel railway bridge of $27.34\,\mathrm{m}$ length located along the West Coast Mainline near Staffordshire in the UK is considered. Using strain data captured from fiber Bragg grating sensors at 108 locations along the bridge superstructure, statFEM can predict the “true” system response while taking into account the uncertainties in sensor readings, applied loading, and FE model misspecification errors. Longitudinal strain distributions along the two main I-beams are both measured and modeled during the passage of a passenger train. The statFEM digital twin is able to generate reasonable strain distribution predictions at locations where no measurement data are available, including at several points along the main I-beams and on structural elements on which sensors are not even installed. The implications for long-term structural health monitoring and assessment include optimization of sensor placement and performing more reliable what-if analyses at locations and under loading scenarios for which no measurement data are available.
One complication in mortality modelling is capturing the impact of risk factors that contribute to mortality differentials between different populations. Evidence has suggested that mortality differentials tend to diminish over age. Classical methods such as the Gompertz law attempt to capture mortality patterns over age using intercept and slope parameters, possibly causing an unjustified mortality crossover at advanced ages when applied independently to different populations. In recent research, Richards (Scandinavian Actuarial Journal 2020(2), 110–127) proposed a Hermite spline (HS) model that describes the age pattern of mortality differentials using one parameter and circumvents an unreasonable crossover by default. The original HS model was applied to pension data at individual level in the age dimension only. This paper extends the method to model population mortality in both age and period dimensions. Our results indicate that in addition to possessing desirable fitting properties, the HS approach can produce accurate mortality forecasts, compared with the Gompertz and P-splines models.
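The crossover problem the abstract describes can be seen in a minimal sketch. The Gompertz hazard is log-linear in age, so two populations fitted independently with different intercepts and slopes must cross at some age. The parameters below are hypothetical, chosen purely for illustration:

```python
import math

# Gompertz law: hazard mu(x) = exp(intercept + slope * x).
# The parameters below are hypothetical, chosen only to illustrate the
# crossover problem that arises when two populations are fitted independently.
def gompertz_hazard(x, intercept, slope):
    return math.exp(intercept + slope * x)

# Population A: higher young-age mortality but a shallower slope than B.
a_int, a_slope = -9.0, 0.10
b_int, b_slope = -10.0, 0.12

# The hazards cross where the two log-linear lines intersect.
crossover_age = (a_int - b_int) / (b_slope - a_slope)
print(f"Unjustified mortality crossover at age {crossover_age:.1f}")
```

With these numbers the crossover lands at age 50, after which population B, despite its lower mortality at young ages, is predicted to be the higher-mortality group; the HS model's one-parameter description of the differential is designed to rule this out by default.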
Let X be a continuous-time strongly mixing or weakly dependent process and let T be a renewal process independent of X. We show general conditions under which the sampled process $(X_{T_i},T_i-T_{i-1})^{\top}$ is strongly mixing or weakly dependent. Moreover, we explicitly compute the strong mixing or weak dependence coefficients of the renewal sampled process and show that exponential or power decay of the coefficients of X is preserved (at least asymptotically). Our results imply that essentially all central limit theorems available in the literature for strongly mixing or weakly dependent processes can be applied when renewal sampled observations of the process X are at our disposal.
Financial inclusion depends on providing adjusted services for citizens with disclosed vulnerabilities. At the same time, the financial industry needs to adhere to a strict regulatory framework, which is often in conflict with the desire for inclusive, adaptive, and privacy-preserving services. In this article we study how this tension impacts the deployment of privacy-sensitive technologies aimed at financial inclusion. We conduct a qualitative study with banking experts to understand their perspectives on service development for financial inclusion. We build and demonstrate a prototype solution based on open source decentralized identifiers and verifiable credentials software and report on feedback from the banking experts on this system. The technology is promising thanks to its selective disclosure of vulnerabilities under the full control of the individual. This supports GDPR requirements, but at the same time, there is a clear tension between introducing these technologies and fulfilling other regulatory requirements, particularly with respect to “Know Your Customer.” We consider the policy implications stemming from these tensions and provide guidelines for the further design of related technologies.
We study large-deviation probabilities of Telecom processes appearing as limits in a critical regime of the infinite-source Poisson model elaborated by I. Kaj and M. Taqqu. We examine three different regimes of large deviations (LD) depending on the deviation level. A Telecom process $(Y_t)_{t \ge 0}$ scales as $t^{1/\gamma}$, where t denotes time and $\gamma\in(1,2)$ is the key parameter of Y. We must distinguish moderate LD ${\mathbb P}(Y_t\ge y_t)$ with $t^{1/\gamma} \ll y_t \ll t$, intermediate LD with $ y_t \approx t$, and ultralarge LD with $ y_t \gg t$. The results we obtain essentially depend on another parameter of Y, namely the resource distribution. We solve completely the cases of moderate and intermediate LD (the latter being the most technical one), whereas the ultralarge deviation asymptotics is found for the case of regularly varying distribution tails. In all the cases considered, the large-deviation level is essentially reached by the minimal necessary number of ‘service processes’.
In this short note we introduce two notions of dispersion-type variability orders, namely expected shortfall-dispersive (ES-dispersive) order and expectile-dispersive (ex-dispersive) order, which are defined by two classes of popular risk measures, the expected shortfall and the expectiles. These new orders can be used to compare the variability of two risk random variables. It is shown that either the ES-dispersive order or the ex-dispersive order is the same as the dilation order. This gives us some insight into parametric measures of variability induced by risk measures in the literature.
Chronic food insecurity remains a challenge globally, exacerbated by climate change-driven shocks such as droughts and floods. Forecasting food insecurity levels and targeting vulnerable households is a priority for humanitarian programming to ensure timely delivery of assistance. In this study, we propose to harness a machine learning approach trained on high-frequency household survey data to infer the predictors of food insecurity and forecast household-level outcomes in near real-time. Our empirical analyses leverage the Measurement Indicators for Resilience Analysis (MIRA) data collection protocol implemented by Catholic Relief Services (CRS) in southern Malawi, a series of sentinel sites collecting household data monthly. When focusing on predictors of community-level vulnerability, we show that a random forest model outperforms other algorithms and that location and self-reported welfare are the best predictors of food insecurity. We also show performance results across several neural networks and classical models for various data modeling scenarios to forecast food security. We pose the problem as binary classification via dichotomization of the food security score based on two different thresholds, which results in two different positive-class-to-negative-class ratios. Our best-performing model achieves an F1 score of 81% and an accuracy of 83% in predicting food security outcomes when the outcome is dichotomized based on threshold 16 and predictor features consist of the historical food security score along with 20 variables selected by artificial intelligence explainability frameworks. These results showcase the value of combining high-frequency sentinel site data with machine learning algorithms to predict future food insecurity outcomes.
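The evaluation setup described above can be sketched as follows. Dichotomizing the food security score at a threshold turns forecasting into binary classification, and F1 and accuracy then summarize performance. The confusion-matrix counts below are hypothetical, chosen only to illustrate how the two reported metrics are computed; they are not the study's data:

```python
# Dichotomizing a continuous food security score at a threshold turns the
# forecasting task into binary classification. The counts below are
# hypothetical, for illustration only.
def f1_and_accuracy(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return f1, accuracy

threshold = 16  # scores on one side of the threshold labeled food-insecure
f1, acc = f1_and_accuracy(tp=45, fp=10, fn=11, tn=57)
print(f"F1 = {f1:.0%}, accuracy = {acc:.0%}")
```

Note that F1 ignores true negatives while accuracy does not, which is why the two can differ noticeably when the positive-to-negative class ratio shifts, as it does between the two thresholds the study considers.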
Scale-free percolation is a stochastic model for complex networks. In this spatial random graph model, vertices $x,y\in\mathbb{Z}^d$ are linked by an edge with probability depending on independent and identically distributed vertex weights and the Euclidean distance $|x-y|$. Depending on the various parameters involved, we get a rich phase diagram. We study graph distance and compare it to the Euclidean distance of the vertices. Our main attention is on a regime where graph distances are (poly-)logarithmic in the Euclidean distance. We obtain improved bounds on the logarithmic exponents. In the light tail regime, the correct exponent is identified.
We propose a series-based nonparametric specification test for a regression function when data are spatially dependent, the “space” being of a general economic or social nature. Dependence can be parametric, parametric with increasing dimension, semiparametric or any combination thereof, thus covering a vast variety of settings. These include spatial error models of varying types and levels of complexity. Under a new smooth spatial dependence condition, our test statistic is asymptotically standard normal. To prove the latter property, we establish a central limit theorem for quadratic forms in linear processes in an increasing dimension setting. Finite sample performance is investigated in a simulation study, with a bootstrap method also justified and illustrated. Empirical examples illustrate the test with real-world data.
The distribution of human leukocyte antigens in the population assists in matching solid organ donors and recipients when the typing methods used do not provide sufficiently precise information. This is made possible by linkage disequilibrium (LD), where alleles co-occur more often than random chance would suggest. There is a trade-off between the high bias and low variance of a broad sample from the population and the low bias but high variance of a focused sample. Some of this trade-off could be alleviated if sub-populations shared LD despite having different allele frequencies. These experiments show that Bayesian estimation can balance bias and variance by tuning the effective sample size of the reference panel, but the LD as represented by an additive or multiplicative copula is not shared.
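The linkage disequilibrium the abstract relies on has a standard scalar summary: the departure of an observed haplotype frequency from the product of its allele frequencies. A minimal sketch, using hypothetical haplotype counts for two HLA loci (the allele names and numbers are illustrative, not from the study's reference panel):

```python
# Hypothetical haplotype counts at two HLA loci in a reference panel.
# Linkage disequilibrium D measures how much the observed haplotype
# frequency departs from the product of the marginal allele frequencies.
counts = {("A*01", "B*08"): 120, ("A*01", "B*07"): 30,
          ("A*02", "B*08"): 40,  ("A*02", "B*07"): 110}
n = sum(counts.values())

p_hap = counts[("A*01", "B*08")] / n                               # observed haplotype frequency
p_a = (counts[("A*01", "B*08")] + counts[("A*01", "B*07")]) / n    # allele frequency of A*01
p_b = (counts[("A*01", "B*08")] + counts[("A*02", "B*08")]) / n    # allele frequency of B*08

D = p_hap - p_a * p_b   # D > 0: the alleles co-occur more often than chance
print(f"D = {D:.4f}")
```

In this toy panel D is positive, i.e. A*01 and B*08 co-occur more often than independence would suggest; the abstract's question is whether such LD structure, not the allele frequencies themselves, transfers across sub-populations.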
A neural network framework is used to design a new Ni-based superalloy that surpasses the performance of IN718 for laser-blown-powder directed-energy-deposition repair applications. The framework utilized a large database comprising physical and thermodynamic properties for different alloy compositions to learn both composition-to-property and property-to-property relationships. The alloy composition space was based on IN718, although W was additionally included and the limiting Al and Co contents were allowed to increase compared to standard IN718, thereby allowing the alloy to approach the composition of ATI 718Plus® (718Plus). The composition with the highest probability of satisfying target properties, including phase stability, solidification strain, and tensile strength, was identified. The alloy was fabricated, and the properties were experimentally investigated. The testing confirms that this alloy offers advantages for additive repair applications over standard IN718.
Given a finite strongly connected directed graph $G=(V, E)$, we study a Markov chain taking values on the space of probability measures on V. The chain, motivated by biological applications in the context of stochastic population dynamics, is characterized by transitions between states that respect the structure superimposed by E: mass (probability) can only be moved between neighbors in G. We provide conditions for the ergodicity of the chain. In a simple, symmetric case, we fully characterize the invariant probability.
Determining accurate capital requirements is a central activity across the life insurance industry. This is computationally challenging and often involves the acceptance of proxy errors that directly impact capital requirements. Within simulation-based capital models, where proxies are being used, capital estimates are approximations that contain both statistical and proxy errors. Here, we show how basic error analysis combined with targeted exact computation can entirely eliminate proxy errors from the capital estimate. Consideration of the possible ordering of losses, combined with knowledge of their error bounds, identifies an important subset of scenarios. When these scenarios are calculated exactly, the resulting capital estimate can be made devoid of proxy errors. Advances in the handling of proxy errors improve the accuracy of capital requirements.
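A minimal sketch of the idea described above, with assumed details: if each proxy loss carries a known error bound and the capital estimate is read off an order statistic of the loss distribution, then only scenarios whose proxy interval overlaps the cutoff can change the ranking, and exact revaluation of just those scenarios removes the proxy error from the estimate. All numbers below are hypothetical:

```python
# Hypothetical proxy losses for a handful of scenarios, each with a uniform
# error bound eps (assumed known). The capital estimate is taken from the
# k-th largest loss (illustrative; real models use e.g. the 99.5th percentile).
proxy_losses = [10.2, 11.8, 12.1, 12.4, 13.0, 15.6]
eps = 0.35
k = 2

sorted_losses = sorted(proxy_losses, reverse=True)
cutoff = sorted_losses[k - 1]  # proxy estimate of the capital scenario

# Scenarios whose true loss could cross the cutoff, given the error bounds:
# only these need exact (expensive) revaluation.
ambiguous = [x for x in proxy_losses if abs(x - cutoff) <= 2 * eps]
print(f"{len(ambiguous)} of {len(proxy_losses)} scenario(s) need exact computation")
```

The payoff is that the expensive exact computation is targeted at a small subset of scenarios rather than the full set, while the resulting capital figure is free of proxy error.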
This is the first report on a population-based prospective study of invasive group B streptococcus (GBS) disease among children aged <15 years conducted over a period of 11 years in Japan. This study investigated the incidence and clinical manifestations of invasive GBS disease in children in Chiba Prefecture, Japan, and analysed the serotypes and drug susceptibility of GBS strains isolated during the study period. Overall, 127 episodes of invasive GBS disease were reported in 123 patients. Of these, 124 were observed in 120 patients aged <1 year, and the remaining three episodes were reported in a 9-year-old child and two 14-year-old children with underlying disease. For patients aged <1 year, the incidence rate per 1000 live births was 0.24 (0.15–0.36). The incidences of early-onset disease and late-onset disease were 0.04 (0.0–0.09) and 0.17 (0.08–0.25), respectively. The rate of meningitis was 45.2%, and the incidence of GBS meningitis was higher than that of other invasive diseases among children in Japan. Of the 109 patients for whom prognosis was available, 7 (6.4%) died and 21 (19.3%) had sequelae. In total, 68 strains were analysed. The most common were serotype III strains (n = 42, 61.8%), especially serotype III/ST17 strains (n = 22, 32.4%). This study showed that the incidence of invasive GBS disease among Japanese children was constant during the study period. Because of the high incidence of meningitis and disease burden, new preventive strategies, such as GBS vaccine, are essential.
We present a recurrence–transience classification for discrete-time Markov chains on manifolds with negative curvature. Our classification depends only on geometric quantities associated to the increments of the chain, defined via the Riemannian exponential map. We deduce that a recurrent chain that has zero average drift at every point cannot be uniformly elliptic, unlike in the Euclidean case. We also give natural examples of zero-drift recurrent chains on negatively curved manifolds, including on a stochastically incomplete manifold.
Bovine tuberculosis (bTB) is a chronic, infectious and zoonotic disease of domestic and wild animals caused mainly by Mycobacterium bovis. This study investigated farm management factors associated with recurrent bTB herd breakdowns (n = 2935) disclosed in the period 23 May 2016 to 21 May 2018 and is a follow-up to our 2020 paper which looked at long duration bTB herd breakdowns. A case control study design was used to construct an explanatory set of farm-level management factors associated with recurrent bTB herd breakdowns. In Northern Ireland, a Department of Agriculture Environment and Rural Affairs (DAERA) Veterinarian investigates bTB herd breakdowns using standardised guidelines to allocate a disease source. In this study, source was strongly linked to carryover of infection, suggesting that the diagnostic tests had failed to clear herd infection during the breakdown period. Other results from this study associated with recurrent bTB herd breakdowns were herd size and type (dairy herds 43% of cases), with both these variables intrinsically linked. Other associated risk factors were time of application of slurry, badger access to silage clamps, badger setts in the locality, cattle grazing silage fields immediately post-harvest, number of parcels of land the farmer associated with bTB, number of land parcels used for grazing and region of the country.
We study multivariate polynomials over ‘structured’ grids. Firstly, we propose an interpretation as to what it means for a finite subset of a field to be structured; we do so by means of a numerical parameter, the nullity. We then extend several results – notably, the Combinatorial Nullstellensatz and the Coefficient Theorem – to polynomials over structured grids. The main point is that the structure of a grid allows the degree constraints on polynomials to be relaxed.
Household transmission plays a key role in the spread of COVID-19 through populations. In this paper, we report on the transmission of COVID-19 within households in a metropolitan area in Australia, examine the impact of various factors and highlight priority areas for future public health responses. We collected and reviewed retrospective case report data and follow-up interview responses from households with a positive case of the Delta COVID-19 variant in Queensland in 2021. The overall secondary attack rate (SAR) among household contacts was 29.6% and the mean incubation period for secondary cases was 4.3 days. SAR was higher where the index case was male (57.9% vs. 14.3%) or aged ≤12 years (38.7% vs. 17.4%) but similar for adult contacts who were double-vaccinated (35.7%) and unvaccinated (33.3%). Most interview participants emphasised the importance of clear, consistent and compassionate health advice as a key priority for managing outbreaks in the home. The overall rate of household transmission was slightly higher than that reported in previous studies on the wild-type COVID-19 variant, and secondary infections developed more rapidly. While vaccination did not appear to affect the risk of transmission to adult subjects, uptake in the sample was ultimately high.
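The secondary attack rate reported above is simple arithmetic: the proportion of household contacts of an index case who become secondary cases. The counts below are hypothetical, chosen only to reproduce the overall figure quoted in the abstract:

```python
# Secondary attack rate (SAR): secondary cases as a fraction of household
# contacts. Counts are hypothetical, picked to match the reported 29.6%.
secondary_cases = 80
household_contacts = 270

sar = secondary_cases / household_contacts
print(f"SAR = {sar:.1%}")
```

The subgroup SARs quoted in the abstract (e.g. 57.9% for male index cases) are the same ratio computed within each stratum of contacts.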
The aim of the present article is to evaluate the use of the Autoregressive Fractionally Integrated Moving Average (ARFIMA) model in predicting spatially and temporally localized political violent events using the Integrated Crisis Early Warning System (ICEWS). The performance of the ARFIMA model is compared to that of a naïve model in reference to two common relevant hypotheses: the ARFIMA model would outperform a naïve model and the rate of outperformance would deteriorate the higher the level of spatial aggregation. This analytical strategy is used to predict political violent events in Afghanistan. The analysis consists of three parts. The first is a replication of Yonamine’s study for the period beginning in April 2010 and ending in March 2012. The second part compares the results to those of Yonamine. The comparison was used to assess the validity of the conclusions drawn in the original study, which was based on the Global Database of Events, Language, and Tone, for the implementation of this approach to ICEWS data. Building on the conclusions of this comparison, the third part uses Yonamine’s approach to predict violent events in Afghanistan over a significantly longer period of time (January 1995–August 2021). The conclusions provide an assessment of the utility of short-term localized forecasting.