The COVID-19 pandemic modified the epidemiology and transmission of respiratory syncytial virus (RSV). We collected data on RSV positivity and incidence from children hospitalized in the largest tertiary paediatric hospital in Greece before (2018–2020, period A), during (2020–2021, period B), and after (2021–2023, period C) the COVID-19 lockdown. A total of 9,508 children were tested for RSV. RSV positivity was 17.6% (552/3,134) for period A, 2.1% (13/629) for period B, and 13.4% (772/5,745) for period C (p < 0.001). The mean age (±SD) of RSV-positive children was A: 5.9 (±9.3), B: 13.6 (±25.3), and C: 16.7 (±28.6) months (p < 0.001). The seasonal peak of RSV shifted from January–March (period A) to October–December (period C). RSV in-hospital incidence per 1,000 hospitalizations in paediatric departments was A: 16.7, B: 1.0, and C: 28.1 (p < 0.001), and the incidence in the intensive care unit was A: 17.3, B: 0.6, and C: 26.6 (p < 0.001). A decrease in RSV incidence was observed during the COVID-19 lockdown period, whereas a significant increase was observed after the lockdown. A change in epidemiological patterns was identified after the end of the lockdown, with an earlier seasonal peak and a shift in RSV incidence towards older children.
By the technique of augmented truncations, we obtain perturbation bounds on the distance between the finite-time state distributions of two continuous-time Markov chains (CTMCs) in a norm weaker than the V-norm. We derive estimates for strongly and exponentially ergodic CTMCs. In particular, we apply these results to obtain bounds for CTMCs satisfying Doeblin or stochastic monotonicity conditions. Some examples are presented to illustrate the limitations of the V-norm in perturbation analysis and to show the quality of the weak norm.
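For orientation, the V-norm referenced here is the standard weighted total-variation norm: for a signed measure $\mu$ and a weighting function $V \ge 1$,

$$\|\mu\|_V = \sup_{|g| \le V} \left| \int g \, \mathrm{d}\mu \right|,$$

so a "weaker norm" is one generating a coarser topology, in which perturbation bounds can remain finite where V-norm bounds blow up. This is the standard textbook definition, supplied for context; the paper's exact weak norm may differ in detail.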
This paper introduces DivFolio, a multiperiod portfolio selection and analytics software application that incorporates automated and user-determined divestment practices accommodating Environmental, Social, and Governance (ESG) and portfolio carbon footprint considerations. This freely available portfolio analytics tool is written in R, with a GUI developed as an R Shiny application for ease of use. Users can utilize this software to dynamically assess the performance of asset selections from global equity, exchange-traded fund, exchange-traded note, and depositary receipt markets over multiple time periods. This assessment is based on the impact of ESG investment and fossil-fuel divestment practices on portfolio behavior in terms of risk, return, stability, diversification, and the climate mitigation credentials of associated investment decisions. We highlight two applications of DivFolio. The first revolves around using sector scanning to divest from a specialized portfolio featuring constituents of the FTSE 100. The second, rooted in actuarial considerations, focuses on divestment strategies informed by environmental risk assessments for mixed pension portfolios in the US and UK.
Intergovernmental collaboration is needed to address global problems. Modern solutions to these problems often include data-driven methods like artificial intelligence (AI), which require large amounts of data to perform well. As AI emerges as a central catalyst in deriving effective solutions for global problems, the infrastructure that supports its data needs becomes crucial. However, data sharing between governments is often constrained by socio-technical barriers such as concerns over data privacy, data sovereignty issues, and the risks of information misuse. Federated learning (FL) presents a promising solution as a decentralized AI methodology, enabling the use of data from multiple silos without necessitating central aggregation. Instead of sharing raw data, governments build their own models and share only the model parameters with a central server, which aggregates them into a superior overall model. By conducting a structured literature review, we show how major intergovernmental data-sharing challenges listed by the Organisation for Economic Co-operation and Development can be overcome by utilizing FL. Furthermore, we provide a tangible resource implementing FL, linked to the Ukrainian refugee crisis, that can be utilized by researchers and policymakers alike who want to implement FL in cases where data cannot be shared. FL thus enhances AI while maintaining privacy, allowing governments to collaboratively address global problems to the benefit of governments and citizens alike.
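To make the mechanism concrete, here is a minimal federated-averaging (FedAvg) sketch under illustrative assumptions: a linear model, synthetic data standing in for three governmental silos, and plain parameter averaging at the server. None of the names or data come from the paper.

```python
# Minimal FedAvg sketch: silos train locally, the server averages parameters.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent steps on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

# Three hypothetical data silos that never share raw records.
true_w = np.array([2.0, -1.0])
silos = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    silos.append((X, y))

global_w = np.zeros(2)
for round_ in range(20):                      # communication rounds
    # Each silo trains locally and shares only its parameters.
    local_ws = [local_update(global_w, X, y) for X, y in silos]
    global_w = np.mean(local_ws, axis=0)      # server-side averaging

print(global_w)  # approaches true_w without centralizing any data
```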
Multivariate regular variation is a key concept that has been applied in finance, insurance, and risk management. This paper proposes a new dependence assumption within the framework of multivariate regular variation. Under the condition that financial and insurance risks satisfy our assumption, we conduct asymptotic analyses of multidimensional ruin probabilities in the discrete-time and continuous-time cases. We also present a two-dimensional numerical example satisfying our assumption, through which we show the accuracy of the asymptotic result for the discrete-time multidimensional insurance risk model.
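For orientation, the standard definition (not necessarily the paper's exact variant): a random vector $X$ on $[0,\infty)^d$ is multivariate regularly varying with index $\alpha > 0$ if there is a non-null Radon measure $\nu$ on $[0,\infty]^d \setminus \{0\}$ such that

$$\frac{\mathbb{P}(X/t \in B)}{\mathbb{P}(\|X\| > t)} \to \nu(B), \qquad t \to \infty,$$

for all relatively compact Borel sets $B$ with $\nu(\partial B) = 0$, where $\nu$ is homogeneous in the sense that $\nu(xB) = x^{-\alpha}\nu(B)$. The paper's new dependence assumption is formulated within this framework.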
We investigate the existence of a rainbow Hamilton cycle in a uniformly edge-coloured randomly perturbed digraph. We show that for every $\delta \in (0,1)$ there exists $C = C(\delta ) \gt 0$ such that the following holds. Let $D_0$ be an $n$-vertex digraph with minimum semidegree at least $\delta n$ and suppose that each edge of the union of $D_0$ with a copy of the random digraph $\mathbf{D}(n,C/n)$ on the same vertex set gets a colour in $[n]$ independently and uniformly at random. Then, with high probability, $D_0 \cup \mathbf{D}(n,C/n)$ has a rainbow directed Hamilton cycle.
This improves a result of Aigner-Horev and Hefetz ((2021) SIAM J. Discrete Math. 35(3), 1569–1577), who proved the same in the undirected setting when the edges are coloured uniformly in a set of $(1 + \varepsilon )n$ colours.
Guaranteed minimum accumulation benefits (GMABs) are retirement savings vehicles that protect the policyholder against downside market risk. This article proposes a valuation method for these contracts based on physics-informed neural networks (PINNs), in the presence of multiple financial and biometric risk factors. A PINN integrates physical principles into its learning process to enhance its efficiency in solving complex problems. In this article, the driving principle is the Feynman–Kac (FK) equation, a partial differential equation (PDE) governing the GMAB price in an arbitrage-free market. In our context, the FK PDE depends on multiple variables and is difficult to solve using classical finite difference approximations. In comparison, PINNs constitute an efficient alternative that can evaluate GMABs with various specifications without retraining. To illustrate this, we consider a market with four risk factors. We first derive a closed-form expression for the GMAB that serves as a benchmark for the PINN. Next, we propose a scaled version of the FK equation that we solve using a PINN. Pricing errors are analyzed in a numerical illustration.
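As a rough illustration of the approach (not the paper's four-factor model), the sketch below trains a PINN on a one-factor Feynman–Kac/Black–Scholes-type PDE, $V_t + r s V_s + \tfrac{1}{2}\sigma^2 s^2 V_{ss} - rV = 0$ with terminal payoff $\max(s, G)$; all parameters, the payoff, and the network architecture are illustrative assumptions. Requires PyTorch.

```python
# Minimal PINN sketch: penalize the PDE residual at random collocation points
# plus the terminal (maturity) condition.
import torch

r, sigma, T, G = 0.02, 0.2, 10.0, 1.0
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    # Collocation points in the interior of the (t, s) domain.
    t = torch.rand(256, 1) * T
    s = torch.rand(256, 1) * 3.0
    t.requires_grad_(True); s.requires_grad_(True)
    v = net(torch.cat([t, s], dim=1))
    v_t = torch.autograd.grad(v.sum(), t, create_graph=True)[0]
    v_s = torch.autograd.grad(v.sum(), s, create_graph=True)[0]
    v_ss = torch.autograd.grad(v_s.sum(), s, create_graph=True)[0]
    pde = v_t + r * s * v_s + 0.5 * sigma**2 * s**2 * v_ss - r * v

    # Terminal condition V(T, s) = max(s, G).
    sT = torch.rand(256, 1) * 3.0
    vT = net(torch.cat([torch.full_like(sT, T), sT], dim=1))
    payoff = torch.maximum(sT, torch.tensor(G))

    loss = (pde**2).mean() + ((vT - payoff)**2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```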
This paper outlines the functionality available in the CovRegpy package, written in Python 3.11 for actuarial practitioners, wealth managers, fund managers, and portfolio analysts. The objective is to develop a new class of covariance regression factor models for covariance forecasting, along with a library of portfolio allocation tools that integrate with this new covariance forecasting framework. The novelty is in two stages: the type of covariance regression model and the factor extractions used to construct the covariates for the covariance regression, along with a powerful portfolio allocation framework for dynamic multi-period asset investment management.
The major contributions of the CovRegpy package can be found in the GitHub repository for this library, in the scripts CovRegpy.py, CovRegpy_DCC.py, CovRegpy_RPP.py, CovRegpy_SSA.py, CovRegpy_SSD.py, and CovRegpy_X11.py. These six scripts contain implementations of software features including multivariate covariance time series models based on the regularized covariance regression (RCR) framework, the dynamic conditional correlation (DCC) framework, risk premia parity (RPP) weighting functions, singular spectrum analysis (SSA), singular spectrum decomposition (SSD), and the X11 decomposition framework, respectively.
These techniques can be used sequentially or independently, and in combination with other techniques: implicit factors are extracted and used as covariates in the RCR framework to forecast covariance and correlation structures, and portfolio weighting strategies are then applied based on risk measures computed from the forecasted covariances. Explicit financial factors can be used in the covariance regression framework, implicit factors can be used in the traditional explicit market factor setting, and RPP techniques with long/short equity weighting strategies can be used in traditional covariance assumption frameworks.
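As a sketch of the final weighting step, the snippet below computes equal-risk-contribution (risk parity) weights from a forecasted covariance matrix; the matrix and solver setup are illustrative assumptions, not CovRegpy's own implementation.

```python
# Risk-parity (RPP) weighting sketch from a given covariance forecast.
import numpy as np
from scipy.optimize import minimize

def risk_parity_weights(cov):
    """Long-only weights equalizing each asset's risk contribution."""
    n = cov.shape[0]

    def objective(w):
        port_var = w @ cov @ w
        rc = w * (cov @ w)          # risk contributions w_i (Sigma w)_i
        target = port_var / n       # equal-contribution target
        return np.sum((rc - target) ** 2)

    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * n
    res = minimize(objective, np.full(n, 1.0 / n),
                   bounds=bounds, constraints=cons)
    return res.x

# Illustrative 3-asset covariance standing in for a forecasted structure.
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = risk_parity_weights(cov)
print(w, w * (cov @ w))  # roughly equal risk contributions
```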
Herein, we examine two real-world case studies for actuarial practitioners. The first is a modification (demonstrating the regularization of covariance regression) of the original example from Hoff & Niu ((2012). Statistica Sinica, 22(2), 729–753), which modeled the covariance and correlation relationships of forced expiratory volume (FEV) with age and with height. We examine this within the context of making probabilistic predictions about mortality rates in patients with chronic obstructive pulmonary disease.
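For readers unfamiliar with the underlying model, covariance regression in the sense of Hoff & Niu (as we understand the cited paper; consult it for the authoritative statement) models the covariance of a response $y$ as a baseline plus a rank-one function of covariates $x$:

$$\Sigma_x = \Psi + B x x^\top B^\top,$$

which arises from the random-effects representation $y = \mu_x + \gamma B x + \varepsilon$, with $\gamma$ a mean-zero, unit-variance scalar and $\varepsilon \sim N(0, \Psi)$; the regularized (RCR) variant penalizes the entries of $B$.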
The second case study is a more complete example using this package, wherein we present a funded and an unfunded UK pension example. The decomposition algorithm isolates high-, mid-, and low-frequency structures from FTSE 100 constituents over 20 years. These are used to forecast the forthcoming quarter's covariance structure, which in turn weights the portfolio under the RPP strategy. The fully funded pensions are compared against the performance of a fully unfunded pension, using the FTSE 100 index performance as a proxy.
Nosocomial outbreaks of varicella zoster virus (VZV) have been reported when susceptible individuals encounter a case of chickenpox or shingles. A suspected VZV outbreak was investigated in a 50-bed in-patient Physical Medicine and Rehabilitation facility of a tertiary care multispecialty hospital. A 30-year-old female patient admitted with Pott's spine was clinically diagnosed with chickenpox on 31 December 2022. The following week, four more cases were identified in the same ward. All cases were diagnosed as laboratory-confirmed varicella zoster infection by PCR. The primary case was a housekeeping staff member who had been clinically diagnosed with chickenpox 3 weeks earlier (9 December 2022). He returned to work on the eighth day of infection (17 December 2022), after apparent clinical recovery but before the lesions had crusted over. Thirty-one HCWs were identified as contacts, and three had no evidence of immunity. Two of these susceptible HCWs developed chickenpox shortly after receiving the first dose of VZV vaccine. All cases recovered after treatment, with no reported complications. VZV infection is highly contagious in healthcare settings with susceptible populations. Prompt identification of cases and implementation of infection prevention and control measures, such as patient isolation and vaccination, are essential for the containment of outbreaks.
Until the early twentieth century, populations on many Pacific Islands had never experienced measles. As travel to the Pacific Islands by Europeans became more common, the arrival of measles and other pathogens had devastating consequences. In 1911, Rotuma in Fiji was hit by a measles epidemic, which killed 13% of the island population. Detailed records show two mortality peaks, with individuals reported as dying solely from measles in the first and from measles and diarrhoea in the second. Measles is known to disrupt immune system function. Here, we investigate whether the pattern of mortality on Rotuma in 1911 was a consequence of the immunosuppressive effects of measles. We use a compartmental model to simulate measles infection and immunosuppression. We assume that, whilst immunosuppressed, individuals are vulnerable to dysfunctional reactions triggered by either (i) a newly introduced infectious agent arriving at the same time as measles or (ii) microbes already present in the population in a pre-existing equilibrium state. We show that both forms of the immunosuppression model provide a plausible fit to the data, and that including immunosuppression in the model leads to more realistic estimates of measles epidemiological parameters than when it is omitted.
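A minimal sketch of the kind of compartmental structure described, assuming SIR-type measles dynamics plus a post-measles immunosuppressed window (W) in which individuals face mortality from a secondary cause; every rate and the initial population below are illustrative, not the paper's fitted values.

```python
# SIR + immunosuppressed-window model (illustrative parameters only).
from scipy.integrate import solve_ivp

beta, gamma = 1.2, 1 / 7  # measles transmission and recovery rates (per day)
mu1 = 0.005               # mortality rate while infectious (first peak)
omega = 1 / 30            # rate of leaving the immunosuppressed window
mu2 = 0.004               # secondary mortality while immunosuppressed (second peak)

def rhs(t, y):
    S, I, W, R, D = y
    N = S + I + W + R
    dS = -beta * S * I / N
    dI = beta * S * I / N - (gamma + mu1) * I
    dW = gamma * I - (omega + mu2) * W   # recovered from measles, still suppressed
    dR = omega * W                       # immune competence restored
    dD = mu1 * I + mu2 * W               # two mortality streams -> two peaks
    return [dS, dI, dW, dR, dD]

y0 = [2200, 5, 0, 0, 0]                  # roughly island-sized population
sol = solve_ivp(rhs, (0, 200), y0, max_step=0.5)
# sol.y[4] accumulates deaths; its derivative exhibits the two-peak pattern.
```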
Using diverse real-world examples, this text examines what models used for data analysis mean in a specific research context. What assumptions underlie analyses, and how can you check them? Building on the successful 'Data Analysis and Graphics Using R,' 3rd edition (Cambridge, 2010), it expands upon topics including cluster analysis, exponential time series, matching, seasonality, and resampling approaches. An extended look at p-values leads to an exploration of replicability issues and of contexts where numerous p-values exist, including gene expression. Developing practical intuition, this book assists scientists in the analysis of their own data, and familiarizes students in statistical theory with practical data analysis. The worked examples and accompanying commentary teach readers to recognize when a method works and, more importantly, when it doesn't. Each chapter contains copious exercises. Selected solutions, notes, slides, and R code are available online, with extensive references pointing to detailed guides to R.
This collection of four short courses looks at group representations, graph spectra, statistical optimality, and symbolic dynamics, highlighting their common roots in linear algebra. It leads students from the very beginnings in linear algebra to high-level applications: representations of finite groups, leading to probability models and harmonic analysis; eigenvalues of growing graphs from quantum probability techniques; statistical optimality of designs from Laplacian eigenvalues of graphs; and symbolic dynamics, applying matrix stability and K-theory. An invaluable resource for researchers and beginning Ph.D. students, this book includes copious exercises, notes, and references.
This article proposes a framework of linked software agents that continuously interact with an underlying knowledge graph to automatically assess the impacts of potential flooding events. It builds on the idea of connected digital twins based on the World Avatar dynamic knowledge graph to create a semantically rich asset of data, knowledge, and computational capabilities accessible to humans, applications, and artificial intelligence. We develop three new ontologies to describe and link environmental measurements and their respective reporting stations, flood events, and their potential impact on population and built infrastructure as well as the built environment of a city itself. These coupled ontologies are deployed to dynamically instantiate near real-time data from multiple fragmented sources into the World Avatar. Sequences of autonomous agents connected via the derived information framework automatically assess consequences of newly instantiated data, such as newly raised flood warnings, and cascade respective updates through the graph to ensure up-to-date insights into the number of people and building stock value at risk. Although we showcase the strength of this technology in the context of flooding, our findings suggest that this system-of-systems approach is a promising solution to build holistic digital twins for various other contexts and use cases to support truly interoperable and smart cities.
This study compared the likelihood of long-term sequelae following infection with SARS-CoV-2 variants or other acute respiratory infections (ARIs) with that in non-infected individuals. Participants (n = 5,630) were drawn from Virus Watch, a prospective community cohort investigating SARS-CoV-2 epidemiology in England. Using logistic regression, we compared predicted probabilities of developing long-term symptoms (>2 months) during different variant dominance periods according to infection status (SARS-CoV-2, other ARI, or no infection), adjusting for confounding by demographic and clinical factors and vaccination status. SARS-CoV-2 infection during early variant periods up to Omicron BA.1 was associated with a greater probability of long-term sequelae (adjusted predicted probability (PP) range 0.27, 95% CI 0.22–0.33 to 0.34, 95% CI 0.25–0.43) compared with later Omicron sub-variants (PP range 0.11, 95% CI 0.08–0.15 to 0.14, 95% CI 0.10–0.18). While differences between SARS-CoV-2 and other ARIs (PP range 0.08, 95% CI 0.04–0.11 to 0.23, 95% CI 0.18–0.28) varied by period, all post-infection estimates substantially exceeded those for non-infected participants (PP range 0.01, 95% CI 0.00–0.02 to 0.03, 95% CI 0.01–0.06). Variant was an important predictor of SARS-CoV-2 post-infection sequelae, with recent Omicron sub-variants demonstrating probabilities similar to other contemporaneous ARIs. Further aetiological investigation, including between-pathogen comparison, is recommended.
We investigate branching processes in varying environment, for which $\overline{f}_n \to 1$ and $\sum_{n=1}^\infty (1-\overline{f}_n)_+ = \infty$, $\sum_{n=1}^\infty (\overline{f}_n - 1)_+ < \infty$, where $\overline{f}_n$ stands for the offspring mean in generation $n$. Since subcritical regimes dominate, such processes die out almost surely; therefore, to obtain a nontrivial limit, we consider two scenarios: conditioning on nonextinction, and adding immigration. In both cases we show that the process converges in distribution, without normalization, to a nondegenerate compound-Poisson limit law. The proofs rely on the shape function technique developed by Kersting (2020).
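A hedged simulation sketch of the conditioned-on-nonextinction scenario, assuming Poisson offspring and the particular means $\overline{f}_n = 1 - 1/(n+1)$, which satisfy the stated summability conditions; the paper's setting is more general.

```python
# Simulate a branching process in varying environment and look at the
# empirical law of the population conditioned on survival (no normalization).
import numpy as np

rng = np.random.default_rng(1)
n_gen = 200
fbar = 1.0 - 1.0 / (np.arange(1, n_gen + 1) + 1.0)  # weakly subcritical means

def run():
    z = 1
    for f in fbar:
        if z == 0:
            return 0
        z = rng.poisson(f, size=z).sum()  # Poisson offspring, mean f
    return z

paths = np.array([run() for _ in range(50000)])
survivors = paths[paths > 0]
# The empirical distribution of `survivors` approximates the nondegenerate
# limit law described in the abstract.
```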
Environmental exposures are known to be associated with pathogen transmission and immune impairment, but the associations of exposures with the aetiology and severity of community-acquired pneumonia (CAP) are unclear. A retrospective observational study was conducted at nine hospitals in eight provinces in China from 2014 to 2019. CAP patients were recruited according to inclusion criteria, and respiratory samples were screened for 33 respiratory pathogens using molecular test methods. Sociodemographic, environmental, and clinical factors were used to analyze associations with pathogen detection and disease severity by logistic regression models combined with distributed lag nonlinear models. A total of 3,323 CAP patients were included, with 709 (21.3%) having severe illness; 2,064 (62.1%) patients were positive for at least one pathogen. Severe illness was more frequent in the pathogen-positive group. After adjusting for confounders, particulate matter (PM) 2.5 and 8-h ozone (O3-8h) were significantly associated, at specific lag periods, with the detection of influenza viruses and Klebsiella pneumoniae, respectively. PM10 and carbon monoxide (CO) showed a cumulative effect on severe CAP. Pollutant exposures, especially PM, O3-8h, and CO, should be considered in the assessment of pathogen detection and CAP severity to improve clinical aetiological and disease severity diagnosis.
Traditionally, electricity distribution networks were designed for unidirectional power flow, without the need to accommodate generation installed at the point of use. However, with the increase in Distributed Energy Resources and other Low Carbon Technologies, the role of distribution networks is changing. This shift brings challenges, including the need for intensive metering and more frequent reconfiguration to identify threats from voltage and thermal violations. Mitigating action through reconfiguration is informed by State Estimation, which is especially challenging for low voltage distribution networks, where the constraints of low observability, non-linear load relationships, and highly unbalanced systems all contribute to the difficulty of producing accurate state estimates. To counter low observability, this paper proposes the application of a novel transfer learning methodology, based upon the concept of conditional online Bayesian transfer, to make forward predictions of bus pseudo-measurements. Day-ahead load forecasts at a fully observed point on the network are adjusted using intraday residuals to provide load forecasts at other points in the network, without the need for a complete set of forecast models at all substations. These form pseudo-measurements that then inform the state estimates at future time points. This methodology is demonstrated both on a representative IEEE test network and on an actual GB 11 kV feeder network.
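The core transfer step can be sketched as a conditional Gaussian update: the residual observed intraday at the fully monitored bus shifts the day-ahead forecasts of unobserved buses through an assumed cross-bus error covariance. All numbers below are illustrative, and this is our reading of the idea rather than the paper's implementation.

```python
# Conditional Gaussian residual-transfer sketch for bus pseudo-measurements.
import numpy as np

# Day-ahead load forecasts (kW) for one observed bus [0] and two unobserved.
forecast = np.array([120.0, 80.0, 95.0])
# Assumed joint covariance of forecast errors across the three buses.
Sigma = np.array([[25.0, 15.0, 10.0],
                  [15.0, 20.0,  8.0],
                  [10.0,  8.0, 18.0]])

actual_obs = 131.0                       # intraday metering at the observed bus
residual = actual_obs - forecast[0]

# Conditional Gaussian mean: adjust unobserved buses by a Kalman-style gain.
gain = Sigma[1:, 0] / Sigma[0, 0]
pseudo = forecast[1:] + gain * residual  # pseudo-measurements for the SE
print(pseudo)
```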
The atomic bomb uses fission of heavy elements to produce a large amount of energy. It was designed and deployed during World War II by the United States military. The first test of an atomic bomb occurred in July 1945 in New Mexico and was given the name Trinity; this test was not declassified until 1949. In that year, Geoffrey Ingram Taylor released two papers detailing his process for calculating the energy yield of the atomic bomb from pictures of the Trinity explosion alone. Many scientists made similar calculations concurrently, although Taylor is often credited with them. Since then, many scientists have also attempted to calculate the yield through various methods. This paper walks through these methods, with a focus on Taylor's method based on first principles, and redoes the calculations that he performed using modern tools. We make use of state-of-the-art computer vision tools to obtain more precise measurements of the blast radius, together with curve fitting and numerical integration methods. With these more precise measurements, we are able to follow in Taylor's footsteps toward a more accurate approximation.
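The heart of Taylor's first-principles method is the Taylor–Sedov scaling law $R(t) \approx C\,(E t^2/\rho)^{1/5}$, so the yield $E$ can be recovered by fitting measured radii against time. The sketch below does exactly that; the $(t, R)$ pairs are illustrative stand-ins consistent with a roughly 20 kt yield, not digitized Trinity measurements, and the dimensionless constant $C$ is taken as 1.

```python
# Fit the blast-wave scaling law R(t) = C (E t^2 / rho)^(1/5) for the yield E.
import numpy as np
from scipy.optimize import curve_fit

rho = 1.2  # ambient air density, kg/m^3
C = 1.0    # dimensionless constant, O(1) for gamma ~ 1.4

# Illustrative (time, radius) pairs standing in for digitized film frames.
t = np.array([0.10e-3, 0.38e-3, 0.80e-3, 1.93e-3, 4.61e-3, 15.0e-3, 25.0e-3])
R = np.array([14.8,    25.2,    34.0,    48.3,    68.4,    109.8,   134.6])

def radius(t, E):
    return C * (E * t**2 / rho) ** 0.2

(E_fit,), _ = curve_fit(radius, t, R, p0=[1e14])
print(f"estimated yield: {E_fit:.2e} J ≈ {E_fit / 4.184e12:.1f} kt TNT")
```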