We investigate the feasibility of cyber risk transfer through insurance-linked securities (ILS). On the investor side, we elicit the preferred characteristics of cyber ILS and the corresponding return expectations. We then estimate the cost of equity of insurers and compare it to the Rate on Line expected by investors to match demand and supply in the cyber ILS market. Our results show that cyber ILS will work for both cedents and investors if the cyber risk is sufficiently well understood. Thus, challenges related to cyber risk modeling need to be overcome before a meaningful cyber ILS market may emerge.
Gaussian graphical models are useful tools for inferring the conditional independence structure of multivariate random variables. Unfortunately, Bayesian inference of latent graph structures is challenging due to the exponential growth of $\mathcal{G}_n$, the set of all graphs on $n$ vertices. One approach that has been proposed to tackle this problem is to restrict the search to subsets of $\mathcal{G}_n$. In this paper we study subsets that are vector subspaces, with the cycle space $\mathcal{C}_n$ as the main example. We propose a novel prior on $\mathcal{C}_n$ based on linear combinations of cycle basis elements and present its theoretical properties. Using this prior, we implement a Markov chain Monte Carlo algorithm and show that (i) posterior edge inclusion estimates computed with our technique are comparable to estimates from the standard technique despite searching a smaller graph space, and (ii) the vector space perspective enables straightforward implementation of MCMC algorithms.
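As a concrete illustration of the cycle space $\mathcal{C}_n$ mentioned above (this is not the paper's prior or sampler), the sketch below enumerates the cycle space of the complete graph on four vertices using networkx; the example graph and the GF(2) encoding of linear combinations of basis cycles are illustrative choices.

```python
# Minimal sketch: the cycle space of K4. Any symmetric difference (XOR) of
# cycle-basis elements is again an element of the cycle space, so graphs in
# the cycle space can be indexed by binary coefficient vectors over the basis.
import itertools
import networkx as nx

G = nx.complete_graph(4)            # 4 vertices, 6 edges
basis = nx.cycle_basis(G)           # dimension = |E| - |V| + 1 = 3
print("cycle space dimension:", len(basis))

def edge_set(cycle):
    """Return the edge set of a cycle given as a vertex list."""
    return {frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
            for i in range(len(cycle))}

# Enumerate all 2^3 = 8 elements of the cycle space as edge sets.
for coeffs in itertools.product([0, 1], repeat=len(basis)):
    edges = set()
    for c, cyc in zip(coeffs, basis):
        if c:
            edges ^= edge_set(cyc)  # symmetric difference = GF(2) sum
    print(coeffs, sorted(tuple(sorted(e)) for e in edges))
```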
Reliable short-term load forecasting is vital for the planning and operation of electric power systems. Short-term load forecasts are a critical input to purchasing and generating electric power, dispatching, and load switching, and are essential for balancing supply and demand and mitigating the risk of power shortages. This is becoming even more critical given the transition to carbon-neutral technologies in the energy sector. Specifically, since renewable sources are inherently uncertain, a distributed energy system with renewable generation units depends more heavily on accurate load forecasts for demand-response management than traditional energy systems do. Despite extensive literature on forecasting electricity demand, most studies focus on predicting the total demand solely from previous observations of aggregate demand. With advances in smart-metering technology and the availability of high-resolution consumption data, harnessing fine-resolution smart-meter data in load forecasting has attracted increasing attention. Studies using smart-meter data mainly adopt a “bottom-up” approach that develops separate forecast models at sub-aggregate levels and aggregates the forecasts to estimate the total demand. While this approach is conducive to incorporating fine-resolution data for load forecasting, it has several shortcomings that can result in sub-optimal forecasts, and these shortcomings are rarely acknowledged in the load forecasting literature. This work demonstrates how the limitations imposed by such a bottom-up load forecasting approach can lead to misleading results, which could hamper efficient load management within a carbon-neutral grid.
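The following sketch (not the paper's models or data) contrasts the two pipelines discussed above: a bottom-up forecast that fits one model per smart meter and sums the forecasts, versus a direct forecast of the aggregate demand. The synthetic meter data and the simple AR(1) forecaster are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meters, n_hours, horizon = 50, 24 * 28, 24
t = np.arange(n_hours)
# Each meter: a daily cycle plus idiosyncratic noise.
meters = (1.0 + 0.5 * np.sin(2 * np.pi * t / 24)[None, :]
          + 0.3 * rng.standard_normal((n_meters, n_hours)))
aggregate = meters.sum(axis=0)
train = n_hours - horizon

def ar1_forecast(series, horizon):
    """Fit an AR(1) by least squares and iterate it forward."""
    y, x = series[1:], series[:-1]
    A = np.column_stack([np.ones_like(x), x])
    (c, phi), *_ = np.linalg.lstsq(A, y, rcond=None)
    preds, last = [], series[-1]
    for _ in range(horizon):
        last = c + phi * last
        preds.append(last)
    return np.array(preds)

# Bottom-up: forecast every meter separately, then aggregate the forecasts.
bottom_up = sum(ar1_forecast(m[:train], horizon) for m in meters)
# Direct: forecast the aggregate series itself.
direct = ar1_forecast(aggregate[:train], horizon)

actual = aggregate[train:]
print("bottom-up MAE:", np.mean(np.abs(bottom_up - actual)))
print("direct    MAE:", np.mean(np.abs(direct - actual)))
```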
Human infection with antimicrobial-resistant Campylobacter species is an important public health concern due to the potentially increased severity of illness and risk of death. Our objective was to synthesise the knowledge of factors associated with human infections with antimicrobial-resistant strains of Campylobacter. This scoping review followed systematic methods, including a protocol developed a priori. Comprehensive literature searches were developed in consultation with a research librarian and performed in five primary and three grey literature databases. Criteria for inclusion were analytical and English-language publications investigating human infections with an antimicrobial-resistant (macrolides, tetracyclines, fluoroquinolones, and/or quinolones) Campylobacter that reported factors potentially linked with the infection. The primary and secondary screening were completed by two independent reviewers using DistillerSR®. The search identified 8,527 unique articles, of which 27 were included in the review. Factors were broadly categorised into animal contact, prior antimicrobial use, participant characteristics, food consumption and handling, travel, underlying health conditions, and water consumption/exposure. Important factors linked to an increased risk of infection with a fluoroquinolone-resistant strain included foreign travel and prior antimicrobial use. Identifying consistent risk factors was challenging due to the heterogeneity of results, inconsistent analysis, and the lack of data in low- and middle-income countries, highlighting the need for future research.
We consider an insurance company modelling its surplus process by a Brownian motion with drift. Our target is to maximise the expected exponential utility of discounted dividend payments, given that the dividend rates are bounded by some constant. The utility function destroys the linearity and the time-homogeneity of the problem considered. The value function depends not only on the surplus, but also on time. Numerical considerations suggest that the optimal strategy, if it exists, is of a barrier type with a nonlinear barrier. In the related article of Grandits et al. (Scand. Actuarial J. 2, 2007), it has been observed that standard numerical methods break down in certain parameter cases, and no closed-form solution has been found. For these reasons, we offer a new method allowing one to estimate the distance from an arbitrary smooth-enough function to the value function. Applying this method, we investigate the goodness of the most obvious suboptimal strategies—payout at the maximal rate, and constant barrier strategies—by measuring the distance from their performance functions to the value function.
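A crude Monte Carlo sketch (not the paper's method) of how the two obvious suboptimal strategies mentioned above can be compared: paying at the maximal rate whenever the surplus is positive versus paying only above a constant barrier. All parameter values, the finite horizon, and the Euler discretization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma = 0.5, 1.0                 # drift and volatility of the surplus
K, delta, gamma = 0.4, 0.05, 1.0     # maximal dividend rate, discount rate, risk aversion
x0, T, dt, n_paths = 2.0, 40.0, 0.02, 4000
n_steps = int(T / dt)

def expected_utility(barrier):
    """Monte Carlo estimate of E[1 - exp(-gamma * discounted dividends)]."""
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)
    discounted = np.zeros(n_paths)
    for s in range(n_steps):
        rate = np.where(x > barrier, K, 0.0) * alive   # pay at rate K above the barrier
        discounted += np.exp(-delta * s * dt) * rate * dt
        dW = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = x + ((mu - rate) * dt + sigma * dW) * alive
        alive &= x > 0.0                               # ruin stops dividends
    return np.mean(1.0 - np.exp(-gamma * discounted))

print("maximal-rate strategy  :", expected_utility(barrier=0.0))
print("constant barrier b=1.5 :", expected_utility(barrier=1.5))
```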
This commentary explores the potential of private companies to advance scientific progress and solve social challenges through opening and sharing their data. Open data can accelerate scientific discoveries, foster collaboration, and promote long-term business success. However, concerns regarding data privacy and security can hinder data sharing. Companies can mitigate these challenges by developing data governance mechanisms, collaborating with stakeholders, communicating the benefits, and creating incentives for data sharing, among other measures. Ultimately, open data has immense potential to drive positive social impact and business value, and companies can explore solutions suited to their circumstances and tailor them to their needs.
We study Granger causality in the context of wide-sense stationary time series. The focus of the analysis is to understand how the underlying topological structure of the causality graph affects graph recovery by means of the pairwise testing heuristic. Our main theoretical result establishes a sufficient condition (in particular, the graph must satisfy a polytree assumption we refer to as strong causality) under which the graph can be recovered by means of unconditional and binary pairwise causality testing. Examples from the gene regulatory network literature establish that graphs which are strongly causal, or very nearly so, can be expected to arise in practice. We implement finite sample heuristics derived from our theory and use simulation to compare our pairwise testing heuristic against LASSO-based methods. These simulations show that, for graphs which are strongly causal (or small perturbations thereof), the pairwise testing heuristic recovers the underlying graph more accurately. We show that the algorithm is scalable to graphs with thousands of nodes and that, as long as the structural assumptions are met, it exhibits high-dimensional scaling properties similar to those of the LASSO: performance degrades slowly as the system size increases while the number of available samples is held fixed. Finally, a proof-of-concept application, classifying alcoholic individuals using only Granger causality graphs inferred from EEG measurements, shows that the inferred graph topology carries identifiable features.
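A sketch of the unconditional pairwise-testing idea described above, using bivariate Granger tests from statsmodels on synthetic data. The VAR structure, lag order, and significance threshold are assumptions for illustration only, not the paper's finite sample heuristics.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
T, lag, alpha = 500, 1, 0.01
# Ground truth: a small polytree 0 -> 1 -> 2.
x = np.zeros((T, 3))
for t in range(1, T):
    x[t, 0] = 0.5 * x[t - 1, 0] + rng.standard_normal()
    x[t, 1] = 0.7 * x[t - 1, 0] + rng.standard_normal()
    x[t, 2] = 0.7 * x[t - 1, 1] + rng.standard_normal()

edges = []
for i in range(3):
    for j in range(3):
        if i == j:
            continue
        # Test "does series i Granger-cause series j?" using only the pair (j, i).
        res = grangercausalitytests(x[:, [j, i]], maxlag=lag, verbose=False)
        p_value = res[lag][0]["ssr_ftest"][1]
        if p_value < alpha:
            edges.append((i, j))

# Note: purely pairwise, unconditional tests may also flag indirect paths
# such as 0 -> 2 in this example.
print("edges flagged by pairwise tests:", edges)
```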
We study the asymptotic behaviour of the expectation of the maxima and minima of a random assignment process generated by a large matrix with multinomial entries. A variety of results is obtained for different sparsity regimes.
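A toy sketch of the object studied above: for one realization of a large matrix with multinomial entries, the minimal and maximal total weight over all assignments can be computed with the Hungarian algorithm. The matrix size and the number of multinomial trials are illustrative choices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
n, trials = 200, 400_000
# An n x n matrix whose entries are jointly multinomial.
C = rng.multinomial(trials, np.full(n * n, 1.0 / (n * n))).reshape(n, n)

rows, cols = linear_sum_assignment(C)                 # minimising assignment
min_value = C[rows, cols].sum()
rows, cols = linear_sum_assignment(C, maximize=True)  # maximising assignment
max_value = C[rows, cols].sum()
print("min assignment value:", min_value, "max assignment value:", max_value)
```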
With a focus on the risk contribution in a portfolio of dependent risks, Colini-Baldeschi et al. (2018) introduced Shapley values for variance and standard deviation games. In this note we extend their results, introducing tail variance as well as tail standard deviation games. We derive closed-form expressions for the Shapley values for the tail variance game and analyze the vector majorization problem for the two games. In particular, we construct two examples showing that the risk contribution rankings for the two games may be inverted depending on the conditioning threshold and the tail fatness. Motivated by these examples, we formulate a conjecture for general portfolios. Lastly, we discuss risk management implications, including the characterization of tail covariance premiums and reinsurance pricing for peer-to-peer insurance policies.
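A small sketch of the (unconditional) variance game underlying the note above: Shapley values for $v(S) = \mathrm{Var}\bigl(\sum_{i \in S} X_i\bigr)$ computed by direct enumeration for a three-asset covariance matrix, and checked against the known closed form $\varphi_i = \mathrm{Cov}(X_i, X_1 + \dots + X_n)$. The covariance matrix is an illustrative assumption, and the tail variance game, which conditions on a threshold, is not covered here.

```python
import itertools
from math import factorial

import numpy as np

Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 2.0, 0.3],
                  [0.5, 0.3, 1.0]])
n = Sigma.shape[0]
players = range(n)

def v(S):
    """Variance of the sum of the risks in coalition S."""
    idx = list(S)
    return Sigma[np.ix_(idx, idx)].sum() if idx else 0.0

def shapley(i):
    """Shapley value of player i by enumeration of coalitions."""
    total = 0.0
    others = [j for j in players if j != i]
    for r in range(len(others) + 1):
        for S in itertools.combinations(others, r):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += w * (v(S + (i,)) - v(S))
    return total

phi = np.array([shapley(i) for i in players])
print("Shapley values:", phi)
print("closed form   :", Sigma.sum(axis=1))  # Cov(X_i, total) = row sums of Sigma
```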
A system of interacting multi-class finite-state jump processes is analyzed. The model under consideration consists of a block-structured network with dynamically changing multi-color nodes. The interactions are local and described through local empirical measures. Two levels of heterogeneity are considered: between and within the blocks, where the nodes are divided into two types. The central nodes are those connected only to nodes from the same block, whereas the peripheral nodes are connected both to nodes from the same block and to nodes from other blocks. Limits of such systems as the number of nodes tends to infinity are investigated. In particular, under specific regularity conditions, propagation of chaos and the law of large numbers are established in a multi-population setting. Moreover, it is shown that, as the number of nodes goes to infinity, the behavior of the system can be represented by the solution of a McKean–Vlasov system. We then prove large deviations principles for the vectors of empirical measures and the empirical processes, extending the classical results of Dawson and Gärtner (Stochastics 20, 1987) and Léonard (Ann. Inst. H. Poincaré Prob. Statist. 31, 1995).
We study homogenization for a class of non-symmetric pure jump Feller processes. The jump intensity involves periodic and aperiodic constituents, as well as oscillating and non-oscillating constituents. This means that the noise can come both from the underlying periodic medium and from external environments, and is allowed to have different scales. It turns out that the Feller process converges in distribution, as the scaling parameter goes to zero, to a Lévy process. As special cases of our result, some homogenization problems studied in previous works can be recovered. We also generalize the approach to the homogenization of symmetric stable-like processes with variable order. Moreover, we present some numerical experiments to demonstrate the usage of our homogenization results in the numerical approximation of first exit times.
Under mild assumptions, we show that the exact convergence rate in total variation is also exact in weaker Wasserstein distances for the Metropolis–Hastings independence sampler. We develop new upper and lower bounds on the worst-case Wasserstein distance when the chain is initialized at a point. For an arbitrary point initialization, we show that the convergence rate is the same and matches the convergence rate in total variation. We derive exact convergence expressions for more general Wasserstein distances when the chain is initialized at a specific point. Using optimization, we construct a novel centered independent proposal to develop exact convergence rates in Bayesian quantile regression and many generalized linear model settings. We show that the exact convergence rate can be upper bounded in Bayesian binary response regression (e.g. logistic and probit) when the sample size and dimension grow together.
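For readers unfamiliar with the sampler analysed above, here is a minimal sketch of a Metropolis–Hastings independence sampler: proposals are drawn independently of the current state, and the acceptance ratio uses importance weights $w(x) = \pi(x)/q(x)$. The target and proposal below are illustrative choices, not the Bayesian regression settings of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
target = stats.norm(loc=0.0, scale=1.0)        # pi
proposal = stats.t(df=5, loc=0.0, scale=1.5)   # q (heavier tails than pi)

def log_weight(x):
    """log of the importance weight w(x) = pi(x) / q(x)."""
    return target.logpdf(x) - proposal.logpdf(x)

n_iter, x = 20_000, 0.0
chain = np.empty(n_iter)
for t in range(n_iter):
    y = proposal.rvs(random_state=rng)
    # Accept with probability min(1, w(y) / w(x)).
    if np.log(rng.uniform()) < log_weight(y) - log_weight(x):
        x = y
    chain[t] = x
print("estimated target mean:", chain.mean())
```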
We study the large-volume asymptotics of the sum of power-weighted edge lengths $\sum_{e \in E}|e|^\alpha$ in Poisson-based spatial random networks. In the regime $\alpha > d$, we provide a set of sufficient conditions under which the upper-large-deviation asymptotics are characterized by a condensation phenomenon, meaning that the excess is caused by a negligible portion of Poisson points. Moreover, the rate function can be expressed through a concrete optimization problem. This framework encompasses in particular directed, bidirected, and undirected variants of the $k$-nearest-neighbor graph, as well as suitable $\beta$-skeletons.
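A sketch of the functional studied above: the sum $\sum_{e \in E}|e|^\alpha$ for an undirected $k$-nearest-neighbor graph built on a Poisson point process in the unit square. The parameters ($\alpha$, $k$, intensity) are illustrative and the window is fixed rather than growing.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
intensity, k, alpha, d = 2000, 2, 3.0, 2        # alpha > d, matching the regime above
n_points = rng.poisson(intensity)
points = rng.uniform(size=(n_points, d))

tree = cKDTree(points)
dist, idx = tree.query(points, k=k + 1)         # first neighbour is the point itself

edges = set()
for i in range(n_points):
    for j in idx[i, 1:]:
        edges.add(frozenset((i, int(j))))       # undirected: de-duplicate (i, j)/(j, i)

total = sum(np.linalg.norm(points[a] - points[b]) ** alpha
            for a, b in (tuple(e) for e in edges))
print("sum of |e|^alpha over the kNN graph:", total)
```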
Gaussian process (GP) regression is widely used to model an unknown function on a continuous domain by interpolating a discrete set of observed design points. We develop a theoretical framework for proving new moderate deviations inequalities on different types of error probabilities that arise in GP regression. Two specific examples of broad interest are the probability of falsely ordering pairs of points (incorrectly estimating one point as being better than another) and the tail probability of the estimation error at an arbitrary point. Our inequalities connect these probabilities to the mesh norm, which measures how well the design points fill the space.
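A sketch of plain GP regression with a squared-exponential kernel, showing the posterior mean and variance at a test point, i.e. the quantities whose error behaviour the inequalities above concern. The kernel, noise level, and design are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel matrix between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(5)
X = rng.uniform(size=(30, 1))                     # design points
f = lambda x: np.sin(6 * x).ravel()
y = f(X) + 0.05 * rng.standard_normal(30)

noise = 0.05**2
K = rbf_kernel(X, X) + noise * np.eye(len(X))
K_inv_y = np.linalg.solve(K, y)

x_star = np.array([[0.5]])
k_star = rbf_kernel(X, x_star)                    # cross-covariances, shape (30, 1)
mean = (k_star.T @ K_inv_y).item()
var = (rbf_kernel(x_star, x_star) - k_star.T @ np.linalg.solve(K, k_star)).item()
print(f"posterior mean {mean:.3f}, posterior sd {np.sqrt(var):.3f}, truth {f(x_star)[0]:.3f}")
```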
Several information measures have been proposed and studied in the literature. One such measure is extropy, a complementary dual function of entropy. Its meaning and the related aging notions have not yet been studied in great detail. In this paper, we first illustrate that extropy ranks the uniformity of a wide array of absolutely continuous families. We then discuss several theoretical merits of extropy. We also provide a closed-form expression for it for finite mixture distributions. Finally, the dynamic versions of extropy are also discussed, specifically the residual extropy and past extropy measures.
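As a numerical illustration of the uniformity-ranking claim above, the sketch below evaluates the extropy $J(X) = -\tfrac{1}{2}\int f(x)^2\,dx$ of a continuous density (the usual definition of extropy for absolutely continuous variables; the distributions chosen are illustrative): flatter densities on $[0,1]$ have larger, i.e. less negative, extropy.

```python
from scipy import integrate, stats

def extropy(pdf, lower, upper):
    """Numerically evaluate J(X) = -0.5 * integral of f(x)^2."""
    value, _ = integrate.quad(lambda x: pdf(x) ** 2, lower, upper)
    return -0.5 * value

cases = {
    "Uniform(0, 1)": (stats.uniform(0, 1).pdf, 0, 1),
    "Beta(2, 2)": (stats.beta(2, 2).pdf, 0, 1),
    "Beta(5, 5)": (stats.beta(5, 5).pdf, 0, 1),
}
for name, (pdf, a, b) in cases.items():
    print(f"{name:>13}: J = {extropy(pdf, a, b):.4f}")
```

For the uniform density the integral equals one, giving $J = -1/2$; the increasingly peaked Beta densities give increasingly negative values, consistent with extropy ranking uniformity.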
This article uses data from several publicly available databases to show that the distribution of intellectual property for frontier technologies, including those useful for sustainable development, is highly skewed in favor of a handful of developed countries. The intellectual property rights (IPR) regime as it exists does not optimize the global flow of technology and know-how for the attainment of the sustainable development goals and is in need of updating. Some features of the Fourth Industrial Revolution imply that the current system of patents is even more in need of reform than before. The vast inequality in access to COVID-19 vaccines and therapies has highlighted the costs of inaction. We recommend several policy changes for the international IPR regime. Broadly, these fall into three categories: allowing greater flexibility for developing countries, reassessing the appropriateness of patents for technologies that may be considered public goods, and closing loopholes that allow for unreasonable intellectual property protections.
Now in its second edition, this accessible text presents a unified Bayesian treatment of state-of-the-art filtering, smoothing, and parameter estimation algorithms for non-linear state space models. The book focuses on discrete-time state space models and carefully introduces fundamental aspects related to optimal filtering and smoothing. In particular, it covers a range of efficient non-linear Gaussian filtering and smoothing algorithms, as well as Monte Carlo-based algorithms. This updated edition features new chapters on constructing state space models of practical systems, the discretization of continuous-time state space models, Gaussian filtering by enabling approximations, posterior linearization filtering, and the corresponding smoothers. Coverage of key topics is expanded, including extended Kalman filtering and smoothing, and parameter estimation. The book's practical, algorithmic approach assumes only modest mathematical prerequisites, making it suitable for graduate and advanced undergraduate students. Many examples are included, with Matlab and Python code available online, enabling readers to implement the algorithms in their own projects.
Learn by doing with this user-friendly introduction to time series data analysis in R. This book explores the intricacies of managing and cleaning time series data of different sizes, scales and granularity, data preparation for analysis and visualization, and different approaches to classical and machine learning time series modeling and forecasting. A range of pedagogical features support students, including end-of-chapter exercises, problems, quizzes and case studies. The case studies are designed to stretch the learner, introducing larger data sets, enhanced data management skills, and R packages and functions appropriate for real-world data analysis. On top of providing commented R programs and data sets, the book's companion website offers extra case studies, lecture slides, videos and exercise solutions. Accessible to those with a basic background in statistics and probability, this is an ideal hands-on text for undergraduate and graduate students, as well as researchers in data-rich disciplines.
Country-wide social distancing and suspension of non-emergency medical care due to the COVID-19 pandemic will undoubtedly have affected public health in multiple ways. While non-pharmaceutical interventions are expected to reduce the transmission of several infectious diseases, severe disruptions to healthcare systems have hampered diagnosis, treatment, and routine vaccination. We examined the effect of this disruption on meningococcal disease and vaccination in the UK. By adapting an existing mathematical model for meningococcal carriage, we addressed the following questions: What is the predicted impact of the existing MenACWY adolescent vaccination programme? What effect might social distancing and reduced vaccine uptake both have on future epidemiology? Will catch-up vaccination campaigns be necessary? Our model indicated that the MenACWY vaccine programme was generating substantial indirect protection and suppressing transmission by 2020. COVID-19 social distancing is expected to have accelerated this decline, causing significant long-lasting reductions in both carriage prevalence of meningococcal A/C/W/Y strains and incidence of invasive meningococcal disease. In all scenarios modelled, pandemic social mixing effects outweighed potential reductions in vaccine uptake, causing an overall decline in carriage prevalence from 2020 for at least 5 years. Model outputs show strong consistency with recently published case data for England.