We introduce a comprehensive method for establishing stochastic orders among order statistics in the independent and identically distributed case. This approach relies on the assumption that the underlying distribution is linked to a reference distribution through a transform order. Notably, this method exhibits broad applicability, particularly since several well-known nonparametric distribution families can be defined using relevant transform orders, including the convex and the star transform orders. Moreover, for convex-ordered families, we show that an application of Jensen’s inequality gives bounds for the probability that a random variable exceeds the expected value of its corresponding order statistic.
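A minimal Monte Carlo sketch of the quantity discussed above, the probability that a draw from the underlying distribution exceeds the expected value of its corresponding order statistic; the exponential reference distribution, sample size, and order-statistic index are illustrative assumptions, not taken from the paper.

```python
# Illustrative Monte Carlo check (not from the paper): estimate the probability that a
# random variable exceeds the expected value of its k-th order statistic from an i.i.d.
# sample, here for an exponential distribution.
import numpy as np

rng = np.random.default_rng(0)
n, k, n_sim = 10, 8, 200_000          # sample size, order-statistic index, replications

samples = rng.exponential(scale=1.0, size=(n_sim, n))
kth_order_stat = np.sort(samples, axis=1)[:, k - 1]   # X_(k:n) in each replication
expected_os = kth_order_stat.mean()                   # Monte Carlo estimate of E[X_(k:n)]

# P(X > E[X_(k:n)]) for a fresh draw of X from the same distribution
x = rng.exponential(scale=1.0, size=n_sim)
prob_exceed = (x > expected_os).mean()
print(f"E[X_(k:n)] ≈ {expected_os:.3f},  P(X > E[X_(k:n)]) ≈ {prob_exceed:.3f}")
```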
The newly introduced discipline of Population-Based Structural Health Monitoring (PBSHM) has been developed in order to circumvent the issue of data scarcity in “classical” SHM. PBSHM does this by using data across an entire population, in order to improve diagnostics for a single data-poor structure. The improvement of inferences across populations uses the machine-learning technology of transfer learning. In order that transfer makes matters better, rather than worse, PBSHM assesses the similarity of structures and only transfers if a threshold of similarity is reached. The similarity measures are implemented by embedding structures as models, Irreducible-Element (IE) models, in a graph space. The problem with this approach is that the construction of IE models is subjective and can suffer from author bias, which may induce dissimilarity where there is none. This paper proposes that IE models be transformed to a canonical form through reduction rules, in which possible sources of ambiguity have been removed. Furthermore, in order that other variations, outside the control of the modeller, are correctly dealt with, the paper introduces the idea of a reality model, which encodes details of the environment and operation of the structure. Finally, the effects of the canonical form on similarity assessments are investigated via a numerical population study. A final novelty of the paper is the implementation of a neural-network-based similarity measure, which learns reduction rules from data; the results with the new graph-matching network (GMN) are compared with a previous approach based on the Jaccard index, from pure graph theory.
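A toy sketch of the Jaccard-index baseline and of the effect a canonical form can have on it; the edge sets and the single renaming rule below are hypothetical stand-ins for IE models and the paper's reduction rules.

```python
# Minimal sketch (assumptions: toy labelled edge sets standing in for IE models; the
# paper's actual reduction rules and GMN are not reproduced here). It illustrates how a
# Jaccard similarity over graph edges can change once models are put in a canonical form.
def jaccard(edges_a, edges_b):
    """Jaccard index |A ∩ B| / |A ∪ B| over two edge sets."""
    union = edges_a | edges_b
    return len(edges_a & edges_b) / len(union) if union else 1.0

# Two modellers describe the same three-element structure with different node labels.
model_a = {("deck", "pier_1"), ("deck", "pier_2")}
model_b = {("deck", "left_pier"), ("deck", "right_pier")}

# Hypothetical canonicalisation rule: rename interchangeable piers to one canonical label.
canon = lambda edge: tuple("pier" if "pier" in v else v for v in edge)

print(jaccard(model_a, model_b))                                            # 0.0 before reduction
print(jaccard({canon(e) for e in model_a}, {canon(e) for e in model_b}))    # 1.0 after reduction
```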
We consider a superprocess $\{X_t\colon t\geq 0\}$ in a random environment described by a Gaussian field $\{W(t,x)\colon t\geq 0,x\in \mathbb{R}^d\}$. First, we set up a representation of $\mathbb{E}[\langle g, X_t\rangle\mathrm{e}^{-\langle \,f,X_t\rangle }\mid\sigma(W)\vee\sigma(X_r,0\leq r\leq s)]$ for $0\leq s < t$ and some functions $f$ and $g$, which generalizes the result in Mytnik and Xiong (2007, Theorem 2.15). Next, we give a uniform upper bound for the conditional log-Laplace equation with unbounded initial values. We then use this to establish the corresponding conditional entrance law. Finally, the excursion representation of $\{X_t\colon t\geq 0\}$ is given.
Data-based methods have gained increasing importance in engineering. Success stories are prevalent in areas such as data-driven modeling, control, and automation, as well as surrogate modeling for accelerated simulation. Beyond engineering, generative and large-language models are increasingly helping with tasks that, previously, were solely associated with creative human processes. Thus, it seems timely to seek artificial-intelligence support for engineering design tasks, to automate, assist with, or accelerate purpose-built designs of engineering systems, for instance in mechanics and dynamics, where design so far requires a great deal of specialized knowledge. Compared with established, predominantly first-principles-based methods, the datasets used for training, validation, and testing become an almost inherent part of the overall methodology. Data publishing therefore becomes just as important in (data-driven) engineering science as appropriate descriptions of conventional methodology have been in past publications. In mechanics and dynamics, however, traditional publishing practices still prevail that largely do not yet account for the rising role of data to the extent already seen in pure data-scientific research. This article analyzes the value and challenges of data publishing in mechanics and dynamics, in particular regarding engineering design tasks, showing that the latter also raise challenges and considerations not typical in the fields where data-driven methods originally boomed. Researchers currently find barely any guidance for overcoming these challenges. We therefore discuss ways to deal with them, and a set of examples from across different design problems shows how data publishing can be put into practice.
In this paper, we consider estimating spot/instantaneous volatility matrices of high-frequency data collected for a large number of assets. We first combine classic nonparametric kernel-based smoothing with a generalized shrinkage technique in the matrix estimation for noise-free data under a uniform sparsity assumption, a natural extension of the approximate sparsity commonly used in the literature. The uniform consistency property is derived for the proposed spot volatility matrix estimator with convergence rates comparable to the optimal minimax one. For high-frequency data contaminated by microstructure noise, we introduce a localized pre-averaging estimation method that reduces the effective magnitude of the noise. We then use the estimation tool developed in the noise-free scenario and derive the uniform convergence rates for the developed spot volatility matrix estimator. We further combine kernel smoothing with the shrinkage technique to estimate the time-varying volatility matrix of the high-dimensional noise vector. In addition, we consider large spot volatility matrix estimation in time-varying factor models with observable risk factors and derive the uniform convergence property. We provide numerical studies, including a simulation study and an empirical application, to examine the performance of the proposed estimation methods in finite samples.
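The following sketch illustrates the two ingredients named above, kernel smoothing in time and entrywise shrinkage, for noise-free synchronised returns; the Gaussian kernel, bandwidth h, and threshold lam are illustrative choices and do not reproduce the paper's estimator or its pre-averaging step.

```python
# Hedged sketch: kernel-weighted spot covariance at time tau with soft-thresholding of the
# off-diagonal entries; not the paper's estimator, only the basic construction it builds on.
import numpy as np

def spot_vol_matrix(returns, times, tau, h, lam):
    """returns: (n, p) high-frequency return matrix; times: (n,) observation times."""
    w = np.exp(-0.5 * ((times - tau) / h) ** 2)        # Gaussian kernel weights around tau
    w /= w.sum()
    dt = np.median(np.diff(times))                     # typical sampling interval
    sigma = (returns * w[:, None]).T @ returns / dt    # kernel-weighted covariance per unit time
    shrunk = np.sign(sigma) * np.maximum(np.abs(sigma) - lam, 0.0)   # entrywise soft-threshold
    np.fill_diagonal(shrunk, np.diag(sigma))           # leave the variances unshrunk
    return shrunk

# toy usage with simulated noise-free returns for 5 assets
rng = np.random.default_rng(1)
times = np.linspace(0.0, 1.0, 1_000)
returns = rng.normal(scale=0.01, size=(1_000, 5))
print(np.round(spot_vol_matrix(returns, times, tau=0.5, h=0.05, lam=0.02), 3))
```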
Online customer feedback management (CFM) is becoming increasingly important for businesses. Providing timely and effective responses to guest reviews can be challenging, especially as the volume of reviews grows. This paper explores the response process and the potential for artificial intelligence (AI) augmentation in response formulation. We propose an orchestration concept for human–AI collaboration in co-writing within the hospitality industry, supported by a novel NLP-based solution that combines the strengths of both human and AI. Although complete automation of the response process remains out of reach, our findings offer practical implications for improving response speed and quality through human–AI collaboration. Additionally, we formulate policy recommendations for businesses and regulators in CFM. Our study provides transferable design knowledge for developing future CFM products.
In recent decades, analysing the progression of mortality rates has become very important for both public and private pension schemes, as well as for the life insurance branch of insurance companies. Traditionally, the tools used in this field were based on stochastic and deterministic approaches that allow extrapolating mortality rates beyond the last year of observation. More recently, new techniques based on machine learning have been introduced as alternatives to traditional models, giving practitioners new opportunities. Among these, neural networks (NNs) play an important role due to their computational power and their flexibility to treat the data without any probabilistic assumption. In this paper, we apply multi-task NNs, which leverage useful information contained in multiple related tasks to help improve the generalized performance of all the tasks, to forecast mortality rates. Finally, we compare the performance of multi-task NNs to that of existing single-task NNs and traditional stochastic models on mortality data from 17 different countries.
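A minimal sketch of the multi-task idea, assuming a shared trunk with one output head per country (task); the architecture, inputs, and layer sizes are illustrative and are not the authors' specification.

```python
# Hedged sketch of a multi-task network: a shared trunk learns features common to all
# countries, and one head per country (task) predicts that country's log mortality rate
# from (age, year) inputs. Sizes and inputs are illustrative choices only.
import torch
import torch.nn as nn

class MultiTaskMortalityNet(nn.Module):
    def __init__(self, n_tasks, n_features=2, hidden=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_features, hidden), nn.Tanh(),
                                    nn.Linear(hidden, hidden), nn.Tanh())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x, task_id):
        return self.heads[task_id](self.shared(x))      # prediction for one country/task

# toy usage: 17 countries, a batch of (age, year) pairs routed to country 0's head
net = MultiTaskMortalityNet(n_tasks=17)
x = torch.tensor([[65.0, 2010.0], [70.0, 2015.0]])
print(net(x, task_id=0).shape)                          # torch.Size([2, 1])
```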
The conditional expectation $m_{X}(s)=\mathrm{E}[X|S=s]$, where X and Y are two independent random variables with $S=X+Y$, plays a key role in various actuarial applications. For instance, considering the conditional mean risk-sharing rule, $m_X(s)$ determines the contribution of the agent holding the risk X to a risk-sharing pool. It is also a relevant function in the context of risk management, for example, when considering natural capital allocation principles. The monotonicity of $m_X(\!\cdot\!)$ is particularly significant under these frameworks, and it has been linked to log-concave densities since Efron (1965). However, the log-concavity assumption may not be realistic in some applications because it excludes heavy-tailed distributions. We consider random variables with regularly varying densities to illustrate how heavy tails can lead to a nonmonotonic behavior for $m_X(\!\cdot\!)$. This paper first aims to identify situations where $m_X(\!\cdot\!)$ could fail to be increasing, depending on the tail heaviness of X and Y. Second, the paper aims to study the asymptotic behavior of $m_X(s)$ as the value s of the sum gets large. The analysis is then extended to zero-augmented probability distributions, commonly encountered in insurance applications, to sums of more than two random variables, and to two random variables with a Farlie–Gumbel–Morgenstern copula. Consequences for risk sharing and capital allocation are discussed. Many numerical examples illustrate the results.
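Numerically, $m_X(s)$ can be evaluated from the convolution formula $m_X(s)=\int x f_X(x) f_Y(s-x)\,\mathrm{d}x \big/ \int f_X(x) f_Y(s-x)\,\mathrm{d}x$; the sketch below does this on a grid for two Pareto (Lomax) densities with illustrative tail indices, which makes the possible nonmonotonic behaviour visible. The specific tail indices are assumptions, not taken from the paper.

```python
# Numerical sketch of m_X(s) = E[X | X + Y = s] via the convolution formula,
# evaluated on a grid for independent X and Y with regularly varying densities.
import numpy as np

def lomax_pdf(x, a):
    """Pareto (Lomax) density a / (1 + x)^(a + 1) on x > 0."""
    xc = np.clip(x, 0.0, None)
    return (x > 0) * a / (1.0 + xc) ** (a + 1)

def m_x(s, f_x, f_y, grid):
    """Discretised ratio of integrals; the grid spacing cancels in the ratio."""
    w = f_x(grid) * f_y(s - grid)
    return (grid * w).sum() / w.sum()

grid = np.linspace(1e-6, 500.0, 400_000)
f_x = lambda x: lomax_pdf(x, 3.0)    # lighter regularly varying tail for X
f_y = lambda x: lomax_pdf(x, 1.5)    # heavier regularly varying tail for Y
for s in (2.0, 10.0, 50.0, 200.0):
    print(f"m_X({s:g}) ≈ {m_x(s, f_x, f_y, grid):.3f}")
```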
Gaussian random polytopes have received a lot of attention, especially in the case where the dimension is fixed and the number of points goes to infinity. Our focus is on the less-studied case where the dimension goes to infinity and the number of points is proportional to the dimension d. We study several natural quantities associated with Gaussian random polytopes in this setting. First, we show that the expected number of facets is equal to $C(\alpha)^{d+o(d)}$, where $C(\alpha)$ is a constant depending on the constant of proportionality $\alpha$. We also extend this result to the expected number of k-facets. We then consider the more difficult problem of the asymptotics of the expected number of pairs of estranged facets of a Gaussian random polytope. When the number of points is 2d, we determine the constant C such that the expected number of pairs of estranged facets is equal to $C^{d+o(d)}$.
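As a small-dimension numerical companion to the asymptotic statement above, the expected facet count can be estimated by Monte Carlo; the dimension, proportionality constant, and replication count below are illustrative only.

```python
# Illustrative Monte Carlo: draw n = alpha * d standard Gaussian points in R^d and count
# the facets of their convex hull (the paper's results concern the limit d -> infinity).
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
alpha, d, reps = 2, 6, 50
facet_counts = []
for _ in range(reps):
    pts = rng.standard_normal((alpha * d, d))
    facet_counts.append(len(ConvexHull(pts).simplices))   # facets of the Gaussian polytope
print(f"d={d}, n={alpha * d}: mean number of facets ≈ {np.mean(facet_counts):.1f}")
```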
Understanding and tracking societal discourse around essential governance challenges of our times is crucial. One possible heuristic is to conceptualize discourse as a network of actors and policy beliefs.
Here, we present an exemplary and widely applicable automated approach to extract discourse networks from large volumes of media data, as a bipartite graph of organizations and beliefs connected by stance edges. Our approach leverages various natural language processing techniques, alongside qualitative content analysis. We combine named entity recognition, named entity linking, supervised text classification informed by close reading, and a novel stance detection procedure based on large language models.
We demonstrate our approach in an empirical application tracing urban sustainable transport discourse networks in the Swiss urban area of Zürich over 12 years, based on more than one million paragraphs extracted from slightly less than two million newspaper articles.
We test the internal validity of our approach. Based on evaluations against manually annotated data, we find support for what we call the window validity hypothesis of automated discourse network data gathering: internal validity increases when inferences are combined over sliding time windows.
Our results show that, when leveraging data redundancy and stance inertia through windowed aggregation, automated methods can recover the basic structure and higher-level structurally descriptive metrics of discourse networks well. Our results also demonstrate the necessity of high-quality test sets and close reading, and show that the effort invested in automation should be weighed carefully.
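A minimal sketch of the data structure and the windowed aggregation described above, with hypothetical organisation–belief statements and a majority-stance rule standing in for the paper's NLP pipeline and aggregation choices.

```python
# Hedged sketch: stance-bearing statements aggregated into a signed bipartite
# organisation-belief network over a sliding time window. Statements, window length,
# and the majority rule are illustrative, not the paper's actual data or procedure.
from collections import defaultdict

# (year, organisation, belief, stance) tuples, as an upstream NLP pipeline might emit
statements = [
    (2015, "City of Zürich", "expand bike lanes", +1),
    (2016, "City of Zürich", "expand bike lanes", +1),
    (2016, "Motorists' association", "expand bike lanes", -1),
    (2017, "City of Zürich", "congestion pricing", +1),
]

def window_network(statements, start, length=3):
    """Aggregate stances into signed organisation-belief edges over [start, start + length)."""
    edges = defaultdict(list)
    for year, org, belief, stance in statements:
        if start <= year < start + length:
            edges[(org, belief)].append(stance)
    # majority stance per edge; redundancy within the window smooths single misclassifications
    return {edge: (1 if sum(stances) > 0 else -1) for edge, stances in edges.items()}

print(window_network(statements, start=2015))
```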
This paper explores the evolution of the concept of peace in the context of a globalized and digitalized 21st century, proposing a novel vision that shifts from viewing peace as a thing or a condition to understanding peace as a dynamic and relational process that emerges through human interactions. Building on, yet also going beyond, traditional definitions of peace as something to be found through inner reflection (virtue ethics), as the product of reason, contracts, and institutions (Enlightenment philosophy), and as the absence of different forms of violence (modern peace research), this paper introduces a new meso-level theory on networks, emphasizing the importance of connections, interactions, and relationships in the physical and online worlds. The paper is structured around three main objectives: conceptualizing relational peace in terms of the quantity and quality of interactions, mapping these interactions into networks of peace, and examining how these networks interact with their environment, including the influence of digital transformation and artificial intelligence. By integrating insights from the ethical and peace research literature, the paper makes theoretical, conceptual, and methodological contributions towards understanding peace as an emergent property of human behavior. Through this innovative approach, the paper aims to provide clarity on how peace (and violence) emerges through interactions and relations in an increasingly networked and digitalized global society, offering a foundation for future empirical research and concerted policy action in this area. It highlights the need to bridge normative and descriptive sciences to better understand and promote peace in the digital age.
Considering a double-indexed array $(Y_{n,i}\colon n\ge 1,\, i\ge 1)$ of non-negative regularly varying random variables, we study the random-length weighted sums and maxima formed from its ‘row’ sequences. These sums and maxima may have the same tail and extremal indices (Markovich and Rodionov 2020). The main constraints of the latter results are that there exists a unique series in a scheme of series with the minimum tail index and that the tail of the term number is lighter than the tail of the terms. Here, a bounded random number of series is allowed to have the minimum tail index, and the tail of the term number may be heavier than the tail of the terms. We derive the tail and extremal indices of the weighted non-stationary random-length sequences under a broader set of conditions than in Markovich and Rodionov (2020). We provide examples of random sequences for which the assumptions are valid. Perspectives on adopting the results in different application areas are formulated.
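An illustrative simulation in the spirit of the setting above, assuming Pareto row terms, a geometric term number, and a Hill estimator for the tail index; it is not a reproduction of the paper's conditions.

```python
# Hedged numerical companion: simulate random-length sums of regularly varying terms and
# estimate the tail index of the resulting sums with the Hill estimator. All distributional
# choices (Pareto terms, geometric term number) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def random_length_sum(tail_index, p=0.1):
    n = rng.geometric(p)                              # random number of terms
    return rng.pareto(tail_index, size=n).sum()       # sum of regularly varying terms

sample = np.array([random_length_sum(1.5) for _ in range(50_000)])

def hill(sample, k=500):
    """Hill estimator of the tail index from the k largest observations."""
    order = np.sort(sample)[::-1]
    return 1.0 / np.mean(np.log(order[:k] / order[k]))

print(f"Hill estimate of the tail index ≈ {hill(sample):.2f}")   # close to 1.5 here
```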
Conditional risk measures and their associated risk contribution measures are commonly employed in finance and actuarial science for evaluating systemic risk and quantifying the effects of risk interactions. This paper introduces various types of contribution ratio measures based on the multivariate conditional value-at-risk (MCoVaR), multivariate conditional expected shortfall (MCoES), and multivariate marginal mean excess (MMME) studied in [34] (Ortega-Jiménez, P., Sordo, M., & Suárez-Llorens, A. (2021). Stochastic orders and multivariate measures of risk contagion. Insurance: Mathematics and Economics, 96, 199–207) and [11] (Das, B., & Fasen-Hartmann, V. (2018). Risk contagion under regular variation and asymptotic tail independence. Journal of Multivariate Analysis, 165(1), 194–215) to assess the relative effects of a single risk when other risks in a group are in distress. The properties of these contribution risk measures are examined, and sufficient conditions for comparing these measures between two sets of random vectors are established using univariate and multivariate stochastic orders and notions of statistical dependence. Numerical examples are presented to validate these conditions. Finally, a real dataset from the cryptocurrency market is used to analyze the spillover effects through our proposed contribution measures.
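A simplified empirical sketch of a CoVaR-type contribution ratio; the distress event, the common-shock dependence structure, and the simulated losses below are illustrative and far less general than the MCoVaR, MCoES, and MMME measures studied in the paper.

```python
# Hedged sketch: empirical CoVaR-style contribution ratio for risk 1, where 'distress'
# means the other risks jointly exceed their marginal VaRs. Simulated data only.
import numpy as np

rng = np.random.default_rng(0)
n, level = 100_000, 0.95

# three positively dependent losses via a simple common-shock construction
z = rng.standard_normal(n)
losses = np.column_stack([0.6 * z + 0.8 * rng.standard_normal(n) for _ in range(3)])

var_marginal = np.quantile(losses, level, axis=0)
distress = (losses[:, 1] > var_marginal[1]) & (losses[:, 2] > var_marginal[2])

mcovar = np.quantile(losses[distress, 0], level)        # VaR of risk 1 given the others' distress
contribution_ratio = mcovar / var_marginal[0] - 1.0     # relative spillover effect on risk 1
print(f"CoVaR-type value ≈ {mcovar:.2f}, contribution ratio ≈ {contribution_ratio:.2%}")
```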
Post COVID-19 condition (PCC) refers to persistent symptoms occurring ≥12 weeks after COVID-19. This living systematic review (SR) assessed the impact of vaccination on PCC and vaccine safety among those with PCC, and was previously published with data up to December 2022. Searches were updated to 31 January 2024 and standard SR methodology was followed. Seventy-eight observational studies were included (47 new). There is moderate confidence that two doses pre-infection reduce the odds of PCC (pooled OR (pOR) 0.69, 95% CI 0.64–0.74, I² = 35.16%). There is low confidence for the remaining outcomes of one dose and three or more doses. A booster dose may further reduce the odds of PCC compared to only a primary series (pOR 0.85, 95% CI 0.74–0.98, I² = 16.85%). Among children ≤18 years old, vaccination may not reduce the odds of PCC (pOR 0.79, 95% CI 0.56–1.11, I² = 37.2%). One study suggests that vaccination within 12 weeks post-infection may reduce the odds of PCC. For those with PCC, vaccination appears safe (four studies) and may reduce the odds of PCC persistence (pOR 0.73, 95% CI 0.57–0.92, I² = 15.5%).
Health-related quality of life (HRQoL) in the context of COVID-19 is not fully understood. We assessed HRQoL using Patient-Reported Outcomes Measurement Information System® measures among 559 former COVID-19 patients and 298 non-infected individuals. HRQoL was captured once up to 2 years after the initial test. Additionally, we described associations of characteristics with impaired HRQoL. Overall, HRQoL scores were inferior among former patients. A meaningful group difference of at least three T-score points was discernible until 12 months after testing for fatigue (3.1), sleep disturbance (3.5), and dyspnoea (3.7). Cognitive function showed such a difference even at >18 months post-infection (3.3). Following dichotomization, pronounced differences in impaired HRQoL were observed in physical function (19.2% of former patients, 7.3% of non-infected) and cognitive function (37.6% of former patients, 16.5% of non-infected). The domains most commonly affected among former patients were depression (34.9%), fatigue (37.4%), and cognitive function (37.6%). Factors associated with HRQoL impairments among former patients included age (OR ≤2.1), lower education (OR ≤5.3), and COVID-19-related hospitalization (OR ≤4.7), among others. These data underline the need for continued attention from the scientific community to further investigate potential long-term health limitations after COVID-19 and ultimately establish adequate screening and management options for those affected.
We consider the stochastic differential equation $\mathrm{d}X_t = b(X_{t-})\,\mathrm{d}t + A\,\mathrm{d}Z_t$, $X_0 = x$, where $b\,:\, \mathbb{R}^d \rightarrow \mathbb{R}^d$ is a Lipschitz-continuous function, $A \in \mathbb{R}^{d \times d}$ is a positive-definite matrix, $(Z_t)_{t\geqslant 0}$ is a d-dimensional rotationally symmetric $\alpha$-stable Lévy process with $\alpha \in (1,2)$, and $x\in\mathbb{R}^{d}$ is the initial value. We use two Euler–Maruyama schemes with decreasing step sizes $\Gamma = (\gamma_n)_{n\in \mathbb{N}}$ to approximate the invariant measure of $(X_t)_{t \geqslant 0}$: one uses independent and identically distributed $\alpha$-stable random variables as innovations, and the other employs independent and identically distributed Pareto random variables. We study the convergence rates of these two approximation schemes in the Wasserstein-1 distance. For the first scheme, under the assumption that the function b is Lipschitz and satisfies a certain dissipation condition, we demonstrate a convergence rate of $\gamma^{\frac{1}{\alpha}}_n$. This convergence rate can be improved to $\gamma^{1+\frac {1}{\alpha}-\frac{1}{\kappa}}_n$ for any $\kappa \in [1,\alpha)$, provided b has the additional regularity of bounded second-order directional derivatives. For the second scheme, where the function b is assumed to be twice continuously differentiable, we establish a convergence rate of $\gamma^{\frac{2-\alpha}{\alpha}}_n$; moreover, we show that this rate is optimal for the one-dimensional stable Ornstein–Uhlenbeck process. Our theorems indicate that the recent significant result of [34] concerning the unadjusted Langevin algorithm with additive innovations can be extended to stochastic differential equations driven by an $\alpha$-stable Lévy process and that the corresponding convergence rate exhibits similar behaviour. Compared with the result in [6], our assumptions have relaxed the second-order differentiability condition, requiring only a Lipschitz condition for the first scheme, which broadens the applicability of our approach.
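A one-dimensional sketch of the first scheme, assuming the toy dissipative drift b(x) = -x, step sizes $\gamma_n = \gamma_0/n$, and symmetric $\alpha$-stable innovations drawn with SciPy; all parameters are illustrative only.

```python
# Hedged sketch: Euler-Maruyama with decreasing step sizes and i.i.d. symmetric
# alpha-stable innovations for a toy Ornstein-Uhlenbeck-type drift b(x) = -x.
# The increment Z_{t+gamma} - Z_t of an alpha-stable process scales as gamma**(1/alpha).
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, a_coef, gamma0, n_steps = 1.5, 1.0, 0.5, 5_000
b = lambda x: -x                                        # Lipschitz, dissipative toy drift

xi = levy_stable.rvs(alpha, 0.0, size=n_steps, random_state=rng)   # symmetric stable innovations
x = 0.0
for n in range(1, n_steps + 1):
    gamma_n = gamma0 / n                                # decreasing step size
    x += gamma_n * b(x) + a_coef * gamma_n ** (1.0 / alpha) * xi[n - 1]

print(f"state after {n_steps} decreasing-step Euler-Maruyama iterations: {x:.3f}")
```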
In this article, we study an optimization problem for a couple with two breadwinners with uncertain lifetimes. Both breadwinners need to choose optimal strategies for consumption, investment, housing, and life insurance purchasing to maximize their utility. The prices of housing assets and risky investment assets are assumed to be correlated. The two breadwinners are considered to have dependent mortality rates to capture the broken-heart effect. The method of copula functions is used to construct the joint survival function of the two breadwinners. Analytical solutions for the optimal strategies are obtained, and numerical results are presented.
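As a minimal illustration of the copula construction mentioned above (assuming, for concreteness, a Clayton copula with parameter $\theta>0$, which is one standard choice and not necessarily the one used in the article), the joint survival function of the two lifetimes $T_1$ and $T_2$ can be written in terms of the marginal survival functions $S_1$ and $S_2$ as $S(t_1,t_2)=C_\theta\bigl(S_1(t_1),S_2(t_2)\bigr)=\bigl(S_1(t_1)^{-\theta}+S_2(t_2)^{-\theta}-1\bigr)^{-1/\theta}$; a larger $\theta$ encodes stronger positive dependence between the two lifetimes, which is one way a broken-heart-type effect can be represented.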
The localized nature of severe weather events leads to a concentration of correlated risks that can substantially amplify aggregate event-level losses. We propose a copula-based regression model for replicated spatial data to characterize the dependence between property damage claims arising from a common storm when analyzing its financial impact. The factor copula captures the location-based spatial dependence between properties, as well as the aspatial dependence induced by the common shock of experiencing the same storm. The framework allows insurers to flexibly incorporate the observed heterogeneity in marginal models of skewed, heavy-tailed, and zero-inflated insurance losses, while retaining model interpretability in decomposing the latent sources of dependence. We present a likelihood-based estimation method to address the computational challenges arising from the discreteness and high dimensionality of the outcome of interest. Using hail damage insurance claims data from a US insurer, we demonstrate the effect of dependence on claims management decisions.
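A hedged sketch of the common-shock part of a factor-copula construction, assuming a one-factor Gaussian copula with a storm-level factor and a zero-inflated gamma marginal; the spatial component of the paper's factor copula and its actual marginal models are not reproduced.

```python
# Hedged sketch: a common storm factor induces dependence between per-property uniforms,
# which are then mapped through an illustrative zero-inflated gamma claim-size marginal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_props, rho = 500, 0.6                               # properties hit by one storm, factor loading

storm = rng.standard_normal()                         # common shock shared by all properties
eps = rng.standard_normal(n_props)                    # idiosyncratic, property-level noise
u = stats.norm.cdf(rho * storm + np.sqrt(1 - rho**2) * eps)   # dependent uniforms (copula scale)

# zero-inflated gamma marginal: 70% chance of no claim, gamma-distributed severity otherwise
p_zero, shape, scale = 0.7, 2.0, 5_000.0
q = np.maximum((u - p_zero) / (1 - p_zero), 0.0)
claims = np.where(u < p_zero, 0.0, stats.gamma.ppf(q, shape, scale=scale))
print(f"storm-level total loss ≈ {claims.sum():,.0f} across {(claims > 0).sum()} claims")
```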