The loss count distributions whose probabilities ultimately satisfy Panjer’s recursion were classified between 1981 and 2002; they split into six types, which look quite diverse. Yet the distributions are closely related: we show that their probabilities all emerge from one formula, the binomial series. We propose a parameter change that leads to a unified, practical, and intuitive representation of the Panjer distributions and their parameter space. We determine the subsets of the parameter space on which the probabilities are continuous functions of the parameters. Finally, we give an inventory of the parameterisations used for Panjer distributions.
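The recursion itself is classical: for claim-count probabilities satisfying $p_k = (a + b/k)p_{k-1}$, the compound pmf follows by Panjer's recursion. A minimal sketch (the function name and the assumption that the severity has no mass at zero are ours, not the paper's):

```python
import math

def panjer_compound(a, b, p0, f, n_max):
    """Compound pmf g[0..n_max] via Panjer's recursion.

    The claim-count pmf satisfies p_k = (a + b/k) p_{k-1} for k >= 1,
    and f[j] = P(single claim = j), with f[0] = 0 assumed so that
    g[s] = sum_{j=1}^{s} (a + b*j/s) f[j] g[s-j].
    """
    g = [0.0] * (n_max + 1)
    g[0] = p0  # P(S = 0) = P(N = 0) since claims are strictly positive
    for s in range(1, n_max + 1):
        g[s] = sum((a + b * j / s) * f[j] * g[s - j]
                   for j in range(1, min(s, len(f) - 1) + 1))
    return g

# Sanity check: a Poisson(2) claim count (a = 0, b = 2) with unit claims
# must recover the Poisson(2) pmf itself.
g = panjer_compound(0.0, 2.0, math.exp(-2.0), [0.0, 1.0], 5)
```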
In this paper we estimate the expected error of a stochastic approximation algorithm in which the maximum of a function is found using finite differences of a stochastic representation of that function. An error estimate of order $n^{-1/5}$ for the $n$th iteration is achieved using suitable parameters. The novelty with respect to previous studies is that we allow the stochastic representation to be discontinuous and to consist of possibly dependent random variables (satisfying a mixing condition).
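The underlying scheme is of Kiefer–Wolfowitz type: iterate uphill along a finite-difference estimate of the gradient of noisy function evaluations. A toy one-dimensional sketch (the gain sequences and names below are illustrative textbook choices, not the paper's):

```python
def kiefer_wolfowitz(F, x0, n_iter, a=0.5, c=1.0):
    """Maximise x -> E[F(x)] via finite differences of (possibly noisy)
    evaluations:
        x_{n+1} = x_n + a_n * (F(x_n + c_n) - F(x_n - c_n)) / (2 c_n),
    with gains a_n = a/n and widths c_n = c * n**(-1/5) (illustrative)."""
    x = x0
    for n in range(1, n_iter + 1):
        a_n = a / n
        c_n = c * n ** (-0.2)
        x += a_n * (F(x + c_n) - F(x - c_n)) / (2.0 * c_n)
    return x

# Noise-free sanity check: F(x) = -(x - 1)^2 has its maximum at x = 1,
# and the central difference is exact for a quadratic.
x_hat = kiefer_wolfowitz(lambda x: -(x - 1.0) ** 2, x0=3.0, n_iter=50)
```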
Bacterial antimicrobial resistance (AMR) is a significant threat to public health, with the sentinel ‘ESKAPEE’ pathogens being of particular concern. A cohort study spanning 5.5 years (2016–2021) was conducted at a provincial general hospital in Crete, Greece, to describe the epidemiology of ESKAPEE-associated bacteraemia with respect to levels of AMR and their impact on patient outcomes. In total, 239 bloodstream isolates were examined from 226 patients (0.7% of 32 996 admissions) with a median age of 75 years, 28% of whom had severe comorbidity and 46% of whom had a prior stay in the ICU. Multidrug resistance (MDR) was lowest for Pseudomonas aeruginosa (30%) and Escherichia coli (33%), and highest for Acinetobacter baumannii (97%); the latter included 8 isolates (22%) with extensive drug resistance (XDR), half of which were resistant to all antibiotics tested. MDR bacteraemia was more likely to be healthcare-associated than community-onset (RR 1.67, 95% CI 1.04–2.65). Inpatient mortality was 22%, 35% and 63% for non-MDR, MDR and XDR episodes, respectively (P = 0.004). Competing risks survival analysis revealed increasing mortality linked to longer hospitalisation with increasing AMR levels, as well as differential pathogen-specific effects. A. baumannii bacteraemia was the most fatal (14-day death hazard ratio 3.39, 95% CI 1.74–6.63). Differences in microbiology, AMR profile and associated mortality compared to national and international data emphasise the importance of similar investigations of local epidemiology.
We suggest two related conjectures dealing with the existence of spanning irregular subgraphs of graphs. The first asserts that any $d$-regular graph on $n$ vertices contains a spanning subgraph in which the number of vertices of each degree between $0$ and $d$ deviates from $\frac{n}{d+1}$ by at most $2$. The second is that every graph on $n$ vertices with minimum degree $\delta$ contains a spanning subgraph in which the number of vertices of each degree does not exceed $\frac{n}{\delta +1}+2$. Both conjectures remain open, but we prove several asymptotic relaxations for graphs with a large number of vertices $n$. In particular we show that if $d^3 \log n \leq o(n)$ then every $d$-regular graph with $n$ vertices contains a spanning subgraph in which the number of vertices of each degree between $0$ and $d$ is $(1+o(1))\frac{n}{d+1}$. We also prove that any graph with $n$ vertices and minimum degree $\delta$ contains a spanning subgraph in which no degree is repeated more than $(1+o(1))\frac{n}{\delta +1}+2$ times.
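For tiny graphs the first conjecture can be checked exhaustively. A brute-force sketch over all spanning subgraphs (the function name is ours; this is feasible only for a handful of edges):

```python
from itertools import combinations

def has_irregular_spanning_subgraph(n, edges, d):
    """Brute-force check of the first conjecture on a small d-regular graph:
    does some spanning subgraph have, for every degree k in 0..d, a count of
    degree-k vertices deviating from n/(d+1) by at most 2?"""
    target = n / (d + 1)
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            deg = [0] * n
            for u, v in subset:
                deg[u] += 1
                deg[v] += 1
            counts = [deg.count(k) for k in range(d + 1)]
            if all(abs(c - target) <= 2 for c in counts):
                return True
    return False

# The 5-cycle is 2-regular on 5 vertices; e.g. keeping the path
# 0-1-2-3 gives degree counts (1, 2, 2) for degrees (0, 1, 2),
# all within 2 of 5/3.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
found = has_irregular_spanning_subgraph(5, c5, 2)
```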
For a bivariate random vector $(X, Y)$, suppose $X$ is a loss variable of interest and $Y$ is a benchmark variable. This paper proposes a new variability measure, the joint tail-Gini functional, which considers not only the tail event of the benchmark variable $Y$ but also the tail information of $X$ itself. It can be viewed as a member of a class of tail Gini-type variability measures, which also includes the recently proposed tail-Gini functional. Measuring the tail variability of $X$ under extreme scenarios of the variables by extending Gini's methodology is a challenging and interesting task, and the two tail variability measures serve this purpose. We study the asymptotic behaviors of these tail Gini-type variability measures, including the tail-Gini and joint tail-Gini functionals, under both tail dependent and tail independent cases, which are modeled by copulas with the so-called tail order property. Some examples are also given to illustrate our results. In particular, a generalization of the joint tail-Gini functional is considered to provide a more flexible version.
We introduce a general two-colour interacting urn model on a finite directed graph, where each urn at a node reinforces all the urns in its out-neighbours according to a fixed, non-negative, and balanced reinforcement matrix. We show that the fraction of balls of either colour converges almost surely to a deterministic limit if either the reinforcement is not of Pólya type or the graph is such that every vertex with non-zero in-degree can be reached from some vertex with zero in-degree. We also obtain joint central limit theorems with appropriate scalings. Furthermore, in the remaining case when there are no vertices with zero in-degree and the reinforcement is of Pólya type, we restrict our analysis to a regular graph and show that the fraction of balls of either colour converges almost surely to a finite random limit, which is the same across all the urns.
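A minimal simulation sketch of one reinforcement round (the names and the specific balanced, Pólya-type rule are illustrative; the paper's reinforcement matrix is more general):

```python
import random

def reinforce_round(urns, adj, rng):
    """One round of a two-colour interacting urn process on a directed graph:
    each node i draws a ball from its own urn (urns[i] = (white, black)) and
    adds one ball of the drawn colour to every urn in its out-neighbourhood
    adj[i] -- a balanced, Polya-type reinforcement."""
    draws = [0 if rng.random() < w / (w + b) else 1 for (w, b) in urns]
    for i, colour in enumerate(draws):
        for j in adj[i]:
            w, b = urns[j]
            urns[j] = (w + 1, b) if colour == 0 else (w, b + 1)
    return urns

# Two urns reinforcing each other along a directed 2-cycle: whatever the
# draws, each urn gains exactly one ball per round (its in-degree).
urns = reinforce_round([(1, 1), (1, 1)], [[1], [0]], random.Random(42))
```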
This paper examines the preservation of several aging classes of lifetime distributions in the formation of coherent and mixed systems with independent and identically distributed (i.i.d.) or identically distributed (i.d.) component lifetimes. The increasing mean inactivity time class and the decreasing mean time to failure class are developed for the lifetime of systems with possibly dependent and i.d. component lifetimes. The increasing likelihood ratio property is also discussed for the lifetime of a coherent system with i.i.d. component lifetimes. We present sufficient conditions satisfied by the signature of a coherent system with i.i.d. components with exponential distribution, under which the decreasing mean remaining lifetime, the increasing mean inactivity time, and the decreasing mean time to failure are all satisfied by the lifetime of the system. Illustrative examples are presented to support the established results.
Data assimilation is theoretically founded on probability, statistics, control theory, information theory, linear algebra, and functional analysis. At the same time, data assimilation is a very practical subject, given its goal of estimating the posterior probability density function in realistic high-dimensional applications. This puts data assimilation at the intersection between the contrasting requirements of theory and practice. Based on over twenty years of teaching courses in data assimilation, Principles of Data Assimilation introduces a unique perspective that is firmly based on mathematical theories, but also acknowledges practical limitations of the theory. With the inclusion of numerous examples and practical case studies throughout, this new perspective will help students and researchers to competently interpret data assimilation results and to identify critical challenges of developing data assimilation algorithms. The inclusion of information theory also opens new pathways for the further development, understanding, and improvement of data assimilation methods.
Nonlinear Markov chains with finite state space were introduced by Kolokoltsov (Nonlinear Markov Processes and Kinetic Equations, 2010). The characteristic property of these processes is that the transition probabilities depend not only on the state, but also on the distribution of the process. Here we provide first results regarding their invariant distributions and long-term behaviour: we show that under a continuity assumption an invariant distribution exists and provide a sufficient criterion for the uniqueness of the invariant distribution. Moreover, we present examples of peculiar limit behaviour that cannot occur for classical linear Markov chains. Finally, we present for the case of small state spaces sufficient (and easy-to-verify) criteria for the ergodicity of the process.
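Concretely, a nonlinear Markov chain iterates $\mu_{t+1} = \mu_t P(\mu_t)$, where the transition matrix itself depends on the current law. A sketch (the particular distribution-dependent kernel below is our toy example, chosen so that every distribution is invariant even though the one-step matrix is never the identity, a behaviour a linear chain with such a matrix cannot exhibit):

```python
def evolve(mu, P_of_mu, n_steps):
    """Iterate the nonlinear Markov dynamics mu <- mu * P(mu), where the
    transition matrix P may depend on the current distribution mu."""
    k = len(mu)
    for _ in range(n_steps):
        P = P_of_mu(mu)
        mu = [sum(mu[i] * P[i][j] for i in range(k)) for j in range(k)]
    return mu

# Toy kernel: from every state, jump to state j with probability mu[j].
# Then (mu P(mu))[j] = sum_i mu[i] mu[j] = mu[j], so every distribution
# is a fixed point although P(mu) is not the identity matrix.
kernel = lambda mu: [list(mu) for _ in mu]
out = evolve([0.3, 0.7], kernel, 10)
```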
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) caused the novel global coronavirus disease 2019 (COVID-19) disease outbreak. Its pathogenesis is mostly located in the respiratory tract. However, other organs are also affected. Hence, realising how such a complex disturbance affects patients after recovery is crucial. Regarding the significance of control of COVID-19-related complications after recovery, the current study was designed to review the cellular and molecular mechanisms linking COVID-19 to significant long-term signs including renal and cardiac complications, cutaneous and neurological manifestations, as well as blood coagulation disorders. This virus can directly influence on the cells through Angiotensin converting enzyme 2 (ACE-2) to induce cytokine storm. Acute release of Interleukin-1 (IL1), IL6 and plasminogen activator inhibitor 1 (PAI-1) have been related to elevating risk of heart failure. Also, inflammatory cytokines like IL-8 and Tumour necrosis factor-α cause the secretion of von Willebrand factor (VWF) from human endothelial cells and then VWF binds to Neutrophil extracellular traps to induce thrombosis. On the other hand, the virus can damage the blood–brain barrier by increasing its permeability and subsequently enters into the central nervous system and the systemic circulation. Furthermore, SARS-induced ACE2-deficiency decreases [des-Arg9]-bradykinin (desArg9-BK) degradation in kidneys to induce inflammation, thrombotic problems, fibrosis and necrosis. Notably, the angiotensin II-angiotensin II type 1 receptor binding causes an increase in aldosterone and mineralocorticoid receptors on the surface of dendritic cells cells, leading to recalling macrophage and monocyte into inflammatory sites of skin. In conclusions, all the pathways play a key role in the pathogenesis of these disturbances. Nevertheless, more investigations are necessary to determine more pathogenetic mechanisms of the virus.
Let $f$ be the density function associated with a matrix-exponential distribution with parameters $(\boldsymbol{\alpha}, T, \boldsymbol{s})$. By exponentially tilting $f$, we find a probabilistic interpretation which generalizes the one associated with phase-type distributions. More specifically, we show that for any sufficiently large $\lambda\ge 0$, the function $x\mapsto \left(\int_0^\infty e^{-\lambda s}f(s)\,\textrm{d} s\right)^{-1}e^{-\lambda x}f(x)$ can be described in terms of a finite-state Markov jump process whose generator is tied to $T$. Finally, we show how to invert the exponential tilting in order to assign a probabilistic interpretation to $f$ itself.
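Numerically, the exponential tilt can be sketched with a grid-based normaliser (the Erlang example and helper names are ours). For $f(x) = x e^{-x}$, an Erlang-2 and hence phase-type density, tilting with parameter $\lambda$ yields another Erlang-2 density, now with rate $1+\lambda$:

```python
import math

def tilt(f, lam, x_max=40.0, dx=1e-3):
    """Exponential tilt of a density f on [0, inf):
        x -> exp(-lam*x) * f(x) / L(lam),
    where L(lam) = int_0^inf exp(-lam*s) f(s) ds is approximated by a
    left-endpoint Riemann sum on a truncated grid."""
    xs = [i * dx for i in range(int(x_max / dx))]
    L = sum(math.exp(-lam * x) * f(x) * dx for x in xs)
    return lambda x: math.exp(-lam * x) * f(x) / L

erlang2 = lambda x: x * math.exp(-x)   # Erlang-2 density with rate 1
tilted = tilt(erlang2, 1.0)            # should match 4 * x * exp(-2x)
```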
This study provides an econometric methodology for testing a linear structural relationship among economic variables. We propose the so-called distance-difference (DD) test and show that it has omnibus power against arbitrary nonlinear structural relationships. If the DD-test rejects the linear model hypothesis, a sequential testing procedure assisted by the DD-test can consistently estimate the degree of a polynomial function that arbitrarily approximates the nonlinear structural equation. Using extensive Monte Carlo simulations, we confirm the DD-test’s finite-sample properties and compare its performance with that of the sequential testing procedure assisted by the J-test and moment selection criteria. Finally, we empirically illustrate the relationship between value added and its production factors using firm-level data from the United States, and demonstrate that the production function has exhibited factor-biased technological change rather than the Hicks-neutral technology presumed by the Cobb–Douglas production function.
We have previously shown that the average measures of geographic routing’s greedy packet forwarding distance (PFD) characterize, via their dissimilarity values, a mobile ad hoc network (MANET) topology by node size. In this article, we present a distribution-based analysis of the PFD measures generated by two representative greedy algorithms, GREEDY and ELLIPSOID. The results show the potential of distribution-based dissimilarity learning of the PFD for topology characterization. Characterizing dynamic MANET topology supports context-aware performance optimization in position-based or geographic packet routing.
It was recently proven that the correlation function of the stationary version of a reflected Lévy process is nonnegative, nonincreasing, and convex. In another branch of the literature it was established that the mean value of the reflected process starting from zero is nondecreasing and concave. In the present paper it is shown, by putting them in a common framework, that these results extend to substantially more general settings. Indeed, instead of reflected Lévy processes, we consider a class of more general stochastically monotone Markov processes. In this setup we show monotonicity results associated with a supermodular function of two coordinates of our Markov process, from which the above-mentioned monotonicity and convexity/concavity results directly follow, but now for the class of Markov processes considered rather than just reflected Lévy processes. In addition, various results for the transient case (when the Markov process is not in stationarity) are provided. The conditions imposed are natural, in that they are satisfied by various frequently used Markovian models, as illustrated by a series of examples.
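In discrete time, the prototypical stochastically monotone example is the reflected random walk given by the Lindley recursion $W_{n+1} = \max(W_n + X_{n+1}, 0)$. A sketch (names ours) illustrating both the recursion and the kind of monotonicity in the initial state that such results rely on:

```python
def reflected_path(w0, increments):
    """Path of the Lindley recursion W_{n+1} = max(W_n + X_{n+1}, 0),
    a discrete-time analogue of a reflected Levy process."""
    path = [w0]
    for x in increments:
        path.append(max(path[-1] + x, 0.0))
    return path

# Reflection at zero kicks in after the second increment; a path started
# higher stays (weakly) higher, illustrating stochastic monotonicity.
p = reflected_path(0.0, [1.0, -2.0, 3.0, -1.0])
q = reflected_path(1.0, [1.0, -2.0, 3.0, -1.0])
```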
Combining cross-sectional and time-series data is a long and well-established practice in empirical economics. We develop a central limit theory that explicitly accounts for possible dependence between the two datasets. We focus on common factors as the mechanism behind this dependence. Using our central limit theorem (CLT), we establish the asymptotic properties of parameter estimates of a general class of models based on a combination of cross-sectional and time-series data, recognizing the interdependence between the two data sources in the presence of aggregate shocks. Despite the complicated nature of the analysis required to formulate the joint CLT, it is straightforward to implement the resulting parameter limiting distributions due to a formal similarity of our approximations with Murphy and Topel’s (1985, Journal of Business and Economic Statistics 3, 370–379) formula.
Reducing negative attitudes toward older adults is an urgent issue. A previous study conducted “stereotype embodiment theory”-based interventions (SET interventions), which present participants with the content of SET and related empirical findings. I focus on the subjective time to become older (the perception of how long people feel it will be before they become old) as a mechanism for the effect of SET interventions. I compare an SET intervention group with a control group in which participants are presented with an irrelevant vignette. Data from 641 participants (M = 31.97 years) were analyzed. The SET intervention shortened the subjective time to become older and reduced negative attitudes toward older adults. When designing SET interventions, it would be useful to focus not only on self-interested motives to avoid age discrimination but also on the subjective time to become older.
We present a study of the joint distribution of both the state of a level-dependent quasi-birth–death (QBD) process and its associated running maximum level, at a fixed time t. More specifically, we derive expressions for the Laplace transforms of transition functions that contain this information, and these expressions feature familiar constructs from the classical theory of QBD processes. Indeed, one important takeaway from our results is that the distribution of the running maximum level of a level-dependent QBD process can be studied using results highly analogous to those of the more well-established theory of level-dependent QBD processes, which focuses primarily on the joint distribution of the level and phase. We also explain how our methods naturally extend to the study of level-dependent Markov processes of M/G/1 type if we keep track of the running minimum level instead of the running maximum level.
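A Monte Carlo sketch of the object being studied: simulate a small level-dependent QBD with a two-state phase and record the running maximum level alongside the terminal (level, phase) state (all rates and names below are hypothetical, purely for illustration):

```python
import random

def simulate_qbd(t_end, rng, lam=1.0, mu=1.5, phase_rate=0.5):
    """Simulate a toy level-dependent QBD in continuous time: the birth rate
    lam/(level+1) depends on the level, deaths occur at rate mu above level 0,
    and a 2-state phase flips at rate phase_rate.  Returns the state at time
    t_end together with the running maximum level."""
    level, phase, t, max_level = 0, 0, 0.0, 0
    while True:
        up = lam / (level + 1)            # level-dependent birth rate
        down = mu if level > 0 else 0.0   # no deaths below level 1
        total = up + down + phase_rate
        t += rng.expovariate(total)
        if t > t_end:
            return level, phase, max_level
        u = rng.random() * total
        if u < up:
            level += 1
            max_level = max(max_level, level)
        elif u < up + down:
            level -= 1
        else:
            phase = 1 - phase

level, phase, peak = simulate_qbd(10.0, random.Random(0))
```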
As relational event models become an increasingly popular approach for studying relational structures, the reliability of large-scale event data collection becomes more and more important. Automated or human-coded events often suffer from non-negligible false-discovery rates in event identification, and most sensor data are based primarily on actors’ spatial proximity within predefined time windows, so the observed events could reflect either a social relationship or random co-location. Both examples imply spurious events that may bias estimates and inference. We propose the Relational Event Model for Spurious Events (REMSE), an extension of existing approaches for interaction data, which provides a flexible solution for modeling such data while controlling for spurious events. Estimation is carried out in an empirical Bayesian approach via data augmentation. Based on a simulation study, we investigate the properties of the estimation procedure. To demonstrate its usefulness, we apply the model in two distinct applications: combat events from the Syrian civil war and student co-location data. Results from the simulation and the applications identify the REMSE as a suitable approach for modeling relational event data in the presence of spurious events.
Aeroengine performance is determined by temperature and pressure profiles along various axial stations within an engine. Given limited sensor measurements, we require a statistically principled approach for inferring these profiles. In this paper we detail a Bayesian methodology for interpolating the spatial temperature or pressure profile at axial stations within an aeroengine. The profile at any given axial station is represented as a spatial Gaussian random field on an annulus, with circumferential variations modelled using a Fourier basis and radial variations modelled with a squared exponential kernel. This Gaussian random field is extended to ingest data from multiple axial measurement planes, with the aim of transferring information across the planes. To facilitate this type of transfer learning, a novel planar covariance kernel is proposed. In the scenario where the frequencies comprising the temperature field are unknown, we utilise a sparsity-promoting prior on the frequencies to encourage sparse representations. This extends easily to cases with multiple engine planes whilst accommodating frequency variations between the planes. The main quantity of interest, the spatial area average, is readily obtained in closed form. We term this the Bayesian area average and demonstrate how this metric offers far more representative averages than a sector area average, a widely used area-averaging approach. Furthermore, the Bayesian area average naturally decomposes the posterior uncertainty into terms characterising insufficient sampling and sensor measurement error, respectively. This, too, provides a significant improvement over prior standard-deviation-based uncertainty breakdowns.
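The circumferential Fourier representation at the heart of the method can be sketched for equally spaced probes, where orthogonality of the design makes the least-squares coefficients simple dot products and the circumferential average is exactly the zeroth coefficient (a deliberate simplification of the paper's full Bayesian treatment; names ours):

```python
import math

def fourier_coefficients(thetas, values, n_harm):
    """Least-squares Fourier fit
        f(theta) ~ a0 + sum_k [a_k cos(k theta) + b_k sin(k theta)]
    on equally spaced circumferential samples.  By orthogonality, a0 is
    the circumferential average and harmonics do not bias it."""
    N = len(values)
    a0 = sum(values) / N
    harmonics = []
    for k in range(1, n_harm + 1):
        ak = 2.0 / N * sum(v * math.cos(k * t) for t, v in zip(thetas, values))
        bk = 2.0 / N * sum(v * math.sin(k * t) for t, v in zip(thetas, values))
        harmonics.append((ak, bk))
    return a0, harmonics

# Eight equally spaced probes sampling 3 + 2 cos(theta): the first harmonic
# is recovered and leaves the average untouched.
thetas = [2.0 * math.pi * k / 8 for k in range(8)]
vals = [3.0 + 2.0 * math.cos(t) for t in thetas]
a0, harm = fourier_coefficients(thetas, vals, 2)
```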