This paper considers ergodic, continuous-time Markov chains $\{X(t)\}_{t \in (-\infty,\infty)}$ on $\mathbb{Z}^+=\{0,1,\ldots\}$. For an arbitrarily fixed $N \in \mathbb{Z}^+$, we study the conditional stationary distribution $\boldsymbol{\pi}(N)$ given that the Markov chain is in $\{0,1,\ldots,N\}$. We first characterize $\boldsymbol{\pi}(N)$ via systems of linear inequalities and identify simplices that contain $\boldsymbol{\pi}(N)$, by examining the $(N+1) \times (N+1)$ northwest corner block of the infinitesimal generator $\textbf{\textit{Q}}$ and the subset of the first $N+1$ states whose members are directly reachable from at least one state in $\{N+1,N+2,\ldots\}$. These results are closely related to the augmented truncation approximation (ATA), and we provide some practical implications for the ATA. Next we extend these results, using the $(K+1) \times (K+1)$ ($K > N$) northwest corner block of $\textbf{\textit{Q}}$ and the subset of the first $K+1$ states whose members are directly reachable from at least one state in $\{K+1,K+2,\ldots\}$. Furthermore, we introduce new state transition structures called $(K,N)$-skip-free sets, with which we obtain the minimum convex polytope containing $\boldsymbol{\pi}(N)$.
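As a rough illustration of how the ATA mentioned above is used in practice, the sketch below builds the northwest corner block of a hypothetical birth–death generator (the rates lam and mu, the truncation level K, and the last-column augmentation scheme are all illustrative assumptions, not the paper's setting) and reads off the conditional distribution on $\{0,\ldots,N\}$.

```python
import numpy as np

def nw_corner(lam, mu, K):
    """(K+1)x(K+1) northwest corner block of the generator Q of an
    M/M/1-type chain on {0,1,...}; row K is missing its rate lam to K+1."""
    Q = np.zeros((K + 1, K + 1))
    for i in range(K + 1):
        Q[i, i] = -(lam + (mu if i > 0 else 0.0))
        if i > 0:
            Q[i, i - 1] = mu
        if i < K:
            Q[i, i + 1] = lam
    return Q

def ata_stationary(block):
    """Last-column augmentation: return the deficit rate mass to the
    last state so rows sum to zero, then solve pi Q = 0, sum(pi) = 1."""
    Q = block.copy()
    Q[:, -1] -= Q.sum(axis=1)
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # stack normalization constraint
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

K, N = 50, 10
pi_K = ata_stationary(nw_corner(lam=0.8, mu=1.0, K=K))
pi_N = pi_K[: N + 1] / pi_K[: N + 1].sum()   # conditional distribution on {0,...,N}
print(pi_N)
```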
The angular power spectrum is a natural tool for analysing observed galaxy number count fluctuations. In a standard analysis, the angular galaxy distribution is sliced into concentric redshift bins and all correlations of its harmonic coefficients between bin pairs are considered, a procedure referred to as ‘tomography’. However, the unparalleled quality of data from upcoming spectroscopic galaxy surveys for cosmology will render this method computationally unfeasible, given the increasing number of bins. Here, we test against synthetic data a novel method, proposed in a previous study, to save computational time. In this method, the whole galaxy redshift distribution is subdivided into thick bins and the cross-bin correlations among them are neglected; each of the thick bins is, however, further subdivided into thinner bins, for which all the cross-bin correlations are retained. We create a simulated data set that we then analyse in a Bayesian framework. We confirm that the newly proposed method saves computational time and gives results that surpass those of the standard approach.
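To see where the claimed computational saving comes from, the following back-of-the-envelope sketch counts the number of angular spectra entering the likelihood under full tomography versus the hybrid thick/thin-bin scheme; the bin counts are illustrative, not the survey configuration analysed in the paper.

```python
def n_spectra_full(n_bins):
    """All auto- and cross-spectra between n_bins bins: n(n+1)/2."""
    return n_bins * (n_bins + 1) // 2

def n_spectra_hybrid(n_thick, thin_per_thick):
    """Cross-spectra kept only within each thick bin."""
    return n_thick * n_spectra_full(thin_per_thick)

print(n_spectra_full(100))        # 5050 spectra under full tomography
print(n_spectra_hybrid(10, 10))   # 550 spectra under the hybrid scheme
```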
Despite high exposure to Middle East respiratory syndrome coronavirus (MERS-CoV), the predictors of seropositivity in the context of camel husbandry practices in Eastern Africa are not well understood. We conducted a cross-sectional survey to describe the camel herd profile and determine the factors associated with MERS-CoV seropositivity in Northern Kenya. We enrolled 29 camel-owning households and administered questionnaires to collect herd and household data. Serum samples collected from 493 randomly selected camels were tested for anti-MERS-CoV antibodies using a microneutralisation assay, and regression analysis was used to correlate herd and household characteristics with camel seropositivity. Households reared camels (median = 23 camels, IQR 16–56) and at least one other livestock species in two distinct herds: a home herd kept near the homestead, and a range/fora herd residing far from the homestead. The overall MERS-CoV IgG seropositivity was 76.3%, with no statistically significant difference between home and fora herds. Significant predictors of seropositivity (P ⩽ 0.05) included camels aged 6–10 years (aOR 2.3, 95% CI 1.0–5.2), herds with ⩾25 camels (aOR 2.0, 95% CI 1.2–3.4) and camels from the Gabra community (aOR 2.3, 95% CI 1.2–4.2). These results suggest high levels of virus transmission among camels, with potential for human infection.
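A minimal sketch of the kind of regression analysis described above: a multivariable logistic regression whose exponentiated coefficients are adjusted odds ratios (aOR) with 95% confidence intervals. The data frame below is synthetic and the column names merely mirror the predictors reported in the abstract; it is not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 493
df = pd.DataFrame({
    "seropositive": rng.binomial(1, 0.76, n),
    "age_6_10":     rng.binomial(1, 0.3, n),   # camel aged 6-10 years
    "herd_ge_25":   rng.binomial(1, 0.5, n),   # herd size >= 25 camels
    "gabra":        rng.binomial(1, 0.4, n),   # Gabra community
})

X = sm.add_constant(df[["age_6_10", "herd_ge_25", "gabra"]])
fit = sm.Logit(df["seropositive"], X).fit(disp=False)
aor = np.exp(fit.params)                       # adjusted odds ratios
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor.rename("aOR"), ci], axis=1))
```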
We consider a fractional Brownian motion with linear drift whose unknown drift coefficient has a normal prior distribution, and we construct a sequential test of the hypothesis that the drift is positive against the alternative that it is negative. We show that the problem of constructing the test reduces to an optimal stopping problem for a standard Brownian motion obtained by a transformation of the fractional Brownian motion. The solution is described as the first exit time from some set, and it is shown that its boundaries satisfy a certain integral equation, which is solved numerically.
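For readers who want to experiment, here is a minimal sketch of simulating the observation process $X(t) = \theta t + B^H(t)$, with the drift $\theta$ drawn from a normal prior, by Cholesky factorisation of the fBm covariance. The Hurst index, prior parameters and time grid are illustrative assumptions; the sequential test itself is beyond this sketch.

```python
import numpy as np

def fbm_path(H, times, rng):
    """Sample fractional Brownian motion on a grid via Cholesky of its
    covariance cov(s, t) = (s^2H + t^2H - |t - s|^2H) / 2."""
    s, t = np.meshgrid(times, times)
    cov = 0.5 * (s**(2 * H) + t**(2 * H) - np.abs(t - s)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(times)))
    return L @ rng.standard_normal(len(times))

rng = np.random.default_rng(1)
times = np.linspace(0.01, 1.0, 200)     # exclude t = 0 (degenerate row)
theta = rng.normal(0.0, 1.0)            # drift drawn from a N(0, 1) prior
X = theta * times + fbm_path(0.7, times, rng)
print(X[:3])
```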
We provide the first generic exact simulation algorithm for multivariate diffusions. Current exact sampling algorithms for diffusions require the existence of a transformation that reduces the sampling problem to the case of a constant diffusion matrix and a drift which is the gradient of some function. Such a transformation, called the Lamperti transformation, can in general be applied only in one dimension, so completely different ideas are required for the exact sampling of generic multivariate diffusions. The development of these ideas is the main contribution of this paper. Our strategy combines techniques from the theory of rough paths on the one hand and multilevel Monte Carlo on the other.
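The one-dimensional Lamperti transformation mentioned above can be illustrated symbolically: $F(x)=\int dx/\sigma(x)$ maps $dX = b(X)\,dt + \sigma(X)\,dW$ to a unit-diffusion SDE via Itô's formula. The coefficients below ($\sigma(x)=x$, $b(x)=\mu x$) are an illustrative example, not the class treated in the paper.

```python
import sympy as sp

x, mu = sp.symbols("x mu", positive=True)
sigma = x                        # diffusion coefficient (illustrative)
b = mu * x                       # drift coefficient (illustrative)

F = sp.integrate(1 / sigma, x)   # Lamperti map F(x) = log(x)
# Ito's formula for Y = F(X): dY = (b/sigma - sigma'/2) dt + dW
unit_diffusion_drift = sp.simplify(b / sigma - sp.diff(sigma, x) / 2)

print(F)                         # log(x)
print(unit_diffusion_drift)      # mu - 1/2
```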
An important problem in modeling networks is how to generate a randomly sampled graph with given degrees. A popular model is the configuration model, a network with assigned degrees and random connections. The erased configuration model is obtained when self-loops and multiple edges in the configuration model are removed. We prove an upper bound on the number of erased edges for regularly varying degree distributions with infinite variance, and use this result to prove central limit theorems for Pearson’s correlation coefficient and the clustering coefficient in the erased configuration model. Our results explain the structural correlations in the erased configuration model and show that removing edges leads to a different scaling of the clustering coefficient. We prove that for the rank-1 inhomogeneous random graph, another null model that creates scale-free simple networks, the results for Pearson’s correlation coefficient as well as for the clustering coefficient are similar to those for the erased configuration model.
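A minimal simulation sketch of the erased configuration model, assuming networkx is available: pair half-edges uniformly at random, then collapse multiple edges and delete self-loops. The heavy-tailed degree distribution and sample size are illustrative choices.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
# Pareto tail with exponent 1.5: finite mean, infinite variance
degrees = (rng.pareto(1.5, n) + 1).astype(int) + 1
if degrees.sum() % 2:                                 # total degree must be even
    degrees[0] += 1

multi = nx.configuration_model(degrees, seed=42)
erased = nx.Graph(multi)                              # collapse multiple edges
erased.remove_edges_from(nx.selfloop_edges(erased))   # drop self-loops

print("erased edges:", multi.number_of_edges() - erased.number_of_edges())
print("clustering:", nx.average_clustering(erased))
print("Pearson:", nx.degree_pearson_correlation_coefficient(erased))
```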
This study aimed to analyse the spatial–temporal distribution of COVID-19 mortality in Sergipe, Northeast Brazil. It was an ecological study utilising spatiotemporal analysis techniques that included all deaths confirmed as due to COVID-19 in Sergipe from 2 April to 14 June 2020. Mortality rates were calculated per 100 000 inhabitants and temporal trends were analysed using a segmented log-linear model. For spatial analysis, a kernel estimator was used and the crude mortality rates were smoothed by the empirical Bayesian method. The prospective space–time scan statistic was applied using the Poisson probability model. There were 391 registered COVID-19 deaths, with the majority among those aged ⩾60 years (62%) and males (53%). The most prevalent comorbidities were hypertension (40%), diabetes (31%) and cardiovascular disease (15%). An increasing mortality trend across the state was observed, with a higher increase in the countryside. An active spatiotemporal cluster of mortality comprising the metropolitan area and neighbouring cities was identified. The trend of COVID-19 mortality in Sergipe was increasing, and the spatial distribution of deaths was heterogeneous, with progression towards the countryside. Therefore, the use of spatial analysis techniques may contribute to the surveillance and control of the COVID-19 pandemic.
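A minimal sketch of the empirical Bayesian smoothing step described above, using a Marshall-type global shrinkage estimator as an assumed stand-in for the exact smoother applied in the study; the municipality populations and death counts are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
pop = rng.integers(5_000, 500_000, size=75)        # 75 municipalities
deaths = rng.poisson(5e-5 * pop)                   # synthetic death counts

rate = deaths / pop                                # crude mortality rates
m = deaths.sum() / pop.sum()                       # global mean rate
s2 = np.sum(pop * (rate - m) ** 2) / pop.sum()     # weighted variance of rates
A = max(s2 - m / pop.mean(), 0.0)                  # between-area variance
shrink = A / (A + m / pop)                         # small areas shrink hardest
smoothed = m + shrink * (rate - m)                 # pull crude rates to the mean

print((rate * 1e5).round(1)[:5])                   # per 100 000, crude
print((smoothed * 1e5).round(1)[:5])               # per 100 000, smoothed
```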
Providing optimal strategies for maintaining technical systems in good working condition is an important goal in reliability engineering. The main aim of this paper is to propose some optimal maintenance policies for coherent systems based on partial information about the status of components in the system. For this purpose, in the first part of the paper we propose two criteria under which we compute the probability of the number of failed components in a coherent system with independent and identically distributed components. The first criterion uses partial information about the status of the components under a single inspection of the system, and the second uses partial information about the status of component failures under double monitoring of the system. In the computation of both criteria we use the notion of the signature vector associated with the system. We also make some stochastic comparisons between two coherent systems based on the proposed concepts. Then, by imposing some cost functions, we introduce new approaches to the optimal corrective and preventive maintenance of coherent systems. To illustrate the results, some examples are examined numerically and graphically.
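The signature vector mentioned above can be computed by brute force for small systems. The sketch below does so for the classic five-component bridge system, an illustrative choice rather than a system from the paper: for each of the $n!$ equally likely failure orders, record the ordinal of the component failure that brings the system down.

```python
import itertools
import math

def bridge_works(up):
    """Bridge structure: minimal path sets {0,3}, {1,4}, {0,2,4}, {1,2,3}."""
    return any(p <= up for p in [{0, 3}, {1, 4}, {0, 2, 4}, {1, 2, 3}])

n = 5
counts = [0] * n
for order in itertools.permutations(range(n)):    # one failure order
    up = set(range(n))
    for i, comp in enumerate(order):
        up.discard(comp)
        if not bridge_works(up):                  # system dies at failure i+1
            counts[i] += 1
            break

signature = [c / math.factorial(n) for c in counts]
print(signature)    # [0.0, 0.2, 0.6, 0.2, 0.0] for the bridge system
```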
We consider the problem of numerical integration when the sampling nodes form a stationary point process on the real line. In previous papers it was argued that a naïve Riemann sum approach can cause severe variance inflation when the sampling points are not equidistant. We show that this inflation can be avoided using a higher-order Newton–Cotes quadrature rule which exploits smoothness properties of the integrand. Under mild assumptions, the resulting estimator is unbiased and its variance asymptotically obeys a power law as a function of the mean point distance. If the Newton–Cotes rule is of sufficiently high order, the exponent of this law turns out to depend on the point process only through its mean point distance. We illustrate our findings with the stereological estimation of the volume of a compact object, suggesting alternatives to the well-established Cavalieri estimator.
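A minimal Monte Carlo sketch of the variance-inflation point made above, comparing the naïve Riemann sum with the trapezoidal rule (the simplest higher-order Newton–Cotes rule) when the nodes come from a Poisson process; the integrand, intensity and endpoint handling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.sin(2 * np.pi * x)              # integral over [0,1] is 0

def estimates(intensity):
    """One Riemann-sum and one trapezoidal estimate from Poisson nodes."""
    x = np.sort(rng.uniform(0, 1, rng.poisson(intensity)))
    x = np.concatenate(([0.0], x, [1.0]))        # pin the interval endpoints
    dx = np.diff(x)
    riemann = np.sum(dx * f(x[:-1]))
    trapezoid = np.sum(dx * (f(x[:-1]) + f(x[1:])) / 2)
    return riemann, trapezoid

samples = np.array([estimates(100) for _ in range(2000)])
print("variance, Riemann sum :", samples[:, 0].var())
print("variance, trapezoidal :", samples[:, 1].var())
```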
Fractal percolation exhibits a dramatic topological phase transition, changing abruptly from a dust-like set to a system-spanning cluster. The transition points are unknown and difficult to estimate. In many classical percolation models the percolation thresholds have been approximated well using additive geometric functionals, known as intrinsic volumes. Motivated by the question of whether a similar approach is possible for fractal models, we introduce corresponding geometric functionals for the fractal percolation process F. They arise as limits of expected functionals of finite approximations of F. We establish the existence of these limit functionals and obtain explicit formulas for them as well as for their finite approximations.
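The finite approximations mentioned above are easy to simulate. The following sketch generates the retention grid of Mandelbrot's fractal percolation on $[0,1]^2$; the subdivision number $M$, retention probability $p$ and depth are illustrative parameters.

```python
import numpy as np

def fractal_percolation(M=3, p=0.8, depth=4, seed=0):
    """Indicator grid of the depth-th finite approximation of F."""
    rng = np.random.default_rng(seed)
    kept = np.ones((1, 1), dtype=bool)            # level 0: the unit square
    for _ in range(depth):
        # subdivide every cell into M x M subcells ...
        kept = np.kron(kept.astype(np.uint8),
                       np.ones((M, M), dtype=np.uint8)).astype(bool)
        # ... and retain each subcell independently with probability p
        kept &= rng.random(kept.shape) < p
    return kept

F4 = fractal_percolation()
print("retained squares:", int(F4.sum()), "of", F4.size)
```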
Draw-down time for a stochastic process is the first passage time of a draw-down level that depends on the previous maximum of the process. In this paper we study the draw-down-related Parisian ruin problem for spectrally negative Lévy risk processes. Intuitively, a draw-down Parisian ruin occurs when the surplus process has stayed continuously below the dynamic draw-down level for a fixed amount of time. We introduce the draw-down Parisian ruin time and solve the corresponding two-sided exit problems via excursion theory. We also find an expression for the potential measure of the process killed at the draw-down Parisian ruin time. As applications, we obtain new results for spectrally negative Lévy risk processes with a dividend barrier and with Parisian ruin.
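A minimal Monte Carlo sketch of the draw-down Parisian ruin time, assuming a Cramér–Lundberg surplus process as a concrete spectrally negative Lévy process: ruin is declared once the surplus has stayed below the running maximum minus $a$ for $r$ consecutive time units. All parameters and the time discretisation are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
dt, T = 0.01, 50.0                       # step size and time horizon
c, lam, mean_claim = 1.5, 1.0, 1.0       # premium rate, claim rate and size
a, r = 2.0, 1.0                          # draw-down width, Parisian delay

def ruined(x0):
    """One path: has the surplus spent r consecutive units below M - a?"""
    x = m = x0
    below = 0.0
    for _ in range(int(T / dt)):
        x += c * dt - rng.exponential(mean_claim, rng.poisson(lam * dt)).sum()
        m = max(m, x)                    # running maximum
        below = below + dt if x < m - a else 0.0
        if below >= r:
            return True
    return False

print("estimated ruin probability:", np.mean([ruined(1.0) for _ in range(200)]))
```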
Elementary treatments of Markov chains, especially those devoted to discrete-time and finite state-space theory, leave the impression that everything is smooth and easy to understand. This exposition of the works of Kolmogorov, Feller, Chung, Kato, and other mathematical luminaries, which focuses on continuous-time chains but is not so far from being elementary itself, reminds us again that the impression is false: an infinite, but denumerable, state-space is where the fun begins. If you have not heard of Blackwell's example (in which all states are instantaneous), do not understand what the minimal process is, or do not know what happens after explosion, dive right in. But beware lest you are enchanted: 'There are more spells than your commonplace magicians ever dreamed of.'
A lack of political legitimacy undermines the ability of the European Union (EU) to resolve major crises and threatens the stability of the system as a whole. By integrating digital data into political processes, the EU seeks to base decision-making increasingly on sound empirical evidence. In particular, artificial intelligence (AI) systems have the potential to increase political legitimacy by identifying pressing societal issues, forecasting potential policy outcomes, and evaluating policy effectiveness. This paper investigates how citizens’ perceptions of EU input, throughput, and output legitimacy are influenced by three distinct decision-making arrangements: (a) independent human decision-making by EU politicians; (b) independent algorithmic decision-making (ADM) by AI-based systems; and (c) hybrid decision-making (HyDM) by EU politicians and AI-based systems together. The results of a preregistered online experiment (n = 572) suggest that existing EU decision-making arrangements are still perceived as the most participatory and accessible for citizens (input legitimacy). However, regarding the decision-making process itself (throughput legitimacy) and its policy outcomes (output legitimacy), no difference was observed between the status quo and HyDM. Respondents tended to perceive ADM systems acting as the sole decision-maker as illegitimate. The paper discusses the implications of these findings for (a) EU legitimacy and (b) data-driven policy-making and outlines (c) avenues for future research.
The concept of a “digital twin” as a model for data-driven management and control of physical systems has emerged over the past decade in the domains of manufacturing, production, and operations. In the context of buildings and civil infrastructure, the notion of a digital twin remains ill-defined, with little or no consensus among researchers and practitioners on the ways in which digital twin processes and data-centric technologies can support design and construction. This paper builds on existing concepts of Building Information Modeling (BIM), lean project production systems, automated data acquisition from construction sites and supply chains, and artificial intelligence to formulate a mode of construction that applies digital twin information systems to achieve closed-loop control. It contributes a set of four core information and control concepts for digital twin construction (DTC), which define the dimensions of the conceptual space for the information used in DTC workflows. Working from the core concepts, we propose a DTC information system workflow—including information stores, information processing functions, and monitoring technologies—according to three concentric control workflow cycles. DTC should be viewed as a comprehensive mode of construction that prioritizes closing the control loops, rather than as an extension of BIM tools integrated with sensing and monitoring technologies.