Zhu and He [(2018). A new closed-form formula for pricing European options under a skew Brownian motion. The European Journal of Finance 24(12): 1063–1074] provided an innovative closed-form solution by replacing the standard Brownian motion in the Black–Scholes framework with a particular skew Brownian motion. Their formula involves numerically integrating the product of the Gaussian density and the corresponding distribution function. In contrast to their pricing formula, we derive a much simpler formula that involves only the Gaussian distribution function and Owen's $T$ function.
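Both building blocks of the simplified formula are available in standard numerical libraries. A minimal sketch, assuming SciPy; it only illustrates evaluating the Gaussian distribution function and Owen's $T$ function at sample arguments, not the paper's actual pricing formula.

```python
from scipy.stats import norm
from scipy.special import owens_t

# Building blocks of the simplified formula: the Gaussian
# distribution function and Owen's T function. Illustrative
# evaluation only; the paper's pricing formula is not
# reproduced here.
h, a = 0.0, 1.0
print(norm.cdf(h))    # Gaussian CDF at 0 -> 0.5
print(owens_t(h, a))  # T(0, 1) = arctan(1) / (2*pi) = 0.125
```

Owen's $T$ function satisfies $T(0,a)=\arctan(a)/(2\pi)$, which gives the exact value $1/8$ above.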
We prove new mixing rate estimates for the random walks on homogeneous spaces determined by a probability distribution on a finite group $G$. We introduce the switched random walk determined by a finite set of probability distributions on $G$, prove that its long-term behaviour is determined by the Fourier joint spectral radius of the distributions, and give Hermitian sum-of-squares algorithms for the effective estimation of this quantity.
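For the abelian special case, the role of the Fourier spectrum in mixing can be sketched directly. The toy below works on the cyclic group $\mathbb{Z}/6$ with a single distribution; the paper's Fourier joint spectral radius generalizes the largest nontrivial Fourier coefficient modulus to several distributions on a possibly nonabelian group, which this sketch does not capture.

```python
import numpy as np

# Toy illustration on Z/6: for a single distribution mu, the
# largest nontrivial Fourier coefficient modulus controls the
# mixing rate of the random walk. The t-fold convolution is
# computed in Fourier space via the convolution theorem.
n = 6
mu = np.array([0.5, 0.25, 0.0, 0.0, 0.0, 0.25])  # lazy +/-1 step
mu_hat = np.fft.fft(mu)                  # Fourier coefficients
rho = np.max(np.abs(mu_hat[1:]))         # largest nontrivial modulus

def tv_to_uniform(t):
    # Total variation distance of the t-step distribution to uniform.
    conv = np.real(np.fft.ifft(mu_hat ** t))  # t-fold convolution
    return 0.5 * np.sum(np.abs(conv - 1.0 / n))

print(rho)                # 0.75 for this walk
print(tv_to_uniform(50))  # decays like rho**t, so essentially 0
```

Here `rho` plays the role that the Fourier joint spectral radius plays in the switched, nonabelian setting.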
We present PolicyCLOUD: a prototype for an extensible serverless cloud-based system that supports evidence-based elaboration and analysis of policies. PolicyCLOUD allows flexible exploitation and management of policy-relevant dataflows, by enabling the practitioner to register datasets and specify a sequence of transformations and/or information extraction through registered ingest functions. Once a possibly transformed dataset has been ingested, additional insights can be retrieved by further applying registered analytic functions to it. PolicyCLOUD was built as an extensible framework toward the creation of an analytic ecosystem. As of now, we have developed several essential ingest and analytic functions that are built-in within the framework. These include generic functions for data cleaning, enhanced interoperability, and sentiment analysis; in addition, a trend analysis function is being developed as a new built-in function. PolicyCLOUD also has the ability to tap into the analytic capabilities of external tools; we demonstrate this with a social dynamics tool implemented in conjunction with PolicyCLOUD, and describe how this stand-alone tool can be integrated with the PolicyCLOUD platform to enrich it with policy modeling, design, and simulation capabilities. Furthermore, PolicyCLOUD is supported by a tailor-made legal and ethical framework derived from privacy/data protection best practices and existing standards at the EU level, which regulates the usage and dissemination of datasets and analytic functions throughout its policy-relevant dataflows. The article describes and evaluates the application of PolicyCLOUD to four families of pilots that cover a wide range of policy scenarios.
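The register-then-apply dataflow described above can be sketched with a toy registry. All names and signatures below (`Registry`, `register_ingest`, and so on) are illustrative assumptions, not the actual PolicyCLOUD API.

```python
# Hypothetical sketch of the register-then-apply dataflow:
# datasets, ingest functions, and analytic functions are
# registered by name, then composed into pipelines.

class Registry:
    def __init__(self):
        self.datasets = {}
        self.ingest_fns = {}
        self.analytic_fns = {}

    def register_dataset(self, name, records):
        self.datasets[name] = list(records)

    def register_ingest(self, name, fn):
        self.ingest_fns[name] = fn

    def register_analytic(self, name, fn):
        self.analytic_fns[name] = fn

    def ingest(self, dataset, pipeline):
        # Apply a sequence of registered ingest functions in order.
        data = self.datasets[dataset]
        for step in pipeline:
            data = self.ingest_fns[step](data)
        self.datasets[dataset] = data
        return data

    def analyze(self, dataset, analytic):
        # Apply a registered analytic function to an ingested dataset.
        return self.analytic_fns[analytic](self.datasets[dataset])

reg = Registry()
reg.register_dataset("feedback", [" Good policy ", "", "bad idea"])
reg.register_ingest("clean", lambda rs: [r.strip() for r in rs if r.strip()])
reg.register_analytic(
    "sentiment",
    lambda rs: {r: ("pos" if "good" in r.lower() else "neg") for r in rs},
)
reg.ingest("feedback", ["clean"])
print(reg.analyze("feedback", "sentiment"))
```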
We discuss a recently proposed family of statistical network models—relational hyperevent models (RHEMs)—for analyzing team selection and team performance in scientific coauthor networks. The underlying rationale for using RHEMs in studies of coauthor networks is that scientific collaboration is intrinsically polyadic, that is, it typically involves teams of any size. Consequently, RHEMs specify publication rates associated with hyperedges representing groups of scientists of any size. Going beyond previous work on RHEMs for meeting data, we adapt this model family to settings in which relational hyperevents have a dedicated outcome, such as a scientific paper with a measurable impact (e.g., the number of citations received). On the one hand, relational outcomes can be used to specify additional explanatory variables in RHEMs, since the probability of coauthoring may be influenced, for instance, by the prior (shared) success of scientists. On the other hand, relational outcomes can also serve as response variables in models seeking to explain the performance of scientific teams. To tackle the latter, we propose relational hyperevent outcome models that are closely related to RHEMs, to the point that both model families can specify the likelihood of scientific collaboration—and the expected performance, respectively—with the same set of explanatory variables, allowing one to assess, for instance, whether variables leading to increased collaboration also tend to increase scientific impact. For illustration, we apply RHEMs to empirical coauthor networks comprising more than 350,000 published papers by scientists working in three scientific disciplines. Our models explain scientific collaboration and impact by, among others, individual activity (preferential attachment), shared activity (familiarity), triadic closure, prior individual and shared success, and prior success disparity among the members of hyperedges.
Wind turbine towers are subjected to highly varying internal loads, characterized by large uncertainty. The uncertainty stems from many factors, including what the actual wind fields experienced over time will be, modeling uncertainties given the various operational states of the turbine with and without controller interaction, the influence of aerodynamic damping, and so forth. To monitor the true experienced loading and assess the fatigue, strain sensors can be installed at fatigue-critical locations on the turbine structure. A more cost-effective and practical solution is to predict the strain response of the structure based only on a number of acceleration measurements. In this contribution, an approach is followed where the dynamic strains in an existing onshore wind turbine tower are predicted using a Gaussian process latent force model. By employing this model, both the applied dynamic loading and strain response are estimated based on the acceleration data. The predicted dynamic strains are validated using strain gauges installed near the bottom of the tower. Fatigue is subsequently assessed by comparing the damage equivalent loads calculated with the predicted as opposed to the measured strains. The results confirm the usefulness of the method for continuous tracking of fatigue life consumption in onshore wind turbine towers.
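As a loose illustration of GP-based response prediction, the toy below regresses synthetic strains on synthetic accelerations using a plain Gaussian process posterior mean. This is not the paper's Gaussian process latent force model, which is a state-space formulation estimating the applied load and the response jointly; the proportionality between strain and acceleration assumed here is purely for illustration.

```python
import numpy as np

# Squared-exponential (RBF) kernel between two 1-D input arrays.
def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
accel = np.sin(np.pi * t)       # pseudo acceleration signal
strain = 0.3 * accel            # toy single-mode assumption: strain ~ accel
y = strain + 0.005 * rng.standard_normal(t.size)  # noisy "measurements"

# Standard GP regression: posterior mean at test inputs.
Xtr, ytr, Xte = accel[:150], y[:150], accel[150:]
K = rbf(Xtr, Xtr) + 1e-4 * np.eye(150)  # noise/jitter on the diagonal
alpha = np.linalg.solve(K, ytr)
pred = rbf(Xte, Xtr) @ alpha

rmse = np.sqrt(np.mean((pred - strain[150:]) ** 2))
print(rmse)  # small held-out error on this synthetic example
```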
The effect of milorganite, a commercially available organic soil amendment, on soil nutrients, plant growth, and yield has been investigated. However, its effect on soil hydraulic properties remains less understood. Therefore, this study aimed to investigate the effect of milorganite amendment on soil evaporation, moisture retention, hydraulic conductivity, and electrical conductivity of a Krome soil. A column experiment was conducted with two milorganite application rates (15 and 30% v/v) and a non-amended control soil. The results revealed that milorganite reduced evaporation rates and the length of Stage I of the evaporation process compared with the control. Moreover, milorganite increased moisture retention at saturation and permanent wilting point while decreasing soil hydraulic conductivity. In addition, milorganite increased soil electrical conductivity. Overall, milorganite resulted in increased soil moisture retention; however, moisture in the soil may not be readily available for plants due to increased soil salinity.
During the past half-century, exponential families have attained a position at the center of parametric statistical inference. Theoretical advances have been matched, and more than matched, in the world of applications, where logistic regression by itself has become the go-to methodology in medical statistics, computer-based prediction algorithms, and the social sciences. This book is based on a one-semester graduate course for first year Ph.D. and advanced master's students. After presenting the basic structure of univariate and multivariate exponential families, their application to generalized linear models including logistic and Poisson regression is described in detail, emphasizing geometrical ideas, computational practice, and the analogy with ordinary linear regression. Connections are made with a variety of current statistical methodologies: missing data, survival analysis and proportional hazards, false discovery rates, bootstrapping, and empirical Bayes analysis. The book connects exponential family theory with its applications in a way that doesn't require advanced mathematical preparation.
The short timescale of the solar flare reconnection process has long proved to be a puzzle. Recent studies suggest the importance of the formation of plasmoids in the reconnecting current sheet, with the aspect ratio (width to length) of the current sheet, expressed as a negative power $ \alpha $ of the Lundquist number, that is, $ {S}^{-\alpha } $, being key to understanding the onset of plasmoid formation. In this paper, we make the first application of theoretical scalings for this aspect ratio to observed flares to evaluate how plasmoid formation may connect with observations. For three different flares that show plasmoids, we find $ \alpha $ values in the range $ \alpha =0.26 $ to $ 0.31 $. This small range implies that plasmoids may be forming before the theoretically predicted critical aspect ratio ($ \alpha =1/3 $) has been reached, potentially presenting a challenge for the theoretical models.
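Given an observed aspect ratio and an estimate of the Lundquist number, inverting the scaling $ {S}^{-\alpha } $ for $ \alpha $ is a one-line computation. The values below are illustrative placeholders, not the measured flare parameters from the paper.

```python
import numpy as np

# The aspect ratio is modeled as width/length = S**(-alpha),
# so alpha = -log(w/L) / log(S). Illustrative values only.
S = 1e10      # assumed Lundquist number
ratio = 1e-3  # assumed observed width-to-length ratio
alpha = -np.log(ratio) / np.log(S)
print(alpha)  # 0.3, close to the critical value 1/3
```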
We consider supercritical site percolation on the $d$-dimensional hypercube $Q^d$. We show that typically all components in the percolated hypercube, besides the giant, are of size $O(d)$. This resolves a conjecture of Bollobás, Kohayakawa, and Łuczak from 1994.
Longevity risk is putting more and more financial pressure on governments and pension plans worldwide due to the increasing life expectancy of pensioners and the growing number of people reaching retirement age. Lee and Carter (1992, Journal of the American Statistical Association, 87(419), 659–671) applied a one-factor dynamic factor model to forecast the trend of mortality improvement, and the model has since become the field’s workhorse. It is, however, well known that their model is subject to the limitation of overlooking cross-dependence between different age groups. We introduce Factor-Augmented Vector Autoregressive (FAVAR) models to the mortality modelling literature. The model, obtained by adding an unobserved factor process to a Vector Autoregressive (VAR) process, nests the VAR and Lee–Carter models as special cases and inherits the advantages of both frameworks. A Bayesian estimation approach, adapted from the Minnesota prior, is proposed. The empirical application to US and French mortality data demonstrates our proposed method’s efficacy in both in-sample and out-of-sample performance.
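A minimal simulation sketch of the FAVAR structure: observed rates follow a VAR(1) augmented with a single unobserved, persistent common factor. All parameter values are illustrative assumptions, and no estimation (Bayesian or otherwise) is performed.

```python
import numpy as np

# FAVAR sketch: y_t = Lam * f_t + B y_{t-1} + eps_t, where f_t is
# an unobserved AR(1) common factor. The shared factor induces the
# cross-age dependence that a pure Lee-Carter setup can miss.
rng = np.random.default_rng(1)
n_age, T = 4, 200
Lam = np.array([0.8, 1.0, 1.2, 0.9])  # factor loadings (assumed)
B = 0.2 * np.eye(n_age)               # VAR(1) coefficient matrix
phi = 0.95                            # factor persistence

y = np.zeros((T, n_age))
f = 0.0
for t in range(1, T):
    f = phi * f + rng.standard_normal()                 # latent factor
    y[t] = Lam * f + B @ y[t - 1] + 0.1 * rng.standard_normal(n_age)

# The common factor induces strong cross-age correlation.
corr = np.corrcoef(y.T)
print(corr.round(2))
```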
Modeling and forecasting of mortality rates are closely related to a wide range of actuarial practices, such as the design of pension schemes. To improve forecasting accuracy, age coherence is incorporated in many recent mortality models, which suggests that long-term forecasts will not diverge infinitely among age groups. Despite their usefulness, misspecification is likely to occur for individual mortality models when applied in empirical studies, negatively affecting the reliability and accuracy of the forecast rates. In this study, an ensemble averaging or model averaging (MA) approach is proposed, which adopts age-specific weights and asymptotically achieves age coherence in mortality forecasting. The ensemble space contains both newly developed age-coherent and classic age-incoherent models to achieve diversity. To realize asymptotic age coherence, account for parameter errors, and avoid overfitting, the proposed method minimizes the variance of out-of-sample forecasting errors, with a specially designed coherence penalty and smoothness penalty. Our empirical dataset includes ten European countries, with mortality rates for age groups 0–100 spanning 1950–2016. The outstanding performance of MA in mortality forecasting is demonstrated on this empirical sample, and the finding holds robustly in a range of sensitivity analyses. A case study based on the Italian population is finally conducted to demonstrate the improved forecasting efficiency of MA and the validity of the proposed estimation of weights, as well as its usefulness in actuarial applications such as annuity pricing.
The layer reinsurance treaty is a common form arising in the problem of optimal reinsurance design. In this paper, we study allocations of policy limits in layer reinsurance treaties with dependent risks. We investigate the effects of orderings and heterogeneity among policy limits on the expected utility functions of the terminal wealth from the viewpoint of risk-averse insurers faced with right tail weakly stochastic arrangement increasing losses. Orderings on optimal allocations are presented for normal layer reinsurance contracts under certain conditions. Parallel studies are also conducted for randomized layer reinsurance contracts. As a special case, the worst allocations of policy limits are also identified when the exact dependence structure among the losses is unknown. Numerical examples are presented to shed light on the theoretical findings.
A $(p,q)$-colouring of a graph $G$ is an edge-colouring of $G$ which assigns at least $q$ colours to each $p$-clique. The problem of determining the minimum number of colours, $f(n,p,q)$, needed to give a $(p,q)$-colouring of the complete graph $K_n$ is a natural generalization of the well-known problem of identifying the diagonal Ramsey numbers $r_k(p)$. The best-known general upper bound on $f(n,p,q)$ was given by Erdős and Gyárfás in 1997 using a probabilistic argument. Since then, improved bounds in the cases where $p=q$ have been obtained only for $p\in \{4,5\}$, each of which was proved by giving a deterministic construction which combined a $(p,p-1)$-colouring using few colours with an algebraic colouring.
In this paper, we provide a framework for proving new upper bounds on $f(n,p,p)$ in the style of these earlier constructions. We characterize all colourings of $p$-cliques with $p-1$ colours which can appear in our modified version of the $(p,p-1)$-colouring of Conlon, Fox, Lee, and Sudakov. This allows us to greatly reduce the amount of case-checking required in identifying $(p,p)$-colourings, which would otherwise make this problem intractable for large values of $p$. In addition, we generalize our algebraic colouring from the $p=5$ setting and use this to give improved upper bounds on $f(n,6,6)$ and $f(n,8,8)$.
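The defining condition is easy to check by brute force on small examples. The sketch below verifies that a proper 3-edge-colouring of $K_4$ (colour classes are the three perfect matchings) is a $(3,3)$-colouring: every triangle receives three distinct colours.

```python
from itertools import combinations

# Brute-force check of the definition: an edge-colouring of K_n is
# a (p, q)-colouring if every p-clique receives at least q distinct
# edge colours. Exponential in n; for small examples only.
def is_pq_colouring(n, colour, p, q):
    # colour: dict mapping each edge (i, j) with i < j to a colour
    for clique in combinations(range(n), p):
        colours = {colour[e] for e in combinations(clique, 2)}
        if len(colours) < q:
            return False
    return True

# Proper 3-edge-colouring of K_4: each colour class is a perfect
# matching, so every triangle uses three distinct colours.
col = {(0, 1): 0, (2, 3): 0,
       (0, 2): 1, (1, 3): 1,
       (0, 3): 2, (1, 2): 2}
print(is_pq_colouring(4, col, 3, 3))  # True: a (3, 3)-colouring
print(is_pq_colouring(4, col, 4, 4))  # False: K_4 sees only 3 colours
```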
Hyperbolic random graphs (HRGs) and geometric inhomogeneous random graphs (GIRGs) are two similar generative network models that were designed to resemble complex real-world networks. In particular, they have a power-law degree distribution with controllable exponent $\beta$ and high clustering that can be controlled via the temperature $T$.
We present the first implementation of an efficient GIRG generator running in expected linear time. Besides varying temperatures, it also supports underlying geometries of higher dimensions. It is capable of generating graphs with ten million edges in under a second on commodity hardware. The algorithm can be adapted to HRGs. Our resulting implementation is the fastest sequential HRG generator, despite the fact that we support non-zero temperatures. Though non-zero temperatures are crucial for many applications, most existing generators are restricted to $T = 0$. We also support parallelization, although this is not the focus of this paper. Moreover, we note that our generators draw from the correct probability distribution, that is, they involve no approximation.
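For contrast with the expected-linear-time generator, a naive $O(n^2)$ sampler for a threshold ($T = 0$) GIRG-like model on a one-dimensional torus can be written in a few lines. The specific connection rule used here (connect when the torus distance is at most $c\,w_u w_v / W$) is one common formulation, assumed for illustration; it is not the optimized algorithm described above.

```python
import numpy as np

# Naive quadratic-time sampler for a threshold (T = 0) GIRG-like
# model: power-law weights, uniform positions on a 1-d torus, and
# an edge whenever torus distance <= c * w_u * w_v / W. The actual
# generator achieves expected linear time with geometric data
# structures; this brute-force version is for illustration only.
rng = np.random.default_rng(0)
n, beta, c = 1000, 2.8, 1.0
w = (1 - rng.random(n)) ** (-1 / (beta - 1))  # power-law weights
x = rng.random(n)                             # positions on [0, 1)
W = w.sum()

dist = np.abs(x[:, None] - x[None, :])
dist = np.minimum(dist, 1 - dist)             # torus metric
thresh = c * np.outer(w, w) / W
adj = (dist <= thresh) & ~np.eye(n, dtype=bool)

degrees = adj.sum(axis=1)
print(degrees.mean())  # average degree grows with c
```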
Besides the generators themselves, we also provide an efficient algorithm to determine the non-trivial dependency between the average degree of the resulting graph and the input parameters of the GIRG model. This makes it possible to specify the desired expected average degree as input.
Moreover, we investigate the differences between HRGs and GIRGs, shedding new light on the nature of the relation between the two models. Although HRGs represent, in a certain sense, a special case of the GIRG model, we find that a straightforward inclusion does not hold in practice. However, the difference is negligible for most use cases.
Digital identity systems are promoted with the promise of great benefit and inclusion. The case of the Ugandan digital identity system demonstrates that the impact of such systems is not only positive: they also have negative impacts, significantly affecting human lives for the worse. The impact of digital identity systems on human lives can be assessed with multiple frameworks. One framework that has been proposed is the capabilities approach (CA). This article demonstrates that the CA is a framework for assessing the impact on human lives that can be operationalized for technology and information and communication technology, including digital identity systems. Further research is required to compare the CA with other candidate evaluation frameworks.
This paper provides an examination of inter-organizational collaboration in the UK research system. Data are collected on organizational collaboration on projects funded by four key UK research councils: the Arts and Humanities Research Council, the Economic and Social Research Council, the Engineering and Physical Sciences Research Council, and the Biotechnology and Biological Sciences Research Council. The organizational partnerships include both academic and nonacademic institutions. A collaboration network is created for each research council, and an exponential random graph model is applied to identify the mechanisms underpinning collaborative tie formation on research council-funded projects. We find that in the sciences, collaborative patterns are much more hierarchical and concentrated in a small handful of actors than in social sciences and humanities projects. Institutions that are members of the elite Russell Group (a set of 24 high-ranking UK universities) are much more likely to be involved in collaborations across research councils.
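The network-construction step described above amounts to a bipartite projection: organizations become nodes, and two organizations are linked whenever they appear on the same funded project. A minimal sketch, with made-up project and organization names:

```python
from itertools import combinations
from collections import Counter

# Project a bipartite project-membership mapping onto an
# organization-by-organization collaboration network. Edge weight
# counts the number of shared projects. All names are made up.
projects = {
    "P1": ["Univ A", "Univ B", "Firm X"],
    "P2": ["Univ A", "Univ C"],
    "P3": ["Univ B", "Firm X"],
}

edges = Counter()
for members in projects.values():
    for u, v in combinations(sorted(members), 2):
        edges[(u, v)] += 1  # one tie per co-membership on a project

print(dict(edges))
```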