In this perspective, I give my answer to the question of how quantum computing will impact data-intensive applications in engineering and science. I focus on quantum Monte Carlo integration as a likely source of (relatively) near-term quantum advantage, but I also discuss some other ideas that have garnered widespread interest.
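To make the expected advantage concrete (in my own notation, not taken from the text above): classical Monte Carlo estimates $\mu = \mathbb{E}[f(X)]$ with root-mean-square error
\[
\epsilon_{\mathrm{classical}} = O\!\big(\sigma/\sqrt{N}\big)
\]
from $N$ samples, where $\sigma^2$ is the variance of the integrand, whereas quantum Monte Carlo integration built on amplitude estimation achieves
\[
\epsilon_{\mathrm{quantum}} = O\!\big(1/N\big)
\]
from $N$ oracle queries: a quadratic reduction in the number of samples or queries needed for a given target accuracy.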
We prove existence and uniqueness for the solution of a class of mixed fractional stochastic differential equations with discontinuous drift driven by both standard and fractional Brownian motion. Additionally, we establish a generalized Itô rule valid for functions with an absolutely continuous derivative and applicable to solutions of mixed fractional stochastic differential equations with Lipschitz coefficients, which plays a key role in our proof of existence and uniqueness. The proof of such a formula is new and relies on showing the existence of a density of the law under mild assumptions on the diffusion coefficient.
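As a schematic of the equation class (notation assumed here rather than taken from the abstract), a mixed fractional stochastic differential equation is driven simultaneously by a standard Brownian motion $W$ and a fractional Brownian motion $B^H$:
\[
dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t + \gamma(X_t)\,dB_t^H,
\]
where, in the setting above, the drift $b$ may be discontinuous, and the generalized Itô rule applies to $f(X_t)$ for $f$ with an absolutely continuous derivative.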
We establish exponential ergodicity for a class of Markov processes with interactions, including two-factor type processes and Gruschin type processes. The proof is elementary and direct, relying on the Markov coupling technique.
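As background on the coupling technique mentioned above, a generic textbook-style sketch (with an invented chain; none of this is specific to the paper's interacting processes):

```python
# Generic textbook-style illustration of the Markov coupling technique
# (not the interacting processes of the paper): two copies of a finite-state
# chain are driven by one shared uniform per step, so they eventually meet;
# the geometric tail of the meeting time is what yields exponential
# ergodicity bounds. The chain and coupling rule here are invented.
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])       # toy transition matrix
cum = np.cumsum(P, axis=1)
rng = np.random.default_rng(3)

def coupled_step(x, y):
    u = rng.random()                     # shared randomness drives both copies
    return (np.searchsorted(cum[x], u, side="right"),
            np.searchsorted(cum[y], u, side="right"))

meet_times = []
for _ in range(2000):
    x, y, t = 0, 2, 0
    while x != y:                        # after meeting, the copies coincide
        x, y = coupled_step(x, y)
        t += 1
    meet_times.append(t)
print(np.mean(meet_times))               # mean coupling time; its tail is geometric
```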
We study the time-consistent investment and contribution policies in a defined benefit stochastic pension fund where the manager discounts the instantaneous utility over a finite planning horizon and the final function at constant but different instantaneous rates of time preference. This difference, which can be motivated by uncertainties affecting payoffs at the end of the planning horizon, induces a variable bias between the relative valuation of the final function and the previous payoffs, and leads the manager to exhibit time-inconsistent preferences. Both the benefits and the contribution rate are proportional to the total wage of the workers, which we assume to be stochastic. The aim is to maximize a CRRA utility function of the net benefit relative to salary over a bounded horizon and to maximize a CRRA final utility of the fund level relative to the salary. The problem is solved by means of dynamic programming techniques, and the main results are illustrated numerically.
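Schematically (with notation assumed here), the source of the time inconsistency is the pair of discount rates: the manager faces an objective of the form
\[
\sup_{\pi}\ \mathbb{E}_t\!\left[\int_t^T e^{-\rho (s-t)}\,U(Y_s)\,ds \;+\; e^{-\delta (T-t)}\,U(F_T/S_T)\right], \qquad \rho \neq \delta,
\]
where $Y_s$ is the net benefit relative to salary, $F_T/S_T$ the final fund level relative to salary, and $U$ a CRRA utility. Because $\rho \neq \delta$, the relative weight placed on the final function changes as time passes, so preferences re-evaluated later disagree with earlier ones, which is why time-consistent (equilibrium) policies are the object of study.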
The dimension of models derived on the basis of data is commonly restricted by the number of observations or, in the context of monitored systems, sensing nodes. This is particularly true for structural systems, which are typically high-dimensional in nature. Within the scope of physics-informed machine learning, this article proposes a framework—termed neural modal ordinary differential equations (Neural Modal ODEs)—to integrate physics-based modeling with deep learning for modeling the dynamics of monitored, high-dimensional engineered systems. In this initial exploration, we restrict ourselves to linear or mildly nonlinear systems. We propose an architecture that couples a dynamic version of variational autoencoders with physics-informed neural ODEs (Pi-Neural ODEs). An encoder, as part of the autoencoder, learns the mappings from the first few items of observational data to the initial values of the latent variables, which drive the learning of the embedded dynamics via Pi-Neural ODEs, imposing a modal model structure on that latent space. The decoder of the proposed model adopts the eigenmodes derived from an eigenanalysis applied to the linearized portion of a physics-based model: a process that implicitly carries the spatial relationships between degrees of freedom (DOFs). The framework is validated on a numerical example and on an experimental dataset of a scaled cable-stayed bridge, where the learned hybrid model is shown to outperform a purely physics-based approach to modeling. We further show the functionality of the proposed scheme within the context of virtual sensing, that is, the recovery of generalized response quantities in unmeasured DOFs from spatially sparse data.
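A minimal sketch of how the pieces fit together, under loudly flagged assumptions (a toy 2-DOF system; a projection stand-in for the learned encoder; a fixed random map in place of the trained network):

```python
# Minimal sketch of the information flow in a Neural Modal ODE, under stated
# assumptions: a toy 2-DOF linear system, a projection-based stand-in for the
# learned encoder, and a fixed random linear map standing in for the trained
# Pi-Neural ODE network. This is illustrative only, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

# Physics-based model of a toy 2-DOF system: eigenanalysis gives the modes
K = np.array([[2.0, -1.0], [-1.0, 2.0]])        # stiffness (assumed)
M = np.eye(2)                                    # mass (assumed)
eigvals, Phi = np.linalg.eigh(np.linalg.solve(M, K))
omega = np.sqrt(eigvals)                         # modal frequencies

dt = 0.01
W_res = rng.normal(size=(4, 4))                  # placeholder for a trained net

def encoder(x_obs):
    """Stand-in for the learned encoder: map the first few observations to
    initial modal displacements and velocities (the latent initial values)."""
    q0, q1 = Phi.T @ x_obs[0], Phi.T @ x_obs[1]
    return np.concatenate([q0, (q1 - q0) / dt])

def latent_rhs(z):
    """Latent dynamics: decoupled modal oscillators (the physics) plus a
    small learned residual (here a fixed random map as a placeholder)."""
    q, qd = z[:2], z[2:]
    return np.concatenate([qd, -omega**2 * q]) + 0.01 * (W_res @ z)

x_obs = np.array([[0.1, 0.0], [0.1, 0.001]])     # first two measurements
z = encoder(x_obs)
traj = []
for _ in range(500):                             # forward-Euler rollout
    z = z + dt * latent_rhs(z)
    traj.append(Phi @ z[:2])                     # decoder: eigenmodes -> DOFs
print(np.array(traj).shape)                      # (500, 2) reconstructed responses
```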
Motivated by Ahmadi-Javid (Journal of Optimization Theory and Applications, 155(3), 2012, 1105–1123) and Ahmadi-Javid and Pichler (Mathematics and Financial Economics, 11, 2017, 527–550), the concept of Tsallis Value-at-Risk (TsVaR) based on Tsallis entropy is introduced in this paper. TsVaR corresponds to the tightest possible upper bound obtained from the Chernoff inequality for the Value-at-Risk. The main properties and an analogous dual representation of TsVaR are investigated. These results partially generalize the Entropic Value-at-Risk by involving Tsallis entropies. Three spaces, called the primal, dual, and bidual Tsallis spaces, corresponding to TsVaR are fully studied. It is shown that these spaces equipped with the norm induced by TsVaR are Banach spaces. The Tsallis spaces are related to the $L^p$ spaces, as well as to specific Orlicz hearts and Orlicz spaces. Finally, we derive an explicit formula for the dual TsVaR norm.
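As background on the construction pattern (this is the Entropic Value-at-Risk case that TsVaR generalizes): the Chernoff inequality gives, for $t > 0$,
\[
\mathbb{P}(X \ge a) \le e^{-ta}\,\mathbb{E}\big[e^{tX}\big];
\]
setting the right-hand side equal to $\alpha$ and solving for $a$ yields $a_X(\alpha, t) = t^{-1}\ln\!\big(\mathbb{E}[e^{tX}]/\alpha\big)$, and the tightest such bound,
\[
\mathrm{EVaR}_{1-\alpha}(X) = \inf_{t > 0}\, t^{-1}\ln\!\big(\mathbb{E}[e^{tX}]/\alpha\big),
\]
is the Entropic Value-at-Risk. TsVaR follows the same pattern with a Tsallis analogue of the exponential moment; since the abstract does not state the exact form, it is not reproduced here.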
Zhu and He [(2018). A new closed-form formula for pricing European options under a skew Brownian motion. The European Journal of Finance 24(12): 1063–1074] provided an innovative closed-form solution by replacing the standard Brownian motion in the Black–Scholes framework with a particular skew Brownian motion. Their formula involves numerically integrating the product of the Gaussian density and the corresponding distribution function. In contrast, we derive a much simpler formula that involves only the Gaussian distribution function and Owen's $T$ function.
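Since the simpler formula is built from the Gaussian distribution function and Owen's $T$ function, a brief numerical sketch of the latter may be useful; it only exercises the special function (the pricing formula is not stated above and is not reproduced):

```python
# Background on the special function the simpler formula relies on: Owen's T.
# This sketch only checks SciPy's owens_t against its defining integral and a
# known identity; the pricing formula itself is not stated in the abstract,
# so it is not reproduced here.
import numpy as np
from scipy.integrate import quad
from scipy.special import owens_t
from scipy.stats import norm

def owens_t_quad(h, a):
    # T(h, a) = (1/2pi) * int_0^a exp(-h^2 (1 + x^2) / 2) / (1 + x^2) dx
    integrand = lambda x: np.exp(-0.5 * h**2 * (1 + x**2)) / (1 + x**2)
    val, _ = quad(integrand, 0.0, a)
    return val / (2.0 * np.pi)

h, a = 0.7, 1.3
print(owens_t(h, a), owens_t_quad(h, a))          # should agree closely
# Sanity check via a classical identity: T(h, 1) = Phi(h) (1 - Phi(h)) / 2
print(owens_t(h, 1.0), norm.cdf(h) * (1.0 - norm.cdf(h)) / 2.0)
```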
We prove new mixing rate estimates for the random walks on homogeneous spaces determined by a probability distribution on a finite group $G$. We introduce the switched random walk determined by a finite set of probability distributions on $G$, prove that its long-term behaviour is determined by the Fourier joint spectral radius of the distributions, and give Hermitian sum-of-squares algorithms for the effective estimation of this quantity.
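As a toy illustration of the Fourier mechanism behind such mixing estimates (restricted to a single distribution on an abelian group; the generality of the switched walk and the joint spectral radius is noted in the comments):

```python
# Toy illustration of the Fourier view of random-walk mixing, on the abelian
# group Z_12 where the Fourier transform reduces to the DFT. The paper's
# switched walk on a general finite group replaces the single non-trivial
# spectral radius below with a *joint* spectral radius over the Fourier
# transforms of a set of distributions; that generality is not captured here.
import numpy as np

n = 12
p = np.zeros(n)
p[[0, 1, n - 1]] = [0.5, 0.25, 0.25]       # lazy nearest-neighbour step law

p_hat = np.fft.fft(p)                       # Fourier coefficients of the step law
rate = np.max(np.abs(p_hat[1:]))            # largest non-trivial coefficient

dist = np.zeros(n)
dist[0] = 1.0                               # start at the identity
for _ in range(40):                         # one convolution per step
    dist = np.real(np.fft.ifft(np.fft.fft(dist) * p_hat))

tv = 0.5 * np.sum(np.abs(dist - 1.0 / n))   # total-variation distance to uniform
print(tv, rate**40)                         # TV decays like rate**k, up to constants
```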
We present PolicyCLOUD: a prototype for an extensible serverless cloud-based system that supports evidence-based elaboration and analysis of policies. PolicyCLOUD allows flexible exploitation and management of policy-relevant dataflows by enabling the practitioner to register datasets and specify a sequence of transformations and/or information extraction through registered ingest functions. Once a (possibly transformed) dataset has been ingested, additional insights can be retrieved by applying registered analytic functions to it. PolicyCLOUD was built as an extensible framework toward the creation of an analytic ecosystem. As of now, we have developed several essential ingest and analytic functions that are built into the framework. They include generic functions for data cleaning, enhanced interoperability, and sentiment analysis; in addition, a trend analysis function is being created as a new built-in function. PolicyCLOUD also has the ability to tap into the analytic capabilities of external tools; we demonstrate this with a social dynamics tool implemented in conjunction with PolicyCLOUD, and describe how this stand-alone tool can be integrated with the PolicyCLOUD platform to enrich it with policy modeling, design, and simulation capabilities. Furthermore, PolicyCLOUD is supported by a tailor-made legal and ethical framework derived from privacy/data protection best practices and existing standards at the EU level, which regulates the usage and dissemination of datasets and analytic functions throughout its policy-relevant dataflows. The article describes and evaluates the application of PolicyCLOUD to four families of pilots that cover a wide range of policy scenarios.
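The register-then-apply pattern can be sketched in a few lines; the following is hypothetical and in-memory, with invented names, whereas the actual platform realizes the pattern with serverless cloud functions:

```python
# Hypothetical, in-memory sketch of the register-then-apply dataflow pattern
# described above. All names are invented for illustration; the actual
# PolicyCLOUD realizes this pattern with serverless cloud functions, not a
# local dictionary registry.
INGEST, ANALYTIC = {}, {}

def register_ingest(name):
    def deco(fn):
        INGEST[name] = fn
        return fn
    return deco

def register_analytic(name):
    def deco(fn):
        ANALYTIC[name] = fn
        return fn
    return deco

@register_ingest("clean")
def clean(rows):
    # data-cleaning ingest function: drop empties, normalize whitespace/case
    return [r.strip().lower() for r in rows if r.strip()]

@register_analytic("sentiment")
def sentiment(rows):
    # placeholder polarity score; a real deployment would call an NLP model
    return {r: (1 if "good" in r else -1 if "bad" in r else 0) for r in rows}

dataset = ["  Good policy ", "", "Bad outcome", "Neutral note"]
ingested = INGEST["clean"](dataset)         # registered ingest transformation
print(ANALYTIC["sentiment"](ingested))      # registered analytic function
```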
We discuss a recently proposed family of statistical network models, relational hyperevent models (RHEMs), for analyzing team selection and team performance in scientific coauthor networks. The underlying rationale for using RHEMs in studies of coauthor networks is that scientific collaboration is intrinsically polyadic; that is, it typically involves teams of any size. Consequently, RHEMs specify publication rates associated with hyperedges representing groups of scientists of any size. Going beyond previous work on RHEMs for meeting data, we adapt this model family to settings in which relational hyperevents have a dedicated outcome, such as a scientific paper with a measurable impact (e.g., the number of citations received). On the one hand, relational outcomes can be used to specify additional explanatory variables in RHEMs, since the probability of coauthoring may be influenced, for instance, by the prior (shared) success of scientists. On the other hand, relational outcomes can also serve as response variables in models seeking to explain the performance of scientific teams. To tackle the latter, we propose relational hyperevent outcome models that are closely related to RHEMs, to the point that both model families can specify the likelihood of scientific collaboration and the expected performance, respectively, with the same set of explanatory variables. This makes it possible to assess, for instance, whether variables leading to increased collaboration also tend to increase scientific impact. For illustration, we apply RHEMs to empirical coauthor networks comprising more than 350,000 published papers by scientists working in three scientific disciplines. Our models explain scientific collaboration and impact by, among other factors, individual activity (preferential attachment), shared activity (familiarity), triadic closure, prior individual and shared success, and prior success disparity among the members of hyperedges.
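As a toy illustration (not the authors' implementation) of two hyperedge statistics named above, individual activity and familiarity (prior shared activity), computed for a candidate team from a history of past papers with invented data:

```python
# Toy computation of two hyperedge covariates named in the abstract:
# individual activity and familiarity (prior shared activity) for a candidate
# team, given a history of past papers (each a set of author ids).
# The data and function names are invented for illustration.
from itertools import combinations

history = [{"a", "b"}, {"a", "b", "c"}, {"c", "d"}, {"a", "c"}]

def activity(team, history):
    # sum over members of their number of prior papers
    return sum(sum(author in paper for paper in history) for author in team)

def familiarity(team, history):
    # number of prior papers containing each pair of members, summed over pairs
    return sum(sum(pair <= paper for paper in history)
               for pair in ({u, v} for u, v in combinations(team, 2)))

team = {"a", "b", "c"}
print(activity(team, history), familiarity(team, history))   # 8 and 5 here
```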
Wind turbine towers are subjected to highly varying internal loads, characterized by large uncertainty. The uncertainty stems from many factors, including what the actual wind fields experienced over time will be, modeling uncertainties given the various operational states of the turbine with and without controller interaction, the influence of aerodynamic damping, and so forth. To monitor the true experienced loading and assess the fatigue, strain sensors can be installed at fatigue-critical locations on the turbine structure. A more cost-effective and practical solution is to predict the strain response of the structure based only on a number of acceleration measurements. In this contribution, an approach is followed where the dynamic strains in an existing onshore wind turbine tower are predicted using a Gaussian process latent force model. By employing this model, both the applied dynamic loading and strain response are estimated based on the acceleration data. The predicted dynamic strains are validated using strain gauges installed near the bottom of the tower. Fatigue is subsequently assessed by comparing the damage equivalent loads calculated with the predicted as opposed to the measured strains. The results confirm the usefulness of the method for continuous tracking of fatigue life consumption in onshore wind turbine towers.
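A hedged sketch of the downstream fatigue-assessment step, turning a (predicted or measured) stress history into a damage equivalent load via rainflow counting; the third-party rainflow package is used, and the Wöhler exponent, reference cycle count, and synthetic signal are assumptions:

```python
# Sketch of damage-equivalent-load (DEL) computation from a stress history,
# as used when comparing predicted vs. measured strains. Uses the third-party
# `rainflow` package for cycle counting; the Wöhler exponent m, reference
# cycle count, and the synthetic random-walk signal are assumptions.
import numpy as np
import rainflow

rng = np.random.default_rng(1)
stress = np.cumsum(rng.normal(size=10_000)) * 0.1   # synthetic stress history

m, n_ref = 4.0, 1.0e7                                # assumed Woehler slope, N_ref
cycles = rainflow.count_cycles(stress)               # [(range, count), ...]
del_eq = (sum(count * rng_i**m for rng_i, count in cycles) / n_ref) ** (1.0 / m)
print(f"damage equivalent load: {del_eq:.3f}")
```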
The effect of milorganite, a commercially available organic soil amendment, on soil nutrients, plant growth, and yield has been investigated. However, its effect on soil hydraulic properties remains less understood. Therefore, this study aimed to investigate the effect of milorganite amendment on soil evaporation, moisture retention, hydraulic conductivity, and electrical conductivity of a Krome soil. A column experiment was conducted with two milorganite application rates (15 and 30% v/v) and a non-amended control soil. The results revealed that milorganite reduced evaporation rates and the length of Stage I of the evaporation process compared with the control. Moreover, milorganite increased moisture retention at saturation and permanent wilting point while decreasing soil hydraulic conductivity. In addition, milorganite increased soil electrical conductivity. Overall, milorganite resulted in increased soil moisture retention; however, moisture in the soil may not be readily available for plants due to increased soil salinity.
During the past half-century, exponential families have attained a position at the center of parametric statistical inference. Theoretical advances have been matched, and more than matched, in the world of applications, where logistic regression by itself has become the go-to methodology in medical statistics, computer-based prediction algorithms, and the social sciences. This book is based on a one-semester graduate course for first-year Ph.D. and advanced master's students. After presenting the basic structure of univariate and multivariate exponential families, their application to generalized linear models, including logistic and Poisson regression, is described in detail, emphasizing geometrical ideas, computational practice, and the analogy with ordinary linear regression. Connections are made with a variety of current statistical methodologies: missing data, survival analysis and proportional hazards, false discovery rates, bootstrapping, and empirical Bayes analysis. The book connects exponential family theory with its applications in a way that does not require advanced mathematical preparation.
The short timescale of the solar flare reconnection process has long proved to be a puzzle. Recent studies suggest the importance of the formation of plasmoids in the reconnecting current sheet, with the aspect ratio of the current sheet (width to length), quantified as a negative power $\alpha$ of the Lundquist number, that is, $S^{-\alpha}$, being key to understanding the onset of plasmoid formation. In this paper, we make the first application of theoretical scalings for this aspect ratio to observed flares, to evaluate how plasmoid formation may connect with observations. For three different flares that show plasmoids, we find $\alpha$ values ranging from $0.26$ to $0.31$. The values in this small range imply that plasmoids may be forming before the theoretically predicted critical aspect ratio ($\alpha = 1/3$) has been reached, potentially presenting a challenge for the theoretical models.
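In the standard notation (assumed here), the quantities being compared are
\[
\frac{\delta}{L} \sim S^{-\alpha}, \qquad S = \frac{L v_A}{\eta},
\]
where $\delta/L$ is the width-to-length aspect ratio of the current sheet and $S$ is the Lundquist number. Sweet–Parker reconnection corresponds to $\alpha = 1/2$, while the predicted onset of the plasmoid instability is at $\alpha = 1/3$; the observed $\alpha \approx 0.26$ to $0.31$ gives $S^{-\alpha} > S^{-1/3}$, that is, sheets apparently fragmenting while still thicker than the predicted critical aspect ratio.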
We consider supercritical site percolation on the $d$-dimensional hypercube $Q^d$. We show that typically all components in the percolated hypercube, besides the giant, are of size $O(d)$. This resolves a conjecture of Bollobás, Kohayakawa, and Łuczak from 1994.
Longevity risk is putting more and more financial pressure on governments and pension plans worldwide, due to the increasing life expectancy of pensioners and the growing number of people reaching retirement age. Lee and Carter (1992, Journal of the American Statistical Association, 87(419), 659–671) applied a one-factor dynamic factor model to forecast the trend of mortality improvement, and the model has since become the field's workhorse. It is, however, well known that their model is subject to the limitation of overlooking cross-dependence between different age groups. We introduce Factor-Augmented Vector Autoregressive (FAVAR) models to the mortality modelling literature. The model, obtained by adding an unobserved factor process to a Vector Autoregressive (VAR) process, nests VAR and Lee–Carter models as special cases and inherits the advantages of both frameworks. A Bayesian estimation approach, adapted from the Minnesota prior, is proposed. The empirical application to US and French mortality data demonstrates our proposed method's efficacy in both in-sample and out-of-sample performance.
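Schematically (with notation assumed here, not taken from the paper), the FAVAR structure nests both benchmarks:
\[
x_t = \Lambda f_t + A\,x_{t-1} + \varepsilon_t,
\]
where $x_t$ stacks mortality dynamics across age groups and $f_t$ is the unobserved factor process: dropping the autoregressive term $A\,x_{t-1}$ recovers a Lee–Carter-type factor model, while dropping the factor term $\Lambda f_t$ recovers a pure VAR.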
Modeling and forecasting of mortality rates are closely related to a wide range of actuarial practices, such as the design of pension schemes. To improve forecasting accuracy, age coherence is incorporated in many recent mortality models, meaning that long-term forecasts will not diverge infinitely among age groups. Despite their usefulness, individual mortality models are likely to be misspecified when applied in empirical studies, which negatively affects the reliability and accuracy of forecast rates. In this study, an ensemble averaging or model averaging (MA) approach is proposed, which adopts age-specific weights and asymptotically achieves age coherence in mortality forecasting. The ensemble space contains both newly developed age-coherent and classic age-incoherent models to achieve diversity. To realize asymptotic age coherence, account for parameter errors, and avoid overfitting, the proposed method minimizes the variance of out-of-sample forecasting errors, with specially designed coherence and smoothness penalties. Our empirical data set includes ten European countries, with mortality rates for age groups 0–100 spanning 1950–2016. The outstanding performance of MA in mortality forecasting is demonstrated on this sample, and the finding holds robustly in a range of sensitivity analyses. A case study based on the Italian population is finally conducted to demonstrate the improved forecasting efficiency of MA and the validity of the proposed weight estimation, as well as its usefulness in actuarial applications such as annuity pricing.
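A toy sketch of the weight-selection idea (not the authors' estimator): choose nonnegative, sum-to-one weights over candidate models to minimize the sample variance of the combined out-of-sample errors plus a stand-in penalty; the paper's age-specific weights and its coherence penalty are omitted, and the error matrix is synthetic:

```python
# Toy model-averaging weight selection: minimize the variance of combined
# out-of-sample forecast errors plus a stand-in penalty, subject to
# nonnegative weights summing to one. The error matrix is synthetic, and the
# paper's age-specific weights and coherence penalty are not reproduced.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
E = rng.normal(size=(30, 4))              # out-of-sample errors: 30 periods x 4 models

def objective(w):
    combined = E @ w                      # errors of the weighted combination
    return combined.var() + 0.1 * np.sum(np.diff(w) ** 2)   # variance + penalty

res = minimize(objective,
               x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4,
               constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
print(res.x)                              # averaging weights over the candidates
```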