Consider a general mortality-linked security (MLS) with a bounded payoff contingent on the evolution of the underlying mortality rate and the performance of associated risky assets. The mortality rate and asset prices are assumed to jointly follow a multivariate Itô process, driven by both a multivariate Brownian motion and a Poisson point process. We follow the utility indifference approach to pricing this MLS under the physical measure. To this end, we employ backward stochastic differential equations (BSDEs) to characterize the optimal investment strategy and the value function for the involved optimization problems. We then solve the resulting nonlinear BSDEs with a non-Lipschitz generator. This methodology, which combines the utility indifference approach with BSDE techniques, provides numerical tractability through Monte Carlo simulations. Finally, we conduct comprehensive numerical studies on the valuation of several concrete MLSs, with a focus on the sensitivity of the indifference prices to various key model parameters, including, in particular, the correlation between the underlying mortality rate and asset price.
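To illustrate the numerical tractability that Monte Carlo simulation brings to indifference pricing, the following is a minimal sketch in a deliberately stripped-down static setting: exponential utility, no dynamic trading in the risky assets, and an illustrative jump-diffusion for the mortality rate. Under these assumptions the seller's indifference price reduces to the certainty equivalent $p = \frac{1}{\alpha}\log \mathbb{E}[e^{\alpha C}]$ for a bounded payoff $C$; all parameter values are hypothetical, and the paper's actual BSDE characterization is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative model parameters (not from the paper) ---
alpha = 2.0        # risk aversion of the exponential utility
mu0   = 0.01       # initial mortality rate
sigma = 0.15       # diffusive volatility of the mortality rate
lam   = 0.5        # jump intensity (Poisson)
jump  = 0.10       # relative jump size of the mortality rate
T, n_steps, n_paths = 1.0, 250, 100_000
dt = T / n_steps

# Simulate mortality-rate paths with a diffusion part and Poisson jumps.
mu = np.full(n_paths, mu0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    dN = rng.poisson(lam * dt, n_paths)
    mu = mu * np.exp(sigma * dW - 0.5 * sigma**2 * dt) * (1 + jump) ** dN

# Bounded payoff: a capped claim on the excess of mu_T over a strike.
payoff = np.minimum(np.maximum(mu - 0.012, 0.0) * 100.0, 1.0)

# Static exponential-utility indifference (seller's) price:
#   p = (1/alpha) * log E[exp(alpha * payoff)]
p = np.log(np.mean(np.exp(alpha * payoff))) / alpha
print(f"indifference price ~ {p:.4f},  expected payoff ~ {payoff.mean():.4f}")
```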
In this article, we study a non-uniform distribution on permutations, biased by their number of records, that we call record-biased permutations. We give several generative processes for record-biased permutations and explain how they can be used to devise efficient (linear) random samplers. For several classical permutation statistics, we obtain their expectation using the above generative processes, as well as their limit distributions in the regime with a logarithmic number of records (as in the uniform case). Finally, increasing the bias to obtain a regime with an expected linear number of records, we establish the convergence of record-biased permutations to a deterministic permuton, which we fully characterise. This model was introduced in our earlier work [3], in the context of realistic analysis of algorithms. Here we conduct a more thorough study, from a theoretical perspective.
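As a hedged illustration of one natural generative process (a quadratic-time toy version, not one of the linear samplers of the article): build the permutation by inserting the values $n, n-1, \ldots, 1$ one at a time. The value being inserted is smaller than everything already placed, so it becomes a record (left-to-right maximum) exactly when it is placed at the front, and placing it there with weight $\theta$ versus weight 1 for every other slot yields $\mathbb{P}(\sigma) \propto \theta^{\mathrm{rec}(\sigma)}$.

```python
import random

def record_biased_permutation(n: int, theta: float) -> list[int]:
    """Sample sigma on {1..n} with P(sigma) proportional to theta**rec(sigma),
    where rec counts records (left-to-right maxima).

    Insert values n, n-1, ..., 1 one by one.  The value being inserted is
    smaller than everything already placed, so it becomes a record iff it
    goes to the front; that slot gets weight theta, every other slot weight 1.
    """
    perm: list[int] = []
    for v in range(n, 0, -1):
        k = len(perm)                      # k + 1 available slots
        total = theta + k                  # front: theta, others: 1 each
        if random.random() * total < theta:
            perm.insert(0, v)              # new record
        else:
            perm.insert(1 + random.randrange(k), v)  # positions 1..k
    return perm

def count_records(perm: list[int]) -> int:
    best, rec = 0, 0
    for x in perm:
        if x > best:
            best, rec = x, rec + 1
    return rec

# theta = 1 recovers the uniform distribution; larger theta favours records.
sample = record_biased_permutation(10, theta=3.0)
print(sample, count_records(sample))
```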
Let $r, k, n$ be integers satisfying $1\leqslant r\leqslant k\leqslant n/2$. Let ${{\mathcal{R}}}_r(n, k)$ denote the proportion of permutations $\pi \in {{\mathcal{S}}}_n$ that fix a set of size $k$ and have no cycle of length less than $r$. In this note, we determine the order of magnitude of ${{\mathcal{R}}}_r(n, k)$ uniformly for all $2\leqslant r\leqslant k\leqslant n/2$. This result generalises the corresponding estimate of Eberhard, Ford, and Green for the case $r=1$.
We prove that, for any $k\geq 3$ and clause/variable ratios up to the Gibbs uniqueness threshold of the corresponding Galton-Watson tree, the number of satisfying assignments of random $k$-SAT formulas is given by the ‘replica symmetric solution’ predicted by physics methods [Monasson, Zecchina: Phys. Rev. Lett. 76 (1996)]. Furthermore, while the Gibbs uniqueness threshold is still not known precisely for any $k\geq 3$, we derive new lower bounds on this threshold that improve over prior work [Montanari and Shah: SODA (2007)]. The improvement is particularly significant for small $k$.
Confirming a conjecture of Erdős on the chromatic number of Kneser hypergraphs, Alon, Frankl and Lovász proved that in any $q$-colouring of the edges of the complete $r$-uniform hypergraph on $n$ vertices, there exists a monochromatic matching of size $\lfloor \frac {n+q-1}{r+q-1}\rfloor$. In this paper, we prove a transference version of this theorem. More precisely, for fixed $q$ and $r$, we show that with high probability, a monochromatic matching of approximately the same size exists in any $q$-colouring of a random hypergraph, already when the average degree is a sufficiently large constant. In fact, our main new result is a defect version of the Alon–Frankl–Lovász theorem for almost complete hypergraphs. From this, the transference version is obtained via a variant of the weak hypergraph regularity lemma. The proof of the defect version uses tools from extremal set theory developed in the study of the Erdős matching conjecture.
Counting the number of isomers of a chemical molecule is one of the formative problems of graph theory. However, recent progress has been slow, and the problem has largely been ignored in modern network science. Here we provide an introduction to the mathematics of counting network structures and then use it to derive results for two new classes of molecules. In contrast to previously studied examples, these classes take additional chemical complexity into account and thus require the use of multivariate generating functions. The results illustrate the elegance of counting theory, highlighting it as an important tool that should receive more attention in network science.
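As a hedged, classical illustration of the univariate case (not one of the new multivariate classes of the article): the number of alkyl groups $C_nH_{2n+1}$– corresponds to rooted trees in which every node has at most three unordered children, and Pólya's cycle-index method gives the functional equation $A(x) = 1 + x\,\frac{A(x)^3 + 3A(x)A(x^2) + 2A(x^3)}{6}$, which can be solved numerically by iterating on truncated power series:

```python
N = 12  # truncation order for the counting series

def substitute(a, m):
    """Return the series a(x^m), truncated to degree N."""
    out = [0] * (N + 1)
    for i, c in enumerate(a):
        if m * i <= N:
            out[m * i] = c
    return out

def mul(a, b):
    """Truncated product of two series."""
    out = [0] * (N + 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            if ca and i + j <= N:
                out[i + j] += ca * cb
    return out

# Fixed-point iteration for Polya's equation
#   A(x) = 1 + x * (A(x)^3 + 3 A(x) A(x^2) + 2 A(x^3)) / 6;
# each pass fixes at least one further coefficient, so N passes suffice.
A = [1] + [0] * N
for _ in range(N):
    cube = mul(mul(A, A), A)
    mix  = mul(A, substitute(A, 2))
    tri  = substitute(A, 3)
    z3   = [(cube[i] + 3 * mix[i] + 2 * tri[i]) / 6 for i in range(N + 1)]
    A = [1] + [z3[i - 1] for i in range(1, N + 1)]

print([round(c) for c in A])
# expected: [1, 1, 1, 2, 4, 8, 17, 39, 89, 211, 507, 1238, 3057]
```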
This paper presents an illustrated tutorial for conducting an embedded Mixed-Method Social Network Analysis (MMSNA) to examine the dynamic interplay between human agency and social networks. We draw on an empirical study in education that investigated how teachers enact relational agency within their school networks to support the integration of migrant students. We propose a replicable method and stepwise procedure for designing, implementing and evaluating an embedded MMSNA. While the potential of MMSNA has long been recognized across disciplines, its purpose and operationalization are often underexplained. We illustrate how MMSNA can be used to analyze both network structures and the agency of actors embedded within them, in alignment with specific research objectives and theoretical perspectives.
Fractional Brownian motion, with its long-time correlated increments, has been applied in many fields in recent years. Since volatility was shown to be rough by Gatheral, Jaisson, and Rosenbaum, fractional Brownian motion has gained popularity as a financial model. In this work, we revisit the definitions and properties of the univariate and multivariate fractional Brownian motions, and consider four simulation methods. We demonstrate the issues associated with applying the standard Euler scheme for simulating stochastic processes driven by fractional Brownian motion with $H < \frac{1}{2}$ (which we call the rough models). We then introduce a novel approximate method for simulating such rough models based on the fast algorithm by Ma and Wu, which yields roughly a tenfold speedup. Finally, we consider applications of these methods to option pricing.
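For readers wanting a baseline, here is a hedged sketch of the exact Cholesky method, one standard approach that remains valid in the rough regime $H < \frac{1}{2}$; it factorizes the covariance $\mathrm{Cov}(B_H(s), B_H(t)) = \frac{1}{2}\left(s^{2H} + t^{2H} - |t-s|^{2H}\right)$ at cost $O(n^3)$. The fast Ma-Wu-based scheme of the paper is not reproduced here.

```python
import numpy as np

def fbm_cholesky(H: float, T: float, n: int, n_paths: int, seed: int = 0):
    """Exact simulation of fractional Brownian motion on (0, T] via a
    Cholesky factorisation of the covariance matrix
        Cov(B_H(s), B_H(t)) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2.
    O(n^3) setup; valid for all H in (0, 1), including rough H < 1/2.
    """
    t = np.linspace(T / n, T, n)                 # grid, excluding t = 0
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)
    z = np.random.default_rng(seed).normal(size=(n, n_paths))
    return t, L @ z                              # each column is one path

t, paths = fbm_cholesky(H=0.1, T=1.0, n=500, n_paths=1000)
# Statistical sanity check: Var(B_H(t)) should be close to t^{2H}.
print(np.allclose(paths.var(axis=1), t**(2 * 0.1), rtol=0.2))
```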
We study large deviations for Cox–Ingersoll–Ross processes with small noise and state-dependent fast switching via associated Hamilton–Jacobi–Bellman equations. As the time scales separate, with the noise tending to 0 and the switching rate tending to $\infty$, we obtain a limit equation characterized by the averaging principle. Moreover, we prove the large deviation principle with an action-integral form rate function to describe the asymptotic behavior of such systems. The new ingredient is establishing the comparison principle in this singular context. The proof is carried out using the nonlinear semigroup method from Feng and Kurtz’s book [14].
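As a purely illustrative companion to this analytic setup (the paper's arguments are via HJB equations, not simulation), the following sketch simulates a CIR process whose coefficients are modulated by a fast two-state Markov chain, with small-noise parameter $\varepsilon$ and switching time scale $\delta$; as both shrink, paths concentrate near the equilibrium of the averaged drift. All parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (two-state switching; not taken from the paper).
kappa = np.array([1.0, 3.0])    # mean-reversion speed in each regime
mu    = np.array([0.5, 1.5])    # long-run level in each regime
sigma = 0.4
eps   = 0.01                    # small-noise parameter
delta = 0.01                    # fast-switching time scale
q     = np.array([[-2.0, 2.0],  # generator of the switching chain,
                  [ 1.0, -1.0]])  # accelerated by the factor 1/delta

T, n = 5.0, 5000
dt = T / n                      # dt must be << delta for the Euler scheme
x, z = 1.0, 0
path = np.empty(n)
for i in range(n):
    # Fast regime switch: jump with probability ~ rate * dt / delta.
    if rng.random() < -q[z, z] * dt / delta:
        z = 1 - z
    # Euler step for the CIR dynamics with small noise sqrt(eps) * sigma.
    dW = rng.normal(0.0, np.sqrt(dt))
    x += kappa[z] * (mu[z] - x) * dt \
         + np.sqrt(eps) * sigma * np.sqrt(max(x, 0.0)) * dW
    path[i] = x

# Averaged drift under the chain's stationary law pi = (1/3, 2/3):
pi = np.array([1 / 3, 2 / 3])
x_bar = (pi * kappa * mu).sum() / (pi * kappa).sum()
print(f"terminal X ~ {path[-1]:.3f}, averaged equilibrium ~ {x_bar:.3f}")
```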
The vast majority of researchers, actuaries, and demographers use standard time series analysis techniques to project time-varying parameters of popular mortality forecasting methods such as the Lee–Carter and Li–Lee models. However, spatial dependence can be as significant as temporal autocorrelation in these time series, and the underlying panel structure of the data is often neglected. We draw on techniques from panel and spatial econometrics, including ordinary and spatial dynamic panel linear models, spatiotemporal autoregressive integrated moving average processes, and spatial eigenvector filters, to capture such dependence and improve projections. We present a methodology to estimate the parameters of these techniques from spatial multipopulation mortality series, select their optimal hyperparameters, and use them for forecasting. We propose a tailor-made robust selection framework to identify the best model–technique combinations for each country, as well as a bootstrap-based procedure to quantify projection uncertainty with accurate nominal coverage on a separate validation period and a strategy for assessing the quality of the resulting prediction intervals. We test these methods on mortality data from 22 European countries. The results show that the proposed techniques yield a clear advantage in both point and interval forecasts for several populations, and these findings are corroborated by a robust selection design and additional robustness checks. These improvements have the potential to deliver meaningful gains for life insurance, pensions, and other contexts involving longevity risk.
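For context on the baseline such techniques refine, here is a hedged sketch of the classical Lee–Carter pipeline on synthetic data: estimate $\log m_{x,t} \approx a_x + b_x k_t$ by SVD of the row-centered log-mortality matrix, then project the period index $k_t$ as a random walk with drift. The paper's spatial-econometric methods replace this plain time-series projection step.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic log-mortality surface (ages x years), for illustration only.
ages, years = 60, 40
a_x = np.linspace(-8.0, -1.0, ages)               # age profile
b_x = np.full(ages, 1.0 / ages)                   # age sensitivities
k_t = -0.3 * np.arange(years) + np.cumsum(rng.normal(0, 0.3, years))
log_m = (a_x[:, None] + b_x[:, None] * k_t[None, :]
         + rng.normal(0, 0.02, (ages, years)))

# --- Lee-Carter estimation via SVD of the centered matrix ---
a_hat = log_m.mean(axis=1)                        # alpha_x: row means
U, S, Vt = np.linalg.svd(log_m - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()                   # normalise sum(b_x) = 1
k_hat = S[0] * Vt[0] * U[:, 0].sum()              # period index k_t

# --- Project k_t as a random walk with drift ---
drift = np.diff(k_hat).mean()
h = 10                                            # forecast horizon (years)
k_fore = k_hat[-1] + drift * np.arange(1, h + 1)
log_m_fore = a_hat[:, None] + b_hat[:, None] * k_fore[None, :]
print("projected k_t:", np.round(k_fore[:5], 2))
```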
We consider the problem of maximizing the expected average reward obtained over an infinite time horizon by n weakly coupled Markov decision processes. Our setup is a substantial generalization of the multi-armed restless bandit problem that allows for multiple actions and constraints. We establish a connection with a deterministic and continuous-variable control problem where the objective is to maximize the average reward derived from an occupancy measure that represents the empirical distribution of the processes when $n \to \infty$. We show that a solution of this fluid problem can be used to construct policies for the weakly coupled processes that achieve the maximum expected average reward as $n \to \infty$, and we give sufficient conditions for the existence of solutions. Under certain assumptions on the constraints, we prove that these conditions are automatically satisfied if the unconstrained single-process problem admits a suitable unichain and aperiodic policy. In particular, the assumptions include multi-armed restless bandits and a broad class of problems with multiple actions and inequality constraints. Also, the policies can be constructed in an explicit way in these cases. Our theoretical results are complemented by several concrete examples and numerical experiments, which include multichain setups that are covered by the theoretical results.
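To make the occupancy-measure idea concrete, the following is a hedged sketch of the standard single-process LP relaxation for a two-action restless-bandit instance (not the paper's general construction): maximize $\sum_{s,a} y(s,a)\,r(s,a)$ over occupancy measures $y$ subject to stationarity, normalization, and an average-activation budget $\alpha$. The instance below is random and purely illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
S, A, alpha = 4, 2, 0.3          # states, actions (0=rest, 1=act), budget

# Random transition kernel P[a, s, s'] and rewards r[s, a] (illustrative).
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((S, A))

# Decision variables: occupancy y[s, a], flattened as y[s * A + a].
c = -r.reshape(-1)                         # linprog minimises, so negate

A_eq, b_eq = [], []
for s in range(S):                         # stationarity of the occupancy
    row = np.zeros(S * A)
    for a in range(A):
        row[s * A + a] += 1.0
        for s2 in range(S):
            row[s2 * A + a] -= P[a, s2, s]
    A_eq.append(row); b_eq.append(0.0)
A_eq.append(np.ones(S * A)); b_eq.append(1.0)        # normalisation
budget = np.zeros(S * A); budget[1::A] = 1.0         # action-1 entries
A_eq.append(budget); b_eq.append(alpha)              # average activation

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (S * A), method="highs")
y = res.x.reshape(S, A)
print("fluid value:", -res.fun)
print("occupancy measure:\n", np.round(y, 3))
```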
The InsurTech industry has undergone almost a decade of development. Despite its initial success, the industry now faces challenges from global uncertainties and regulatory adjustments, which raise concerns about sustainable profit growth and the ongoing development of InsurTech. This study provides an overview of the evolution of InsurTech from both academic and practical perspectives. We present a bibliometric analysis of more than 20,000 published articles, comprising both practice and academic articles. Compared with other review articles in this field, which often focus on either the practice or the scholarly side of development, this article brings together a review of both academic and practice-based articles from fields relevant to InsurTech, including artificial intelligence, the Internet of Things, and powerful computing technologies. A keyword extraction framework is developed and applied. Using text analysis, this study reviews the prioritized topics, analyzes the robustness of publication growth, identifies emerging insurance business lines, and highlights the challenges and gaps in both academic and practice-oriented development. This study aims to motivate collaboration between academia and industry to face the challenges posed by the integration of InsurTech into insurance operations.
We evaluate the performance and level of intergenerational cross-subsidy in flat-accrual and dynamic-accrual collective defined contribution (CDC) schemes, which have been designed to be compatible with UK legislation. In the flat-accrual scheme, all members accrue benefits at the same rate, irrespective of age. This captures the most significant feature of the Royal Mail Collective Pension Plan, which is currently the only UK CDC scheme. The dynamic-accrual schemes seek to reduce intergenerational cross-subsidies by varying the rate of benefit accrual in accordance with the age of members and the current funding level. We find that these CDC schemes can often be successful in smoothing pension outcomes post-retirement while outperforming a defined contribution scheme followed by annuity purchase at the point of retirement. However, this outperformance is not guaranteed in a flat-accrual scheme, and there is little smoothing of projected pension outcomes before retirement. There are significant intergenerational cross-subsidies in the flat-accrual scheme, which qualitatively mirror the cross-subsidies seen in defined benefit schemes, but their magnitude is much larger in flat-accrual CDC schemes. The dynamic-accrual scheme design seeks to reduce such cross-subsidies, but we find that significant cross-subsidies still arise, owing to the approximate pricing methodology used to determine the benefits.