In most industrialised countries, one of the major societal challenges is the demographic change that accompanies the ageing of the population. The increase in life expectancy observed over recent decades underlines the importance of finding ways to adequately cover the financial needs of the elderly. A particular issue arises in the area of health, where sufficient care must be provided to a growing number of dependent elderly people in need of long-term care (LTC) services. In many markets, the offering of life insurance products incorporating care options and of LTC insurance products is generally scarce. In our research, we therefore examine a life annuity product with an embedded care option that potentially provides additional financial support to dependent persons. To evaluate the care option, we determine the minimum price that the annuity provider requires and the policyholder’s willingness to pay for the option. For the latter, we employ individual utility functions that take account of the policyholder’s health condition. We base our numerical study on recently developed transition probability data from Switzerland. Our findings give new and realistic insights into the nature and utility of life annuity products offering an embedded care option for financing LTC needs.
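As a hedged illustration of how such a willingness to pay can be formalised (the specific state-dependent utility functions and wealth processes are the paper’s own and are not reproduced here), one standard criterion is utility indifference:

```latex
% Illustrative utility-indifference criterion (an assumed, generic
% formalisation; not necessarily the paper's exact specification).
% The maximum premium \pi^* the policyholder is willing to pay for
% the care option makes her indifferent between buying it and not:
\[
  \mathbb{E}\big[\, U\big( W_{\text{with option}} - \pi^* \big) \big]
  \;=\;
  \mathbb{E}\big[\, U\big( W_{\text{without option}} \big) \big],
\]
% where U is a state-dependent utility reflecting the policyholder's
% condition (autonomous vs. dependent) and W denotes discounted
% lifetime wealth including annuity and care benefits.
```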
We analyse the coronavirus disease 2019 epidemic curve from March to the end of April 2020 in Germany. We use statistical models to estimate the number of cases with disease onset on a given day and back-projection techniques to obtain the number of new infections per day. The resulting time series are analysed by a trend regression model with change points, which are estimated directly from the data. We carry out the analysis for the whole of Germany and for the federal state of Bavaria, where more detailed data are available. Both analyses show a major change in the time series of infections between 9 and 13 March: from a strong increase to a decrease. Another change was found between 25 and 29 March, when the decline intensified. Furthermore, we perform an analysis stratified by age. A main result is a delayed course of the pandemic for the age group 80+, resulting in a turning point at the end of March. Our results differ from those of other authors because we take into account the reporting delay, which turned out to be time-dependent and therefore changes the structure of the epidemic curve compared with the curve of newly reported cases.
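As a minimal sketch of the change-point idea only (an assumed, simplified implementation; the authors’ model additionally handles back-projection and the time-dependent reporting delay), a piecewise-linear trend on log counts can be fitted by grid search over candidate change points:

```python
# Minimal sketch (not the authors' code): fit a continuous piecewise-linear
# trend on log daily infection counts and pick two change points by grid
# search. The back-projection step is assumed to have been done already.
import numpy as np

def piecewise_sse(log_y, cp1, cp2):
    """Sum of squared errors of a continuous broken-stick fit
    with slope changes at day indices cp1 < cp2."""
    t = np.arange(len(log_y))
    X = np.column_stack([
        np.ones_like(t, dtype=float),   # intercept
        t,                              # baseline slope
        np.maximum(t - cp1, 0),         # slope change after cp1
        np.maximum(t - cp2, 0),         # slope change after cp2
    ])
    beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
    return np.sum((log_y - X @ beta) ** 2), beta

def fit_change_points(daily_infections):
    log_y = np.log(np.asarray(daily_infections, dtype=float) + 1.0)
    n = len(log_y)
    best = (np.inf, None, None, None)
    for cp1 in range(3, n - 6):
        for cp2 in range(cp1 + 3, n - 3):
            sse, beta = piecewise_sse(log_y, cp1, cp2)
            if sse < best[0]:
                best = (sse, cp1, cp2, beta)
    return best  # (sse, cp1, cp2, coefficients)

# Toy usage: growth, then decline, then steeper decline.
rng = np.random.default_rng(0)
y = np.exp(np.concatenate([0.2 * np.arange(20),
                           4.0 - 0.05 * np.arange(15),
                           3.25 - 0.15 * np.arange(15)]))
sse, cp1, cp2, beta = fit_change_points(rng.poisson(y))
print(f"estimated change points at days {cp1} and {cp2}")
```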
Even though the trend in mortality improvements has experienced several permanent changes in the past, the uncertainty regarding future mortality trends is often left unmodeled when pricing longevity-linked securities. In this paper, we present a stochastic modeling framework for the valuation of longevity-linked securities which explicitly considers the risk of random future changes in the long-term mortality trend. We construct a set of meaningful probability distortions which imply equivalent risk-adjusted pricing measures under which the basic model structure is preserved. Inspired by risk-based capital requirements for (re)insurers, we also establish a cost-of-capital pricing approach which then serves as the appropriate reference framework for finding a reasonable range for the market price of longevity risk. In a numerical application, we demonstrate that our model produces plausible risk loadings and show that a greater proportion of the risk loading is allocated to longer maturities when the risk of random future mortality trend changes is adequately modeled.
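For orientation only, a classical example of a probability distortion used in longevity pricing is the Wang transform; the paper constructs its own set of distortions, designed so that the basic model structure is preserved:

```latex
% A classical distortion operator (the Wang transform), given here only
% as a familiar reference point, not as the paper's construction:
\[
  g_\lambda(u) \;=\; \Phi\!\big( \Phi^{-1}(u) + \lambda \big),
  \qquad u \in [0,1],
\]
% where \Phi is the standard normal cdf and \lambda plays the role of a
% market price of (longevity) risk; applying g_\lambda to survival
% probabilities yields risk-adjusted probabilities for pricing.
```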
Centrality is a widely studied phenomenon in network science. In policy networks, central actors are of interest because they are assumed to control information flows, to link opposing coalitions and to directly impact decision-making. First, we study what type of actor (e.g., state authorities or interest groups) is able to occupy central positions in the highly institutionalized context of policy networks. Second, we ask whether bonding or bridging centralities prove to be more stable over time. Third, we investigate how these types of centrality influence actors’ positions in a network over time. We therefore adopt a longitudinal perspective and run exponential random graph models, including lagged central network positions at t1 as the main independent variable for actors’ activity and popularity at t2. Results confirm that very few actors are able to maintain central positions over time.
It has been conjectured that, for any fixed $r \geqslant 2$ and sufficiently large $n$, there is a monochromatic Hamiltonian Berge-cycle in every $(r-1)$-colouring of the edges of $K_n^r$, the complete $r$-uniform hypergraph on $n$ vertices. In this paper we prove this conjecture.
In this paper, we develop a methodology to automatically classify claims using the information contained in text reports written when the claims are opened. From this automatic analysis, the aim is to predict whether a claim is expected to be particularly severe. The difficulty is the rarity of such extreme claims in the database, and hence the difficulty for classical prediction techniques, such as logistic regression, to predict the outcome accurately. Since the data are unbalanced (too few observations are associated with a positive label), we propose different rebalancing algorithms to deal with this issue. We also discuss the different embedding methodologies used to process the text data and the role of the network architectures.
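A minimal Python sketch of two common rebalancing options (class reweighting and random oversampling), here on an assumed TF-IDF/logistic-regression baseline rather than the embedding-plus-network pipelines discussed in the paper:

```python
# Illustrative sketch only (not the paper's pipeline): a text baseline
# with two simple rebalancing options for rare severe claims.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def fit_severity_classifier(reports, is_severe, rebalance="weights"):
    """reports: list of claim-report strings; is_severe: 0/1 labels."""
    vec = TfidfVectorizer(max_features=5000)
    X = vec.fit_transform(reports)
    y = np.asarray(is_severe)

    if rebalance == "weights":
        # Reweight errors on the rare positive class instead of resampling.
        clf = LogisticRegression(class_weight="balanced", max_iter=1000)
        clf.fit(X, y)
    else:
        # Random oversampling: duplicate minority-class rows until the
        # classes are roughly balanced.
        rng = np.random.default_rng(0)
        pos = np.where(y == 1)[0]
        neg = np.where(y == 0)[0]
        boosted = rng.choice(pos, size=len(neg), replace=True)
        idx = np.concatenate([neg, boosted])
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[idx], y[idx])
    return vec, clf
```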
Over the past couple of decades, the surge in sales of equity-linked products has led to many discussions on the evaluation and risk management of the surrender options embedded in these products. However, most studies treat such options as American/Bermudan-style options. In this article, a different approach is presented in which only a portion of policyholders react optimally, reflecting the belief that not all policyholders are rational. Through this method, a probability of surrender is obtained based on the option’s moneyness, and the product is partially hedged using local risk-control strategies. This partial hedging approach is versatile, since few assumptions are required on the financial framework. To compare the different surrender assumptions, the initial capital requirement for an equity-linked product is obtained under a regime-switching equity model. Numerical examples illustrate the dynamics and efficiency of this hedging approach.
The spatio-temporal dynamics of an outbreak provide important insights to help direct public health resources intended to control transmission. They also provide a focus for detailed epidemiological studies and allow the timing and impact of interventions to be assessed.
A common approach is to aggregate case data to administrative regions. Whilst providing a good visual impression of change over space, this method masks spatial variation and assumes that disease risk is constant across space. Risk factors for COVID-19 (e.g. population density, deprivation and ethnicity) vary from place to place across England so it follows that risk will also vary spatially. Kernel density estimation compares the spatial distribution of cases relative to the underlying population, unfettered by arbitrary geographical boundaries, to produce a continuous estimate of spatially varying risk.
Using test results from healthcare settings in England (Pillar 1 of the UK Government testing strategy) and freely available methods and software, we estimated the spatial and spatio-temporal risk of COVID-19 infection across England for the first 6 months of 2020. Widespread transmission was already underway when partial lockdown measures were introduced on 23 March 2020, and the greatest risk was concentrated in large urban areas. The rapid growth phase of the outbreak coincided with multiple introductions to England from the European mainland. The spatio-temporal risk remained highly labile throughout.
In terms of controlling transmission, the most important practical application of our results is the accurate identification of areas within regions that may require tailored intervention strategies. We recommend that this approach be incorporated into routine surveillance outputs in England. Further risk characterisation using widespread community testing (Pillar 2) data is needed, as is increased use of predictive spatial models at fine spatial scales.
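The kernel-density relative-risk idea at the heart of this analysis can be sketched in a few lines (an assumed, simplified implementation using an off-the-shelf Gaussian KDE; the study used dedicated, freely available spatial epidemiology methods and software):

```python
# Sketch of the kernel-density relative-risk idea:
# risk(x) ~ density of cases at x / density of the population at x.
import numpy as np
from scipy.stats import gaussian_kde

def spatial_relative_risk(case_xy, pop_xy, grid_xy, eps=1e-12):
    """case_xy, pop_xy: 2 x n arrays of (x, y) coordinates;
    grid_xy: 2 x m array of evaluation points."""
    f_cases = gaussian_kde(case_xy)(grid_xy)
    f_pop = gaussian_kde(pop_xy)(grid_xy)
    # Log relative risk: 0 where cases mirror the population,
    # positive where cases are over-represented.
    return np.log((f_cases + eps) / (f_pop + eps))
```

Because the ratio is taken against the underlying population density rather than within administrative regions, the estimate varies continuously over space, which is exactly the property the paragraph above contrasts with aggregation to administrative boundaries.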
The possibility of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission by fomites or environmental surfaces has been suggested. It is unclear whether SARS-CoV-2 can be detected in outdoor public areas. The objective of the current study was to assess the presence of SARS-CoV-2 in environmental samples collected at public playgrounds and water fountains in a country with high disease prevalence. Environmental samples were collected from six cities in central Israel, from drinking fountains and from high-touch recreational equipment at playgrounds. Sterile pre-moistened swabs were used to collect the samples, which were placed in viral transport medium and transferred to the laboratory. Viral detection was performed by real-time reverse transcriptase–polymerase chain reaction targeting four genes. Forty-three samples were collected from playground equipment and 25 from water fountains. Two of the 43 (4.6%) samples from playground equipment and one (4%) sample from a drinking fountain tested positive. It is unclear whether the recovery of viral RNA on outdoor surfaces also indicates the possibility of acquiring the virus. Adherence to environmental and personal hygiene in urban settings seems prudent.
We bound the error for the normal approximation of the number of triangles in the Erdős–Rényi random graph with respect to the Kolmogorov metric. Our bounds match the best available Wasserstein bounds obtained by Barbour et al. [(1989). A central limit theorem for decomposable random variables with applications to random graphs. Journal of Combinatorial Theory, Series B 47: 125–145], resolving a long-standing open problem. The proofs are based on a new variant of the Stein–Tikhomirov method—a combination of Stein's method and characteristic functions introduced by Tikhomirov [(1976). The rate of convergence in the central limit theorem for weakly dependent variables. Vestnik Leningradskogo Universiteta 158–159, 166].
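A quick Monte Carlo illustration of the statement being quantified, the approximate normality of standardised triangle counts, though not of the Stein–Tikhomirov proof technique itself (toy parameters assumed):

```python
# Monte Carlo illustration (not part of the proof): the standardised
# triangle count of the Erdos-Renyi graph G(n, p) is approximately
# standard normal.
import numpy as np

def triangle_count(n, p, rng):
    A = (rng.random((n, n)) < p)
    A = np.triu(A, 1)
    A = (A + A.T).astype(float)       # symmetric 0/1 adjacency matrix
    return np.trace(A @ A @ A) / 6.0  # each triangle is counted 6 times

rng = np.random.default_rng(1)
n, p, reps = 60, 0.3, 2000
samples = np.array([triangle_count(n, p, rng) for _ in range(reps)])
z = (samples - samples.mean()) / samples.std()
# Compare the empirical lower tail with the normal tail Phi(-1) ~ 0.159.
print((z < -1.0).mean())
```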
In nonparametric and high-dimensional statistical models, the classical Gauss–Fisher–Le Cam theory of the optimality of maximum likelihood estimators and Bayesian posterior inference does not apply, and new foundations and ideas have been developed in the past several decades. This book gives a coherent account of the statistical theory in infinite-dimensional parameter spaces. The mathematical foundations include self-contained 'mini-courses' on the theory of Gaussian and empirical processes, approximation and wavelet theory, and the basic theory of function spaces. The theory of statistical inference in such models - hypothesis testing, estimation and confidence sets - is presented within the minimax paradigm of decision theory. This includes the basic theory of convolution kernel and projection estimation, but also Bayesian nonparametrics and nonparametric maximum likelihood estimation. In a final chapter the theory of adaptive inference in nonparametric models is developed, including Lepski's method, wavelet thresholding, and adaptive inference for self-similar functions. Winner of the 2017 PROSE Award for Mathematics.
The primary objective of this work is to develop two estimation procedures – maximum likelihood estimation (MLE) and the method of trimmed moments (MTM) – for the mean and variance of lognormal insurance payment severity data affected by different loss control mechanisms, for example truncation (due to deductibles), censoring (due to policy limits) and scaling (due to coinsurance proportions), in the insurance and financial industries. Maximum likelihood estimating equations are derived for both payment-per-payment and payment-per-loss data sets and can be solved readily by existing iterative numerical methods. The asymptotic distributions of those estimators are established via Fisher information matrices. Further, with the goal of balancing efficiency and robustness and of removing point masses at certain data points, we develop dynamic MTM estimation procedures for lognormal claim severity models under the above-mentioned transformed data scenarios. The asymptotic distributional properties of those MTM estimators, and their comparison with the corresponding MLEs, are established along with extensive simulation studies. Purely for illustrative purposes, numerical examples for 1500 US indemnity losses are provided which demonstrate the practical performance of the established results.
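As a hedged sketch of the payment-per-loss MLE setting (assumed parameterisation and toy data; the paper derives the full estimating equations and Fisher information matrices), a deductible enters the likelihood as left truncation and a policy limit as right censoring:

```python
# Sketch: MLE for a lognormal severity model with a deductible d
# (left truncation) and a policy limit M (right censoring).
import numpy as np
from scipy import optimize, stats

def neg_loglik(params, x, censored, d, M):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)          # keep sigma > 0
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    trunc = dist.sf(d)                 # P(X > d): truncation adjustment
    ll = np.where(
        censored,
        np.log(dist.sf(M)) - np.log(trunc),   # loss hit the policy limit
        dist.logpdf(x) - np.log(trunc),       # fully observed loss
    )
    return -np.sum(ll)

def fit_lognormal(x, censored, d, M):
    res = optimize.minimize(neg_loglik, x0=[np.log(np.median(x)), 0.0],
                            args=(x, censored, d, M), method="Nelder-Mead")
    mu, log_sigma = res.x
    return mu, np.exp(log_sigma)

# Toy usage with simulated ground-up losses.
rng = np.random.default_rng(2)
losses = rng.lognormal(mean=8.0, sigma=1.2, size=5000)
d, M = 500.0, 50000.0
obs = losses[losses > d]               # only losses above the deductible
censored = obs >= M                    # payments capped at the limit
x = np.minimum(obs, M)
print(fit_lognormal(x, censored, d, M))
```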
Fourier analysis can provide policymakers with useful information for analysing pandemic behaviour. This paper proposes a Fourier analysis approach for examining the cycle length and the power spectrum of the pandemic by converting the number of deaths due to coronavirus disease 2019 in the US to the frequency domain. Policymakers can use the observed cycle length to decide whether or not they should strengthen their policies. The proposed Fourier method is also useful for analysing waves in other medical applications.
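A minimal sketch of the spectral computation (the paper’s exact detrending and windowing choices are assumed away here): convert daily death counts to the frequency domain and read dominant cycle lengths off the periodogram:

```python
# Minimal sketch: periodogram of daily COVID-19 death counts.
import numpy as np

def dominant_cycles(daily_deaths, k=3):
    y = np.asarray(daily_deaths, dtype=float)
    y = y - y.mean()                        # remove the mean (crude detrend)
    power = np.abs(np.fft.rfft(y)) ** 2     # periodogram
    freqs = np.fft.rfftfreq(len(y), d=1.0)  # in cycles per day
    order = np.argsort(power[1:])[::-1] + 1 # rank peaks, skip frequency 0
    return [(1.0 / freqs[i], power[i]) for i in order[:k]]  # (days, power)

# A weekly reporting cycle, for instance, appears as a peak near period 7.
```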
COVID-19 is placing a significant burden on medical and healthcare resources globally due to the high numbers of hospitalisations and deaths recorded as the pandemic continues. This research aims to assess the effects of climate factors (i.e., daily average temperature and average relative humidity) on the effective reproductive number of the COVID-19 outbreak in Wuhan, China during its early stage. Our analysis showed that the effective reproductive number of COVID-19 increases by 7.6% (95% confidence interval: 5.4%–9.8%) per 1°C drop in mean temperature, using a prior moving average over a 0–8 day lag, in Wuhan, China. Our results indicate that temperature was negatively associated with COVID-19 transmissibility during the early stages of the outbreak in Wuhan, suggesting that temperature is likely to affect COVID-19 transmission. These results suggest that increased precautions should be taken in the colder seasons to reduce COVID-19 transmission in the future, building on past success in controlling the pandemic in Wuhan, China.
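To make the reported effect size concrete (a worked example using only the abstract’s point estimate), the per-degree effect compounds multiplicatively:

```latex
% Worked example from the reported point estimate: a 7.6% rise in R_t
% per 1 degree C drop compounds over a 5 degree C drop as
\[
  R_t \;\longmapsto\; R_t \times 1.076^{5} \;\approx\; 1.44 \, R_t .
\]
```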
New computing and communications paradigms will result in traffic loads in information server systems that fluctuate over much broader ranges of time scales than in current systems. In addition, these fluctuation time scales may be only indirectly known or even unknown. However, we should still be able to accurately design and manage such systems. This paper addresses this issue: we consider an M/M/1 queueing system operating in a random environment (denoted M/M/1(R)) that alternates between HIGH and LOW phases, where the load in the HIGH phase is higher than in the LOW phase. Previous work on the performance characteristics of M/M/1(R) systems established fundamental properties of the shape of performance curves. In this paper, we extend monotonicity results to include convexity and concavity properties, provide a partial answer to an open problem on stochastic ordering, develop new computational techniques, and cover boundary cases and various degenerate M/M/1(R) systems. Our results are based on novel representations for the mean number in system and the probability of the system being empty. We then apply these results to analyze practical aspects of system operation and design; in particular, we derive the optimal service rate to minimize mean system cost and provide a bias analysis of the use of customer-level sampling to estimate time-stationary quantities.
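A small simulation sketch of the M/M/1(R) system (assumed parameters; the paper works with exact analytical representations, not simulation) that estimates the two quantities the novel representations target, the mean number in system and the probability of the system being empty:

```python
# Illustrative simulation of an M/M/1 queue whose arrival rate alternates
# between a HIGH and a LOW phase, each phase lasting an exponential time.
import numpy as np

def simulate_mm1_random_env(lam_high, lam_low, mu, switch_rate,
                            horizon, seed=0):
    rng = np.random.default_rng(seed)
    t, n, high = 0.0, 0, True
    area = 0.0                     # time-integral of the queue length
    empty_time = 0.0
    while t < horizon:
        lam = lam_high if high else lam_low
        serv = mu if n > 0 else 0.0
        rate = lam + serv + switch_rate      # total event rate
        dt = rng.exponential(1.0 / rate)
        area += n * dt
        if n == 0:
            empty_time += dt
        t += dt
        u = rng.random() * rate              # pick which event fired
        if u < lam:
            n += 1                           # arrival
        elif u < lam + serv:
            n -= 1                           # departure
        else:
            high = not high                  # environment phase switch
    return area / t, empty_time / t  # mean number in system, P(empty)

print(simulate_mm1_random_env(0.9, 0.3, 1.0, 0.05, 200_000.0))
```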
IFRS 17 Insurance Contracts is a new accounting standard currently expected to come into force on 1 January 2023. It supersedes IFRS 4 Insurance Contracts. IFRS 17 establishes key principles that entities must apply in all aspects of the accounting of insurance contracts. In doing so, the Standard aims to increase the usefulness, comparability, transparency and quality of financial statements.
A fundamental concept introduced by IFRS 17 is the contractual service margin (CSM). This represents the unearned profit that an entity expects to earn as it provides services. However, as IFRS 17 is a principles-based standard, entities have to apply significant judgement when determining the inputs, assumptions and techniques they use to determine the CSM at each reporting period.
In general, the Standard resolves broad categories of mismatches which arise under IFRS 4. Notable examples include mismatches between assets recorded at current market value and liabilities calculated using fixed discount rates, as well as inconsistencies in the timing of profit recognition over the duration of an insurance contract. However, some requirements of IFRS 17 may create new economic or accounting mismatches of their own. For example, new mismatches could arise between the measurement of underlying contracts and the corresponding reinsurance held. Additionally, mismatches can still arise between the measurement of liabilities and the assets that support those liabilities.
This paper explores the technical, operational and commercial issues that arise across these and other areas, focusing on the CSM. Since the Standard is still very much in its infancy and wider consensus on many topics is yet to be achieved, the paper aims to provide readers with a deeper understanding of the issues and opportunities that accompany it.
Initial insurance losses are often reported with a textual description of the claim. The claims manager must determine the adequate case reserve for each known claim. In this paper, we present a framework for predicting the amount of loss given a textual description of the claim using a large number of words found in the descriptions. Prior work has focused on classifying insurance claims based on keywords selected by a human expert, whereas in this paper the focus is on loss amount prediction with automatic word selection. In order to transform words into numeric vectors, we use word cosine similarities and word embedding matrices. When we consider all unique words found in the training dataset and impose a generalised additive model to the resulting explanatory variables, the resulting design matrix is high dimensional. For this reason, we use a group lasso penalty to reduce the number of coefficients in the model. The scalable, analytical framework proposed provides for a parsimonious and interpretable model. Finally, we discuss the implications of the analysis, including how the framework may be used by an insurance company and how the interpretation of the covariates can lead to significant policy change. The code can be found in the TAGAM R package (github.com/scottmanski/TAGAM).
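A generic proximal-gradient sketch of the group lasso penalty (illustrative only; it is not the TAGAM implementation, and the paper applies the penalty within a generalised additive model rather than the plain linear model shown here):

```python
# Sketch of a group-lasso solver via proximal gradient descent: whole
# groups of coefficients (e.g., all terms tied to one word) are zeroed
# at once, which is what removes uninformative words from the model.
import numpy as np

def group_lasso(X, y, groups, lam, n_iter=500):
    """groups: list of index arrays, one per group of columns of X."""
    n, p = X.shape
    beta = np.zeros(p)
    # Step size = 1 / Lipschitz constant of the squared-error gradient.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - step * grad
        for g in groups:
            norm_g = np.linalg.norm(z[g])
            w = lam * step * np.sqrt(len(g))
            # Block soft-thresholding (the group-lasso proximal operator).
            z[g] = 0.0 if norm_g <= w else (1 - w / norm_g) * z[g]
        beta = z
    return beta
```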
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) led to a significant disease burden and disruptions in health systems. We describe the epidemiology and transmission characteristics of early coronavirus disease 2019 (COVID-19) cases in Bavaria, Germany. Cases were reverse transcription polymerase chain reaction (RT-PCR)-confirmed SARS-CoV-2 infections reported from 20 January to 19 March 2020. The incubation period was estimated using travel history and date of symptom onset. To estimate the serial interval, we identified pairs of index and secondary cases. By 19 March, 3546 cases had been reported. A large proportion of cases (38%) had been exposed abroad, causing further local transmission. The median incubation period of the 256 cases with exposure abroad was 3.8 days (95% CI: 3.5–4.2). For 95% of infected individuals, symptom onset occurred within 10.3 days (95% CI: 9.1–11.8) after exposure. The median serial interval, based on 53 pairs, was 3.5 days (95% CI: 3.0–4.2; mean: 3.9, s.d.: 2.2). Travellers returning to Germany had an important influence on the spread of SARS-CoV-2 infections in Bavaria in early 2020. Especially in times of low incidence, public health agencies should identify holiday destinations and areas with ongoing local transmission in order to monitor potential importation of SARS-CoV-2 infections. Travellers returning from areas with ongoing community transmission should be advised to quarantine to prevent re-introductions of COVID-19.
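A simplified sketch of the incubation-period fit (it assumes exact single exposure dates, whereas the paper works with exposure windows derived from travel history): fit a lognormal distribution and report the median and the 95th percentile:

```python
# Sketch: lognormal incubation period from exposure-to-onset delays.
import numpy as np
from scipy import stats

def fit_incubation(delays_days):
    delays = np.asarray(delays_days, dtype=float)
    # MLE for a lognormal reduces to fitting a normal to log delays.
    mu = np.log(delays).mean()
    sigma = np.log(delays).std(ddof=1)
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    return dist.median(), dist.ppf(0.95)

# With data like the paper's, the call would return values near
# (3.8, 10.3) days.
```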
In this paper we consider the pricing and hedging of financial derivatives in a model-independent setting, for a trader with additional information, or beliefs, on the evolution of asset prices. In particular, we suppose that the trader wants to act in a way which is independent of any modelling assumptions, but that she observes market information in the form of the prices of vanilla call options on the asset. We also assume that both the payoff of the derivative, and the insider’s information or beliefs, which take the form of a set of impossible paths, are time-invariant. In this way we accommodate drawdown constraints, as well as information/beliefs on quadratic variation or on the levels hit by asset prices. Our setup allows us to adapt recent work of [12] to prove duality results and a monotonicity principle. This enables us to determine geometric properties of the optimal models. Moreover, for specific types of information, we provide simple conditions for the existence of consistent models for the informed agent. Finally, we provide an example where our framework allows us to compute the impact of the information on the agent’s pricing bounds.