Typhoid fever is a major cause of illness and mortality in low- and middle-income settings. We investigated the association of typhoid fever and rainfall in Blantyre, Malawi, where multi-drug-resistant typhoid has been transmitting since 2011. Peak rainfall preceded the peak in typhoid fever by approximately 15 weeks [95% confidence interval (CI) 13.3, 17.7], indicating no direct biological link. A quasi-Poisson generalised linear modelling framework was used to explore the relationship between rainfall and typhoid incidence at biologically plausible lags of 1–4 weeks. We found a protective effect of rainfall anomalies on typhoid fever, at a two-week lag (P = 0.006), where a 10 mm lower-than-expected rainfall anomaly was associated with up to a 16% reduction in cases (95% CI 7.6, 26.5). Extreme flooding events may cleanse the environment of S. Typhi, while unusually low rainfall may reduce exposure from sewage overflow. These results add to evidence that rainfall anomalies may play a role in the transmission of enteric pathogens, and can help direct future water and sanitation intervention strategies for the control of typhoid fever.
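As a rough sketch of the kind of model described above (not the study's code or data), a quasi-Poisson GLM with lagged rainfall anomalies can be fitted in Python with statsmodels; the synthetic data, variable names and coefficient below are assumptions chosen only to echo the direction of the reported effect.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic illustration only: weekly case counts driven by rainfall anomalies at a
# two-week lag, analysed with a quasi-Poisson GLM at lags of 1-4 weeks.
rng = np.random.default_rng(0)
weeks = 260
rain_anom = rng.normal(0, 10, weeks)               # weekly rainfall anomaly (mm)
lam = np.exp(1.5 + 0.017 * np.roll(rain_anom, 2))  # cases respond to the 2-week-lagged anomaly
cases = rng.poisson(lam)

df = pd.DataFrame({"cases": cases, "rain_anom": rain_anom})
for lag in range(1, 5):                            # biologically plausible lags of 1-4 weeks
    df[f"rain_lag{lag}"] = df["rain_anom"].shift(lag)
df = df.dropna()

X = sm.add_constant(df[[f"rain_lag{lag}" for lag in range(1, 5)]])
# Quasi-Poisson: Poisson mean structure, with dispersion estimated from the Pearson chi-square
fit = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit(scale="X2")
print(fit.summary())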
This paper highlights how actuarial thinking could contribute to the development of a justice perspective on biodiversity risks, and also the implications for actuarial work. The impact of biodiversity loss, as well as the use of ecosystem services, is not equally distributed across society (both intra- and inter-generationally). The Biodiversity and Natural Capital Risk Working Party has been set up to proactively take forward a series of activities, including think pieces, webinars and external engagement on these risks.
This paper introduces the concept of Natural Capital and explores the implications for actuarial work by way of case studies. It is part of a wider series of IFoA papers focussing on the risks from global biodiversity loss and how these risks can be mitigated.
Between 21 November and 22 December 2020, a SARS-CoV-2 community testing pilot took place in the South Wales Valleys. We conducted a case-control study in adults taking part in the pilot using an anonymous online questionnaire. Social, demographic and behavioural factors were compared in people with a positive lateral flow test (cases) and a sample of negatives (controls). A total of 199 cases and 2621 controls completed a questionnaire (response rates: 27.1% and 37.6%, respectively). Following adjustment, cases were more likely to work in the hospitality sector (aOR 3.39, 95% CI 1.43–8.03), social care (aOR 2.63, 1.22–5.67) or healthcare (aOR 2.31, 1.29–4.13), live with someone self-isolating due to contact with a case (aOR 3.07, 2.03–4.62), visit a pub (aOR 2.87, 1.11–7.37) and smoke or vape (aOR 1.54, 1.02–2.32). In this community, and at this point in the epidemic, reducing transmission from a household contact who is self-isolating would have the biggest public health impact (population-attributable fraction: 0.2). As restrictions on social mixing are relaxed, hospitality venues will become of greater public health importance, and those working in this sector should be adequately protected. Smoking or vaping may be an important modifiable risk factor.
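For readers unfamiliar with the population-attributable fraction quoted above, a common approximation for case-control data is Miettinen's formula, PAF = p_c (OR − 1)/OR, with p_c the proportion of cases exposed and the adjusted OR standing in for the relative risk. A minimal sketch (the 30% exposure figure is purely illustrative, not taken from the study):

# Population-attributable fraction via Miettinen's formula, PAF = p_c * (OR - 1) / OR,
# where p_c is the proportion of cases exposed and the adjusted OR approximates the relative risk.
def paf(p_cases_exposed: float, adjusted_or: float) -> float:
    return p_cases_exposed * (adjusted_or - 1.0) / adjusted_or

# Illustrative values only: if ~30% of cases lived with a self-isolating contact and the
# aOR is 3.07, the PAF is about 0.2.
print(round(paf(0.30, 3.07), 2))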
The COVID-19 epidemic showed inter-regional differences in Italy. We used an ecological study design and publicly available data to compare, by region, the basic reproduction number (R0), the doubling time of the infection (DT) and the COVID-19 cumulative incidence (CI), death rate, case fatality rate (CFR) and time lag to slow down to a 50-day doubling time in the first and the second 2020 epidemic waves (δDT50). We also explored socio-economic, environmental and lifestyle variables with multiple regression analysis. COVID-19 CI and CFR changed in opposite directions in the second vs. the first wave: the CI increased sixfold, with no evidence of a relationship with the testing rate; the CFR decreased in the regions where it was initially higher but increased where it was lower. The R0 did not change; the regions that were initially mildly affected, but not those the first wave had hit most severely, showed a greater δDT50 amplitude. Vehicular traffic, average temperature, population density, average income, education and household size showed a correlation with COVID-19 outcomes. The deadly experience in the first epidemic wave and the varying preparedness of the local health systems might have contributed to the inter-regional differences in the second COVID-19 epidemic wave.
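As background for the doubling-time metric used above, DT is usually obtained from an exponential-growth fit, DT = ln 2 / r. A minimal sketch on synthetic counts (not the study's data or code):

import numpy as np

# Doubling time from an exponential-growth fit: C(t) = C(0) * exp(r * t), DT = ln(2) / r.
# Synthetic daily cumulative case counts, for illustration only.
days = np.arange(30)
cum_cases = 50 * np.exp(0.05 * days) * np.random.default_rng(1).normal(1, 0.02, 30)

r = np.polyfit(days, np.log(cum_cases), 1)[0]   # growth rate from a log-linear fit
doubling_time = np.log(2) / r
print(f"estimated doubling time: {doubling_time:.1f} days")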
The impact of influenza and pneumonia on individuals in clinical risk groups in England has not previously been well characterized. Using nationally representative linked databases (Clinical Practice Research Datalink (CPRD), Hospital Episode Statistics (HES) and Office for National Statistics (ONS)), we conducted a retrospective cohort study among adults (≥18 years) during the 2010/2011–2019/2020 influenza seasons to estimate the incidence of influenza- and pneumonia-diagnosed medical events (general practitioner (GP) diagnoses, hospitalisations and deaths), stratified by age and risk conditions. The study population included a seasonal average of 7.2 million individuals; approximately 32% had ≥1 risk condition, 42% of whom received seasonal influenza vaccines. Medical event incidence rates increased with age, with ~1% of adults aged ≥75 years hospitalised for influenza/pneumonia annually. Among individuals with vs. without risk conditions, GP diagnoses occurred 2–5-fold more frequently and hospitalisations were 7–10-fold more common. Among those with obesity, respiratory, kidney or cardiovascular disorders, hospitalisations were 5–40-fold more common than in individuals with no risk conditions. Though these findings likely underestimate the full burden of influenza, they emphasize the concentration of disease burden in specific age and risk groups and support existing recommendations for influenza vaccination.
This chapter gives a brief overview of Bayesian hypothesis testing. We first describe a standard Bayesian analysis of a single binomial response, going through the choice of prior distribution and explaining how the posterior is calculated. We then discuss Bayesian hypothesis testing using the Bayes factor, a measure of how much the posterior odds of believing in one hypothesis change from the prior odds. We show, using a binomial example, how the Bayes factor may be highly dependent on the prior distribution, even with extremely large sample sizes. We next discuss Bayesian hypothesis testing using decision theory, reviewing the intrinsic discrepancy of Bernardo, as well as the loss functions proposed by Freedman. Freedman’s loss functions allow the posterior belief in the null hypothesis to equal the p-value. Finally, we discuss well-calibrated null preferences priors, which, applied to parameters from the natural exponential family (binomial, negative binomial, Poisson, normal), also give a posterior belief in the null hypothesis equal to valid one-sided p-values, and give credible intervals equal to valid confidence intervals.
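A minimal sketch of the prior sensitivity discussed above, for a binomial point null H0: p = 0.5 against a Beta(a, b) prior under H1 (the data and priors are illustrative, not the chapter's example):

from scipy import stats

# Bayes factor for a binomial point null H0: p = p0 against H1: p ~ Beta(a, b).
# BF01 = P(x | H0) / P(x | H1), where the marginal under H1 is beta-binomial.
def bayes_factor_01(x, n, p0=0.5, a=1.0, b=1.0):
    like_h0 = stats.binom.pmf(x, n, p0)
    like_h1 = stats.betabinom.pmf(x, n, a, b)
    return like_h0 / like_h1

# The same large-sample data (51% successes in 10,000 trials) can favour H0, be roughly
# neutral, or lie in between depending on the Beta prior chosen under H1.
n, x = 10_000, 5_100
for a, b in [(1, 1), (10, 10), (100, 100)]:
    print((a, b), round(bayes_factor_01(x, n, a=a, b=b), 2))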
In this chapter, we study risks associated with movements of interest rates in financial markets. We begin with a brief discussion of the term structure of interest rates. We then discuss commonly used interest rate sensitive securities. This is followed by the study of different measures of sensitivity to interest rates, including duration and convexity. We consider mitigating interest rate risk through hedging and immunization. Finally, we take a more in-depth look at the drivers of interest rate term structure dynamics.
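As a small illustration of the sensitivity measures mentioned above, Macaulay duration, modified duration and convexity can be computed directly from a bond's cash flows; the coupon, yield and maturity below are arbitrary assumptions, not examples from the chapter:

import numpy as np

# Macaulay duration, modified duration and convexity of a fixed-coupon bond, from first
# principles, with a flat yield compounded freq times per year.
def bond_sensitivities(coupon, face, maturity, yield_rate, freq=1):
    t = np.arange(1, maturity * freq + 1) / freq                # payment times in years
    cf = np.full(t.shape, coupon * face / freq)                 # coupon payments
    cf[-1] += face                                              # redemption at maturity
    disc = (1 + yield_rate / freq) ** (-t * freq)               # discount factors
    price = np.sum(cf * disc)
    macaulay = np.sum(t * cf * disc) / price
    modified = macaulay / (1 + yield_rate / freq)
    convexity = np.sum(t * (t + 1 / freq) * cf * disc) / (price * (1 + yield_rate / freq) ** 2)
    return price, macaulay, modified, convexity

print(bond_sensitivities(coupon=0.05, face=100, maturity=10, yield_rate=0.04, freq=2))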
The objective of this chapter is to extend the ad hoc least squares method of somewhat arbitrarily selected base functions to a more generic method applicable to a broad range of functions – the Fourier series, which is an expansion of a relatively arbitrary function (with certain smoothness requirements and at worst finitely many jumps) in a series of sinusoidal functions. An important mathematical reason for using the Fourier series is its “completeness” and almost guaranteed convergence. Here “completeness” means that the error goes to zero when the whole Fourier series, with infinitely many base functions, is used. In other words, the selected sinusoidal base functions are sufficient to combine linearly into a function that converges to an arbitrary continuous function. This chapter on Fourier series lays out a foundation that leads to the Fourier Transform and spectrum analysis; in this sense, it provides important background information and theoretical preparation.
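For reference, the expansion the chapter develops can be written, for a function of period 2L, in the standard form (notation may differ from the chapter's):

\[
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right],
\qquad
a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx,
\quad
b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx .
\]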
The chapter addresses testing when using models. We review linear models, generalized linear models, and proportional odds models, including issues such as checking model assumptions and separation (e.g., when one covariate completely predicts a binary response). We discuss the Neyman–Scott problem, that is, when the estimate of a fixed parameter can be biased because the number of nuisance parameters grows with the sample size. With clustered data, we compare mixed effects models and marginal models, pointing out that for logistic regression and other models the fixed effect estimands are different in the two types of models. We present simulations showing that testing many effects in a model may be interpreted as a multiple testing situation, and that adjustments should often be made in that case. We discuss model selection using methods such as Akaike’s information criterion, the lasso, and cross-validation. We compare different model selection processes and their effect on the Type I error rate for a parameter from the final chosen model.
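As a small illustration of information-criterion-based model selection mentioned above (synthetic data, not from the chapter), nested linear models can be compared by AIC:

import numpy as np
import statsmodels.api as sm

# Synthetic data: y depends on x1 and x2; x3 is pure noise. Compare nested models by AIC.
rng = np.random.default_rng(2)
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 0.8 * x1 + 0.3 * x2 + rng.normal(scale=1.0, size=n)

for cols in ([x1], [x1, x2], [x1, x2, x3]):
    X = sm.add_constant(np.column_stack(cols))
    fit = sm.OLS(y, X).fit()
    print(len(cols), "predictors, AIC =", round(fit.aic, 1))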
The objective of this chapter is to present some important relations between the Fourier Transform and correlation functions. It turns out that the cross-correlation function and the autocorrelation function have useful relationships to the Fourier Transforms and power spectra of the individual functions. As a result, the cospectrum and coherence (a normalized statistical correlation in the frequency domain) can be defined.
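In a common convention (the chapter's notation may differ), the relations referred to above are the Wiener–Khinchin theorem and the squared coherence:

\[
S_{xx}(\omega) = \int_{-\infty}^{\infty} R_{xx}(\tau)\, e^{-i\omega\tau}\, d\tau,
\qquad
S_{xy}(\omega) = \int_{-\infty}^{\infty} R_{xy}(\tau)\, e^{-i\omega\tau}\, d\tau,
\qquad
\gamma_{xy}^{2}(\omega) = \frac{|S_{xy}(\omega)|^{2}}{S_{xx}(\omega)\, S_{yy}(\omega)} .
\]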
In this chapter, we present the frequency-severity model, which is implicit in common risk calculations used in practice. In this model, the total loss from a risk, or set of risks, is treated as a random sum of random, identically distributed individual losses. If the frequency and severity random variables are independent, then the mean and variance of the aggregate loss can easily be calculated from the moments of the frequency and severity distributions. However, numerical methods are usually required for other metrics, such as quantiles or expected shortfall. We show how to implement these methods and discuss the limitations of this type of model, arising from the independence assumptions.
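Under the independence assumption stated above, the aggregate loss S = X_1 + ... + X_N satisfies E[S] = E[N]E[X] and Var(S) = E[N]Var(X) + Var(N)(E[X])^2, while quantiles and expected shortfall are conveniently obtained by simulation. A minimal Monte Carlo sketch with an illustrative Poisson frequency and lognormal severity (not the chapter's example):

import numpy as np

# Frequency-severity (compound) model: N ~ Poisson, X ~ lognormal, all independent.
rng = np.random.default_rng(3)
n_sims, lam, mu, sigma = 100_000, 2.0, 0.0, 1.0

freq = rng.poisson(lam, n_sims)
agg = np.array([rng.lognormal(mu, sigma, n).sum() for n in freq])

# Moment identities: E[S] = E[N] E[X],  Var(S) = E[N] Var(X) + Var(N) (E[X])^2
ex, varx = np.exp(mu + sigma**2 / 2), (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
print("simulated mean:", agg.mean(), "theory:", lam * ex)
print("simulated var: ", agg.var(), "theory:", lam * varx + lam * ex**2)
print("99% quantile:", np.quantile(agg, 0.99),
      "99% expected shortfall:", agg[agg >= np.quantile(agg, 0.99)].mean())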
The analyses we have discussed in previous chapters include the use of base functions, such as sinusoidal functions with specified frequencies, i.e. harmonic analysis; sinusoidal base functions with a frequency range from 0 to the Nyquist frequency, with an interval inversely proportional to the total length of time of the data, i.e. Fourier analysis; and wavelet base functions for wavelet analysis. These base functions, however, are chosen regardless of the nature of the variability of the data themselves. In this chapter, we will discuss a different method, in which the base functions are determined empirically, that is, they depend on the nature of the data. In other words, this method finds the base functions from the data, and these base functions describe the nature of the data. The method is applicable to many types of data, especially time series data at multiple locations, e.g. a sequence of weather maps or satellite images. There are several variants of the method, but here we will only provide an introduction to the basics.
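The data-determined base functions described here are commonly extracted from a space-time data matrix with the singular value decomposition; a minimal sketch on synthetic data (the chapter may use different notation or software):

import numpy as np

# Derive empirical (data-dependent) base functions from a space-time data matrix via the SVD.
rng = np.random.default_rng(4)
n_time, n_space = 120, 50
t = np.arange(n_time)
pattern = np.sin(np.linspace(0, np.pi, n_space))             # one dominant spatial pattern
data = np.outer(np.cos(2 * np.pi * t / 12), pattern) + 0.3 * rng.normal(size=(n_time, n_space))

anomaly = data - data.mean(axis=0)                           # remove the time mean at each location
U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
explained = s**2 / np.sum(s**2)                              # fraction of variance per mode

spatial_modes = Vt          # rows: empirical spatial base functions
time_coeffs = U * s         # columns: corresponding time-varying amplitudes
print("variance explained by the first three modes:", np.round(explained[:3], 3))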
This chapter introduces the fast Fourier Transform (FFT) for the discrete Fourier Transform, beginning with the discretization of the Fourier Transform into its digital expression with constant time intervals. When the integral in the Fourier Transform is replaced by a summation, the continuous Fourier Transform becomes discrete. The discrete Fourier Transform and its inverse are exact relations. An example of the discrete Fourier Transform is discussed for a simple rectangular window function, which results in the sinc function, useful for interpreting the effect of finite sampling. A technique of zero-padding is introduced with the discrete Fourier Transform for better visualization of the spectrum. The direct computation of the discrete Fourier Transform of a long time series can be quite “labor intensive”, or costly in computer time. However, since the base functions are periodic, a direct computation contains many duplicated multiplications of terms. Algorithms can be designed to reduce the duplications so that the speed of computation is increased, and this reduction of duplicated computations can be applied repeatedly in an FFT algorithm. In MATLAB, this is done with the simple command fft. The efficiency of the FFT is discussed.
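The chapter's examples use MATLAB's fft command; an equivalent sketch in NumPy, including the zero-padding technique mentioned above (the signal and sampling interval are illustrative assumptions):

import numpy as np

# Discrete Fourier Transform of a short record via the FFT, with zero-padding to refine
# the frequency grid for plotting.
dt = 0.1                                    # constant sampling interval (s)
t = np.arange(0, 12.8, dt)                  # 128 samples
x = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 3.2 * t)

X = np.fft.rfft(x)                          # FFT of the original record
f = np.fft.rfftfreq(len(x), dt)

Xp = np.fft.rfft(x, n=8 * len(x))           # zero-padded to 8x the length: finer frequency grid,
fp = np.fft.rfftfreq(8 * len(x), dt)        # no new information, just better visualization

print("frequency spacing without / with zero-padding:", f[1], fp[1])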
This chapter covers paired data, such as comparing responses before and after a treatment in one group of individuals. The sign test (also called the exact McNemar’s test when responses are binary) is compared to a median test on the differences of responses within pairs, and we show that the sign test is often more appropriate. We give confidence intervals compatible with the sign test. We discuss parameters associated with the Wilcoxon signed-rank test (often more powerful than the sign test) and the assumptions needed to give associated confidence intervals. When we can assume a symmetric distribution for the differences within pairs, the t-test is another option, and we discuss asymptotic relative efficiency for choosing between the t-test and the Wilcoxon signed-rank test. We compare parameterizing the treatment effect as differences or ratios. We discuss tests using Pearson’s and Spearman’s correlation and Kendall’s tau, and present confidence intervals assuming normality. When the paired data represent different assays or raters, agreement coefficients are needed (e.g., Cohen’s kappa or Lin’s concordance correlation coefficient).
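A minimal sketch of three of the paired tests discussed above on synthetic before/after data: the sign test via a binomial test on positive differences, the Wilcoxon signed-rank test, and the paired t-test (the data are illustrative, not the chapter's):

import numpy as np
from scipy import stats

# Paired before/after responses in one group of individuals (synthetic).
rng = np.random.default_rng(5)
before = rng.normal(10, 2, 30)
after = before + rng.normal(0.8, 1.5, 30)          # treatment shifts responses upward
diff = after - before

# Sign test: binomial test on the number of positive differences (zero differences dropped)
nonzero = diff[diff != 0]
sign_p = stats.binomtest((nonzero > 0).sum(), n=len(nonzero), p=0.5).pvalue

wilcoxon_p = stats.wilcoxon(after, before).pvalue   # signed-rank test
ttest_p = stats.ttest_rel(after, before).pvalue     # paired t-test (assumes symmetry/normality)

print(f"sign test p={sign_p:.3f}, Wilcoxon p={wilcoxon_p:.3f}, paired t p={ttest_p:.3f}")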
In this chapter, we consider qualitative and quantitative aspects of risk related to the development, implementation, and uses of quantitative models in enterprise risk management (ERM). First, we discuss the different ways that model risk arises, including defective models, inappropriate applications, and inadequate or inappropriate interpretation of the results. We consider the lifecycle of a model – from development, through regular updating and revision, to the decommissioning stage. We review quantitative approaches to measuring model and parameter uncertainty, based on a Bayesian framework. Finally, we discuss some aspects of model governance, and some potential methods for mitigating model risk.
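As a toy illustration of parameter uncertainty in a Bayesian framework (not the chapter's example), compare a predictive distribution that plugs in a point estimate with one that integrates over the posterior; the Poisson-Gamma setup and all numbers below are assumptions:

import numpy as np

# Claim counts ~ Poisson(theta) with a conjugate Gamma prior on theta (rate parameterization).
rng = np.random.default_rng(6)
claims = rng.poisson(4.0, size=10)                 # a short synthetic claims history

a0, b0 = 2.0, 0.5                                  # Gamma(a0, b0) prior
a_post, b_post = a0 + claims.sum(), b0 + len(claims)

theta_hat = a_post / b_post
plug_in = rng.poisson(theta_hat, 100_000)                       # ignores parameter uncertainty
full_bayes = rng.poisson(rng.gamma(a_post, 1 / b_post, 100_000))  # integrates over the posterior

print("plug-in predictive sd:     ", plug_in.std())
print("full Bayesian predictive sd:", full_bayes.std())   # wider: reflects parameter uncertainty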
This chapter defines statistical hypothesis tests mathematically. Those tests assume two sets of probability models, called the null and alternative hypotheses. A decision rule is a function that depends on the data and a specified α-level and determines whether or not to reject the null hypothesis. We define concepts related to properties of hypothesis tests such as Type I and II error rates, validity, size, power, invariance, and robustness. The definitions are general but are explained with examples such as testing a binomial parameter, or Wilcoxon–Mann–Whitney tests. P-values are defined as the smallest α-level for observed data for which we would reject the null at that level and all larger levels. Confidence sets and confidence intervals are defined in relation to a series of hypothesis tests with changing null hypotheses. Compatibility between p-value functions and confidence intervals is defined, and an example with Fisher’s exact test shows that compatibility is not always present for some common tests.
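In symbols, a standard formulation consistent with these definitions (not quoted from the chapter): the p-value is the smallest α-level at which the observed data reject, and a confidence set is obtained by inverting a family of tests,

\[
p(x) = \inf\{\alpha : \text{the level-}\alpha\text{ test rejects } H_0 \text{ given } x\},
\qquad
C_{1-\alpha}(x) = \{\theta_0 : \text{the level-}\alpha\text{ test of } H_0\!: \theta = \theta_0 \text{ does not reject given } x\},
\]

with C_{1-α}(x) then a 100(1 − α)% confidence set for θ.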