In this study, we consider option pricing under a Markov regime-switching GARCH-jump (RS-GARCH-jump) model. More specifically, we derive the risk-neutral dynamics and propose a lattice algorithm to price European and American options in this framework. We also provide a method of parameter estimation in our RS-GARCH-jump setting using historical data on the underlying time series. To measure the pricing performance of the proposed algorithm, we investigate the convergence of the tree-based results to the true option values and show that the algorithm exhibits good convergence. By comparing the pricing results of the RS-GARCH-jump model with those of the regime-switching GARCH (RS-GARCH), GARCH-jump, GARCH, Black–Scholes (BS), and regime-switching (RS) models, we show that accommodating jump effects and regime switching substantially changes the option prices. The empirical results also show that the RS-GARCH-jump model performs well in explaining option prices and confirm the importance of allowing for both jump components and regime switching.
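As a toy illustration of the kind of dynamics such a model describes — not the paper's algorithm or calibration; the two-regime setup, parameter names, and all numerical values below are invented for the sketch — a Markov-switching GARCH(1,1) return path with normally distributed jumps can be simulated as:

```python
import math
import random

def simulate_rs_garch_jump(T, P, omega, alpha, beta,
                           jump_prob, jump_mu, jump_sigma, seed=0):
    """Simulate T log-returns from a two-regime Markov-switching GARCH(1,1)
    with i.i.d. normal jumps.  All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    s = 0                                        # current regime
    h = omega[s] / (1 - alpha[s] - beta[s])      # start at stationary variance
    returns = []
    for _ in range(T):
        # regime transition according to the Markov chain P
        s = 0 if rng.random() < P[s][0] else 1
        z = rng.gauss(0.0, 1.0)
        jump = rng.gauss(jump_mu, jump_sigma) if rng.random() < jump_prob else 0.0
        r = math.sqrt(h) * z + jump
        returns.append(r)
        # GARCH(1,1) variance update using the current regime's parameters
        h = omega[s] + alpha[s] * r * r + beta[s] * h
    return returns

# calm regime 0 (persistent, low variance) vs. turbulent regime 1
P = [[0.95, 0.05], [0.10, 0.90]]
rets = simulate_rs_garch_jump(1000, P,
                              omega=(0.01, 0.05), alpha=(0.05, 0.10),
                              beta=(0.90, 0.85), jump_prob=0.02,
                              jump_mu=-0.5, jump_sigma=1.0)
```

Risk-neutral pricing on a lattice would additionally require the measure change the paper derives; the sketch only shows the physical-measure dynamics.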
The Republic of Ireland (ROI) currently reports the highest incidence rates of Shiga-toxin producing Escherichia coli (STEC) enteritis and cryptosporidiosis in Europe, with the spatial distribution of both infections exhibiting a clear urban/rural divide. To date, no investigation of the role of socio-demographic profile on the incidence of either infection in the ROI has been undertaken. The current study employed bivariate analyses and Random Forest classification to identify associations between individual components of a national deprivation index and spatially aggregated cases of STEC enteritis and cryptosporidiosis. Classification accuracies ranged from 78.2% (STEC, urban) to 90.6% (cryptosporidiosis, rural). STEC incidence was negatively associated with the mean number of persons per room and the percentage of local authority housing in both urban and rural areas, in addition to lower levels of education in rural areas, while lower unemployment rates were associated with both infections, irrespective of settlement type. Lower levels of third-level education were associated with cryptosporidiosis in rural areas only. This study highlights settlement-specific disparities with respect to education, unemployment and household composition, associated with the incidence of enteric infection. Study findings may be employed for improved risk communication and surveillance to safeguard public health across socio-demographic profiles.
We extend the Annually Recalculated Virtual Annuity (ARVA) spending rule for retirement savings decumulation (Waring and Siegel (2015) Financial Analysts Journal, 71(1), 91–107) to include a cap and a floor on withdrawals. With a minimum withdrawal constraint, the ARVA strategy runs the risk of depleting the investment portfolio. We determine the dynamic asset allocation strategy which maximizes a weighted combination of expected total withdrawals (EW) and expected shortfall (ES), defined as the average of the worst 5% of the outcomes of real terminal wealth. We compare the performance of our dynamic strategy to simpler alternatives which maintain constant asset allocation weights over time accompanied by either our same modified ARVA spending rule or withdrawals that are constant over time in real terms. Tests are carried out using both a parametric model of historical asset returns as well as bootstrap resampling of historical data. Consistent with previous literature that has used different measures of reward and risk than EW and ES, we find that allowing some variability in withdrawals leads to large improvements in efficiency. However, unlike the prior literature, we also demonstrate that further significant enhancements are possible through incorporating a dynamic asset allocation strategy rather than simply keeping asset allocation weights constant throughout retirement.
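The cap-and-floor modification of the ARVA rule can be sketched in a few lines (illustrative only: the parameter values, the 1% real rate, and the function names are assumptions, not the paper's calibration):

```python
def annuity_factor(years_left, real_rate):
    """Present value of 1 unit of real income per year over the remaining horizon."""
    r = real_rate
    return years_left if r == 0 else (1 - (1 + r) ** -years_left) / r

def arva_withdrawal(wealth, years_left, real_rate, floor, cap):
    """ARVA amount (wealth spread over the remaining horizon), clamped to
    [floor, cap] and never exceeding the portfolio itself."""
    base = wealth / annuity_factor(years_left, real_rate)
    return min(max(base, floor), cap, wealth)

# e.g. a 30-year remaining horizon, $1m portfolio, 1% real rate
w = arva_withdrawal(1_000_000, 30, 0.01, floor=30_000, cap=60_000)
```

As the abstract notes, the floor is what creates ruin risk: when `base` falls below `floor`, the rule withdraws more than the horizon-adjusted wealth can sustain, so the portfolio can be depleted.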
A set S of permutations is forcing if for any sequence $\{\Pi_i\}_{i \in \mathbb{N}}$ of permutations where the density $d(\pi,\Pi_i)$ converges to $\frac{1}{|\pi|!}$ for every permutation $\pi \in S$, it holds that $\{\Pi_i\}_{i \in \mathbb{N}}$ is quasirandom. Graham asked whether there exists an integer k such that the set of all permutations of order k is forcing; this has been shown to be true for any $k\ge 4$. In particular, the set of all 24 permutations of order 4 is forcing. We provide the first non-trivial lower bound on the size of a forcing set of permutations: every forcing set of permutations (with arbitrary orders) contains at least four permutations.
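For readers unfamiliar with the notation, the pattern density appearing above is the standard one: for a permutation $\pi$ of order $k$ and $\Pi$ of order $N \ge k$,

```latex
d(\pi,\Pi) \;=\; \binom{N}{k}^{-1}\,
\#\bigl\{\, S \subseteq [N] \;:\; |S| = k,\ \Pi \text{ restricted to the positions in } S
\text{ is order-isomorphic to } \pi \,\bigr\},
```

so a sequence of permutations is quasirandom when every pattern of order $k$ appears with the limiting frequency $1/k!$ expected of a uniformly random permutation.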
The experiment investigated the effects of dietary ascorbic acid and betaine on stress responses, serum testosterone levels, and some sexual traits in male Japanese quails during the dry season. A total of 240 male Japanese quails (14 days old) were randomly assigned to four groups, each with three replicates (n = 20). Birds in the treatment groups were fed diets supplemented with ascorbic acid (AA), betaine (BET), or AA + BET, whereas the control birds were fed only the basal diet. Environmental conditions were predominantly outside the thermoneutral zone for Japanese quails. Dietary AA, with or without BET, increased (p < .05) serum catalase, reduced glutathione, and testosterone levels, but lowered (p < .05) cortisol levels compared with the control group. Supplemental AA, BET, or AA + BET enhanced (p < .05) cloacal gland size and sexual traits. In conclusion, dietary AA and BET improved stress responses, serum testosterone levels, and some sexual traits in male Japanese quails during the dry season.
Let $\{Y_{1},\ldots ,Y_{n}\}$ be a collection of interdependent nonnegative random variables, with $Y_{i}$ having an exponentiated location-scale model with location parameter $\mu _i$, scale parameter $\delta _i$ and shape (skewness) parameter $\beta _i$, for $i\in \mathbb {I}_{n}=\{1,\ldots ,n\}$. Furthermore, let $\{L_1^{*},\ldots ,L_n^{*}\}$ be a set of independent Bernoulli random variables, independent of the $Y_{i}$'s, with $E(L_{i}^{*})=p_{i}^{*}$, for $i\in \mathbb {I}_{n}.$ Under this setup, the portfolio of risks is the collection $\{T_{1}^{*}=L_{1}^{*}Y_{1},\ldots ,T_{n}^{*}=L_{n}^{*}Y_{n}\}$, wherein $T_{i}^{*}=L_{i}^{*}Y_{i}$ represents the $i$th claim amount. This article then presents several sufficient conditions, under which the smallest claim amounts are compared in terms of the usual stochastic and hazard rate orders. The comparison results are obtained when the dependence structure among the claim severities is modeled by (i) an Archimedean survival copula and (ii) a general survival copula. Several examples are also presented to illustrate the established results.
This paper discusses the use of modelling techniques for the purpose of risk management within life insurers. The key theme of the paper is that life insurance is long-term business and carries with it long-term risks, yet much of modern actuarial risk management is focussed on short-term modelling approaches. These typically include the use of copula simulation models within a 1-year Value-at-Risk (VaR) framework. The paper discusses the limitations inherent within the techniques currently used in the UK and discusses how the focus of the next generation of actuarial models may be on long-term stochastic projections. The scope of the paper includes a discussion of how existing techniques, together with new approaches, may be used to develop such models and the benefits this can bring. The paper concludes with a practical example of how a long-term stochastic risk model may be implemented.
This paper introduces and demonstrates the use of quantum computers for asset–liability management (ALM). A summary of historical and current practices in ALM used by actuaries is given showing how the challenges have previously been met. We give an insight into what ALM may be like in the immediate future demonstrating how quantum computers can be used for ALM. A quantum algorithm for optimising ALM calculations is presented and tested using a quantum computer. We conclude that the discovery of the strange world of quantum mechanics has the potential to create investment management efficiencies. This in turn may lead to lower capital requirements for shareholders and lower premiums and higher insured retirement incomes for policyholders.
We prove an analogue of Alon’s spectral gap conjecture for random bipartite, biregular graphs. We use the Ihara–Bass formula to connect the non-backtracking spectrum to that of the adjacency matrix, employing the moment method to show there exists a spectral gap for the non-backtracking matrix. A by-product of our main theorem is that random rectangular zero-one matrices with fixed row and column sums are full rank with high probability. Finally, we illustrate applications to community detection, coding theory, and deterministic matrix completion.
The feasibility of non-pharmacological public health interventions (NPIs) such as physical distancing or isolation at home to prevent severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission in low-resource countries is unknown. Household survey data from 54 African countries were used to investigate the feasibility of SARS-CoV-2 NPIs in low-resource settings. Across the 54 countries, approximately 718 million people lived in households with ⩾6 individuals at home (median percentage of at-risk households 56% (95% confidence interval (CI), 51% to 60%)). Approximately 283 million people lived in households where ⩾3 people slept in a single room (median percentage of at-risk households 15% (95% CI, 13% to 19%)). An estimated 890 million Africans lacked on-site water (71% (95% CI, 62% to 80%)), while 700 million people lacked in-home soap/washing facilities (56% (95% CI, 42% to 73%)). The median percentage of people without a refrigerator in the home was 79% (95% CI, 67% to 88%), while 45% (95% CI, 39% to 52%) shared toilet facilities with other households. Individuals in low-resource settings face substantial obstacles to implementing NPIs for mitigating SARS-CoV-2 transmission. These populations urgently need to be prioritised for coronavirus disease 2019 vaccination to prevent disease and to contain the global pandemic.
We revisit in-sample asymptotic analysis extensively used in the realized volatility literature. We show that there are gains to be made in estimating current realized volatility from considering realizations in prior periods. The weighting schemes also relate to Kalman-Bucy filters, although our approach is non-Gaussian and model-free. We derive theoretical results for a broad class of processes pertaining to volatility, higher moments, and leverage. The paper also contains a Monte Carlo simulation study showing the benefits of across-sample combinations.
Self-instigated isolation is heavily relied on to curb severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission. Accounting for uncertainty in the latent and prepatent periods, as well as the proportion of infections that remain asymptomatic, the limits of this intervention at different phases of infection resurgence are estimated. We show that by October 2020, SARS-CoV-2 transmission rates in England had already begun exceeding levels that could be interrupted using this intervention alone, lending support to the second national lockdown on 5th November 2020.
We present a mathematical model for simulating the development of an outbreak of coronavirus disease 2019 (COVID-19) in a slum area under different interventions. Instead of representing interventions as modulations of the parameters of a free-running epidemic, we introduce a model structure that accounts for the actions but does not assume the results. The disease is modelled in terms of the progression of viraemia reported in scientific studies. The emergence of symptoms in the model reflects the statistics of a nation-wide, highly detailed database consisting of more than 62 000 cases (about half of them confirmed by reverse transcription-polymerase chain reaction tests) with recorded symptoms in Argentina. The stochastic model displays several of the characteristics of COVID-19, such as high variability in the evolution of outbreaks, including long periods in which they run undetected, spontaneous extinction followed by a late outbreak, and unimodal as well as bimodal progressions of daily case counts (second waves without ad hoc hypotheses). We show how the relation between undetected cases (including ‘asymptomatic’ cases) and detected cases changes as a function of public policies, the efficiency of their implementation, and the timing with respect to the development of the outbreak. We also show that the relation between detected cases and total cases strongly depends on the implemented policies, and that detected cases cannot be regarded as a measure of the outbreak, since the dependency between total cases and detected cases is, in general, not monotonic in the efficiency of the intervention method. According to the model, an outbreak can be controlled by interventions based on symptom detection only when the presence of a single symptom prompts isolation and the detection efficiency reaches about 80% of cases. Requiring two symptoms to trigger the intervention can be enough for the strategy to fail.
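The threshold behaviour described in the last two sentences can be illustrated with a deliberately minimal branching-process caricature — not the paper's viraemia-based model; R0 = 2.5, the cap, and the seed counts are all assumptions. A case detected (with probability equal to the detection efficiency) is isolated before transmitting, so the effective reproduction number is R0 × (1 − efficiency), and 80% efficiency pushes R0 = 2.5 well below the epidemic threshold of 1:

```python
import math
import random

def poisson(lam, rng):
    """Sample Poisson(lam) via Knuth's multiplication method."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def outbreak_size(R0, detection_efficiency, seed, max_cases=10_000):
    """Total cases in a branching process where each *undetected* case infects
    Poisson(R0) others; detected cases are isolated before transmitting."""
    rng = random.Random(seed)
    active, total = 1, 1
    while active and total < max_cases:
        nxt = sum(poisson(R0, rng)
                  for _ in range(active)
                  if rng.random() >= detection_efficiency)
        active = nxt
        total += nxt
    return total
```

With `detection_efficiency = 0.8` the process is subcritical (effective R ≈ 0.5) and outbreaks fizzle; with no detection, most runs grow until the cap, mirroring the undetected-spread regimes the abstract describes.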
About 800 foodborne disease outbreaks are reported in the United States annually. Few are associated with food recalls. We compared 226 outbreaks associated with food recalls with those not associated with recalls during 2006–2016. Recall-associated outbreaks had, on average, more illnesses per outbreak and higher proportions of hospitalisations and deaths than non-recall-associated outbreaks. The top confirmed aetiology for recall-associated outbreaks was Salmonella. Pasteurised and unpasteurised dairy products, beef and molluscs were the most frequently implicated foods. The most common pathogen-food pairs for outbreaks with recalls were Escherichia coli-beef and norovirus-molluscs; the top pairs for non-recall-associated outbreaks were scombrotoxin-fish and ciguatoxin-fish. For outbreaks with recalls, 48% of the recalls occurred after the outbreak, 27% during the outbreak, 3% before the outbreak, and 22% were inconclusive or had unknown recall timing. Fifty per cent of recall-associated outbreaks were multistate, compared with 2% of non-recall-associated outbreaks. The differences between recall-associated outbreaks and non-recall-associated outbreaks help define the types of outbreaks and food vehicles that are likely to have a recall. Improved outbreak vehicle identification and traceability of rarely recalled foods could lead to more recalls of these products, resulting in fewer illnesses and deaths.
We develop a theory of graph algebras over general fields. This is modelled after the theory developed by Freedman et al. (2007, J. Amer. Math. Soc. 20, 37–51) for connection matrices, in the study of graph homomorphism functions with real edge weights and positive vertex weights. We introduce connection tensors for graph properties. This notion naturally generalizes the concept of connection matrices. It is shown that counting perfect matchings, and a host of other graph properties naturally defined as Holant problems (edge models), cannot be expressed by graph homomorphism functions with both complex vertex and edge weights (or even over more general fields). Our necessary and sufficient condition in terms of connection tensors is a simple exponential rank bound. It shows that positive semidefiniteness is not needed in the more general setting.
Estimating the case fatality ratio (CFR) for COVID-19 is an important aspect of public health. However, calculating CFR accurately is problematic early in a novel disease outbreak, due to uncertainties regarding the time course of disease and difficulties in diagnosis and reporting of cases. In this work, we present a simple method for calculating the CFR using only public case and death data over time by exploiting the correspondence between the time distributions of cases and deaths. The time-shifted distribution (TSD) analysis generates two parameters of interest: the delay time between reporting of cases and deaths and the CFR. These parameters converge reliably over time once the exponential growth phase has finished. Analysis is performed for early COVID-19 outbreaks in many countries, and we discuss corrections to CFR values using excess-death and seroprevalence data to estimate the infection fatality ratio (IFR). While CFR values range from 0.2% to 20% in different countries, estimates for IFR are mostly around 0.5–0.8% for countries that experienced moderate outbreaks and 1–3% for severe outbreaks. The simplicity and transparency of TSD analysis enhance its usefulness in characterizing a new disease as well as the state of the health and reporting systems.
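A minimal version of the TSD idea — slide the death curve back in time, pick the delay that best matches a scaled case curve, and read off the CFR as the scaling factor — might look like this (a sketch reconstructed from the abstract's description, not the authors' code):

```python
def tsd_cfr(daily_cases, daily_deaths, max_delay=30):
    """Return (delay, CFR): the shift tau minimizing the squared error between
    deaths[t + tau] and CFR * cases[t], with CFR set by matching totals."""
    best = None
    for tau in range(max_delay + 1):
        cases = daily_cases[:len(daily_cases) - tau]
        deaths = daily_deaths[tau:]
        n = min(len(cases), len(deaths))
        cases, deaths = cases[:n], deaths[:n]
        total_cases = sum(cases)
        if total_cases == 0:
            continue
        cfr = sum(deaths) / total_cases          # ratio implied by this shift
        sse = sum((d - cfr * c) ** 2 for c, d in zip(cases, deaths))
        if best is None or sse < best[0]:
            best = (sse, tau, cfr)
    _, tau, cfr = best
    return tau, cfr
```

On synthetic data where deaths are an exact 2% copy of the case curve shifted by a week, the routine recovers a 7-day delay and a CFR of 0.02; on real data the fit degrades during the exponential growth phase, consistent with the abstract's remark that the parameters converge only once that phase has finished.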
This article discusses the technology of city digital twins (CDTs) and its potential applications in the policymaking context. The article analyzes the history of the development of the concept of digital twins and how it is now being adopted at the city scale. One of the most advanced projects in the field — Virtual Singapore — is discussed in detail to determine the scope of its potential domains of application and highlight challenges associated with it. Concerns related to data privacy, availability, and its applicability for predictive simulations are analyzed, and potential usage of synthetic data is proposed as a way to address these challenges. The authors argue that despite the abundance of urban data, historical data are not always applicable for predicting events for which no data exist, and discuss the potential privacy challenges of using micro-level individual mobility data in CDTs. A task-based approach to urban mobility data generation is proposed in the last section of the article. This approach suggests that city authorities can establish services responsible for asking people to conduct certain activities in an urban environment in order to create data for possible policy interventions for which no useful historical data exist. This approach can help in addressing the challenges associated with the availability of data without raising privacy concerns, as the data generated through this approach will not represent any real individual in society.
We review a combinatorial approach to the Hodge conjecture for Fermat varieties and announce new cases where the conjecture is true. We show the Hodge conjecture for Fermat fourfolds $ {X}_m^4 $ of degree m ≤ 100 coprime to 6, and also prove the conjecture for $ {X}_{21}^n $ and $ {X}_{27}^n $, for all n.