The spatio-temporal dynamics of an outbreak provide important insights to help direct public health resources intended to control transmission. They also provide a focus for detailed epidemiological studies and allow the timing and impact of interventions to be assessed.
A common approach is to aggregate case data to administrative regions. Whilst providing a good visual impression of change over space, this method masks spatial variation and assumes that disease risk is constant across space. Risk factors for COVID-19 (e.g. population density, deprivation and ethnicity) vary from place to place across England so it follows that risk will also vary spatially. Kernel density estimation compares the spatial distribution of cases relative to the underlying population, unfettered by arbitrary geographical boundaries, to produce a continuous estimate of spatially varying risk.
Using test results from healthcare settings in England (Pillar 1 of the UK Government testing strategy) and freely available methods and software, we estimated the spatial and spatio-temporal risk of COVID-19 infection across England for the first 6 months of 2020. Widespread transmission was underway when partial lockdown measures were introduced on 23 March 2020, and the greatest risk was concentrated in large urban areas. The rapid growth phase of the outbreak coincided with multiple introductions to England from the European mainland. The spatio-temporal risk was highly labile throughout.
In terms of controlling transmission, the most important practical application of our results is the accurate identification of areas within regions that may require tailored intervention strategies. We recommend that this approach be absorbed into routine surveillance outputs in England. Further risk characterisation using widespread community testing (Pillar 2) data is needed, as is the increased use of predictive spatial models at fine spatial scales.
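A minimal sketch of the kernel density estimation idea described above, assuming synthetic two-dimensional case and population coordinates (the data, bandwidths and grid here are illustrative choices of ours, not the study's):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic coordinates: population spread widely, cases clustered (hypothetical)
population = rng.normal(0.0, 1.0, size=(2, 2000))
cases = rng.normal(0.5, 0.4, size=(2, 300))

# Kernel density estimates of the cases and of the underlying population
f_cases = gaussian_kde(cases)
f_pop = gaussian_kde(population)

# Continuous relative-risk surface: log-ratio of case density to population
# density, evaluated on a grid and unconstrained by administrative boundaries
xs, ys = np.meshgrid(np.linspace(-2, 2, 50), np.linspace(-2, 2, 50))
grid = np.vstack([xs.ravel(), ys.ravel()])
log_rr = np.log(f_cases(grid)) - np.log(f_pop(grid))
# Risk is elevated (log_rr > 0) where cases are denser than the population baseline
```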
The possibility of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission by fomites or environmental surfaces has been suggested. It is unclear if SARS-CoV-2 can be detected in outdoor public areas. The objective of the current study was to assess the presence of SARS-CoV-2 in environmental samples collected at public playgrounds and water fountains, in a country with high disease prevalence. Environmental samples were collected from six cities in central Israel. Samples were collected from drinking fountains and high-touch recreational equipment at playgrounds. Sterile pre-moistened swabs were used to collect the samples, which were placed in viral transport medium and transferred to the laboratory. Viral detection was achieved by real-time reverse transcriptase–polymerase chain reaction targeting four genes. Forty-three samples were collected from playground equipment and 25 samples from water fountains. Two of the 43 (4.6%) samples from playground equipment and one (4%) sample from a drinking fountain tested positive. It is unclear whether the recovery of viral RNA on outdoor surfaces also indicates the possibility of acquiring the virus. Adherence to environmental and personal hygiene in urban settings seems prudent.
We bound the error for the normal approximation of the number of triangles in the Erdős–Rényi random graph with respect to the Kolmogorov metric. Our bounds match the best available Wasserstein bounds obtained by Barbour et al. [(1989). A central limit theorem for decomposable random variables with applications to random graphs. Journal of Combinatorial Theory, Series B 47: 125–145], resolving a long-standing open problem. The proofs are based on a new variant of the Stein–Tikhomirov method—a combination of Stein's method and characteristic functions introduced by Tikhomirov [(1976). The rate of convergence in the central limit theorem for weakly dependent variables. Vestnik Leningradskogo Universiteta 158–159, 166].
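The normal approximation being quantified above can be illustrated by Monte Carlo; this sketch simulates triangle counts in G(n, p) for arbitrary parameters of our choosing and standardises them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate triangle counts in the Erdős–Rényi graph G(n, p)
n, p, reps = 30, 0.3, 2000
counts = np.empty(reps)
for r in range(reps):
    upper = np.triu((rng.random((n, n)) < p).astype(int), 1)
    adj = upper + upper.T                        # symmetric 0/1 adjacency matrix
    # number of triangles = trace(A^3) / 6
    counts[r] = np.trace(np.linalg.matrix_power(adj, 3)) / 6

# Standardised counts should be approximately standard normal; the paper bounds
# the Kolmogorov distance of exactly this approximation
z = (counts - counts.mean()) / counts.std()
```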
In nonparametric and high-dimensional statistical models, the classical Gauss–Fisher–Le Cam theory of the optimality of maximum likelihood estimators and Bayesian posterior inference does not apply, and new foundations and ideas have been developed in the past several decades. This book gives a coherent account of the statistical theory in infinite-dimensional parameter spaces. The mathematical foundations include self-contained 'mini-courses' on the theory of Gaussian and empirical processes, approximation and wavelet theory, and the basic theory of function spaces. The theory of statistical inference in such models - hypothesis testing, estimation and confidence sets - is presented within the minimax paradigm of decision theory. This includes the basic theory of convolution kernel and projection estimation, but also Bayesian nonparametrics and nonparametric maximum likelihood estimation. In a final chapter the theory of adaptive inference in nonparametric models is developed, including Lepski's method, wavelet thresholding, and adaptive inference for self-similar functions. Winner of the 2017 PROSE Award for Mathematics.
The primary objective of this scholarly work is to develop two estimation procedures – maximum likelihood estimator (MLE) and method of trimmed moments (MTM) – for the mean and variance of lognormal insurance payment severity data sets affected by different loss control mechanisms, for example, truncation (due to deductibles), censoring (due to policy limits), and scaling (due to coinsurance proportions), in the insurance and financial industries. Maximum likelihood estimating equations for both payment-per-payment and payment-per-loss data sets are derived, which can be solved readily by any existing iterative numerical method. The asymptotic distributions of those estimators are established via Fisher information matrices. Further, with the goal of balancing efficiency and robustness and of removing point masses at certain data points, we develop dynamic MTM estimation procedures for lognormal claim severity models for the above-mentioned transformed data scenarios. The asymptotic distributional properties of those MTM estimators, and comparisons with the corresponding MLEs, are established along with extensive simulation studies. Purely for illustrative purposes, numerical examples for 1500 US indemnity losses are provided which illustrate the practical performance of the established results in this paper.
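As a hedged sketch of one ingredient above, the MLE for payment-per-payment (left-truncated) lognormal data can be set up as follows; the deductible, parameter values and optimiser are illustrative choices of ours, not the paper's:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)

# Hypothetical ground truth and deductible (not from the paper)
mu_true, sigma_true, d = 7.0, 1.5, 500.0

# Payment-per-payment data: losses are observed only when they exceed d
losses = rng.lognormal(mu_true, sigma_true, size=20000)
obs = losses[losses > d]

def neg_loglik(theta):
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    # Left-truncated lognormal log-density: log f(x) - log S(d)
    logf = stats.lognorm.logpdf(obs, s=sigma, scale=np.exp(mu))
    logS = stats.lognorm.logsf(d, s=sigma, scale=np.exp(mu))
    return -(logf - logS).sum()

res = optimize.minimize(neg_loglik, x0=[np.log(obs.mean()), 1.0],
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x   # should recover (mu_true, sigma_true) approximately
```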
Fourier analysis can provide policymakers with useful information for analysing pandemic behaviour. This paper proposes a Fourier analysis approach for examining the cycle length and the power spectrum of the pandemic by converting the number of deaths due to coronavirus disease 2019 in the US to the frequency domain. Policymakers can use the observed cycle length to decide whether or not they should strengthen their policies. The proposed Fourier method is also useful for analysing waves in other medical applications.
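The frequency-domain conversion described above amounts to a discrete Fourier transform of the death series; a minimal sketch on a synthetic series with a built-in weekly cycle (the data are simulated, not US figures):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily death counts with a 7-day cycle plus noise (illustrative only)
days = np.arange(364)
deaths = 100 + 30 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 5, days.size)

# Power spectrum of the demeaned series
spec = np.abs(np.fft.rfft(deaths - deaths.mean())) ** 2
freqs = np.fft.rfftfreq(days.size, d=1.0)     # cycles per day

# Cycle length in days = 1 / frequency of the spectral peak (excluding DC)
peak = freqs[np.argmax(spec[1:]) + 1]
cycle_days = 1.0 / peak                        # recovers the 7-day cycle
```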
COVID-19 is causing a significant burden on medical and healthcare resources globally due to the high numbers of hospitalisations and deaths recorded as the pandemic continues. This research aims to assess the effects of climate factors (i.e., daily average temperature and average relative humidity) on the effective reproductive number of the COVID-19 outbreak in Wuhan, China during the early stage of the outbreak. Our research showed that the effective reproductive number of COVID-19 increases by 7.6% (95% Confidence Interval: 5.4%–9.8%) per 1°C drop in mean temperature at a prior moving average of 0–8 days lag in Wuhan, China. Our results indicate that temperature was negatively associated with COVID-19 transmissibility during the early stages of the outbreak in Wuhan, suggesting temperature is likely to affect COVID-19 transmission. These results suggest increased precautions should be taken in the colder seasons to reduce COVID-19 transmission in the future, based on past success in controlling the pandemic in Wuhan, China.
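Taking the reported 7.6% per 1°C effect at face value, and assuming (our assumption, not the paper's) that the effect compounds multiplicatively per degree, the implied scaling of the effective reproductive number is:

```python
# Back-of-envelope scaling of R_t for a temperature drop of `delta_celsius`
# degrees, assuming the reported per-degree effect compounds multiplicatively
def rt_multiplier(delta_celsius, pct_per_degree=0.076):
    return (1 + pct_per_degree) ** delta_celsius

# e.g. a 5 degree drop scales R_t by 1.076**5, roughly 1.44, under this assumption
```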
New computing and communications paradigms will result in traffic loads in information server systems that fluctuate over much broader ranges of time scales than current systems. In addition, these fluctuation time scales may only be indirectly known or even be unknown. However, we should still be able to accurately design and manage such systems. This paper addresses this issue: we consider an M/M/1 queueing system operating in a random environment (denoted M/M/1(R)) that alternates between HIGH and LOW phases, where the load in the HIGH phase is higher than in the LOW phase. Previous work on the performance characteristics of M/M/1(R) systems established fundamental properties of the shape of performance curves. In this paper, we extend monotonicity results to include convexity and concavity properties, provide a partial answer to an open problem on stochastic ordering, develop new computational techniques, and include boundary cases and various degenerate M/M/1(R) systems. Our results are based on novel representations for the mean number in system and the probability of the system being empty. We then apply these results to analyze practical aspects of system operation and design; in particular, we derive the optimal service rate to minimize mean system cost and provide a bias analysis of the use of customer-level sampling to estimate time-stationary quantities.
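A numerical sketch of the M/M/1(R) model above: the generator of the joint queue-length/phase chain is truncated and its stationary distribution solved directly. All rates are illustrative choices of ours; by work conservation, the empty probability should come out as one minus the average offered load:

```python
import numpy as np

# Phase 0 = LOW, phase 1 = HIGH; arrival rates differ by phase (illustrative)
lam, mu = [0.3, 0.8], 1.0
switch = [0.1, 0.1]          # LOW->HIGH and HIGH->LOW switching rates
N = 200                      # truncation level for the queue length

def idx(n, ph):
    return 2 * n + ph

Q = np.zeros((2 * (N + 1), 2 * (N + 1)))
for n in range(N + 1):
    for ph in (0, 1):
        i = idx(n, ph)
        if n < N:
            Q[i, idx(n + 1, ph)] = lam[ph]   # arrival in current phase
        if n > 0:
            Q[i, idx(n - 1, ph)] = mu        # service completion
        Q[i, idx(n, 1 - ph)] = switch[ph]    # environment switch
        Q[i, i] = -Q[i].sum()

# Stationary distribution: pi Q = 0 with pi summing to 1
A = np.vstack([Q.T, np.ones(2 * (N + 1))])
b = np.zeros(2 * (N + 1) + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

ns = np.repeat(np.arange(N + 1), 2)
mean_n = (ns * pi).sum()             # mean number in system
p_empty = pi[0] + pi[1]              # probability the system is empty (~0.45 here)
```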
IFRS 17 Insurance Contracts is a new accounting standard currently expected to come into force on 1 January 2023. It supersedes IFRS 4 Insurance Contracts. IFRS 17 establishes key principles that entities must apply in all aspects of the accounting of insurance contracts. In doing so, the Standard aims to increase the usefulness, comparability, transparency and quality of financial statements.
A fundamental concept introduced by IFRS 17 is the contractual service margin (CSM). This represents the unearned profit that an entity expects to earn as it provides services. However, as a principles-based standard, IFRS 17 requires entities to apply significant judgement when determining the inputs, assumptions and techniques they use to determine the CSM at each reporting period.
In general, the Standard resolves broad categories of mismatches which arise under IFRS 4. Notable examples include mismatches between assets recorded at current market value and liabilities calculated using fixed discount rates as well as inconsistencies in the timing of profit recognition over the duration of an insurance contract. However, there are requirements of IFRS 17 that may create economic or accounting mismatches of its own. For example, new mismatches could arise between the measurement of underlying contracts and the corresponding reinsurance held. Additionally, mismatches can still arise between the measurement of liabilities and the assets that support the liabilities.
This paper explores the technical, operational and commercial issues that arise across these and other areas focusing on the CSM. As a standard that is still very much in its infancy, and for which wider consensus on topics is yet to be achieved, this paper aims to provide readers with a deeper understanding of the issues and opportunities that accompany it.
Initial insurance losses are often reported with a textual description of the claim. The claims manager must determine the adequate case reserve for each known claim. In this paper, we present a framework for predicting the amount of loss given a textual description of the claim using a large number of words found in the descriptions. Prior work has focused on classifying insurance claims based on keywords selected by a human expert, whereas in this paper the focus is on loss amount prediction with automatic word selection. In order to transform words into numeric vectors, we use word cosine similarities and word embedding matrices. When we consider all unique words found in the training dataset and impose a generalised additive model to the resulting explanatory variables, the resulting design matrix is high dimensional. For this reason, we use a group lasso penalty to reduce the number of coefficients in the model. The proposed scalable, analytical framework provides a parsimonious and interpretable model. Finally, we discuss the implications of the analysis, including how the framework may be used by an insurance company and how the interpretation of the covariates can lead to significant policy change. The code can be found in the TAGAM R package (github.com/scottmanski/TAGAM).
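The group lasso penalty mentioned above shrinks whole groups of coefficients to zero; its core computational step is block soft-thresholding. A minimal sketch with toy numbers of our own, not the TAGAM implementation:

```python
import numpy as np

def prox_group_lasso(beta, groups, lam):
    """Proximal operator of the group lasso penalty: each group of
    coefficients is shrunk towards zero, and dropped entirely if its
    Euclidean norm falls below the threshold lam."""
    out = beta.copy()
    for g in groups:
        norm = np.linalg.norm(beta[g])
        out[g] = 0.0 if norm <= lam else (1 - lam / norm) * beta[g]
    return out

beta = np.array([0.1, 0.2, 3.0, 4.0])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = prox_group_lasso(beta, groups, lam=0.5)
# The weak first group is zeroed out entirely; the strong second group
# (norm 5) is shrunk by the factor (1 - 0.5/5) = 0.9, giving [0, 0, 2.7, 3.6]
```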
Severe acute respiratory syndrome-coronavirus-2 (SARS-CoV-2) led to a significant disease burden and disruptions in health systems. We describe the epidemiology and transmission characteristics of early coronavirus disease 2019 (COVID-19) cases in Bavaria, Germany. Cases were reverse transcription polymerase chain reaction (RT-PCR)-confirmed SARS-CoV-2 infections, reported from 20 January to 19 March 2020. The incubation period was estimated using travel history and date of symptom onset. To estimate the serial interval, we identified pairs of index and secondary cases. By 19 March, 3546 cases were reported. A large proportion of cases (38%) had been exposed abroad, causing further local transmission. The median incubation period of 256 cases with exposure abroad was 3.8 days (95% CI: 3.5–4.2). For 95% of infected individuals, symptom onset occurred within 10.3 days (95% CI: 9.1–11.8) after exposure. The median serial interval, estimated using 53 pairs, was 3.5 days (95% CI: 3.0–4.2; mean: 3.9, s.d.: 2.2). Travellers returning to Germany had an important influence on the spread of SARS-CoV-2 infections in Bavaria in early 2020. Especially in times of low incidence, public health agencies should identify holiday destinations and areas with ongoing local transmission, to monitor potential importation of SARS-CoV-2 infections. Travellers returning from areas with ongoing community transmission should be advised to quarantine to prevent re-introductions of COVID-19.
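The serial-interval summary above can be reproduced in miniature; here the 53 pairs are simulated from a gamma distribution matched to the reported mean and standard deviation (our distributional assumption), and a bootstrap percentile interval stands in for the study's CI method:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate 53 serial intervals with mean ~3.9 days and s.d. ~2.2 days
n_pairs, mean_si_true, sd_si_true = 53, 3.9, 2.2
shape = (mean_si_true / sd_si_true) ** 2
scale = sd_si_true ** 2 / mean_si_true
serial = rng.gamma(shape, scale, size=n_pairs)

# Point estimates of the kind reported in such analyses
median_si = np.median(serial)
mean_si = serial.mean()

# Bootstrap percentile CI for the median (a common small-sample choice)
boots = np.array([np.median(rng.choice(serial, n_pairs)) for _ in range(2000)])
ci = np.percentile(boots, [2.5, 97.5])
```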
In this paper we consider the pricing and hedging of financial derivatives in a model-independent setting, for a trader with additional information, or beliefs, on the evolution of asset prices. In particular, we suppose that the trader wants to act in a way which is independent of any modelling assumptions, but that she observes market information in the form of the prices of vanilla call options on the asset. We also assume that both the payoff of the derivative, and the insider’s information or beliefs, which take the form of a set of impossible paths, are time-invariant. In this way we accommodate drawdown constraints, as well as information/beliefs on quadratic variation or on the levels hit by asset prices. Our setup allows us to adapt recent work of [12] to prove duality results and a monotonicity principle. This enables us to determine geometric properties of the optimal models. Moreover, for specific types of information, we provide simple conditions for the existence of consistent models for the informed agent. Finally, we provide an example where our framework allows us to compute the impact of the information on the agent’s pricing bounds.
We consider the optimal prediction problem of stopping a spectrally negative Lévy process as close as possible to a given distance $b \geq 0$ from its ultimate supremum, under a squared-error penalty function. Under some mild conditions, the solution is fully and explicitly characterised in terms of scale functions. We find that the solution has an interesting non-trivial structure: if b is larger than a certain threshold then it is optimal to stop as soon as the difference between the running supremum and the position of the process exceeds a certain level (less than b), while if b is smaller than this threshold then it is optimal to stop immediately (independent of the running supremum and position of the process). We also present some examples.
In April 2018, Public Health England was notified of cases of Shigella sonnei who had eaten food from three different catering outlets in England. The outbreaks were initially investigated as separate events, but whole-genome sequencing (WGS) showed they were caused by the same strain. The investigation included analyses of epidemiological data, the food chain and microbiological examination of food samples. WGS was used to determine the phylogenetic relatedness and antimicrobial resistance profile of the outbreak strain. Ultimately, 33 cases were linked to this outbreak; the majority had eaten food from seven outlets specialising in Indian or Middle Eastern cuisine. Five outlets were linked to two or more cases; all of these outlets used fresh coriander, although a shared supplier was not identified. An investigation at one of the venues recorded that 86% of cases reported eating dishes with coriander as an ingredient or garnish. Four cases were admitted to hospital and one had evidence of treatment failure with ciprofloxacin. Phylogenetic analysis showed that the outbreak strain was part of a wider multidrug-resistant clade associated with travel to Pakistan. Poor hygiene practices during cultivation, distribution or preparation of fresh produce are likely contributing factors.
A shared ledger is a record of transactions that can be updated by any member of a group of users. The notion of independent and consistent record-keeping in a shared ledger is important for blockchain and more generally for distributed ledger technologies. In this paper we analyze a stochastic model for the shared ledger known as the tangle, which was devised as the basis for the IOTA cryptocurrency. The model is a random directed acyclic graph, and its growth is described by a non-Markovian stochastic process. We first prove ergodicity of the stochastic process, and then derive a delay differential equation for the fluid model which describes the tangle at high arrival rate. We prove convergence in probability of the tangle process to the fluid model, and also prove global stability of the fluid model. The convergence proof relies on martingale techniques.
This paper develops a new test statistic for parameters defined by moment conditions that exhibits desirable relative error properties for the approximation of tail area probabilities. Our statistic, called the tilted exponential tilting (TET) statistic, is constructed by estimating certain cumulant generating functions under exponential tilting weights. We show that the asymptotic p-value of the TET statistic can provide an accurate approximation to the p-value of an infeasible saddlepoint statistic, which admits a Lugannani–Rice style adjustment with relative errors of order $n^{-1}$ both in normal and large deviation regions. Numerical results illustrate the accuracy of the proposed TET statistic. Our results cover both just- and overidentified moment condition models. A limitation of our analysis is that the theoretical approximation results are exclusively for the infeasible saddlepoint statistic, and closeness of the p-values for the infeasible statistic to the ones for the feasible TET statistic is only numerically assessed.
This paper considers optimal admission and routing control in multi-class service systems in which customers can either receive quality regular service, which is subject to congestion, or receive congestion-free but less desirable service at an alternative service station, which we call the self-service station. We formulate the problem within the Markov decision process framework and focus on characterizing the structure of dynamic optimal policies which maximize the expected long-run rewards. For this, value function and sample path arguments are used. The congestion sensitivity of customers is modeled with class-independent holding costs at the regular service station. The results show how the admission rewards of customer classes affect their priorities at the regular and self-service stations. We find that the priority for regular service may depend not only on the regular service admission rewards of the classes but also on the difference between regular and self-service admission rewards. We show that optimal policies have monotonicity properties regarding the optimal decisions of individual customer classes, such that they divide the state space into three connected regions per class.
We prove the sharp bound for the probability that two experts who have access to different information, represented by different $\sigma$-fields, will give radically different estimates of the probability of an event. This is relevant when one combines predictions from various experts in a common probability space to obtain an aggregated forecast. The optimizer for the bound is explicitly described. This paper was originally titled ‘Contradictory predictions’.