Several information measures have been proposed and studied in the literature. One such measure is extropy, a complementary dual of entropy, whose interpretation and related aging notions have not yet been studied in great detail. In this paper, we first illustrate that extropy ranks the uniformity of a wide array of absolutely continuous families. We then discuss several theoretical merits of extropy and provide a closed-form expression for the extropy of finite mixture distributions. Finally, the dynamic versions of extropy are also discussed, specifically the residual extropy and past extropy measures.
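For orientation, the extropy of an absolutely continuous random variable $X$ with density $f$, as introduced by Lad, Sanfilippo and Agrò (2015), is
$$J(X) = -\frac{1}{2}\int_{-\infty}^{\infty} f^{2}(x)\,\mathrm{d}x,$$
and, for a non-negative lifetime with distribution function $F$ and survival function $\bar{F} = 1 - F$, the dynamic versions are commonly written as
$$J(X;t) = -\frac{1}{2\,\bar{F}^{2}(t)}\int_{t}^{\infty} f^{2}(x)\,\mathrm{d}x, \qquad \bar{J}(X;t) = -\frac{1}{2\,F^{2}(t)}\int_{0}^{t} f^{2}(x)\,\mathrm{d}x$$
(the residual and past extropy, respectively). These are the standard forms in the literature; the paper's own notation may differ.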
This article uses data from several publicly available databases to show that the distribution of intellectual property for frontier technologies, including those useful for sustainable development, is highly skewed in favor of a handful of developed countries. The intellectual property rights (IPR) regime as it exists does not optimize the global flow of technology and know-how for the attainment of the sustainable development goals and is in need of updating. Some features of the Fourth Industrial Revolution imply that the current system of patents is even more in need of reform than before. The vast inequality in access to COVID-19 vaccines and therapies has highlighted the costs of inaction. We recommend several policy changes for the international IPR regime. Broadly, these fall into three categories: allowing greater flexibility for developing countries, reassessing the appropriateness of patents for technologies that may be considered public goods, and closing loopholes that allow for unreasonable intellectual property protections.
Now in its second edition, this accessible text presents a unified Bayesian treatment of state-of-the-art filtering, smoothing, and parameter estimation algorithms for non-linear state space models. The book focuses on discrete-time state space models and carefully introduces fundamental aspects related to optimal filtering and smoothing. In particular, it covers a range of efficient non-linear Gaussian filtering and smoothing algorithms, as well as Monte Carlo-based algorithms. This updated edition features new chapters on constructing state space models of practical systems, the discretization of continuous-time state space models, Gaussian filtering by enabling approximations, posterior linearization filtering, and the corresponding smoothers. Coverage of key topics is expanded, including extended Kalman filtering and smoothing, and parameter estimation. The book's practical, algorithmic approach assumes only modest mathematical prerequisites, making it suitable for graduate and advanced undergraduate students. Many examples are included, with Matlab and Python code available online, enabling readers to implement the algorithms in their own projects.
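As a flavour of the material, here is a minimal linear Kalman filter in Python, in a generic textbook formulation (this is not the book's companion code; a linear Gaussian model $x_k = A x_{k-1} + q_{k-1}$, $y_k = H x_k + r_k$ is assumed):

```python
import numpy as np

def kalman_predict(m, P, A, Q):
    """Prediction step: propagate the Gaussian posterior through
    the linear dynamic model x_k = A x_{k-1} + q, q ~ N(0, Q)."""
    return A @ m, A @ P @ A.T + Q

def kalman_update(m, P, y, H, R):
    """Update step: condition on the measurement y = H x + r, r ~ N(0, R)."""
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    return m + K @ (y - H @ m), P - K @ S @ K.T
```

The non-linear methods the book covers (extended, unscented, and posterior linearization filters) replace these two steps with suitable Gaussian approximations.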
Learn by doing with this user-friendly introduction to time series data analysis in R. This book explores the intricacies of managing and cleaning time series data of different sizes, scales and granularity, data preparation for analysis and visualization, and different approaches to classical and machine learning time series modeling and forecasting. A range of pedagogical features support students, including end-of-chapter exercises, problems, quizzes and case studies. The case studies are designed to stretch the learner, introducing larger data sets, enhanced data management skills, and R packages and functions appropriate for real-world data analysis. On top of providing commented R programs and data sets, the book's companion website offers extra case studies, lecture slides, videos and exercise solutions. Accessible to those with a basic background in statistics and probability, this is an ideal hands-on text for undergraduate and graduate students, as well as researchers in data-rich disciplines.
Country-wide social distancing and the suspension of non-emergency medical care during the COVID-19 pandemic will undoubtedly have affected public health in multiple ways. While non-pharmaceutical interventions are expected to reduce the transmission of several infectious diseases, severe disruptions to healthcare systems have hampered diagnosis, treatment, and routine vaccination. We examined the effect of this disruption on meningococcal disease and vaccination in the UK. By adapting an existing mathematical model for meningococcal carriage, we addressed the following questions: What is the predicted impact of the existing MenACWY adolescent vaccination programme? What effects might social distancing and reduced vaccine uptake have on future epidemiology? Will catch-up vaccination campaigns be necessary? Our model indicated that the MenACWY vaccine programme was generating substantial indirect protection and suppressing transmission by 2020. COVID-19 social distancing is expected to have accelerated this decline, causing significant long-lasting reductions in both the carriage prevalence of meningococcal A/C/W/Y strains and the incidence of invasive meningococcal disease. In all scenarios modelled, pandemic social-mixing effects outweighed potential reductions in vaccine uptake, causing an overall decline in carriage prevalence from 2020 for at least 5 years. Model outputs show strong consistency with recently published case data for England.
Which patterns must a two-colouring of $K_n$ contain if each vertex has at least $\varepsilon n$ red and $\varepsilon n$ blue neighbours? We show that when $\varepsilon > 1/4$, $K_n$ must contain a complete subgraph on $\Omega (\log n)$ vertices where one of the colours forms a balanced complete bipartite graph.
When $\varepsilon \leq 1/4$, this statement is no longer true, as evidenced by the following colouring $\chi$ of $K_n$. Divide the vertex set into four nearly equal parts $V_1, V_2, V_3, V_4$, and let the blue colour class consist of the edges between $(V_1,V_2)$, $(V_2,V_3)$, $(V_3,V_4)$, together with the edges contained inside $V_2$ and inside $V_3$. Surprisingly, we find that this obstruction is unique in the following sense. Any two-colouring of $K_n$ in which each vertex has at least $\varepsilon n$ red and $\varepsilon n$ blue neighbours (with $\varepsilon > 0$) contains a vertex set $S$ of order $\Omega_{\varepsilon}(\log n)$ on which one colour class forms a balanced complete bipartite graph, or which has the same colouring as $\chi$.
This paper provides nonparametric specification tests for the commonly used homogeneous and stable coefficients structures in panel data models. We first obtain the augmented residuals by estimating the model under the null hypothesis and then run auxiliary time series regressions of the augmented residuals on covariates with time-varying coefficients (TVCs) via sieve methods. The test statistic is then constructed by averaging the squared fitted values, which are close to zero under the null and deviate from zero under the alternatives. We show that the test statistic, after being appropriately standardized, is asymptotically normal under the null and under a sequence of Pitman local alternatives. A bootstrap procedure is proposed to improve the finite sample performance of our test. In addition, we extend the procedure to test other structures, such as the homogeneity of TVCs or the stability of heterogeneous coefficients. The joint test is extended to panel models with two-way fixed effects. Monte Carlo simulations indicate that our tests perform reasonably well in finite samples. We apply the tests to re-examine the environmental Kuznets curve in the United States, and find that the model with homogeneous TVCs is more appropriate for this application.
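The mechanics of the statistic can be sketched in a few lines. The following Python fragment is a simplified illustration only (pooled OLS under the null, a polynomial sieve in scaled time, no standardization, bootstrap, or fixed effects), not the paper's estimator:

```python
import numpy as np

def tvc_test_statistic(y, X, K=3):
    """y: (N, T) outcomes; X: (N, T, p) covariates.
    1. Estimate constant coefficients under the null by pooled OLS.
    2. Regress the residuals on covariates interacted with a
       polynomial sieve basis in scaled time.
    3. Average the squared fitted values (near zero under the null)."""
    N, T, p = X.shape
    Xf, yf = X.reshape(N * T, p), y.reshape(N * T)
    beta = np.linalg.lstsq(Xf, yf, rcond=None)[0]
    resid = yf - Xf @ beta
    t = np.tile((np.arange(T) + 1.0) / T, N)                 # scaled time index
    Z = np.column_stack([Xf * (t ** k)[:, None] for k in range(K + 1)])
    gamma = np.linalg.lstsq(Z, resid, rcond=None)[0]
    return np.mean((Z @ gamma) ** 2)
```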
The recent reinforcement of CoV surveillance in animals, fuelled by the COVID-19 pandemic, has provided increasing evidence that mammals other than bats might harbour further diversity and play critical roles in human infectious diseases. This work describes the results of a two-year survey carried out in Italy with the double objective of uncovering CoV diversity associated with wildlife and of excluding the establishment of a reservoir for SARS-CoV-2 in particularly susceptible or exposed species. The survey targeted hosts from five different orders and was harmonised across the country in terms of sample size, target tissues, and molecular testing. Results showed the circulation of 8 CoV species in 13 of the 42 host species screened. Coronaviruses were either typical of the host species/genus or normally associated with their domestic counterparts. Two novel viruses likely belonging to a novel CoV genus were found in mustelids. All samples were negative for SARS-CoV-2, with the minimum detectable prevalence ranging between 0.49% and 4.78% in the 13 species reaching our threshold sample size of 59 individuals. Considering that within-species transmission in white-tailed deer raised the prevalence from 5% to 81% within a few months, this result would exclude a sustained cycle after spillback in the tested species.
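The minimum detectable prevalence figures follow the usual design logic of requiring a high probability of detecting at least one positive; a back-of-the-envelope version (assuming perfect test sensitivity and independent sampling, which the study's exact calculation may refine) is:

```python
def min_detectable_prevalence(n, confidence=0.95):
    # Smallest prevalence p with P(at least one positive in n samples)
    # = 1 - (1 - p)^n >= confidence.
    return 1 - (1 - confidence) ** (1.0 / n)

print(min_detectable_prevalence(59))  # ~0.0495 at the threshold sample size
```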
Following Bradonjić and Saniee, we study a model of bootstrap percolation on the Gilbert random geometric graph on the 2-dimensional torus. In this model, the expected number of vertices of the graph is n, and the expected degree of a vertex is $a\log n$ for some fixed $a>1$. Each vertex is added with probability p to a set $A_0$ of initially infected vertices. Vertices subsequently become infected if they have at least $ \theta a \log n $ infected neighbours. Here $p, \theta \in [0,1]$ are taken to be fixed constants.
We show that if $\theta < (1+p)/2$, then a sufficiently large local outbreak leads with high probability to the infection spreading globally, with all but o(n) vertices eventually becoming infected. On the other hand, for $ \theta > (1+p)/2$, even if one adversarially infects every vertex inside a ball of radius $O(\sqrt{\log n} )$, with high probability the infection will spread to only o(n) vertices beyond those that were initially infected.
In addition we give some bounds on the $(a, p, \theta)$ regions ensuring the emergence of large local outbreaks or the existence of islands of vertices that never become infected. We also give a complete picture of the (surprisingly complex) behaviour of the analogous 1-dimensional bootstrap percolation model on the circle. Finally we raise a number of problems, and in particular make a conjecture on an ‘almost no percolation or almost full percolation’ dichotomy which may be of independent interest.
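A minimal simulation of the 2-dimensional model is easy to set up; the sketch below uses hypothetical parameter values and calibrates the connection radius so that the expected degree is approximately $a \log n$:

```python
import numpy as np

def bootstrap_percolation(n=2000, a=2.0, p=0.1, theta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    N = rng.poisson(n)                           # number of vertices
    pts = rng.random((N, 2))                     # uniform points on the unit torus
    r = np.sqrt(a * np.log(n) / (np.pi * n))     # expected degree ~ a log n
    d = np.abs(pts[:, None, :] - pts[None, :, :])
    d = np.minimum(d, 1.0 - d)                   # torus (wrap-around) metric
    adj = (d ** 2).sum(-1) < r ** 2
    np.fill_diagonal(adj, False)
    infected = rng.random(N) < p                 # initially infected set A_0
    threshold = theta * a * np.log(n)
    while True:                                  # threshold infection dynamics
        newly = ~infected & ((adj & infected).sum(axis=1) >= threshold)
        if not newly.any():
            break
        infected |= newly
    return infected.mean()                       # final infected fraction

# theta = 0.4 < (1+p)/2 = 0.55 should spread widely; theta = 0.7 should not
print(bootstrap_percolation(theta=0.4), bootstrap_percolation(theta=0.7))
```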
The evidence for the incubation period of Legionnaires’ disease is based on data from a small number of outbreaks. An incubation period of 2–10 days is commonly used for the definition and investigation of cases. In the German LeTriWa study, we collaborated with public health departments to identify evidence-based sources of exposure among cases of Legionnaires’ disease within 1–14 days before symptom onset. For each individual, we assigned weights to the possible days of exposure before symptom onset, giving the highest weight to the exposure days of cases with only one possible day of exposure. We then calculated an incubation period distribution with a median of 5 days and a mode of 6 days. The cumulative distribution reached 89% by the 10th day before symptom onset. One case-patient with immunosuppression had a single day of exposure to the likely infection source only 1 day before symptom onset. Overall, our results support the 2- to 10-day incubation period used in the case definition, investigation, and surveillance of Legionnaires’ disease.
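One simple reading of the weighting scheme (our interpretation for illustration; the study's exact algorithm may differ) is to give each case total weight 1, split evenly across its possible exposure days, so single-day cases contribute the most per day:

```python
from collections import Counter

def incubation_distribution(cases):
    """cases: list of lists of possible exposure days before onset.
    Each case contributes weight 1, divided evenly among its days."""
    weights = Counter()
    for days in cases:
        for d in days:
            weights[d] += 1.0 / len(days)
    total = sum(weights.values())
    return {d: w / total for d, w in sorted(weights.items())}

# Hypothetical example: cases with 1, 2, and 3 possible exposure days
print(incubation_distribution([[5], [4, 5], [3, 5, 6]]))
```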
The myopic strategy is one of the most important strategies when studying bandit problems. In 2018, Nouiehed and Ross put forward a conjecture about Feldman’s bandit problem (J. Appl. Prob. (2018) 55, 318–324). They proposed that for Bernoulli two-armed bandit problems, the myopic strategy stochastically maximizes the number of wins. In this paper we consider the two-armed bandit problem with more general distributions and utility functions. We confirm this conjecture by proving a stronger result: if the agent playing the bandit has a general utility function, the myopic strategy is still optimal if and only if this utility function satisfies reasonable conditions.
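Feldman's Bernoulli setup is easy to simulate: each arm has success probability $a$ or $b$, but the assignment is unknown, and the myopic rule pulls the arm with the larger posterior expected success probability. The sketch below uses hypothetical parameter values:

```python
import numpy as np

def myopic_win_rate(a=0.7, b=0.3, w=0.5, horizon=100, trials=20000, seed=0):
    """w is the prior probability that arm 0 is the 'a' arm; pw is its
    Bayesian posterior, updated after every pull."""
    rng = np.random.default_rng(seed)
    wins = 0.0
    for _ in range(trials):
        arm0_is_a = rng.random() < w
        probs = (a, b) if arm0_is_a else (b, a)
        pw = w
        for _ in range(horizon):
            # myopic choice: maximize immediate expected reward
            arm = 0 if pw * a + (1 - pw) * b >= pw * b + (1 - pw) * a else 1
            win = rng.random() < probs[arm]
            # Bayes update of P(arm 0 is the 'a' arm)
            if arm == 0:
                la, lb = (a, b) if win else (1 - a, 1 - b)
            else:
                la, lb = (b, a) if win else (1 - b, 1 - a)
            pw = pw * la / (pw * la + (1 - pw) * lb)
            wins += win
    return wins / (trials * horizon)

print(myopic_win_rate())
```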
We study competing first passage percolation on graphs generated by the configuration model with infinite-mean degrees. Initially, two uniformly chosen vertices are infected with a type 1 and type 2 infection, respectively, and the infection then spreads via nearest neighbors in the graph. The time it takes for the type 1 (resp. 2) infection to traverse an edge e is given by a random variable $X_1(e)$ (resp. $X_2(e)$) and, if the vertex at the other end of the edge is still uninfected, it then becomes type 1 (resp. 2) infected and immune to the other type. Assuming that the degrees follow a power-law distribution with exponent $\tau \in (1,2)$, we show that with high probability as the number of vertices tends to infinity, one of the infection types occupies all vertices except for the starting point of the other type. Moreover, both infections have a positive probability of winning regardless of the passage-time distribution. The result is also shown to hold for the erased configuration model, where self-loops are erased and multiple edges are merged, and when the degrees are conditioned to be smaller than $n^\alpha$ for some $\alpha > 0$.
Given the assumption that a loss random variable has a certain parametric distribution, the empirical analysis of the properties of the loss requires the parameters to be estimated. In this chapter, we review the theory of parametric estimation, including the properties of an estimator and the concepts of point estimation, interval estimation, unbiasedness, consistency and efficiency. Apart from the parametric approach, we may also estimate the distribution functions and the probability (density) functions of the loss random variables directly without assuming a certain parametric form.
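As a concrete illustration of the two approaches, the following Python fragment fits a parametric model by maximum likelihood and computes the empirical distribution function from the same (simulated, purely hypothetical) loss data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
losses = rng.gamma(shape=2.0, scale=500.0, size=1000)   # hypothetical losses

# Parametric approach: maximum likelihood fit of a gamma distribution
shape, loc, scale = stats.gamma.fit(losses, floc=0)
print(f"MLE estimates: shape={shape:.3f}, scale={scale:.1f}")

# Nonparametric approach: the empirical distribution function
x = np.sort(losses)
ecdf = np.arange(1, len(x) + 1) / len(x)   # F_hat(x_i) = i / n
```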
Ratemaking refers to the determination of the premium rates to cover the potential loss payments incurred under an insurance policy. In addition to the losses, the premium should also cover all expenses as well as the profit margin. As past losses are used to project future losses, care must be taken to adjust for potential increases in the loss costs. There are two methods to determine the premium rates: the loss cost method and the loss ratio method.
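In textbook form (with hypothetical figures), the two methods can be written as short formulas:

```python
def loss_cost_rate(pure_premium, fixed_expense, var_expense_ratio, profit_ratio):
    # Loss cost (pure premium) method: gross rate per exposure unit
    return (pure_premium + fixed_expense) / (1 - var_expense_ratio - profit_ratio)

def loss_ratio_rate(current_rate, experience_loss_ratio, permissible_loss_ratio):
    # Loss ratio method: scale the current rate by the ratio of the
    # experienced loss ratio to the permissible (target) loss ratio
    return current_rate * experience_loss_ratio / permissible_loss_ratio

print(loss_cost_rate(300.0, 20.0, 0.25, 0.05))  # hypothetical inputs
print(loss_ratio_rate(450.0, 0.78, 0.70))
```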
Having discussed models for claim frequency and claim severity separately, we now turn our attention to modeling the aggregate loss of a block of insurance policies. Much of the time we shall use the terms aggregate loss and aggregate claim interchangeably, although we recognize the difference between them as discussed in the last chapter. There are two major approaches in modeling aggregate loss: the individual risk model and the collective risk model.
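Under the collective risk model the aggregate loss is $S = X_1 + \cdots + X_N$, with a random claim count $N$ and i.i.d. claim severities. A minimal Monte Carlo sketch (the Poisson frequency and gamma severity are hypothetical choices):

```python
import numpy as np

def simulate_aggregate_loss(lam=10.0, shape=2.0, scale=500.0, sims=100_000, seed=0):
    """Collective risk model: S = X_1 + ... + X_N, N ~ Poisson(lam),
    X_i ~ Gamma(shape, scale), independent of N."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, size=sims)            # claim frequency
    return np.array([rng.gamma(shape, scale, k).sum() for k in counts])

S = simulate_aggregate_loss()
print(S.mean())   # should be close to E[S] = lam * shape * scale = 10000
```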
After a model has been estimated, we have to evaluate it to ascertain that the assumptions applied are acceptable and supported by the data. This should be done prior to using the model for prediction and pricing. Model evaluation can be done using graphical methods, as well as formal misspecification tests and diagnostic checks.
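Continuing the hypothetical gamma example, both kinds of checks take only a few lines with scipy (note that the Kolmogorov–Smirnov p-value is only approximate when the parameters are estimated from the same data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
losses = rng.gamma(2.0, 500.0, size=500)             # hypothetical data
shape, loc, scale = stats.gamma.fit(losses, floc=0)  # fitted model

# Formal misspecification check: Kolmogorov-Smirnov test of the fit
ks = stats.kstest(losses, stats.gamma(shape, loc, scale).cdf)
print(ks.statistic, ks.pvalue)

# Graphical check: Q-Q plot data (pass plot=plt to draw it)
qq = stats.probplot(losses, sparams=(shape, loc, scale), dist=stats.gamma)
```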