The paper establishes the local asymptotic normality property for general conditionally heteroskedastic time series models of multiplicative form, $\epsilon _t=\sigma _t(\boldsymbol {\theta }_0)\eta _t$, where the volatility $\sigma _t(\boldsymbol {\theta }_0)$ is a parametric function of $\{\epsilon _{s}, s< t\}$, and $(\eta _t)$ is a sequence of i.i.d. random variables with common density $f_{\boldsymbol {\theta }_0}$. In contrast with earlier results, the finite-dimensional parameter $\boldsymbol {\theta }_0$ enters both the volatility and the density specifications. To deal with nondifferentiable functions, we introduce a conditional notion of the familiar quadratic mean differentiability condition which takes into account parameter variation in both the volatility and the error density. Our results are illustrated with two particular models, the APARCH with asymmetric Student-t distribution and the Beta-t-GARCH model, and are extended to handle a conditional mean.
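The multiplicative form above is easy to illustrate by simulation. Below is a minimal sketch assuming a GARCH(1,1)-type volatility recursion and standardised Student-t innovations; both choices are illustrative special cases, not the paper's general specification.

```python
import numpy as np

def simulate_multiplicative(n, omega=0.1, alpha=0.1, beta=0.8, nu=7.0, seed=0):
    """Simulate eps_t = sigma_t * eta_t with a GARCH(1,1)-type volatility
    recursion and standardised Student-t innovations (illustrative only)."""
    rng = np.random.default_rng(seed)
    # Standardised Student-t: unit variance requires scaling by sqrt((nu-2)/nu)
    eta = rng.standard_t(nu, size=n) * np.sqrt((nu - 2.0) / nu)
    eps = np.empty(n)
    sigma2 = np.empty(n)
    sigma2[0] = omega / (1.0 - alpha - beta)  # unconditional variance
    for t in range(n):
        if t > 0:
            sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * eta[t]
    return eps, np.sqrt(sigma2)
```

Here the single parameter vector $(\omega, \alpha, \beta, \nu)$ governs both the volatility recursion and the error density, mirroring the paper's point that $\boldsymbol{\theta}_0$ enters both specifications.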
Carbapenemase-producing Enterobacterales (CPE) are important globally. In 2017, Ireland declared a national public health emergency to address CPE in acute hospitals. A National Public Health Emergency Team and an expert advisory group (EAG) were established. The EAG has identified key learnings to inform future strategies. First, there is still an opportunity to prevent CPE becoming endemic. Second, damp environmental reservoirs in hospitals are inadequately controlled. Third, antibiotic stewardship remains important in control. Finally, there is no current requirement to extend screening to detect CPE outside of acute hospitals. These conclusions and their implications may also be relevant in other countries.
The UEFA Euro 2020 tournament was scheduled to take place in 2020 but, due to the coronavirus disease 2019 (COVID-19) pandemic, was rescheduled to start on 11 June 2021. Approximately 4500 Finnish spectators participated, travelling between Finland and Russia during the period 16 to 30 June to attend matches played on 16 and 21 June. A total of 419 persons returning from Russia, or with a connection to Russia, tested positive for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Of the 321 sequenced samples, 303 were of the Delta variant. None of these cases was hospitalised. In the following weeks, findings of the Delta variant increased rapidly. Thus, EURO 2020 travel-related imported cases likely facilitated this rapid surge of the Delta variant, although a similar impact would likely have occurred with the typical increase in the number of travellers entering Finland later in the summer.
Neonatal listeriosis is rare, and detecting more than one case together would be unlikely without a causal link. Thirty-five instances of neonatal listeriosis in which cross-infection occurred in the UK and Ireland were reviewed, together with 29 other similar episodes reported elsewhere. All instances comprised an infant who was ill at, or within one day of, delivery and who had direct or indirect contact with a second infant (or, in a minority, two or more infants), who then usually developed meningitis 6 to 12 days later. In most instances, the infants were nursed on the same day in obstetric units or newborn nurseries and consequently shared staff and equipment: hence, the likely route of transmission was direct or indirect neonate-to-neonate contact. In one instance, a stethoscope was used on both infants, who were nursed in different parts of the same hospital. In a further incident, the mother of the early-onset infant cuddled a baby from an adjacent bed, who developed meningitis 12 days later. The largest outbreak occurred in Costa Rica, where nine cases of neonatal listeriosis resulted after infants were bathed shortly after birth in mineral oil that had been contaminated by the early-onset index case.
Many real-world problems require optimizing trajectories under constraints. Classical approaches are often based on optimal control methods, but these require exact knowledge of the underlying dynamics and constraints, which can be challenging or even out of reach. In view of this, we leverage data-driven approaches to design a new dynamics-free, end-to-end framework for optimized and realistic trajectories. Trajectories are decomposed on a function basis, trading the initial infinite-dimensional problem over a multivariate functional space for a parameter optimization problem. A maximum a posteriori approach incorporating information from data is then used to obtain a new penalized optimization problem. The penalty term narrows the search to a region centered on the data and includes estimated features of the problem. We apply our data-driven approach to two settings, in aeronautics and in sailing-route optimization. The developed approach is implemented in the Python library PyRotor.
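The basis-decomposition idea can be sketched generically. The toy cost, polynomial basis, and simulated reference data below are hypothetical and do not reproduce PyRotor's actual API; the sketch only shows how a data-centered penalty turns a trajectory optimization into a finite-dimensional parameter optimization.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: a 1-D trajectory x(t) on [0, 1] decomposed on a
# degree-3 polynomial basis, with a quadratic "effort" cost plus a penalty
# pulling the coefficients toward the mean of simulated reference data.
t = np.linspace(0.0, 1.0, 50)
basis = np.vander(t, 4, increasing=True)           # columns: 1, t, t^2, t^3

rng = np.random.default_rng(1)
data_coefs = rng.normal([0.0, 1.0, -0.5, 0.2], 0.05, size=(20, 4))
center = data_coefs.mean(axis=0)                   # estimated feature of the data

def cost(c, lam=1.0):
    x = basis @ c
    effort = np.mean(np.gradient(x, t) ** 2)       # toy cost: mean squared slope
    penalty = lam * np.sum((c - center) ** 2)      # MAP-style penalisation
    return effort + penalty

res = minimize(cost, x0=np.zeros(4), method="BFGS")
```

The penalty weight `lam` plays the role of the prior strength in the maximum a posteriori formulation: larger values keep the optimized coefficients closer to the region covered by the data.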
Stable Lévy processes lie at the intersection of Lévy processes and self-similar Markov processes. Processes in the latter class enjoy a Lamperti-type representation as the space-time path transformation of so-called Markov additive processes (MAPs). This completely new mathematical treatment takes advantage of the fact that the underlying MAP for stable processes can be explicitly described in one dimension and semi-explicitly described in higher dimensions, and uses this approach to catalogue a large number of explicit results describing the path fluctuations of stable Lévy processes in one and higher dimensions. Written for graduate students and researchers in the field, this book systematically establishes many classical results as well as presenting many recent results appearing in the last decade, including previously unpublished material. Topics explored include first hitting laws for a variety of sets, path conditionings, law-preserving path transformations, the distribution of extremal points, growth envelopes and winding behaviour.
Three-dimensional panel models are widely used in empirical analysis. Researchers use various combinations of fixed effects for three-dimensional panels while the correct specification is unknown. When one imposes a parsimonious model and the true model is rich in complexity, the fitted model inevitably incurs the consequences of misspecification including potential bias. When a richly specified model is employed and the true model is parsimonious, then the consequences typically include a poor fit with larger standard errors than necessary. It is therefore useful for researchers to have good model selection techniques that assist in determining the “true” model or a satisfactory approximation. In this light, Lu, Miao, and Su (2021, Econometric Reviews 40, 867–898) propose methods of model selection. We advance this literature by proposing a method of post-selection inference for regression parameters. Despite our use of the lasso technique as the means of model selection, our assumptions allow for many and even all fixed effects to be nonzero. This property is important to avoid a degenerate distribution of fixed effects which often reflect economic sizes of countries in gravity analyses of trade. Using an international trade database, we document evidence that our key assumption of approximately sparse fixed effects is plausibly satisfied for gravity analyses of trade. We also establish the uniform size control over alternative data generating processes of fixed effects. Simulation studies demonstrate that the proposed method is less biased than under-fitting fixed effect estimators, is more efficient than over-fitting fixed effect estimators, and robustly allows for inference that is as accurate as the oracle estimator.
The left-tailed unit-root tests of the panel analysis of nonstationarity in idiosyncratic and common components (PANIC) proposed by Bai and Ng (2004, Econometrica 72, 1127–1177) have standard local asymptotic power. We assess the size and power properties of the right-tailed version of the PANIC tests when the common and/or the idiosyncratic components are moderately explosive. We find that, when an idiosyncratic component is moderately explosive, the tests for the common components may have considerable size distortions, and those for an idiosyncratic component may suffer from the nonmonotonic power problem. We provide an analytic explanation under the moderately local to unity framework developed by Phillips and Magdalinos (2007, Journal of Econometrics 136, 115–130). We then propose a new cross-sectional (CS) approach to disentangle the common and idiosyncratic components in a relatively short explosive window. Our Monte Carlo simulations show that the CS approach is robust to the nonmonotonic power problem.
A company with $n$ geographically widely dispersed sites seeks insurance that pays off if $m$ out of the $n$ sites experience rarely occurring catastrophes (e.g., earthquakes) during a year. This study describes an adaptive dynamic strategy that enables an insurance company to offer the policy with smaller loss probability than more conventional static policies induce, but at a comparable or lower premium. The strategy accomplishes this by periodically purchasing reinsurance on individual sites. Exploiting rarity, the policy induces zero loss with probability one if no more than one quake occurs during any review interval. The policy also may induce a profit if $m$ or more quakes occur in an interval, provided no quakes have occurred in previous intervals. The study also examines the benefit of more than one reinsurance policy per active site. The study relies on expected utility to determine indifference premiums and derives an upper bound on loss probability independent of premium.
We develop a nonparametric Bayesian analysis of regression discontinuity (RD) designs, allowing for covariates, in which we model and estimate the unknown functions of the forcing variable by basis expansion methods. In a departure from current methods, we use the entire data on the forcing variable, but we emphasize the data near the threshold by placing some knots at and near the threshold, a technique we refer to as soft-windowing. To handle the nonequally spaced knots that emerge from soft-windowing, we construct a prior on the spline coefficients, from a second-order Ornstein–Uhlenbeck process, which is hyperparameter light, and satisfies the Kullback–Leibler support property. In the fuzzy RD design, we explain the divergence between the treatment implied by the forcing variable and the actual intake by a discrete confounder variable, taking three values (complier, never-taker, and always-taker), and a model with four potential outcomes. Choice of the soft-window, and the number of knots, is determined by marginal likelihoods, computed by the method of Chib [Journal of the American Statistical Association, 1995, 90, 1313–1321] as a by-product of the Markov chain Monte Carlo (MCMC)-based estimation. Importantly, in each case, we allow for covariates, incorporated nonparametrically by additive natural cubic splines. The potential outcome error distributions are modeled as Student-t, with an extension to Dirichlet process mixtures. We derive the large sample posterior consistency, and posterior contraction rate, of the RD average treatment effect (ATE) (in the sharp case) and RD ATE for compliers (in the fuzzy case), as the number of basis parameters increases with sample size. The excellent performance of the methods is documented in simulation experiments, and in an application to educational attainment of women from Meyersson [Econometrica, 2014, 82, 229–269].
Across a wide variety of applications, the self-exciting Hawkes process has been used to model phenomena in which the history of events influences future occurrences. However, there may be many situations in which the past events only influence the future as long as they remain active. For example, a person spreads a contagious disease only as long as they are contagious. In this paper, we define a novel generalization of the Hawkes process that we call the ephemerally self-exciting process. In this new stochastic process, the excitement from one arrival lasts for a randomly drawn activity duration, hence the ephemerality. Our study includes exploration of the process itself as well as connections to well-known stochastic models such as branching processes, random walks, epidemics, preferential attachment, and Bayesian mixture models. Furthermore, we prove a batch scaling construction of general, marked Hawkes processes from a general ephemerally self-exciting model, and this novel limit theorem both provides insight into the Hawkes process and motivates the model contained herein as an attractive self-exciting process in its own right.
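The ephemeral mechanism can be sketched with an Ogata-style thinning simulation. The unit jump size `alpha` and exponentially distributed activity durations below are illustrative simplifications; the paper's model is more general.

```python
import numpy as np

def simulate_ephemeral(mu=0.5, alpha=0.8, mean_duration=1.0, horizon=50.0, seed=3):
    """Ogata-style thinning for a toy ephemerally self-exciting process:
    each arrival adds `alpha` to the intensity for an exponentially
    distributed activity window, after which its excitement vanishes."""
    rng = np.random.default_rng(seed)
    t, events, expiries = 0.0, [], []    # expiries: times when excitement ends
    while t < horizon:
        n_active = sum(1 for e in expiries if e > t)
        lam_bar = mu + alpha * n_active          # intensity only decays between
        t += rng.exponential(1.0 / lam_bar)      # arrivals, so this is a bound
        if t >= horizon:
            break
        lam_t = mu + alpha * sum(1 for e in expiries if e > t)
        if rng.uniform() * lam_bar <= lam_t:     # accept with prob lam_t/lam_bar
            events.append(t)
            expiries.append(t + rng.exponential(mean_duration))
    return events
```

Between arrivals the intensity can only decrease (excitements expire), so the intensity at the last event dominates the true intensity until the next candidate point, which is what makes the thinning bound valid here.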
During the first phase of the COVID-19 pandemic in 2020, concerns were raised that healthcare workers (HCWs) were at high risk of infection. The aim of this study was to explore the transmission of COVID-19 among HCWs during a staff outbreak at an inpatient ward in Sweden from 1 March to 31 May 2020. A mixed-methods approach was applied using several data sources. In total, 152 of 176 HCWs participated. The incidence of COVID-19 among HCWs was 33%. Among cases, 48 (96%) performed activities involving direct contact with COVID-19 patients. Contact tracing connected 78% of cases to interaction with another contagious co-worker. Only a few HCW cases reported contact with a confirmed COVID-19 case at home (n = 6; 12%) or in the community (n = 3; 6%). Multiple logistic regression identified direct care of COVID-19 patients and a positive COVID-19 family contact as risk factors for infection (adjusted OR 8.4 and 9.0, respectively). The main interventions to stop the outbreak were physical distancing between HCWs, reinforcement of personal hygiene routines and rigorous surface cleaning. The personal protective equipment used in contact with patients was not changed in response to the outbreak. We highlight HCW-to-HCW transmission of COVID-19 in a hospital environment and the importance of preventing droplet and contact transmission between co-workers.
In this paper, we consider the problem of sustainable harvesting. We explain how the manager maximizes his/her profit according to the quantity of natural resource available in a harvesting area and under the constraint of penalties and fines when the quota is exceeded. We characterize the optimal values and some optimal strategies using a verification result. We then show by numerical examples that this optimal strategy outperforms naive ones. Moreover, we define a level of fines that ensures the double objective of sustainable harvesting: a remaining quantity of the natural resource sufficient to ensure its sustainability, and an acceptable income for the manager.
All social and policy researchers need to synthesize data into a visual representation. Producing good visualizations combines creativity and technique. This book teaches the techniques and basics needed to produce a variety of visualizations, allowing readers to communicate data and analyses in a creative and effective way. Visuals for tables, time series, maps, text, and networks are carefully explained and organized, showing how to choose the right plot for the type of data being analysed and displayed. Examples are drawn from public policy, public safety, education, political tweets, and public health. The presentation proceeds step by step, starting from the basics, in the programming languages R and Python, so that readers learn the coding skills while simultaneously becoming familiar with the advantages and disadvantages of each visualization. No prior knowledge of either Python or R is required. Code for all the visualizations is available from the book's website.
For the gambler’s ruin problem with two players starting with the same amount of money, we show the playing time is stochastically maximized when the games are fair.
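The claim is easy to check by Monte Carlo. A small sketch with both players starting with 10 units (the parameters are illustrative):

```python
import random

def mean_ruin_time(p, start=10, trials=2000, seed=0):
    """Monte Carlo estimate of the mean duration of gambler's ruin, with both
    players starting with `start` units and win probability p per game."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        fortune, steps = start, 0
        while 0 < fortune < 2 * start:
            fortune += 1 if rng.random() < p else -1
            steps += 1
        total += steps
    return total / trials
```

For the fair game the classical formula gives an expected duration of $i(N-i) = 10 \times 10 = 100$ games, and the simulated mean under any biased $p \neq 1/2$ falls below this, consistent with the stochastic maximization result.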
This paper studies a composite problem involving decision-making about the optimal entry time and dynamic consumption afterwards. In Stage 1, the investor has access to full market information subject to some information costs and needs to choose an optimal stopping time to initiate Stage 2; in Stage 2, the investor terminates the costly full information acquisition and starts dynamic investment and consumption under partial observation of free public stock prices. Habit formation preferences are employed, in which past consumption affects the investor’s current decisions. Using the stochastic Perron method, the value function of the composite problem is proved to be the unique viscosity solution of some variational inequalities.
One of the most fundamental tasks in non-life insurance, performed on a regular basis, is reserving analysis, which amounts to stochastically predicting the overall loss reserves needed to cover possible claims. The most common reserving methods are based on different parametric approaches using aggregated data structured in run-off triangles. In this paper, we propose a non-parametric approach, which handles the underlying loss development triangles as functional profiles and predicts the claim reserve distribution through a permutation bootstrap. Three competitive functional-based reserving techniques, each with a slightly different scope, are presented; their theoretical and practical advantages, in particular effortless implementation, robustness against outliers, and wide-ranging applicability, are discussed. Theoretical justifications of the methods are derived as well. An evaluation of the empirical performance of the designed methods and a full-scale comparison with standard (parametric) reserving techniques are carried out on several hundred real run-off triangles against the known real loss outcomes. An important objective of the paper is also to promote the natural usefulness of functional reserving methods among reserving practitioners.
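As a rough illustration of bootstrap-based reserving on a run-off triangle, a generic link-ratio resampling sketch is shown below on a small synthetic triangle. This is a simplified stand-in, not the authors' functional permutation bootstrap.

```python
import numpy as np

# Synthetic cumulative run-off triangle; np.nan marks future (unobserved) cells.
tri = np.array([
    [100., 150., 175., 180.],
    [110., 168., 195., np.nan],
    [120., 175., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])

rng = np.random.default_rng(42)
n = tri.shape[0]
# Observed link ratios C_{i,j+1} / C_{i,j} for each development period j.
ratios = [[tri[i, j + 1] / tri[i, j] for i in range(n) if not np.isnan(tri[i, j + 1])]
          for j in range(n - 1)]

def simulate_reserve():
    """Fill future cells by resampling observed link ratios; return the
    simulated reserve (ultimate claims minus the latest diagonal)."""
    filled = tri.copy()
    for i in range(n):
        for j in range(n - 1):
            if np.isnan(filled[i, j + 1]):
                filled[i, j + 1] = filled[i, j] * rng.choice(ratios[j])
    latest = np.array([tri[i, n - 1 - i] for i in range(n)])
    return float(filled[:, -1].sum() - latest.sum())

reserves = np.array([simulate_reserve() for _ in range(1000)])
```

Repeating the fill-in many times yields an empirical predictive distribution of the reserve rather than a single point estimate, which is the general idea behind bootstrap reserving.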
We review the empirical comparison of Stochastic Actor-oriented Models (SAOMs) and Temporal Exponential Random Graph Models (TERGMs) by Leifeld & Cranmer in this journal [Network Science 7(1):20–51, 2019]. When specifying their TERGM, they use exogenous nodal attributes calculated from the outcome networks' observed degrees instead of the endogenous ERGM equivalents of the structural effects used in the SAOM. This turns the modeled endogeneity into circularity, and the obtained results are tautological. In consequence, their out-of-sample predictions using TERGMs are based on out-of-sample information and thereby predict the future using observations from the future. Thus, their analysis rests on erroneous model specifications that invalidate the article's conclusions. Finally, beyond these specific points, we argue that their evaluation metric, tie-level predictive accuracy, is unsuited for the task of comparing model performance.
Kansas City, Missouri, became one of the major United States hotspots for COVID-19 owing to an increase in the rate of positive COVID-19 test results. Despite the large number of positive cases in Kansas City, MO, spatial-temporal analysis of the data has been little investigated, yet it is critical to detect emerging clusters of COVID-19 and to enforce control and preventive policies within those clusters. We conducted a prospective Poisson spatial-temporal analysis of Kansas City, MO data to detect significant space-time clusters of COVID-19 positive cases at the zip-code level. The analysis focused on daily infected cases over four equal periods of 3 months. We detected temporal patterns of emerging and re-emerging space-time clusters between March 2020 and February 2021. Three statistically significant clusters emerged in the first period, mainly concentrated downtown. This increased to seven clusters in the second period, spreading across a broader region downtown and north of Kansas City. In the third period, nine clusters covered large areas of north and downtown Kansas City, MO. Ten clusters were present in the last period, further extending the infection along State Line Road. The statistical results were communicated to local health officials and provided the necessary guidance for decision-making and allocating resources (e.g., vaccines and testing sites). As more data become available, statistical clustering can be used as a COVID-19 surveillance tool to measure the effects of vaccination.