Arctic rabies virus variant (ARVV) is enzootic in Quebec (Canada) north of the 55th parallel. With climate change, the increased risk of re-incursion of ARVV into more densely populated southern regions raises public and animal health concerns. The objective of this study was to prioritise geographical areas to target for early detection of ARVV incursion south of the 55th parallel, based on the historical spatio-temporal trends of reported rabies in foxes in Quebec. Descriptive analyses of fox rabies cases from 1953 to 2017 were conducted. Three periods showed increases in the number of fox rabies cases in southern regions, indicating incursions from northern areas or neighbouring provinces. The available data, particularly for the central and northern regions of the province, were scarce and of low spatial resolution, making it impossible to identify the path of spread with precision. Hence, we investigated the use of multiple criteria, such as historical rabies cases, human population density and red fox (Vulpes vulpes) relative abundance, to prioritise areas for enhanced surveillance. This study underscores the need to define and maintain new criteria for selecting samples to be analysed, in order to rapidly detect ARVV cases outside the current enzootic area and any potential re-incursion of the virus into the central and southern regions of the province.
The use of natural pozzolans in concrete applications is gaining attention because of the associated environmental, economic, and technical benefits. In this study, reference cemented mine backfill samples were prepared using Portland cement, and experimental samples were prepared by partially replacing Portland cement with 10 or 20 wt.% fly ash as a byproduct (artificial) pozzolan or pumice as a natural pozzolan. Samples were cured for 7, 14, and 28 days to investigate uniaxial compressive strength development. Backfill samples containing 10 wt.% pumice developed nearly the same compressive strength as the reference samples. There is strong potential for pumice to be used in cemented backfill to minimize costs, improve backfill properties, and promote the sustainability of the mining industry.
Probability theory has diverse applications in a plethora of fields, including physics, engineering, computer science, chemistry, biology and economics. This book will familiarize students with various applications of probability theory, stochastic modeling and random processes, using examples from all these disciplines and more. The reader learns via case studies and begins to recognize the sort of problems that are best tackled probabilistically. The emphasis is on conceptual understanding, the development of intuition and gaining insight, keeping technicalities to a minimum. Nevertheless, a glimpse into the depth of the topics is provided, preparing students for more specialized texts while assuming only an undergraduate-level background in mathematics. The wide range of areas covered, never before discussed together in a unified fashion, includes Markov processes and random walks, Langevin and Fokker–Planck equations, noise, the generalized central limit theorem and extreme value statistics, random matrix theory and percolation theory.
This article considers what types of strategic communication messaging regarding migration policy are likely to be more or less effective. To do so, the article summarizes the literature to, first, note the broadly postulated effectiveness of value-based messaging and, second, note how underdefined this concept remains. To overcome this shortcoming, I introduce Schwartz’s psychological theory of “basic human values” and use European Social Survey data to visualize the relationship between these values and attitudes to immigration. I argue that messaging with a value basis that is concordant with that of its audience is more likely to elicit sympathy, whereas messaging that is discordant with the values of its audience is more likely to elicit antipathy. Given the value-balanced orientations of those with moderate attitudes to immigration, persuasive migration messaging should attempt to mobilize the values of its opposition; that is, pro-migration messaging should mobilize Schwartz’s values of conformity, tradition, security, and power, whereas anti-migration messaging should mobilize values of universalism, benevolence, self-direction, and stimulation. I then turn to an inventory of 135 migration communication campaigns provided by the International Centre for Migration Policy Development. I show that few pro-migration campaigns contained value-based messaging, whereas all anti-migration campaigns did. Similarly, very few pro-migration campaigns included values besides “universalism” and “benevolence,” whereas anti-migration campaigns included values associated with both pro- and anti-migration attitudes. I visually demonstrate examples of each case before discussing ramifications for policy communication.
New Zealand has a long-running campylobacter infection (campylobacteriosis) epidemic, with contaminated fresh chicken meat as the major source. This is both the highest-impact zoonosis and the largest food safety problem in the country. Adding to this burden is the recent rapid emergence of antibiotic resistance in campylobacter infections acquired from locally produced chicken. Campylobacteriosis rates halved in 2008, as compared with the previous 5 years, following the introduction of regulatory limits on allowable contamination levels in fresh chicken meat, with large health and economic benefits resulting. In the decade that followed, disease rates do not appear to have declined further. The cumulative impact would equate to an estimated 539 000 cases, 5480 hospitalisations, 284 deaths and economic costs of approximately US$380 million during the last 10 years (2009–2018). Additional regulatory interventions that build on previously successful regulations in this country are urgently needed to control the source of this epidemic.
We study optimal bandwidth selection in nonparametric cointegrating regression where the regressor is a stochastic trend process driven by short or long memory innovations. Unlike stationary regression, the optimal bandwidth is found to be a random sequence which depends on the sojourn time of the process. All random sequences $h_{n}$ that lie within a wide band of rates as the sample size $n\rightarrow \infty $ have the property that local level and local linear kernel estimates are asymptotically normal, which enables inference and conveniently corresponds to limit theory in the stationary regression case. This finding reinforces the distinctive flexibility of data-based bandwidth procedures in nonstationary nonparametric regression. The present results are obtained under exogenous regressor conditions, which are restrictive but enable flexible data-based methods of practical implementation in nonparametric predictive regressions within that environment.
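As a rough illustration of the setting (not the paper's estimator, and not its sojourn-time-dependent optimal bandwidth), the following Python sketch fits a local level (Nadaraya–Watson) kernel estimate to data generated from a random-walk regressor; the regression function, noise level and the simple plug-in bandwidth are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Schematic local level (Nadaraya-Watson) fit in a cointegrating setup:
# x_t is a random walk (stochastic trend) and y_t = f(x_t) + u_t. The
# crude n**(-1/5)-style bandwidth is a placeholder, not the paper's
# sojourn-time-dependent optimal random bandwidth.
n = 2000
x = np.cumsum(rng.standard_normal(n))          # stochastic trend regressor
f = lambda v: np.sin(v / 5.0)                  # assumed regression function
y = f(x) + 0.3 * rng.standard_normal(n)

h = n ** (-1 / 5) * np.std(x)                  # simple plug-in bandwidth
for x0 in np.linspace(x.min(), x.max(), 9):
    k = np.exp(-0.5 * ((x - x0) / h) ** 2)     # Gaussian kernel weights
    fhat = np.sum(k * y) / np.sum(k)
    print(f"x0 = {x0:8.2f}   f(x0) = {f(x0):6.3f}   fhat = {fhat:6.3f}")
```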
The projection of outstanding liabilities caused by incurred losses or claims plays a fundamental role in general insurance operations. Loss reserving methods based on individual losses generally perform better than those based on aggregate losses. This study uses a parametric individual information model that takes into account not only individual losses but also individual information from the policies themselves, such as age and gender. Based on this model, the study proposes a computation procedure for projecting the outstanding liabilities, discusses the estimation and statistical properties of the unknown parameters, and explores the asymptotic behavior of the resulting loss reserves as the portfolio size approaches infinity. Most importantly, this study demonstrates the benefits of individual information for loss reserving. Remarkably, the accuracy gained from individual information is much greater than that gained from considering individual losses. Therefore, it is highly recommended to use individual information in loss reserving in general insurance.
Build the skills for determining appropriate error limits for quantities that matter with this essential toolkit. Understand how to handle a complete project and how uncertainty enters into various steps. Provides a systematic, worksheet-based process for determining error limits on measured quantities, in which all likely sources of uncertainty are explored, measured or estimated. Features instructions on how to carry out error analysis using Excel and MATLAB®, making previously tedious calculations easy. Whether you are new to the sciences or an experienced engineer, this useful resource provides a practical approach to performing error analysis. Suitable as a text for a junior- or senior-level laboratory course in aerospace, chemical and mechanical engineering, and for professionals.
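To give a flavour of the kind of calculation the book systematizes (the book itself works in Excel and MATLAB; this sketch uses Python, and the measured values and uncertainties are invented for illustration), here is first-order, root-sum-square propagation of independent uncertainties through a derived quantity:

```python
import numpy as np

# First-order (root-sum-square) propagation of independent uncertainties
# for a derived quantity: the density rho = m / (pi * r**2 * h) of a
# measured cylinder. Values and uncertainties are invented for illustration.
m, dm = 152.3, 0.1     # mass (g) and its uncertainty
r, dr = 1.25, 0.01     # radius (cm)
h, dh = 10.40, 0.05    # height (cm)

rho = m / (np.pi * r**2 * h)

# Relative errors combine in quadrature; the factor 2 on dr/r comes from
# the exponent of r in the formula (first-order Taylor expansion).
rel_err = np.sqrt((dm / m) ** 2 + (2 * dr / r) ** 2 + (dh / h) ** 2)
print(f"rho = {rho:.3f} +/- {rho * rel_err:.3f} g/cm^3")
```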
Cost-effective sampling design is a problem of major concern in some experiments, especially when measurement of the characteristic of interest is costly, painful or time-consuming. In this article, we investigate ratio-type estimators of the population mean of the study variable, involving either the first or the third quartile of the auxiliary variable, using ranked set sampling (RSS) and extreme ranked set sampling (ERSS) schemes. The properties of the estimators are derived. The estimators under RSS and ERSS are compared to their counterparts under simple random sampling (SRS) for normal data. The numerical results show that the estimators under RSS and ERSS are significantly more efficient than their counterparts under SRS.
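As a rough illustration of why ranking helps (a sketch of a basic ratio estimator under RSS versus SRS, not the article's quartile-based estimators; the bivariate-normal population, set size and cycle count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical bivariate-normal population: Y is the study variable,
# X the auxiliary variable used both for ranking and for the ratio
# adjustment. All parameter values below are illustrative assumptions.
N = 100_000
sd_y, sd_x, rho = 2.0, 1.5, 0.9
cov = [[sd_y**2, rho * sd_y * sd_x], [rho * sd_y * sd_x, sd_x**2]]
Y, X = rng.multivariate_normal([10.0, 8.0], cov, size=N).T
mu_X, mu_Y = X.mean(), Y.mean()   # mu_X assumed known; mu_Y is the target

m, r = 5, 20                      # set size and cycles, so n = m * r = 100
n = m * r

def ratio_srs():
    """Classical ratio estimator of mu_Y under simple random sampling."""
    idx = rng.choice(N, size=n, replace=False)
    return Y[idx].mean() * mu_X / X[idx].mean()

def ratio_rss():
    """Ratio estimator under RSS with (perfect) ranking on X."""
    ys, xs = [], []
    for _ in range(r):
        for i in range(m):
            set_idx = rng.choice(N, size=m, replace=False)
            pick = set_idx[np.argsort(X[set_idx])[i]]  # i-th order statistic
            ys.append(Y[pick]); xs.append(X[pick])
    return np.mean(ys) * mu_X / np.mean(xs)

reps = 2000
mse_srs = np.mean([(ratio_srs() - mu_Y) ** 2 for _ in range(reps)])
mse_rss = np.mean([(ratio_rss() - mu_Y) ** 2 for _ in range(reps)])
print(f"MSE(SRS) = {mse_srs:.5f}, MSE(RSS) = {mse_rss:.5f}, "
      f"relative efficiency = {mse_srs / mse_rss:.2f}")
```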
We prove that, for any $t \ge 3$, there exists a constant c = c(t) > 0 such that any d-regular n-vertex graph with the second largest eigenvalue in absolute value λ satisfying $\lambda \le c{d^{t - 1}}/{n^{t - 2}}$ contains vertex-disjoint copies of $K_t$ covering all but at most ${n^{1 - 1/(8{t^4})}}$ vertices. This provides further support for the conjecture of Krivelevich, Sudakov and Szabó (Combinatorica 24 (2004), pp. 403–426) that (n, d, λ)-graphs with n ∈ 3ℕ and $\lambda \le c{d^2}/n$ for a suitably small absolute constant c > 0 contain triangle-factors. Our arguments combine tools from linear programming with probabilistic techniques, and apply them in a certain weighted setting. We expect this method will be applicable to other problems in the field.
We build a computational framework to support the planning of development and the evaluation of budgetary strategies toward the 2030 Agenda. The methodology takes into account some of the complexities of the political economy underpinning the policymaking process: the multidimensionality of development, the interlinkages between these dimensions, and the inefficiencies of policy interventions, as well as the institutional factors that promote or discourage these inefficiencies. The framework is scalable and usable even with limited publicly available information: development-indicator data. However, it can be further refined as more data becomes available, for example, on public expenditure. We demonstrate its usage through an application for the Mexican federal government. For this, we infer historical policy priorities, that is, the non-observable allocations of transformative resources that generated past changes in development indicators. We also show how to use the tool to assess the feasibility of development goals, to measure policy coherence, and to identify accelerators. Overall, the framework and its computational tools allow policymakers and other stakeholders to adopt a complexity (and quantitative) view in tackling the challenges of the Sustainable Development Goals.
For research in the fields of engineering asset management (EAM) and system health, relevant data reside in the information systems of the asset owners, typically industrial corporations or government bodies. Accessing EAM data sets for research purposes can be a difficult and time-consuming task for academics. To facilitate a more consistent approach toward releasing asset-related data, we have developed a data risk assessment tool (DRAT). This tool evaluates, and suggests controls to manage, the risks associated with the release of EAM datasets to academic entities for research purposes. Factors considered in developing the tool include where accountability for approval sits within organizations, what affects an individual manager’s willingness to approve release, and how trust between universities and industry can be established and damaged. This paper describes the design of the DRAT tool and demonstrates its use on case studies provided by EAM owners for past research projects. The DRAT tool is currently being used to manage the data release process in a government-industry-university research partnership.
Inaccuracy and information measures based on cumulative residual entropy are quite useful and have attracted considerable attention in many fields, including reliability theory. Using a point process martingale approach and a compensator version of Kumar and Taneja's generalized inaccuracy measure of two nonnegative continuous random variables, we define here an inaccuracy measure between two coherent systems when the lifetimes of their common components are observed. We then extend the results to the situation in which the components of the systems are subject to failure according to a doubly stochastic Poisson process.
The retirement systems in many developed countries have been moving increasingly from defined benefit towards defined contribution systems. In defined contribution systems, financial and longevity risks are shifted from pension providers to retirees. In this paper, we use a probabilistic approach to analyse the uncertainty associated with superannuation accumulation and decumulation. We apply an economic scenario generator called the Simulation of Uncertainty for Pension Analysis (SUPA) model to project uncertain future financial and economic variables. This multi-factor stochastic investment model, based on the Monte Carlo method, allows us to obtain the probability distribution of possible outcomes for the superannuation accumulation and decumulation phases, including relevant percentiles. We present two examples to demonstrate the implementation of the SUPA model for the uncertainties during both phases under the current superannuation and Age Pension policy, and test two superannuation policy reforms suggested by the Grattan Institute.
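To illustrate the general Monte Carlo idea (a deliberately simplified sketch, not the multi-factor SUPA model; the i.i.d. lognormal returns, contribution level and horizon are assumptions), the following Python snippet simulates an accumulation phase and reads off percentiles of the terminal balance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy accumulation-phase Monte Carlo, far simpler than the multi-factor
# SUPA model: i.i.d. lognormal annual real returns and a fixed real
# contribution. All parameter values are illustrative assumptions.
n_sims, years = 100_000, 40
contribution = 10_000.0            # annual real contribution
mu, sigma = 0.05, 0.12             # log-return location and volatility

balances = np.zeros(n_sims)
for _ in range(years):
    growth = rng.lognormal(mean=mu, sigma=sigma, size=n_sims)
    balances = (balances + contribution) * growth

for p in (5, 25, 50, 75, 95):
    print(f"{p:2d}th percentile terminal balance: "
          f"{np.percentile(balances, p):>12,.0f}")
```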
The main subject of this introductory book is simple random walk on the integer lattice, with special attention to the two-dimensional case. This fascinating mathematical object is the point of departure for an intuitive and richly illustrated tour of related topics at the active edge of research. It starts with three different proofs of the recurrence of the two-dimensional walk, via direct combinatorial arguments, electrical networks, and Lyapunov functions. After reviewing some relevant potential-theoretic tools, the reader is guided toward the relatively new topic of random interlacements, which can be viewed as a 'canonical soup' of nearest-neighbour loops through infinity, again with emphasis on two dimensions. On the way, readers will visit conditioned simple random walks, which are the 'noodles' in the soup, and also discover how Poisson processes of infinite objects are constructed and review the recently introduced method of soft local times. Each chapter ends with many exercises, making the book suitable for courses and independent study.
Using a stochastic model, we assess the risk of importation-induced local transmission chains in locations seeing few or no local transmissions, and we evaluate the role of quarantine in mitigating this risk. We find that the rate of importations plays a critical role in determining the risk that case importations lead to local transmission chains, more so than local transmission characteristics, i.e. the strength of social distancing and other non-pharmaceutical interventions (NPIs). The latter influence the severity of the outbreaks when they do take place. Quarantine after arrival in a location is an efficacious way to reduce the rate of importations. Locations that see no or low-level local transmission should ensure that the rate of importations remains low. A high level of compliance with post-arrival quarantine followed by testing achieves this objective with less of an impact than travel restrictions or bans.
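A schematic branching-process sketch (my construction for illustration, not the authors' model) captures the qualitative point: with Poisson importations thinned by quarantine and negative-binomial offspring whose mean reflects NPI strength, the importation rate largely determines whether any chain gets going:

```python
import numpy as np

rng = np.random.default_rng(0)

def outbreak_prob(import_rate, R_local, k=0.5, quarantine_eff=0.0,
                  days=90, threshold=50, reps=2000):
    """Probability that importations spark a chain of >= threshold cases.

    Schematic model (not the authors' exact one): importations arrive as a
    Poisson stream thinned by quarantine; each case produces offspring from
    a negative binomial with mean R_local (reflecting NPI strength) and
    dispersion k. All parameter values are illustrative assumptions.
    """
    hits = 0
    for _ in range(reps):
        n_imports = rng.poisson(import_rate * (1 - quarantine_eff) * days)
        total = 0
        for _ in range(n_imports):
            gen = 1                                  # the imported case
            while gen > 0 and total < threshold:
                total += gen
                # sum of `gen` i.i.d. NB(k, p) offspring counts
                gen = rng.negative_binomial(k * gen, k / (k + R_local))
            if total >= threshold:
                break
        hits += total >= threshold
    return hits / reps

for rate in (0.1, 0.5, 2.0):                         # importations per day
    p = outbreak_prob(rate, R_local=0.8, quarantine_eff=0.7)
    print(f"import rate {rate:>4}/day -> P(chain of 50+ cases) = {p:.2f}")
```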
The epidemiological target of lockdowns is to drive the effective reproduction number (Rt) down to less than 1. Key unknowns are how long lockdowns need to be in place to achieve this and which lockdown measures are effective. Daily numbers of laboratory-confirmed community coronavirus disease 2019 cases were extracted from regular reports of the Ministry of Health Singapore from 20 March 2020 to 4 May 2020. We generated daily Rt estimates to determine the time needed for these public health lockdown measures to control the spread of severe acute respiratory syndrome coronavirus 2, as demonstrated by Rt < 1. It took about 14 days of nationwide lockdown for the Rt trend to change and start falling. The upper limit of the 95% confidence interval for the time to Rt < 1 was day 15 of lockdown. We have shown that it is possible to start ‘bending the Rt curve’ about 2 weeks after implementation of specific lockdown measures with strict compliance.
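The abstract does not specify the estimator used; as a rough sketch of one common approach (a simplified Cori-style Rt with an assumed gamma serial interval of mean 5 days and a synthetic case series, not the study's data or method):

```python
import numpy as np
from scipy.stats import gamma

# Simplified Cori-style estimator: R_t = I_t / sum_s w_s * I_(t-s), where
# w is a discretised serial-interval distribution. The gamma parameters
# (mean ~5 days) and the synthetic case series are assumptions; the
# study's exact estimator may differ.
si = gamma(a=2.5, scale=2.0)                   # serial interval, mean 5 days
w = np.diff(si.cdf(np.arange(0, 15)))          # weights for lags 1..14
w /= w.sum()

cases = np.array([12, 15, 20, 28, 35, 47, 54, 60, 58, 61,
                  55, 50, 44, 40, 33, 29, 24, 20, 17, 14])  # synthetic

for t in range(len(w), len(cases)):
    past = cases[t - len(w):t][::-1]           # I_(t-1), ..., I_(t-14)
    r_t = cases[t] / np.dot(w, past)
    print(f"day {t}: R_t ~ {r_t:.2f}")
```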
An acute gastroenteritis (AGE) outbreak caused by a norovirus that occurred at a hospital in Shanghai, China, was studied for molecular epidemiology, host susceptibility and serological roles. Rectal and environmental swabs, paired serum samples and saliva specimens were collected. Pathogens were detected by real-time polymerase chain reaction and DNA sequencing. Histo-blood group antigen (HBGA) phenotypes of saliva samples and their binding to norovirus protruding proteins were determined by enzyme-linked immunosorbent assay. The HBGA-binding interfaces and the surrounding region were analysed with the MegAlign program of DNAstar 7.1. Twenty-seven individuals in two care units developed AGE, at attack rates of 9.02% and 11.68%, respectively. Eighteen (78.2%) symptomatic and five (38.4%) asymptomatic individuals were GII.6/b norovirus positive. Saliva-based HBGA phenotyping showed that all symptomatic and asymptomatic cases belonged to A, B, AB or O secretors. Only four (16.7%) of the 24 tested serum samples showed low blockade activity against HBGA–norovirus binding at the acute phase, whereas 11 (45.8%) samples at the convalescent stage showed seroconversion of such blockade. Specific blockade antibody in the population played an essential role in this norovirus epidemic. The wide HBGA-binding spectrum of GII.6 supports the need for continuous health attention and surveillance in different settings.