We build a computational framework to support the planning of development and the evaluation of budgetary strategies toward the 2030 Agenda. The methodology takes into account some of the complexities of the political economy underpinning the policymaking process: the multidimensionality of development, the interlinkages between these dimensions, and the inefficiencies of policy interventions, as well as institutional factors that promote or discourage these inefficiencies. The framework is scalable and usable even with limited publicly available information: development-indicator data. However, it can be further refined as more data becomes available, for example on public expenditure. We demonstrate its usage through an application for the Mexican federal government. For this, we infer historical policy priorities, that is, the non-observable allocations of transformative resources that generated past changes in development indicators. We also show how to use the tool to assess the feasibility of development goals, to measure policy coherence, and to identify accelerators. Overall, the framework and its computational tools allow policymakers and other stakeholders to embrace a complexity-based (and quantitative) view when tackling the challenges of the Sustainable Development Goals.
For research in the fields of engineering asset management (EAM) and system health, relevant data resides in the information systems of the asset owners, typically industrial corporations or government bodies. For academics, gaining access to EAM data sets for research purposes can be a difficult and time-consuming task. To facilitate a more consistent approach toward releasing asset-related data, we have developed a data risk assessment tool (DRAT). This tool evaluates, and suggests controls to manage, risks associated with the release of EAM data sets to academic entities for research purposes. Factors considered in developing the tool include issues such as where accountability for approval sits in organizations, what affects an individual manager’s willingness to approve release, and how trust between universities and industry can be established and damaged. This paper describes the design of the DRAT tool and demonstrates its use on case studies provided by EAM owners for past research projects. The DRAT tool is currently being used to manage the data release process in a government-industry-university research partnership.
Inaccuracy and information measures based on cumulative residual entropy are quite useful and have attracted considerable attention in many fields including reliability theory. Using a point process martingale approach and a compensator version of Kumar and Taneja's generalized inaccuracy measure of two nonnegative continuous random variables, we define here an inaccuracy measure between two coherent systems when the lifetimes of their common components are observed. We then extend the results to the situation when the components in the systems are subject to failure according to a doubly stochastic Poisson process.
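For orientation, Kumar and Taneja's cumulative residual inaccuracy measure of two nonnegative random variables X and Y with survival functions \(\bar F\) and \(\bar G\) is commonly written as

\[ \mathcal{K}(X,Y) \;=\; -\int_0^{\infty} \bar F(x)\,\log \bar G(x)\,\mathrm{d}x, \]

which reduces to the cumulative residual entropy of X when Y has the same distribution as X; the compensator version used in the paper adapts this quantity to the point process setting.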
The retirement systems in many developed countries have been increasingly moving from defined benefit towards defined contribution systems. In defined contribution systems, financial and longevity risks are shifted from pension providers to retirees. In this paper, we use a probabilistic approach to analyse the uncertainty associated with superannuation accumulation and decumulation. We apply an economic scenario generator called the Simulation of Uncertainty for Pension Analysis (SUPA) model to project uncertain future financial and economic variables. This multi-factor stochastic investment model, based on the Monte Carlo method, allows us to obtain the probability distribution of possible outcomes for the superannuation accumulation and decumulation phases, such as relevant percentiles. We present two examples to demonstrate the implementation of the SUPA model for the uncertainties arising during both phases under the current superannuation and Age Pension policy, and test two superannuation policy reforms suggested by the Grattan Institute.
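As a flavour of the Monte Carlo approach, the following deliberately simplified sketch (assumed i.i.d. lognormal annual returns and illustrative parameter values, not the multi-factor SUPA model itself) shows how accumulation-phase percentiles can be obtained:

    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_accumulation(n_paths=10_000, years=30, salary=80_000.0,
                              contrib_rate=0.11, mean_return=0.06, vol=0.12):
        """Project superannuation balances at retirement under i.i.d. lognormal annual returns."""
        balances = np.zeros(n_paths)
        mu = np.log(1 + mean_return) - 0.5 * vol ** 2   # so E[gross return] = 1 + mean_return
        for _ in range(years):
            gross_returns = rng.lognormal(mean=mu, sigma=vol, size=n_paths)
            balances = balances * gross_returns + salary * contrib_rate
        return balances

    balances = simulate_accumulation()
    print(np.percentile(balances, [5, 25, 50, 75, 95]))   # spread of possible retirement balances

The decumulation phase can be handled in the same way by continuing to draw returns while subtracting withdrawals, and the resulting percentiles then summarise how long balances are likely to last.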
The main subject of this introductory book is simple random walk on the integer lattice, with special attention to the two-dimensional case. This fascinating mathematical object is the point of departure for an intuitive and richly illustrated tour of related topics at the active edge of research. It starts with three different proofs of the recurrence of the two-dimensional walk, via direct combinatorial arguments, electrical networks, and Lyapunov functions. After reviewing some relevant potential-theoretic tools, the reader is guided toward the relatively new topic of random interlacements - which can be viewed as a 'canonical soup' of nearest-neighbour loops through infinity - again with emphasis on two dimensions. On the way, readers will visit conditioned simple random walks - which are the 'noodles' in the soup - and also discover how Poisson processes of infinite objects are constructed and review the recently introduced method of soft local times. Each chapter ends with many exercises, making it suitable for courses and independent study.
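To give a taste of the combinatorial argument (a standard computation, not specific to this book's exposition): rotating coordinates by 45 degrees splits the two-dimensional simple random walk into two independent one-dimensional walks, so

\[ \mathbb{P}\big[S_{2n}=0\big] \;=\; \Big(\binom{2n}{n}2^{-2n}\Big)^{2} \;\sim\; \frac{1}{\pi n}, \qquad\text{hence}\qquad \sum_{n\ge 1}\mathbb{P}\big[S_{2n}=0\big]=\infty, \]

and an infinite expected number of returns to the origin is equivalent to recurrence.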
Using a stochastic model, we assess the risk of importation-induced local transmission chains in locations seeing few or no local transmissions and evaluate the role of quarantine in the mitigation of this risk. We find that the rate of importations plays a critical role in determining the risk that case importations lead to local transmission chains, more so than local transmission characteristics, i.e. the strength of social distancing measures (non-pharmaceutical interventions, NPIs). The latter influences the severity of the outbreaks when they do take place. Quarantine after arrival in a location is an efficacious way to reduce the rate of importations. Locations that see no or low-level local transmission should ensure that the rate of importations remains low. A high level of compliance with post-arrival quarantine followed by testing achieves this objective with less of an impact than travel restrictions or bans.
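A toy version of such a stochastic model (not the authors' implementation; the Poisson importation rate, the negative-binomial offspring distribution and the outbreak threshold below are illustrative assumptions) looks as follows:

    import numpy as np

    rng = np.random.default_rng(0)

    def chain_size(R=0.8, k=0.1, cap=50):
        """Total size of the transmission chain started by one imported case,
        truncated at `cap`; offspring are negative-binomial to capture superspreading."""
        total, active = 1, 1
        while active and total < cap:
            offspring = rng.negative_binomial(k, k / (k + R), size=active).sum()
            total += offspring
            active = offspring
        return total

    def any_large_chain(import_rate=2.0, days=90, cap=50, **kw):
        """Does at least one importation over `days` spark a chain reaching `cap` cases?"""
        n_imports = rng.poisson(import_rate * days)
        return any(chain_size(cap=cap, **kw) >= cap for _ in range(n_imports))

    trials = 500
    risk = np.mean([any_large_chain() for _ in range(trials)])
    print(f"risk of a large local transmission chain: {risk:.2f}")

Varying import_rate and R separately in such a sketch shows how importation pressure and local transmission conditions each contribute to the overall risk.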
The epidemiological target of lockdowns is to drive down the effective reproduction number (Rt) to less than 1. A key unknown is the duration for which lockdowns need to be in place to achieve this, and which lockdown measures are effective. Daily numbers of laboratory-confirmed community coronavirus disease 2019 cases were extracted from regular reports of the Ministry of Health Singapore from 20 March 2020 to 4 May 2020. We generated daily Rt to estimate the time needed for these public health lockdown measures to control the spread of severe acute respiratory syndrome coronavirus 2, as demonstrated by Rt < 1. It took about 14 days of nationwide lockdown for the Rt trend to change and start falling. The upper limit of the 95% confidence interval for the time to Rt < 1 was day 15 of lockdown. We have shown that it is possible to start ‘bending the Rt curve’ about 2 weeks after implementation of specific lockdown measures with strict compliance.
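The abstract does not spell out the estimator, but a common way to obtain daily Rt from a case series is a windowed renewal-equation (Cori-style) estimate; the serial-interval parameters and example case counts below are illustrative assumptions, not the paper's data:

    import numpy as np
    from scipy.stats import gamma

    def estimate_rt(incidence, si_mean=5.0, si_sd=2.5, window=7):
        """Windowed renewal-equation estimate of Rt (point estimates only, no credible intervals)."""
        incidence = np.asarray(incidence, dtype=float)
        shape, scale = (si_mean / si_sd) ** 2, si_sd ** 2 / si_mean
        w = np.diff(gamma.cdf(np.arange(0, 21), a=shape, scale=scale))   # discretised serial interval
        w /= w.sum()
        # Infection pressure on day t: sum over s of w[s] * incidence[t - s].
        lam = np.array([np.dot(w[:t][::-1], incidence[max(0, t - len(w)):t])
                        for t in range(len(incidence))])
        rt = np.full(len(incidence), np.nan)
        for t in range(window, len(incidence)):
            num = incidence[t - window + 1:t + 1].sum()
            den = lam[t - window + 1:t + 1].sum()
            rt[t] = num / den if den > 0 else np.nan
        return rt

    # Made-up case series: Rt falls below 1 once incidence starts declining.
    cases = [20, 25, 30, 40, 45, 50, 55, 50, 45, 40, 30, 25, 20, 15, 12, 10, 8, 6, 5, 4]
    print(np.round(estimate_rt(cases), 2))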
An acute gastroenteritis (AGE) outbreak caused by a norovirus that occurred at a hospital in Shanghai, China, was studied for molecular epidemiology, host susceptibility and the role of serum antibodies. Rectal and environmental swabs, paired serum samples and saliva specimens were collected. Pathogens were detected by real-time polymerase chain reaction and DNA sequencing. Histo-blood group antigen (HBGA) phenotypes of saliva samples and their binding to norovirus protruding proteins were determined by enzyme-linked immunosorbent assay. The HBGA-binding interfaces and the surrounding region were analysed with the MegAlign program of DNAstar 7.1. Twenty-seven individuals in two care units developed AGE, with attack rates of 9.02% and 11.68%. Eighteen (78.2%) symptomatic and five (38.4%) asymptomatic individuals were positive for GII.6/b norovirus. Saliva-based HBGA phenotyping showed that all symptomatic and asymptomatic cases were A, B, AB or O secretors. Only four (16.7%) of the 24 tested serum samples showed low blockade activity against HBGA–norovirus binding at the acute phase, whereas 11 (45.8%) samples at the convalescent stage showed seroconversion of such blockade. Specific blockade antibody in the population played an essential role in this norovirus epidemic. The wide HBGA-binding spectrum of GII.6 supports the need for continuous health attention and surveillance in different settings.
We investigated a large gastroenteritis outbreak that occurred in Northern Greece in 2019. A case was defined as anyone presenting with diarrhoea and/or vomiting from 24 January 2019 to 4 February 2019. We conducted a case-control study (CCS) using random selection of participants >16 years of age, residents of town X, who visited the health care centre between 25 and 28 January 2019. Moreover, we conducted a retrospective cohort study (CS) at the four elementary schools of the town. We collected clinical and water samples, and the water supply system was inspected. In total, we recorded 638 cases (53% female; median age 44 years, range 0–93). Forty-eight cases and 52 controls participated in the CCS, and 236 students in the CS. Both the CCS and the CS indicated tap water as the most likely source (OR 10, 95% CI 2.09–93.4, explaining 95.7% of cases; RR = 2.22, 95% CI 1.42–3.46, respectively). More than one pathogen was detected in stool samples from 6 of the 11 cases tested (norovirus, Campylobacter jejuni, enterohaemorrhagic E. coli (EHEC) and enteropathogenic E. coli (EPEC)). Water samples, collected after ad hoc chlorination, tested negative. Technical failures affecting the water tanks were identified. Our results suggested a waterborne outbreak. We recommended regular monitoring of the water supply system and immediate repair of the technical failures.
This textbook introduces fundamental concepts, major models, and popular applications of pattern recognition for a one-semester undergraduate course. To ensure student understanding, the text focuses on a relatively small number of core concepts with an abundance of illustrations and examples. Concepts are reinforced with hands-on exercises that nurture students' problem-solving skills. New concepts and algorithms are framed by real-world context and established as part of the big picture introduced in an early chapter. A problem-solving strategy is employed in several chapters to equip students with an approach for tackling new problems in pattern recognition. The text also points out common errors that newcomers to pattern recognition may encounter, and fosters readers' ability to find useful resources and independently solve new pattern recognition tasks through various working examples. Students with an undergraduate understanding of mathematical analysis, linear algebra, and probability will be well prepared to master the concepts and mathematical analysis presented here.
In this paper, we apply flexible data-driven analysis methods to large-scale mass transit data to identify areas for improvement in the engineering and operation of urban rail systems. Specifically, we use data from automated fare collection (AFC) and automated vehicle location (AVL) systems to obtain a more precise characterisation of the drivers of journey time variance on the London Underground, and thus an improved understanding of delay. Total journey times are decomposed via a probabilistic assignment algorithm, and semiparametric regression is undertaken to disentangle the effects of passenger-specific travel characteristics from network-related factors. For total journey times, we find that network characteristics, primarily train speeds and headways, account for the majority of journey time variance. However, within the access and egress time components, which passengers typically perceive as twice as onerous, passenger-level heterogeneity is more influential. On average, we find that intra-passenger heterogeneity accounts for 6% and 19% of the variance in access and egress times, respectively, and that inter-passenger effects have a similar or greater degree of influence than static network characteristics. The analysis shows that while network-specific characteristics are the primary drivers of journey time variance in absolute terms, a nontrivial proportion of passenger-perceived variance would be influenced by passenger-specific characteristics. The findings have potential applications related to improving the understanding of passenger movements within stations; for example, the analysis can be used to assess the relative way-finding complexity of stations, which can in turn guide transit operators in targeting potential interventions.
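The following minimal sketch (synthetic data and ordinary least squares rather than the paper's probabilistic assignment and semiparametric regression; all variable names are illustrative) conveys the general idea of attributing journey time variance to network versus passenger factors via incremental explained variance:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 5_000

    # Synthetic journey records: two network-level factors and one passenger-level factor.
    headway = rng.exponential(3.0, n)            # minutes between trains
    train_speed = rng.normal(30.0, 3.0, n)       # km/h
    walk_speed = rng.normal(1.3, 0.2, n)         # passenger-specific walking speed, m/s
    journey_time = 20 + 0.5 * headway - 0.3 * train_speed - 4.0 * walk_speed + rng.normal(0, 2, n)

    X_network = np.column_stack([headway, train_speed])
    X_full = np.column_stack([headway, train_speed, walk_speed])

    r2_network = LinearRegression().fit(X_network, journey_time).score(X_network, journey_time)
    r2_full = LinearRegression().fit(X_full, journey_time).score(X_full, journey_time)
    print(f"variance explained by network factors: {r2_network:.2f}")
    print(f"additional variance from passenger factors: {r2_full - r2_network:.2f}")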
A number of theoretical results have provided sufficient conditions for the selection of payoff-efficient equilibria in games played on networks when agents imitate successful neighbors and make occasional mistakes (stochastic stability). However, those results only guarantee full convergence in the long run, which might be too restrictive in reality. Here, we employ a more gradual approach relying on agent-based simulations, avoiding the double limit underlying these analytical results. We focus on the circular-city model, for which a sufficient condition on the population size relative to the neighborhood size was identified by Alós-Ferrer & Weidenholzer [(2006) Economics Letters, 93, 163–168]. Using more than 100,000 agent-based simulations, we find that selection of the efficient equilibrium also prevails for a large set of parameters that violate the previously identified condition. Interestingly, the extent to which efficiency obtains decreases gradually as one moves away from the boundary of this condition.
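A minimal imitate-the-best dynamic on a ring, in the spirit of the simulations described above (the stag-hunt payoffs, neighbourhood radius and mistake probability are illustrative choices, not the paper's exact parameterisation):

    import numpy as np

    rng = np.random.default_rng(7)

    # Coordination game: action 1 is payoff-efficient, action 0 is risk-dominant.
    # PAYOFF[a, b] = payoff from playing a against an opponent playing b.
    PAYOFF = np.array([[3.0, 3.0],
                       [0.0, 4.0]])

    def step(actions, k=1, eps=0.01):
        """Play neighbours within distance k on the ring, then imitate the
        highest-earning agent in the neighbourhood; mistakes occur with probability eps."""
        n = len(actions)
        payoffs = np.array([sum(PAYOFF[actions[i], actions[(i + d) % n]]
                                for d in range(-k, k + 1) if d != 0)
                            for i in range(n)])
        new = actions.copy()
        for i in range(n):
            group = [(i + d) % n for d in range(-k, k + 1)]   # neighbourhood including i
            new[i] = actions[max(group, key=lambda j: payoffs[j])]
            if rng.random() < eps:
                new[i] = rng.integers(2)                      # occasional mistake
        return new

    actions = rng.integers(2, size=60)
    for _ in range(500):
        actions = step(actions)
    print("share playing the efficient action:", actions.mean())

Sweeping the population size and the neighbourhood radius k in such a loop is how one maps out where efficient play prevails relative to the analytical condition.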
This chapter is a tutorial about some of the key issues in semantics of the first-order aspects of probabilistic programming languages for statistical modelling – languages such as Church, Anglican, Venture and WebPPL. We argue that s-finite measures and s-finite kernels provide a good semantic basis.
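For readers new to the terminology: a measure is s-finite when it is a countable sum of finite measures, and a kernel k from X to Y is s-finite when it decomposes as a countable sum of finite kernels,

\[ \mu=\sum_{n\in\mathbb{N}}\mu_n \ \ \text{with each } \mu_n \text{ finite}, \qquad k=\sum_{n\in\mathbb{N}}k_n \ \ \text{with } \sup_{x\in X}k_n(x,Y)<\infty . \]

This class is closed under composition, which is part of what makes it a convenient semantic basis for languages that mix sampling with unnormalised scoring.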
Reasoning about probabilistic programs is hard because it compounds the difficulty of classic program analysis with sometimes subtle questions of probability theory. Having precise mathematical models, or semantics, describing their behaviour is therefore particularly important. In this chapter, we review two probabilistic semantics: first, an operational semantics, which models the local, step-by-step behaviour of programs; then a denotational semantics, which describes global behaviour as an operator transforming probability distributions over memory states.
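As a toy illustration of the denotational view (this is not the semantics defined in the chapter; the commands assign_flip and seq and the state representation are invented for the example), commands can be modelled in Python as operators on finite distributions over memory states:

    from collections import defaultdict

    # A memory state is a tuple of (variable, value) pairs;
    # a distribution is a dict mapping states to probabilities.

    def assign_flip(var, p):
        """Denotation of `var := flip(p)`: split each state's mass between the two outcomes."""
        def transform(dist):
            out = defaultdict(float)
            for state, prob in dist.items():
                env = dict(state)
                for value, weight in ((True, p), (False, 1.0 - p)):
                    new_state = tuple(sorted({**env, var: value}.items()))
                    out[new_state] += prob * weight
            return dict(out)
        return transform

    def seq(*commands):
        """Denotation of sequential composition: compose the distribution transformers."""
        def transform(dist):
            for command in commands:
                dist = command(dist)
            return dist
        return transform

    program = seq(assign_flip("x", 0.5), assign_flip("y", 0.3))
    print(program({(): 1.0}))   # distribution over the four final memory states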
Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and masking of soft errors is challenging, expensive and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning and big data analytics) can often naturally tolerate soft errors. In this chapter, we demonstrate how a programming language, Rely, enables developers to reason about and verify the quantitative reliability of an application – namely, the probability that it produces the correct result when executed on unreliable hardware. Rely leverages a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering.
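The arithmetic at the heart of such an analysis is easy to convey: if a straight-line computation performs n operations on unreliable hardware and the i-th operation silently corrupts its result with probability p_i, independently, then the probability that the whole computation is correct satisfies

\[ \text{reliability} \;\ge\; \prod_{i=1}^{n}(1-p_i), \]

and a static analysis can check developer-specified lower bounds of this kind against a hardware reliability specification (the independence assumption and the restriction to straight-line code are simplifications made here for exposition).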
Church's λ-calculus has become a universally accepted model of pure functional programming, and its properties have been thoroughly scrutinised by the research community in the last 90 years. Many variations of it have been introduced for the sake of capturing programming with various forms of effects, thus going beyond pure functional programming. This chapter is meant to be a gentle introduction to a family of such calculi, namely probabilistic λ-calculi, in their two main variations: randomised λ-calculi and Bayesian λ-calculi. We focus our attention on the operational semantics, expressive power and termination properties of randomised λ-calculi, only giving some hints and references about denotational models and Bayesian λ-calculi.
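In a typical presentation (the chapter's exact formulation may differ), a randomised λ-calculus extends the usual syntax with a binary probabilistic choice \(M \oplus N\) whose small-step rules are

\[ M \oplus N \;\xrightarrow{\;1/2\;}\; M, \qquad\qquad M \oplus N \;\xrightarrow{\;1/2\;}\; N, \]

so that a closed term evaluates not to a single value but to a subdistribution over values; for instance, \((\lambda x.x) \oplus \Omega\) returns the identity with probability 1/2 and diverges with probability 1/2, which is why termination itself becomes a quantitative property.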
The quantitative analysis of probabilistic programs answers queries involving the expected values of program variables and expressions involving them, as well as bounds on the probabilities of assertions. In this chapter, we will present the use of concentration of measure inequalities to reason about such bounds. First, we will briefly present and motivate standard concentration of measure inequalities. Next, we survey approaches to reason about quantitative properties using concentration of measure inequalities, illustrating these on numerous motivating examples. Finally, we discuss currently open challenges in this area for future work.
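As a concrete instance of the kind of bound involved (a textbook inequality, not a result of the chapter): if a program variable x accumulates the sum of n independent samples, each bounded in [0, 1], then Hoeffding's inequality gives

\[ \Pr\big[\,|x-\mathbb{E}[x]|\ge t\,\big]\;\le\;2\exp\!\left(-\frac{2t^{2}}{n}\right), \]

which immediately yields a bound on the probability that the assertion \(|x-\mathbb{E}[x]|\ge t\) holds at the end of the program, without computing the distribution of x exactly.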
Property-based random testing à la QuickCheck requires building efficient generators for well-distributed random data satisfying complex logical predicates, but writing these generators can be difficult and error prone. This chapter introduces a probabilistic domain-specific language in which generators are conveniently expressed by decorating predicates with lightweight annotations to control both the distribution of generated values and the amount of constraint solving that happens before each variable is instantiated. This language, called Luck, makes generators easier to write, read and maintain. We give Luck a probabilistic formal semantics and prove several fundamental properties, including the soundness and completeness of random generation with respect to a standard predicate semantics. We evaluate Luck on common examples from the property-based testing literature and on two significant case studies, showing that it can be used in complex domains with comparable bug-finding effectiveness and a significant reduction in testing code size compared to handwritten generators.
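To see the problem Luck addresses (the sketch below is plain Python, not Luck syntax, and the predicate is invented for illustration), compare naive generate-and-filter with a generator that builds satisfying values by construction:

    import random

    def is_sorted_small(xs):
        """Predicate: a list of at most five ints in [0, 100) in nondecreasing order."""
        return len(xs) <= 5 and all(0 <= x < 100 for x in xs) and xs == sorted(xs)

    def naive_generator():
        """Generate-and-filter: draw arbitrary lists and reject those violating the predicate (wasteful)."""
        while True:
            xs = [random.randrange(100) for _ in range(random.randrange(6))]
            if is_sorted_small(xs):
                return xs

    def tuned_generator():
        """Constraint-aware: build values satisfying the predicate by construction,
        drawing each element from the still-feasible range so the distribution stays controllable."""
        xs, lo = [], 0
        for _ in range(random.randrange(6)):
            lo = random.randrange(lo, 100)
            xs.append(lo)
        return xs

    sample = tuned_generator()
    assert is_sorted_small(sample)

Luck's contribution is that the second style does not have to be hand-written and kept in sync with the predicate: the predicate itself, decorated with lightweight annotations, serves as the generator.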