This paper provides practical guidance to UK-based financial institutions (UKFIs) that are subject to the “operational resilience” guideline requirements of the Bank of England (BoE), Prudential Regulation Authority and Financial Conduct Authority, issued in 2021 and fully effective from 31 March 2025. It contains practical suggestions and recommendations to assist UKFIs in implementing the guidelines. The scope of the paper covers (a) reviewing the latest equivalent operational resilience guidance in other countries and internationally, (b) identifying key issues related to risk culture, risk appetite, information technology, tolerance setting, risk modelling, scenario planning and customer-oriented operational resilience, (c) identifying a framework for operational resilience based on a thorough understanding of these parameters, and (d) designing and implementing an operational resilience maturity dashboard based on a sample of large UKFIs. The study also contains recommendations for further action, including enhanced controls and operational risk management frameworks. It concludes by identifying imperative policy actions to make implementation of the guidelines more effective.
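To make the proposed dashboard concrete, here is a minimal sketch of how per-dimension maturity ratings could be rolled up into a single score; the dimensions, the 1–5 scale, and the equal weights are illustrative assumptions, not the instrument used in the paper.

```python
# Illustrative sketch of an operational resilience maturity dashboard.
# Dimensions, the 1-5 maturity scale, and the equal weights are
# hypothetical assumptions, not the paper's actual instrument.

DIMENSIONS = {
    "risk_culture": 0.2,
    "risk_appetite": 0.2,
    "information_technology": 0.2,
    "tolerance_setting": 0.2,
    "scenario_planning": 0.2,
}

def maturity_score(ratings: dict[str, int]) -> float:
    """Weighted average of per-dimension ratings (1 = initial, 5 = optimised)."""
    return sum(DIMENSIONS[d] * ratings[d] for d in DIMENSIONS)

# Hypothetical ratings for two sample institutions.
firms = {
    "Firm A": {"risk_culture": 3, "risk_appetite": 4, "information_technology": 2,
               "tolerance_setting": 3, "scenario_planning": 4},
    "Firm B": {"risk_culture": 4, "risk_appetite": 3, "information_technology": 4,
               "tolerance_setting": 4, "scenario_planning": 3},
}

for name, ratings in firms.items():
    print(f"{name}: overall maturity {maturity_score(ratings):.2f} / 5")
```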
In this paper we adopt the probabilistic mean value theorem to study differences between the variances of transformed and stochastically ordered random variables, based on a suitable extension of the equilibrium operator. We also develop a rigorous approach to expressing the variance of transformed random variables. This is based on a joint distribution which, in turn, involves the variance of the original random variable, as well as its mean residual lifetime and mean inactivity time. We then provide applications to the additive hazards model and to some well-known random variables of interest in actuarial science. These deal with a new notion, called the ‘centred mean residual lifetime’, and a suitably related stochastic order. Finally, we address the analysis of differences between the variances of transformed discrete random variables using a discrete version of the equilibrium operator.
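For orientation, the classical probabilistic mean value theorem underlying the paper can be stated as follows; this is the standard form, while the paper's variance-focused extensions go further.

```latex
% Probabilistic mean value theorem (classical statement): if
% X \le_{st} Y with finite, unequal means and g is differentiable
% with \mathbb{E}[g'(Z)] finite, then
\mathbb{E}[g(Y)] - \mathbb{E}[g(X)]
  = \big(\mathbb{E}[Y] - \mathbb{E}[X]\big)\,\mathbb{E}[g'(Z)],
\qquad
f_Z(t) = \frac{\mathbb{P}(X \le t) - \mathbb{P}(Y \le t)}
              {\mathbb{E}[Y] - \mathbb{E}[X]},
% where Z is a random variable with the density f_Z above;
% stochastic ordering guarantees f_Z \ge 0, and it integrates to 1.
```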
The gambler’s ruin problem for correlated random walks (CRWs), both with and without delays, is addressed using the optional stopping theorem for martingales. We derive closed-form expressions for the ruin probabilities and the expected game duration for CRWs with increments $\{1,-1\}$ and for symmetric CRWs with increments $\{1,0,-1\}$ (CRWs with delays). Additionally, a martingale technique is developed for general CRWs with delays. The gambler’s ruin probability for a game involving bets on two arbitrary patterns is also examined.
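As an empirical companion to the closed-form results, the following Monte Carlo sketch estimates the ruin probability for a CRW with increments $\{1,-1\}$; the one-step persistence parameter and the boundaries are illustrative choices, not values from the paper.

```python
import random

def ruin_probability(a: int, b: int, p_first: float, q_persist: float,
                     trials: int = 100_000, seed: int = 1) -> float:
    """Monte Carlo estimate of the probability that a correlated random
    walk started at 0 hits -a before +b.

    The first step is +1 with probability p_first; each later step
    repeats the previous step's direction with probability q_persist
    (this one-step memory is what makes the walk 'correlated').
    Parameter values below are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        step = 1 if rng.random() < p_first else -1
        pos = step
        while -a < pos < b:
            if rng.random() >= q_persist:   # switch direction
                step = -step
            pos += step
        ruined += pos <= -a
    return ruined / trials

# Symmetric setup, so the estimate should be close to 0.5.
print(ruin_probability(a=5, b=5, p_first=0.5, q_persist=0.6))
```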
Cryptosporidium parvum is a well-established cause of gastrointestinal illness in both humans and animals and often causes outbreaks at animal contact events, despite the availability of a code of practice that provides guidance on the safe management of these events. We describe a large C. parvum outbreak following a lamb-feeding event at a commercial farm in Wales in 2024, alongside findings from a cohort study to identify high-risk exposures. Sixty-seven cases were identified, of which 57 were laboratory-confirmed C. parvum with similar genotypes. Environmental investigations found a lack of adherence to established guidance. The cohort study identified 168 individuals with cryptosporidiosis-like illness from 540 exposure questionnaires (distributed via email to 790 lead bookers). Cases were more likely to have had closer contact with lambs (odds ratio (OR) for kissing lambs = 2.4, 95% confidence interval (95% CI): 1.2–4.8). A multivariable analysis found cases were more likely to be under 10 years old (adjusted OR (aOR) = 4.5, 95% CI: 2.0–10.0) and to have had visible faeces on their person (aOR = 3.6, 95% CI: 2.1–6.2). We provide evidence that close contact at lamb-feeding events increases the likelihood of illness, suggesting that farms should limit animal contact at these events and that revisions to established codes of practice may be necessary. Enhancing risk awareness among farmers and visitors is needed, particularly regarding children.
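For readers who want to reproduce this style of estimate, the snippet below computes an odds ratio with a Wald 95% confidence interval from a generic 2×2 exposure table; the counts are placeholders, not the study's data.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo, hi = or_ * math.exp(-z * se), or_ * math.exp(z * se)
    return or_, lo, hi

# Placeholder counts for a hypothetical "kissed lambs" exposure.
print(odds_ratio_ci(a=40, b=60, c=128, d=312))
```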
In this paper we study the optimal multiple stopping problem with weak regularity for the reward, where the reward is given by a family of random variables indexed by stopping times. When the reward family is upper semicontinuous in expectation along stopping times, we construct the optimal multiple stopping strategy using the auxiliary optimal single stopping problems. We also obtain the corresponding results when the reward is given by a progressively measurable process.
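For orientation, the reduction to auxiliary single stopping problems is easiest to see in the classical discrete-time case with an adapted reward process and at most one exercise per date; the paper's weak-regularity setting generalises this picture.

```latex
% Classical discrete-time illustration (horizon T, adapted reward Y_t,
% at most one exercise per date) -- not the paper's general setting.
% U^{(k)}_t is the value of holding k exercise rights at time t:
U^{(0)}_t \equiv 0, \qquad U^{(k)}_T = Y_T \quad (k \ge 1),
U^{(k)}_t = \max\!\Big(
  \underbrace{Y_t + \mathbb{E}\big[U^{(k-1)}_{t+1} \,\big|\, \mathcal{F}_t\big]}_{\text{exercise now}},\;
  \underbrace{\mathbb{E}\big[U^{(k)}_{t+1} \,\big|\, \mathcal{F}_t\big]}_{\text{wait}}
\Big), \quad t < T.
% Each U^{(k)} is the Snell envelope of the auxiliary single-stopping
% reward Y_t + E[U^{(k-1)}_{t+1} | F_t]; one multiple stopping problem
% thus unfolds into k nested single stopping problems.
```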
This article aims to facilitate the widespread application of Energy Management Systems (EMSs), especially in buildings and cities, in order to support the realization of future carbon-neutral energy systems. We argue that economic viability is a major obstacle to utilizing EMSs at scale and that provisioning forecasting and optimization algorithms as a service can contribute substantially to achieving that viability. To this end, we present the Energy Service Generics software framework, which allows fully functional services to be derived from existing forecasting or optimization code with ease. This work documents the strictly systematic development of the framework, beginning with a requirements analysis, from which a sophisticated design concept is derived, followed by a description of the framework’s implementation. Furthermore, we present the concept of the Open Energy Services community, our effort to continuously maintain the service framework and also provide ready-to-use forecasting and optimization services. Finally, we present an evaluation of our framework and community concept, as well as a demarcation between our work and the current state of the art.
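The Energy Service Generics API itself is not reproduced here; the sketch below only illustrates the general pattern the framework automates, namely exposing existing forecasting code as a web service. It uses FastAPI, and every name in it (ForecastRequest, naive_forecast, the /forecast route) is invented for illustration.

```python
# Illustrative only: NOT the Energy Service Generics API. This shows the
# generic wrap-a-function-as-a-service pattern with FastAPI; all schema
# and function names are invented.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ForecastRequest(BaseModel):
    history: list[float]   # historical measurements, oldest first
    horizon: int           # number of steps to forecast

class ForecastResponse(BaseModel):
    forecast: list[float]

def naive_forecast(history: list[float], horizon: int) -> list[float]:
    """Stand-in for existing forecasting code: repeat the last observation."""
    return [history[-1]] * horizon

@app.post("/forecast", response_model=ForecastResponse)
def forecast(req: ForecastRequest) -> ForecastResponse:
    return ForecastResponse(forecast=naive_forecast(req.history, req.horizon))

# Run with: uvicorn service:app   (assuming this file is service.py)
```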
While the Sustainable Development Goals (SDGs) were being negotiated, global policymakers assumed that advances in data technology and statistical capabilities, what was dubbed the “data revolution”, would accelerate development outcomes by improving policy efficiency and accountability. The 2014 report to the United Nations Secretary-General, “A World That Counts”, framed the data-for-development agenda and proposed four pathways to impact: measuring for accountability, generating disaggregated and real-time data supplies, improving policymaking, and implementing efficiency. The subsequent experience suggests that while many recommendations were implemented globally to advance the production of data and statistics, the impact on SDG outcomes has been inconsistent. Progress towards SDG targets has stalled despite advances in statistical systems capability, data production, and data analytics. The coherence of the SDG policy agenda has undoubtedly improved aspects of data collection and supply, with SDG frameworks standardizing greater indicator reporting. However, other events, including the response to COVID-19, have played catalytic roles in statistical system innovation. Overall, increased financing for statistical systems has not materialized, though planning and monitoring of these national systems may have longer-term impacts. This article reviews how assumptions about the data revolution have evolved and where new assumptions are necessary to advance impact across the data value chain. These include focusing on measuring what matters most for decision-making needs across polycentric institutions, leveraging the SDGs for global data standardization and strategic financial mobilization, closing data gaps while enhancing policymaker analytic capabilities, and fostering collective intelligence to drive data innovation, credible information, and sustainable development outcomes.
This study analyzed standardized excess mortality due to specific causes during the COVID-19 pandemic across 33 European countries, using Eurostat data (2016–2021) and Our World in Data databases. Causes included circulatory and respiratory diseases, neoplasms, transport accidents, and “other” causes (e.g., diabetes, dementia, ill-defined conditions). Additional variables such as vaccination rates, economic and health indicators, demographics, and government stringency measures were also examined. Key findings include: (1) Most European countries (excluding Central and Eastern Europe) recorded lower-than-expected excess mortality from circulatory and respiratory diseases, neoplasms, and transport accidents; Ireland had the lowest excess respiratory mortality in both 2020 and 2021. (2) Croatia, Cyprus, Malta, and Turkey showed significant positive excess mortality from “other” causes, potentially linked to public health restrictions, with Turkey as an exception. (3) Regression analysis found that a higher human development index and higher vaccination rates were associated with lower excess mortality. Policy implications are: (1) Statistically significant positive or negative cause-specific excess mortality may indicate future health trends; (2) the pandemic and government stringency measures negatively affected mortality from “other” causes; (3) strengthening health system resilience, investing in digital medicine, directing aid to countries with weaker systems, and supporting disadvantaged groups are key recommendations.
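As a minimal illustration of the excess-mortality logic, one can compare observed deaths against a pre-pandemic baseline; the paper's standardization procedure is more involved, and the figures below are invented.

```python
# Minimal sketch of cause-specific excess mortality: observed deaths in a
# pandemic year versus the 2016-2019 average. The study additionally
# standardizes for population structure; all numbers here are invented.
pre_pandemic = {"circulatory": [23_100, 22_800, 22_500, 22_300]}  # 2016-2019
observed_2020 = {"circulatory": 21_900}

for cause, history in pre_pandemic.items():
    expected = sum(history) / len(history)
    excess = observed_2020[cause] - expected
    print(f"{cause}: expected {expected:.0f}, observed {observed_2020[cause]}, "
          f"excess {excess:+.0f} ({excess / expected:+.1%})")
```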
We propose a novel micro-level Cox model for incurred but not reported (IBNR) claim counts based on hidden Markov models. Initially formulated as a continuous-time model, it addresses the complexity of incorporating temporal dependencies and policyholder risk attributes. However, the continuous-time model faces significant challenges in maximizing the likelihood and fitting right-truncated reporting delays. To overcome these issues, we introduce two discrete-time versions: one incorporating unsystematic randomness in reporting delays through a Dirichlet distribution, and one without. We provide the EM algorithm for parameter estimation for all three models and apply them to an auto-insurance dataset to estimate IBNR claim counts. Our results show that while all models perform well, the discrete-time versions demonstrate superior performance by jointly modeling delay and frequency, with the Dirichlet-based model capturing additional variability in reporting delays. This approach enhances the accuracy and reliability of IBNR reserving, offering a flexible framework adaptable to different levels of granularity within an insurance portfolio.
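To fix ideas about the quantity being estimated (this is not the authors' Cox hidden Markov model), the toy simulation below generates claims with random reporting delays and counts those that have occurred but are not yet reported at a valuation date.

```python
import random

rng = random.Random(42)

# Toy simulation of an IBNR claim count; the occurrence and delay
# distributions below are illustrative assumptions only.
VALUATION_DATE = 365.0   # days; end of the observation year
MEAN_DELAY = 60.0        # assumed mean reporting delay in days

n_claims = 200
occurrences = [rng.uniform(0, VALUATION_DATE) for _ in range(n_claims)]
delays = [rng.expovariate(1 / MEAN_DELAY) for _ in range(n_claims)]

reported = sum(t + d <= VALUATION_DATE for t, d in zip(occurrences, delays))
ibnr = n_claims - reported   # occurred but not yet reported

print(f"reported: {reported}, IBNR: {ibnr}")
```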
We prove a Poisson process approximation result for stabilising functionals of a determinantal point process, using concrete couplings of determinantal processes with different Palm measures and exploiting their association properties. We then focus on the Ginibre process and show, in the asymptotic scenario of an increasing observation window, that the process of points with a large nearest-neighbour distance converges after a suitable scaling to a Poisson point process. As a corollary, we obtain the scaling of the maximum nearest-neighbour distance in the Ginibre process, which turns out to differ from its analogue for independent points.
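A finite Ginibre sample is easy to generate as the eigenvalues of a matrix with i.i.d. standard complex Gaussian entries, which makes the quantity studied here simple to explore numerically; the sketch below computes the maximum nearest-neighbour distance in such a sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Finite Ginibre ensemble: eigenvalues of an n x n matrix with i.i.d.
# standard complex Gaussian entries (real and imaginary parts N(0, 1/2)).
g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
eigs = np.linalg.eigvals(g)

# Nearest-neighbour distance of each eigenvalue.
dists = np.abs(eigs[:, None] - eigs[None, :])
np.fill_diagonal(dists, np.inf)
nnd = dists.min(axis=1)

print("max nearest-neighbour distance:", nnd.max())
```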
On a Sunday morning drive sometime in May 2023, the busy Outer Ring Road in Bengaluru seemed much more congested than usual. Vehicles were coming from everywhere, spilling into and out of this main road, and what was surreal was that the congestion came without the usual levels of nudging, shoving, shouting, and scraping on the road. For one, the road was crawling with traffic police, ably assisted, I should add, by burly Bharatiya Janata Party (BJP) workers. For another, it was one of Narendra Modi's several visits to the state as part of his campaigning for the assembly elections, so it looked like everyone knew the reason behind the congestion, and everyone seemed resigned to it, happily or otherwise. Modi was going to go down one of the perpendicular roads as part of his Bengaluru road show.
And, as is typical of most Bengaluru drivers, I took my chance and exited the main road into the narrow alleyways, hoping to reach another road that I presumed would be out of the vicinity of the road show. At the end of this alleyway maze, I suddenly found myself in the middle of a much wider road, again overflowing with cops and BJP workers. Strangely, with no traffic around, they casually gave me a glance as if my car were intruding upon something. Clearly, it looked like I was.
I found myself on the road that Modi's procession had crossed barely a few minutes earlier, and those public officials were possibly breathing a sigh of relief when my car came in as an unwelcome pull back to reality. It was very quiet, with absolutely no traffic on the road.
Modeling detailed chemical kinetics is a primary challenge in combustion simulations. We present a novel framework to enforce physical constraints, specifically total mass and elemental conservation, during the training of machine learning (ML) models for the reduced composition space chemical kinetics of large chemical mechanisms in combustion. In these models, the transport equations for a subset of representative species are solved with the ML approaches, while the remaining nonrepresentative species are “recovered” with a separate artificial neural network trained on data. Given the strong correlation between full and reduced solution vectors, our method utilizes a small neural network to establish an accurate and physically consistent mapping. By leveraging this mapping, we enforce physical constraints in the training process of the ML model for reduced composition space chemical kinetics. The framework is demonstrated here for methane (CH4) oxidation. The resulting solution vectors from our deep operator network (DeepONet)-based approach are accurate and align more consistently with physical laws.
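The conservation idea can be illustrated with a penalty formulation: if a linear map E sends species mass fractions to elemental mass fractions, predictions can be penalised for violating E·y_pred = E·y_true. The numpy sketch below shows only this loss structure; the matrix and vectors are invented, and the paper's DeepONet training setup is more elaborate.

```python
import numpy as np

# Illustrative loss with conservation penalties. E maps species mass
# fractions to elemental mass fractions (rows: elements, cols: species);
# E and the vectors below are invented placeholders.
E = np.array([[0.75, 0.00, 0.27],    # e.g. carbon content per species
              [0.25, 0.00, 0.00],    # hydrogen
              [0.00, 1.00, 0.73]])   # oxygen

def constrained_loss(y_pred, y_true, lam=10.0):
    mse = np.mean((y_pred - y_true) ** 2)
    # Elemental conservation: predicted elemental masses should match truth.
    violation = np.mean((E @ y_pred - E @ y_true) ** 2)
    # Total mass conservation: mass fractions should still sum to one.
    mass = (y_pred.sum() - 1.0) ** 2
    return mse + lam * (violation + mass)

y_true = np.array([0.05, 0.20, 0.75])
y_pred = np.array([0.06, 0.18, 0.74])
print(constrained_loss(y_pred, y_true))
```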
As part of my dissertation research many summers ago, I lived for a couple of months in a few villages straddling the borders of Karnataka, Tamil Nadu, and Andhra Pradesh. I was initiated into the political economy of this region by Mr Krishne Gowda of Bathlahalli village, an elderly patron of the region who lived with his married sons and their families. Armed with a law degree from decades ago and, these days, a towel over the armpit (Manor 2004), Mr Gowda was the quintessential mover-and-shaker politician. On one post-lunch afternoon in those early days, he asked me what subject I was studying, and I told him “Political Science.” He looked at me and then, in his earnestness to educate me, said what I remember as the following:
Look, you are studying politics but let me tell you that we villagers know a lot about politics and data because we vote on many things. We vote in the panchayat elections, Assembly elections, and Lok Sabha of course. But we also have votes for cooperative bank elections, committees within panchayats, and so on.
We also know how to deal with the government. When they come and ask us how many members there are in my household, I decide the answer according to who is asking. If it is the forest official who asks, I will say one household. If it is for rations, I will say multiple households. If it is for elections, I will say three households. If it is for census, I will say one household and so on … it really depends on what the benefit is.
To compare is to “assimilate” and to discover deeper or fundamental similarities below the surface of secondary diversities (Sartori 1970). This chapter will discuss the underlying conceptual attributes of populism and how they have been constructed, as they provide the background for indexing the cases in Chapter 4. The intention behind parsing populism into its underlying conceptual attributes is to identify how they configure with each other to constitute the various populisms in India. And since set-theoretic analysis is the approach adopted here to understand these configurations, this chapter will also translate these attributes and their constructs into necessary and sufficient conditions.
At this point, it may be helpful to step back from populism and understand the construction and the kind of concept structure being used, why it justifies the need for necessary and sufficient conditions, and the downstream analysis that follows. The description provided here is a simple adaptation of the framework outlined by Goertz (2006). The concept structure being used here is multilevel and multidimensional. A multilevel concept has a basic structure, reflected through the secondary level as visible attributes, whereby each attribute in turn can be measured through indicators, as membership scores (in this project) or as variables in projects with a quantitative design. A multidimensional concept has different dimensions that constitute the basic level of the concept. The nature of the relationship between the attributes and the basic level can be causal, ontological, or substitutable. In this project, the attributes share an ontological relationship with the basic concept: the various attributes are not just the defining features of the basic concept but in fact the elements that compose its basic level.
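Since the attributes are later scored as set memberships, a small numeric illustration may help: in fuzzy-set analysis a configuration combines attributes with the minimum (logical AND), and the consistency of a sufficiency claim follows Ragin's measure, the share of the configuration's membership that lies inside the outcome. The attribute names and scores below are invented, not this book's dataset.

```python
# Toy fuzzy-set illustration (invented scores, not the book's dataset).
# A configuration is the minimum (logical AND) of attribute memberships;
# sufficiency consistency follows Ragin: sum(min(X, Y)) / sum(X).
cases = {
    # case: (anti-elitism, people-centrism) memberships, outcome membership
    "case1": ((0.9, 0.8), 0.8),
    "case2": ((0.6, 0.4), 0.3),
    "case3": ((0.2, 0.9), 0.2),
}

X = [min(attrs) for attrs, _ in cases.values()]   # configuration membership
Y = [outcome for _, outcome in cases.values()]

consistency = sum(min(x, y) for x, y in zip(X, Y)) / sum(X)
print(f"sufficiency consistency: {consistency:.2f}")
```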
Below are some clarificatory notes related to the dataset.
Populist Outcome Related
Clarifications on Electoral Data
A few additional clarifications, as background information, are necessary to understand some of the measures related to the electoral data.
1. Bal Thackeray never contested an election. But it seemed unjustifiable to ignore him, as he was clearly a populist leader of some measure in Maharashtra. I therefore took the Shiv Sena's first electoral victory, in the assembly elections of 1995, as the populist instance, because Bal Thackeray reigned supreme for many years before and after this victory. And I took the electoral statistics of the incumbent chief minister and loyalist, Manohar Joshi, as the proxy for Bal Thackeray, on the assumption that a party chief whose party had won an assembly majority for the first time in its history would install his most trusted loyalist as chief minister.
2. Jayalalitha's elections in 2001, 2011, and 2016 also need clarification. In 2001, Jayalalitha was disqualified from contesting the election in May but was acquitted in December 2001 and thereafter won the by-election from Andipatti in 2002. In 2011, even though Jayalalitha won from the Srirangam constituency (the constituency included in the dataset), she was convicted by a special court in Karnataka soon thereafter and subsequently acquitted by the Karnataka High Court. She then contested and won from the RK Nagar constituency, resumed her chief ministership, and contested from RK Nagar again in 2016.