In this paper we study the optimal multiple stopping problem under weak regularity assumptions on the reward, where the reward is given by a family of random variables indexed by stopping times. When the reward family is upper semicontinuous in expectation along stopping times, we construct the optimal multiple stopping strategy using auxiliary optimal single stopping problems. We also obtain the corresponding results when the reward is given by a progressively measurable process.
This article aims at facilitating the widespread application of Energy Management Systems (EMSs), especially in buildings and cities, in order to support the realization of future carbon-neutral energy systems. We claim that economic viability is a severe issue for the utilization of EMSs at scale and that the provisioning of forecasting and optimization algorithms as a service can make a major contribution to achieving it. To this end, we present the Energy Service Generics software framework that allows the derivation of fully functional services from existing forecasting or optimization code with ease. This work documents the strictly systematic development of the framework, beginning with requirement analysis, from which a sophisticated design concept is derived, followed by a description of the implementation of the framework. Furthermore, we present the concept of the Open Energy Services community, our effort to continuously maintain the service framework but also provide ready-to-use forecasting and optimization services. Finally, an evaluation of our framework and community concept, as well as a demarcation between our work and the current state of the art, is presented.
While the Sustainable Development Goals (SDGs) were being negotiated, global policymakers assumed that advances in data technology and statistical capabilities, what was dubbed the “data revolution”, would accelerate development outcomes by improving policy efficiency and accountability. The 2014 report to the United Nations Secretary General, “A World That Counts” framed the data-for-development agenda, and proposed four pathways to impact: measuring for accountability, generating disaggregated and real-time data supplies, improving policymaking, and implementing efficiency. The subsequent experience suggests that while many recommendations were implemented globally to advance the production of data and statistics, the impact on SDG outcomes has been inconsistent. Progress towards SDG targets has stalled despite advances in statistical systems capability, data production, and data analytics. The coherence of the SDG policy agenda has undoubtedly improved aspects of data collection and supply, with SDG frameworks standardizing greater indicator reporting. However, other events, including the response to COVID-19, have played catalytic roles in statistical system innovation. Overall, increased financing for statistical systems has not materialized, though planning and monitoring of these national systems may have longer-term impacts. This article reviews how assumptions about the data revolution have evolved and where new assumptions are necessary to advance the impact across the data value chain. These include focusing on measuring what matters most for decision-making needs across polycentric institutions, leveraging the SDGs for global data standardization and strategic financial mobilization, closing data gaps while enhancing policymaker analytic capabilities, and fostering collective intelligence to drive data innovation, credible information, and sustainable development outcomes.
This study analyzed standardized excess mortality due to specific causes during the Covid-19 pandemic across 33 European countries, using Eurostat data (2016–2021) and Our World in Data databases. Causes included circulatory and respiratory diseases, neoplasms, transport accidents, and “other” causes (e.g., diabetes, dementia, ill-defined conditions). Additional variables such as vaccination rates, economic and health indicators, demographics, and government stringency measures were also examined. Key findings include: (1) Most European countries (excluding Central and Eastern Europe) recorded lower than expected excess mortality from circulatory and respiratory diseases, neoplasms, and transport accidents. Ireland had the lowest excess respiratory mortality in both 2020 and 2021; (2) Croatia, Cyprus, Malta, and Turkey showed significant positive excess mortality from “other” causes, potentially linked to public health restrictions, with Turkey as an exception; (3) Regression analysis found that a higher human development index and higher vaccination rates were associated with lower excess mortality. Policy implications are: (1) Statistically significant positive or negative cause-specific excess mortality may indicate future health trends; (2) The pandemic and government stringency measures negatively affected mortality from “other” causes; (3) Strengthening health system resilience, investing in digital medicine, directing aid to countries with weaker systems, and supporting disadvantaged groups are key recommendations.
We propose a novel micro-level Cox model for incurred but not reported (IBNR) claims count based on hidden Markov models. Initially formulated as a continuous-time model, it addresses the complexity of incorporating temporal dependencies and policyholder risk attributes. However, the continuous-time model faces significant challenges in maximizing the likelihood and fitting right-truncated reporting delays. To overcome these issues, we introduce two discrete-time versions: one incorporating unsystematic randomness in reporting delays through a Dirichlet distribution and one without. We provide the EM algorithm for parameter estimation for all three models and apply them to an auto-insurance dataset to estimate IBNR claim counts. Our results show that while all models perform well, the discrete-time versions demonstrate superior performance by jointly modeling delay and frequency, with the Dirichlet-based model capturing additional variability in reporting delays. This approach enhances the accuracy and reliability of IBNR reserving, offering a flexible framework adaptable to different levels of granularity within an insurance portfolio.
We prove a Poisson process approximation result for stabilising functionals of a determinantal point process. Our proof uses concrete couplings of determinantal processes with different Palm measures and exploits their association properties. We then focus on the Ginibre process and show, in the asymptotic scenario of an increasing observation window, that the process of points with a large nearest neighbour distance converges after a suitable scaling to a Poisson point process. As a corollary, we obtain the scaling of the maximum nearest neighbour distance in the Ginibre process, which turns out to be different from its analogue for independent points.
On a Sunday morning drive sometime in May 2023, the busy Outer Ring Road in Bengaluru seemed much more congested than usual. Vehicles were coming from everywhere, spilling into and out of this main road, and what was surreal was that this congestion was without the usual levels of nudging, shoving, shouting, and scraping on the road. For one, the road was crawling with traffic police, who were, I should add, ably assisted by burly Bharatiya Janata Party (BJP) workers. For another, it was one of Narendra Modi's several visits to the state as part of his campaigning for the assembly elections, so it looked like everyone knew the reason behind the congestion and everyone seemed resigned to it, happily or otherwise. Modi was going to pass through one of the perpendicular roads as part of his Bengaluru road show.
And, as is typical of most Bengaluru drivers, I too took my chance and exited the main road into the narrow alleyways, hoping to reach another road that I presumed would be outside the vicinity of the road show. At the end of this alleyway maze, I suddenly found myself in the middle of a much wider road, again overflowing with cops and BJP party workers. Strangely, with no traffic around, they gave me a casual glance as if my car were intruding upon something. Clearly, it was.
I found myself on the road that Modi's procession had crossed barely a few minutes earlier, and those public officials were possibly breathing a sigh of relief when my car came in as an unwelcome pull back to reality. It was very quiet, with absolutely no traffic on the road.
Modeling detailed chemical kinetics is a primary challenge in combustion simulations. We present a novel framework to enforce physical constraints, specifically total mass and elemental conservation, during the training of machine learning (ML) models for the reduced composition space chemical kinetics of large chemical mechanisms in combustion. In these models, the transport equations for a subset of representative species are solved with ML approaches, while the remaining nonrepresentative species are “recovered” with a separate artificial neural network trained on data. Given the strong correlation between full and reduced solution vectors, our method utilizes a small neural network to establish an accurate and physically consistent mapping. By leveraging this mapping, we enforce physical constraints in the training process of the ML model for reduced composition space chemical kinetics. The framework is demonstrated here for methane (CH4) oxidation. The resulting solution vectors from our deep operator network (DeepONet)-based approach are accurate and align more consistently with physical laws.
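The general idea of enforcing mass and elemental conservation during training can be sketched as a penalty term added to the data-fit loss. The following is a minimal illustration, not the paper's implementation: the elemental composition matrix `E`, the species set, and the penalty weight `lam` are all invented for the example.

```python
import numpy as np

# Hypothetical elemental composition matrix: rows = elements, columns =
# species in the (reduced) solution vector. Values are illustrative only.
E = np.array([
    [1.0, 0.0, 0.0],   # element 1 content per unit mass of each species
    [0.0, 2.0, 0.0],   # element 2
    [0.0, 0.0, 1.0],   # element 3
])

def conservation_penalty(y_pred, y_true):
    """Zero exactly when the predicted state carries the same total mass
    and the same elemental masses as the reference state."""
    mass_err = (y_pred.sum(axis=-1) - y_true.sum(axis=-1)) ** 2
    elem_err = ((y_pred @ E.T - y_true @ E.T) ** 2).sum(axis=-1)
    return mass_err + elem_err

def total_loss(y_pred, y_true, lam=1.0):
    """Data-fit term plus a weighted physics penalty, averaged over samples."""
    mse = ((y_pred - y_true) ** 2).mean(axis=-1)
    return float((mse + lam * conservation_penalty(y_pred, y_true)).mean())
```

In an actual training loop this loss would be differentiated with respect to the network parameters; the sketch only shows how a conservation violation enters the objective.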
As part of my dissertation research many summers ago, I lived for a couple of months in a few villages that straddled the borders of Karnataka, Tamil Nadu, and Andhra Pradesh. I was initiated into the political economy of this region by Mr Krishne Gowda of Bathlahalli village, an elderly patron of the region, who lived with his married sons and their families. Armed with a law degree from decades ago and a towel over the armpit (Manor 2004) now, Mr Gowda was the quintessential mover-and-shaker politician. On one post-lunch afternoon in the early days, he asked me what subject I was studying, and I told him “Political Science.” He looked at me, and then, in his earnestness to educate me, he said what I remember as the following:
Look, you are studying politics but let me tell you that we villagers know a lot about politics and data because we vote on many things. We vote in the panchayat elections, Assembly elections, and Lok Sabha of course. But we also have votes for cooperative bank elections, committees within panchayats, and so on.
We also know how to deal with the government. When they come and ask us how many members there are in my household, I decide the answer according to who is asking. If it is the forest official who asks, I will say one household. If it is for rations, I will say multiple households. If it is for elections, I will say three households. If it is for census, I will say one household and so on … it really depends on what the benefit is.
To compare is to “assimilate” and to discover deeper or fundamental similarities below the surface of secondary diversities (Sartori 1970). This chapter will discuss the underlying conceptual attributes of populism and how they have been constructed as they provide the background for indexing the cases in Chapter 4. The intention behind parsing populism into its underlying conceptual attributes is to be able to identify how they configure with each other to constitute the various populisms in India. And since set theoretic analysis is the approach adopted here to understand these configurations, this chapter will also translate these attributes and their constructs as necessary and sufficient conditions.
At this point, it may be helpful to step back from populism and understand the construction and the kind of concept structure being used, and why that justifies the need for sufficient and necessary conditions and the downstream analysis that follows. The description provided here is a simple adaptation of the framework outlined by Goertz (2006). The concept structure being used here is multilevel and multidimensional. A multilevel concept has a basic structure, reflected through the secondary level as visible attributes, whereby each attribute in turn can be measured through indicators as membership scores (in this project) or as variables in projects with a quantitative design. A multidimensional concept has different dimensions that constitute the basic level of the concept. The nature of the relationship between the attributes and the basic level can be causal, ontological, or substitutable. In this project, the attributes share an ontological relationship with the basic concept, according to which the various attributes are not just the defining features of the basic concept but in fact are the elements that compose the basic level.
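The multilevel structure just described, a basic concept composed of secondary-level attributes, each measured through indicators with membership scores, can be sketched as a small data structure. Everything here is illustrative: the class names, the sample attributes, and especially the fuzzy-AND (minimum) aggregation rule, which is one common choice for ontologically composed concepts, not a rule specified by Goertz or by this book.

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    """A secondary-level attribute, measured through indicator scores in [0, 1]."""
    name: str
    indicators: dict = field(default_factory=dict)  # indicator name -> score

    def score(self) -> float:
        # Aggregate an attribute's indicators with fuzzy AND (minimum).
        return min(self.indicators.values()) if self.indicators else 0.0

@dataclass
class Concept:
    """A basic-level concept ontologically composed of its attributes."""
    name: str
    attributes: list = field(default_factory=list)

    def membership(self) -> float:
        # Attributes compose the concept, so membership is limited by the
        # weakest attribute (again fuzzy AND).
        return min(a.score() for a in self.attributes) if self.attributes else 0.0
```

A case scoring high on every attribute would then score high on the concept, while a single near-zero attribute pulls overall membership down, which is exactly the behaviour an ontological (compositional) reading implies.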
Below are some clarificatory notes related to the dataset.
Populist Outcome Related
Clarifications on Electoral Data
A few additional clarifications, as background information, are necessary to understand some of the measures related to the electoral data.
1. Bal Thackeray never contested an election. But it seemed unjustifiable to ignore him, as he was clearly a populist leader of some measure in Maharashtra. Thus, I took the Shiv Sena's first electoral victory, in the assembly elections of 1995, as the populist instance, because Bal Thackeray reigned supreme for many years before and after this victory. And I took the electoral statistics of the incumbent chief minister and loyalist, Manohar Joshi, as the proxy for Bal Thackeray, assuming that a party winning an assembly majority for the first time in its history would install its most trusted loyalist as chief minister.
2. Jayalalitha's elections in 2001, 2011, and 2016 also require clarification. In 2001, Jayalalitha was disqualified from competing in the election in May but was acquitted in December 2001 and thereafter won the byelection from Andipatti in 2002. In 2011, even though Jayalalitha won from the Srirangam constituency (the constituency included in the dataset), she was convicted by the Karnataka High Court soon thereafter and acquitted subsequently. She then contested from the RK Nagar constituency, resumed her chief ministership, and contested from RK Nagar again in 2016.
In this chapter, I will present the results of the qualitative comparative analysis (QCA) and then interpret them to describe the configurations that constitute populism in India. I will then provide the results of the tests for necessary and sufficient conditions and discuss the parameters of fit in terms of consistency and coverage. Finally, I will cover the various solution terms that indicate the pathways to populism.
The Sets of Data
Table 5.1 presents the data, that is, the sets of data, comprising 37 cases along five conditions and a populist outcome. Describing the worksheet as comprising sets of data instead of a dataset seems more accurate because each of the columns in the sheet is a set and its members are points of data (as fuzzy scores or as percentage scores) along the rows as constituent parts of the unit of analysis. The unit of analysis is an instance of a party candidate contesting elections at the state or at the national level. The cases have been purposively selected by reviewing the scholarship that explicitly indicates that the cases can be identified as instances of populism. And the conditions described earlier—electoral invocation to their people (P), antagonistic boundary setting (B), populist political leadership (L), populist attitude (A), and anxiety about the future (F)—are some of the commonly accepted attributes in the comparative scholarship on populism. In set theoretic terms, we will explore if P, B, L, A, and F are the conditions that constitute the membership of the populist outcome Y.
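The set theoretic reading of such a worksheet can be made concrete with the standard fuzzy-set formulas for consistency and coverage of a sufficient condition (Ragin's measures, widely used in QCA software). The membership scores below are invented for illustration; they are not values from Table 5.1.

```python
# Hypothetical fuzzy membership scores for four cases:
condition = [0.2, 0.4, 0.6, 0.9]   # membership in a condition, e.g. B
outcome   = [0.3, 0.9, 0.7, 1.0]   # membership in the populist outcome Y

def consistency(x, y):
    """Sufficiency consistency: the degree to which X is a subset of Y,
    i.e. sum of min(x_i, y_i) over sum of x_i."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Coverage: the share of the outcome Y accounted for by X,
    i.e. sum of min(x_i, y_i) over sum of y_i."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)
```

In this toy data every case's condition score is below its outcome score, so consistency is a perfect 1.0 (X is fully a subset of Y), while coverage is lower, indicating that the condition explains only part of the outcome. For necessity, the two formulas are simply swapped: divide by the sum of the outcome scores for consistency and by the sum of the condition scores for coverage.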
When we see data on a spreadsheet, concepts and methods associated with standard quantitative techniques inevitably come to mind. Usually, and by default, we try to make sense of the data by deriving summary statistics to understand what has gone up or down, we explore associations between factors by identifying correlations, and we administer statistical tests to see whether the results confirm, reinterpret, or nullify our research questions and hypotheses.
But is it possible to look at a dataset “qualitatively”? And what would that imply? Is it possible to look at columns and rows and identify relations and configurations between them that are more than associational? At first glance, the possibility of this approach seems incongruous because we usually associate qualitative methods with text and quantitative methods with numbers.
This chapter introduces the reader to a qualitative approach by providing an overview of the set theoretic methodology and the QCA method. An introduction to the methodology and the method is important not just because the method is unfamiliar to many social scientists, particularly those who work in the Indian context, but also because, as a methodology, its philosophical and conceptual roots are somewhat distinct from standard social science approaches. And, equally important, because QCA relies on numbers and software code for analysis, it can create mistaken expectations, since the use of numbers may inadvertently invite interpretations based on quantitative reasoning.