The exploration proposed here is pursued through a complex, regional case study. Regional case studies make it possible to delineate a portion of the world, with a consistent set of institutions and policies as well as geographical and material conditions that set the frame for people’s lives, and to identify the complex dynamics by which sociogenetic, microgenetic and ontogenetic transformations co-occur. This chapter presents how we approached, conceived and analysed this case study. To start with, I define my approach to ageing as a form of personal engagement, which progressively developed into a collaborative project. After showing the relevance of a regional case study for a sociocultural psychology of the lifecourse, I present the fieldwork, the data collection, an overview of the participants and the main lines of the analysis.
This chapter analyses how trade law conceptualises data and AI. It shows that trade law applies long-established concepts to these novel phenomena while experimenting with new categories in preferential agreements. For data, these categories include data as a good, as a service, as a digital product, as intellectual property, as electronic transmissions, and as a regulatory object. For AI, the chapter distinguishes between the trade regulation of AI components, AI products, and AI governance. It concludes by suggesting that trade law can be understood as a form of AI/data law, a perspective that may help in recognising and addressing the challenges the digital economy poses for trade law.
Chapter 3 offers an initial empirical assessment of the book’s main argument. It begins by identifying the partisan Left and Right in postcommunist Europe and analyzing patterns of their programmatic positions and ballot box results. The findings support the theoretical expectation that the long-term economic stances and electoral performance of the Left, but not of the Right, are two mutually related legacies of postcommunist junctures. Next, having developed a set of variables to compare long- and short-term electoral effects on the Left, I show that the former are the stronger predictor of illiberal electoral outcomes. The chapter closes with a discussion of why rival arguments prioritizing economic, political, and cultural demand, institutional and leadership supply, and international factors fail to adequately explain the variation identified in the book’s opening chapter. This point is empirically reinforced in Appendix D, where I test the plausibility of the postcommunist juncture theory vis-à-vis rival hypotheses by using an original dataset and standard statistical methods for cross-sectional and time-series data. The results show that the long-term regularities rooted in postcommunist junctures explain illiberal electoral outcomes in the most significant and consistent way.
Edited by
Daniel Naurin, University of Oslo; Urška Šadl, European University Institute, Florence; Jan Zglinski, London School of Economics and Political Science
This chapter offers an overview of the varieties of data used in EU law scholarship, alongside an overview of the associated research methods employed to analyse them. Based on a systematic literature review of 248 academic articles in the area of EU law, and EU courts specifically, it addresses two questions: first, which data sources and methods are the most prevalent in EU law? Second, what are the advantages and pitfalls of different data sources and research methods, and how can an understanding of these improve the study of EU law? Finally, the chapter seeks to stimulate a critical discussion of the extent to which emerging and non-traditional data sources both complement and challenge traditional understandings of what counts as law. The chapter starts with an overview of the most commonly used source of data in EU legal research on courts – courts’ case law – before turning to other, less traditional sources of data in EU law, such as interview and survey data, and data based on official statistics, newspapers, and courts’ websites.
The chapter discusses the creation and maintenance of databases offering accurate, research-ready data for multidisciplinary use. It draws on the experience of the IUROPA CJEU Database Project (IUROPA), which has collected data about the decision-makers and the decisions of the Court of Justice of the European Union (CJEU). IUROPA and similar multi-user databases must live up to the four criteria for databases proposed by Weinshall and Epstein. First, they must address real-world problems. Second, they must be open and accessible. Third, they must deliver reliable and reproducible data. Fourth, they must be ageless and easily calibrated to research purposes unknown at the time of data collection and cleaning. These criteria involve trade-offs. First, the quest for reliability may precipitate difficult choices, such as whether to discard or improve upon ‘imperfect’ data, or tempt creators to endlessly postpone publication of ‘incomplete’ data. Second, sustainability and human intervention are inversely proportional when it comes to database maintenance. Finally, a fledgling discipline like empirical legal studies in EU law imposes a disproportionate time commitment and financial responsibility on a small group of researchers.
Based on the past year’s traffic statistics for the Humanities Indicators website, this article takes a question-based approach to identifying what Americans seem most interested in learning about the humanities. Using infographics and short summary paragraphs, the report walks through key data points about the current state of the humanities, drawing on the most recent available data from the federal government or from surveys conducted by the project.
In light of progressive criticism of the managerial ‘expert’ logic dominant in the development field, the article analyses how international organizations (IOs) increasingly seek to pluralize their knowledge by adding to their toolkit certain territory-based elements of participatory approaches to data, especially from the Global South. It examines how such attempts to pluralize IOs’ expertise translate into practice, focusing on the localization processes of the UN 2030 Agenda in six peripheral communities in Rio de Janeiro, Brazil, that is, their development of territory-based targets and indicators for the implementation of the Sustainable Development Goals. The article contrasts these local practices with UN expert agencies’ approaches to data disaggregation. This comparison shows how datafying tools and processes may vary considerably, indicating important epistemological differences in how knowledge gets validated, with consequences for visibility and accountability. The territory-based practices analysed defy authorized forms of knowledge by making data not only for monitoring or for action but also for caring and for making live. The article concludes that localization gives the impression that IOs’ knowledge is becoming more plural, yet these changes remain superficial, with other knowledges becoming part of standardized templates and merely complementing official data.
The role of data and automated (non-artificial intelligence [AI]) algorithmic targeting in adaptive social cash systems is gaining significance, but few governments have yet leveraged AI technologies to reap their benefits. Hence, there is mounting pressure on social cash policymakers and practitioners to rapidly embrace the opportunities arising from AI applications, especially in times of crisis. While data and algorithmic targeting (non-AI and AI) are efficient in enrolling beneficiaries in emergency social cash systems, they may also pose serious challenges. Through a qualitative case study of an adaptive social cash programme in Pakistan, the research critically examines the data/algorithmic targeting process and unveils the shortcomings in design, data and algorithmic decision-making that lead to certain exclusionary outcomes. The study makes several contributions to the data and policy literature. Drawing on these limitations, it first offers a set of practical recommendations for greater enrolment, and hence greater inclusion, of beneficiaries. Second, it discusses novel opportunities that AI technologies may present in adaptive social cash systems, whilst carefully assessing the risks. Third, the study proposes an organisational AI governance framework to guide the development of responsible and ethical AI practices. The study offers policy and practical implications for governments, social cash policymakers, and practitioners, providing insights into how changing targeting practices via AI technologies, under a governance framework, can support ethical practices that positively impact beneficiaries, social cash organisations, and stakeholders.
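To make the targeting mechanism concrete, the sketch below shows a minimal, threshold-based (non-AI) scoring rule of the broad kind such systems rely on. It is purely illustrative: the variables, weights, and cutoff are hypothetical and are not drawn from the programme studied; the point is only to show how households near the cutoff face exclusion risk when data are imperfect.

```python
# A purely illustrative sketch of threshold-based (non-AI) targeting.
# All variables, weights, and the cutoff are hypothetical.
from dataclasses import dataclass

@dataclass
class Household:
    name: str
    rooms_per_person: float   # crude proxy for housing quality
    has_formal_income: bool
    dependents: int

def welfare_score(h: Household) -> float:
    """Higher score = predicted to be better off (hypothetical weights)."""
    score = 40 * h.rooms_per_person
    if h.has_formal_income:
        score += 30
    score -= 5 * h.dependents
    return score

CUTOFF = 45.0  # households scoring below the cutoff are enrolled

households = [
    Household("A", 0.5, False, 4),   # clearly poor: enrolled
    Household("B", 1.2, True, 1),    # clearly better off: excluded
    Household("C", 1.1, False, 0),   # near the cutoff: a small data error flips the outcome
]

for h in households:
    s = welfare_score(h)
    status = "enrolled" if s < CUTOFF else "excluded"
    print(f"{h.name}: score={s:.1f} -> {status}")
```

Household C illustrates the exclusionary-outcome concern the study raises: a score just below the cutoff means that minor errors in the underlying data can push an eligible household over the threshold.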
We are living in a time when many teachers say they are feeling burnt out, and many others have left the profession altogether. Even new teachers who might start out feeling enthusiastic are likely to leave the profession after a few years. Teachers say the pressures they feel don’t match their view of what teaching is supposed to be all about – caring for, and teaching, children and young people. So, what do teachers do? What does the public (and, for that matter, Hollywood movie producers) think teachers do? This chapter argues that we have a bit of a mismatch between what people outside the profession think and the experiences of teachers themselves. It also argues that broader changes in education, such as the use of data to govern teachers’ work, have created extra pressure on teachers.
There are all sorts of dilemmas when it comes to technology and education. How much should be allowed in schools? Do teachers have to worry about students’ data security and privacy? Is it OK to ask a computer to write your essay for you? Are we ruining the eyesight and attention spans of an entire generation thanks to excessive screen time? This chapter looks at the debates that exist when it comes to digital technology and education. It will be argued here that the interplay between technology and education is highly complex – and changing at a pace that is almost unimaginable.
As business transactions and the global economy become increasingly digitalized, international investment disputes will deal with novel assets in new boundary-defiant contexts. Indeed, jurisdictional arguments and objections will likely require arbitral tribunals to confront the uneasy task of delineating the ‘localization’ of investments in digital economy assets such as cryptocurrency, non-fungible tokens, and data-related investments. However, given that even more traditional assets have raised a variety of problems relating to territorial nexus and localization, the authors believe that the digital economy highlights what are essentially differences in degree rather than in kind. This chapter discusses the complexities that arise in considering the idiosyncrasies of investments in digital economy assets within a traditional, territorially defined jurisdictional framework. First, the authors present some of these new digital economy assets and canvass several typical cross-border challenges inherent in international investment arbitration. Second, they ask how traditional objections to jurisdiction ratione personae and jurisdiction ratione materiae might be employed when the investments in question relate to these digital developments. Third, the chapter raises questions about states’ jurisdiction to prescribe, and ponders the potential effects, for purposes of jurisdiction, of states asserting their authority to prescribe over investments or investors outside their territory.
Amid growing interest in the integration of health and social care to improve outcomes, communities across the United States have explored the development of Community Information Exchanges (CIEs). A CIE is a community-governed infrastructure that enables critical health and social information to be responsibly shared among partner organizations in support of holistic coordination of care. The development and use of a CIE give rise to a host of legal and policy challenges. Use and disclosure of data through a CIE are governed by a patchwork of different legal requirements, at times distinct and at times overlapping. Development of a legal framework for a CIE requires attention to clearly articulated data flows, detailed use cases, strong legal agreements and policy considerations. CIEs typically rely on an individual’s express consent to share their information, requiring careful evaluation of applicable laws and regulations and promotion of community trust and equity. And because many participants in a CIE are HIPAA-covered entities, functions of the CIE must fit within HIPAA’s regulatory framework. This article examines in depth two components of a sound legal framework, consent models and HIPAA compliance, identifying considerations and lessons learned to support lawful and ethical information sharing through a CIE.
Disaggregation of public health data by race and ethnicity is critical to understanding health disparities and driving progress toward health equity. While the federal government updated the minimum set of categories federal agencies must use when collecting race and ethnicity data, implementation of these standards remains uncertain. Additionally, federal standards set a floor; states can adopt laws requiring additional data collection specific to their populations. As of May 2025, 13 states have passed laws requiring disaggregation of race and ethnicity data. One state – New York – is implementing the Asian and Native Hawaiian/Pacific Islander (NHPI) Data Disaggregation Law. Challenges include finding effective pressure points for the state agencies that are required to update data and helping agencies access funding for the necessary changes. In California, the 2024 Latine and Indigenous Health Disparities Act builds on previous data disaggregation legislation for the Asian and NHPI communities in California. Challenges with the bill’s adoption included pressure to narrow the set of agencies affected by the law and the need to create an appropriate implementation timeline. Key considerations in the adoption of a state data disaggregation law or policy include educating policymakers about the importance of data disaggregation, determining the scope of the policy, and preparing for implementation.
This review essay critically examines three recent books on the digitalization and datafication of humanitarian action: #Help, Humanitarian Extractivism, and Technocolonialism. Each monograph offers a compelling analysis of the myriad ways that humanitarians’ use of digital technologies has reshaped governance and the international order, created new risks, and exacerbated power imbalances. Fundamentally, each book concludes that the various transformations technology has wrought in humanitarianism are, at best, unintended, inconsistent, or unfulfilled in their impact and, at worst, deeply problematic. Notwithstanding the books’ contributions, each leaves out two important elements. First, in selecting examples, the authors leave mostly unanswered the question of what, if any, positive impacts data and technology have had on or for humanitarian response and those whom it is intended to help. Second, each is mostly silent on the practical steps that could be taken to address its critiques, with only Technocolonialism offering three broad avenues for reform. In the context of the current crisis in the humanitarian sector, with the closure of USAID and dramatic declines in funding, there is a need for pragmatic options for the future that, by necessity, involve a creative reimagining of the digital infrastructures underpinning humanitarian response.
Mounting evidence indicates a strong correlation between wildfire smoke exposure and health impacts, though few studies have focused on urban fires or on exposures other than smoke. The 2025 LA County Fires presented an opportunity to broaden this evidence base.
Methods
In response, the Public Health Extreme Events Research (PHEER) Network developed an ArcGIS Online health exposure map to curate and disseminate information about environmental exposure data collection activities across agencies and researchers.
Results
The map integrates data collection locations with publicly available datasets to reduce duplication of effort. PHEER also partnered with the University of Washington’s Natural Hazards Reconnaissance (RAPID) facility to collect hyperspectral imagery for analysis. Concerns about sharing granular exposure data led PHEER to prioritize disseminating locations and types of data collected rather than the data itself.
Conclusion
PHEER’s approach provides a model for supporting rapid, ethical disaster research in complex urban fire contexts.
Chapter 7 considers the developments that have taken place since the beginning of the von der Leyen II Commission, identifying not only continuity in the EU’s approach to technology control and its links to digital sovereignty but also an expansion and reinforcement of that approach. Faced with increased instability and geopolitical threat, the von der Leyen II Commission has made the linkage of security and economy even more explicit. The Competitiveness Compass takes what appears to be a more assertive form of regulatory mercantilism, in which defence is specifically incorporated into the EU’s rationale for action. Technology controls have expanded accordingly, including an explicit push for defence technology industrial policy, increased control over external dependencies and supply chains through the Preparedness Strategy, and an AI policy for Europe that includes significant investments in AI gigafactories.
Chapter 1 introduces basic terminology. Terms such as artificial intelligence, data, algorithm, machine learning, neural networks, deep learning, large language models, generative AI and symbolic AI are presented to develop a sense of what AI is, how it has evolved, and what it does. This chapter also introduces some of the major conceptual disagreements in the field. These disagreements are driven by different ideas about how best to develop AI, as well as by philosophical differences over what intelligence means and whether machines can develop human-like intelligence.
Chapter 6 considers the Commission’s intent to reduce critical dependencies on third-country-based data servers and computing capabilities. It explores the Brussels effect as a motivator for further action on exporting regulatory norms concerning the protection of personal and non-personal data, coupled with concerns over a lack of competitiveness in data-derived commercial activity, particularly in the field of AI. Given concerns that weak competitiveness affects Europe’s security, and the risks posed by unauthorised access to industrial or sensitive data by governments in third countries, the Commission desires increased regulatory control. This is facilitated by industrial policy aimed at both promoting European data server infrastructure and encouraging switching to the Common European Data Spaces as a means of building up a European data economy, while placing strict limitations on the export of non-personal data outside of Europe’s borders. The Commission combines this with an attempt to utilise the Brussels effect to ensure that European standards concerning the safe use of AI technologies become global standards shaped by the EU and its values.
Investment facilitation is an increasingly important policy tool to promote foreign investment. However, we know very little about its prevalence. This paper introduces a new dataset for measuring the adoption of investment facilitation measures at the country level. The Investment Facilitation Index (IFI) covers 101 measures, grouped into six policy areas, and maps adoption across 142 economies. The paper outlines the conceptual and methodological framework of the IFI, analyses current levels of adoption, and demonstrates the index’s robustness. The data show that economies with lower adoption rates typically belong to the low-income or lower-middle-income groups and are often located in Sub-Saharan Africa, Latin America and the Caribbean. The dataset serves as a benchmark for assessing the design and impact of international agreements, such as the Investment Facilitation for Development Agreement (IFDA). It can also support IFDA implementation by guiding domestic assessments of technical assistance needs and capacity development.
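To make the structure of such an index concrete, here is a minimal sketch of how adoption scores might be aggregated from binary measure-level flags grouped into policy areas, assuming equal weights. The measure names, groupings, and scoring rule are hypothetical illustrations, not the IFI’s published methodology.

```python
# Hypothetical sketch: aggregating an adoption index from binary flags.
# Measure names, groupings, and equal weighting are illustrative only.
from statistics import mean

# adoption[economy][policy_area] = one 0/1 flag per measure in that area
adoption = {
    "Economy A": {"transparency": [1, 1, 0], "e-government": [1, 0]},
    "Economy B": {"transparency": [0, 0, 1], "e-government": [0, 0]},
}

def index_score(areas: dict) -> float:
    """Average adoption rate within each policy area, then across areas."""
    return mean(mean(flags) for flags in areas.values())

for economy, areas in adoption.items():
    print(f"{economy}: {index_score(areas):.2f}")
```

Averaging within areas before averaging across them keeps a policy area with many measures from dominating the overall score, one plausible design choice for an index of this kind.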
This chapter explores the fundamentals of data in data science, covering data types (structured vs. unstructured), collection sources (open data, social media APIs, multimodal data, synthetic data), and storage formats (CSV, TSV, XML, RSS, JSON). It emphasizes the critical importance of data pre-processing, including data cleaning (handling missing values, smoothing noisy data, data munging), integration, transformation, reduction, and discretization. Through hands-on examples, the chapter demonstrates how to systematically prepare "dirty" real-world data for analysis by addressing inconsistencies, outliers, and missing information. The chapter highlights that data preparation is often half the battle in data science, requiring both technical skills and careful attention to data quality and bias.
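As a flavour of the pre-processing workflow the chapter describes, the following is a minimal pandas sketch on a small, hypothetical "dirty" dataset. The column names and values are invented, and the steps (imputation, outlier clipping, label normalisation, discretization) illustrate the general techniques rather than the chapter’s own code.

```python
# Minimal illustration of common pre-processing steps on hypothetical data.
import numpy as np
import pandas as pd

# A small "dirty" dataset: missing values, an implausible outlier,
# and inconsistent categorical labels.
df = pd.DataFrame({
    "age":    [25, np.nan, 37, 290, 41],
    "income": [52000, 48000, np.nan, 61000, 58000],
    "city":   ["NYC", "nyc", "Boston", "NYC", "boston"],
})

# Cleaning: impute missing numeric values with the column median.
df["age"] = df["age"].fillna(df["age"].median())
df["income"] = df["income"].fillna(df["income"].median())

# Outlier handling: clip implausible ages into a plausible range.
df["age"] = df["age"].clip(lower=0, upper=110)

# Munging/integration: normalise inconsistent categorical labels.
df["city"] = df["city"].str.upper()

# Discretization: bin the continuous income variable into three bands.
df["income_band"] = pd.cut(df["income"], bins=3, labels=["low", "mid", "high"])

print(df)
```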