We are living in a time when many teachers say they are feeling burnt out, and many others have left the profession altogether. Even new teachers who might start out feeling enthusiastic are likely to leave the profession after a few years. Teachers say the pressures they feel don’t match their view of what teaching is supposed to be all about – caring for, and teaching, children and young people. So, what do teachers do? What does the public (and, for that matter, Hollywood movie producers) think teachers do? This chapter argues that we have a bit of a mismatch between what people outside the profession think and the experiences of teachers themselves. It also argues that broader changes in education, such as the use of data to govern teachers’ work, have created extra pressure on teachers.
There are all sorts of dilemmas when it comes to technology and education. How much should be allowed in schools? Do teachers have to worry about students’ data security and privacy? Is it OK for you to ask a computer to write your essay for you? Are we ruining the eyesight and attention spans of an entire generation thanks to excessive screen time? This chapter looks at the debates that exist when it comes to digital technology and education. It will be argued here that the interplay between technology and education is highly complex and is changing at an almost unimaginable pace.
As business transactions and the global economy become increasingly digitalized, international investment disputes will deal with novel assets in new boundary-defiant contexts. Indeed, jurisdictional arguments and objections will likely require arbitral tribunals to confront the uneasy task of delineating the ‘localization’ of investments in digital economy assets such as cryptocurrency, non-fungible tokens, and data-related investments. However, given that even more traditional assets have raised a variety of problems relating to territorial nexus and localization, the authors believe that the digital economy emphasizes what are essentially differences in degree rather than in kind. This chapter discusses the complexities that arise in considering the idiosyncrasies of investments in digital economy assets within a traditional territorially defined jurisdictional framework. First, the authors present some of those new digital economy assets and canvass several typical cross-border challenges inherent in international investment arbitration. Second, they question how traditional objections to jurisdiction ratione personae and jurisdiction ratione materiae might be employed when the investments in question relate to those digital developments. Third, the chapter raises questions about states’ jurisdiction to prescribe, and ponders the potential effects, for purposes of jurisdiction, of states asserting their authority to prescribe over investments or investors outside their territory.
This review essay critically examines three recent books on the digitalization and datafication of humanitarian action: #Help, Humanitarian Extractivism, and Technocolonialism. Each monograph offers a compelling analysis of the myriad ways that humanitarians’ use of digital technologies has reshaped governance and the international order, created new risks, and exacerbated power imbalances. Fundamentally, each book concludes that the various transformations technology has wrought in humanitarianism are, at best, unintended, inconsistent, or unfulfilled in their impact and, at worst, deeply problematic. For all their contributions, however, each book leaves out two important elements. First, in selecting examples, the authors leave mostly unanswered the question of what, if any, positive impacts data and technology have had on humanitarian response and those whom it is intended to help. Second, each is mostly silent with regard to practical steps that can be taken to address its critiques, with only Technocolonialism offering three broad avenues for reform. In the context of the current crisis in the humanitarian sector, with the closure of USAID and dramatic declines in funding, there is a need for pragmatic options for the future that, by necessity, involve a creative reimagining of the digital infrastructures underpinning humanitarian response.
Background
Mounting evidence indicates a strong correlation between wildfire smoke exposure and health impacts, though few studies have focused on urban fires or on exposures other than smoke. The 2025 LA County Fires presented an opportunity to broaden this evidence base.
Methods
In response, the Public Health Extreme Events Research (PHEER) Network developed an ArcGIS Online health exposure map to curate and disseminate information about environmental exposure data collection activities across agencies and researchers.
Results
The map integrates data collection locations with publicly available datasets to reduce duplication of effort. PHEER also partnered with the University of Washington’s Natural Hazards Reconnaissance (RAPID) facility to collect hyperspectral imagery for analysis. Concerns about sharing granular exposure data led PHEER to prioritize disseminating locations and types of data collected rather than the data itself.
Conclusion
PHEER’s approach provides a model for supporting rapid, ethical disaster research in complex urban fire contexts.
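As a hedged illustration of the mapping approach described above, the sketch below builds a point layer of data-collection sites with geopandas and exports it as GeoJSON suitable for upload to a web map such as ArcGIS Online. The agencies, coordinates, and file names are hypothetical placeholders, not PHEER’s actual records or pipeline.

```python
# Minimal sketch (not PHEER's actual pipeline): represent data-collection
# sites as a point layer and export it for a web map. Only locations and
# data types are shared, not the underlying measurements, mirroring the
# dissemination choice described above.
import pandas as pd
import geopandas as gpd

# Hypothetical records: which agency is collecting what, and where.
sites = pd.DataFrame({
    "agency": ["EPA", "LACDPH", "UW RAPID"],
    "data_type": ["air quality", "soil samples", "hyperspectral imagery"],
    "lon": [-118.52, -118.15, -118.53],
    "lat": [34.04, 34.19, 34.05],
})

# Build a GeoDataFrame with WGS84 coordinates and write GeoJSON,
# a format that ArcGIS Online can ingest as a hosted layer.
gdf = gpd.GeoDataFrame(
    sites,
    geometry=gpd.points_from_xy(sites["lon"], sites["lat"]),
    crs="EPSG:4326",
)
gdf.to_file("exposure_data_collection_sites.geojson", driver="GeoJSON")
```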
Chapter 7 considers the developments that have taken place since the beginning of the von der Leyen II Commission, identifying not only continuity in the EU’s approach to technology control and its links to digital sovereignty but also an expansion and reinforcement of that approach. Faced with increased instability and geopolitical threat, the von der Leyen II Commission has made the linkage of security and economy even more explicit. The Competitiveness Compass takes what appears to be a more assertive form of regulatory mercantilism, in which defence is specifically incorporated into the EU’s rationale for action. Technology controls have expanded accordingly, with an explicit push for a defence technology industrial policy, increased control over external dependencies and supply chains through the Preparedness Strategy, and an AI policy for Europe that includes significant investments in AI gigafactories.
Chapter 1 introduces basic terminology. Terms such as artificial intelligence, data, algorithm, machine learning, neural networks, deep learning, large language models, generative AI and symbolic AI are presented to develop a sense of what AI is, how it has evolved, and what it does. This chapter also introduces some of the major conceptual disagreements in the field. These disagreements are driven by different ideas about how best to develop AI, as well as by philosophical differences over what intelligence means and whether machines can develop human-like intelligence.
Chapter 6 considers the Commission’s intent to reduce critical dependencies on third-country-based data servers and computing capabilities. It explores the Brussels effect as a motivator for further action on exporting regulatory norms concerning the protection of personal data and non-personal data, coupled with concerns over a lack of competitiveness in data-derived commercial activity, particularly in the field of AI. Given concerns that weak competitiveness could impact Europe’s security, and the risks posed by unauthorised access to industrial or sensitive data by governments in third countries, the Commission desires increased regulatory control. This is facilitated by industrial policy aimed at both promoting European data server infrastructure and encouraging switching to Common European Data Spaces as a means of building up a European data economy, while placing strict limitations on the export of non-personal data outside of Europe’s borders. The Commission combines this with an attempt to utilise the Brussels effect to ensure that European standards concerning the safe use of AI technologies become global standards shaped by the EU and its values.
Investment facilitation is an increasingly important policy tool to promote foreign investment. However, we know very little about its prevalence. This paper introduces a new dataset for measuring the adoption of investment facilitation measures at country level. The Investment Facilitation Index (IFI) covers 101 measures, grouped into six policy areas, and maps adoption across 142 economies. The paper outlines the conceptual and methodological framework of the IFI, analyses the current levels of adoption, and demonstrates the index’s robustness. The data show that economies with lower adoption rates typically belong to the low-income or lower-middle-income groups, often located in Sub-Saharan Africa, Latin America and the Caribbean. This dataset serves as a benchmark for assessing the design and impact of international agreements, such as the Investment Facilitation for Development Agreement (IFDA). It can also support the IFDA implementation by guiding domestic assessments of technical assistance needs and capacity development.
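To make the structure of such an adoption index concrete, here is a minimal sketch of computing country-level scores from binary measure-level data. The economies, policy areas, and equal-weighting scheme are illustrative assumptions and do not reproduce the IFI’s published methodology.

```python
# Minimal sketch of an adoption index: score each economy as the share of
# measures adopted within each policy area, then average across policy
# areas (equal weights assumed here; the IFI's weighting may differ).
import pandas as pd

# Hypothetical adoption records: 1 = measure adopted, 0 = not adopted.
records = pd.DataFrame({
    "economy": ["A", "A", "A", "B", "B", "B"],
    "policy_area": ["transparency", "transparency", "e-government"] * 2,
    "adopted": [1, 0, 1, 1, 1, 0],
})

# Adoption rate per economy and policy area ...
area_scores = records.groupby(["economy", "policy_area"])["adopted"].mean()
# ... then an unweighted mean across policy areas per economy.
index_scores = area_scores.groupby(level="economy").mean()
print(index_scores)
```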
This chapter explores the fundamentals of data in data science, covering data types (structured vs. unstructured), collection sources (open data, social media APIs, multimodal data, synthetic data), and storage formats (CSV, TSV, XML, RSS, JSON). It emphasizes the critical importance of data pre-processing, including data cleaning (handling missing values, smoothing noisy data, data munging), integration, transformation, reduction, and discretization. Through hands-on examples, the chapter demonstrates how to systematically prepare "dirty" real-world data for analysis by addressing inconsistencies, outliers, and missing information. The chapter highlights that data preparation is often half the battle in data science, requiring both technical skills and careful attention to data quality and bias.
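A minimal sketch of the cleaning steps the chapter describes, using pandas; the file name and column names are hypothetical placeholders rather than the chapter’s own examples.

```python
# Minimal sketch of basic data cleaning: load a CSV, inspect missing
# values, impute a numeric column with its median, drop duplicate rows,
# and smooth a noisy column. Names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("survey.csv")           # structured, comma-separated input
print(df.isna().sum())                   # count missing values per column

df["age"] = df["age"].fillna(df["age"].median())  # impute missing ages
df = df.drop_duplicates()                # remove exact duplicate records

# Smooth a noisy numeric column with a simple 3-point rolling mean.
df["income_smooth"] = df["income"].rolling(window=3, min_periods=1).mean()
```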
This introductory chapter defines data science as a field focused on collecting, storing, and processing data to derive meaningful insights for decision-making. It explores data science applications across diverse sectors including finance, healthcare, politics, public policy, urban planning, education, and libraries. The chapter examines how data science relates to statistics, computer science, engineering, business analytics, and information science, while introducing computational thinking as a fundamental skill. It discusses the explosive growth of data (the 3Vs: velocity, volume, variety) and essential skills for data scientists, including statistical knowledge, programming abilities, and data literacy. The chapter concludes by addressing critical ethical concerns around privacy, bias, and fairness in data science practice.
The Communist Party is building a digitally capable state to remain at the vanguard of social and political development in China. CPC leaders are advancing digitalization forcefully throughout the governance and economic system through various policy initiatives. Informatization serves both better governance and public service, by overcoming previous bottlenecks and spatial challenges in administration. It also enhances the party state’s surveillance and monitoring capabilities. The CPC’s goals for China’s future under its continued rule require a digital infrastructure and economy that is both efficacious and subject to the Party’s control. To this end, the CPC is building out a comprehensive governance architecture for cyberspace, including the world’s most expansive data regulatory regime. The current leadership regards these digital capabilities as a key part of its comprehensive governance model that will enable the CPC to implement its domestic and international vision for China in the coming years.
Election studies are an important data pillar in political and social science, as most political research investigations involve secondary use of existing datasets. Researchers depend on high-quality data because data quality determines the accuracy of the conclusions drawn from statistical analyses. We outline data reuse quality criteria pertaining to data accessibility, metadata provision, and data documentation, using the FAIR Principles of research data management as a framework (Findability, Accessibility, Interoperability, and Reusability). We then investigate the extent to which a selection of election studies from Western democracies fulfils these criteria. Our results reveal that although most election studies are easily accessible and well documented, and the overall level of data processing is satisfactory, some important deficits remain. Further analyses of technical documentation indicate that while a majority of election studies provide the necessary documents, there is still room for improvement.
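As a toy illustration of criteria-based scoring of this kind, the snippet below computes a simple share-of-criteria-met score for one dataset; the criteria are illustrative assumptions and do not reproduce the paper’s actual coding scheme.

```python
# Toy illustration: score a dataset against binary reuse criteria,
# one per FAIR dimension (hypothetical criteria, not the paper's scheme).
checks = {
    "persistent_identifier": True,   # Findability
    "open_access": True,             # Accessibility
    "standard_file_formats": True,   # Interoperability
    "codebook_provided": False,      # Reusability
}

score = sum(checks.values()) / len(checks)
print(f"FAIR-style reuse score: {score:.2f}")  # prints 0.75 here
```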
This paper presents, for a political science audience, the Three Pillars Approach to the FAIR principles of Findability, Accessibility, Interoperability, and Reusability for data and metadata. A portfolio of illustrative practical activities is offered that scholarly communities can take up to make their research more FAIR at disciplinary and subdisciplinary levels.
Data from observations of pulsars made over the last three decades by Murriyang, CSIRO’s Parkes 64-metre radio telescope, are more accessible than ever before, largely due to their storage in expansive long-term archives. Containing nearly 2 million files from more than 400 Parkes pulsar projects, CSIRO’s Data Access Portal is leading the global effort in making pulsar data accessible. In this article, we present the current status of the archive and provide information about the acquisition, analysis, reduction, visualisation, preservation, and dissemination of these datasets. We highlight the importance of such an archive and present a selection of new results emanating from archival data.
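Archived pulsar observations are typically distributed in FITS-based formats such as PSRFITS; the sketch below shows one way to inspect such a file with astropy. The file name is a placeholder, and real analyses usually rely on specialised tools such as PSRCHIVE.

```python
# Minimal sketch: inspect a FITS-based pulsar data file with astropy.
# The file name is a placeholder for a file retrieved from an archive.
from astropy.io import fits

with fits.open("pulsar_observation.fits") as hdul:
    hdul.info()                      # list the HDUs (tables/images) in the file
    header = hdul[0].header          # primary header metadata
    print(header.get("TELESCOP"))    # telescope name, if recorded
```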
Build a firm foundation for studying statistical modelling, data science, and machine learning with this practical introduction to statistics, written with chemical engineers in mind. It introduces a data–model–decision approach to applying statistical methods to real-world chemical engineering challenges, establishes links between statistics, probability, linear algebra, calculus, and optimization, and covers classical and modern topics such as uncertainty quantification, risk modelling, and decision-making under uncertainty. Over 100 worked examples using Matlab and Python demonstrate how to apply theory to practice, with over 70 end-of-chapter problems to reinforce student learning, and key topics are introduced using a modular structure, which supports learning at a range of paces and levels. Requiring only a basic understanding of calculus and linear algebra, this textbook is the ideal introduction for undergraduate students in chemical engineering, and a valuable preparatory text for advanced courses in data science and machine learning with chemical engineering applications.
This chapter adds three principal observations to the theory of singular compositional explanation. These observations are essential to situating the theory of singular compositional explanation within the context of scientific experimental work. The first observation is that scientists sometimes explain the rates of activity instances; one way they do so is in terms of the number of lower-level individuals engaged in activity instances. The second is that scientists often use singular compositional abductive explanations in explaining experimental results. The third is that scientists often use such explanations in the context of controlled experiments.
This chapter challenges the conventional wisdom of how users of social media platforms such as Instagram, X, or TikTok pay for service access. It argues that rather than merely exchanging data for services, users unknowingly barter their attention, emotions, and cognitive resources – mental goods that corporations exploit through technologically managed systems like targeted advertising and habit-forming design. The chapter explores how these transactions are facilitated not by legal contracts but by code, which allows social media companies to extract value in ways that traditional legal conceptual frameworks cannot capture. It further highlights the negative externalities of these exchanges, such as cognitive impairments and mental health issues, framing them as pollution byproducts of the attention economy. By examining both the visible and hidden dimensions of this technologically mediated exchange, the chapter calls for a deeper understanding of the mechanisms that govern our interactions with digital platforms rather than rushing to propose new legal solutions.