Africa had a busy election calendar in 2024, with at least 19 countries holding presidential or general elections. In a continent with a large youth population, a common theme across these countries is citizens' desire to have their voices heard, and a busy election year offers an opportunity for the continent to redeem its democratic credentials and demonstrate its commitment to free and fair elections and to more responsive, democratic governance. Given the central role that governance plays in security in Africa, the stakes in many of these elections are high: at issue is not only achieving a democratically elected government but also securing stability and development. Since governance norms, insecurity, and economic buoyancy are rarely contained by borders, the conduct and outcomes of each of these elections will also have implications for neighbouring countries and the continent overall. This article considers how the results of recent elections across Africa have been challenged in court on the basis of mistrust in the technology platforms used, how the deployment of emerging technologies, including AI, is casting a shadow over the integrity of elections in Africa, and the policy options for addressing these trends, with a particular focus on the governance of AI technologies through a human rights-based approach and equitable public procurement practices.
On both global and local levels, one can observe a trend toward the adoption of algorithmic regulation in the public sector, with the Chinese social credit system (SCS) serving as a prominent and controversial example of this phenomenon. Within the SCS framework, cities play a pivotal role in its development and implementation, both as evaluators of individuals and enterprises and as subjects of evaluation themselves. This study engages in a comparative analysis of SCS scoring mechanisms for individuals and enterprises across diverse Chinese cities while also scrutinizing the scoring system applied to cities themselves. We investigate the extent of algorithmic regulation exercised through the SCS, elucidating its operational dynamics at the city level in China and assessing its interventionism, especially concerning the involvement of algorithms. Furthermore, we discuss ethical concerns surrounding the SCS’s implementation, particularly regarding transparency and fairness. By addressing these issues, this article contributes to two research domains: algorithmic regulation and discourse surrounding the SCS, offering valuable insights into the ongoing utilization of algorithmic regulation to tackle governance and societal challenges.
Online customer feedback management (CFM) is becoming increasingly important for businesses. Providing timely and effective responses to guest reviews can be challenging, especially as the volume of reviews grows. This paper explores the response process and the potential for artificial intelligence (AI) augmentation in response formulation. We propose an orchestration concept for human–AI collaboration in co-writing within the hospitality industry, supported by a novel NLP-based solution that combines the strengths of both human and AI. Although complete automation of the response process remains out of reach, our findings offer practical implications for improving response speed and quality through human–AI collaboration. Additionally, we formulate policy recommendations for businesses and regulators in CFM. Our study provides transferable design knowledge for developing future CFM products.
This paper explores the evolution of the concept of peace in the context of a globalized and digitalized 21st century, proposing a novel vision that shifts from viewing peace as a thing or a condition to understanding peace as a dynamic and relational process that emerges through human interactions. Building on, yet also going beyond, traditional definitions of peace as something to be found through inner reflection (virtue ethics), as the product of reason, contracts, and institutions (Enlightenment philosophy), and as the absence of different forms of violence (modern peace research), this paper introduces a new meso-level theory of networks, emphasizing the importance of connections, interactions, and relationships in the physical and online worlds. The paper is structured around three main objectives: conceptualizing relational peace in terms of the quantity and quality of interactions, mapping these interactions into networks of peace, and examining how these networks interact with their environment, including the influence of digital transformation and artificial intelligence. By integrating insights from ethical and peace research literature, the paper makes theoretical, conceptual, and methodological contributions towards understanding peace as an emergent property of human behavior. Through this approach, the paper aims to provide clarity on how peace (and violence) emerges through interactions and relations in an increasingly networked and digitalized global society, offering a foundation for future empirical research and concerted policy action in this area. It highlights the need for bridging normative and descriptive sciences to better understand and promote peace in the digital age.
Recent studies utilizing AI-driven speech-based Alzheimer’s disease (AD) detection have achieved remarkable success in detecting AD dementia through the analysis of audio and text data. However, detecting AD at the early stage of mild cognitive impairment (MCI) remains challenging due to the lack of sufficient training data and imbalanced diagnostic labels. Motivated by recent developments in Generative AI (GAI) and Large Language Models (LLMs), we propose an LLM-based data generation framework that leverages prior knowledge encoded in LLMs to generate new data samples. Our framework introduces two novel data generation strategies, namely cross-lingual and counterfactual data generation, facilitating out-of-distribution learning over new data samples to reduce biases in MCI label prediction caused by the systematic underrepresentation of MCI subjects in the AD speech dataset. The results demonstrate that our proposed framework improves MCI detection sensitivity and F1-score on average by up to 38% and 31%, respectively. Furthermore, key speech markers for predicting MCI before and after LLM-based data generation have been identified, enhancing our understanding of how the novel data generation approach reduces MCI label prediction biases and shedding new light on speech-based MCI detection under low-resource data constraints. Our methodology offers a generalized data generation framework for improving downstream prediction tasks in cases where limited and/or imbalanced data present significant challenges to AI-driven health decision-making. Future work can incorporate more datasets and exploit additional acoustic features for speech-based MCI detection.
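The two generation strategies described above can be illustrated with a minimal prompt-construction sketch. The prompt templates, label names, and helper functions below are hypothetical assumptions for illustration, not the authors' actual implementation; a real pipeline would send these prompts to an LLM and post-process the responses.

```python
# Hypothetical sketch of the two LLM data-generation strategies.
# Templates, labels, and function names are illustrative assumptions;
# a real pipeline would pass these prompts to an LLM API.

def counterfactual_prompt(transcript: str, source_label: str) -> str:
    """Ask the model to rewrite a speech transcript as if produced by a
    subject with the opposite diagnostic label (e.g. healthy -> MCI)."""
    target = "MCI" if source_label == "healthy" else "healthy"
    return (
        f"The following speech transcript is from a {source_label} subject.\n"
        f"Rewrite it so it reflects linguistic markers typical of a {target} "
        f"subject, keeping the topic unchanged.\n\nTranscript: {transcript}"
    )

def cross_lingual_prompt(transcript: str, target_language: str) -> str:
    """Ask the model to produce a comparable transcript in another language,
    enlarging the training distribution for the underrepresented label."""
    return (
        f"Translate and naturalise the following picture-description "
        f"transcript into {target_language}, preserving hesitations and "
        f"repetitions:\n\n{transcript}"
    )

prompts = [
    counterfactual_prompt("the boy is... the boy is taking a cookie", "healthy"),
    cross_lingual_prompt("the boy is taking a cookie from the jar", "Mandarin"),
]
```

Generated samples produced from such prompts would then be added to the MCI class to counter its underrepresentation before retraining the detector.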
The EU’s Common European Data Space (CEDS) aims to create a single market for data-sharing in Europe, build trust among stakeholders, uphold European values, and benefit society. However, if norms and social values are not considered, the values of the EU and the benefits for the common good of European society may be overlooked in favour of the economic benefits to organisations. We propose that the concept of “data commons” is relevant for defining openness versus enclosure of data in data spaces and is important when considering the balance and trade-off between individual (market) and collective (societal) benefits from data-sharing within the CEDS. Commons are open-access resources governed by a group, either formally by regulation or informally by local customs. Applying the data commons to the CEDS would promote data-sharing for the “common good.” However, we propose that the data commons approach should be balanced with the market-based approach to the CEDS in an inclusive hybrid data governance approach that meets material, price-driven interests while stimulating collective learning in online networks to form social communities that offer participants a shared identity and social recognition.
In today’s world, smart algorithms—artificial intelligence (AI) and other intelligent systems—are pivotal for promoting the development agenda. They offer novel support for decision-making across policy planning domains, such as analysing poverty alleviation funds and predicting mortality rates. To comprehensively assess their efficacy and implications in policy formulation, this paper conducts a systematic review of 207 publications. The analysis underscores their integration within and across stages of the policy planning cycle: problem diagnosis and goal articulation; resource and constraint identification; design of alternative solutions; outcome projection; and evaluation. However, disparities exist in smart algorithm applications across stages, economic development levels, and Sustainable Development Goals (SDGs). While these algorithms predominantly focus on resource identification (29%) and contribute significantly to designing alternatives—such as long-term national energy policies—and projecting outcomes, including predicting multi-scenario land-use ecological security strategies, their application in evaluation remains limited (10%). Additionally, low-income nations have yet to fully harness AI’s potential, while upper-middle-income countries effectively leverage it. Notably, smart algorithm applications for SDGs also exhibit unevenness, with more emphasis on SDG 11 than on SDG 5 and SDG 17. Our study identifies literature gaps. Firstly, despite theoretical shifts, a disparity persists between physical and socioeconomic/environmental planning applications. Secondly, there is limited attention to policy-making in development initiatives, which is critical for improving lives. Future research should prioritise developing adaptive planning systems using emerging powerful algorithms to address uncertainty and complex environments. 
Ensuring algorithmic transparency, human-centered approaches, and responsible AI are crucial for AI accountability, trust, and credibility.
The alignment of artificial intelligence (AI) systems with societal values and the public interest is a critical challenge in the field of AI ethics and governance. Traditional approaches, such as Reinforcement Learning with Human Feedback (RLHF) and Constitutional AI, often rely on pre-defined high-level ethical principles. This article critiques these conventional alignment frameworks through the philosophical perspectives of pragmatism and public interest theory, arguing against their rigidity and disconnect with practical impacts. It proposes an alternative alignment strategy that reverses the traditional logic, focusing on empirical evidence and the real-world effects of AI systems. By emphasizing practical outcomes and continuous adaptation, this pragmatic approach aims to ensure that AI technologies are developed according to the principles that are derived from the observable impacts produced by technology applications.
The increasing popularity of large language models has not only led to widespread use but has also brought various risks, including the potential for systematically spreading fake news. Consequently, the development of classification systems such as DetectGPT has become vital. These detectors are vulnerable to evasion techniques, as demonstrated in an experimental series: systematic changes to the generative model’s temperature proved shallow-learning detectors to be the least reliable (Experiment 1). Fine-tuning the generative model via reinforcement learning circumvented BERT-based detectors (Experiment 2). Finally, rephrasing led to a >90% evasion of zero-shot detectors such as DetectGPT, although the texts stayed highly similar to the originals (Experiment 3). A comparison with existing work highlights the better performance of the presented methods. Possible implications for society and further research are discussed.
In recent years, there has been a global trend among governments to provide free and open access to data collected by Earth-observing satellites with the purpose of maximizing the use of this data for a broad array of research and applications. Yet, there are still significant challenges facing non-remote sensing specialists who wish to make use of satellite data. This commentary explores an illustrative case study to provide concrete examples of these challenges and barriers. We then discuss how the specific challenges faced within the case study illuminate some of the broader issues in data accessibility and utility that could be addressed by policymakers that aim to improve the reach of their data, increase the range of research and applications that it enables, and improve equity in data access and use.
This article proposes Bayesian adaptive trials (BATs) as both an efficient method to conduct trials and a unifying framework for the evaluation of social policy interventions, addressing the limitations inherent in traditional methods such as randomized controlled trials. Recognizing the crucial need for evidence-based approaches in public policy, the proposed approach aims to lower barriers to the adoption of evidence-based methods and to align evaluation processes more closely with the dynamic nature of policy cycles. BATs, grounded in decision theory, offer a dynamic, “learning as we go” approach, enabling the integration of diverse information types and facilitating a continuous, iterative process of policy evaluation. BATs’ adaptive nature is particularly advantageous in policy settings, allowing for more timely and context-sensitive decisions. Moreover, BATs’ ability to value potential future information sources positions them as an optimal strategy for sequential data acquisition during policy implementation. While acknowledging the assumptions and models intrinsic to BATs, such as prior distributions and likelihood functions, this article argues that these are advantageous for decision-makers in social policy, effectively merging the best features of various methodologies.
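The “learning as we go” allocation at the heart of a BAT can be sketched with a minimal Thompson-sampling loop for a two-arm policy trial with binary outcomes. The success rates, trial size, and Beta(1, 1) priors below are invented for illustration and are not taken from the article.

```python
import random

random.seed(0)

# Minimal Bayesian adaptive trial sketch: two policy interventions with
# true success probabilities that are unknown to the trial. Beta(1, 1)
# priors are updated after every observation, and each participant is
# assigned by Thompson sampling: draw once from each arm's posterior and
# pick the arm with the larger draw.
TRUE_RATES = [0.2, 0.8]          # illustrative values, not from the article
successes = [0, 0]
failures = [0, 0]

for _ in range(200):
    draws = [random.betavariate(1 + successes[a], 1 + failures[a])
             for a in range(2)]
    arm = draws.index(max(draws))
    if random.random() < TRUE_RATES[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

allocations = [successes[a] + failures[a] for a in range(2)]
# As evidence accumulates, allocation shifts toward the better intervention,
# which is the adaptive behaviour the article highlights.
```

This is the simplest possible instance; real BATs add interim stopping rules, covariates, and explicit loss functions from decision theory.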
This paper demonstrates how learning the structure of a Bayesian network, often used to predict and represent causal pathways, can be used to inform policy decision-making.
We show that Bayesian networks are a rigorous and interpretable representation of interconnected factors that affect the complex environment in which policy decisions are made. Furthermore, Bayesian structure learning differentiates between proximal or immediate factors and upstream or root causes, offering a comprehensive set of potential causal pathways leading to specific outcomes.
We show how these causal pathways can provide critical insights into the impact of a policy intervention on an outcome. Central to our approach is the integration of causal discovery within a Bayesian framework, which considers the relative likelihood of possible causal pathways rather than only the most probable pathway.
We argue this is an essential part of causal discovery in policy making because the complexity of the decision landscape inevitably means that there are many near equally probable causal pathways. While this methodology is broadly applicable across various policy domains, we demonstrate its value within the context of educational policy in Australia. Here, we identify pathways influencing educational outcomes, such as student attendance, and examine the effects of social disadvantage on these pathways. We demonstrate the methodology’s performance using synthetic data and its usefulness by applying it to real-world data. Our findings in the real example highlight the usefulness of Bayesian networks as a policy decision tool and show how data science techniques can be used for practical policy development.
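The core idea of weighing the relative likelihood of candidate structures, rather than committing only to the single best one, can be shown in miniature. The sketch below compares exact Beta-Bernoulli marginal likelihoods for two candidate structures over two binary variables; the variable names and data are invented for illustration, and real structure learning would search over many variables with dedicated tooling.

```python
import math
import random

random.seed(1)

def log_ml(successes: int, failures: int) -> float:
    """Log marginal likelihood of Bernoulli data under a Beta(1, 1) prior:
    integral of p^s * (1 - p)^f dp = B(s + 1, f + 1)."""
    return (math.lgamma(successes + 1) + math.lgamma(failures + 1)
            - math.lgamma(successes + failures + 2))

# Toy data: "attendance" depends strongly on "disadvantage" (illustrative
# names and probabilities, not the article's real data).
data = []
for _ in range(300):
    x = random.random() < 0.5                   # disadvantage indicator
    y = random.random() < (0.3 if x else 0.8)   # attendance outcome
    data.append((x, y))

def score_independent(obs):
    """Structure with no edge: X and Y modelled independently."""
    sx = sum(x for x, _ in obs)
    sy = sum(y for _, y in obs)
    n = len(obs)
    return log_ml(sx, n - sx) + log_ml(sy, n - sy)

def score_edge(obs):
    """Structure with edge X -> Y: Y modelled conditionally on X."""
    sx = sum(x for x, _ in obs)
    n = len(obs)
    s = log_ml(sx, n - sx)
    for xv in (False, True):
        ys = [y for x, y in obs if x == xv]
        s += log_ml(sum(ys), len(ys) - sum(ys))
    return s

logs = {"no edge": score_independent(data), "X -> Y": score_edge(data)}
z = max(logs.values())
weights = {m: math.exp(v - z) for m, v in logs.items()}
total = sum(weights.values())
posterior = {m: w / total for m, w in weights.items()}
# With a uniform prior over the two structures, nearly all posterior mass
# lands on the edge model for this strongly dependent data, while the
# alternative structure retains an explicit (tiny) relative likelihood.
```

Keeping the full posterior over structures, instead of only the winner, is what lets the approach report many near-equally probable causal pathways.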
Data for Policy (dataforpolicy.org), a trans-disciplinary community of research and practice, has emerged around the application and evaluation of data technologies and analytics for policy and governance. Research in this area has involved cross-sector collaborations, but the areas of emphasis have previously been unclear. Within the Data for Policy framework of six focus areas, this report offers a landscape review of Focus Area 2: Technologies and Analytics. Taking stock of recent advancements and challenges can help shape research priorities for this community. We highlight four commonly used technologies for prediction and inference that leverage datasets from the digital environment: machine learning (ML) and artificial intelligence systems, the internet-of-things, digital twins, and distributed ledger systems. We review innovations in research evaluation and discuss future directions for policy decision-making.
The various global refugee and migration events of the last few years underscore the need for advancing anticipatory strategies in migration policy. The struggle to manage large inflows (or outflows) highlights the demand for proactive measures informed by an understanding of likely future developments. Anticipatory methods, ranging from predictive models to foresight techniques, emerge as valuable tools for policymakers. These methods, now bolstered by advancements in technology and leveraging nontraditional data sources, can offer a pathway to develop more precise, responsive, and forward-thinking policies.
This paper seeks to map out the rapidly evolving domain of anticipatory methods in the realm of migration policy, capturing the trend toward integrating quantitative and qualitative methodologies and harnessing novel tools and data. It introduces a new taxonomy designed to organize these methods into three core categories: Experience-based, Exploration-based, and Expertise-based. This classification aims to guide policymakers in selecting the most suitable methods for specific contexts or questions, thereby enhancing migration policies.
Although Sub-Saharan Africa (SSA) lags behind the global average in digital technology adoption, there has been substantial progress in Information and Communication Technology (ICT) access and use, which plays a crucial role in improving quality of life across the region. However, digital gaps persist within the continent, even as technology adoption across African nations continues to grow. This paper explores the factors that contribute to differing adoption rates among three digital technologies in SSA: mobile phones, fixed broadband, and fixed telephones. The methodology utilizes panel regression analysis to examine data sourced from the World Bank, covering 48 SSA countries from 2006 to 2022. The findings show consistent growth in mobile phone subscriptions, in contrast to fixed telephone and broadband internet subscriptions, which show stagnant progress. Furthermore, infrastructure and human capital are the most significant factors, alongside other influences. The results provide African governments with insightful advice on addressing the digital divide and accelerating digital transformation.
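The panel-regression setup can be sketched with a minimal one-way (country) fixed-effects estimator: demean each country's series and run OLS on the demeaned data, which removes time-invariant country effects. The toy panel below is invented for illustration; the paper's actual data covers 48 countries over 2006–2022.

```python
# Minimal one-way (country) fixed-effects sketch of a panel regression:
# demean within each country, then OLS on the demeaned observations.
# The toy numbers below are invented, not the World Bank panel.

def fixed_effects_slope(panel):
    """panel: dict mapping country -> list of (x, y) observations over time.
    Returns the within-estimator slope for a single regressor."""
    num = den = 0.0
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)   # within-country means
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# Toy panel: y = 2*x + country-specific effect, so the fixed-effects slope
# recovers 2 even though the country intercepts differ.
panel = {
    "A": [(1, 12), (2, 14), (3, 16)],   # country effect +10
    "B": [(1, -3), (2, -1), (4, 3)],    # country effect -5
}
slope = fixed_effects_slope(panel)
print(round(slope, 6))  # 2.0
```

Production analyses would use a dedicated panel-econometrics library with standard errors clustered by country, but the within transformation shown here is the core of the method.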
We apply moral foundations theory (MFT) to explore how the public conceptualizes the first eight months of the conflict between Ukraine and the Russian Federation (Russia). Our analysis includes over 1.1 million English tweets related to the conflict over the first 36 weeks. We used Linguistic Inquiry and Word Count (LIWC) and a moral foundations dictionary to identify tweets’ moral components (care, fairness, loyalty, authority, and sanctity) from the United States, pre- and post-Cold War NATO countries, Ukraine, and Russia. Following an initial spike at the beginning of the conflict, tweet volume declined and stabilized by week 10. The level of moral content varied significantly across the five regions and the five moral components. Tweets from the different regions included significantly different moral foundations to conceptualize the conflict. Across all regions, tweets were dominated by loyalty content, while fairness content was infrequent. Moral content over time was relatively stable, and variations were linked to reported conflict events.
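Dictionary-based scoring of the kind LIWC performs can be sketched simply: count token matches against foundation-keyed word lists and normalise by token count. The tiny dictionary below is invented for illustration and is not the actual Moral Foundations Dictionary, which contains hundreds of stemmed terms per foundation.

```python
import re

# Toy moral-foundations dictionary (illustrative entries only; the real
# Moral Foundations Dictionary is far larger and uses word stems).
MFD = {
    "care":      {"protect", "suffer", "safe"},
    "fairness":  {"fair", "justice", "equal"},
    "loyalty":   {"ally", "betray", "nation"},
    "authority": {"obey", "order", "law"},
    "sanctity":  {"pure", "sacred", "disgust"},
}

def moral_profile(text: str) -> dict:
    """Share of tokens matching each foundation's word list."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {f: sum(t in words for t in tokens) / n for f, words in MFD.items()}

profile = moral_profile("We must protect our nation and its allies; betray no ally.")
# Loyalty terms dominate this example, mirroring the loyalty-heavy pattern
# the study reports across regions.
```

Aggregating such per-tweet profiles by region and week yields time series like those analysed in the study.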
Data for Policy (dataforpolicy.org), a global community, focuses on policy–data interactions by exploring how data can be used for policy in an ethical, responsible, and efficient manner. Within its journal, six focus areas, including Data for Policy Area 1: Digital & Data-driven Transformations in Governance, were established to delineate the evolving research landscape from the Data for Policy Conference series. This review addresses the absence of a formal conceptualization of digital and data-driven transformations in governance within this focus area. The paper achieves this by providing a working definition, mapping current research trends, and proposing a future research agenda centered on three core transformations: (1) public participation and collective intelligence; (2) relationships and organizations; and (3) open data and government. The paper outlines research questions and connects these transformations to related areas such as artificial intelligence (AI), sustainable smart cities, digital divide, data governance, co-production, and service quality. This contribution forms the foundational development of a research agenda for academics and practitioners engaged in or impacted by digital and data-driven transformations in policy and governance.
Recent developments in national health data platforms have the potential to significantly advance medical research, improve public health outcomes, and foster public trust in data governance. Across Europe, initiatives such as the NHS Research Secure Data Environment in England and the Data Room for Health-Related Research in Switzerland are underway, reflecting examples analogous to the European Health Data Space in two non-EU nations. Policy discussions in England and Switzerland emphasize building public trust to foster participation and ensure the success of these platforms. Central to building public trust is investing effort into developing and implementing public involvement activities. In this commentary, we refer to three national research programs, namely the UK Biobank, Genomics England, and the Swiss Health Study, which implemented effective public involvement activities and achieved high participation rates. The public involvement activities used within these programs are presented following established guiding principles for fostering public trust in health data research. Under this lens, we provide actionable policy recommendations to inform the development of trust-building public involvement activities for national health data platforms.
What drives changes in the thematic focus of state-linked manipulated media? We study this question in relation to a long-running Iranian state-linked manipulated media campaign that was uncovered by Twitter in 2021. Using a variety of machine learning methods, we uncover and analyze how this manipulation campaign’s topical themes changed in relation to rising Covid-19 cases in Iran. By using the topics of the tweets in a novel way, we find that increases in domestic Covid-19 cases engendered a shift in Iran’s manipulated media focus away from Covid-19 themes and toward international finance- and investment-focused themes. These findings underscore (i) the potential for state-linked manipulated media campaigns to be used for diversionary purposes and (ii) the promise of machine learning methods for detecting such behaviors.