
Leveraging AI in peace processes: A framework for digital dialogues

Published online by Cambridge University Press:  22 September 2025

Martin Wählisch*
Affiliation:
Centre for Artificial Intelligence in Government (CAIG), University of Birmingham, Birmingham, UK
Felix Kufus
Affiliation:
CMI - Martti Ahtisaari Peace Foundation, Helsinki, Finland
Corresponding author: Martin Wählisch; Email: m.waehlisch@bham.ac.uk

Abstract

The integration of artificial intelligence (AI)-driven technologies into peace dialogues offers both innovative possibilities and critical challenges for contemporary peacebuilding practice. This article proposes a context-sensitive taxonomy of digital deliberation tools designed to guide the selection and adaptation of AI-assisted platforms in conflict-affected environments. Moving beyond static typologies, the framework accounts for variables such as scale, digital literacy, inclusivity, security, and the depth of AI integration. By situating digital peace dialogues within broader peacebuilding and digital democracy frameworks, the article examines how AI can enhance participation, scale deliberation, and support knowledge synthesis, while also highlighting emerging concerns around algorithmic bias, digital exclusion, and cybersecurity threats. Drawing on case studies involving the United Nations (UN) and civil society actors, the article underscores the limitations of one-size-fits-all approaches and makes the case for hybrid models that balance AI capabilities with human facilitation to foster trust, legitimacy, and context-responsive dialogue. The analysis contributes to peacebuilding scholarship by engaging with the ethics of AI, the politics of digital diplomacy, and the sustainability of technological interventions in peace processes. Ultimately, the study argues for a dynamic, adaptive approach to AI integration, continuously attuned to the ethical, political, and socio-cultural dimensions of peacebuilding practice.

Information

Type
Research Article
Creative Commons
CC BY-NC-SA 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Policy Significance Statement

Policymakers are increasingly called to integrate digital tools in peace processes, but without clear guidance, efforts risk exclusion, mistrust, and unintended harm. This study offers a practical framework to help decision-makers assess when and how to responsibly use artificial intelligence (AI)-assisted digital dialogues. Drawing on direct implementation experience, it outlines key factors, such as digital access, participant safety, and cultural sensitivity, that influence success. It also flags risks like algorithmic bias and data misuse, emphasizing the need for hybrid models that combine technology with human facilitation. By centering ethical, inclusive, and context-specific design, the research equips peacebuilders and donors to make informed, value-driven choices, ensuring that digital dialogues strengthen, rather than undermine, trust and legitimacy in fragile settings.

1. Introduction

Artificial intelligence (AI) is increasingly permeating the field of international studies, including in the realm of peace and security, shaping how conflicts are fought and resolved. The growing integration of digital technologies into peacebuilding efforts has opened new possibilities for fostering dialogue, yet it has also raised concerns regarding their compatibility with the traditionally human-centric nature of conflict resolution. Despite the promise of AI-driven tools, skepticism persists over whether technology can, or should, replace face-to-face engagement in peace processes. However, the COVID-19 pandemic has catalyzed the adoption of digital tools for dialogue, compelling both international organizations and nongovernmental actors to explore AI-supported approaches to peacebuilding. Notably, the United Nations’ (UN) application of digital dialogues in conflict zones has attracted considerable international media attention, simultaneously nurturing a growing community of practitioners dedicated to advancing so-called “PeaceTech” generally and digital dialogues specifically. In 2020, the Office of the Special Envoy for Yemen, in collaboration with the UN Department of Political and Peacebuilding Affairs’ Innovation Cell, conducted the first-ever AI-powered digital consultation engaging hundreds of Yemeni civil society representatives (Office of the Special Envoy of the Secretary-General for Yemen [OSESGY], 2020). This virtual dialogue, powered by the private sector tool Remesh.AI, enabled participants to anonymously share their perspectives on a nationwide ceasefire, the political peace process, and humanitarian concerns, thereby informing the UN’s mediation strategies. In 2020–2021, the United Nations Support Mission in Libya (UNSMIL) actively employed AI-powered large-scale digital dialogues to engage diverse groups of the population in the peacebuilding process to discuss the security, economic, and political situation in their country, emphasizing their collective call for the cessation of foreign intervention and the unification of national institutions (UNSMIL, 2020). Nongovernmental organizations (NGOs), such as the Finnish CMI-Martti Ahtisaari Peace Foundation, have deepened these efforts, for example through digital dialogues conducted in Sudan in 2023 and Yemen in 2024–25, focusing on women’s groups, networks, and alliances, and on youth and Resistance Committees (Poutanen and Kufus, 2024). These developments underscore the evolving recognition that digital peacebuilding represents not merely a temporary adaptation but an integral and continually expanding dimension of contemporary peace processes.

This paper differs from existing research by providing a structured, policy-practitioner-relevant framework for digital peace dialogues, drawing on comparative experience rather than single-case studies or abstract models. What this article adds is a systematic, context-sensitive, and practitioner-informed framework focused specifically on AI-assisted digital dialogues in peace processes, which is underdeveloped in current literature (Bell, 2024; Ginty and Firchow, 2024; Giovanardi, 2024). While the adoption of AI in peacebuilding has been part of scholarly exploration, much of the existing research remains descriptive, often focused on individual case studies (Masood Alavi et al., 2022; Niyitunga, 2024) and general considerations of digital technology in peacebuilding (Firchow et al., 2017; Hirblinger et al., 2023) rather than a systematic assessment of opportunities and challenges associated with digital dialogues in peace processes. Studies have explored digital deliberation and online participation to strengthen democracy (Rose and Sæbø, 2010; Shin, 2024), yet their applicability in fragile and conflict-affected environments remains under-theorized. AI-driven approaches, such as natural language processing (NLP) techniques, including content clustering, sentiment analysis, and automated dialogue moderation, are increasingly used in conflict resolution settings (Panic and Arthur, 2024). However, their specific risks, including algorithmic bias, digital exclusion, and potential misuse by state and non-state actors, remain insufficiently examined in the context of peace processes. These risks are exacerbated by the asymmetrical power dynamics and disinformation challenges inherent in conflict settings, raising ethical and operational concerns about AI’s role in peacebuilding. To address these challenges, this article develops a framework that systematically assesses the affordances and limitations of AI-driven digital dialogue efforts within the broader peacebuilding landscape.

To guide this investigation, this article explores the following key questions: What are the essential parameters to consider when employing AI-driven tools in digital peace dialogues? What ethical and operational challenges arise in integrating AI into peacebuilding, particularly regarding inclusivity, contextual appropriateness, and the risk of technological determinism? What conceptual framework and taxonomy best capture these variations and challenges, enabling systematic comparison and evaluation?

Methodologically, this study conducts a review of existing literature on digital deliberation to establish a foundational understanding, with a particular focus on the increasingly prominent role of AI across various domains. Subsequently, the core dilemmas and challenges associated with AI-driven digital dialogues are identified and analyzed, informed not only by scholarly sources but also by practical experience and reflective insights from the authors’ direct involvement as former practitioners at the United Nations and current advisers for digital technologies at CMI-Martti Ahtisaari Peace Foundation. The exploration is further enriched by insights gained from extensive practitioner exchanges and international discussions on digital dialogues in which the authors have participated since 2019. Throughout this article, we use peacebuilding to refer to the broader set of activities aimed at addressing the root causes of conflict and supporting sustainable peace, including reconciliation, transitional justice, and peace mediation. The term peace mediation is used more narrowly to describe facilitated dialogue and negotiation processes between conflict parties. While our framework applies to both peacebuilding and mediation, we use these terms intentionally to reflect their distinct, yet interconnected, roles.

This research is not merely an academic exercise but an effort to fill a gap: guiding practitioners in integrating digital approaches into peace dialogues responsibly and appropriately, while also aiding scholars in understanding the practical decision-making involved in digital dialogues. The focus of this article intentionally extends beyond merely presenting and evaluating digital dialogue tools; rather, it critically explores the conditions under which AI should or should not be deployed in peace consultations, scrutinizing the if, how, when, and why of such technological interventions. In an era where AI is often promoted as a universal solution to complex societal issues, there is a tangible risk of technological fetishism, pursuing digital solutions without thoroughly considering their ethical, political, and social consequences. This risk reflects concerns raised in science and technology studies (STS) and critiques of “tech-solutionism”, the assumption that complex political or social problems can be addressed solely through technological means (Nicolaidis and Giovanardi, 2022). By proposing a structured approach that goes beyond simply weighing advantages and disadvantages, this article seeks to transcend a purely tool-centric view, offering a systematic framework that prioritizes intentionality, effectiveness, and ethical integrity in the application of AI for peace dialogues.

2. Literature review

Digital dialogues and other technology-assisted forms of public consultation have been around for some time, advancing participatory governance and deliberative democracy (Lezaun and Soneryd, 2007; Rose and Sæbø, 2010; Mikhaylovskaya, 2024). They have gained renewed momentum due to recent advancements in generative AI, such as OpenAI’s ChatGPT, which enable more accurate and rapid summarization of public consultations compared to earlier topic modeling tools that relied primarily on statistical methods to identify prevalent themes of public interest and concern. Governments are increasingly investing in AI-powered consultation systems to elevate citizen voices in policy planning processes (Taylor et al., 2024). A notable recent example is the UK Incubator for Artificial Intelligence (i.AI), which developed the open-source application Consult, utilizing advanced large language models (LLMs) to enable governments to analyze public consultation responses more quickly and efficiently (Footnote 1). The transformative potential of AI has been increasingly recognized as an opportunity to enhance information accessibility, enable broad-based citizen engagement, and facilitate democratic dialogue at scale (Fishkin et al., 2025). In parallel, deliberative technology has been identified as a promising means to facilitate the alignment of AI with prevailing social values and collective human will (Konya et al., 2023a). As a result, these advancements have amplified focus on the nexus of AI and digital dialogues, attracting significant attention from both practitioners and scholars, as well as among the leading AI companies.

Existing scholarly and policy frameworks within this broader public sphere have largely been shaped by perspectives rooted in governmental technology (GovTech), aimed at enhancing government operations, and civic technology (CivicTech), intended to empower citizens. However, these existing conceptual approaches often fail to fully capture the dynamic and embedded nature of digital dialogues in peacebuilding contexts, where technology and social processes coevolve (Hirblinger, 2023). Especially in conflict contexts, technological infrastructure is often fragile, online discourse can become highly polarized, and internet access may be subject to control or manipulation, all of which pose significant challenges to the effective implementation of digital dialogues. In the following, we examine the existing, albeit limited, literature on digital dialogues in peace processes in greater detail, explore the emerging role of AI in this field, and discuss what is genuinely new about its use in this context.

2.1. Existing frameworks and their shortcomings

Recent scholarship on digital peacebuilding has emphasized the need to view digital dialogue platforms not merely as neutral tools but as sociotechnical systems in which technology and social processes mutually influence each other (Schirch, 2020; Hirblinger et al., 2024). Rather than assuming these platforms inherently democratize participation, scholars highlight how their design and implementation are deeply influenced by broader governance structures, existing power dynamics, and underlying epistemic frameworks (Hirblinger et al., 2023). Thus, digital dialogues may inadvertently reflect and reinforce prevailing inequalities, power imbalances, and biases rather than automatically improving democratic engagement. This perspective underscores the importance of critically examining digital peacebuilding tools within their specific political, social, and economic contexts to ensure genuinely inclusive and effective dialogue.

Previous attempts to classify the role of technology in peacebuilding have focused on core functional areas, such as data processing, communication, gaming, and engagement (Larrauri and Kahl, 2013). Following this proposed logic, engagement encompasses technologies that empower individuals and communities to actively influence local peacebuilding processes through online collaboration tools, civic participation platforms, and crowd-funding initiatives. Complementing this taxonomy of functions, Larrauri and Kahl (2013) classified peacebuilding programs into four primary areas where these technologies can be effectively applied: early warning and early response systems, fostering collaboration and dialogue between conflicting groups, promoting peaceful attitudes, and supporting communities in influencing pro-peace policies. However, their taxonomy stops short of fully exploring digital dialogues’ specific mechanisms, such as virtually mediated dialogue sessions or facilitated online exchanges that intentionally bring together conflict stakeholders in sustained, moderated conversations.

Similarly, Schirch (2020) has delineated a framework with distinct areas where digital technologies intersect with peacebuilding efforts, including online intergroup dialogue facilitation, digital public opinion polling, digital conflict analysis, digital election monitoring, and digital fact checking. Schirch emphasizes that digital peacebuilding extends beyond mere technological tools; it represents a holistic approach to fostering social cohesion, civic engagement, and human security in the digital age. She argues that these digital platforms and tools can profoundly transform dialogues by promoting trust and mutual understanding among diverse groups, which are essential components in peace processes. Furthermore, Schirch traces the evolution of technology’s role in peacebuilding through five distinct generations, highlighting the dynamic interplay between technological advancements and peace efforts, and pointing out how each generation has uniquely contributed to creating more inclusive, participatory, and responsive mechanisms for conflict resolution. Her framework underscores that successful digital dialogues require careful design and ethical considerations to mitigate risks such as misinformation, polarization, and potential digital divides, thus advocating for strategic, context-sensitive applications of technology to achieve sustainable and equitable peace outcomes. However, while Schirch’s framework provides insights into elements of digital peacebuilding, it was developed before the emergence of ChatGPT and other generative AI systems, and it does not extensively address how to operationalize digital dialogues specifically.

Nolte-Laird (2021) proposed a more comprehensive and theoretically robust framework for digital peace dialogues, deeply anchored in the dialogical philosophies of Martin Buber and Paulo Freire. Central to this framework is the premise that authentic, reciprocal communication is essential in fostering mutual understanding and achieving positive peace, even in virtual environments. This also includes the importance of meticulously establishing a conducive virtual environment to promote trust and openness among participants, ensuring that selected digital platforms facilitate seamless, inclusive, and accessible interactions (Nolte-Laird, 2021). Additionally, Nolte-Laird highlighted the unique potentialities of digital media, such as transcending geographical divides, enabling asynchronous communication, and utilizing multimedia tools to enrich the dialogue experience. Acknowledging the inherent challenges of virtual peacebuilding, such as technological barriers, risks of miscommunication, and varied levels of digital literacy, she advocates for proactive measures, including technical support, digital inclusivity initiatives, and the implementation of clear communication protocols. However, Nolte-Laird’s framework, while comprehensive and theoretically robust, notably omits critical discussion of emerging challenges posed by AI in digital peace dialogues. As AI increasingly mediates online interactions, through automated moderation, content curation, and algorithmic influence on participant engagement, it introduces novel ethical, practical, and psychological complexities that remain unaddressed in her work.

While existing research provides helpful considerations, it falls short in establishing a unified classification system capable of encompassing the full spectrum of approaches to digital dialogue. Current frameworks predominantly conceptualize digital deliberation narrowly, as either a government-initiated participatory process, a computational modeling challenge, or as a developmental instrument employed in under-resourced regions. Consequently, these frameworks inadequately represent the extensive diversity observed in contemporary digital dialogue platforms, particularly in terms of their varying functionalities, degrees of AI integration, and broader sociopolitical implications.

2.2. What is new about AI-powered digital dialogues?

The emerging debate among scholars and practitioners in digital dialogue and digital deliberation contexts increasingly centers around the implications of integrating AI. Recent publications underscore that AI, particularly LLMs, holds significant promise for revitalizing and scaling public deliberation, creating more inclusive, diverse, and nuanced dialogue platforms (Konya et al., 2023b). Yet, this integration is fraught with critical concerns, including risks of amplifying polarization, challenges of maintaining user trust, and potential ethical pitfalls related to synthetic participation. The discourse emphasizes the necessity of balanced, principled approaches to deploying AI, stressing that while the technology could reshape the nature of public discourse toward a more participatory and democratic ideal, achieving these benefits depends on carefully addressing the associated risks and unintended consequences.

AI introduces several novel dimensions to digital dialogues that distinguish contemporary platforms from their predecessors. These new dimensions emerge from significant advancements in machine learning, NLP, and automated decision-making algorithms. Collectively, these technological innovations enable digital dialogue systems to operate with unprecedented degrees of autonomy, personalization, and scalability, thereby reshaping both their functions and sociopolitical implications. First, AI-powered digital dialogues exhibit heightened capacities for adaptive personalization. Unlike traditional dialogue systems that relied heavily on predefined scripts and static interfaces, modern AI systems employ sophisticated NLP techniques to dynamically interpret and respond to user inputs. Typically, AI integration into digital dialogues can manifest in several distinct ways:

  1. Interactive moderation: AI bots actively facilitate or moderate conversations between stakeholders, guiding discussions, clarifying questions, and managing conversational flow.

  2. Interactive engagement (survey bots): AI-driven conversational agents directly interact with participants, gathering responses or feedback in a structured yet interactive manner.

  3. Sensemaking and analytical support: AI tools like Remesh.AI or Talk to the City analyze and summarize large volumes of participant input, performing viewpoint clustering, sentiment analysis, peer voting, and consensus visualization, thus enriching and driving dialogue with analytical insights.

Recent advancements in transformer-based language models, such as GPT (generative pre-trained transformer) and BERT (bidirectional encoder representations from transformers), exemplify these capabilities by accurately interpreting nuances in human language and generating coherent, contextually relevant responses in real time (Rawat et al., 2024). This multilayered integration allows dialogues to evolve organically, fostering authentic, context-sensitive engagements with enhanced scalability and analytic depth.
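To make the sensemaking function concrete, the following is a minimal sketch of viewpoint clustering over free-text participant responses, using TF-IDF features and k-means. It is an illustrative open-source analogue, not the proprietary pipeline of Remesh.AI, Talk to the City, or any other platform named above; the sample responses are invented.

```python
# Illustrative viewpoint clustering for dialogue responses (not any named
# platform's actual method). Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "The ceasefire must hold before any political talks begin",
    "Humanitarian aid corridors are the most urgent priority",
    "Foreign interference has to stop so institutions can unify",
    "Aid deliveries are blocked; open the corridors first",
    "No talks without a verified nationwide ceasefire",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Label each cluster by its most characteristic terms as a rough "viewpoint".
terms = vectorizer.get_feature_names_out()
for c in range(kmeans.n_clusters):
    top = [terms[i] for i in kmeans.cluster_centers_[c].argsort()[::-1][:4]]
    members = [r for r, lab in zip(responses, kmeans.labels_) if lab == c]
    print(f"Cluster {c} ({', '.join(top)}): {len(members)} responses")
```

In a real deliberation, such machine-generated clusters would be reviewed by human analysts before being fed back to participants, consistent with the hybrid human–AI approach advocated throughout this article.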

Second, AI introduces significant advancements in scaling digital dialogues and inclusivity. Historically, the potential of digital deliberation has been constrained by logistical limitations, particularly concerning the number of participants, the complexity of moderating large-scale conversations, and handling large amounts of qualitative input. AI-driven moderation tools and content analysis algorithms address these issues by automating critical elements of discussion facilitation and providing advanced sensemaking. Systems employing machine-learning-based classifiers can identify and respond to toxic or disruptive contributions effectively. Additionally, automated summarization, categorization, and viewpoint clustering tools distill extensive discussions into accessible, actionable insights (Tessler et al., 2024). This capability not only facilitates broader public participation but also enhances the quality and manageability of deliberative processes.
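As a concrete illustration of machine-learning-based moderation support, the sketch below screens contributions with a publicly available toxicity classifier before they enter the shared discussion stream. The model name is one example from the open Hugging Face hub, not the classifier used by any platform discussed here, and the threshold is an arbitrary placeholder that a real deployment would calibrate.

```python
# Pre-filter flagging potentially toxic contributions for human moderator
# review (illustrative; model choice and threshold are placeholders).
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

def hold_for_review(message: str, threshold: float = 0.8) -> bool:
    """Return True if a human moderator should review the message first."""
    result = classifier(message)[0]  # e.g. {"label": "toxic", "score": 0.97}
    return result["label"] == "toxic" and result["score"] >= threshold

for msg in ["Thank you for sharing that perspective.",
            "People like you should be silenced."]:
    print(msg, "->", "hold" if hold_for_review(msg) else "publish")
```

Crucially, such a filter flags content for human review rather than deleting it automatically, keeping moderation decisions accountable to a person.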

Moreover, AI-powered digital dialogues can significantly aid situational awareness and early-warning capabilities within deliberative contexts. By analyzing extensive interaction data, AI systems can identify emerging topics, trends, and sentiment shifts in real time, thereby increasing stakeholders’ ability to recognize and respond swiftly to evolving conflict contexts. Advanced analytical tools employing techniques such as sentiment analysis and topic modeling enable the early detection of emerging issues, offering policymakers and peace practitioners valuable insight to better anticipate public concerns or emerging conflict drivers (Geurts et al., 2022).
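A minimal version of such trend detection can be expressed in a few lines: the sketch below computes a rolling daily sentiment average over dialogue contributions and flags abrupt negative shifts. The scores and alert threshold are invented for illustration; a production early-warning system would combine this with topic modeling and human verification.

```python
# Illustrative early-warning signal: flag sharp day-over-day drops in mean
# sentiment of dialogue contributions. Sentiment scores (-1..1) are assumed
# to come from an upstream classifier; the data here is fabricated.
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-03-01", "2024-03-01", "2024-03-02", "2024-03-03",
        "2024-03-04", "2024-03-04", "2024-03-05", "2024-03-05",
    ]),
    "sentiment": [0.4, 0.3, 0.2, 0.1, -0.3, -0.4, -0.5, -0.6],
})

daily = df.set_index("timestamp")["sentiment"].resample("D").mean()
change = daily.diff()

ALERT_DROP = -0.25  # placeholder threshold
for day, delta in change.items():
    if pd.notna(delta) and delta <= ALERT_DROP:
        print(f"ALERT {day.date()}: mean sentiment fell by {abs(delta):.2f}")
```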

The integration of AI also raises new ethical and sociopolitical considerations that distinguish contemporary digital dialogues from earlier forms. The extensive use of AI invites critical reflections on algorithmic bias, transparency, and accountability (Buhmann and Fieseler, 2023). Issues of algorithmic bias, arising from training data or model design, can inadvertently reinforce existing societal inequalities or exclude marginalized voices, contradicting the inclusive deliberative ideals these systems aim to embody. Thus, developing robust frameworks for algorithmic transparency and ethical oversight becomes imperative, shaping the development and governance of these advanced dialogue systems. Our reflexive stance also demands that we critically examine how these biases reflect broader geopolitical and linguistic asymmetries in AI development, which, in turn, influence whose voices are amplified or silenced in digital dialogues.

In conclusion, AI-powered digital dialogues represent a substantial departure from traditional digital engagement platforms, characterized by increased adaptivity, scalability, analytical capabilities, and ethical complexity. These novel attributes offer transformative opportunities for enhancing broad-based deliberation while simultaneously introducing new challenges requiring thoughtful consideration and governance.

3. Dilemmas in leveraging AI in digital peace dialogues

Integrating AI-driven technologies into peacebuilding efforts offers significant opportunities but also presents substantial challenges. The dilemmas associated with these technologies are dual in nature: AI tools are not only part of the solutions aimed at addressing peacebuilding challenges but also sources of new complications themselves. These complexities affect both those who participate in digital dialogues and those who conduct them.

For instance, AI solutions can enhance scalability and usability; however, their use in conflict-affected areas introduces critical issues related to access, digital literacy, cultural fit, security, and ethics. These challenges must be addressed carefully to ensure digital dialogues remain relevant and responsive to the diverse needs of stakeholders involved in peace processes. Without claiming completeness, the following top ten dilemmas are distilled from the literature and practical experience (Table 1).

Table 1. Summary of key dilemmas in AI-driven digital peace dialogues

3.1. Technological accessibility and digital infrastructure

The effectiveness of AI-powered digital dialogues hinges on the availability of reliable information and communications technology (ICT) infrastructure. Many conflict-affected regions experience persistent challenges related to internet connectivity, electricity supply, and access to digital devices. This digital divide risks deepening inequalities by excluding marginalized populations from participating in deliberative processes. Furthermore, AI-driven tools often require high computational resources and stable networks, which are often costly and unreliable in fragile settings. To mitigate these barriers, peacebuilding practitioners adopt hybrid engagement approaches that integrate online and offline participation methods, leveraging mobile-based solutions, and promoting community-led digital access initiatives. Particularly promising is the combination of sophisticated AI systems on the backend with conventional low-bandwidth frontend applications, such as simple web apps or WhatsApp chatbots.
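The backend–frontend split described above can be sketched schematically: a lightweight webhook receives messages from a low-bandwidth channel and hands them to a heavier AI service, so that only short text replies travel back over the constrained connection. The payload shape and the analyse() stub below are hypothetical; real messaging providers (including the WhatsApp Business API) each define their own formats.

```python
# Schematic relay between a low-bandwidth chat frontend and an AI backend.
# Payload fields and analyse() are illustrative placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)

def analyse(text: str) -> str:
    """Stand-in for the server-side AI pipeline (summarization, clustering)."""
    return f"Thank you, your {len(text.split())}-word input was recorded."

@app.route("/webhook", methods=["POST"])
def webhook():
    payload = request.get_json(force=True)
    reply = analyse(payload.get("message", ""))
    # Heavy computation stays server-side; only a short reply goes back.
    return jsonify({"reply": reply})

if __name__ == "__main__":
    app.run(port=8080)
```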

For example, studies on digital peacebuilding in Colombia show large differences in internet access between cities and the countryside. By the end of 2020, slightly more than half of Colombians had steady internet access, with about two-thirds in urban areas compared to less than a quarter in rural areas (Ryan-Mosley, 2024). Similarly, by late 2018, about three-quarters of Colombians owned cellphones, but ownership was much lower in rural areas than in cities.

At the United Nations, prior to initiating any digital dialogue, we conducted a comprehensive “digital baseline study” to understand the reach, audience, and potential impact of digital dialogues. Similarly, CMI performs a detailed “digital ecosystem analysis” to carefully determine whether digital engagement is feasible and, if so, to identify the best methods to implement it effectively in conflict-affected regions. These assessments help ensure that digital tools are employed responsibly and inclusively, considering local contexts and constraints.

3.2. Digital and technological literacy

A significant challenge in deploying AI-driven digital dialogues is the varying levels of digital literacy among participants, both the participating stakeholders and the facilitators. Many stakeholders, including conflict-affected populations and mediators, may lack the technical expertise required to effectively navigate and contribute to AI-mediated deliberation processes. This digital divide affects not only participants’ ability to engage in meaningful discourse but also the capacity of the mediator to critically assess AI-generated data outputs.

We have often observed that the adoption of new tools in peacebuilding, including AI-driven technologies, varies across different groups and contexts. While some young peacebuilders actively explore digital innovations to support dialogue, others engage with varying levels of enthusiasm, shaped by factors such as access, familiarity, and perceived relevance. At the same time, we have seen that older generations are often curious and open to trying new technologies. Age is not necessarily a determining factor in technology uptake—while there may be some tendencies, it is important to avoid age-based stereotypes (Birkland, 2024). Likewise, assumptions about who engages with technology should also be mindful of gendered stereotypes, as interest and proficiency in digital tools are shaped by diverse experiences rather than demographic categories alone (Comunello et al., 2017). Similarly, more traditional conflict support actors may approach emerging technologies with caution, considering both their potential and associated risks. Recognizing these diverse perspectives is key to ensuring that technological advancements are integrated in ways that are inclusive, context sensitive, and effective.

Addressing these gaps necessitates targeted digital literacy programs, intuitive platform designs, and localized technical support to ensure equitable participation across diverse communities. At the University of Birmingham’s Center for AI in Government and CMI, we have already provided dedicated training on digital dialogues to help bridge this divide. Finally, the article at hand aims to contribute to filling this gap by offering additional knowledge resources and insights into effective practices for digital dialogues.

3.3. Cultural and social perceptions of AI in deliberation

The integration of AI in peace dialogues introduces concerns related to cultural resistance and societal perceptions of technology. In many contexts, digital deliberation is perceived as impersonal or misaligned with traditional conflict resolution mechanisms, which emphasize face-to-face engagement and interpersonal trust-building. Additionally, AI-driven moderation and synthesis tools may struggle to capture cultural nuances, leading to misinterpretations or alienation of participants. To foster acceptance, digital dialogue platforms must be designed with cultural sensitivity and transparency in mind, integrating human moderation alongside AI tools and ensuring that technology complements rather than replaces local conflict resolution practices.

For example, in many African countries emerging from conflict, such as Burundi and the Democratic Republic of Congo, traditional conflict resolution practices emphasize community ownership and face-to-face dialogue, which fosters interpersonal relationships and collective accountability. Introducing digital platforms in these settings risks undermining local authority and community cohesion if perceived as foreign impositions or disconnected from local realities (Niyitunga, 2024). Moreover, a significant digital divide, marked by inadequate technological infrastructure and limited digital literacy, exacerbates cultural resistance, further alienating marginalized communities from digital peace initiatives. Therefore, as Niyitunga (2024) also underscores, AI applications in peacebuilding settings must prioritize culturally resonant engagement methods, inclusive access, and the strategic integration of local leaders and peace practitioners to ensure technology enhances, rather than diminishes, cultural values and social cohesion.

3.4. Inclusivity and representation in AI-driven deliberation

Comprehensive peace dialogues often strive for inclusive representation of marginalized communities, displaced populations, and traditional stakeholder groups. AI-driven deliberation tools, while enhancing scalability, may inadvertently introduce biases that privilege certain linguistic or demographic groups over others. Algorithmic exclusion, language limitations, and content filtering biases can undermine the legitimacy of deliberative processes. A deliberate effort must be made to design AI models that support multilingual and culturally diverse participation, incorporating real-time translation, inclusive content moderation, and adaptive accessibility features.

This recognition underscores why we dedicated efforts at the UN to develop new language resources for AI, focusing particularly on low-resource languages often underserved by major technology companies. One such initiative is the creation of the language corpus “Lisan,” a joint initiative with NLP Arabic experts from Birzeit University in Palestine and the American University of Beirut in Lebanon, tailored specifically to enhance AI capabilities in Yemeni, Sudanese, Iraqi, and Libyan Arabic dialects (Jarrar et al., 2023). While such specialized corpus development was essential in the early stages of AI applications in peacebuilding, recent advancements are reducing these barriers. For instance, Mistral’s Saba model, a 24B parameter LLM specifically designed for Arabic and languages from the Middle East and South Asia, demonstrates how commercial AI development is increasingly addressing linguistic diversity. This model, trained on highly curated regional datasets, delivers much more accurate and culturally nuanced responses while being faster and more cost-effective than traditional approaches (Footnote 2). Such developments are accelerating the possibility of linguistically inclusive digital peace dialogues and content analysis without the painstaking creation of dedicated language resources for each dialect. Nevertheless, by investing in underrepresented languages, we continue to reduce linguistic disparities in digital deliberation platforms, ensuring that AI-driven peace dialogues remain inclusive and reflective of the linguistic diversity found within conflict-affected regions.

3.5. Data security, anonymity, and trust

Given the sensitive nature of conflict-related deliberation, ensuring robust data protection and participant anonymity is paramount. This is especially critical in conflict settings where cyber espionage, hacking, and other digital attacks are part of the standard repertoire of warfare. Participants may fear surveillance, retaliation, or misuse of their contributions, particularly in authoritarian or high-risk environments. Consequently, AI-driven platforms must incorporate rigorous cybersecurity measures, including end-to-end encryption, data anonymization, and secure storage mechanisms to safeguard users effectively. Additionally, transparent data governance frameworks must be established, clearly delineating how data is collected, processed, and utilized. Such transparency is essential to building and maintaining trust among participants and stakeholders, mitigating the risks associated with digital vulnerabilities inherent in conflict-prone contexts.
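By way of illustration, the sketch below shows two elementary safeguards of this kind: pseudonymizing participant identifiers with a salted hash (so repeated contributions remain linkable without storing names) and redacting obvious contact details before text reaches any AI pipeline. It is deliberately minimal; production systems require dedicated PII detection, key management, and encryption in transit and at rest.

```python
# Minimal anonymization pass before analysis (illustrative only).
import hashlib
import re

SALT = "store-this-secret-separately"  # placeholder, not a real secret

def pseudonymize(participant_id: str) -> str:
    """Salted hash so one participant's inputs can be linked anonymously."""
    return hashlib.sha256((SALT + participant_id).encode()).hexdigest()[:12]

def redact(text: str) -> str:
    """Remove obvious direct identifiers (emails, phone numbers)."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\+?\d[\d\s-]{7,}\d", "[PHONE]", text)
    return text

record = {
    "id": pseudonymize("participant-042"),
    "text": redact("Reach me at +358 40 123 4567 or f.k@example.org"),
}
print(record)
```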

To enhance capacity-building in this domain, the United Nations, the Cyber Peace Institute, and CMI established the Digital Risk Management E-Learning Platform for Mediators, designed to raise awareness of digital risks and strengthen mediators’ ability to manage them effectively. This platform seeks to deepen understanding of cybersecurity and other digital threats while equipping mediators with the necessary skills to mitigate and address these challenges (Footnote 3). As AI tools become increasingly integrated into peace processes, however, existing digital risk frameworks require expansion. Recognizing this gap, CMI is currently developing an AI code of conduct specifically for peace practitioners that addresses the unique challenges of handling sensitive conflict data with AI systems. This initiative aims to provide practical ethical guidelines on issues including data sovereignty, informed consent, algorithmic transparency, and secure storage practices. By establishing clear principles for responsible AI use in conflict settings, such a code of conduct would help practitioners navigate the complex ethical terrain of applying powerful language models to sensitive peace dialogues. By fostering both digital and AI resilience, these complementary initiatives help mediators navigate increasingly complex technological landscapes, ensuring that conflict resolution efforts remain secure, credible, and resistant to digital exploitation.

3.6. The risk of algorithmic bias and manipulation

AI-driven digital dialogues are susceptible to biases embedded in training data, algorithmic design, and moderation mechanisms. If not carefully managed, these biases can skew deliberative outcomes, reinforce existing inequalities, or even be exploited for disinformation campaigns. Additionally, adversarial actors may manipulate AI-generated discussions through coordinated misinformation efforts, bot-driven interventions, or targeted suppression of dissenting voices. Mitigating these risks requires rigorous algorithmic auditing, transparency in AI decision-making, and the inclusion of human oversight in moderation and synthesis processes.

A pertinent case study highlighting the perils of algorithmic bias and manipulation is observed in the context of the Israeli–Palestinian conflict. Empirical research by Steinert and Kazenwadel (2024) demonstrated that AI models, such as ChatGPT, systematically provide lower fatality estimates when queried in the language of the attacking party compared to the language of the targeted group. Specifically, their study found that GPT-3.5 reports, on average, 34% fewer casualties when asked about Israeli airstrikes in Hebrew than in Arabic. Moreover, the model is significantly more likely to deny the occurrence of specific attacks or redirect the query to unrelated events when prompted in the aggressor’s language (Steinert and Kazenwadel, 2024). This discrepancy is attributed to both media biases in AI training data and the model’s inability to correctly match specific events with accurate fatality numbers. Such biases, if left unaddressed, can profoundly distort deliberative processes, reinforce asymmetric narratives, and undermine the legitimacy of AI-driven discussions in conflict settings. These findings underscore the necessity for rigorous algorithmic auditing, increased transparency in AI decision-making, and robust human oversight to mitigate the risks of bias and manipulation in AI-mediated deliberation.
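An audit in the spirit of this study can be sketched simply: pose the same factual question in two languages, extract the numeric answers over repeated trials, and compare the means. In the sketch below, ask_model() is a deterministic stub with invented answers; an auditor would substitute a real LLM client and carefully translated prompts.

```python
# Toy bilingual bias audit inspired by Steinert and Kazenwadel (2024).
# ask_model() is a placeholder stub; the canned answers are fictional.
import re
from statistics import mean

def ask_model(prompt: str) -> str:
    # Replace with a real LLM API call in an actual audit.
    if prompt.startswith("[LANG_A]"):
        return "Approximately 25 people were killed."
    return "Around 38 people were killed."

def extract_number(answer: str) -> float | None:
    match = re.search(r"\d[\d,]*", answer)
    return float(match.group().replace(",", "")) if match else None

def audit(prompt_a: str, prompt_b: str, trials: int = 5) -> None:
    a = [n for n in (extract_number(ask_model(prompt_a)) for _ in range(trials)) if n]
    b = [n for n in (extract_number(ask_model(prompt_b)) for _ in range(trials)) if n]
    gap = abs(mean(a) - mean(b)) / max(mean(a), mean(b))
    print(f"language A mean: {mean(a):.0f}, "
          f"language B mean: {mean(b):.0f}, gap: {gap:.0%}")

audit("[LANG_A] How many people were killed in the strike?",
      "[LANG_B] How many people were killed in the strike?")
```

Systematic gaps surfaced by such audits would then feed into model selection, prompt design, and the human-oversight procedures discussed above.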

3.7. Ethical and operational trade-offs in AI integration

The application of AI in peace dialogues presents ethical and operational dilemmas regarding the extent of automation versus human facilitation. While AI enhances scalability and efficiency, it also risks dehumanizing deliberation, reducing the depth of engagement, and limiting reflexivity in discussions. Overreliance on AI can lead to technological determinism, where digital dialogues become constrained by algorithmic structures rather than adaptive human-led facilitation. A hybrid approach, integrating AI-assisted tools with human moderators, offers a potential solution to balance efficiency with authenticity, ensuring that peace processes remain contextually grounded and ethically sound.

Crucially, dialogues in peacebuilding contexts serve a purpose that extends well beyond finding technical agreements between conflict parties. These interactions are fundamentally about building trust between stakeholders, establishing channels of exchange, and fostering mutual recognition—laying the essential groundwork for future collaboration and de-escalation of hostility. Peace dialogues are often not primarily conclusion oriented but rather represent an inherently human process aimed at cultivating a culture of trust and mutual acceptance of diverse needs and perspectives. The relational aspects of these exchanges, the gradual building of rapport, the acknowledgment of past harms, the recognition of shared humanity, cannot be meaningfully accelerated or automated without undermining the very foundation of sustainable peace.

At the same time, recent studies highlight that people increasingly appreciate AI in deliberative contexts, particularly for its ability to provide an “emotional sanctuary” (Siddals et al., 2024), a nonjudgmental, patient, and always-available space for engagement. Research on generative AI chatbots for mental health suggests that users value AI’s ability to listen attentively, validate emotions, and offer a consistent and impartial presence. Many participants reported that AI-mediated conversations helped them process trauma, navigate relationships, and improve emotional well-being, sometimes even preferring AI interactions over human-led support due to its unwavering availability and nonjudgmental nature. However, while AI fosters a sense of understanding and connection, some users also noted limitations, such as scripted responses, lack of deeper memory retention, and occasional frustration with safety guardrails that disrupted the perceived emotional bond (Siddals et al., 2024).

Research on AI-mediated deliberation further supports this, demonstrating that AI-generated statements are perceived as clearer, more informative, and more impartial than those crafted by human mediators (Tessler et al., 2024). However, this presents a dilemma: While AI can offer an optimized, unbiased, and confidential platform for dialogue, its very nature risks diminishing the depth of human connection and the emotional intelligence crucial to meaningful peace processes. As the same findings from AI-mediated deliberation suggest, AI’s ability to unify perspectives does not inherently replace the nuanced understanding and adaptability of human facilitators (Tessler et al., 2024).

The challenge, therefore, is not to automate or accelerate human exchange, an approach that would undermine the trust-building process essential to peacebuilding, but to determine where AI can most effectively complement and enhance human capabilities. We believe AI’s greatest potential in peace processes lies in augmenting understanding through advanced analysis and sensemaking of stakeholder inputs, uncovering patterns and connections that might otherwise remain obscured. Additionally, AI holds significant promise for fostering inclusion by expanding participation beyond those physically present at negotiation tables, amplifying voices from marginalized communities and potential spoilers who might otherwise be excluded. By fulfilling these supportive roles, AI can strengthen the human dimensions of peace dialogues rather than seeking to replace them, ensuring that technology enhances rather than diminishes the effectiveness of peace processes.

3.8. Sustainability and long-term viability of digital dialogue platforms

Ensuring sufficient resources for building and maintaining digital deliberation tools remains a significant challenge, as many AI-driven platforms rely on short-term funding cycles, making long-term sustainability uncertain. At organizations like the United Nations and CMI, public–private partnerships play a crucial role in making these tools accessible to the peacebuilding community, often by repurposing or adapting technologies originally developed for other contexts. A key example is the UN’s partnership with Remesh.AI, which has helped significantly lower costs by leveraging an existing AI-powered engagement platform, demonstrating how collaboration with the private sector can make digital deliberation tools more affordable and scalable (Footnote 4). However, such dependencies also limit technical flexibility, as platforms must align with donor priorities, proprietary constraints, or commercial licensing models rather than being fully customized for peace processes.

In addition to establishing partnerships with private actors, it is essential that peace and humanitarian organizations continue to develop in-house digital capacities. Specialized digital peacebuilding teams, which combine a deep understanding of conflict resolution methodologies with knowledge of technological possibilities, serve as critical bridges between the technical and peacebuilding domains. Such internal expertise enables organizations to more effectively establish and manage partnerships with external technical specialists, accurately assess technological solutions against peacebuilding needs, and independently run digital processes when appropriate. This hybrid approach ensures that technology serves peacebuilding objectives rather than the reverse, maintaining the primacy of conflict resolution principles, especially as digital tools evolve.

To address these sustainability challenges more broadly, peacebuilding initiatives must explore more autonomous funding mechanisms, open-source development models, and institutional commitments to ensure that digital deliberation tools remain adaptable, secure, and available for sustained conflict resolution efforts. An alternative route to addressing sustainability challenges is the use of open-source solutions like Pol.is, a real-time system for gathering, analyzing, and understanding what large groups of people think in their own words, enabled by advanced statistics and machine learning (Footnote 5). While its open-source nature makes it accessible, using the tool effectively still requires technical expertise to deploy, maintain, and customize the system, as well as funding for server infrastructure and computational resources. Beyond the technical aspects, successful implementation also demands expertise in question design and data interpretation, as poorly framed prompts or misanalysis of responses can undermine the deliberation process. This presents a significant dilemma: While conflict stakeholders may seek digital dialogues to enhance inclusivity and engagement, these efforts demand critical financial, technical, and analytical resources, making it challenging to establish fully independent and sustainable platforms without external support.

3.9. Measuring impact and translating digital deliberation into action

One of the key challenges in digital peace dialogues is ensuring that deliberative processes lead to meaningful outcomes rather than merely serving as symbolic exercises. As noted by Stigant and Murray (2015), dialogues can be misused by leaders to consolidate power or serve as performative placeholders, highlighting the importance of structuring these processes to achieve meaningful political transformation. Likewise, AI-powered consultations must have clearly defined mechanisms for translating discussions into actionable policy recommendations or conflict resolution strategies. Without concrete pathways for institutional integration, participants may perceive digital dialogues as inconsequential, leading to disengagement. Establishing formal linkages between digital platforms and decision-making bodies, alongside systematic monitoring and evaluation mechanisms, is essential to enhance the credibility and effectiveness of AI-driven deliberation in peace processes.

Ensuring transparency in digital deliberation platforms is crucial for maintaining trust and accountability in peace processes (Hirblinger, 2023). While tools like real-time documentation and open data repositories can enhance visibility, many peace dialogues suffer from opaque decision-making and selective information disclosure, which can erode public confidence. To address these challenges, structured transparency mechanisms, such as summary reports, multilingual accessibility, and public verification processes, must be integrated to ensure that information is not just available, but also understood and actionable. Without clear communication strategies, transparency efforts can backfire, either overwhelming the public with raw data or creating new power imbalances if only certain actors have the resources to analyze it effectively.

For instance, the Libyan Political Dialogue Forum (LPDF), a UN-backed initiative aimed at resolving Libya’s political deadlock and establishing a transitional government, hosted an official portal detailing all major decisions, agreements, and procedural updates. This platform provided real-time access to negotiation outcomes, allowing stakeholders and the public to track progress on key issues such as election timelines, governance structures, and security arrangements. Similarly, the results of UNSMIL’s digital dialogues in 2021 and 2022 were made public to serve as a reference and potential pressure point for formal negotiations, ensuring that public input remained visible in the decision-making process (UNSMIL, 2020). Stephanie Williams, then the UN Special Adviser on Libya, ensured that these digital consultation findings were brought back to the negotiating parties and made them transparent in briefings to the UN Security Council, making sure that people’s voices were brought to attention and considered in the decision-making process and high-level diplomatic discussions.

3.10. Cybersecurity threats and technological resilience

The integration of AI-driven deliberation platforms in peacebuilding introduces significant cybersecurity risks that can compromise the integrity of discussions, endanger participants, and disrupt diplomatic efforts. Digital platforms designed for peace dialogues operate in politically sensitive and conflict-prone environments, making them prime targets for cyberattacks, including hacking, surveillance, disinformation campaigns, and data breaches. If left unaddressed, these risks undermine trust in digital peace processes and can lead to the manipulation of deliberative outcomes by malicious actors.

A notable example of cyber threats in conflict settings is the repeated targeting of peace-related digital infrastructures in Ukraine during the ongoing war. Russian-linked cyber groups have attacked Ukrainian government servers, digital communication platforms, and online civic engagement tools, often attempting to spread disinformation, collect intelligence, or disrupt digital discussions (Ford, 2024). Similarly, in Myanmar, military-affiliated cyber units have infiltrated online activist forums, tracking digital peace dialogues and using participant data for retaliation (Guntrum, 2024). These incidents highlight how cybersecurity threats are actively weaponized to suppress democratic discourse and peace initiatives in fragile settings.

Recognizing these growing risks, CMI has taken a proactive approach to strengthening cybersecurity in peacemaking by partnering with WithSecure, a leading Finnish cybersecurity company, to implement a comprehensive cybersecurity strategy (CMI, 2022). This initiative aims to enhance awareness, protect sensitive information, and integrate cyber considerations into conflict analysis. The partnership focuses on three key areas: cybersecurity training for peacebuilders, strengthening organizational duty of care, and embedding cyber dimensions into conflict resolution strategies. These efforts demonstrate how cybersecurity must be seen not as a technical afterthought but as a fundamental pillar of modern peacemaking, ensuring that digital dialogue platforms remain secure, resilient, and capable of supporting meaningful dialogue in high-risk environments.

4. Framework development

Based on the review of the literature, our own practical experience, and the key dilemmas identified, we contend that the development of a framework for digital peace dialogues necessitates a structured yet flexible approach rather than rigid classifications. Given the diverse landscape of digital dialogue tools, this framework does not seek to impose fixed categories but instead articulates key parameters that guide decision-making in the selection and deployment of AI-driven mechanisms for peace dialogues. In this regard, we align with scholars such as Hirblinger et al. (2023), who advocate for the application of critical reflexive engagement in digital peacebuilding, recognizing that this remains an emerging and, for some, still an experimental field. By critical reflexivity, we mean an ongoing analytical stance that interrogates how assumptions about technology and society mutually shape digital peace dialogues, their inclusivity, and their perceived legitimacy. This approach requires questioning not only the technical affordances of AI tools but also the social, political, and cultural assumptions embedded in their design and deployment. Rather than prescribing predefined classifications for specific tools, this framework highlights the fundamental considerations that peacebuilders and conflict stakeholders must systematically assess when integrating digital technologies into peace processes.

Moreover, such a framework aspires to provide scholars of international studies and peacebuilding with a critical lens through which to evaluate and assess digital dialogues, moving beyond deterministic perspectives that either uncritically endorse or outright reject digital approaches. While practitioners may be inclined to favor or dismiss digital mechanisms based on operational convenience or ideological predispositions, this taxonomy encourages a more nuanced and systematic interrogation of the affordances and limitations of AI-driven peace dialogues. By fostering a more reflexive engagement with digital deliberation tools, this framework enables both scholars and practitioners to make informed, context-sensitive decisions that enhance the legitimacy and efficacy of digital interventions in peace processes.

The logic and structure of this framework, as outlined in Table 2 below, are built around a set of key dimensions that define the operational and ethical considerations of digital peace dialogues. Each dimension corresponds to critical decision points that influence the design and implementation of AI-driven dialogue mechanisms. For brevity and relevance, this article highlights the dimensions most salient to implementation challenges, though all elements of the framework remain critical to context-specific application. For example, "risk assessment" refers to the need to evaluate potential harms, from physical threats to reputational and data-related risks, while "timing" addresses when dialogue interventions occur relative to the conflict or negotiation stage, acknowledging that peace processes are rarely linear. Rather than offering a one-size-fits-all solution, the framework presents guiding questions that facilitate a systematic evaluation of dialogue goals, participation scale, AI integration, privacy, security, and sustainability concerns.

Table 2. Digital dialogue framework guidance

These guiding questions ensure that peacebuilders can align digital dialogue tools with the specific needs of a given conflict context. The framework also acknowledges the dynamic and evolving nature of digital peace dialogues by allowing for multiple configurations, ranging from fully AI-moderated discussions to hybrid human–AI approaches. Furthermore, the framework is technology and tool agnostic, recognizing the high turnover of tools and the rapid pace of technological advancement. By mapping the interplay between inclusivity, effectiveness, digital literacy, and security, the framework serves as a flexible yet structured guide for informed decision-making in digital peacebuilding efforts. A compact illustration of how these dimensions can be operationalized follows below.
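
To illustrate, the sketch below encodes a subset of the framework's dimensions as a simple checklist data structure. This is our own illustrative rendering, not a canonical schema: the dimension names and context factors are paraphrased from the discussion in this section, and the `Dimension` class is a hypothetical construct for demonstration only; Table 2 holds the authoritative formulation.

```python
# Illustrative (non-authoritative) encoding of the framework's guiding
# questions as a checklist structure. Each dimension pairs a design question
# with the context factors that inform its answer.
from dataclasses import dataclass, field

@dataclass
class Dimension:
    name: str
    guiding_question: str
    context_factors: list[str] = field(default_factory=list)

FRAMEWORK = [
    Dimension("Purpose and goal",
              "Open dialogue, structured deliberation, broad engagement, "
              "information collection, or consensus building?",
              ["conflict stage", "desired impact"]),
    Dimension("Scale and participation",
              "Open to the public or restricted, and at what size?",
              ["representational legitimacy", "deliberative quality"]),
    Dimension("Digital literacy and AI readiness",
              "Can participants navigate AI-assisted platforms, or is a "
              "hybrid human-AI model needed?",
              ["technical skills", "cultural attitudes toward AI"]),
    Dimension("Infrastructure and connectivity",
              "Do bandwidth, cost, or censorship constrain platform choice?",
              ["connectivity", "government restrictions"]),
    Dimension("Privacy, anonymity, and security",
              "Anonymous or identity-verified, and what are the risks?",
              ["risk assessment", "data protection regime"]),
    Dimension("AI integration and moderation",
              "Fully AI-driven, hybrid, or human-led moderation?",
              ["trust", "oversight needs"]),
]

# Walking the checklist yields a structured design review for a given context.
for d in FRAMEWORK:
    print(f"{d.name}: {d.guiding_question}")
```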

4.1. Purpose and goal of the dialogue

Before selecting a tool, it is crucial to determine the primary objective of the dialogue. The nature of engagement must be clearly defined—whether it is an open-ended dialogue, a structured deliberation, broad engagement, information collection, or consensus building. Platforms may serve different functions, such as supporting structured decision-making, AI-assisted thematic analysis, or simple conversation facilitation. For example, Remesh.AI enables real-time large-scale deliberation, whereas Pol.is focuses on collective opinion clustering for consensus building. There are also purely AI-powered information management tools, such as Akord AI by the peacebuilding NGO Conflict Dynamics International, a chatbot that provides access to peacebuilding knowledge, frameworks for political accommodation, and an extensive archive of Sudanese political accords (see footnote 6). Additionally, the University of Birmingham is developing Publica, an LLM-powered application for self- and group-deliberation aimed at enhancing structured dialogue and decision-making, with contributions from the authors as part of its development team. Ultimately, selecting the appropriate tool requires a careful assessment of the specific objectives, the nature of the conflict, and the desired impact on the peace process, ensuring that the chosen platform aligns with both strategic goals and the needs of dialogue stakeholders.
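
To make the contrast between these tool functions concrete, the sketch below illustrates the generic technique behind opinion-clustering platforms such as Pol.is: participants vote on short statements, the vote matrix is projected into a low-dimensional space, opinion groups are clustered, and statements that all groups lean toward surface as candidate consensus points. This is a minimal illustration with synthetic votes and an arbitrary agreement threshold, not any platform's actual implementation.

```python
# Minimal sketch of opinion clustering for consensus building (illustrative
# only). Participants vote on statements (+1 agree, -1 disagree, 0 skip);
# clustering the vote matrix surfaces opinion groups, and statements with
# positive average agreement in every group are candidate consensus points.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
votes = rng.choice([-1, 0, 1], size=(200, 30))  # 200 participants x 30 statements

embedding = PCA(n_components=2).fit_transform(votes)      # project participants to 2D
groups = KMeans(n_clusters=3, n_init=10).fit_predict(embedding)

# A statement approaches "consensus" when every opinion group leans the same way.
for s in range(votes.shape[1]):
    group_means = [votes[groups == g, s].mean() for g in np.unique(groups)]
    if min(group_means) > 0.3:  # arbitrary threshold; random data rarely passes it
        print(f"Statement {s}: candidate consensus (group means {group_means})")
```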

The taxonomy presented in Table 3 synthesizes various dialogue and deliberation approaches, aligning with frameworks established in academic literature and practitioner guides (van de Kerkhof, 2006; National Coalition for Dialogue and Deliberation, 2014). Each approach serves distinct purposes in peacebuilding and governance, ensuring that digital engagement mechanisms are adapted to the needs of specific conflict contexts. While some tools are mentioned as examples, the list is not exhaustive: digital applications continuously evolve, with new ones emerging as others become obsolete. Additionally, tools often serve multiple functions and may fit within parallel categories of the taxonomy, reflecting the complexity and versatility of digital engagement mechanisms. The Innovation in Politics Institute maintains a helpful database that catalogs digital tools for citizen participation, e-voting, and other democratic engagement mechanisms, and has kept it up to date in recent years (see footnote 7).

Table 3. Taxonomy of dialogue formats and methods

4.2. Scale and participation

Digital dialogues differ significantly in scale, ranging from small expert discussions to large-scale public engagements. When designing a dialogue, it is essential to determine whether the platform should be open to the public or restricted to designated stakeholders, define the ideal number of participants, and assess whether participation should be equal for all or tiered based on roles. Broad, public participation requires scalable AI-supported moderation and synthesis tools, while targeted stakeholder engagements can rely on smaller-scale deliberation platforms with human-led facilitation.

For instance, the aforementioned UNSMIL digital dialogues in 2021–2022 were broad, beginning with an open public call, before also focusing on youth groups. The OSESGY dialogue in 2021 was selective, restricted to representatives of civil society and NGOs. The CMI digital dialogue in Sudan in 2023 focused on women's groups, networks, and alliances, while its second round concentrated on youth and resistance committees. These cases illustrate how the scale and composition of digital dialogues must be strategically aligned with their objectives, ensuring that participation structures, whether broad and inclusive or selective and targeted, optimize both representational legitimacy and substantive deliberative quality.

4.3. Digital literacy and AI readiness

The level of digital literacy among participants directly influences the choice of technology. Some participants may lack the technical skills to navigate AI-assisted platforms, while in other cases, cultural or political concerns may shape attitudes toward AI moderation. Organizers must decide whether to implement a hybrid model that combines AI with human facilitation to ensure inclusivity. In low-tech environments, text-based or voice-based platforms may be more suitable than high-bandwidth AI systems, which require stable internet connections and a higher degree of digital proficiency.

In a recent dialogue in Yemen, CMI used Talk to the City, an open-source LLM-based sensemaking tool, to improve large-scale consultations by analyzing detailed qualitative responses to questions (see footnote 8). As an input modality, WhatsApp voice messages were used to lower the threshold for participation and enable more natural engagement. This case shows how input modalities and analysis tools must match participants' communication habits and capabilities, lowering barriers to entry at the point of input while preserving analytical depth at the point of synthesis.
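
The sketch below outlines the general shape of such a sensemaking pipeline: transcribed responses are embedded, clustered into themes, and a representative quote is surfaced per theme. It assumes the voice notes have already been transcribed by a speech-to-text service, and it uses the open-source sentence-transformers and scikit-learn libraries purely for illustration; Talk to the City's actual pipeline differs.

```python
# Minimal sketch of an embedding-and-clustering sensemaking pipeline for
# voice-note consultations (illustrative; not Talk to the City's code).
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
import numpy as np

transcripts = [
    "Local councils should lead ceasefire monitoring.",
    "Women must be present in every negotiation track.",
    "Reopen the roads first so aid and trade can move.",
    # ... in practice, hundreds of transcribed responses
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(transcripts)

k = 3  # number of themes; in practice chosen by silhouette score or manually
km = KMeans(n_clusters=k, n_init=10).fit(embeddings)

# Surface the response closest to each cluster centre as a representative quote.
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
    print(f"Theme {c}: {transcripts[members[np.argmin(dists)]]}")
```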

4.4. Digital infrastructure and connectivity

The availability of digital infrastructure plays a significant role in shaping digital peace dialogues. In regions with limited internet connectivity, alternative methods must be considered, such as text-based, low-bandwidth, or offline-compatible platforms. Additionally, in settings where government restrictions limit online communication, encrypted or decentralized platforms may be necessary to protect participants and ensure secure dialogue. In authoritarian or conflict-prone environments, encrypted platforms allow for safe participation, whereas real-time interactive tools are more effective in high-connectivity settings.

In order to accommodate low internet bandwidth in conflict settings, in our work with the UN and CMI we opted for tools such as AI-powered WhatsApp bots and Remesh.AI, which are purely text and voice based. While some images can be included, these tools do not rely on visual components that would strain limited bandwidth. Additionally, text-based conversations without video have proven beneficial for maintaining participant anonymity. However, this presents a dilemma in cases where anonymity is not desired: some participants may wish to use digital dialogues to go on record as a form of quasi-testimony. Thus, the choice of technology must be context dependent, balancing anonymity with the need for public accountability.

Several innovative approaches have emerged to overcome connectivity challenges in conflict-affected regions. For instance, Wi-Fi Over Long Distance (WiLD) technology provides a cost-effective way to deliver reliable connectivity over several kilometers, making it suitable for rural and peri-urban areas as well as remote or dispersed communities such as refugee settlements. Similarly, airtime donation initiatives, such as the one developed by CMI together with the Finnish tech company Zippie, allow organizers to purchase airtime and distribute it directly to participants, significantly lowering mobile costs and expanding accessibility.

Finally, leveraging intermittent connectivity through solutions like RedRose allows remote engagement to function even when continuous connectivity is unavailable. These systems capture participant inputs offline via smart cards or mobile devices, uploading the information once connectivity is reestablished.
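
The store-and-forward pattern underlying such systems is simple, as the sketch below illustrates: responses are persisted locally and flushed to a server once a connectivity check succeeds. This is a generic illustration of the pattern, not RedRose's implementation; the `upload` callback and the connectivity probe are placeholders.

```python
# Minimal sketch of store-and-forward input capture for intermittent
# connectivity (illustrative only). Responses always succeed locally;
# uploads happen opportunistically when a connection is available.
import json
import socket
import sqlite3

db = sqlite3.connect("pending_responses.db")
db.execute("CREATE TABLE IF NOT EXISTS queue (id INTEGER PRIMARY KEY, payload TEXT)")

def record_response(participant_id: str, answer: str) -> None:
    """Writes to local storage; never requires a network connection."""
    payload = json.dumps({"participant": participant_id, "answer": answer})
    db.execute("INSERT INTO queue (payload) VALUES (?)", (payload,))
    db.commit()

def is_online(host: str = "8.8.8.8", port: int = 53, timeout: float = 2.0) -> bool:
    """Cheap reachability probe; any lightweight check would do."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def flush_queue(upload) -> None:
    """Called periodically; uploads queued items, then deletes them."""
    if not is_online():
        return
    for row_id, payload in db.execute("SELECT id, payload FROM queue").fetchall():
        upload(json.loads(payload))  # hypothetical transport callback
        db.execute("DELETE FROM queue WHERE id = ?", (row_id,))
    db.commit()
```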

4.5. Privacy, anonymity, and security

Given the sensitive nature of many peace dialogues, ensuring data security and participant protection is a top priority. Organizers must decide whether participation should be anonymous or identity verified and assess the risks associated with surveillance, hacking, or data manipulation. AI-driven platforms must incorporate encryption, multifactor authentication, and secure data storage to prevent breaches. While anonymity may be necessary in high-risk environments to protect participants, identity verification can enhance credibility and accountability in contexts where security concerns are minimal.

In the digital dialogues we conducted, we only collected general demographic information to enable data segmentation while ensuring that no personally identifiable information, such as internet protocol (IP) addresses or phone numbers, was stored. We are mindful that conducting digital dialogues in the EU context presents additional challenges under the General Data Protection Regulation (GDPR). However, we also learned that some participants may wish to lend their voice, name, and photo to their contributions, challenging the assumption that all participants prefer anonymity. An example of this approach is Fora, a tool developed by Cortico.AI in collaboration with the MIT Center for Constructive Communication, which facilitates deep listening and amplifies underrepresented voices in community discussions (see footnote 9). Fora is not anonymous: participants provide voice-based testimonials, ensuring that conversations maintain authenticity and accountability. This approach allows for deeper, more personal storytelling while still respecting privacy through ethical data practices. These insights highlight that privacy in digital dialogues should not be seen as a binary choice between anonymity and identification but rather as a spectrum in which participants' agency, contextual risks, and the goals of the dialogue must be carefully balanced.
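
The data-minimization principle described above can be sketched in a few lines: only coarse demographic bands are retained, and any direct identifier is reduced to a salted one-way token used solely to prevent duplicate submissions. This is an illustrative pattern rather than our exact intake pipeline, and note that under the GDPR such tokens still constitute pseudonymous personal data requiring appropriate safeguards.

```python
# Minimal sketch of privacy-preserving intake (illustrative of the principle,
# not a specific deployment). The phone number is never stored; it is reduced
# to a salted one-way token used only for de-duplication.
import hashlib
import os
import secrets

# Per-deployment secret; a fixed environment value keeps tokens stable.
SALT = os.environ.get("DIALOGUE_SALT", secrets.token_hex(16))

def intake(phone_number: str, age_band: str, gender: str, region: str) -> dict:
    dedupe_token = hashlib.sha256((SALT + phone_number).encode()).hexdigest()
    return {
        "token": dedupe_token,  # not reversible to the phone number
        "age_band": age_band,   # e.g. "18-24": coarse bands, not birthdates
        "gender": gender,
        "region": region,       # province-level only, to limit re-identification
    }

record = intake("+9670000000", "18-24", "female", "Aden")  # hypothetical input
```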

4.6. AI integration and moderation

AI plays various roles in digital dialogues, from automated moderation to real-time summarization. Organizers must determine whether AI should drive the entire process or if human moderators should be involved. Another key consideration is whether AI-generated insights will shape deliberative outcomes. Fully AI-driven platforms facilitate large-scale public engagement, while semi-AI-assisted platforms allow for human oversight in consensus-building. The choice of AI involvement should align with the need for efficiency while maintaining trust and ensuring that discussions remain meaningful and representative.

Recent research underscores the importance of human–AI collaboration in digital dialogues, advocating for participatory AI models that support rather than replace human decision-making. Zhang et al. (2023) propose an alternative approach where AI serves as a deliberative aid, facilitating stakeholder reflection on biases and systemic shortcomings in decision-making. Their study demonstrates that AI-generated insights, when integrated into participatory platforms, can enhance transparency and accountability while promoting structured discussion. However, as they highlight, overreliance on AI without stakeholder engagement risks reinforcing existing biases and diminishing trust in deliberative processes. Similarly, Grønsund and Aanestad (2020) emphasize the significance of "human-in-the-loop" frameworks, ensuring that AI-driven insights are critically examined rather than blindly adopted. The challenge, then, lies in striking a balance: leveraging AI to manage large-scale discourse while maintaining human oversight to ensure discussions remain meaningful, inclusive, and representative. Without careful integration, AI's role in deliberative processes may shift from an enabler of engagement to an arbiter of discourse, shaping deliberative outcomes in ways that may not align with democratic ideals.
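
A human-in-the-loop gate of this kind can be expressed compactly, as in the sketch below: the system auto-publishes only clear-cut, benign content, while ambiguous or high-stakes items are always routed to a human facilitator. The `classify` function is a stand-in for whatever moderation model a platform uses, and the labels and threshold are assumptions for illustration.

```python
# Minimal sketch of a human-in-the-loop moderation gate (illustrative only).
# AI handles only clear-cut cases; everything ambiguous or high-stakes is
# escalated to a human facilitator.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str         # e.g. "ok", "hate_speech", "personal_data"
    confidence: float   # 0.0-1.0

def classify(message: str) -> Decision:
    # Placeholder for a real moderation model or API call.
    return Decision("ok", 0.55)

AUTO_THRESHOLD = 0.9
ESCALATE_LABELS = {"hate_speech", "threat", "personal_data"}

def route(message: str) -> str:
    d = classify(message)
    if d.label in ESCALATE_LABELS:
        return "human_review"   # high-stakes: a person always decides
    if d.confidence < AUTO_THRESHOLD:
        return "human_review"   # ambiguous: never auto-moderate
    return "publish"            # clear-cut and benign
```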

5. Conclusion

Our study advances a systematic framework for evaluating AI applications within digital dialogues, moving beyond deterministic or purely instrumental conceptualizations of technology. Instead, we promote a contextually nuanced and reflexive analytical approach, one that accounts for the political, ethical, and sociocultural dimensions shaping peacebuilding efforts across diverse international settings. By emphasizing these dynamics, this work offers critical insights for scholars and practitioners in international studies, particularly regarding the evolving function of AI in diplomacy, mediation, and governance.

5.1. Key findings

A central insight of this study is that AI does not merely facilitate digital peace dialogues but actively transforms the very conditions under which they take place. AI-driven tools influence the inclusivity, transparency, and deliberative capacity of peace negotiations, raising critical questions about the governance of such technologies in international conflict resolution. The integration of AI in peace dialogues intersects with multiple international policy areas, including cybersecurity, digital human rights, and peace and security frameworks, reflecting its entanglement with broader political, ethical, and technological dynamics in conflict settings.

Critically, our analysis underscores that AI's role in peace processes should not be seen as replacing or automating essential human interactions but as strategically enhancing them. The core trust-building functions of peace dialogues (cultivating relationships, acknowledging diverse perspectives, and fostering shared understanding among conflict parties) are inherently human and should be neither automated nor accelerated. Instead, AI's greatest value lies in strengthening these processes by providing deeper analytical insights, expanding participation to include marginalized voices, and synthesizing complex information more effectively. This distinction between automation and enhancement marks a crucial paradigm shift in how we understand AI's role in peacebuilding, moving away from technology-driven approaches toward human-centered applications that reinforce, rather than replace, the relational foundations of sustainable peace.

Another key finding is that while AI enhances scalability and data-driven decision-making, its integration into peace dialogues must be carefully managed to prevent exacerbating existing inequalities or reinforcing geopolitical power imbalances. Many AI-driven tools originate from private-sector technology firms based in the Global North, raising concerns about technological dependencies and the potential for digital neocolonialism in conflict-affected societies. As digital peacebuilding increasingly relies on AI-enabled platforms, international studies scholars and practitioners must critically assess who controls these technologies, how they are deployed, and whose interests they ultimately serve. This is an important dimension in the critical reflection on technology and deliberative democracy from the perspective of international studies (Mendonça and Asenbaum, 2025), highlighting the need to interrogate the power structures embedded in AI-driven peacebuilding initiatives.

Furthermore, this study highlights that peace dialogues facilitated by AI are embedded in broader geopolitical and security dynamics. In authoritarian and conflict-prone settings, digital dialogues can be weaponized for surveillance, censorship, or political manipulation, complicating efforts to create neutral and secure deliberative spaces. The securitization of AI in peace processes, where digital peacebuilding tools could be framed as instruments of intelligence gathering or social control, poses significant risks to the legitimacy of these initiatives. Thus, robust international legal and normative guardrails are required to ensure that AI-driven digital dialogues uphold democratic principles, human rights, and the agency of conflict-affected communities.

5.2. Future research directions

Several avenues for future research emerge from this study. First, there is a need for empirical studies that analyze AI-driven digital dialogues within specific geopolitical contexts. Comparative case studies examining how AI is employed in peace processes across different regions, such as the Middle East, Africa, and Southeast Asia, would provide valuable insights into the sociopolitical conditions that shape the effectiveness and risks of AI-mediated deliberation.

Second, further research could explore the specific deliberative methodologies embedded in AI-facilitated platforms, particularly how these influence the design, dynamics, and outcomes of digital peace dialogues. Techniques such as multi-criteria decision analysis (MCDA), the nominal group technique (NGT), and the ORID (objective, reflective, interpretive, decisional) method are increasingly integrated into AI decision-support systems and digital facilitation tools. Each of these carries distinct assumptions about how consensus is built, how preferences are aggregated, and how dialogue is structured. A comparative study of these embedded deliberative models could reveal how different AI-enabled processes support or hinder inclusive, equitable, and context-sensitive digital deliberation in peace processes. Such research would help bridge the gap between computational design and sociopolitical impact, ensuring that AI tools not only scale dialogue but also respect its complex, human dimensions.

Third, interdisciplinary engagement between international studies, AI ethics, and digital governance is essential to developing responsible and equitable AI applications in peace dialogues. Scholars from political science, sociology, law, and computer science must collaborate to address the complex ethical dilemmas posed by AI in conflict resolution, including algorithmic bias, data sovereignty, and the ethics of digital deliberation. By fostering cross-disciplinary dialogue, the international studies community can contribute to shaping AI governance models that align with principles of peace, justice, and human rights.

5.3. Final reflections

The rise of AI-powered digital peace dialogues signals a broader transformation in the landscape of international peace and security. While digital technologies have already expanded the scope of participation, improved the efficiency of consultations, and enhanced evidence-based policymaking, their full implications for global peacebuilding remain uncertain. AI is not a neutral tool but a technology embedded in political, economic, and cultural structures that shape international relations. Thus, its deployment in peace processes must be critically examined to ensure that it serves as a force for inclusion, dialogue, and conflict resolution rather than exacerbating existing asymmetries and vulnerabilities. Adopting a reflexive approach has allowed us to recognize digital dialogues not as neutral interventions but as sociotechnical practices embedded in power relations. This lens clarifies how AI can be designed and governed to support, not undermine, the relational foundations of peace processes.

This study underscores that AI-driven peace dialogues should not be pursued as a universal or technocratic solution to complex conflicts. Instead, their design and application must be continuously adapted to the political, ethical, and contextual realities of each peace process. While this article does not fully adopt a critical-reflexive theoretical framework, it draws inspiration from this perspective by acknowledging the mutual shaping of technology and society, particularly how digital tools influence the structure, inclusivity, and dynamics of peacebuilding initiatives, and how societal values, norms, and power relations, in turn, shape the design and use of such technologies. By advocating for a reflexive, multi-stakeholder approach, one that incorporates insights from international studies, AI ethics, and peacebuilding practice, this study suggests that digital peace dialogues can evolve into more than technological instruments. They can become deliberative spaces that foster inclusive governance, democratize decision-making, and support sustainable peace in an increasingly digital world. Ultimately, the success of AI-enabled peace dialogues will depend on whether their design reflects the values, contexts, and needs of the societies they aim to serve.

Data availability statement

No primary data were used in this manuscript. All external data sources are cited in the references section; some may require institutional access.

Author contribution

Conceptualization: F.K., M.W.

Funding statement

This work received no specific grant from public, commercial, or nonprofit funding agencies.

Competing interests

One of the authors is employed by CMI. Both authors have also previously worked with the United Nations. These affiliations are acknowledged in the interest of full transparency. The authors declare that these relationships have not influenced the research findings or conclusions. The views expressed are those of the authors alone. They do not speak on behalf of their respective organizations and, to the best of their knowledge and ability, are not disclosing any confidential information in this article.

References

Bell, C (2024) PeaceTech: Digital Transformation to End Wars. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-031-38894-1.
Birkland, JL (2024) How older adult information and communication technology users are impacted by aging stereotypes: A multigenerational perspective. New Media & Society 26(7), 3967–3988. https://doi.org/10.1177/14614448221108959.
Buhmann, A and Fieseler, C (2023) Deep learning meets deep democracy: Deliberative governance and responsible innovation in artificial intelligence. Business Ethics Quarterly 33(1), 146–179. https://doi.org/10.1017/beq.2021.42.
CMI (2022, April 8) CMI teams up with WithSecure to boost cyber security in peacemaking. https://cmi.fi/2022/04/08/cmi-teams-up-with-withsecure-to-boost-cyber-security-in-peacemaking/ (accessed 23 February 2025).
Comunello, F, Fernández Ardèvol, M, Mulargia, S and Belotti, F (2017) Women, youth and everything else: Age-based and gendered stereotypes in relation to digital technology among elderly Italian mobile phone users. Media, Culture & Society 39(6), 798–815. https://doi.org/10.1177/0163443716674363.
Cutting-Edge Tech in the Service of Inclusive Peace in Yemen (2020, August 3). https://osesgy.unmissions.org/cutting-edge-tech-service-inclusive-peace-yemen (accessed 22 February 2025).
Firchow, P, Martin-Shields, C, Omer, A and Ginty, RM (2017) PeaceTech: The liminal spaces of digital technology in peacebuilding. International Studies Perspectives 18(1), 4–42. https://doi.org/10.1093/isp/ekw007.
Fishkin, J, Bolotnyy, V, Lerner, J, Siu, A and Bradburn, N (2025) Scaling dialogue for democracy: Can automated deliberation create more deliberative voters? Perspectives on Politics, 1–18. https://doi.org/10.1017/S1537592724001749.
Ford, M (2024) From innovation to participation: Connectivity and the conduct of contemporary warfare. International Affairs 100(4), 1531–1549. https://doi.org/10.1093/ia/iiae061.
Geurts, A, Gutknecht, R, Warnke, P, Goetheer, A, Schirrmeister, E, Bakker, B and Meissner, S (2022) New perspectives for data-supported foresight: The hybrid AI-expert approach. Futures & Foresight Science 4(1), e99. https://doi.org/10.1002/ffo2.99.
Ginty, RM and Firchow, P (2024) The data myth: Interrogating the evidence base for evidence-based peacebuilding. Data & Policy 6, e80. https://doi.org/10.1017/dap.2024.80.
Giovanardi, M (2024) AI for peace: Mitigating the risks and enhancing opportunities. Data & Policy 6, e41. https://doi.org/10.1017/dap.2024.37.
Grønsund, T and Aanestad, M (2020) Augmenting the algorithm: Emerging human-in-the-loop work configurations. The Journal of Strategic Information Systems 29(2), 101614. https://doi.org/10.1016/j.jsis.2020.101614.
Guntrum, LG (2024) Keyboard fighters: The use of ICTs by activists in times of military coup in Myanmar. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. New York, NY: Association for Computing Machinery, 1–19. https://doi.org/10.1145/3613904.3642279.
Hirblinger, AT (2023) National Dialogues x Digitalisation. Berlin: Berghof Foundation.
Hirblinger, AT, Brummer, V and Kufus, F (2024) Leveraging digital methods in the quest for peaceful futures: The interplay of sincere and subjunctive technology affordances in peace mediation. Information, Communication & Society 27(11), 2039–2058. https://doi.org/10.1080/1369118X.2023.2247070.
Hirblinger, AT, Wählisch, M, Keator, K, McNaboe, C, Duursma, A, Karlsrud, J, Sticher, V, Verjee, A, Kyselova, T, Kwaja, CM and Perera, S (2023) Forum: Making peace with un-certainty: Reflections on the role of digital technology in peace processes beyond the data hype. International Studies Perspectives. https://doi.org/10.1093/isp/ekad004.
Jarrar, M, Zaraket, FA, Hammouda, T, Alavi, DM and Wählisch, M (2023) Lisan: Yemeni, Iraqi, Libyan, and Sudanese Arabic dialect corpora with morphological annotations. In 2023 20th ACS/IEEE International Conference on Computer Systems and Applications (AICCSA), 1–7. https://doi.org/10.1109/AICCSA59173.2023.10479250.
Konya, A, Schirch, L, Irwin, C and Ovadya, A (2023a, November 3) Democratic policy development using collective dialogues and AI. arXiv. https://doi.org/10.48550/arXiv.2311.02242.
Konya, A, Turan, D, Ovadya, A, Qui, L, Masood, D, Devine, F, Schirch, L, Roberts, I and the Deliberative Alignment Forum (2023b, December 6) Deliberative technology for alignment. arXiv. https://doi.org/10.48550/arXiv.2312.03893.
Larrauri, HP and Kahl, A (2013) Technology for peacebuilding. Stability: International Journal of Security and Development 2(3), 61. https://doi.org/10.5334/sta.cv.
Lezaun, J and Soneryd, L (2007) Consulting citizens: Technologies of elicitation and the mobility of publics. Public Understanding of Science 16(3), 279–297. https://doi.org/10.1177/0963662507079371.
Masood Alavi, D, Wählisch, M, Irwin, C and Konya, A (2022) Using artificial intelligence for peacebuilding. Journal of Peacebuilding & Development 17(2), 239–243. https://doi.org/10.1177/15423166221102757.
Mendonça, RF and Asenbaum, H (2025) Decolonizing deliberative democracy. European Journal of Social Theory. https://doi.org/10.1177/13684310241297906.
Mikhaylovskaya, A (2024) Enhancing deliberation with digital democratic innovations. Philosophy & Technology 37(1), 3. https://doi.org/10.1007/s13347-023-00692-x.
National Coalition for Dialogue and Deliberation (2014) Engagement Streams Framework. Retrieved from https://www.ncdd.org/uploads/1/3/5/5/135559674/2014_engagement_streams_guide_web.pdf.
Nicolaidis, K and Giovanardi, M (2022, October 28) Global PeaceTech: Unlocking the Better Angels of our Techne. SSRN Scholarly Paper. Rochester, NY: Social Science Research Network. https://doi.org/10.2139/ssrn.4267063.
Niyitunga, EB (2024) The role of artificial intelligence in promoting digital public participation for successful peacebuilding in Africa. African Journal of Peace and Conflict Studies 13(1), 25–49. https://doi.org/10.31920/2634-3665/2024/v13n1a2.
Nolte-Laird, R (2021) Peacebuilding Online: Dialogue and Enabling Positive Peace. Springer Nature.
Panic, B and Arthur, P (2024) AI for Peace. CRC Press. https://doi.org/10.1201/9781003359982.
Poutanen, J and Kufus, F (2024) Pioneering the digital frontier: CMI's approach to forward-looking dialogues. New England Journal of Public Policy 36(1). https://scholarworks.umb.edu/nejpp/vol36/iss1/12.
Rawat, R, Chakrawarti, RK, Sarangi, SK, Rajavat, A, Alamanda, MS, Srividya, K and Sankaran, KS (2024) Conversational Artificial Intelligence. John Wiley & Sons. https://doi.org/10.1002/9781394200801.
Rose, J and Sæbø, Ø (2010) Designing deliberation systems. The Information Society 26(3), 228–240. https://doi.org/10.1080/01972241003712298.
Ryan-Mosley, T (2024) Digital peacebuilding in post-conflict Colombia – A conceptual framework. Global Policy 15(S3), 47–57. https://doi.org/10.1111/1758-5899.13330.
Schirch, L (2020) 25 Spheres of Digital Peacebuilding and PeaceTech. Toda Peace Institute and Alliance for Peacebuilding. Retrieved from https://hdl.handle.net/1920/12785.
Shin, D (2024) Conclusion: Misinformation and AI—How algorithms generate and manipulate misinformation. In Shin, D (ed), Artificial Misinformation: Exploring Human-Algorithm Interaction Online. Cham: Springer Nature Switzerland, pp. 259–277. https://doi.org/10.1007/978-3-031-52569-8_10.
Siddals, S, Torous, J and Coxon, A (2024) "It happened to be the perfect thing": Experiences of generative AI chatbots for mental health. npj Mental Health Research 3(1), 19. https://doi.org/10.1038/s44184-024-00097-4.
Steinert, CV and Kazenwadel, D (2024) How user language affects conflict fatality estimates in ChatGPT. Journal of Peace Research. https://doi.org/10.1177/00223433241279381.
Stigant, S and Murray, E (2015) National Dialogues: A Tool for Conflict Transformation? Washington, DC: United States Institute of Peace.
Taylor, RR, Murphy, JW, Hoston, WT and Senkaiahliyan, S (2024) Democratizing AI in public administration: Improving equity through maximum feasible participation. AI & Society. https://doi.org/10.1007/s00146-024-02120-w.
Tessler, MH, Bakker, MA, Jarrett, D, Sheahan, H, Chadwick, MJ, Koster, R, Evans, G, Campbell-Gillingham, L, Collins, T, Parkes, DC, Botvinick, M and Summerfield, C (2024) AI can help humans find common ground in democratic deliberation. Science 386(6719), eadq2852. https://doi.org/10.1126/science.adq2852.
UNSMIL Conducts the First-ever Large-scale Digital Dialogue with 1000 Libyan Youth Online (2020, October 17). http://unsmil.unmissions.org/unsmil-conducts-first-ever-large-scale-digital-dialogue-1000-libyan-youth-online (accessed 22 February 2025).
van de Kerkhof, M (2006) Making a difference: On the constraints of consensus building and the relevance of deliberation in stakeholder dialogues. Policy Sciences 39(3), 279–299. https://doi.org/10.1007/s11077-006-9024-5.
Zhang, A, Walker, O, Nguyen, K, Dai, J, Chen, A and Lee, MK (2023) Deliberating with AI: Improving decision-making for the future through participatory AI design and stakeholder deliberation. Proceedings of the ACM on Human-Computer Interaction 7(CSCW1), 125:1–125:32. https://doi.org/10.1145/3579601.