Harnessing the potential of artificial intelligence for humanitarian action: Opportunities and risks

Abstract

Data-driven artificial intelligence (AI) technologies are progressively transforming the humanitarian field, but these technologies bring about significant risks for the protection of vulnerable individuals and populations in situations of conflict and crisis. This article investigates the opportunities and risks of using AI in humanitarian action. It examines whether and under what circumstances AI can be safely deployed to support the work of humanitarian actors in the field. The article argues that AI has the potential to support humanitarian actors as they implement a paradigm shift from reactive to anticipatory approaches to humanitarian action. However, it recommends that the existing risks, including those relating to algorithmic bias and data privacy concerns, be addressed as a priority if AI is to be put at the service of humanitarian action rather than deployed at the expense of humanitarianism. In doing so, the article contributes to the current debates on whether it is possible to harness the potential of AI for responsible use in humanitarian action.


Introduction
The use of digital technologies in humanitarian action is not a new phenomenon. Humanitarian actors have been utilizing digital technologies to assist and protect populations affected by conflict and crisis for decades. 1 Yet, contemporary advances in computational power, coupled with the availability of vast amounts of data (including big data), have allowed for more widespread use of digital technologies in the humanitarian context. 2 The COVID-19 pandemic has further accelerated the use of digital technologies to help maintain humanitarian operations. 3 Artificial intelligence (AI) is one such digital technology that is progressively transforming the humanitarian field. Although there is no internationally agreed definition, AI is broadly understood as "a collection of technologies that combine data, algorithms and computing power". 4 These technologies consist of software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. 5 This definition comprises two main elements: knowledge-based systems and machine learning systems. Knowledge-based systems are seen in computer programs that use an existing knowledge base to solve problems usually requiring specialized human expertise. 6 Machine learning is "the systematic study of algorithms and systems that improve their knowledge or performance with experience". 7 Through machine learning, machines can be trained to make sense of data. For example, AI systems can be trained to perform tasks such as natural language processing, utilizing the computer's capacity to parse and interpret text and spoken words. 8 Deep learning, a subset of machine learning, is particularly used to perform tasks such as image, video, speech and audio processing. 9 The analysis in this article applies to both categories of systems.
AI systems often draw on large amounts of data, including information directly collected by humanitarian actors and other sources such as big data, to learn, find patterns, make inferences about such patterns, and predict future behaviour. 10 Big data, or "large volumes of high velocity, complex and variable data", 11 is also increasingly relevant in the humanitarian context. An important part of big data originates in user-generated content available on social media and online platforms, such as text, images, audio and video. 12 Social media platforms tend to provide specific channels for users to engage and communicate during conflicts or crises. 13 For example, Facebook has enabled safety checks whereby users can report their status as natural disasters, conflicts or other emergencies unfold. 14 AI systems can build on these different types of data to map the evolution of conflicts and crises.
In this regard, AI technologies have the potential to support humanitarian actors as they implement a paradigm shift from reactive to anticipatory approaches to humanitarian action in conflicts or crises. 15 For example, in 2019, AI-supported disaster mapping helped humanitarians to provide a swift emergency response in Mozambique. 16 Data-driven AI systems can also build on predictive analytics techniques, which seek to identify patterns and relationships in data, to predict developments in the field. 17 For example, Project Jetson, an initiative of the Office of the United Nations High Commissioner for Refugees (UNHCR), uses predictive analytics to forecast forced displacement of people. 18 However, scholars and activists have increasingly voiced concerns about the risks posed by the deployment of AI in the humanitarian context. These concerns range from the dangers of "surveillance humanitarianism" 19 to the excesses of "techno-solutionism" 20 and the problems related to a potential rise in "technocolonialism". 21 These are significant risks, as they may expose populations already affected by conflict or crises to additional harms and human rights violations.
Against this backdrop, this article investigates the opportunities and risks of using AI in humanitarian action. It draws on legal, policy-oriented and technology-facing academic and professional literature to assess whether and under what circumstances AI can be safely deployed to support the work of humanitarian actors in the field. Although the academic and professional literature points to the heightened interest in using AI for military action in armed conflicts, that area remains outside of the scope of this article. 22 The article instead focuses on the growing uses of AI outside of military action, in support of humanitarian assistance in situations of conflict, disaster and crisis. The analysis proceeds in three steps. Firstly, the article examines the opportunities brought about by AI to support humanitarian actors' work in the field. Secondly, it evaluates the existing risks posed by these technologies. Thirdly, the article proposes key recommendations for deploying AI in the humanitarian context, based on the humanitarian imperative of "do no harm". Finally, the article draws conclusions on whether it is possible to safely leverage the benefits of AI while minimizing the risks it poses for humanitarian action.
AI in support of a paradigm change: From reactive to anticipatory approaches to humanitarian action

As noted earlier, AI has the potential to support humanitarian actors as they implement a paradigm shift from reactive to anticipatory approaches to humanitarian action. 23 This shift entails acting as soon as a crisis may be foreseen and proactively mitigating the adverse impact on vulnerable people. 24 In this regard, AI technologies may further expand the toolkit of humanitarian missions in their three main dimensions: preparedness, response and recovery.
Preparedness is the continuous process that aims to understand the existing risks and propose actions to respond to those risks, thus supporting a more effective humanitarian response to crises and emergencies. 25 Response focuses on the delivery of assistance to those in need, 26 while recovery refers to programmes that go beyond the provision of immediate humanitarian relief. 27 As such, recovery is an important element, as contemporary humanitarian crises tend to be increasingly complex and protracted, transcending the boundaries between humanitarian aid and development cooperation. 28

Preparedness

AI technologies can support humanitarian preparedness as AI systems can be used to analyze vast amounts of data, thus providing essential insights about potential risks to affected populations. These insights can inform humanitarians about such risks before a crisis or humanitarian disaster unfolds. 29 In this regard, predictive analytics, which builds on data-driven machine learning and statistical models, can be used to calculate and forecast impending natural disasters, displacement and refugee movements, famines, and global health emergencies. 30 To date, such systems have performed best for early warnings and short-term predictions. 31 Yet, their potential is significant, as AI systems performing predictive analytics can be instrumental for preparedness.
For example, the Forecast-based Financing programme deployed by the International Federation of Red Cross and Red Crescent Societies (IFRC) enables the swift allocation of humanitarian resources for early action implementation. 32 This programme uses a variety of data sources, such as meteorological data and market analysis, to determine when and where humanitarian resources should be allocated. 33 Another example is UNHCR's Project Jetson, which uses predictive analytics to forecast forced displacement linked to escalating violence and conflict in Somalia. 34 Project Jetson builds on various data sources, including climate data (such as river levels and rain patterns), market prices, remittance data, and data collected by the institution, to train its machine learning algorithm.
In another context, the World Food Programme has developed a model that uses predictive analytics to forecast food insecurity in conflict zones, where traditional data collection is challenging. 35 This model provides a map depicting the prevalence of undernourishment in populations around the world.
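As a schematic illustration of how such indicator-based forecasting works, the sketch below combines several indicators into a composite displacement-risk score. All variable names, weights and thresholds here are illustrative assumptions; they are not drawn from Forecast-based Financing, Project Jetson or the World Food Programme model.

```python
# Minimal early-warning sketch: combine hypothetical indicators into a
# displacement-risk score. Weights and thresholds are illustrative
# assumptions, not calibrated values from any real programme.

def risk_score(river_level_m, price_inflation_pct, conflict_events):
    """Return a 0-1 composite risk score from normalized indicators."""
    indicators = [
        min(river_level_m / 8.0, 1.0),        # assumed flood threshold: 8 m
        min(price_inflation_pct / 50.0, 1.0), # assumed critical inflation: 50%
        min(conflict_events / 20.0, 1.0),     # assumed ceiling: 20 events/month
    ]
    weights = [0.3, 0.3, 0.4]                 # illustrative weighting
    return sum(w * x for w, x in zip(weights, indicators))

def early_warning(score, threshold=0.6):
    """Trigger anticipatory action once the composite score crosses a threshold."""
    return "trigger early action" if score >= threshold else "continue monitoring"

score = risk_score(river_level_m=7.5, price_inflation_pct=40.0, conflict_events=18)
print(round(score, 3), early_warning(score))
```

Real systems replace such hand-set weights with models learned from historical data, which is precisely why the data quality and bias issues discussed later in this article matter so much.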
But would deploying AI systems, particularly those using predictive analytics models, lead to better preparedness in humanitarian action? Any answer to this question must be nuanced. On the one hand, in some contexts, AI systems may be beneficial to humanitarian action as they may contribute to a better understanding of the situation and better anticipation of responses. For instance, better preparedness can contribute to early allocation of resources, which may be crucial for the effectiveness of operations on the ground. On the other hand, the analysis of historical data should not be the only way to inform and frame future action. Models based on the analysis of past data may fail to consider variables such as changes in human behaviour and the environment, and may thus provide incomplete or erroneous predictions. For instance, during the COVID-19 pandemic, most AI models failed to provide efficient support to medical decision-making in tackling outbreaks of the disease. 36 That was partly due to the low quality of the data (historical data not relating to COVID-19) and the high risk of bias. 37 In addition, AI systems focusing on the analysis of past data might continue to reproduce errors and inaccuracies and perpetuate historical inequalities, biases and unfairness. 38 Accordingly, careful consideration of the specificities of the humanitarian context in which AI systems are to be deployed may help avoid unnecessary recourse to technologies and prevent excessive techno-solutionism.
Techno-solutionism, or faith in technologies to solve most societal problems, has proven to yield mixed results in the humanitarian field. For instance, studies have shown that focusing on big data analysis for anticipating Ebola outbreaks in West Africa was not always as effective as investing in adequate public health and social infrastructure. 39 Working closely with affected communities (for example, through participatory design 40) could help to tailor anticipatory interventions to key community needs, thus better informing and preparing humanitarian action before a conflict or crisis unfolds. This can also apply to AI systems used in humanitarian response, as discussed in the following subsection.

Response
AI systems can be used in ways that may support humanitarian response during conflicts and crises. For instance, recent advances in deep learning, natural language processing and image processing allow for faster and more precise classification of social media messages during crisis and conflict situations. This can assist humanitarian actors in responding to emergencies. 41 In particular, these advanced AI technologies can help identify areas that would benefit from streamlined delivery of assistance to those in need. For example, the Emergency Situation Awareness platform monitors content on Twitter in Australia and New Zealand to provide its users with information about the impact and scope of natural disasters such as earthquakes, bushfires and floods as they unfold. 42 Similarly, Artificial Intelligence for Disaster Response, an open platform that uses AI to filter and classify social media content, offers insights into the evolution of disasters. 43 Platforms such as these can triage and classify content, such as relevant images posted on social media showing damages to infrastructure and the extent of harm to affected populations, which can be useful for disaster response and management. 44 Another example is the Rapid Mapping Service, a project jointly developed by the United Nations (UN) Institute for Training and Research, the UN Operational Satellite Applications Programme, and UN Global Pulse. 45 This project applies AI to satellite imagery in order to rapidly map flooded areas and assess damage caused by conflict or natural disasters such as earthquakes and landslides, thus informing the humanitarian response on the ground.
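The classification step performed by platforms such as these can be caricatured with a simple keyword matcher. The categories and keyword lists below are invented for illustration; the platforms cited in the text rely on trained machine learning models rather than hand-written rules.

```python
import re

# Toy triage of social media messages for disaster response.
# Categories and keyword lists are invented for illustration only.
CATEGORIES = {
    "infrastructure_damage": {"bridge", "collapsed", "road", "blocked", "power"},
    "medical_need": {"injured", "hospital", "medicine", "ambulance"},
    "displacement": {"evacuated", "shelter", "homeless", "fleeing"},
}

def classify(message):
    """Assign a message to the category sharing the most keywords with it."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    scores = {cat: len(words & kws) for cat, kws in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify("Bridge collapsed, road to the village blocked"))
print(classify("Many injured people waiting outside the hospital"))
```

A production system would also handle multiple languages, misspellings and images, which is where the deep learning advances mentioned above come in.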
Could these examples indicate that AI can lead to more effective responses in the humanitarian context? Depending on their design and deployment, AI systems may support humanitarian responses to conflict and crisis. However, much is context-dependent.
Using AI technologies to map areas affected by disasters seems to yield satisfactory results. For instance, the Humanitarian OpenStreetMap project relies on AI systems capable of mapping areas affected by disasters. 46 This project uses crowdsourced social media data and satellite and drone imagery to provide reliable information about which areas are affected by disaster situations and need prioritization. However, such a project might not produce relevant results in the context of humanitarian responses in situations of armed conflict. For instance, disinformation campaigns may affect access to trustworthy data. 47 More generally, problems with access to good-quality data, which can be scarce during armed conflict situations, might affect the design and development of AI systems in that context and thereby compromise the suitability of their mapping tools.
Accordingly, while AI technologies may present opportunities to support effective humanitarian relief responses, they should not be understood as a ready-made, "one-size-fits-all" solution for any context within the realm of humanitarian action.

Recovery
AI may be effectively used in the context of recovery, as the complexities of contemporary crises often lead to protracted conflict situations. 48 Information technology can be an additional asset for facilitating engagement between humanitarians and affected communities in such contexts. 49 AI technologies may support humanitarian action in protracted situations. For example, the Trace the Face tool developed by the International Committee of the Red Cross (ICRC) was designed to help refugees and migrants find missing family members. 50 This tool uses facial recognition technologies to automate searching and matching, thus streamlining the process. Another example can be found in the AI-powered chatbots that may provide a way for affected community members to access humanitarian organizations and obtain relevant information. These chatbots are currently providing advisory services to migrants and refugees. 51 Similarly, humanitarian organizations may use messaging chatbots to connect with affected populations. 52 However, it is vital to question whether it is possible to generalize from these examples that AI contributes to better recovery action. As noted earlier in the analysis of preparedness and response, the benefit of using AI depends very much on the specific context in which these technologies are deployed. This is also true for recovery action. Community engagement and people-centred approaches may support the identification of areas in which technologies may effectively support recovery efforts on the ground, or conversely, those in which AI systems would not add value to recovery efforts. This should inform decision-making concerning the use of AI systems in recovery programmes. Moreover, AI technologies may also pose considerable risks for affected populations, such as exacerbating disproportionate surveillance or perpetuating inequalities due to algorithmic biases. Such risks are analysed in the following section.
AI at the expense of humanitarianism: The risks for affected populations

While AI may lead to potentially valuable outcomes in the humanitarian sector, deploying these systems is not without risks. Three main areas are of particular relevance in the context of humanitarian action: data quality, algorithmic bias, and the respect and protection of data privacy.

Data quality
Concerns about the quality of the data used to train AI algorithms are not limited to the humanitarian field, but this issue can have significant consequences for humanitarian action. In general terms, poor data quality leads to equally poor outcomes. 53 Such is the case, for instance, in the context of predictive policing and risk assessment algorithms. These algorithms often draw from historical crime data, such as police arrest rates per postcode and criminal records, to predict future crime incidence and recidivism risk. 54 If the data used to train these algorithms is incomplete or contains errors, the outcomes of the algorithms (i.e., crime forecasts and recidivism risk scores) might be equally poor in quality. Studies have indeed found that historical crime data sets may be incomplete and may include errors, as racial bias is often present in police records in some jurisdictions such as the United States. 55 If such algorithms are used to support judicial decision-making, it can lead to unfairness and discrimination based on race. 56 In the humanitarian context, poor data quality generates poor outcomes that may directly affect populations in an already vulnerable situation due to conflicts or crises. AI systems trained with inaccurate, incomplete or biased data will likely perpetuate and cascade these mistakes forward. For instance, a recent study found that ten of the most commonly used computer vision, natural language and audio data sets contain significant labelling errors (i.e., incorrect identification of images, text or audio). 57 As these data sets are often used to train AI algorithms, the errors will persist in the resulting AI systems.
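The propagation of labelling errors can be made concrete with a toy model that simply memorizes its training labels: under an assumed label-error rate e, its agreement with the ground truth falls to roughly 1 - e. The data here is synthetic and the "model" deliberately trivial.

```python
import random

random.seed(42)

# Toy demonstration that a model trained on mislabelled data reproduces
# those mistakes. The "model" memorizes its training labels; real learners
# generalize, but systematic label errors propagate in much the same way.
n = 1000
truth = {i: random.randint(0, 1) for i in range(n)}  # synthetic ground truth

def train_with_noise(truth, error_rate):
    """Simulate annotation: flip a fraction of the true labels at random."""
    return {i: 1 - y if random.random() < error_rate else y
            for i, y in truth.items()}

for error_rate in (0.0, 0.1, 0.3):
    model = train_with_noise(truth, error_rate)
    accuracy = sum(model[i] == truth[i] for i in range(n)) / n
    print(f"training-label error {error_rate:.0%} -> accuracy vs ground truth {accuracy:.2f}")
```

The point is not the arithmetic but the mechanism: errors baked into a data set resurface in every system trained on it.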
Unfortunately, obtaining high-quality data for humanitarian operations can be difficult due to the manifold constraints on such operations. 58 For instance, humanitarians may have problems collecting data due to low internet connectivity in remote areas. Incomplete and overlapping data sets that contain information collected by different humanitarian actors may also be a problem; for example, inaccuracies can be carried forward if outdated information is maintained in the data sets. 59 Errors and inaccuracies can also occur when using big data and crowdsourced data. 60 Accordingly, it is crucial that teams working with these data sets control for errors as much as possible. However, data sets and AI systems may also suffer from algorithmic bias, a topic that relates to data quality but has larger societal implications and is thus discussed in the following subsection.

Algorithmic bias
Connected to the issue of data quality is the question of the presence of bias in the design and development of AI systems. Bias is considered here not only as a technological or statistical error, but also as the human viewpoints, prejudices and stereotypes that are reflected in AI systems and can lead to unfair outcomes and discrimination. 61 AI systems can indeed reflect the biases of their human designers and developers. 62 Once such systems are deployed, this can in turn lead to unlawful discrimination.
International human rights law prohibits direct and indirect forms of discrimination based on race, colour, sex, gender, sexual orientation, language, religion, political or other opinion, national or social origin, property, birth or other status. 63 Direct discrimination takes place when an individual is treated less favourably on the basis of one or more of these grounds. Indirect discrimination exists even when measures are in appearance neutral, as such measures can in fact lead to the less favourable treatment of individuals based on one or more of the protected grounds.
Bias in AI systems may exacerbate inequalities and perpetuate direct and indirect forms of discrimination, notably on the grounds of gender and race. 64 For instance, structural and historical bias against minorities may be reflected in AI systems due to the pervasive nature of these biases. 65 Bias also commonly arises from gaps in the representation of diverse populations in data sets used for training AI algorithms. 66 For example, researchers have demonstrated that commercially available facial recognition algorithms were less accurate in recognizing women with darker skin types due in part to a lack of diversity in training data sets. 67 Similarly, researchers have shown that AI algorithms had more difficulties identifying people with disabilities when such individuals were using assistive technologies such as wheelchairs. 68 In this regard, biased AI systems may go undetected and continue supporting decisions that could lead to discriminatory outcomes. 69 That is partly due to the opacity with which certain machine learning and deep learning algorithms operate: the so-called "black box problem". 70 In addition, the complexity of AI systems based on deep learning techniques entails that their designers and developers are often unable to understand and sufficiently explain how the machines have reached certain decisions. This may in turn make it more challenging to identify biases in the algorithms.
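One practical safeguard is the kind of disaggregated accuracy audit that revealed the facial recognition disparities mentioned above: evaluation results are broken down by demographic group and large gaps are flagged. The records and the 10% disparity threshold below are synthetic assumptions for illustration only.

```python
# Disaggregated accuracy audit on synthetic evaluation records.
# In practice, the groups, records and acceptable gap would come from
# the deployment context rather than being hard-coded as here.
records = [
    # (demographic group, was the prediction correct?)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def accuracy_by_group(records):
    """Compute prediction accuracy separately for each demographic group."""
    totals, correct = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + int(ok)
    return {g: correct[g] / totals[g] for g in totals}

def flag_disparity(accuracies, max_gap=0.10):
    """Flag the system if per-group accuracies differ by more than max_gap."""
    return max(accuracies.values()) - min(accuracies.values()) > max_gap

accuracies = accuracy_by_group(records)
print(accuracies)
print(flag_disparity(accuracies))
```

Such an audit does not remove bias, but it makes disparities visible before, rather than after, a system is used to gate access to assistance.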
The consequences of deploying biased AI systems can be significant in the humanitarian context. For example, in a scenario where facial recognition technologies are the sole means for identification and identity verification, inaccuracies in such systems may lead to the misidentification of individuals with darker skin types. If identification and identity verification by those means is a precondition for accessing humanitarian aid, misidentification may lead to individuals being denied assistance. This could happen if the system used for triage mistakenly indicates that an individual has already received the aid in question (such as emergency food supplies or medical care). Such a situation would have dramatic consequences for the affected individuals. If the AI systems' risks were known and not addressed, it could lead to unlawful discrimination based on race. This could also be contrary to the humanitarian principle of humanity, according to which human suffering must be addressed wherever it is found. 71 Accordingly, safeguards must be put in place to ensure that AI systems used to support the work of humanitarians are not transformed into tools of exclusion of individuals or populations in need of assistance. For example, if online photographs of children in war tend to show children of colour with weapons (i.e., as child soldiers) disproportionately more often, while depicting children of white ethnic background as victims, then AI algorithms trained on such data sets may continue to perpetuate this distinction. This could in turn contribute to existing biases against children of colour in humanitarian action, compounding the suffering already inflicted by armed conflict. Awareness and control for this type of bias should therefore permeate the design and development of AI systems to be deployed in the humanitarian context. 
Another example relates to facial recognition technologies: as long as these technologies remain inaccurate in recognizing people with darker skin types, they should not be used to assist decision-making essential to determining humanitarian aid delivery.

Data privacy
As is internationally agreed, "the same rights that people have offline must also be protected online". 72 This should include AI systems.
International human rights law instruments recognize the right to privacy. 73 In addition, specific legal regimes, such as the General Data Protection Regulation (GDPR), establish fundamental standards for protecting personal data. While the GDPR is a European Union (EU) law regime that does not bind all humanitarian actors across the globe, it remains relevant beyond the EU as it has inspired similar regulations worldwide. 74 The principles set forth in the GDPR have also been taken into account by the Handbook on Data Protection in Humanitarian Action, 75 which is considered a leading resource that sets a minimum standard for processing personal data in the humanitarian context. These principles include lawfulness, fairness and transparency in the processing of personal data (Article 5 of the GDPR).
Having a lawful basis for the processing of personal data is a legal requirement (Article 6 of the GDPR). Consent is often used as a lawful basis for processing personal data in the humanitarian context. According to the legal standards, consent must be fully informed, specific, unambiguous and freely given (Article 4(11) of the GDPR). Yet, in the humanitarian context, consent may not be entirely unambiguous and freely given due to the inherent power imbalance between humanitarian organizations and beneficiaries of humanitarian assistance. A refusal to consent to collecting and processing personal data may, in practical terms, lead to the denial of humanitarian assistance. 76 Moreover, it may be difficult for humanitarian actors to ensure that recipients of humanitarian assistance effectively understand the meaning of consent due to linguistic barriers and administrative and institutional complexities.
Fully informed, specific, unambiguous and freely given consent may also be challenging to achieve given that AI systems often use data to further refine and develop other AI solutions. While individuals may agree to have their personal information processed for a specific purpose related to humanitarian action, they may not know about or agree to that data being later used to develop other AI systems. 77 Such concerns are further aggravated by the criticisms concerning "surveillance humanitarianism", whereby the growing collection of data and uses of technologies by humanitarians may inadvertently increase the vulnerability of those in need of assistance. 78 These practices require even more scrutiny due to the increasingly common collaborations between technology companies and humanitarian organizations. 79 These companies play a central role in this area as they design and develop the AI systems that humanitarians later deploy in the field. Arguably, technology companies' interests and world view tend to be predominantly reflected in the design and development of AI systems, thus neglecting the needs and experiences of their users. 80 This is particularly concerning for the deployment of AI systems in the humanitarian context, where the risks for populations affected by conflicts or crises are significant. Accordingly, it is essential to have a clear set of guidelines for implementing AI in the humanitarian context, notably placing the humanitarian imperative of "do no harm" at its core, as discussed in the following section.
AI at the service of humanitarian action: The humanitarian imperative of "do no harm"

As noted earlier, while AI may bring about novel opportunities to strengthen humanitarian action, it also presents significant risks when deployed in the humanitarian context. This section elaborates on the humanitarian imperative of "do no harm" and offers recommendations on making AI work in support of humanitarian action and not to the detriment of populations affected by conflict and crisis.
"Do no harm" in the age of AI

In the face of ever-evolving AI technologies, it is crucial that humanitarians consider the imperative of "do no harm" as paramount to all deployment of AI systems in humanitarian action. This principle of non-maleficence has long been recognized as one of the core principles of bioethics. 81 It was first proposed in the humanitarian context by Mary Anderson; 82 subsequently, various humanitarian organizations have further developed its application. 83 Today, this principle is also commonly invoked in the fields of ethics of technology and AI. 84 The "do no harm" principle entails that humanitarian actors consider the potential ways in which their actions or omissions may inadvertently cause harm or create new risks for the populations they intend to serve. 85 For example, humanitarian "innovation" may introduce unnecessary risks to already vulnerable populations, such as when technical failures in newly introduced systems lead to delays, disruption or cancellation of aid distribution. 86 Therefore, avoiding or preventing harm and mitigating risks is at the heart of this humanitarian imperative.
Risk analysis and impact assessments may be used to operationalize the "do no harm" principle. Risk analysis can help to identify potential risks arising from humanitarian action and provide a clear avenue for risk mitigation. Impact assessments can provide the means to identify the negative impacts of specific humanitarian programmes and the best ways to avoid or prevent harm. These processes may assist humanitarian organizations as they envisage the utilization of AI technologies for humanitarian action. At times, they may even lead to the conclusion that no technologies should be deployed in a specific context, as these would cause more harm than good to their beneficiaries. Ultimately, the fact that a technology is available does not mean that it must also be used.
AI technologies present some well-known risks, which ought to be addressed by humanitarian actors before the deployment of AI systems in humanitarian action. For example, humanitarian organizations using data-driven AI systems should identify risks concerning data security breaches that could lead to the disclosure of sensitive information about their staff and their beneficiaries. They should also evaluate whether using AI systems would negatively impact affected populations: for example, by revealing their location while mapping the evolution of a conflict and thereby inadvertently exposing them to persecution. In sum, the deployment of AI systems should never create additional harm or risks to affected populations.
Accordingly, humanitarian actors must not over-rely on AI technologies, particularly those that remain insufficiently accurate in certain contexts, such as facial recognition technologies. 87 Before adopting AI systems, humanitarian actors should evaluate whether there is a need to deploy these technologies in the field, whether they add value to the humanitarian programmes in question, and whether they can do so in a manner that protects vulnerable populations from additional harm.

Mechanisms for avoiding and mitigating data privacy harms
In the digital age, avoiding or mitigating harm also entails the protection of data privacy. Data privacy should be protected and respected throughout the AI life cycle, from design to development to implementation.
In this regard, "privacy by design" principles provide a good starting point. 88 They offer a proactive (instead of reactive) and preventive (instead of remedial) set of principles based on user-centric approaches. These are valuable tools for building better data privacy protection. 89 For humanitarian organizations that are subject to EU law, Article 25 of the GDPR imposes a more comprehensive requirement for data protection by design and by default. 90 This provision requires the implementation of appropriate technical and organizational measures aimed at integrating the core data protection principles (enumerated in Article 5 of the GDPR) into the design and development of systems processing personal data. As noted earlier, these core principles are lawfulness, fairness and transparency, along with purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality, and accountability. These are also consistent with the basic data protection principles proposed by the ICRC. 91 Accordingly, humanitarian organizations designing AI solutions or procuring AI systems from private sector providers should ensure that data protection is implemented by design and by default in these AI systems. For instance, they should ensure that they have obtained consent for processing personal information or that they rely on another legal basis for processing, such as the vital interest of the data subject or of another person, public interest, legitimate interest, performance of a contract, or compliance with a legal obligation. 92 Similarly, data collection should be kept to the minimum needed, storage should be cyber-secure, personal data should be destroyed once it is no longer required, and personal information should only be used for the purpose for which it was collected in the first place.
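By way of illustration, three of the core principles mentioned above (data minimization, storage limitation and purpose limitation) can be translated directly into the design of a data-processing system. The sketch below is purely hypothetical: the field names, retention period and purpose labels are invented for the example and would need to be set per programme and per legal basis.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical retention period; in practice this is set per programme
# and per applicable legal framework.
RETENTION = timedelta(days=365)

@dataclass(frozen=True)
class BeneficiaryRecord:
    # Data minimization: only the fields strictly needed for aid distribution,
    # under a pseudonymous identifier rather than a name.
    registration_id: str
    household_size: int
    collected_on: date
    purpose: str  # purpose limitation: recorded at collection time

def is_expired(record: BeneficiaryRecord, today: date) -> bool:
    """Storage limitation: flag records past the retention period for deletion."""
    return today - record.collected_on > RETENTION

def check_use(record: BeneficiaryRecord, intended_purpose: str) -> None:
    """Purpose limitation: refuse processing for any purpose other than
    the one for which the data was originally collected."""
    if record.purpose != intended_purpose:
        raise PermissionError("processing not covered by original purpose")
```

The point of such "by design" measures is that the principles are enforced by the system itself rather than left to ad hoc organizational practice.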
Moreover, carrying out data protection impact assessments (DPIAs) may also help humanitarian actors understand the potential negative impacts of AI technologies used in humanitarian programmes. A DPIA is a process that identifies the risks for the protection of individuals' data privacy and the ways of mitigating those risks. 93 Humanitarian organizations subject to the GDPR will have an obligation to carry out a DPIA before processing data if there is a high risk of harm to individuals' rights and freedoms (Article 35(1) of the GDPR). DPIAs can add value to humanitarian projects, even if the organizations involved are not legally obliged to carry out such a process. A DPIA can help to provide a clear roadmap for identifying risks, solutions and recommendations concerning data-driven AI systems.
For example, a DPIA can be used to identify situations in which anonymized data used to train AI algorithms may be re-identified, thus becoming personal information again and attracting the application of legal regimes on data protection. Re-identification occurs when data that was initially anonymized is de-anonymized. This can happen when information from different sources is matched to identify individuals from an initially anonymized data set. For instance, a study found that it was possible to match information in order to identify individuals from a list containing the anonymous movie ratings of 500,000 Netflix subscribers, also uncovering their apparent political preferences and other potentially sensitive information. 94 Overall, research demonstrates that individuals have an over 99% chance of being re-identified in certain circumstances, even when data sets were initially anonymized. 95 In the humanitarian context, anonymization may not be enough to prevent the re-identification of vulnerable populations, and failure to retain information in a cyber-secure manner risks exposing such populations to persecution and harm.
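The mechanics of such a linkage attack can be illustrated with a toy sketch. All names, attributes and values below are invented: the point is only that an "anonymized" data set retaining quasi-identifiers (such as postcode, birth year and sex) can be matched against a separate public record to recover identities.

```python
# An "anonymized" data set stripped of names but retaining quasi-identifiers.
anonymized = [
    {"id": "A1", "postcode": "1011", "birth_year": 1984, "sex": "F", "rating": "film_x"},
    {"id": "A2", "postcode": "2500", "birth_year": 1990, "sex": "M", "rating": "film_y"},
]

# A separate, publicly available record containing the same quasi-identifiers.
public_records = [
    {"name": "Jane Doe", "postcode": "1011", "birth_year": 1984, "sex": "F"},
]

def reidentify(anon_rows, public_rows):
    """Match rows on quasi-identifiers; a unique match links an
    'anonymous' row back to a named individual."""
    hits = []
    for a in anon_rows:
        matches = [
            p for p in public_rows
            if (p["postcode"], p["birth_year"], p["sex"])
            == (a["postcode"], a["birth_year"], a["sex"])
        ]
        if len(matches) == 1:  # a unique match re-identifies the person
            hits.append((a["id"], matches[0]["name"]))
    return hits

print(reidentify(anonymized, public_records))  # [('A1', 'Jane Doe')]
```

A DPIA would ask precisely this question of any data set a humanitarian organization intends to publish or share: which quasi-identifiers remain, and against which external sources could they plausibly be matched?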
A DPIA can help identify other solutions and organizational measures that could prevent re-identification from occurring.

Transparency, accountability and redress
The principle of "do no harm" also implies that humanitarian actors should consider establishing an overarching framework to ensure much-needed transparency and accountability in the use of AI in humanitarian action.
The term "transparency" is used here to indicate that humanitarian actors should communicate about whether and how they use AI systems in humanitarian action. They should disclose information about the systems they use, even when the way in which these systems work is not fully explainable. In this sense, transparency is a broader concept than the narrower notion of explainability of AI systems. 96 For example, consider a scenario in which AI systems are used for biometric identity verification of refugees as a condition for distributing aid in refugee camps. 97 In this case, the humanitarian actors using such AI systems should communicate to the refugees that they are doing so. It is equally important that they disclose to those refugees how they are employing the AI systems and what this use entails. For instance, they should disclose what type of information will be collected and for what purpose, how long the data will be stored, and who will access it. Similarly, they should communicate which safeguards will be put in place to avoid cyber security breaches.
Accountability is understood as the action of holding someone to account for their actions or omissions. 98 It is a process aimed at assessing whether a person's or an entity's actions or omissions were required or justified and whether that person or entity may be legally responsible or liable for the consequences of their act or omission. 99 Accountability is also a mechanism involving an obligation to explain and justify conduct. 100 In the humanitarian context, accountability should be enshrined in the relationships between humanitarian actors and their beneficiaries, particularly when AI systems are used to support humanitarian action, due to the risks these technologies may pose to their human rights. For instance, humanitarian actors should inform their beneficiaries of any data security breach that may expose the beneficiaries' personal information and give an account of the measures taken to remedy the situation. The recent swift response by the ICRC to a data security breach has set an example of good practice in this area. The institution undertook direct and comprehensive efforts to explain the actions taken and inform the affected communities worldwide of the consequences of the cyber security incident. 101 Finally, individuals should be able to challenge decisions that were either automated or made by humans with the support of AI systems if such decisions adversely impacted those individuals' rights. 102 Grievance mechanisms, either judicial or extra-judicial, could thus provide legal avenues for access to remedy, notably in cases where inadvertent harm was caused to the beneficiaries of humanitarian assistance. Extra-judicial mechanisms such as administrative complaints or alternative dispute resolution could be helpful to individuals who may not be able to afford the costs of judicial proceedings.

Conclusion
Data-driven AI technologies are progressively transforming the humanitarian field. They have the potential to support humanitarian actors as they implement a paradigm shift from reactive to anticipatory approaches to humanitarian action. AI may thus contribute to humanitarian action in its three main dimensions: preparedness, response and recovery.
AI technologies can support humanitarian preparedness. They can do so by analyzing vast amounts of multidimensional data at fast speeds, identifying patterns in the data, making inferences, and providing crucial insights about potential risks before a crisis or humanitarian disaster unfolds. AI technologies can also present opportunities to support effective humanitarian relief responses and promote recovery programmes, notably in protracted conflict situations.
Several AI-based initiatives are currently being deployed and tested by humanitarian organizations. These include AI systems deployed to forecast population movements, map areas affected by humanitarian crises and identify missing individuals, thus informing and facilitating humanitarian action on the ground. Yet, deploying these systems is not without risks. This article has analyzed three main areas of concern: the quality of the data used to train AI algorithms, the existence of algorithmic bias permeating the design and development of AI systems, and the respect for and protection of data privacy.
While these concerns are not exclusive to the humanitarian field, they may significantly affect populations already in a vulnerable situation due to conflict and crisis. Therefore, if AI systems are not to be deployed at the expense of humanitarianism, it is vital that humanitarian actors implement these technologies in line with the humanitarian imperative of "do no harm". Risk analysis and impact assessments may help to operationalize the "do no harm" imperative. Both processes may be valuable for mitigating risks and minimizing or avoiding negative impacts on affected populations.
The "do no harm" imperative is especially crucial in situations of armed conflict such as the one currently ravaging Ukraine and prompting the displacement of over 4 million people in Europe. 103 In such contexts, AI technologies can be used in both helpful and damaging ways within and outside the battlefield. For instance, AI can support the analysis of social media data and evaluate the veracity of information, 104 but it can also support the creation of false videos using deepfake technologies, fuelling disinformation campaigns. 105 As AI systems are not inherently neutral, depending on how they are used, they may introduce new, unnecessary risks to already vulnerable populations. For instance, AI-powered chatbots can help streamline visa applications in the face of large movements of people fleeing conflict, 106 but if these systems are used without proper oversight, they could expose individuals' personal information to needless cyber security risks and potential data breaches. Accordingly, to put AI at the service of humanitarian action, leveraging its benefits while mitigating its risks, humanitarian organizations should be mindful that there is no ready-made, "one-size-fits-all" AI solution applicable to all contexts. They should also evaluate whether AI systems should be deployed at all in certain circumstances, as such systems could cause more harm than good to their beneficiaries. On certain occasions, the fact that technology is available does not mean that it must be used.
Finally, when deploying these technologies, it is crucial that humanitarian organizations establish adequate frameworks to strengthen accountability and transparency in the use of AI in the humanitarian context. Overall, such mechanisms would contribute towards the goal of harnessing the potential of responsible use of AI in humanitarian action.