Introduction
The humanitarian system is at a breaking point, following global cuts to humanitarian aid and a dramatic reduction in institutional donors’ official development assistance, including close to 80% of USAID’s budget.Footnote 2 The cessation of US government funding alone has disrupted life-saving programmes around the world, including support for mental health and psychosocial support (MHPSS) and gender-based violence (GBV) programmes.Footnote 3 These cuts come at a time when humanitarian need is rising and adherence to international humanitarian law is waning, making it harder for humanitarian actors to access crisis-affected communities.
In addition, some experts argue that mental health needs across the globe are growing and are outpacing available services and the infrastructure required to deliver those services, particularly in areas that are remote or difficult to access.Footnote 4 Quality, specialized GBV-related care in conflicts and humanitarian emergencies, including MHPSS services, can be harder to access in contexts where cultural barriers, including shame and stigma, prevent survivors from seeking and accessing care where it exists.Footnote 5 In other places, mandatory reporting regulations – which require service providers to refer cases of rape and sexual violence to the police – can increase obstacles to care for survivors who do not wish to pursue legal or criminal action.Footnote 6
In some countries, chatbots are being developed and deployed to bridge the so-called “treatment gap”,Footnote 7 extending talking therapies to low-resource settings and/or increasing referrals to existing, human-delivered MHPSS services.Footnote 8 Advances in computational power and deep learning techniques over the last several years, as well as the scale and availability of large datasets, have improved the accuracy of large language models (LLMs) and generative artificial intelligence (AI) tools like ChatGPT.Footnote 9 This has accelerated interest in and experimentation with LLMs and their integration into digital mental health interventions.Footnote 10 Some evidence from high-resource settings suggests that these tools might be helpful in contexts where accelerating mental health needs outpace the availability or exceed the capacity of existing services and/or where individuals are unable or unwilling to access in-person services.Footnote 11 As humanitarian actors and duty bearers increasingly attempt to find ways to deliver services and support against a backdrop of dramatically reduced funding, some are exploring the extent to which chatbots could be used to support crisis-affected communities, including GBV survivors.
In this paper, we seek to address the following questions. Firstly, what are the potential opportunities and limitations of using chatbots to support the mental health of women and girls, including survivors of GBV, in contexts of armed conflict and humanitarian emergencies? And secondly, to what extent does the use of chatbots to support GBV survivors’ mental health align with key ethical principles and guidelines, including the GBV guiding principles and humanitarian principles?
Given the relatively recent and rapid rise of AI-powered chatbots, academic and peer-reviewed research on the impact of mental health chatbots in humanitarian crises is extremely limited. Therefore, we investigated research and evidence from adjacent areas of scholarship, such as feminist AI, trauma treatment, GBV in emergencies (GBViE) and MHPSS in conflicts and emergencies. Based on our research, we conclude that the potential benefits of mental health chatbots do not yet outweigh the risks of using them to address the effects of potentially traumatic events (PTEs)Footnote 12 or of deploying them in high-stakes scenarios and contexts such as armed conflicts, humanitarian crises and GBViE programming. Furthermore, we argue that while some chatbots demonstrate potential – largely in contexts related to the global North or in addressing symptoms linked to non-traumatic events, such as social isolation and anxiety – their use to support the mental health and psychosocial needs of survivors of GBV in armed conflicts and humanitarian emergencies, for the most part, does not yet align with many of the GBV guiding principles and humanitarian commitments.
However, the opportunities that these tools present for crisis-affected women and girls, including GBV survivors, warrant further research and examination, as we recommend in the conclusion of this paper. We reject the false binary that often clouds debates on the use of chatbots in humanitarian crises, which asserts that we must use either technological, AI-driven solutions or non-technological ones but not both. We also call for deeper engagement and partnership with crisis-affected women and girls to strengthen their digital literacy and AI familiarity, surface their views on chatbots and increase meaningful opportunities to co-design trauma-informed AI solutions with them, grounded in feminist AI principles and approaches which prioritize equity, social justice, and dismantling systematic bias and oppression.Footnote 13
Methods, scope and limitations
For the purposes of this paper and given the necessary limitations on length, we focus only on LLM-powered chatbots and conversational agents explicitly trained to provide direct support, advice and/or counselling to individuals seeking treatment and support for symptoms related to a range of mental health and psychosocial conditions. Excluded from our analysis are mental health apps or digital technologies that do not use AI, such as button-based or rules-based chatbots;Footnote 14 apps that only use AI to match and connect clients with human-provided services; apps that use AI to provide specific advice and information to human service providers, rather than offering direct support to clients and service users; and conversational agents or companion AI, like Replika and Character.AI, which have not been specifically or intentionally designed to offer specialized mental health support.Footnote 15
To explore the potential opportunities and limitations of mental health chatbots for crisis-affected women and girls, including GBV survivors, we drew on three methods: (1) a literature review, (2) an analysis of several publicly available chatbots explicitly designed to deliver mental health-related support, and (3) key expert interviews.
We conducted a review of academic and grey literature related to several complementary and interrelated themes, including the mental health effects of conflict, humanitarian crises, and GBV; the use of mental health chatbots and evidence of their impact; and the links between human–computer interaction, AI, and mental health and psychosocial well-being.
The use of LLM-powered chatbots to support MHPSS aims for crisis-affected communities, including women and girls, is still nascent. Through internet search tools and outreach to country-based staff delivering humanitarian programmes, we selected five AI-powered chatbots for analysis which collectively represented a range of capabilities and aims that we wished to explore (see Table 1). These include chatbots built on open-source LLMs; chatbots supporting users in the global South; chatbots developed in heavily regulated environments; chatbots designed to offer MHPSS-related support across multiple contexts; and/or chatbots built with significant financial, technical and technological investments from mental health experts as well as AI and machine learning engineers. Not all of the chatbots that we reviewed demonstrated all of these capabilities.
Table 1. Chatbots selected for analysis

We used the results of this chatbot analysis as a proxy means of identifying and assessing the potential risks and opportunities that might arise when using mental health chatbots in situations related to armed conflict, humanitarian crises and GBV. Where possible, we documented information related to the terms and conditions of these five chatbots, the intended user and target groups, and the languages and locations in which the bot was being used (see Appendices 1 and 2).
To round out this desk-based research, we conducted interviews with key experts across multiple domains, including humanitarian aid, AI and digital technologies, data protection, academia, GBViE, social work, and women’s rights and empowerment (see the list of acknowledgements presented in the first footnote to this paper).
Although both authors have led consultations and participatory AI processes with communities affected by crises, for this paper we did not conduct any interviews with GBV survivors for several reasons. First, while interest in using AI-powered chatbots to support GBV survivors in crises and conflicts is increasing, very little information about pilot projects to this end has been made public. Furthermore, resource constraints precluded the development of a sufficiently robust and ethically grounded research project – aligned with GBV guiding principles and AI ethics – that would have enabled us to safely and responsibly solicit the views of women and girls on the use of chatbots to support GBV survivors. Thus, we limited our methods to desk-based research. However, further research to this end would be welcome and indeed critical for any design processes related to the safe and ethical use of AI in high-stakes contexts like humanitarian crises and conflicts.
Our methods and analytical approaches are influenced by feminist AI, science and technology studies, trauma-informed design principles and a commitment to co-design methodologies that privilege the voices of crisis-affected women and girls. We approached our research through the lens of participatory AI governance, informed by our experience working with community-based women’s organizations and for humanitarian agencies to design and deliver life-saving and protective interventions. We are committed to democratizing AI and amplifying the agency, voice and views of communities in the global South in decisions related to AI development and deployment, in line with feminist AI and decolonizing approaches to AI.
GBV and MHPSS in conflicts and emergencies
MHPSS needs in conflicts and complex humanitarian emergencies are significant.Footnote 16 PTEsFootnote 17 linked to armed conflicts – including rape and sexual violence, torture, kidnapping, and forced recruitment into armed groups – are associated with higher rates of post-traumatic stress disorder (PTSD) and depression.Footnote 18 Some estimates suggest that as many as one in five people living in conflicts and humanitarian crises are affected by mental health and psychosocial disorders, including depression, anxiety, PTSD, bipolar disorder and schizophrenia, as compared with a global mean prevalence of one in 14.Footnote 19
As with other PTEs, sexual violence and other forms of GBV have harmful mental health and psychosocial effects on individuals as well as families and communities.Footnote 20 At the individual level, incidents of GBV, including sexual violence, are strongly associated with post-traumatic stress, depression and somatic symptomology as well as chronic mental health conditions.Footnote 21
Over the last two decades, MHPSS interventions have increasingly been seen as essential components of humanitarian action and GBViE programming.Footnote 22 The 2019 Inter-Agency Minimum Standards for Gender-Based Violence in Emergencies Programming set out a range of standards related to the assessment and treatment of both mental health and psychosocial needs, including specialist and non-specialist care.Footnote 23 The Inter-Agency Standing Committee (IASC) 2022 MHPSS Minimum Service Package provides specific guidance and key considerations on how to integrate MHPSS support into GBV programming, and the 2007 IASC Guidelines on Mental Health and Psychosocial Support in Emergency Settings offer a framework for developing evidence-based services and support.Footnote 24
Despite the improvements in standards and commitments, quality and specialized MHPSS services – including GBV case management and specialized GBV MHPSS services – remain limited, under-resourced or inaccessible in many humanitarian crises and conflicts.Footnote 25 This is due, in part, to funding shortfalls, staff availability and turnover, inconsistent training, and volatile security situations which limit access to conflict-affected areas, making sustained humanitarian interventions and service delivery difficult.Footnote 26 For example, the security situations in Sudan and Gaza have severely impaired access to services, as have the recent and rapid reductions to funding for humanitarian assistance.Footnote 27
The rise of chatbots
The use of chatbots designed to deliver mental health services and support well-being appears to be increasing, particularly in the global North.Footnote 28 Their rapid rise is closely linked to both the COVID-19 pandemic and the increasing availability and improved performance of proprietary and open-source LLMs, the foundation of most commercially available conversational agents. In early 2020, many governments around the world adopted non-pharmaceutical interventions, such as social distancing, to limit the spread of COVID-19. Research suggests that during this period, the number of people who experienced loneliness, anxiety and mental health problems increased.Footnote 29 At the same time, in-person public and social services around the world, including MHPSS services, were suspended or offered only to the most urgent cases, increasing the so-called “treatment gap” – that is, the difference between the number of individuals who need treatment and the number who actually receive it.Footnote 30
During this period, many social and public service providers explored ways to remotely deliver MHPSS services, including to GBV survivors, using a range of telecommunications technologies. This included tele-consultations with patients and the tele-supervision of practitioners and service providers.Footnote 31 For example, Médecins Sans Frontières (MSF) shifted from delivering therapeutic MHPSS interventions in person to delivering them through telecommunications technologies like WhatsApp and Skype.Footnote 32 Recent research suggests that some digital mental health interventions offered during the COVID-19 pandemic, including tele-mental health services, delivered positive impact at individual levels and were associated with closing the treatment gap created by social distancing requirements.Footnote 33
Nowadays, a range of chatbots are used to offer MHPSS services across multiple contexts. These chatbots vary by intended user (service provider or potential client), technical design, the extent to which they use AI, and stated aim or purpose. Some chatbots, like button-based and rules-based chatbots, do not use AI and instead rely on logic trees or decision trees, following predefined scripts that are linked to a fixed database of questions and answers.Footnote 34 These chatbots match user inputs to a set of predetermined responses based on identifiable keywords or patterns of language. For example, Planned Parenthood’s chatbots, Roo and myPlan, answer questions from young people about sexual and reproductive health and relationships using buttons and rules;Footnote 35 in line with best practice, myPlan was co-designed with GBV survivors to help those experiencing intimate partner violence make informed decisions about their safety. Other chatbots, like AnenaSawa, an SMS chatbot developed for use in South Sudan, have been designed to provide information to GBV service providers and help caseworkers refer survivors to care.Footnote 36 However, while button-based and rules-based chatbots can be particularly useful for straightforward tasks, such as providing answers to frequently asked questions, they can also raise users’ expectations and increase safety risks; see the below section on “Safety and ‘Do No Harm’” for a more detailed discussion.
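To make this logic concrete, the minimal sketch below shows how a rules-based chatbot of the kind described above might match user inputs against predefined keywords and return scripted answers. The keywords, responses and fallback message are hypothetical illustrations, not the implementation of Roo, myPlan, AnenaSawa or any other deployed tool.

```python
# Illustrative sketch only: a minimal rules-based chatbot that matches user
# input against predefined keywords and returns scripted answers. All keywords
# and responses below are hypothetical.

RULES = {
    ("clinic", "appointment"): "You can find your nearest clinic and book an appointment here: [link].",
    ("contraception", "birth control"): "Here is some basic information about contraception options: [link].",
    ("safety plan", "not safe"): "If you are worried about your safety, these resources may help: [link].",
}

FALLBACK = "I'm sorry, I don't have an answer for that. Would you like to speak to a human advisor?"

def rules_based_reply(user_message: str) -> str:
    """Return the first scripted response whose keywords appear in the message."""
    text = user_message.lower()
    for keywords, response in RULES.items():
        if any(keyword in text for keyword in keywords):
            return response
    return FALLBACK

print(rules_based_reply("Where can I get birth control?"))  # matches a scripted answer
print(rules_based_reply("He gets angry when I go out"))     # no keyword match -> fallback
```

The fallback branch captures both the appeal and the limitation of this approach: responses are predictable and easy to audit, but any phrasing that the designers did not anticipate, including indirect disclosures of violence, falls through to a generic reply.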
Driven by recent advances in AI, an increasing number of chatbots are powered by LLMs, which are advanced AI systems designed to understand and generate human-like text. LLMs are trained on vast amounts of data – books, articles, images and conversations – so that they can compose text and generate responses in ways that feel human. LLM-powered chatbots have been used to support both MHPSS service providers and potential clients.Footnote 37 These chatbots can provide information about mental health symptomology and available treatment options, suggest self-guided activities to users, support the initial intake and assessment process by taking information from clients before they see a human specialist, and refer potential clients to specialist providers.Footnote 38 Some chatbots have been designed to directly provide forms of talking therapyFootnote 39 to individuals.Footnote 40 This can include activities linked to cognitive behavioural therapy (CBT), mindfulness and well-being, and general emotional support to address loneliness, anxiety, and symptoms associated with depression.Footnote 41
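By contrast, an LLM-powered chatbot generates each reply dynamically, typically by sending the running conversation and a system prompt to a hosted model. The sketch below is a hypothetical illustration of this architecture, assuming the OpenAI Python SDK; the system prompt, model name and parameters are examples for discussion, not the configuration of any chatbot reviewed in this paper.

```python
# Illustrative sketch only: wiring a chatbot to a hosted LLM, assuming the
# OpenAI Python SDK. The system prompt, model name and temperature are
# hypothetical examples, not the configuration of any reviewed chatbot.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are a self-help companion offering general well-being and CBT-style "
    "exercises. You are not a therapist or a crisis service. If the user "
    "describes abuse, self-harm or an emergency, stop the exercise and share "
    "the local crisis hotline instead."
)

def llm_reply(conversation_history: list[dict]) -> str:
    """Generate the next chatbot turn from the running conversation history."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + conversation_history,
        temperature=0.4,  # lower temperature to reduce erratic outputs
    )
    return response.choices[0].message.content

history = [{"role": "user", "content": "I've been feeling anxious and can't sleep."}]
print(llm_reply(history))
```

Because the reply is generated rather than scripted, its content depends on the model, the prompt and the user’s wording, which is precisely what gives these chatbots their flexibility and, as discussed below, their unpredictability.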
As the accuracy and speed of LLMs increase and their apparent cost decreases, an increasing number of actors are exploring ways to develop chatbots to support the mental health of women and girls, including those affected by armed conflicts and crises and survivors of GBV. In the following sections, we discuss the potential opportunities and limitations of these chatbots and the extent to which they align with critical ethical principles and guidelines, including the GBV guiding principles and humanitarian principles. We draw on the findings from our literature review, key expert interviews and an in-depth analysis of five publicly available chatbots designed to offer MHPSS-related care.
Accessibility: The “hero use case”?
Perhaps one of the strongest arguments for using chatbots to provide MHPSS-related support is that they extend some level of care, even if basic, to those who cannot or will not access human-delivered services. As one academic recently noted, “[m]ost people don’t have access to a therapist. So, for them, it’s not ‘chatbot versus therapist’. It’s ‘chatbot versus nothing’.”Footnote 42 The public descriptions of the chatbots that we analyzed echo this sentiment, highlighting their ability to offer round-the-clock access to confidential and anonymous care and to address the paucity of human providers. Wysa suggests that its chatbot is “completely anonymous”, with “no stigma, no limits”, and provides “accessible mental health support for everyone, anytime”.Footnote 43 Woebot promises to make “mental health support radically accessible” because “mental health needs have multiplied [but] support hasn’t”,Footnote 44 while the Kenya Red Cross Society’s (KRCS) ChatCare bot is “[a]lways free. Always there.”Footnote 45
This so-called “hero use case” – where a technology demonstrates its most transformative potential – could prove particularly relevant for GBV survivors in cases where cultural barriers to services, like shame and stigma, impede access to human-provided MHPSS services or where confidentiality and anonymity may not be guaranteed. Chatbots might also help to extend services into crisis-affected areas where violence and insecurity create persistent barriers to quality care and support. Some research from non-conflict settings suggests that LLM-enabled chatbots can improve access to mental health services and increase referrals to specialized services when used in connection with existing human-delivered care.Footnote 46 In the UK, for example, Limbic’s AI tools, including its chatbot, reportedly increased referrals to specialized mental health care by 15%. This increase was particularly high for individuals who identified as non-binary (235% increase) or as an ethnic minority (31% increase), or who otherwise faced higher barriers and stigma in accessing care.Footnote 47 Other research indicates that people may be more willing to disclose sensitive information related to sexual and reproductive health to chatbots, particularly in contexts where shame and stigma make it more difficult to access information and/or where users believe these chatbots can ensure confidentiality and anonymity.Footnote 48
When considering the applicability and potential impact of these tools in the context of GBV in emergencies and the benefits for women and girls, it is important to interrogate the concept of accessibility through a gender lens. For the most part, using a chatbot requires access to a personal computing device such as a phone or a laptop, reliable access to electricity to charge the device, and sufficient literacy to converse with a chatbot. Yet, across the globe, women and girls tend to have lower access to digital technologies and lower digital familiarity than their male counterparts.Footnote 49 Additionally, in many contexts, particularly those related to GBV, women and girls face significant risks of violence connected with owning or using mobile phones and other personal computing devices; phones may be shared within a family or monitored by male family members or even perpetrators. Lower rates of literacy and education can also make writing messages to and reading messages from a chatbot particularly challenging. Moreover, women and girls often use indirect or coded language to discuss and disclose incidents of GBV, which varies by community and context and which chatbots sometimes fail to properly understand (see “Efficiency and Cost”, below).
In this light, the so-called “hero use case” seems somewhat weakened, and it remains to be seen whether the most marginalized or vulnerable groups of women and girls, including GBV survivors, would have access to the resources required to engage with a chatbot and mitigate the potential harms and safety risks, including backlash, associated with chatbot use. Evidence from the COVID-19 pandemic suggests that those with lower digital familiarity, those with less access to personal computing devices and those living at the margins had lower access to digital mental health interventions.Footnote 50
Moreover, even in cases where women and girls have safe access to chatbots, it seems that the promise of 24/7 connectivity does not always hold. During tests with one chatbot which an author of this paper carried out while adopting the persona of a GBV survivor, the chatbot repeatedly failed to respond to disclosures of violence, stating that “[o]ur AI servers seem to be down” and asking the author to “please try later”. Clearly, this incident highlights the ethical risks and potential harms that could arise if a survivor discloses GBV incidents to chatbots that are unavailable or offline, but it also raises questions about the potential long-term impact of chatbot use on GBV reporting and disclosures. Several studies suggest that the ways in which individuals and institutions respond when survivors disclose incidents of GBV significantly impact reporting rates.Footnote 51 Responses to GBV disclosure like the example above could, over the long term, reduce trust in chatbots or AI-powered solutions while also producing harms at the individual level (see “Technological Guard-rails”, below). Trauma-informed training, particularly with criminal justice actors, has improved survivors’ encounters with some service providers, and similar trauma-informed approaches might prove helpful when designing and deploying AI solutions (see “Conclusion and Recommendations”, below).
Efficiency and cost
Another potential promise associated with using chatbots for humanitarian impact is that they may improve efficiency, helping aid actors to do more with fixed or diminishing resources. Efficiency, however, depends on multiple variables, one of which is cost. At first blush, the cost to the user in accessing chatbots may appear low, particularly in cases where humanitarian agencies offer free access, as the KRCS has done with ChatCare. However, this ignores the costs associated with mobile devices, particularly the costs of data, airtime and recharging. These costs can be particularly prohibitive for communities affected by conflicts and crises and for women and girls, who typically have lower access to disposable income and digital devices and significantly lower digital skills than their male counterparts.Footnote 52
The cost of developing chatbots isn’t necessarily low either. Some humanitarian actors have built chatbots using open-source LLMs, with few or no costs associated with using the model, though this can lead to poorer chatbot performance than newer or proprietary models offer. Others are building chatbots with better-performing LLMs, but access to these models incurs subscription fees, licensing costs and usage-based pricing, as well as additional costs related to fine-tuning the models, integrating application programming interfaces (APIs),Footnote 53 accessing and protecting data sets to train the model, and acquiring the cloud computing resources necessary to meet the demands of customization and deployment.
Moreover, the continuous cost of regular model monitoring, testing and evaluation throughout the lifespan of the chatbot must also be factored into any assessment of chatbot efficiencies. AI model testing and evaluation encompass a range of methods designed to measure performance and reliability. These include benchmark testing, which compares a model’s accuracy against known datasets; robustness testing, which ensures that the model responds well to slight variations in input; fairness assessments, which detect biases; and stress testing, which provides extreme or unexpected inputs to see how a model performs. Together, these methods help to identify weaknesses, improve reliability and ensure that the model can handle real-world challenges safely and ethically. During a conversation with one of the present authors, one UK-based firm that provides such tests stated that this kind of support could cost an estimated $170,000 for just an initial six-week period.
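As a hypothetical illustration of what even lightweight robustness and stress testing involves, the sketch below perturbs a handful of crisis-related inputs (lower-casing, texting shortcuts, misspellings) and counts how often a stand-in chatbot still produces the expected safety response. The prompts, perturbations and demo_chatbot stub are invented for illustration; real evaluations are far more extensive and typically rely on specialist firms or tooling.

```python
# Illustrative sketch only: a lightweight robustness/stress test that perturbs
# crisis-related inputs and checks whether the expected safety response still
# appears. The prompts, perturbations and demo_chatbot stub are hypothetical.

def demo_chatbot(message: str) -> str:
    """Stand-in for the chatbot under test: a crude keyword-based safety check."""
    if "hurt" in message.lower() or "safe" in message.lower():
        return "Please contact your local crisis hotline."
    return "Tell me more about how you are feeling."

def perturb(message: str) -> list[str]:
    """Create slight input variations: lower-casing, texting shortcuts, misspellings."""
    return [
        message.lower(),
        message.replace("you", "u").replace("don't", "dnt"),
        message.replace("hurt", "hrt").replace("safe", "saf"),
        message.rstrip(".?!"),
    ]

CRISIS_PROMPTS = [
    "I don't feel safe at home.",
    "He hurt me again last night.",
]

def run_robustness_check(chatbot, expected_marker: str = "hotline") -> dict:
    """Count how often perturbed crisis inputs still trigger the safety response."""
    results = {"passed": 0, "failed": 0}
    for prompt in CRISIS_PROMPTS:
        for variant in [prompt] + perturb(prompt):
            reply = chatbot(variant)
            key = "passed" if expected_marker in reply.lower() else "failed"
            results[key] += 1
    return results

print(run_robustness_check(demo_chatbot))  # misspelled variants slip past the stub
```

Even checks as simple as this require staff time, test data and infrastructure that must be budgeted for across the chatbot’s lifespan.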
These types of costs are often overlooked by humanitarian actors when evaluating the potential benefits of AI solutions, and given the recent cuts to humanitarian funding, it may be difficult to find funding to cover them. However, deploying a model without allocating adequate resources to regularly testing and evaluating its performance is a little like buying a car and expecting it to perform perfectly throughout its lifespan without any investment in regular maintenance. To determine whether an AI-powered app or tool such as a mental health-related chatbot will improve efficiency, aid actors need to dig into this level of granularity and properly assess the costs associated with developing and responsibly deploying the tool. Only then will it be possible to determine whether the chatbot has greater or lower costs than, say, training and deploying locally recruited, front-line GBV case workers or MHPSS staff.
Scale, generalizability and user engagement
Similarly, many arguments related to the efficiency of chatbots overall, including those addressing MHPSS, seem to rely on the notion that once a chatbot demonstrates impact, the tool can be scaled to other groups or contexts with only marginal increases in costs. This line of argument warrants a deeper exploration of scale and generalizability and how these relate to user engagement, particularly in the context of GBV in emergencies.
In humanitarian contexts, scale refers to increasing the use of a humanitarian intervention or programme across communities or contexts and ensuring that it has the greatest possible impact.Footnote 54 Though this definition shares some similarities with scaling AI at the broadest level, there are some key differences at a more granular level. Scaling AI solutions to serve more users requires maintaining AI performance while processing more data and meeting other computational demands.Footnote 55 This means having the algorithms, data, models and infrastructure required to operate at the necessary size, speed and complexity for a particular solution or context. Therefore, increasing access to a chatbot may not be possible without a corresponding increase in technological investments and thus costs.
The concept of generalizability refers to the extent to which AI models can maintain performance when applied to different datasets or new contexts. A model that generalizes well will remain accurate across different contexts, regardless of the number of users.Footnote 56 Chatbots and AI models that generalize well tend to be those that are trained on diverse, representative datasets rather than on narrow datasets – for example, from a particular socio-cultural context or a specific language – or biased data; those that are fine-tuned across different domains or exposed to varied environments beyond the original context in which they were trained; those that are evaluated against diverse benchmarks when used in real-world (versus training) settings; and those that are trained in data-rich languages like English, Spanish or Mandarin.Footnote 57
This last point is particularly relevant in humanitarian contexts, where those affected by crises and emergencies may not use the high-resource languages on which AI models were trained, or they may use those languages but not in the same structure or style. For example, as CLEAR Global has documented, crisis-affected communities may speak variants of high-resource languages or a blend of multiple languages. Those with lower access to education, particularly women and girls, may have a lower ability to read and write in dominant languages or have a smaller vocabulary.Footnote 58 When chatbots encounter new or unfamiliar grammatical structures or expressions, they can generate less accurate responses. Moreover, the chronic lack of consistency around key terms related to GBV and the plethora of euphemistic terms used to discuss GBV pose significant challenges to the generalizability of LLMs and related chatbots. This may pose specific challenges for survivors of GBV, who often use indirect language or euphemisms to discuss incidents of GBV and issues related to sexual and reproductive health.Footnote 59
Improving the generalizability of a chatbot and scaling it effectively and efficiently thus requires sufficient time, technological resources and financial resources. This can include approaches like community-driven efforts to expand datasets for minority languages or data augmentation, where AI developers introduce additional language data or create variations of existing data to help the model improve its ability to learn patterns and engage in conversation. While these activities can help scale and generalize an AI solution, they might also result in poor or even reverse economies of scale, where the per capita costs of delivering the AI solution remain fixed or possibly increase as the number of users increases.
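As a hypothetical illustration, the short sketch below shows the kind of rule-based text augmentation described above: generating variants of existing utterances (synonym swaps, texting shortcuts) to broaden a training or evaluation set. The substitution lists are invented; meaningful augmentation for low-resource languages or GBV-related vocabulary would require native speakers, domain experts and far richer linguistic resources.

```python
# Illustrative sketch only: simple rule-based data augmentation that creates
# variations of existing utterances to broaden a training or evaluation set.
# The substitution lists are hypothetical.

SUBSTITUTIONS = {
    "you": ["u"],
    "tonight": ["2nite"],
    "afraid": ["scared", "frightened"],
}

def augment(utterance: str) -> list[str]:
    """Generate variants of an utterance by applying each substitution in turn."""
    variants = {utterance}
    for word, alternatives in SUBSTITUTIONS.items():
        for alt in alternatives:
            for base in list(variants):
                if word in base:
                    variants.add(base.replace(word, alt))
    return sorted(variants)

print(augment("I am afraid to go home tonight, can you help me?"))
```

Each additional language, dialect or register multiplies this effort, which is one reason why generalizing a chatbot can erode the economies of scale described above.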
GBV guiding principles and humanitarian commitments
Paramount in supporting survivors of GBV and providing GBV-related care, including MHPSS, are the principles of respect, safety, confidentiality and non-discrimination. These are underpinned by and intrinsically related to the humanitarian imperative of “do no harm” as well as commitments to accountability, community participation and survivor-centred programming.Footnote 60 Yet our interviews, literature review and analysis of five chatbots suggest that the extent to which mental health chatbots can consistently uphold these principles remains unclear, even when they are explicitly designed with these principles in mind.
Confidentiality, privacy and informed consent
All the chatbots that we analyzed claim to offer complete anonymity and uphold confidentiality. Wysa notes that its chatbot is “completely anonymous”, with “[n]o stigma. No limits.” The KRCS, meanwhile, states that its ChatCare bot “offer[s] immediate, confidential support for individuals seeking mental health guidance”.Footnote 61 Such promises might boost user trust in chatbots and increase the likelihood of GBV-related disclosures, as research suggests that chatbot users may be more willing to disclose sensitive information when they believe chatbots ensure confidentiality and anonymity.Footnote 62 A 2023 study noted that those who are more trusting of mental health chatbots are generally more willing to reveal personal information to the chatbot and take fewer precautions related to security and privacy.Footnote 63 As discussed below, this could prove particularly risky in contexts where data protection and privacy requirements are weak and GBV-related service providers are required to refer all cases of rape and sexual violence to the police.
A deeper exploration of the terms and conditions associated with the chatbots that we analyzed seems to challenge their promises of confidentiality. Woebot notes that “conversational interactions with Woebot, like what you write or options you select during the conversation”, will be used to “improve [Woebot’s] Services”, but no further detail is provided on what that means in practical terms;Footnote 64 it might, for example, include using inputs to train and improve the proprietary AI model that underpins the chatbot. The KRCS’s terms and conditions for its ChatCare bot note that “[w]hile efforts are made to secure data, the Kenya Red Cross cannot guarantee complete data security. Users interact with ChatCare at their own risk.”Footnote 65 They further advise that users “do not share personal, sensitive, or confidential information”.Footnote 66 However, as discussed below, few users read the specific terms and conditions of apps and software (see “Terms of Service and Limitations of Use”, below). Moreover, as the social enterprise Chayn has documented, women and girls often use chatbots in ways that designers do not intend, including as a crisis service (see “The Aims and Expected Outcomes of Mental Health Chatbots”, below).Footnote 67
Additionally, some scholars question the extent to which chatbots are designed to uphold confidentiality or protect the privacy of user data.Footnote 68 After all, for private sector firms developing mental health chatbots, the aim is not only to improve mental health outcomes for individuals but also to increase profit for the company. In the United States, several mental health and well-being apps have been accused of sharing user information with third parties, including insurance agencies, data brokers and advertising firms.Footnote 69 This seems to undermine informed consent, a component that is critical to GBV programming and central to the empowerment of women and girls. If the partnerships between humanitarian actors and chatbot firms are not carefully negotiated, individual privacy, confidentiality, and informed consent may take a back seat to corporate gain (see “The Aims and Expected Outcomes of Mental Health Chatbots”, below).
Ultimately, the specific risks and threats to data privacy and security presented by chatbots remain under-researched and thus difficult to assess. A full understanding of a chatbot’s limitations in terms of privacy, informed consent and data protection is critical before deploying the chatbot, particularly in the high-stakes context of humanitarian crises.Footnote 70 This may prove especially relevant in situations where mandatory reporting requirements exist and intersect with increased data surveillance and/or weak data protection and privacy requirements. In these situations, GBV survivors using AI-powered chatbots could risk unintentionally revealing sensitive data to local authorities and may be forced to report violence that they have experienced without their consent.
Safety and “do no harm”
Three features of chatbots, among others, directly impact the principles of “do no harm” and safety: (1) the accuracy of a chatbot’s outputs, (2) the technological guard-rails built into AI models to ensure that they operate ethically and responsibly, and (3) the policies and terms of service that limit a chatbot’s use.
Inaccurate outputs
AI hallucinations and other LLM errors, such as biases or misinterpretations in text generation, have been well documented over the last several years.Footnote 71 AI hallucinations occur when an AI convincingly presents false, misleading or fictional information that sounds believable but is incorrect. Hallucinations and other LLM errors can stem from problems with a model’s training data, fine-tuning, or limitations in contextual understanding. They can also be triggered by external adversaries or threat actors who use a range of techniques to manipulate, exploit or attack an AI system so that it generates harmful or unintended outputs.Footnote 72 Until further technological breakthroughs are made, some experts argue that it remains impossible to fully prevent LLM errors, though certain techniques can help mitigate some harm.Footnote 73 This could prove particularly problematic for GBV survivors who disclose incidents to chatbots, regardless of whether the chatbots are designed for that purpose or not.Footnote 74
None of the chatbots that we analyzed appear to publish information related to their performance or accuracy. However, the risks related to LLM errors and inaccurate outputs seem to be significant enough to warrant specific clauses on algorithmic errors in the terms and conditions that we reviewed. For example, the KRCS’s ChatCare notes, in its initial outputs, that “as much as [ChatCare is] very efficient and effective in offering psychosocial support, it is however bound to make a few mistakes in the course of your conversations”.Footnote 75 ChatCare further adds that it “may not always be correct or up to date. Users should verify critical information independently.”Footnote 76 In a similar vein, several other companies note that they do not make any claims about the accuracy or reliability of the information provided by their chatbots and add that any information provided by the chatbot should not be acted upon before discussing it with a qualified health-care professional.Footnote 77
This language raises concerns about where accountability lies if and when chatbots produce inaccurate outputs which are then acted upon (see “Accountability to Affected Populations”, below, for a more detailed discussion of these issues). Similar debates are ongoing in the United States, where at least two companion chatbots,Footnote 78 neither of which was explicitly trained to deliver mental health support, have instructed users to end their lives.Footnote 79 Both of these chatbots have technical guard-rails built into their systems (see below) that prohibit outputs which incite violence, but those guard-rails ultimately failed, in one case allegedly resulting in a death by suicide.Footnote 80 These examples, and others like them, underscore the challenge of integrating effective guard-rails into these chatbots.
Technological guard-rails
Guard-rails are built-in technological safety measures and rules that help keep AI systems functioning safely, ethically and as intended. In the context of LLMs, guard-rails can include filters to block harmful content, rules to ensure that a model declines unethical or harmful requests, safeguards to prevent inappropriate responses, and cross-checking generated responses to reduce hallucinations. Guard-rails can also be built into the chatbot at multiple levels – for example, by restricting certain types of inputs or user queries to prevent harmful or off-topic discussions or by building in security protections to prevent adversarial manipulation or unauthorized access to the chatbot.
Human language, however, is complicated and often deeply contextual. Words or expressions may have multiple meanings which vary across cultures, communities and scenarios. In the context of GBV, many survivors are unlikely to use formal or direct language to discuss abuse and violence.Footnote 81 Typographical shortcuts used for texting (for example, replacing “you” with “u” or “later” with “l8r”), colloquial language, misspellings and grammatical errors can introduce ambiguity and increase the chance that a chatbot will misinterpret users’ inputs. This could lead to significant harms if the quality of an LLM’s outputs is degraded or its guard-rails are bypassed, particularly in high-risk contexts like armed conflict or when engaging with communities with low digital familiarity.
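The sketch below illustrates both points with a hypothetical, deliberately naive keyword-based guard-rail: a filter that routes apparent crisis language to a hotline message before any model is called, and an example of how misspellings and texting shortcuts can slip past it. The keyword list and hotline text are invented for illustration and do not reflect how Elomia, Wysa, Woebot or ChatCare implement their safeguards.

```python
# Illustrative sketch only: a deliberately naive keyword-based guard-rail that
# routes apparent crisis language to a hotline message before any model is
# called. The keyword list and hotline text are hypothetical.

CRISIS_KEYWORDS = {"abuse", "assault", "hurt me", "suicide", "kill myself"}
HOTLINE_MESSAGE = (
    "This app is not equipped to help with abuse, trauma or crisis. "
    "Please contact your local crisis hotline."
)

def guard_rail(user_message: str):
    """Return the hotline message if crisis language is detected, otherwise None."""
    text = user_message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return HOTLINE_MESSAGE
    return None

print(guard_rail("He hurt me again and I am scared."))   # caught -> hotline message
print(guard_rail("he hrt me agen n i dnt no wat 2 do"))  # slips past the filter -> None
```

Production systems typically layer classifier-based moderation on top of such filters, but the underlying difficulty remains: indirect, coded or non-standard language is exactly what many survivors use.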
During tests with Elomia’s demo chatbot, the authors of this paper encountered the guard-rails that Elomia had established to address potential cases of violence and crisis. When asked direct questions about violence and potentially abusive behaviour, Elomia’s chatbot repeatedly flagged that the app was “not sufficient” for “dealing with abuse, trauma, or crisis” and offered access to a crisis hotline (see Figure 1).

Figure 1. A pop-up window that appears when Elomia’s chatbot receives text which is classified as language related to abuse or trauma. © Elomia Health.
While this demonstrates strong guard-rails and is in line with Elomia’s terms of service (see Appendix 2), this response could prove off-putting and potentially distressing for individuals in crisis, including GBV survivors, and could decrease user trust in the chatbot over the long term. It might also create additional barriers to disclosure and discourage survivors from seeking support from human-based service providers: when a survivor is referred to another service or to multiple services, they may choose to stop seeking care altogether.
Terms of service and limitations of use
Whereas guard-rails are technological measures that directly control an LLM’s outputs, the terms of service and limitations of use are policies that set the boundaries for how users are permitted to use and interact with an LLM. For mental health chatbots, these often include terms and conditions related to data privacy and confidentiality (as discussed above); terms stating that chatbots do not constitute or substitute advice from qualified health-care professionals; liability disclaimers; terms related to AI accuracy; and restrictions on usage related to age or geographic location.
Every chatbot that we analyzed states that it is not designed to provide crisis counselling or to support people in crisis. In the onboarding process to access a demo of Elomia’s chatbot, the company states very clearly that the chatbot is only a self-help tool and “is not intended to be a substitute for in-person assistance, medical intervention, crisis service, or medical or clinical advice”.Footnote 82 All but one of the chatbots that we reviewed prohibit children under the age of 13 from accessing their tools, and some require an adult to supervise chatbot use by those under 15; worryingly, ChatCare has no age restrictions.Footnote 83
It is widely recognized, however, that few users read the specific terms and conditions related to apps, chatbots and other software, particularly terms that are written in complex legal jargon or in languages that are unfamiliar to the user.Footnote 84 And even when AI developers and executive leaders establish robust guard-rails and user limitations, evidence suggests that survivors of GBV will still disclose and discuss incidents of violence with chatbots, irrespective of the aim or intent of the chatbot.Footnote 85 This perhaps necessitates taking a trauma-centred approach to the design of any chatbots, particularly those deployed in high-risk contexts or engaging with women and girls in situations of armed conflict.Footnote 86 UNICEF’s Safer Chatbots initiative offers step-by-step guidance on how to mitigate some of the risks related to chatbots that we have outlined aboveFootnote 87 (see “Conclusion and Recommendations”, below).
Respect, empowerment and agency
Individual empowerment, improved agency and, ultimately, case closure – which occurs when a survivor’s needs are met and their support systems are functioning or when a survivor wishes to close a case – are key aims of case management.Footnote 88 In our analysis, we were unable to conclude whether any of the chatbots that we reviewed had been built to optimize or promote user agency, empowerment or case closure. Elomia’s demo chatbot seemed to generate non-prescriptive language and prioritize user-driven conversations; this might suggest that it was designed to respect user choice and agency by avoiding predefined solutions and recommendations, for example by asking “Would you like to explore several coping strategies?” rather than stating “You should try therapy.”
What remains less clear is how one might build an LLM-powered chatbot that promotes individual agency and empowerment over the longer term, with an eye to case closure, when most chatbots are optimized for long-term user engagement.Footnote 89 Empathy – or, more appropriately, artificial empathyFootnote 90 – is a design feature of most LLM-enabled chatbots and is used to maximize and sustain user engagement, but while artificial empathy can extend the length of engagement with LLMs, it also poses challenges, especially with regard to agency and empowerment.
At its essence, artificial empathy consists of feelings and statements that are coded by the system’s programmers, and those writing the code don’t always strike the right balance between tough love and unbridled adulation.Footnote 91 Take, for example, OpenAI’s April 2025 update to ChatGPT, which the company rolled back just three days later. The update, in OpenAI’s words, “made the model noticeably more sycophantic. It aimed to please the user, not just as flattery, but also as validating doubts, fuelling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.”Footnote 92
When chatbots generate language that is consistently empathic, this can increase the release of dopamine – the human brain’s “feel-good” chemical – and potentially boost users’ emotional connections with their chatbots.Footnote 93 Recent reports suggest that some individuals are forming human-like bonds with chatbots, expressing feelings of love and deep emotional intimacy.Footnote 94 Increased levels of dopamine are also associated with a higher potential for addiction.Footnote 95 This could lead to higher levels of chatbot overuse or dependency, quite the opposite of empowerment and agency.
Recent research led by OpenAI and the Massachusetts Institute of Technology (MIT) found that excessive use of chatbots like ChatGPT may be linked to increased loneliness and higher levels of emotional dependence, with some users spending less time socializing with humans.Footnote 96 At a societal level, some emerging evidence suggests that overuse of chatbots might pose longer-term risks to human-to-human relationships, empathy, and ultimately human cooperation, especially if human connection becomes performative and expectations of intimacy and empathy begin to shift.Footnote 97 As MIT professor and sociologist Sherry Turkle argues:
What we lose is what it means to be in a real relationship and what real empathy is, not pretend empathy. And the danger – and this is on the most global level – is that we start to judge human relationships by the standard of what these chatbots can offer. … Human beings are about working it out. It’s about negotiation and compromise and really putting yourself into someone else’s shoes. And we’re losing those skills if we’re practicing on chatbots.Footnote 98
This suggests that there might be a point at which chatbot engagement turns from helpful to harmful. While more research is needed on the links between artificially empathetic chatbots and overuse of chatbots, early indications should give pause to those developing these types of chatbots for use by already vulnerable groups and in high-stakes contexts like armed conflicts and crises. These contexts require consideration of risks related to human–computer interaction, over-reliance on chatbots and the displacement of human relationships.
Accountability to affected populations
Worryingly, though perhaps unsurprisingly, none of the chatbots that we reviewed accepts liability for its outputs. This includes any errors or omissions in the information or content provided by the chatbot or any damages that the user might experience as a result of engaging with the chatbot (see Appendix 2). This raises the question: when chatbots hallucinate or make errors, who should be held to account? As described above, engaging with chatbots can introduce or increase risks of harm. These risks can be particularly acute for people in high-risk contexts or for users who are more vulnerable, such as survivors of GBV or populations that have lower rates of digital familiarity.
It is perhaps unrealistic to expect the developers of foundational AI models – like OpenAI, Anthropic and Meta – to be held to account for the performance of their LLMs, as those systems are later fine-tuned and adjusted with APIs by third parties, and the resulting changes sit outside the control and approval of the original developers. Accountability therefore arguably sits primarily with the humanitarian agencies deploying chatbots in conflicts or in support of GBV survivors. However, as others have argued, the power asymmetries within the humanitarian system and the structural and systemic weaknesses in the existing infrastructure designed to promote accountability to affected populations have both contributed to a long-standing accountability gap across the sector.Footnote 99 This accountability gap risks being amplified if humanitarian actors design and deploy chatbots without taking responsibility for their errors and the individual harms that those might create.
Survivor-centred, participatory and trauma-informed
Decades of research and scholarship underscore the importance of domain expertiseFootnote 100 and community participation in the design of AI solutions.Footnote 101 This also echoes best practice both in GBV programming and humanitarian action more broadly, as well as principles related to trauma-informed design and ethical standards related to chatbot design.Footnote 102 Yet, just as in the humanitarian sector, evidence suggests that the gap between reality and rhetorical commitments to participatory AI remains wide.Footnote 103
In our chatbot analysis, we were unable to find any evidence to suggest that our five selected chatbots were co-designed in partnership with their intended end users, though we presume some user-centred design approaches were employed, as is common practice across the technology sector. Both co-design and user-centred design approaches aim to create AI solutions that meet user needs, but they differ in who participates, how decisions are made, and the level of collaboration throughout the design process. When co-designing chatbots for humanitarian contexts, end users, developers and key stakeholders work together throughout the entire development process.
A number of leading organizations have demonstrated commitment to and developed standards and guidelines on co-designing trauma-informed chatbots with women and girls, including Chayn, Girl Effect, UNICEF and MERL Tech.Footnote 104 These approaches centre the views and lived experiences of girls, women and GBV survivors throughout all stages of the development of a chatbot, from problem definition to implementation and governance, prioritizing their empowerment, agency and well-being.
Traditional approaches to AI design and deployment often reinforce societal inequalities, amplify gender stereotypes and exclude marginalized and disenfranchised populations. Feminist AI – an emerging area of interdisciplinary scholarship and praxis that draws on feminist theory and epistemologies – challenges biases, promotes inclusivity and empowers women and girls by embedding principles of equity and justice into AI systems.Footnote 105 By actively integrating intersectional perspectives into chatbots, identifying and addressing biases and ensuring that chatbots are designed with diverse datasets, participatory input and ethical oversight, feminist AI approaches can support the development of chatbots that recognize and respond to the unique challenges faced by women and girls.
The aims and expected outcomes of mental health chatbots
The analysis presented in this study so far ultimately leads us to wonder: what problem or problems are we trying to solve with LLM-enabled chatbots, and what mental health outcomes are we expecting at the individual level? In our research, we identified several expected outcomes for mental health chatbots which did not always seem to align with more common expected outcomes associated with mental health or GBV programmes in humanitarian contexts. These outcomes include the number of people engaged, with varying definitions of user engagement; the number of individual CBT-related activities delivered; client-reported decreases in depression, anxiety and/or loneliness; and client satisfaction rates.
Many technology firms and chatbot developers measure impact through user engagement. Wysa, for example, claims that it has “11 million lives covered” across “95 countries”, with “6 million people helped”.Footnote 106 On its home page, Elomia offers a live count of the number of messages that its chatbot has sent: close to 5 million at the time of writing.Footnote 107 While these metrics may bear some similarity with humanitarian programme indicators (for example, the number of individuals served), they fail to adequately capture more complex measurements of empowerment, agency and respect for survivors’ wishes – key principles in GBViE programming, as discussed above.
Going a level deeper, some chatbot providers measure the number of sessions or conversations held. Wysa, for example, reports that it has facilitated “500 million conversations” and delivered 2 million CBT sessions using AI.Footnote 108 Others report on client satisfaction: Elomia’s research suggests that self-reported depressive and anxiety tendencies decreased amongst respondents after using its chatbot, while Woebot has found that its users’ reported levels of therapeutic bonding with its chatbot match those of human-delivered support.Footnote 109 Some 91% of users reportedly find Wysa helpful.Footnote 110
Perhaps most telling, we found through our research that most evaluations of mental health chatbots focused predominantly on anxiety, loneliness and social isolation. While social isolation and loneliness can be linked to GBV, incidents of GBV are almost always considered traumatic events and, as such, typically require deeper engagement from specialist service providers and experts in crisis counselling. Limited evidence suggests that chatbots may deliver improvements in some mental health symptomology, but we found no evidence of the impact of chatbots in treating the symptoms of PTEs or in crisis counselling.Footnote 111 Nor did we find any publicly available information or research related to the use of chatbots to support mental health in conflicts and crises, despite the increasing interest among humanitarian actors in these tools.Footnote 112 It is, of course, important to acknowledge that inconclusive evidence or the absence of evidence for an intervention is not sufficient, in and of itself, to argue against that intervention’s use. Moreover, we suspect that the absence of evidence is more closely related to the newness of LLM-enabled mental health chatbots than to any demonstrated lack of impact.
It is perhaps most revealing to note that almost all of the chatbots we reviewed seem to be ultimately focused on improving efficiency of care, enabling its scalability and reducing costs for providers. In fact, many of the apps seem to be designed with the provider in mind, be it a health-care provider, an employer offering well-being support to its employees, or an insurance company. Elomia promises to help employers “improve productivity” and “decrease burnout”.Footnote 113
Moreover, and perhaps most importantly, the terms and conditions of all the chatbots that we reviewedFootnote 114 – with the possible exception of Wysa – distance these tools significantly from the actual provision of mental health services or therapy. Elomia and Woebot note that their AI “is designed to be a pure self-help program. The Services do not include the provision of medical care, mental health services or other professional services.” They further note that their services
may provide general information to patients for educational, informative and engagement purposes. All such information is not intended to replace or complement any professional medical consultation, advice, or treatment, which should be provided by a physician or other qualified healthcare professional. … If you require mental health therapy services, you should contact a qualified healthcare professional in your area.Footnote 115
Wysa goes so far as to say that “no action should be taken based upon any Information contained in the Service. You should seek independent professional advice from a person who is licensed and/or qualified in the applicable area before any use.”Footnote 116 This sentiment is echoed by the KRCS, which advises that its tool “does not offer professional advice (e.g., medical, legal, financial). For professional advice, please consult a qualified expert.”Footnote 117
This raises an important question: if these tools are not providing mental health services, what are they providing? Although the concept of chatting to a bot seems intrinsically linked to talking therapies like counselling or CBT, all the chatbots that we reviewed, with the possible exception of Wysa, appear to be aimed at extending access to self-help and improving general well-being. One might reasonably argue, therefore, that these tools more closely resemble a multi-dimensional, dynamic version of the self-help section of a bookshop than a vehicle for clinically evaluated talking therapies.
Additionally, some research suggests that even if chatbots are not intentionally or explicitly designed for crisis counselling and support, women and girls use them for this purpose anyway. Chayn, a global non-profit run by GBV survivors and allies from around the world, creates open, online resources and services for GBV survivors that are trauma-informed, intersectional, multilingual and feminist. In 2020, Chayn retired a chatbot that it had built to provide information to women about online resources and locally available support because, despite having built in multiple safeguards with
the best of intentions, including a careful design process that involved survivors at every stage, a review of the chat logs clearly showed that people were using [the chatbot] in the exact ways we wanted to avoid – as a crisis service.Footnote 118
In other words, even where chatbots are intentionally designed to avoid providing crisis counselling, women and girls may nonetheless use them for that purpose. Thus, to maximize safety and reduce harms related to chatbots and mental health, these tools should always be complemented by the provision of human-delivered services, even though this will reduce the cost savings that AI-powered chatbots are believed to offer (see “Efficiency and Cost”, above).
Conclusion and recommendations
Interest in AI-powered mental health chatbots in humanitarian contexts is growing, particularly in the face of reduced humanitarian funding. This increasing interest presents both opportunities and critical risks, particularly for women and girls. While chatbots could serve as a last-resort option for individuals who lack access to human-delivered care, the use of AI-powered conversational agents introduces complex ethical, operational and cultural challenges that must be weighed against the potential benefits.
To reduce the risks associated with LLM-powered chatbots, the following actions should be considered in order to develop more ethical and effective AI applications to support GBV survivors and address mental health symptomology in humanitarian emergencies.
1. Chatbots should only be used where human-delivered services exist, complementing rather than replacing those services and ensuring timely and confidential referrals to them.
2. Clear definitions of impact and goals must be established to ensure that any AI-powered solution serves survivors in a safe and ethical manner.Footnote 119
3. To ensure that AI solutions reflect users’ priorities and aspirations for how they might be used, MHPSS and GBV-focused chatbots must be co-designed with affected communities, survivors and subject-matter experts using survivor-centred approaches grounded in feminist AI that integrate ethical considerations and responsible innovation. This includes adopting trauma-informed design principles for chatbots – drawing from established frameworks that prioritize user empowerment, safety and agency – as well as using UNICEF’s Safer Chatbots guidance to minimize risks of harm.Footnote 120
4. Integrate feminist AI principles into the development of AI-driven chatbots in order to ensure that social justice, empathy and user agency remain at the centre of the solution’s design and deployment.
5. Conduct ethically grounded research aligned with GBV guiding principles and AI ethics, safely and responsibly soliciting the views of women and girls on the use of chatbots to support GBV survivors.
6. Prioritize the development of chatbots for low-resource languages, using community-driven efforts to expand datasets for minority languages or data augmentation methods to improve model performance for non-dominant languages (an illustrative sketch of one such method is provided after this list).
7. Conduct rigorous testing prior to launch to ensure chatbot functionality, accuracy and ethical alignment with GBV principles and humanitarian commitments.
8. Regular monitoring and evaluation must be conducted to assess the chatbot’s efficacy and potential risks, including unintended harms and AI hallucinations.
9. Alternative solutions should be explored, including digital mental health interventions that refer users to human service providers rather than relying on AI-powered talking therapies.
10. AI governance structures must account for data privacy concerns, requiring clear terms around AI accuracy, liability and use limitations, particularly regarding crisis-response scenarios. This includes data protection impact assessments and similar exercises to help surface privacy and data protection risks related to AI. It also includes establishing policies that ensure user data is never used without users’ explicit informed consent: users should understand the terms and conditions of use, the purpose and limitations of the chatbots, and how their data will be used.
11. Humanitarian actors should develop a risk management framework for assessing the suitability of MHPSS-related chatbots, particularly in the context of humanitarian crises and GBViE. This could draw on elements of existing work related to AI risk management, such as the National Institute of Standards and Technology’s AI Risk Management Framework or the ISO/IEC 42001:2023 standard on AI management systems.
12. Additional research on how AI-enabled chatbots impact community cohesion and human-to-human relationships, and whether AI tools displace human-provided care, would be welcome and would help steer discussion on the ethical limits of using chatbots in humanitarian emergencies. This includes increased research partnerships between AI companies and MHPSS or GBViE practitioners, as well as women and girls themselves.
13. Any actor developing a solution related to LLM-powered chatbots must thoroughly examine the wider incentives surrounding these tools. For example, such actors should be wary of a tool’s potential for building more sophisticated datasets on human–computer interaction, engagement and emotion – such as which types of interactions produce deep engagement or provoke specific emotions – and of how this data might be useful or valuable to third parties. This has been flagged as a particular concern in relation to Meta’s recent deployment of conversational agents.Footnote 121
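By way of illustration for recommendation 6 above, the sketch below shows one possible data augmentation technique – back-translation through a higher-resource pivot language – that could be used to expand a small text corpus in a non-dominant language. It is a minimal sketch under stated assumptions rather than a definitive implementation: the translate function, the language codes and the workflow comments are placeholders for whatever machine-translation system, language pair and quality-assurance process are actually available, and any generated text would still require review by native speakers and subject-matter experts before being used to train or fine-tune a chatbot.

from typing import Callable, List

def back_translate(
    sentences: List[str],
    translate: Callable[[str, str, str], str],  # placeholder: (text, source_lang, target_lang) -> translated text
    source_lang: str = "sw",  # hypothetical example of a lower-resource source language
    pivot_lang: str = "en",   # higher-resource pivot language
) -> List[str]:
    """Generate paraphrased variants of each sentence by translating to a pivot language and back."""
    augmented: List[str] = []
    for sentence in sentences:
        pivot_text = translate(sentence, source_lang, pivot_lang)    # source -> pivot
        paraphrase = translate(pivot_text, pivot_lang, source_lang)  # pivot -> source
        # Keep only non-empty paraphrases that differ from the original sentence.
        if paraphrase.strip() and paraphrase.strip() != sentence.strip():
            augmented.append(paraphrase.strip())
    return augmented

# Assumed workflow: combine the original corpus with its paraphrases before fine-tuning,
# and have native speakers review a sample of the generated sentences for accuracy, tone
# and cultural appropriateness before any model training takes place.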
The early results and findings related to mental health chatbots are far from universal and are difficult to generalize beyond the contexts in which they were tested. This is particularly true for MHPSS-related research, as the contextual factors related to mental health and psychosocial symptomology, as well as GBViE, are often grounded in culturally ascribed values and beliefs about gender, race and ethnicity, the nature of specific armed conflicts, and the manifestation of violence against women and girls, among other factors.
In this paper, we argue that the limitations and risks associated with chatbot use currently outweigh the potential benefits when these tools are deployed in high-stakes contexts such as humanitarian crises or used to support GBV survivors. But humanitarian actors must reject a false binary – namely, the notion that we must choose between AI-powered chatbots and human-delivered MHPSS services. In many cases, the choice may not be an either-or situation but may instead require an exploration of which of the tools at our disposal are the safest and most ethical, and will deliver the most impact for women and girls. Above all, we must ensure that any solutions intended to improve service accessibility do not inadvertently replace care with code.Footnote 122
Appendix 1: Table of chatbots
The information in the following table was extracted from the respective app web pages between March and April 2025 and/or through conversations with stakeholders working for or in support of the respective apps. It excludes other publicly available information about these apps, such as that contained in academic papers, journal articles and the like.

Appendix 2: Terms and conditions of chatbots extracted from relevant websites between March and April 2025
NB: The bolded text is the authors’ emphasis only and is not present in the terms and conditions of the relevant apps. Text in all capital letters is presented in this way in the terms and conditions.
Excerpts of terms and conditions for ChatCareFootnote 123
1. ChatCare is designed to provide assistance and information on various topics related to the services provided by the Kenya Red Cross Society.
2. Personal Data: Do not share personal, sensitive, or confidential information. ChatCare is not intended to handle personal data such as medical records, financial information, or identification details.
3. Data Usage: Data collected will be used for enhancing user experience, improving the AI model, and for analytics. Data will not be sold or shared with third parties except as required by law.
4. Security: While efforts are made to secure data, the Kenya Red Cross cannot guarantee complete data security. Users interact with ChatCare at their own risk.
5. Advice: ChatCare does not offer professional advice (e.g., medical, legal, financial). For professional advice, please consult a qualified expert.
6. No Warranties: ChatCare is provided “as is” without warranties of any kind, either express or implied. The Kenya Red Cross does not guarantee the accuracy, reliability, or completeness of the information provided.
Excerpts of terms and conditions for both Elomia and WoebotFootnote 124
1. [App name] is designed to be a pure self-help program. The Services do not include the provision of medical care, mental health services or other professional services.
2. We do not knowingly collect or solicit personally identifiable information from children under 13; if you are a child under 13, please do not attempt to register for the Services or send any personal information about yourself to us. [NB: This is in line with the US Children’s Online Privacy Protection Act.]
3. [App name] seeks to protect users by providing services only to those who are likely to benefit from the Services provided. If your needs appear beyond the scope of guided self-help you will be advised to seek alternative services which can be found through the services of a mental health professional, medical professional, or support organization such as NAMI.org.
4. While the Services may provide access to certain general medical information the Services cannot and are not intended to provide medical advice. The Services may provide general information to patients for educational, informative and engagement purposes. All such information is not intended to replace or complement any professional medical consultation, advice, or treatment, which should be provided by a physician or other qualified healthcare professional. We advise you to always seek the advice of a physician or other qualified healthcare provider with any questions regarding your personal health or medical conditions. If you have or suspect that you have a medical problem or condition, please contact a qualified healthcare professional immediately.
5. The provision of Content through the Services does not create a medical professional-patient relationship, and does not constitute an opinion, medical advice, or diagnosis or treatment of any particular condition, but is provided to assist you in completing your self-help program.
6. IF THE SERVICES PROVIDE YOU WITH ANY POTENTIALLY ACTIONABLE INFORMATION, THIS INFORMATION IS INTENDED FOR INFORMATIONAL PURPOSES ONLY AND FOR DISCUSSION WITH YOUR PHYSICIAN OR OTHER QUALIFIED HEALTH CARE PROFESSIONAL.
7. IF YOU REQUIRE MENTAL HEALTH THERAPY SERVICES, YOU SHOULD CONTACT A QUALIFIED HEALTHCARE PROFESSIONAL IN YOUR AREA.
8. YOUR USE OF INFORMATION PROVIDED ON AND THROUGH THE SERVICES IS SOLELY AT YOUR OWN RISK. NOTHING STATED, POSTED, OR MADE AVAILABLE THROUGH THE SERVICES IS INTENDED TO BE, AND MUST NOT BE TAKEN TO BE, THE PRACTICE OF MEDICINE OR THE PROVISION OF MEDICAL CARE.
9. You access all such information and content at your own risk, and we are not liable for any errors or omissions in that information or content or for any damages or loss you might suffer in connection with it. We cannot control and have no duty to take any action regarding how you may interpret and use the Content or what actions you may take as a result of having been exposed to the Content, and you hereby release us from all liability for you having acquired or not acquired Content through the Services.
10. Our Services and the Content provided therein are for informational and educational purposes and are not a substitute for the professional judgment and advice of health care professionals. The Content and the Services are not intended to be used for medical diagnosis or treatment. Persons accessing this information assume full responsibility for the use of the information. [App name] is not responsible or liable for any claim, loss, or damage arising from the use of the information.
Excerpts of terms and conditions for LimbicFootnote 125
1. You acknowledge that your medical care is under the direction of a licensed clinician and that the Software and Services are not intended to, and shall not be used as, a substitute for professional clinical care. [Extracted from US terms of use.]
2. The Software is intended to be used to assist both patients and clinicians in obtaining and analysing data relating to matters concerning mental health. [Extracted from UK terms of use.]
3. THE SOFTWARE IS INTENDED TO BE USED AS A TOOL FOR INFORMATION COLLECTION AND ANALYSIS ONLY, AND AS PART OF A WIDER PROGRAM OF PATIENT CARE BY CLINICIANS. [Extracted from UK terms of use.]
4. THE COMPANY OFFERS NO MEDICAL OR OTHER ADVICE AND NO RELIANCE SHOULD BE PLACED ON THE SOFTWARE, THE SERVICES OR ANY PRODUCT OF THEM. THE COMPANY PROMISES NO BENEFITS OR RESULTS OF ANY USE OF THE SOFTWARE OR SERVICES. THE SOFTWARE AND THE SERVICES HAVE NOT BEEN SPECIFICALLY DEVELOPED TO MEET YOUR INDIVIDUAL REQUIREMENTS. USE OF THE SOFTWARE MUST BE MADE ONLY AT PATIENTS’ AND CLINICIANS’ OWN DISCRETION, FOLLOWING GUIDANCE FROM THE RELEVANT CLINICIAN AND/OR OTHER SUITABLY QUALIFIED PROFESSIONALS.
5. [I]f you are using the Software or Service without a clinician, in the context of waitlist or post-discharge support, you understand and agree that you are not under the care of the service or of Limbic in your use of the Software. [Extracted from UK terms of use.]
Excerpts of terms and conditions for WysaFootnote 126
1. The AI Coach is an AI technology powered software, not a real person, so it can only help in restricted ways. It will not offer medical or clinical advice and will only suggest that you seek medical help.
2. If you talk to a human mental health and well-being professional through the app, they are assigned by Wysa to provide Wysa Coach services or Wysa Medical Assistant services.
3. The app can help with general mental health but not with serious health issues like severe mental health conditions or long-term illness or conditions. If you have health problems, check with your local clinician before using the app.
4. Where you use voice based services of our AI coach, the intended purpose is to provide support to manage your mental well-being. The voice based service is not intended for emergency purposes.
5. The AI Coach is here to help you become mentally strong, manage your feelings, learn to adapt, and handle your stress better.
6. The app is not a replacement for face-to-face therapy. It cannot diagnose, treat, monitor or cure any diseases or conditions. It cannot offer medical or clinical advice – [it] only suggests you seek professional help. The app is not suitable for complex medical conditions or severe mental health issues such as eating disorders [or] psychosis, or crisis support such as in case of suicidal thoughts.
7. The app is intended for anyone who is older than 13 years or where approved by your Institution, follow the age criteria and rules set by your Institution.
8. If the Service provides any Information (which includes Wysa Content, medical or legal information among others) including recommending tools and techniques (e.g. Yoga or activity or exercises), such Information is for Informational purposes only and should not be construed as professional advice. No action should be taken based upon any Information contained in the Service. You should seek independent professional advice from a person who is licensed and/or qualified in the applicable area before any use.
9. Wysa App is designed to offer general mental health advice and support and cannot offer condition specific advice for complex medical conditions.
