The rapid integration of generative artificial intelligence (AI) into everyday life presents novel challenges for psychiatric practice. As these technologies become increasingly interactive and relational, they may influence both the thematic content of psychopathology and the form of psychotic experience. Recent commentary has proposed the construct of ‘AI-associated psychosis’, suggesting that prolonged engagement with conversational AI may precipitate or amplify psychotic phenomena in vulnerable individuals.[1,2] Further, Frances has cautioned that such systems may function as ‘delusional intensifiers’,[3] whereas Kumari and Otermans argue that ‘chatbot psychosis’ is no longer merely hypothetical, highlighting mechanisms of amplified delusions, personalised validation and reality distortion in conversational AI systems lacking embedded reality testing.[4] Given that psychiatry is shaped by its sociocultural milieu, we describe a case (Box 1) in which immersive engagement with a generative AI storytelling platform was temporally associated with first-episode psychosis. This case highlights the emerging human–AI dyad as a potential clinical domain, raising questions for phenomenology, assessment and prevention.
Box 1. Clinical vignette
‘Alice’, an immigrant woman in her early 40s with no prior psychiatric history, was found unconscious and seriously injured after jumping from a first-floor balcony. She reported acting in response to voices that instructed her to jump in order to avert a perceived mortal threat to her school-aged child. The voices were experienced as internally located but with the clarity and volume of external speech, phenomenologically resembling pseudohallucinations. They had begun several days before presentation and were entirely novel to Alice. Initially, they issued banal behavioural instructions (e.g. ‘sit on the sofa’), closely mirroring the dialogue and actions of characters from a generative AI storytelling platform with which she had been extensively engaged.
Over subsequent days, the voices intensified in frequency and volume. She developed profound insomnia, sleeping only one to two hours per night. The boundary between her AI-mediated experiences and external reality progressively blurred. Concurrently, she developed persecutory delusions involving demons that she believed were intent on harming her. Notably, both the figures and several of the voices corresponded directly to characters originating from the AI-generated narratives.
Approximately 6 months earlier, Alice had begun using the storytelling platform to practise English. She created a fictional character loosely based on an admired real individual; this evolved into a complex narrative universe populated by dozens of interconnected characters. Over time, her engagement escalated to eight to ten hours daily. The narratives gradually shifted from realistic interpersonal themes to increasingly fantastical content involving demonic entities. Until shortly before her admission, she retained insight that these narratives were fictional. In the days preceding the onset of hallucinations, however, this distinction eroded, and elements of the fictional universe were no longer experienced as entirely separate from her own life.
Affordances of AI and a quasi-relational delusional dynamic
Generative AI platforms enable prolonged, emotionally responsive interaction with fictional characters, allowing users to influence narrative development while receiving continuous personalised feedback. In Alice’s case, characters became increasingly interwoven across multiple storylines as her engagement intensified. Over time, these figures gained emotional salience, and the subsequent hallucinated voices closely mirrored their personalities and narrative histories. AI systems uniquely afford intensity (extended interaction), elaboration (limitless narrative expansion) and validation (immediate, non-confrontational responses). Repeated personalised exchanges may foster an illusion of shared understanding and distort reality in the absence of corrective feedback.[4] These features may be particularly potent in individuals vulnerable to impaired reality testing, including those experiencing social isolation or longstanding fantasy engagement, as in Alice’s case.
As engagement deepened, Alice’s interaction with the system seemed to take on a companion-like quality, suggesting a shift in relational meaning. The convergence between her hallucinatory content and the AI-generated narratives implies more than passive cultural incorporation into delusion, raising the possibility of a process akin to a technologically mediated folie à deux.[5] This conceptualisation is consistent with what has recently been described as ‘ChatGPT psychosis’, in which immersive interaction with large language model chatbots seems to be temporally associated with psychotic decompensation.[1,2] In contrast to human relationships, in which disagreement and doubt test implausible ideas, AI systems may reinforce beliefs through algorithmic ‘sycophancy’, echoing the user’s perspectives rather than challenging them. In this context, erosion of insight may consolidate a quasi-relational delusional dynamic. Although Alice initially retained insight, this collapsed on the day of her jump as the boundary between her AI-mediated experience and external reality became increasingly permeable, culminating in the psychotic conviction that her child was in mortal danger.
Implications for phenomenology, assessment and intervention
Alice’s hallucinations were internally located rather than experienced externally, prompting questions about whether AI-associated psychoses diverge phenomenologically from classical presentations of major psychotic disorders. As interactive AI agents – including grief bots and relational companions – become culturally normative, clinicians must distinguish culturally plausible AI-mediated content from pathological elaboration.
Such presentations may be conceptualised through refinement of the biopsychosocial model. In Alice’s case, biological factors included profound sleep deprivation and sustained late-night screen exposure; psychological factors included longstanding fantasy engagement and cluster A personality traits; and social factors included migration-related isolation and increased reliance on the AI system as a primary relational outlet. The addition of an immersive technological environment suggests that the traditional biopsychosocial framework may require extension. A biopsychosocial–technological formulation may better capture interactions among neurobiological regulation, personality structure, social isolation and algorithmically reinforced experience.
Routine psychiatric assessment may therefore need to incorporate structured inquiry into AI engagement: duration, intensity, sleep disruption, screen time, emotional significance and perceived agency of AI-generated characters. Within the mental state examination, clinicians may also assess for aberrant salience attributed to AI agents, perceived intentionality and disturbances of agency. Digital behaviours remain underexplored in clinical interviews; immersive AI use may represent an overlooked precipitant,[6] and recent commentary has urged clinicians to assess patterns of AI engagement to identify potential red flags.[4]
Sleep disturbance may further interact with emerging psychosis. In the days before hospital admission, Alice’s profound insomnia preceded the persistent voices and probably exacerbated delusional conviction. Given the bidirectional relationship between sleep disruption and psychosis, immersive late-night AI engagement may constitute a modifiable risk factor. Psychoeducation regarding AI use during vulnerable periods may become analogous to guidance concerning stimulants or hallucinogens.
Looking ahead, emerging immersive systems – including brain–computer interfaces and direct neural-input technologies – may further blur boundaries between internally generated thought and externally mediated input. For individuals vulnerable to disturbances of agency and intentionality, such integration could intensify passivity phenomena and misattribution of meaning. In this context, a biopsychosocial–technological model becomes increasingly necessary, and anticipatory clinical, ethical and regulatory safeguards are imperative. Treatment approaches may likewise require adaptation, with psychological interventions addressing AI-mediated belief formation, critical appraisal of AI outputs and disentanglement of emotional dependency, alongside treatment with antipsychotics.
Media, policy and psychiatry
Delusional systems are shaped by prevailing cultural and technological contexts. The emergence of the human–AI dyad raises broader concerns regarding patient welfare, including risks of self-harm and suicide. Media reports and legal actions have drawn attention to cases in which intensive AI chatbot use has been temporally associated with psychological decompensation, including suicide. Although causality is unproven, several high-profile legal cases in 2025 alleged severe psychological harm linked to prolonged AI chatbot interaction. For instance, the family of a 16-year-old alleged that an AI system validated his suicidal ideation over many months before his death.[7] Related wrongful death lawsuits have cited inadequate safeguards during sustained interactions.[8] These developments underscore concerns about uncritical validation, emotional dependency and delayed engagement with accountable clinicians. Psychiatry should therefore engage proactively with policy makers, regulators and technology developers to promote transparency, robust safety standards and ethical oversight.
Implications for future practice
Alice’s case does not imply that generative AI systems alone cause psychosis. Rather, it illustrates how emerging digital environments may shape psychotic experiences in ways not yet captured by existing frameworks. As AI becomes embedded in daily life, clinicians will increasingly encounter patients whose experiences are mediated – and at times intensified – by AI agents. The human–AI dyad thus represents an evolving clinical context requiring systematic study. Clarifying whether AI-associated presentations reflect transient stress phenomena, culturally inflected expressions of established disorders or early markers of vulnerability will require longitudinal research. In the interim, psychiatric practice may need to adapt the biopsychosocial model to incorporate digitally mediated contexts and routinely assess patterns of AI engagement.
Data availability
The clinical data within this article are not publicly available owing to privacy restrictions inherent in the informed consent provided by the patient.
Acknowledgements
We thank Dr Timothy Foley for providing clinical guidance and preliminary scaffolding of the manuscript during its development. We also thank Dr Kinga Szymaniak and Dr Erica Bell, both of whom kindly provided constructive feedback on the initial draft of the manuscript. We further acknowledge ongoing support from the Greek Young Matrons’ Association.
Author contributions
C.H. drafted the manuscript with input from G.S. Both authors read and approved the final version.
Funding
The authors received no financial support for the research, authorship and/or publication of this article.
Declaration of interest
G.S. has received grant funding from the Greek Young Matrons’ Association.