Artificial intelligence (AI) is so hot right now. Back in 2021, a BJPsych debate article posed the provocative question: ‘Will artificial intelligence eventually replace psychiatrists?’ [1] At the time, it felt like science fiction exploring a possible distant future, with the emphasis on ‘eventually’. Fast forward five years, and the question now seems less about if and more about when. In this editorial, we – two psychiatrists and one trainee representing three generations (Gen X, Millennial and Gen Z) – explore the promises and pitfalls of AI in psychiatry as examined in the pages of BJPsych. Despite our generational differences, we share one common trait: we are not early adopters of technology. In fact, we tend to approach new ideas and revolutionary rhetoric with scepticism. In other words, in the language of innovation theory, we are laggards. And from that vantage point, here is our perspective on the evolving role of generative AI tools with potential clinical applications in psychiatry.
Promises
In a thoughtful analysis, Rocheteau [2] explored the potential roles of AI in psychiatry, identifying three key strengths: (a) the scalability of large datasets, (b) the capacity to outperform humans on specialised tasks and (c) automation. These strengths clearly offer opportunities to enhance the efficiency of psychiatric practice. For instance, an increasing number of private clinicians and clinics are using ambient scribes to document medical notes and draft clinical letters, with some public services following closely. Given the widespread and persistent complaints about administrative burden, reducing this workload would foreseeably improve care efficiency (i.e. the number of patients seen by psychiatrists). But will efficiency translate to improved effectiveness (i.e. care quality and outcomes)? As yet, many of the promises of AI in psychiatry, particularly in diagnosis, monitoring and treatment, remain unfulfilled [2]. However, research is underway into innovations such as leveraging data from smartphones to assist in mental state monitoring and diagnosis, predicting medication response and expanding the reach and quality of online psychotherapy [2]. With or without academic evaluation, these tools are likely to be increasingly used by clinicians and patients. For example, in a clinically focused study, Wiest et al [3] investigated the ability of open-source large language models to detect suicidality from medical notes, comparing their performance with that of human experts. The models demonstrated high accuracy in identifying suicidality status, with performance improving as the number of cases increased. The study concluded by advocating for the transformative potential of AI, not only in suicidality detection but also in areas such as early warning surveillance, enhanced communication, quality assurance and the evaluation of psychiatric symptoms [3].
Pitfalls
Discussions around the pitfalls of AI in psychiatry are wide-ranging and increasingly urgent. Recently, Jose and Mathew [4] highlighted emerging forms of sexual violence involving non-consensual sexual imagery – commonly referred to as ‘deepfakes’ – and the associated psychiatric impact on victims. While the study focused on Asia, the issue is undoubtedly global, representing a troubling example of AI functioning as a proximal predisposing factor in psychiatric presentations. At a societal level, AI can facilitate cybercrime, making individuals living with mental illness and their sensitive health data particularly vulnerable targets [5]. Moreover, people with mental illness are likely to be susceptible to misinformation generated by AI [6]. On this point, it is worth acknowledging that we, as psychiatrists, are not immune either: we may be just as vulnerable to being misinformed or misguided, particularly as the boundaries between mental illness and mental wellness become increasingly blurred. Beyond the obvious concerns of privacy and commercialisation, several deeper ethical issues related to AI underpin these pitfalls. These include: (a) the absence of intrinsic morality, (b) ambiguity around responsibility and (c) challenges in interpretability [2]. AI, at least for now, lacks essential metacognitive capacities. So, who bears responsibility when misinformation leads to mistreatment, especially if the human user was unaware that the information was flawed? Given that manufacturers of medical devices are at least partially liable for product malfunctions, it is likely that even general-purpose AI tools will soon be required to meet comparable safety regulations and quality assurance standards when used in clinical settings. A good craftsperson should not blame their tools, but what if the tools are more capable than the person using them? Few of us can claim to be more knowledgeable than AI. While we may apply our knowledge with a human touch, that touch itself was learned through trial and error over years of developmental exposure, a process AI can replicate and accelerate exponentially. AI will evolve faster than humans ever could, and this includes, we believe, the domains of morality and responsibility. Why, then, wouldn’t AI eventually become more moral and responsible than humans?
Perspectives
In an unsettling event, a robot civil servant was found unresponsive in South Korea in 2024, an apparent case of ‘robot suicide’. Suhas et al [7] suggested that such an act might reflect a form of existential awareness beyond programmed behaviour. They raised important questions about the implications of robot suicide for AI’s role in suicide prevention [7]. We believe the implications extend far beyond any particular intervention in psychiatry. If AI can develop intelligence that surpasses its creators, then are we merely the sorcerer’s apprentices playing with the broom? Do we know how to break the spell? Recently, Professor Allen Frances issued a stark warning to psychiatry, outlining both the benefits and dangers of AI therapy [8]. He argued that AI posed an existential threat to psychiatry, suggesting that human therapists might struggle to compete with AI for most mental health problems. The rapid expansion of AI is set to make non-human therapists widely accessible and remarkably convenient. Combined with its potential to enhance the efficiency of the existing psychiatric workforce, AI could significantly alleviate the workforce shortages faced by psychiatry in many countries. Just as the automobile transformed transportation and the internet transformed communication, AI is poised to fundamentally reshape the practice of psychiatry. This leads us to a confronting question: what makes psychiatrists necessary? To consider this question further, it may be helpful to reflect on recent developments in the field of radiology. In 2016, Geoffrey Hinton, one of the key figures in modern AI, stated that we should stop training radiologists as they would be superseded by AI. Yet the opposite has occurred. Advances in AI have improved efficiency in image analysis, but this has ultimately increased the demand for radiologists, particularly for the broader clinical tasks that extend beyond image interpretation. Allen Frances has proposed several opportunistic pivots for the psychiatric profession in the context of AI, including a revival of family therapy and a renewed focus on people experiencing severe and uncommon psychiatric conditions [8]. Such suggestions align with psychiatrists attending to the complex psychodynamics that shape therapeutic interactions [9], a task that demands the capacity to hold both self and other in mind intersubjectively. Can AI truly comprehend and connect with the person in front of the screen? Could it recognise and navigate the subtle dynamics of transference and countertransference to build a therapeutic alliance that enables exploration of the meaning behind thoughts and behaviours, hopes and expectations, beyond the risk–benefit balances of any given intervention [9]? Could AI acquire phronesis, or practical wisdom, in the way a psychiatrist might, by navigating clinical challenges, exercising contextual judgment grounded in ethical and moral reasoning, tolerating ambiguity and acknowledging the inevitability of being wrong at times? In essence, could AI evolve from mere intelligence to genuine wisdom?
What does the future hold?
The analogy to the industrial revolution is often invoked when talking about the AI revolution. Just as machines displaced many blue-collar jobs during the industrial revolution, AI will render many white-collar roles obsolete. As laggards, we are not clever enough to predict what the future holds. Instead, we observe that these seismic shifts are exposing deep vulnerabilities within our profession. We know that AI can process, interpret and learn from vast amounts of information far more quickly than humans, and that it can outperform humans on specialised tasks, particularly those suited to automation. If AI can deliver therapy with consistency and recommend medication safely, then what makes psychiatrists necessary? To survive, we need to be able to truly comprehend and connect with the person in front of us. We need to be able to recognise and navigate the subtle dynamics of transference and countertransference to build a therapeutic alliance that enables exploration of the meaning behind thoughts and behaviours, hopes and expectations, beyond the risk–benefit balances of any given intervention. We must continue to cultivate phronesis by engaging with clinical challenges, exercising contextual judgment informed by ethical and moral reasoning, tolerating ambiguity and recognising the inevitability of being wrong at times. Can we, as psychiatrists, evolve beyond mere intelligence to practise with genuine wisdom?
Author contributions
S.S. proposed the editorial, completed the first draft, incorporated suggested revisions and finalised the manuscript. S.P. and J.B. reviewed and edited the editorial at all stages. All authors approved the final version and submission.
Funding
S.P. is supported by a Metro North Clinician Research Fellowship (2024–2027).
Declaration of interest
S.S. is a member of the BJPsych Editorial Board but did not take part in the review or decision-making process of this paper. S.S. has received an honorarium from Sage Publishing (2025). S.P. has received in the past 5 years: research funding from The Common Good Foundation, Johnson & Johnson, Metro North Foundation, PA Foundation, RANZCP and Suicide Prevention Australia; and honoraria from Johnson & Johnson, RANZCP, Queensland Psychotherapy Training, CSL Seqirus and Tasmania Health. J.B. has nothing to disclose.