
Algorithmic fairness in precision psychiatry: analysis of prediction models in individuals at clinical high risk for psychosis

Published online by Cambridge University Press:  08 November 2023

Derya Şahin*
Affiliation:
Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital, University of Cologne, Cologne, Germany
Lana Kambeitz-Ilankovic
Affiliation:
Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital, University of Cologne, Cologne, Germany; and Department of Psychology, Faculty of Psychology and Educational Sciences, Ludwig-Maximilian University, Munich, Germany
Stephen Wood
Affiliation:
Centre for Youth Mental Health, University of Melbourne, Melbourne, Victoria, Australia; and Orygen, the National Centre of Excellence for Youth Mental Health, Melbourne, Victoria, Australia
Dominic Dwyer
Affiliation:
Department of Psychology, Faculty of Psychology and Educational Sciences, Ludwig-Maximilian University, Munich, Germany; and Orygen, the National Centre of Excellence for Youth Mental Health, Melbourne, Victoria, Australia
Rachel Upthegrove
Affiliation:
Institute for Mental Health and Centre for Brain Health, University of Birmingham, Birmingham, UK; and Early Intervention Service, Birmingham Women's and Children's NHS Foundation Trust, Birmingham, UK
Raimo Salokangas
Affiliation:
Department of Psychiatry, University of Turku, Turku, Finland
Stefan Borgwardt
Affiliation:
Department of Psychiatry (University Psychiatric Clinics, UPK), University of Basel, Basel, Switzerland; and Department of Psychiatry and Psychotherapy, University of Lübeck, Lübeck, Germany
Paolo Brambilla
Affiliation:
Department of Neurosciences and Mental Health, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, Milan, Italy; and Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
Eva Meisenzahl
Affiliation:
Department of Psychiatry and Psychotherapy, Medical Faculty, Heinrich Heine University, Düsseldorf, Germany
Stephan Ruhrmann
Affiliation:
Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital, University of Cologne, Cologne, Germany
Frauke Schultze-Lutter
Affiliation:
Department of Psychiatry and Psychotherapy, Medical Faculty, Heinrich Heine University, Düsseldorf, Germany; Department of Psychology, Faculty of Psychology, Airlangga University, Surabaya, Indonesia; and University Hospital of Child and Adolescent Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
Rebekka Lencer
Affiliation:
Department of Psychiatry and Psychotherapy, University of Lübeck, Lübeck, Germany; and Institute for Translational Psychiatry, University of Münster, Münster, Germany
Alessandro Bertolino
Affiliation:
Department of Basic Medical Science, Neuroscience and Sense Organs, University of Bari Aldo Moro, Bari, Italy
Christos Pantelis
Affiliation:
Melbourne Neuropsychiatry Centre, University of Melbourne & Melbourne Health, Melbourne, Victoria, Australia
Nikolaos Koutsouleris
Affiliation:
Department of Psychology, Faculty of Psychology and Educational Sciences, Ludwig-Maximilian University, Munich, Germany; Max-Planck Institute of Psychiatry, Munich, Germany; and Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
Joseph Kambeitz
Affiliation:
Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital, University of Cologne, Cologne, Germany
*
Correspondence: Derya Şahin. Email: deryasahin@protonmail.ch

Abstract

Background

Computational models offer promising potential for personalised treatment of psychiatric diseases. For their clinical deployment, fairness must be evaluated alongside accuracy. Fairness requires that predictive models do not systematically disadvantage specific demographic groups. Failure to assess model fairness prior to use risks perpetuating healthcare inequalities. Despite its importance, empirical investigation of fairness in predictive models for psychiatry remains scarce.

Aims

To evaluate fairness in prediction models for development of psychosis and functional outcome.

Method

Using data from the PRONIA study, we examined fairness in 13 published models for prediction of transition to psychosis (n = 11) and functional outcome (n = 2) in people at clinical high risk for psychosis or with recent-onset depression. Using accuracy equality, predictive parity, false-positive error rate balance and false-negative error rate balance, we evaluated relevant fairness aspects for the demographic attributes ‘gender’ and ‘educational attainment’ and compared them with the fairness of clinicians’ judgements.
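The four fairness criteria named above can each be expressed as a ratio of a group-level rate (accuracy, positive predictive value, false-positive rate, false-negative rate) between a protected group and a reference group, with the four-fifths rule (0.8–1.25, as used in the figures) defining the permissible range. The sketch below is not the authors' code; it illustrates the criteria on made-up confusion-matrix counts for two hypothetical educational-attainment groups.

```python
# Sketch of the four group-fairness criteria as rate ratios.
# All counts below are invented for illustration only.

def rates(tp, fp, tn, fn):
    """Accuracy, positive predictive value, FPR and FNR from
    confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # accuracy equality
        "ppv": tp / (tp + fp),                        # predictive parity
        "fpr": fp / (fp + tn),                        # FPER balance
        "fnr": fn / (fn + tp),                        # FNER balance
    }

def fairness_ratios(group, reference):
    """Ratio of each rate in the protected group vs the reference group.
    A ratio of 1 is perfect parity."""
    return {k: group[k] / reference[k] for k in group}

def within_four_fifths(ratio):
    """Permissible range under the four-fifths rule."""
    return 0.8 <= ratio <= 1.25

# Hypothetical counts: reference = higher education, group = lower education
ref = rates(tp=30, fp=10, tn=50, fn=10)
grp = rates(tp=25, fp=25, tn=40, fn=10)

for name, r in fairness_ratios(grp, ref).items():
    print(f"{name}: ratio={r:.2f}, fair={within_four_fifths(r)}")
```

With these invented counts the false-positive error rate ratio exceeds the permissible range, mirroring the kind of disparity the study reports for lower educational attainment, while accuracy and false-negative rates stay within it.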

Results

Our findings indicate systematic bias towards assigning less favourable outcomes to individuals with lower educational attainment in both prediction models and clinicians’ judgements, resulting in higher false-positive rates in 7 of 11 models for transition to psychosis. Interestingly, the bias patterns observed in algorithmic predictions were not significantly more pronounced than those in clinicians’ predictions.

Conclusions

Educational bias was present in algorithmic and clinicians’ predictions, with more favourable outcomes assumed for individuals with a higher educational level (years of education). This bias might lead to increased stigma and psychosocial burden in patients with lower educational attainment and suboptimal psychosis prevention in those with higher educational attainment.

Information

Type
Original Article
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of the Royal College of Psychiatrists

Table 1 Key characteristics of the study sample


Table 2 Performance metrics and fairness indices of psychosis transition and functional outcome prediction algorithms


Fig. 1 Fairness of prediction models validated on PRONIA data for the sensitive attribute ‘gender’, with males as the reference group. (a) The fairness of predictions of functional outcome. (b) and (c) The fairness of predictions of transition to psychosis. The continuous line at x = 1 shows absolute fairness and the dashed lines at x = 0.8 and x = 1.25 cover the permissible fairness range according to the four-fifths rule. Values higher than 2 were replaced with x = 2 in the figures. The false-negative error rate (FNER) balance could not be calculated for the model by Hengartner, the North American Prodrome Longitudinal Study (NAPLS) risk calculator and polygenic risk score (PRS) model because there were 0 false negatives in the reference group. ML, machine learning; MRI, magnetic resonance imaging.


Fig. 2 Fairness of prediction models validated on PRONIA data for the sensitive attribute ‘education’, binarised high/low, with participants with a higher educational level as the reference group. (a) The fairness of predictions of functional outcome. (b) and (c) The fairness of predictions of transition to psychosis. The continuous line at x = 1 shows absolute fairness and the dotted lines at x = 0.8 and at x = 1.25 cover the permissible fairness range according to the four-fifths rule. Values higher than 2 were replaced with x = 2 in the figures. The false-negative error rate (FNER) balance could not be calculated for the model by Hengartner and the North American Prodrome Longitudinal Study (NAPLS) risk calculator as there were 0 false negatives in the reference group. *Bonferroni corrected P < 0.05. ML, machine learning; CHR, clinical high risk for psychosis; ROD, recent-onset depression; MRI, magnetic resonance imaging.

Supplementary material

Şahin et al. supplementary material (File, 19 KB)

eLetters

No eLetters have been published for this article. This journal is not currently accepting new eLetters.